···
 		The algorithm_params file is write-only and is used to setup
 		compression algorithm parameters.

-What:		/sys/block/zram<id>/writeback_compressed
+What:		/sys/block/zram<id>/compressed_writeback
 Date:		December 2025
 Contact:	Richard Chang <richardycc@google.com>
 Description:
-		The writeback_compressed device attribute toggles the compressed
+		The compressed_writeback device attribute toggles the compressed
 		writeback feature.

 What:		/sys/block/zram<id>/writeback_batch_size
···
-What:		/sys/bus/platform/devices/INOU0000:XX/fn_lock_toggle_enable
+What:		/sys/bus/platform/devices/INOU0000:XX/fn_lock
 Date:		November 2025
 KernelVersion:	6.19
 Contact:	Armin Wolf <W_Armin@gmx.de>
···

 		Reading this file returns the current enable status of the FN lock functionality.

-What:		/sys/bus/platform/devices/INOU0000:XX/super_key_toggle_enable
+What:		/sys/bus/platform/devices/INOU0000:XX/super_key_enable
 Date:		November 2025
 KernelVersion:	6.19
 Contact:	Armin Wolf <W_Armin@gmx.de>
 Description:
-		Allows userspace applications to enable/disable the super key functionality
-		of the integrated keyboard by writing "1"/"0" into this file.
+		Allows userspace applications to enable/disable the super key of the integrated
+		keyboard by writing "1"/"0" into this file.

-		Reading this file returns the current enable status of the super key functionality.
+		Reading this file returns the current enable status of the super key.

 What:		/sys/bus/platform/devices/INOU0000:XX/touchpad_toggle_enable
 Date:		November 2025
+3-3
Documentation/admin-guide/blockdev/zram.rst
···
 writeback_limit_enable	RW	show and set writeback_limit feature
 writeback_batch_size	RW	show and set maximum number of in-flight
 				writeback operations
-writeback_compressed	RW	show and set compressed writeback feature
+compressed_writeback	RW	show and set compressed writeback feature
 comp_algorithm		RW	show and change the compression algorithm
 algorithm_params	WO	setup compression algorithm parameters
 compact			WO	trigger memory compaction
···
 By default zram stores written back pages in decompressed (raw) form, which
 means that writeback operation involves decompression of the page before
 writing it to the backing device. This behavior can be changed by enabling
-the `writeback_compressed` feature, which causes zram to write compressed pages
+the `compressed_writeback` feature, which causes zram to write compressed pages
 to the backing device, thus avoiding decompression overhead. To enable
 this feature, execute::

-	$ echo yes > /sys/block/zramX/writeback_compressed
+	$ echo yes > /sys/block/zramX/compressed_writeback

 Note that this feature should be configured before the `zramX` device is
 initialized.
+16
Documentation/admin-guide/kernel-parameters.txt
···
 	TPM	TPM drivers are enabled.
 	UMS	USB Mass Storage support is enabled.
 	USB	USB support is enabled.
+	NVME	NVMe support is enabled.
 	USBHID	USB Human Interface Device support is enabled.
 	V4L	Video For Linux support is enabled.
 	VGA	The VGA console has been enabled.
···
 			This can be set from sysctl after boot.
 			See Documentation/admin-guide/sysctl/vm.rst for details.

+	nvme.quirks=	[NVME] A list of quirk entries to augment the built-in
+			nvme quirk list. List entries are separated by a
+			'-' character.
+			Each entry has the form VendorID:ProductID:quirk_names.
+			The IDs are 4-digit hex numbers and quirk_names is a
+			list of quirk names separated by commas. A quirk name
+			can be prefixed by '^', meaning that the specified
+			quirk must be disabled.
+
+			Example:
+			nvme.quirks=7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi
+
 	ohci1394_dma=early	[HW,EARLY] enable debugging via the ohci1394 driver.
 			See Documentation/core-api/debugging-via-ohci1394.rst for more
 			info.
···
 			p = USB_QUIRK_SHORT_SET_ADDRESS_REQ_TIMEOUT
 				(Reduce timeout of the SET_ADDRESS
 				request from 5000 ms to 500 ms);
+			q = USB_QUIRK_FORCE_ONE_CONFIG (Device
+				claims zero configurations,
+				forcing to 1);
 			Example: quirks=0781:5580:bk,0a5c:5834:gij

 	usbhid.mousepoll=
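The `nvme.quirks=` grammar documented above (entries separated by `-`, each entry `VendorID:ProductID:quirk_names`, comma-separated names, optional `^` prefix to disable a quirk) can be illustrated with a small parser. This is a sketch of the documented format only, not kernel code; the function name is hypothetical.

```python
def parse_nvme_quirks(arg: str):
    """Parse an nvme.quirks= value per the documented grammar (illustrative)."""
    entries = []
    for entry in arg.split("-"):          # entries are '-' separated
        vid, pid, names = entry.split(":", 2)
        quirks = []
        for name in names.split(","):     # quirk names are ',' separated
            disable = name.startswith("^")  # '^' means: disable this quirk
            quirks.append((name.lstrip("^"), disable))
        entries.append((int(vid, 16), int(pid, 16), quirks))
    return entries

# The example from the documentation text:
parsed = parse_nvme_quirks("7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi")
```

Note how the `-` entry separator dictates that quirk names themselves cannot contain `-`.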
···

 The ``uniwill-laptop`` driver allows the user to enable/disable:

-- the FN and super key lock functionality of the integrated keyboard
+- the FN lock and super key of the integrated keyboard
 - the touchpad toggle functionality of the integrated touchpad

 See Documentation/ABI/testing/sysfs-driver-uniwill-laptop for details.
-.. SPDX-License-Identifier: GPL-2.0-only
-
-Kernel driver sa67mcu
-=====================
-
-Supported chips:
-
- * Kontron sa67mcu
-
-   Prefix: 'sa67mcu'
-
-   Datasheet: not available
-
-Authors: Michael Walle <mwalle@kernel.org>
-
-Description
------------
-
-The sa67mcu is a board management controller which also exposes a hardware
-monitoring controller.
-
-The controller has two voltage and one temperature sensor. The values are
-hold in two 8 bit registers to form one 16 bit value. Reading the lower byte
-will also capture the high byte to make the access atomic. The unit of the
-volatge sensors are 1mV and the unit of the temperature sensor is 0.1degC.
-
-Sysfs entries
--------------
-
-The following attributes are supported.
-
-======================= ========================================================
-in0_label		"VDDIN"
-in0_input		Measured VDDIN voltage.
-
-in1_label		"VDD_RTC"
-in1_input		Measured VDD_RTC voltage.
-
-temp1_input		MCU temperature. Roughly the board temperature.
-======================= ========================================================
+27-3
Documentation/scheduler/sched-ext.rst
···
 	CONFIG_DEBUG_INFO_BTF=y
 	CONFIG_BPF_JIT_ALWAYS_ON=y
 	CONFIG_BPF_JIT_DEFAULT_ON=y
-	CONFIG_PAHOLE_HAS_BTF_TAG=y

 sched_ext is used only when the BPF scheduler is loaded and running.
···
 However, when the BPF scheduler is loaded and ``SCX_OPS_SWITCH_PARTIAL`` is
 set in ``ops->flags``, only tasks with the ``SCHED_EXT`` policy are scheduled
 by sched_ext, while tasks with ``SCHED_NORMAL``, ``SCHED_BATCH`` and
-``SCHED_IDLE`` policies are scheduled by the fair-class scheduler.
+``SCHED_IDLE`` policies are scheduled by the fair-class scheduler, which has
+higher sched_class precedence than ``SCHED_EXT``.

 Terminating the sched_ext scheduler program, triggering `SysRq-S`, or
 detection of any internal error including stalled runnable tasks aborts the
···
   The functions prefixed with ``scx_bpf_`` can be called from the BPF
   scheduler.

+* ``kernel/sched/ext_idle.c`` contains the built-in idle CPU selection policy.
+
 * ``tools/sched_ext/`` hosts example BPF scheduler implementations.

   * ``scx_simple[.bpf].c``: Minimal global FIFO scheduler example using a
···
   * ``scx_qmap[.bpf].c``: A multi-level FIFO scheduler supporting five
     levels of priority implemented with ``BPF_MAP_TYPE_QUEUE``.

+  * ``scx_central[.bpf].c``: A central FIFO scheduler where all scheduling
+    decisions are made on one CPU, demonstrating ``LOCAL_ON`` dispatching,
+    tickless operation, and kthread preemption.
+
+  * ``scx_cpu0[.bpf].c``: A scheduler that queues all tasks to a shared DSQ
+    and only dispatches them on CPU0 in FIFO order. Useful for testing bypass
+    behavior.
+
+  * ``scx_flatcg[.bpf].c``: A flattened cgroup hierarchy scheduler
+    implementing hierarchical weight-based cgroup CPU control by compounding
+    each cgroup's share at every level into a single flat scheduling layer.
+
+  * ``scx_pair[.bpf].c``: A core-scheduling example that always makes
+    sibling CPU pairs execute tasks from the same CPU cgroup.
+
+  * ``scx_sdt[.bpf].c``: A variation of ``scx_simple`` demonstrating BPF
+    arena memory management for per-task data.
+
+  * ``scx_userland[.bpf].c``: A minimal scheduler demonstrating user space
+    scheduling. Tasks with CPU affinity are direct-dispatched in FIFO order;
+    all others are scheduled in user space by a simple vruntime scheduler.
+
 ABI Instability
 ===============

 The APIs provided by sched_ext to BPF scheduler programs have no stability
 guarantees. This includes the ops table callbacks and constants defined in
 ``include/linux/sched/ext.h``, as well as the ``scx_bpf_`` kfuncs defined in
-``kernel/sched/ext.c``.
+``kernel/sched/ext.c`` and ``kernel/sched/ext_idle.c``.

 While we will attempt to provide a relatively stable API surface when
 possible, they are subject to change without warning between kernel
+4
Documentation/sound/alsa-configuration.rst
···
 		audible volume
 	* bit 25: ``mixer_capture_min_mute``
 		Similar to bit 24 but for capture streams
+	* bit 26: ``skip_iface_setup``
+		Skip the probe-time interface setup (usb_set_interface,
+		init_pitch, init_sample_rate); redundant with
+		snd_usb_endpoint_prepare() at stream-open time

 This module supports multiple devices, autoprobe and hotplugging.
+107-99
Documentation/virt/kvm/api.rst
···

 The valid bits in cap.args[0] are:

-=================================== ============================================
- KVM_X86_QUIRK_LINT0_REENABLED      By default, the reset value for the LVT
-                                    LINT0 register is 0x700 (APIC_MODE_EXTINT).
-                                    When this quirk is disabled, the reset value
-                                    is 0x10000 (APIC_LVT_MASKED).
+======================================== ================================================
+KVM_X86_QUIRK_LINT0_REENABLED            By default, the reset value for the LVT
+                                         LINT0 register is 0x700 (APIC_MODE_EXTINT).
+                                         When this quirk is disabled, the reset value
+                                         is 0x10000 (APIC_LVT_MASKED).

- KVM_X86_QUIRK_CD_NW_CLEARED        By default, KVM clears CR0.CD and CR0.NW on
-                                    AMD CPUs to workaround buggy guest firmware
-                                    that runs in perpetuity with CR0.CD, i.e.
-                                    with caches in "no fill" mode.
+KVM_X86_QUIRK_CD_NW_CLEARED              By default, KVM clears CR0.CD and CR0.NW on
+                                         AMD CPUs to workaround buggy guest firmware
+                                         that runs in perpetuity with CR0.CD, i.e.
+                                         with caches in "no fill" mode.

-                                    When this quirk is disabled, KVM does not
-                                    change the value of CR0.CD and CR0.NW.
+                                         When this quirk is disabled, KVM does not
+                                         change the value of CR0.CD and CR0.NW.

- KVM_X86_QUIRK_LAPIC_MMIO_HOLE      By default, the MMIO LAPIC interface is
-                                    available even when configured for x2APIC
-                                    mode. When this quirk is disabled, KVM
-                                    disables the MMIO LAPIC interface if the
-                                    LAPIC is in x2APIC mode.
+KVM_X86_QUIRK_LAPIC_MMIO_HOLE            By default, the MMIO LAPIC interface is
+                                         available even when configured for x2APIC
+                                         mode. When this quirk is disabled, KVM
+                                         disables the MMIO LAPIC interface if the
+                                         LAPIC is in x2APIC mode.

- KVM_X86_QUIRK_OUT_7E_INC_RIP       By default, KVM pre-increments %rip before
-                                    exiting to userspace for an OUT instruction
-                                    to port 0x7e. When this quirk is disabled,
-                                    KVM does not pre-increment %rip before
-                                    exiting to userspace.
+KVM_X86_QUIRK_OUT_7E_INC_RIP             By default, KVM pre-increments %rip before
+                                         exiting to userspace for an OUT instruction
+                                         to port 0x7e. When this quirk is disabled,
+                                         KVM does not pre-increment %rip before
+                                         exiting to userspace.

- KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT When this quirk is disabled, KVM sets
-                                    CPUID.01H:ECX[bit 3] (MONITOR/MWAIT) if
-                                    IA32_MISC_ENABLE[bit 18] (MWAIT) is set.
-                                    Additionally, when this quirk is disabled,
-                                    KVM clears CPUID.01H:ECX[bit 3] if
-                                    IA32_MISC_ENABLE[bit 18] is cleared.
+KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT       When this quirk is disabled, KVM sets
+                                         CPUID.01H:ECX[bit 3] (MONITOR/MWAIT) if
+                                         IA32_MISC_ENABLE[bit 18] (MWAIT) is set.
+                                         Additionally, when this quirk is disabled,
+                                         KVM clears CPUID.01H:ECX[bit 3] if
+                                         IA32_MISC_ENABLE[bit 18] is cleared.

- KVM_X86_QUIRK_FIX_HYPERCALL_INSN   By default, KVM rewrites guest
-                                    VMMCALL/VMCALL instructions to match the
-                                    vendor's hypercall instruction for the
-                                    system. When this quirk is disabled, KVM
-                                    will no longer rewrite invalid guest
-                                    hypercall instructions. Executing the
-                                    incorrect hypercall instruction will
-                                    generate a #UD within the guest.
+KVM_X86_QUIRK_FIX_HYPERCALL_INSN         By default, KVM rewrites guest
+                                         VMMCALL/VMCALL instructions to match the
+                                         vendor's hypercall instruction for the
+                                         system. When this quirk is disabled, KVM
+                                         will no longer rewrite invalid guest
+                                         hypercall instructions. Executing the
+                                         incorrect hypercall instruction will
+                                         generate a #UD within the guest.

-KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS By default, KVM emulates MONITOR/MWAIT (if
-                                    they are intercepted) as NOPs regardless of
-                                    whether or not MONITOR/MWAIT are supported
-                                    according to guest CPUID. When this quirk
-                                    is disabled and KVM_X86_DISABLE_EXITS_MWAIT
-                                    is not set (MONITOR/MWAIT are intercepted),
-                                    KVM will inject a #UD on MONITOR/MWAIT if
-                                    they're unsupported per guest CPUID. Note,
-                                    KVM will modify MONITOR/MWAIT support in
-                                    guest CPUID on writes to MISC_ENABLE if
-                                    KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
-                                    disabled.
+KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS      By default, KVM emulates MONITOR/MWAIT (if
+                                         they are intercepted) as NOPs regardless of
+                                         whether or not MONITOR/MWAIT are supported
+                                         according to guest CPUID. When this quirk
+                                         is disabled and KVM_X86_DISABLE_EXITS_MWAIT
+                                         is not set (MONITOR/MWAIT are intercepted),
+                                         KVM will inject a #UD on MONITOR/MWAIT if
+                                         they're unsupported per guest CPUID. Note,
+                                         KVM will modify MONITOR/MWAIT support in
+                                         guest CPUID on writes to MISC_ENABLE if
+                                         KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
+                                         disabled.

-KVM_X86_QUIRK_SLOT_ZAP_ALL          By default, for KVM_X86_DEFAULT_VM VMs, KVM
-                                    invalidates all SPTEs in all memslots and
-                                    address spaces when a memslot is deleted or
-                                    moved. When this quirk is disabled (or the
-                                    VM type isn't KVM_X86_DEFAULT_VM), KVM only
-                                    ensures the backing memory of the deleted
-                                    or moved memslot isn't reachable, i.e KVM
-                                    _may_ invalidate only SPTEs related to the
-                                    memslot.
+KVM_X86_QUIRK_SLOT_ZAP_ALL               By default, for KVM_X86_DEFAULT_VM VMs, KVM
+                                         invalidates all SPTEs in all memslots and
+                                         address spaces when a memslot is deleted or
+                                         moved. When this quirk is disabled (or the
+                                         VM type isn't KVM_X86_DEFAULT_VM), KVM only
+                                         ensures the backing memory of the deleted
+                                         or moved memslot isn't reachable, i.e. KVM
+                                         _may_ invalidate only SPTEs related to the
+                                         memslot.

-KVM_X86_QUIRK_STUFF_FEATURE_MSRS    By default, at vCPU creation, KVM sets the
-                                    vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
-                                    MSR_IA32_ARCH_CAPABILITIES (0x10a),
-                                    MSR_PLATFORM_INFO (0xce), and all VMX MSRs
-                                    (0x480..0x492) to the maximal capabilities
-                                    supported by KVM. KVM also sets
-                                    MSR_IA32_UCODE_REV (0x8b) to an arbitrary
-                                    value (which is different for Intel vs.
-                                    AMD). Lastly, when guest CPUID is set (by
-                                    userspace), KVM modifies select VMX MSR
-                                    fields to force consistency between guest
-                                    CPUID and L2's effective ISA. When this
-                                    quirk is disabled, KVM zeroes the vCPU's MSR
-                                    values (with two exceptions, see below),
-                                    i.e. treats the feature MSRs like CPUID
-                                    leaves and gives userspace full control of
-                                    the vCPU model definition. This quirk does
-                                    not affect VMX MSRs CR0/CR4_FIXED1 (0x487
-                                    and 0x489), as KVM does now allow them to
-                                    be set by userspace (KVM sets them based on
-                                    guest CPUID, for safety purposes).
+KVM_X86_QUIRK_STUFF_FEATURE_MSRS         By default, at vCPU creation, KVM sets the
+                                         vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
+                                         MSR_IA32_ARCH_CAPABILITIES (0x10a),
+                                         MSR_PLATFORM_INFO (0xce), and all VMX MSRs
+                                         (0x480..0x492) to the maximal capabilities
+                                         supported by KVM. KVM also sets
+                                         MSR_IA32_UCODE_REV (0x8b) to an arbitrary
+                                         value (which is different for Intel vs.
+                                         AMD). Lastly, when guest CPUID is set (by
+                                         userspace), KVM modifies select VMX MSR
+                                         fields to force consistency between guest
+                                         CPUID and L2's effective ISA. When this
+                                         quirk is disabled, KVM zeroes the vCPU's MSR
+                                         values (with two exceptions, see below),
+                                         i.e. treats the feature MSRs like CPUID
+                                         leaves and gives userspace full control of
+                                         the vCPU model definition. This quirk does
+                                         not affect VMX MSRs CR0/CR4_FIXED1 (0x487
+                                         and 0x489), as KVM does not allow them to
+                                         be set by userspace (KVM sets them based on
+                                         guest CPUID, for safety purposes).

-KVM_X86_QUIRK_IGNORE_GUEST_PAT      By default, on Intel platforms, KVM ignores
-                                    guest PAT and forces the effective memory
-                                    type to WB in EPT. The quirk is not available
-                                    on Intel platforms which are incapable of
-                                    safely honoring guest PAT (i.e., without CPU
-                                    self-snoop, KVM always ignores guest PAT and
-                                    forces effective memory type to WB). It is
-                                    also ignored on AMD platforms or, on Intel,
-                                    when a VM has non-coherent DMA devices
-                                    assigned; KVM always honors guest PAT in
-                                    such case. The quirk is needed to avoid
-                                    slowdowns on certain Intel Xeon platforms
-                                    (e.g. ICX, SPR) where self-snoop feature is
-                                    supported but UC is slow enough to cause
-                                    issues with some older guests that use
-                                    UC instead of WC to map the video RAM.
-                                    Userspace can disable the quirk to honor
-                                    guest PAT if it knows that there is no such
-                                    guest software, for example if it does not
-                                    expose a bochs graphics device (which is
-                                    known to have had a buggy driver).
-=================================== ============================================
+KVM_X86_QUIRK_IGNORE_GUEST_PAT           By default, on Intel platforms, KVM ignores
+                                         guest PAT and forces the effective memory
+                                         type to WB in EPT. The quirk is not available
+                                         on Intel platforms which are incapable of
+                                         safely honoring guest PAT (i.e., without CPU
+                                         self-snoop, KVM always ignores guest PAT and
+                                         forces effective memory type to WB). It is
+                                         also ignored on AMD platforms or, on Intel,
+                                         when a VM has non-coherent DMA devices
+                                         assigned; KVM always honors guest PAT in
+                                         such case. The quirk is needed to avoid
+                                         slowdowns on certain Intel Xeon platforms
+                                         (e.g. ICX, SPR) where self-snoop feature is
+                                         supported but UC is slow enough to cause
+                                         issues with some older guests that use
+                                         UC instead of WC to map the video RAM.
+                                         Userspace can disable the quirk to honor
+                                         guest PAT if it knows that there is no such
+                                         guest software, for example if it does not
+                                         expose a bochs graphics device (which is
+                                         known to have had a buggy driver).
+
+KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM By default, KVM relaxes the consistency
+                                         check for GUEST_IA32_DEBUGCTL in vmcs12
+                                         to allow FREEZE_IN_SMM to be set. When
+                                         this quirk is disabled, KVM requires this
+                                         bit to be cleared. Note that the vmcs02
+                                         bit is still completely controlled by the
+                                         host, regardless of the quirk setting.
+======================================== ================================================

 7.32 KVM_CAP_MAX_VCPU_ID
 ------------------------
+2
Documentation/virt/kvm/locking.rst
···

 - kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

+- vcpu->mutex is taken outside kvm->slots_lock and kvm->slots_arch_lock
+
 - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
   them together is quite rare.
+29-22
MAINTAINERS
···
 F:	include/uapi/drm/lima_drm.h

 DRM DRIVERS FOR LOONGSON
-M:	Sui Jingfeng <suijingfeng@loongson.cn>
 L:	dri-devel@lists.freedesktop.org
-S:	Supported
+S:	Orphan
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/loongson/
···

 KERNEL UNIT TESTING FRAMEWORK (KUnit)
 M:	Brendan Higgins <brendan.higgins@linux.dev>
-M:	David Gow <davidgow@google.com>
+M:	David Gow <david@davidgow.net>
 R:	Rae Moar <raemoar63@gmail.com>
 L:	linux-kselftest@vger.kernel.org
 L:	kunit-dev@googlegroups.com
···
 F:	drivers/platform/x86/hp/hp_accel.c

 LIST KUNIT TEST
-M:	David Gow <davidgow@google.com>
+M:	David Gow <david@davidgow.net>
 L:	linux-kselftest@vger.kernel.org
 L:	kunit-dev@googlegroups.com
 S:	Maintained
···

 MEDIATEK T7XX 5G WWAN MODEM DRIVER
 M:	Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
-R:	Chiranjeevi Rapolu <chiranjeevi.rapolu@linux.intel.com>
 R:	Liu Haijun <haijun.liu@mediatek.com>
 R:	Ricardo Martinez <ricardo.martinez@linux.intel.com>
 L:	netdev@vger.kernel.org
···
 MEMORY MANAGEMENT - CORE
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Mike Rapoport <rppt@kernel.org>
···
 MEMORY MANAGEMENT - MISC
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Mike Rapoport <rppt@kernel.org>
···
 R:	Michal Hocko <mhocko@kernel.org>
 R:	Qi Zheng <zhengqi.arch@bytedance.com>
 R:	Shakeel Butt <shakeel.butt@linux.dev>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	mm/vmscan.c
···
 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Rik van Riel <riel@surriel.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@kernel.org>
···
 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Zi Yan <ziy@nvidia.com>
 R:	Baolin Wang <baolin.wang@linux.alibaba.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
···

 MEMORY MANAGEMENT - RUST
 M:	Alice Ryhl <aliceryhl@google.com>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 L:	linux-mm@kvack.org
 L:	rust-for-linux@vger.kernel.org
···
 MEMORY MAPPING
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Jann Horn <jannh@google.com>
 R:	Pedro Falcato <pfalcato@suse.de>
···
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Suren Baghdasaryan <surenb@google.com>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Shakeel Butt <shakeel.butt@linux.dev>
 L:	linux-mm@kvack.org
···
 MEMORY MAPPING - MADVISE (MEMORY ADVICE)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 M:	David Hildenbrand <david@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Jann Horn <jannh@google.com>
···
 F:	drivers/pci/controller/pci-aardvark.c

 PCI DRIVER FOR ALTERA PCIE IP
-M:	Joyce Ooi <joyce.ooi@intel.com>
 L:	linux-pci@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	Documentation/devicetree/bindings/pci/altr,pcie-root-port.yaml
 F:	drivers/pci/controller/pcie-altera.c
···
 F:	Documentation/PCI/pci-error-recovery.rst

 PCI MSI DRIVER FOR ALTERA MSI IP
-M:	Joyce Ooi <joyce.ooi@intel.com>
 L:	linux-pci@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	Documentation/devicetree/bindings/interrupt-controller/altr,msi-controller.yaml
 F:	drivers/pci/controller/pcie-altera-msi.c
···

 RADOS BLOCK DEVICE (RBD)
 M:	Ilya Dryomov <idryomov@gmail.com>
-R:	Dongsheng Yang <dongsheng.yang@easystack.cn>
+R:	Dongsheng Yang <dongsheng.yang@linux.dev>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
···
 L:	linux-wireless@vger.kernel.org
 S:	Orphan
 F:	drivers/net/wireless/rsi/
+
+RELAY
+M:	Andrew Morton <akpm@linux-foundation.org>
+M:	Jens Axboe <axboe@kernel.dk>
+M:	Jason Xing <kernelxing@tencent.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	Documentation/filesystems/relay.rst
+F:	include/linux/relay.h
+F:	kernel/relay.c

 REGISTER MAP ABSTRACTION
 M:	Mark Brown <broonie@kernel.org>
···

 RUST [ALLOC]
 M:	Danilo Krummrich <dakr@kernel.org>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Uladzislau Rezki <urezki@gmail.com>
···
 F:	Documentation/devicetree/bindings/pwm/kontron,sl28cpld-pwm.yaml
 F:	Documentation/devicetree/bindings/watchdog/kontron,sl28cpld-wdt.yaml
 F:	drivers/gpio/gpio-sl28cpld.c
-F:	drivers/hwmon/sa67mcu-hwmon.c
 F:	drivers/hwmon/sl28cpld-hwmon.c
 F:	drivers/irqchip/irq-sl28cpld.c
 F:	drivers/pwm/pwm-sl28cpld.c
···

 SLAB ALLOCATOR
 M:	Vlastimil Babka <vbabka@kernel.org>
+M:	Harry Yoo <harry.yoo@oracle.com>
 M:	Andrew Morton <akpm@linux-foundation.org>
+R:	Hao Li <hao.li@linux.dev>
 R:	Christoph Lameter <cl@gentwo.org>
 R:	David Rientjes <rientjes@google.com>
 R:	Roman Gushchin <roman.gushchin@linux.dev>
-R:	Harry Yoo <harry.yoo@oracle.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
···
 F:	include/net/pkt_sched.h
 F:	include/net/sch_priv.h
 F:	include/net/tc_act/
+F:	include/net/tc_wrapper.h
 F:	include/uapi/linux/pkt_cls.h
 F:	include/uapi/linux/pkt_sched.h
 F:	include/uapi/linux/tc_act/
···
 	/* Number of debug breakpoints/watchpoints for this CPU (minus 1) */
 	unsigned int debug_brps;
 	unsigned int debug_wrps;
+
+	/* Last vgic_irq part of the AP list recorded in an LR */
+	struct vgic_irq *last_lr_irq;
 };

 struct kvm_host_psci_config {
···
 #ifndef _ASM_RUNTIME_CONST_H
 #define _ASM_RUNTIME_CONST_H

+#ifdef MODULE
+#error "Cannot use runtime-const infrastructure from modules"
+#endif
+
 #include <asm/cacheflush.h>

 /* Sigh. You can still run arm64 in BE mode */
+9
arch/arm64/kernel/cpufeature.c
···
 	    !is_midr_in_range_list(has_vgic_v3))
 		return false;

+	/*
+	 * pKVM prevents late onlining of CPUs. This means that whatever
+	 * state the capability is in after deprivilege cannot be affected
+	 * by a new CPU booting -- this is guaranteed to be a CPU we have
+	 * already seen, and the cap is therefore unchanged.
+	 */
+	if (system_capabilities_finalized() && is_protected_kvm_enabled())
+		return cpus_have_final_cap(ARM64_HAS_ICH_HCR_EL2_TDIR);
+
 	if (is_kernel_in_hyp_mode())
 		res.a1 = read_sysreg_s(SYS_ICH_VTR_EL2);
 	else
···
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
 	u32 eoicount = FIELD_GET(ICH_HCR_EL2_EOIcount, cpuif->vgic_hcr);
-	struct vgic_irq *irq;
+	struct vgic_irq *irq = *host_data_ptr(last_lr_irq);

 	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
···
 	/*
 	 * EOIMode=0: use EOIcount to emulate deactivation. We are
 	 * guaranteed to deactivate in reverse order of the activation, so
-	 * just pick one active interrupt after the other in the ap_list,
-	 * and replay the deactivation as if the CPU was doing it. We also
-	 * rely on priority drop to have taken place, and the list to be
-	 * sorted by priority.
+	 * just pick one active interrupt after the other in the tail part
+	 * of the ap_list, past the LRs, and replay the deactivation as if
+	 * the CPU was doing it. We also rely on priority drop to have taken
+	 * place, and the list to be sorted by priority.
 	 */
-	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
+	list_for_each_entry_continue(irq, &vgic_cpu->ap_list_head, ap_list) {
 		u64 lr;

 		/*
···599599}600600EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);601601602602+static bool contpte_all_subptes_match_access_flags(pte_t *ptep, pte_t entry)603603+{604604+ pte_t *cont_ptep = contpte_align_down(ptep);605605+ /*606606+ * PFNs differ per sub-PTE. Match only bits consumed by607607+ * __ptep_set_access_flags(): AF, DIRTY and write permission.608608+ */609609+ const pteval_t cmp_mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY;610610+ pteval_t entry_cmp = pte_val(entry) & cmp_mask;611611+ int i;612612+613613+ for (i = 0; i < CONT_PTES; i++) {614614+ pteval_t pte_cmp = pte_val(__ptep_get(cont_ptep + i)) & cmp_mask;615615+616616+ if (pte_cmp != entry_cmp)617617+ return false;618618+ }619619+620620+ return true;621621+}622622+602623int contpte_ptep_set_access_flags(struct vm_area_struct *vma,603624 unsigned long addr, pte_t *ptep,604625 pte_t entry, int dirty)···629608 int i;630609631610 /*632632- * Gather the access/dirty bits for the contiguous range. If nothing has633633- * changed, its a noop.611611+ * Check whether all sub-PTEs in the CONT block already match the612612+ * requested access flags/write permission, using raw per-PTE values613613+ * rather than the gathered ptep_get() view.614614+ *615615+ * __ptep_set_access_flags() can update AF, dirty and write616616+ * permission, but only to make the mapping more permissive.617617+ *618618+ * ptep_get() gathers AF/dirty state across the whole CONT block,619619+ * which is correct for a CPU with FEAT_HAFDBS. But page-table620620+ * walkers that evaluate each descriptor individually (e.g. a CPU621621+ * without DBM support, or an SMMU without HTTU, or with HA/HD622622+ * disabled in CD.TCR) can keep faulting on the target sub-PTE if623623+ * only a sibling has been updated. 
Gathering can therefore cause624624+ * false no-ops when only a sibling has been updated:625625+ * - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared)626626+ * - read faults: target still lacks PTE_AF627627+ *628628+ * Per Arm ARM (DDI 0487) D8.7.1, any sub-PTE in a CONT range may629629+ * become the effective cached translation, so all entries must have630630+ * consistent attributes. Check the full CONT block before returning631631+ * no-op, and when any sub-PTE mismatches, proceed to update the whole632632+ * range.634633 */635635- orig_pte = pte_mknoncont(ptep_get(ptep));636636- if (pte_val(orig_pte) == pte_val(entry))634634+ if (contpte_all_subptes_match_access_flags(ptep, entry))637635 return 0;636636+637637+ /*638638+ * Use raw target pte (not gathered) for write-bit unfold decision.639639+ */640640+ orig_pte = pte_mknoncont(__ptep_get(ptep));638641639642 /*640643 * We can fix up access/dirty bits without having to unfold the contig
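The per-sub-PTE comparison that `contpte_all_subptes_match_access_flags()` performs can be sketched in userspace C. The bit positions below follow arm64's layout but are reproduced here for illustration only; the point is that PFN bits fall outside the mask, so only AF, dirty and write permission are compared:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pteval_t;

/* Illustrative arm64-style bit positions. */
#define PTE_RDONLY (1ULL << 7)
#define PTE_AF     (1ULL << 10)
#define PTE_WRITE  (1ULL << 51)
#define PTE_DIRTY  (1ULL << 55)
#define CONT_PTES  16

/* True only if every sub-PTE already carries the requested
 * AF/dirty/write-permission bits; per-PTE PFN bits are masked out. */
static bool all_subptes_match(const pteval_t *ptes, pteval_t entry)
{
    const pteval_t mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY;
    pteval_t want = entry & mask;

    for (int i = 0; i < CONT_PTES; i++)
        if ((ptes[i] & mask) != want)
            return false;
    return true;
}
```

A single mismatching sibling is enough to force the full-range update, which is exactly the false-no-op case the comment describes.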
···101101 /* Throw in the debugging sections */102102 STABS_DEBUG103103 DWARF_DEBUG104104+ MODINFO104105 ELF_DETAILS105106106107 /* Sections to be discarded -- must be last */
+1
arch/parisc/boot/compressed/vmlinux.lds.S
···9090 /* Sections to be discarded */9191 DISCARDS9292 /DISCARD/ : {9393+ *(.modinfo)9394#ifdef CONFIG_64BIT9495 /* temporary hack until binutils is fixed to not emit these9596 * for static binaries
+1-1
arch/parisc/include/asm/pgtable.h
···8585 printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e))86868787/* This is the size of the initially mapped kernel memory */8888-#if defined(CONFIG_64BIT)8888+#if defined(CONFIG_64BIT) || defined(CONFIG_KALLSYMS)8989#define KERNEL_INITIAL_ORDER 26 /* 1<<26 = 64MB */9090#else9191#define KERNEL_INITIAL_ORDER 25 /* 1<<25 = 32MB */
+6-1
arch/parisc/kernel/head.S
···56565757 .import __bss_start,data5858 .import __bss_stop,data5959+ .import __end,data59606061 load32 PA(__bss_start),%r36162 load32 PA(__bss_stop),%r4···150149 * everything ... it will get remapped correctly later */151150 ldo 0+_PAGE_KERNEL_RWX(%r0),%r3 /* Hardwired 0 phys addr start */152151 load32 (1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */153153- load32 PA(pg0),%r1152152+ load32 PA(_end),%r1153153+ SHRREG %r1,PAGE_SHIFT,%r1 /* %r1 is PFN count for _end symbol */154154+ cmpb,<<,n %r11,%r1,1f155155+ copy %r1,%r11 /* %r1 PFN count smaller than %r11 */156156+1: load32 PA(pg0),%r1154157155158$pgt_fill_loop:156159 STREGM %r3,ASM_PTE_ENTRY_SIZE(%r1)
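The head.S change clamps the page-table fill loop: instead of always mapping the full `KERNEL_INITIAL_ORDER` window, it now maps `min(window PFNs, PFNs up to _end)`. The `cmpb,<<`/`copy` pair is an unsigned min in assembly; a C sketch of the same computation (`fill_pfn_count` is an illustrative name):

```c
#include <assert.h>

#define PAGE_SHIFT 12

/* Number of PFNs the initial mapping loop should cover:
 * the smaller of the KERNEL_INITIAL_ORDER window and the
 * kernel image itself (physical address of _end). */
static unsigned long fill_pfn_count(unsigned long initial_order_pfns,
                                    unsigned long end_pa)
{
    unsigned long end_pfns = end_pa >> PAGE_SHIFT;

    return end_pfns < initial_order_pfns ? end_pfns : initial_order_pfns;
}
```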
+12-8
arch/parisc/kernel/setup.c
···120120#endif121121 printk(KERN_CONT ".\n");122122123123- /*124124- * Check if initial kernel page mappings are sufficient.125125- * panic early if not, else we may access kernel functions126126- * and variables which can't be reached.127127- */128128- if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)129129- panic("KERNEL_INITIAL_ORDER too small!");130130-131123#ifdef CONFIG_64BIT132124 if(parisc_narrow_firmware) {133125 printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n");···270278{271279 int ret, cpunum;272280 struct pdc_coproc_cfg coproc_cfg;281281+282282+ /*283283+ * Check if initial kernel page mapping is sufficient.284284+ * Print warning if not, because we may access kernel functions and285285+ * variables which can't be reached yet through the initial mappings.286286+ * Note that the panic() and printk() functions are not functional287287+ * yet, so we need to use direct iodc() firmware calls instead.288288+ */289289+ const char warn1[] = "CRITICAL: Kernel may crash because "290290+ "KERNEL_INITIAL_ORDER is too small.\n";291291+ if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)292292+ pdc_iodc_print(warn1, sizeof(warn1) - 1);273293274294 /* check QEMU/SeaBIOS marker in PAGE0 */275295 running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);
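The relocated check passes `sizeof(warn1) - 1` to `pdc_iodc_print()` because `sizeof` on a string array counts the trailing NUL, while the firmware call wants the number of printable bytes. A quick self-contained check of that identity:

```c
#include <assert.h>
#include <string.h>

/* sizeof on a char-array string literal includes the '\0',
 * so length-taking print calls pass sizeof(buf) - 1. */
static const char warn1[] = "CRITICAL: Kernel may crash because "
                            "KERNEL_INITIAL_ORDER is too small.\n";
```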
···11-/* T4240 Interlaken LAC Portal device tree stub with 24 portals.22- *33- * Copyright 2012 Freescale Semiconductor Inc.44- *55- * Redistribution and use in source and binary forms, with or without66- * modification, are permitted provided that the following conditions are met:77- * * Redistributions of source code must retain the above copyright88- * notice, this list of conditions and the following disclaimer.99- * * Redistributions in binary form must reproduce the above copyright1010- * notice, this list of conditions and the following disclaimer in the1111- * documentation and/or other materials provided with the distribution.1212- * * Neither the name of Freescale Semiconductor nor the1313- * names of its contributors may be used to endorse or promote products1414- * derived from this software without specific prior written permission.1515- *1616- *1717- * ALTERNATIVELY, this software may be distributed under the terms of the1818- * GNU General Public License ("GPL") as published by the Free Software1919- * Foundation, either version 2 of that License or (at your option) any2020- * later version.2121- *2222- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY2323- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2424- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2525- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2626- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2727- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2828- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND2929- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3030- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3131- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3232- */3333-3434-#address-cells = <0x1>;3535-#size-cells = <0x1>;3636-compatible = "fsl,interlaken-lac-portals";3737-3838-lportal0: lac-portal@0 {3939- compatible = "fsl,interlaken-lac-portal-v1.0";4040- reg = <0x0 0x1000>;4141-};4242-4343-lportal1: lac-portal@1000 {4444- compatible = "fsl,interlaken-lac-portal-v1.0";4545- reg = <0x1000 0x1000>;4646-};4747-4848-lportal2: lac-portal@2000 {4949- compatible = "fsl,interlaken-lac-portal-v1.0";5050- reg = <0x2000 0x1000>;5151-};5252-5353-lportal3: lac-portal@3000 {5454- compatible = "fsl,interlaken-lac-portal-v1.0";5555- reg = <0x3000 0x1000>;5656-};5757-5858-lportal4: lac-portal@4000 {5959- compatible = "fsl,interlaken-lac-portal-v1.0";6060- reg = <0x4000 0x1000>;6161-};6262-6363-lportal5: lac-portal@5000 {6464- compatible = "fsl,interlaken-lac-portal-v1.0";6565- reg = <0x5000 0x1000>;6666-};6767-6868-lportal6: lac-portal@6000 {6969- compatible = "fsl,interlaken-lac-portal-v1.0";7070- reg = <0x6000 0x1000>;7171-};7272-7373-lportal7: lac-portal@7000 {7474- compatible = "fsl,interlaken-lac-portal-v1.0";7575- reg = <0x7000 0x1000>;7676-};7777-7878-lportal8: lac-portal@8000 {7979- compatible = "fsl,interlaken-lac-portal-v1.0";8080- reg = <0x8000 0x1000>;8181-};8282-8383-lportal9: lac-portal@9000 {8484- compatible = "fsl,interlaken-lac-portal-v1.0";8585- reg = <0x9000 0x1000>;8686-};8787-8888-lportal10: lac-portal@A000 {8989- compatible = "fsl,interlaken-lac-portal-v1.0";9090- reg = <0xA000 
0x1000>;9191-};9292-9393-lportal11: lac-portal@B000 {9494- compatible = "fsl,interlaken-lac-portal-v1.0";9595- reg = <0xB000 0x1000>;9696-};9797-9898-lportal12: lac-portal@C000 {9999- compatible = "fsl,interlaken-lac-portal-v1.0";100100- reg = <0xC000 0x1000>;101101-};102102-103103-lportal13: lac-portal@D000 {104104- compatible = "fsl,interlaken-lac-portal-v1.0";105105- reg = <0xD000 0x1000>;106106-};107107-108108-lportal14: lac-portal@E000 {109109- compatible = "fsl,interlaken-lac-portal-v1.0";110110- reg = <0xE000 0x1000>;111111-};112112-113113-lportal15: lac-portal@F000 {114114- compatible = "fsl,interlaken-lac-portal-v1.0";115115- reg = <0xF000 0x1000>;116116-};117117-118118-lportal16: lac-portal@10000 {119119- compatible = "fsl,interlaken-lac-portal-v1.0";120120- reg = <0x10000 0x1000>;121121-};122122-123123-lportal17: lac-portal@11000 {124124- compatible = "fsl,interlaken-lac-portal-v1.0";125125- reg = <0x11000 0x1000>;126126-};127127-128128-lportal18: lac-portal@1200 {129129- compatible = "fsl,interlaken-lac-portal-v1.0";130130- reg = <0x12000 0x1000>;131131-};132132-133133-lportal19: lac-portal@13000 {134134- compatible = "fsl,interlaken-lac-portal-v1.0";135135- reg = <0x13000 0x1000>;136136-};137137-138138-lportal20: lac-portal@14000 {139139- compatible = "fsl,interlaken-lac-portal-v1.0";140140- reg = <0x14000 0x1000>;141141-};142142-143143-lportal21: lac-portal@15000 {144144- compatible = "fsl,interlaken-lac-portal-v1.0";145145- reg = <0x15000 0x1000>;146146-};147147-148148-lportal22: lac-portal@16000 {149149- compatible = "fsl,interlaken-lac-portal-v1.0";150150- reg = <0x16000 0x1000>;151151-};152152-153153-lportal23: lac-portal@17000 {154154- compatible = "fsl,interlaken-lac-portal-v1.0";155155- reg = <0x17000 0x1000>;156156-};
-45
arch/powerpc/boot/dts/fsl/interlaken-lac.dtsi
···11-/*22- * T4 Interlaken Look-aside Controller (LAC) device tree stub33- *44- * Copyright 2012 Freescale Semiconductor Inc.55- *66- * Redistribution and use in source and binary forms, with or without77- * modification, are permitted provided that the following conditions are met:88- * * Redistributions of source code must retain the above copyright99- * notice, this list of conditions and the following disclaimer.1010- * * Redistributions in binary form must reproduce the above copyright1111- * notice, this list of conditions and the following disclaimer in the1212- * documentation and/or other materials provided with the distribution.1313- * * Neither the name of Freescale Semiconductor nor the1414- * names of its contributors may be used to endorse or promote products1515- * derived from this software without specific prior written permission.1616- *1717- *1818- * ALTERNATIVELY, this software may be distributed under the terms of the1919- * GNU General Public License ("GPL") as published by the Free Software2020- * Foundation, either version 2 of that License or (at your option) any2121- * later version.2222- *2323- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY2424- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2525- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2626- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2727- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2828- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2929- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND3030- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3131- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3232- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3333- */3434-3535-lac: lac@229000 {3636- compatible = "fsl,interlaken-lac";3737- reg = <0x229000 0x1000>;3838- interrupts = <16 2 1 18>;3939-};4040-4141-lac-hv@228000 {4242- compatible = "fsl,interlaken-lac-hv";4343- reg = <0x228000 0x1000>;4444- fsl,non-hv-node = <&lac>;4545-};
-43
arch/powerpc/boot/dts/fsl/pq3-mpic-message-B.dtsi
···11-/*22- * PQ3 MPIC Message (Group B) device tree stub [ controller @ offset 0x42400 ]33- *44- * Copyright 2012 Freescale Semiconductor Inc.55- *66- * Redistribution and use in source and binary forms, with or without77- * modification, are permitted provided that the following conditions are met:88- * * Redistributions of source code must retain the above copyright99- * notice, this list of conditions and the following disclaimer.1010- * * Redistributions in binary form must reproduce the above copyright1111- * notice, this list of conditions and the following disclaimer in the1212- * documentation and/or other materials provided with the distribution.1313- * * Neither the name of Freescale Semiconductor nor the1414- * names of its contributors may be used to endorse or promote products1515- * derived from this software without specific prior written permission.1616- *1717- *1818- * ALTERNATIVELY, this software may be distributed under the terms of the1919- * GNU General Public License ("GPL") as published by the Free Software2020- * Foundation, either version 2 of that License or (at your option) any2121- * later version.2222- *2323- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY2424- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2525- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2626- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2727- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2828- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2929- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND3030- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3131- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3232- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3333- */3434-3535-message@42400 {3636- compatible = "fsl,mpic-v3.1-msgr";3737- reg = <0x42400 0x200>;3838- interrupts = <3939- 0xb4 2 0 04040- 0xb5 2 0 04141- 0xb6 2 0 04242- 0xb7 2 0 0>;4343-};
···11-/*22- * QorIQ FMan v3 1g port #1 device tree stub [ controller @ offset 0x400000 ]33- *44- * Copyright 2012 - 2015 Freescale Semiconductor Inc.55- *66- * Redistribution and use in source and binary forms, with or without77- * modification, are permitted provided that the following conditions are met:88- * * Redistributions of source code must retain the above copyright99- * notice, this list of conditions and the following disclaimer.1010- * * Redistributions in binary form must reproduce the above copyright1111- * notice, this list of conditions and the following disclaimer in the1212- * documentation and/or other materials provided with the distribution.1313- * * Neither the name of Freescale Semiconductor nor the1414- * names of its contributors may be used to endorse or promote products1515- * derived from this software without specific prior written permission.1616- *1717- *1818- * ALTERNATIVELY, this software may be distributed under the terms of the1919- * GNU General Public License ("GPL") as published by the Free Software2020- * Foundation, either version 2 of that License or (at your option) any2121- * later version.2222- *2323- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY2424- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2525- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2626- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2727- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2828- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2929- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND3030- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3131- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3232- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3333- */3434-3535-fman@400000 {3636- fman0_rx_0x09: port@89000 {3737- cell-index = <0x9>;3838- compatible = "fsl,fman-v3-port-rx";3939- reg = <0x89000 0x1000>;4040- fsl,fman-10g-port;4141- fsl,fman-best-effort-port;4242- };4343-4444- fman0_tx_0x29: port@a9000 {4545- cell-index = <0x29>;4646- compatible = "fsl,fman-v3-port-tx";4747- reg = <0xa9000 0x1000>;4848- fsl,fman-10g-port;4949- fsl,fman-best-effort-port;5050- };5151-5252- ethernet@e2000 {5353- cell-index = <1>;5454- compatible = "fsl,fman-memac";5555- reg = <0xe2000 0x1000>;5656- fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;5757- ptp-timer = <&ptp_timer0>;5858- pcsphy-handle = <&pcsphy1>, <&qsgmiia_pcs1>;5959- pcs-handle-names = "sgmii", "qsgmii";6060- };6161-6262- mdio@e1000 {6363- qsgmiia_pcs1: ethernet-pcs@1 {6464- compatible = "fsl,lynx-pcs";6565- reg = <1>;6666- };6767- };6868-6969- mdio@e3000 {7070- #address-cells = <1>;7171- #size-cells = <0>;7272- compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";7373- reg = <0xe3000 0x1000>;7474- fsl,erratum-a011043; /* must ignore read errors */7575-7676- pcsphy1: ethernet-phy@0 {7777- reg = <0x0>;7878- };7979- };8080-};
···212212 dev->error_state = pci_channel_io_normal;213213 dev->dma_mask = 0xffffffff;214214215215+ /*216216+ * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit217217+ * if MSI (rather than MSI-X) capability does not have218218+ * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.219219+ */220220+ dev->msi_addr_mask = DMA_BIT_MASK(64);221221+215222 /* Early fixups, before probing the BARs */216223 pci_fixup_device(pci_fixup_early, dev);217224
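The hunk seeds `dev->msi_addr_mask` with `DMA_BIT_MASK(64)`, to be narrowed to 32 bits later if the device's MSI capability lacks `PCI_MSI_FLAGS_64BIT`. The macro itself (reproduced from the kernel's definition; the 64-bit case is special-cased because `1ULL << 64` is undefined in C):

```c
#include <assert.h>
#include <stdint.h>

/* Mask with the low n bits set; n == 64 avoids an undefined shift. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
```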
+2-1
arch/powerpc/kernel/prom_init.c
···28932893 for (node = 0; prom_next_node(&node); ) {28942894 type[0] = '\0';28952895 prom_getprop(node, "device_type", type, sizeof(type));28962896- if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))28962896+ if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s") &&28972897+ prom_strcmp(type, "media-bay"))28972898 continue;2898289928992900 if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
-10
arch/powerpc/kernel/setup-common.c
···3535#include <linux/of_irq.h>3636#include <linux/hugetlb.h>3737#include <linux/pgtable.h>3838-#include <asm/kexec.h>3938#include <asm/io.h>4039#include <asm/paca.h>4140#include <asm/processor.h>···993994 smp_release_cpus();994995995996 initmem_init();996996-997997- /*998998- * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and999999- * hugetlb. These must be called after initmem_init(), so that10001000- * pageblock_order is initialised.10011001- */10021002- fadump_cma_init();10031003- kdump_cma_reserve();10041004- kvm_cma_reserve();10059971006998 early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);1007999
+22-4
arch/powerpc/kernel/trace/ftrace.c
···3737 if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end)3838 return 0;39394040- if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) &&4141- !IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {4242- addr += MCOUNT_INSN_SIZE;4343- if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))4040+ if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) {4141+ if (!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {4442 addr += MCOUNT_INSN_SIZE;4343+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))4444+ addr += MCOUNT_INSN_SIZE;4545+ } else if (IS_ENABLED(CONFIG_CC_IS_CLANG) && IS_ENABLED(CONFIG_PPC64)) {4646+ /*4747+ * addr points to global entry point though the NOP was emitted at local4848+ * entry point due to https://github.com/llvm/llvm-project/issues/1637064949+ * Handle that here with ppc_function_entry() for kernel symbols while5050+ * adjusting module addresses in the else case, by looking for the below5151+ * module global entry point sequence:5252+ * ld r2, -8(r12)5353+ * add r2, r2, r125454+ */5555+ if (is_kernel_text(addr) || is_kernel_inittext(addr))5656+ addr = ppc_function_entry((void *)addr);5757+ else if ((ppc_inst_val(ppc_inst_read((u32 *)addr)) ==5858+ PPC_RAW_LD(_R2, _R12, -8)) &&5959+ (ppc_inst_val(ppc_inst_read((u32 *)(addr+4))) ==6060+ PPC_RAW_ADD(_R2, _R2, _R12)))6161+ addr += 8;6262+ }4563 }46644765 return addr;
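The module-address branch of the ftrace fix scans for the ppc64 global entry prologue `ld r2,-8(r12); add r2,r2,r12` and, if found, advances 8 bytes to the local entry point where clang emitted the NOP. A userspace sketch of that test (`skip_global_entry` is an illustrative name; the instruction encodings below were derived by hand from the Power ISA, so treat them as a sketch rather than as the kernel's `PPC_RAW_*` constants):

```c
#include <assert.h>
#include <stdint.h>

#define INSN_LD_R2_M8_R12  0xe84cfff8u  /* ld  r2,-8(r12) */
#define INSN_ADD_R2_R2_R12 0x7c426214u  /* add r2,r2,r12  */

/* If addr points at the 2-instruction global entry prologue, the
 * patchable NOP lives at the local entry point 8 bytes later. */
static unsigned long skip_global_entry(unsigned long addr,
                                       const uint32_t insn[2])
{
    if (insn[0] == INSN_LD_R2_M8_R12 && insn[1] == INSN_ADD_R2_R2_R12)
        return addr + 8;
    return addr;
}
```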
···450450 kbuf->buffer = headers;451451 kbuf->mem = KEXEC_BUF_MEM_UNKNOWN;452452 kbuf->bufsz = headers_sz;453453+454454+ /*455455+ * Account for extra space required to accommodate additional memory456456+ * ranges in elfcorehdr due to memory hotplug events.457457+ */453458 kbuf->memsz = headers_sz + kdump_extra_elfcorehdr_size(cmem);454459 kbuf->top_down = false;455460···465460 }466461467462 image->elf_load_addr = kbuf->mem;468468- image->elf_headers_sz = headers_sz;463463+464464+ /*465465+ * If CONFIG_CRASH_HOTPLUG is enabled, the elfcorehdr kexec segment466466+ * memsz can be larger than bufsz. Always initialize elf_headers_sz467467+ * with memsz. This ensures the correct size is reserved for elfcorehdr468468+ * memory in the FDT prepared for kdump.469469+ */470470+ image->elf_headers_sz = kbuf->memsz;469471 image->elf_headers = headers;470472out:471473 kfree(cmem);
···189189{190190 struct kvm_book3e_206_tlb_entry *gtlbe =191191 get_entry(vcpu_e500, tlbsel, esel);192192- struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[tlbsel][esel].ref;192192+ struct tlbe_priv *tlbe = &vcpu_e500->gtlb_priv[tlbsel][esel];193193194194 /* Don't bother with unmapped entries */195195- if (!(ref->flags & E500_TLB_VALID)) {196196- WARN(ref->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),197197- "%s: flags %x\n", __func__, ref->flags);195195+ if (!(tlbe->flags & E500_TLB_VALID)) {196196+ WARN(tlbe->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),197197+ "%s: flags %x\n", __func__, tlbe->flags);198198 WARN_ON(tlbsel == 1 && vcpu_e500->g2h_tlb1_map[esel]);199199 }200200201201- if (tlbsel == 1 && ref->flags & E500_TLB_BITMAP) {201201+ if (tlbsel == 1 && tlbe->flags & E500_TLB_BITMAP) {202202 u64 tmp = vcpu_e500->g2h_tlb1_map[esel];203203 int hw_tlb_indx;204204 unsigned long flags;···216216 }217217 mb();218218 vcpu_e500->g2h_tlb1_map[esel] = 0;219219- ref->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);219219+ tlbe->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);220220 local_irq_restore(flags);221221 }222222223223- if (tlbsel == 1 && ref->flags & E500_TLB_TLB0) {223223+ if (tlbsel == 1 && tlbe->flags & E500_TLB_TLB0) {224224 /*225225 * TLB1 entry is backed by 4k pages. This should happen226226 * rarely and is not worth optimizing. 
Invalidate everything.227227 */228228 kvmppc_e500_tlbil_all(vcpu_e500);229229- ref->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);229229+ tlbe->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);230230 }231231232232 /*233233 * If TLB entry is still valid then it's a TLB0 entry, and thus234234 * backed by at most one host tlbe per shadow pid235235 */236236- if (ref->flags & E500_TLB_VALID)236236+ if (tlbe->flags & E500_TLB_VALID)237237 kvmppc_e500_tlbil_one(vcpu_e500, gtlbe);238238239239 /* Mark the TLB as not backed by the host anymore */240240- ref->flags = 0;240240+ tlbe->flags = 0;241241}242242243243static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)···245245 return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);246246}247247248248-static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,249249- struct kvm_book3e_206_tlb_entry *gtlbe,250250- kvm_pfn_t pfn, unsigned int wimg,251251- bool writable)248248+static inline void kvmppc_e500_tlbe_setup(struct tlbe_priv *tlbe,249249+ struct kvm_book3e_206_tlb_entry *gtlbe,250250+ kvm_pfn_t pfn, unsigned int wimg,251251+ bool writable)252252{253253- ref->pfn = pfn;254254- ref->flags = E500_TLB_VALID;253253+ tlbe->pfn = pfn;254254+ tlbe->flags = E500_TLB_VALID;255255 if (writable)256256- ref->flags |= E500_TLB_WRITABLE;256256+ tlbe->flags |= E500_TLB_WRITABLE;257257258258 /* Use guest supplied MAS2_G and MAS2_E */259259- ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;259259+ tlbe->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;260260}261261262262-static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)262262+static inline void kvmppc_e500_tlbe_release(struct tlbe_priv *tlbe)263263{264264- if (ref->flags & E500_TLB_VALID) {264264+ if (tlbe->flags & E500_TLB_VALID) {265265 /* FIXME: don't log bogus pfn for TLB1 */266266- trace_kvm_booke206_ref_release(ref->pfn, ref->flags);267267- ref->flags = 0;266266+ trace_kvm_booke206_ref_release(tlbe->pfn, tlbe->flags);267267+ tlbe->flags = 0;268268 
}269269}270270···284284 int i;285285286286 for (tlbsel = 0; tlbsel <= 1; tlbsel++) {287287- for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) {288288- struct tlbe_ref *ref =289289- &vcpu_e500->gtlb_priv[tlbsel][i].ref;290290- kvmppc_e500_ref_release(ref);291291- }287287+ for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++)288288+ kvmppc_e500_tlbe_release(&vcpu_e500->gtlb_priv[tlbsel][i]);292289 }293290}294291···301304static void kvmppc_e500_setup_stlbe(302305 struct kvm_vcpu *vcpu,303306 struct kvm_book3e_206_tlb_entry *gtlbe,304304- int tsize, struct tlbe_ref *ref, u64 gvaddr,307307+ int tsize, struct tlbe_priv *tlbe, u64 gvaddr,305308 struct kvm_book3e_206_tlb_entry *stlbe)306309{307307- kvm_pfn_t pfn = ref->pfn;310310+ kvm_pfn_t pfn = tlbe->pfn;308311 u32 pr = vcpu->arch.shared->msr & MSR_PR;309309- bool writable = !!(ref->flags & E500_TLB_WRITABLE);312312+ bool writable = !!(tlbe->flags & E500_TLB_WRITABLE);310313311311- BUG_ON(!(ref->flags & E500_TLB_VALID));314314+ BUG_ON(!(tlbe->flags & E500_TLB_VALID));312315313316 /* Force IPROT=0 for all guest mappings. 
*/314317 stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;315315- stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_MAS2_ATTR);318318+ stlbe->mas2 = (gvaddr & MAS2_EPN) | (tlbe->flags & E500_TLB_MAS2_ATTR);316319 stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |317320 e500_shadow_mas3_attrib(gtlbe->mas7_3, writable, pr);318321}···320323static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,321324 u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,322325 int tlbsel, struct kvm_book3e_206_tlb_entry *stlbe,323323- struct tlbe_ref *ref)326326+ struct tlbe_priv *tlbe)324327{325328 struct kvm_memory_slot *slot;326329 unsigned int psize;···452455 }453456 }454457455455- kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg, writable);458458+ kvmppc_e500_tlbe_setup(tlbe, gtlbe, pfn, wimg, writable);456459 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,457457- ref, gvaddr, stlbe);460460+ tlbe, gvaddr, stlbe);458461 writable = tlbe_is_writable(stlbe);459462460463 /* Clear i-cache for new pages */···471474 struct kvm_book3e_206_tlb_entry *stlbe)472475{473476 struct kvm_book3e_206_tlb_entry *gtlbe;474474- struct tlbe_ref *ref;477477+ struct tlbe_priv *tlbe;475478 int stlbsel = 0;476479 int sesel = 0;477480 int r;478481479482 gtlbe = get_entry(vcpu_e500, 0, esel);480480- ref = &vcpu_e500->gtlb_priv[0][esel].ref;483483+ tlbe = &vcpu_e500->gtlb_priv[0][esel];481484482485 r = kvmppc_e500_shadow_map(vcpu_e500, get_tlb_eaddr(gtlbe),483486 get_tlb_raddr(gtlbe) >> PAGE_SHIFT,484484- gtlbe, 0, stlbe, ref);487487+ gtlbe, 0, stlbe, tlbe);485488 if (r)486489 return r;487490···491494}492495493496static int kvmppc_e500_tlb1_map_tlb1(struct kvmppc_vcpu_e500 *vcpu_e500,494494- struct tlbe_ref *ref,497497+ struct tlbe_priv *tlbe,495498 int esel)496499{497500 unsigned int sesel = vcpu_e500->host_tlb1_nv++;···504507 vcpu_e500->g2h_tlb1_map[idx] &= ~(1ULL << sesel);505508 }506509507507- vcpu_e500->gtlb_priv[1][esel].ref.flags |= 
E500_TLB_BITMAP;510510+ vcpu_e500->gtlb_priv[1][esel].flags |= E500_TLB_BITMAP;508511 vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel;509512 vcpu_e500->h2g_tlb1_rmap[sesel] = esel + 1;510510- WARN_ON(!(ref->flags & E500_TLB_VALID));513513+ WARN_ON(!(tlbe->flags & E500_TLB_VALID));511514512515 return sesel;513516}···519522 u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,520523 struct kvm_book3e_206_tlb_entry *stlbe, int esel)521524{522522- struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[1][esel].ref;525525+ struct tlbe_priv *tlbe = &vcpu_e500->gtlb_priv[1][esel];523526 int sesel;524527 int r;525528526529 r = kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe,527527- ref);530530+ tlbe);528531 if (r)529532 return r;530533531534 /* Use TLB0 when we can only map a page with 4k */532535 if (get_tlb_tsize(stlbe) == BOOK3E_PAGESZ_4K) {533533- vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_TLB0;536536+ vcpu_e500->gtlb_priv[1][esel].flags |= E500_TLB_TLB0;534537 write_stlbe(vcpu_e500, gtlbe, stlbe, 0, 0);535538 return 0;536539 }537540538541 /* Otherwise map into TLB1 */539539- sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, ref, esel);542542+ sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, tlbe, esel);540543 write_stlbe(vcpu_e500, gtlbe, stlbe, 1, sesel);541544542545 return 0;···558561 priv = &vcpu_e500->gtlb_priv[tlbsel][esel];559562560563 /* Triggers after clear_tlb_privs or on initial mapping */561561- if (!(priv->ref.flags & E500_TLB_VALID)) {564564+ if (!(priv->flags & E500_TLB_VALID)) {562565 kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);563566 } else {564567 kvmppc_e500_setup_stlbe(vcpu, gtlbe, BOOK3E_PAGESZ_4K,565565- &priv->ref, eaddr, &stlbe);568568+ priv, eaddr, &stlbe);566569 write_stlbe(vcpu_e500, gtlbe, &stlbe, 0, 0);567570 }568571 break;
+1
arch/powerpc/lib/copyuser_64.S
···562562 li r5,4096563563 b .Ldst_aligned564564EXPORT_SYMBOL(__copy_tofrom_user)565565+EXPORT_SYMBOL(__copy_tofrom_user_base)
+15-30
arch/powerpc/lib/copyuser_power7.S
···55 *66 * Author: Anton Blanchard <anton@au.ibm.com>77 */88+#include <linux/export.h>89#include <asm/ppc_asm.h>99-1010-#ifndef SELFTEST_CASE1111-/* 0 == don't use VMX, 1 == use VMX */1212-#define SELFTEST_CASE 01313-#endif14101511#ifdef __BIG_ENDIAN__1612#define LVS(VRT,RA,RB) lvsl VRT,RA,RB···4347 ld r15,STK_REG(R15)(r1)4448 ld r14,STK_REG(R14)(r1)4549.Ldo_err3:4646- bl CFUNC(exit_vmx_usercopy)5050+ ld r6,STK_REG(R31)(r1) /* original destination pointer */5151+ ld r5,STK_REG(R29)(r1) /* original number of bytes */5252+ subf r7,r6,r3 /* #bytes copied */5353+ subf r3,r7,r5 /* #bytes not copied in r3 */4754 ld r0,STACKFRAMESIZE+16(r1)4855 mtlr r04949- b .Lexit5656+ addi r1,r1,STACKFRAMESIZE5757+ blr5058#endif /* CONFIG_ALTIVEC */51595260.Ldo_err2:···74747575_GLOBAL(__copy_tofrom_user_power7)7676 cmpldi r5,167777- cmpldi cr1,r5,332878777978 std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)8079 std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)···81828283 blt .Lshort_copy83848484-#ifdef CONFIG_ALTIVEC8585-test_feature = SELFTEST_CASE8686-BEGIN_FTR_SECTION8787- bgt cr1,.Lvmx_copy8888-END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)8989-#endif90859186.Lnonvmx_copy:9287 /* Get the source 8B aligned */···25626315: li r3,0257264 blr258265259259-.Lunwind_stack_nonvmx_copy:260260- addi r1,r1,STACKFRAMESIZE261261- b .Lnonvmx_copy262262-263263-.Lvmx_copy:264266#ifdef CONFIG_ALTIVEC267267+_GLOBAL(__copy_tofrom_user_power7_vmx)265268 mflr r0266269 std r0,16(r1)267270 stdu r1,-STACKFRAMESIZE(r1)268268- bl CFUNC(enter_vmx_usercopy)269269- cmpwi cr1,r3,0270270- ld r0,STACKFRAMESIZE+16(r1)271271- ld r3,STK_REG(R31)(r1)272272- ld r4,STK_REG(R30)(r1)273273- ld r5,STK_REG(R29)(r1)274274- mtlr r0275271272272+ std r3,STK_REG(R31)(r1)273273+ std r5,STK_REG(R29)(r1)276274 /*277275 * We prefetch both the source and destination using enhanced touch278276 * instructions. 
We use a stream ID of 0 for the load side and···283299 ori r10,r7,1 /* stream=1 */284300285301 DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)286286-287287- beq cr1,.Lunwind_stack_nonvmx_copy288302289303 /*290304 * If source and destination are not relatively aligned we use a···460478err3; stb r0,0(r3)46147946248015: addi r1,r1,STACKFRAMESIZE463463- b CFUNC(exit_vmx_usercopy) /* tail call optimise */481481+ li r3,0482482+ blr464483465484.Lvmx_unaligned_copy:466485 /* Get the destination 16B aligned */···664681err3; stb r0,0(r3)66568266668315: addi r1,r1,STACKFRAMESIZE667667- b CFUNC(exit_vmx_usercopy) /* tail call optimise */684684+ li r3,0685685+ blr686686+EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)668687#endif /* CONFIG_ALTIVEC */
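The reworked `.Ldo_err3` path computes the usercopy return value inline instead of tail-calling `exit_vmx_usercopy()`: it reloads the original destination and byte count from the stack frame, then returns `total - (current_dst - original_dst)`. The `subf r7,r6,r3` / `subf r3,r7,r5` pair is exactly this arithmetic; a C sketch (`bytes_not_copied` is an illustrative name):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* copy_to/from_user convention: return the number of bytes
 * that were NOT copied when a fault cut the copy short. */
static size_t bytes_not_copied(uintptr_t orig_dst, uintptr_t cur_dst,
                               size_t total)
{
    size_t copied = (size_t)(cur_dst - orig_dst);

    return total - copied;
}
```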
+2
arch/powerpc/lib/vmx-helper.c
···27272828 return 1;2929}3030+EXPORT_SYMBOL(enter_vmx_usercopy);30313132/*3233 * This function must return 0 because we tail call optimise when calling···5049 set_dec(1);5150 return 0;5251}5252+EXPORT_SYMBOL(exit_vmx_usercopy);53535454int enter_vmx_ops(void)5555{
+14
arch/powerpc/mm/mem.c
···3030#include <asm/setup.h>3131#include <asm/fixmap.h>32323333+#include <asm/fadump.h>3434+#include <asm/kexec.h>3535+#include <asm/kvm_ppc.h>3636+3337#include <mm/mmu_decl.h>34383539unsigned long long memory_limit __initdata;···272268273269void __init arch_mm_preinit(void)274270{271271+272272+ /*273273+ * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM274274+ * and hugetlb. These must be called after pageblock_order is275275+ * initialised.276276+ */277277+ fadump_cma_init();278278+ kdump_cma_reserve();279279+ kvm_cma_reserve();280280+275281 /*276282 * book3s is limited to 16 page sizes due to encoding this in277283 * a 4-bit field for slices.
-5
arch/powerpc/net/bpf_jit.h
···81818282#ifdef CONFIG_PPC6483838484-/* for gpr non volatile registers BPG_REG_6 to 10 */8585-#define BPF_PPC_STACK_SAVE (6 * 8)8686-8784/* If dummy pass (!image), account for maximum possible instructions */8885#define PPC_LI64(d, i) do { \8986 if (!image) \···216219int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass,217220 struct codegen_context *ctx, int insn_idx,218221 int jmp_off, int dst_reg, u32 code);219219-220220-int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx);221222#endif222223223224#endif
+56-71
arch/powerpc/net/bpf_jit_comp.c
···450450451451bool bpf_jit_supports_kfunc_call(void)452452{453453- return true;453453+ return IS_ENABLED(CONFIG_PPC64);454454}455455456456bool bpf_jit_supports_arena(void)···638638 * for the traced function (BPF subprog/callee) to fetch it.639639 */640640static void bpf_trampoline_setup_tail_call_info(u32 *image, struct codegen_context *ctx,641641- int func_frame_offset,642642- int bpf_dummy_frame_size, int r4_off)641641+ int bpf_frame_size, int r4_off)643642{644643 if (IS_ENABLED(CONFIG_PPC64)) {645645- /* See Generated stack layout */646646- int tailcallinfo_offset = BPF_PPC_TAILCALL;647647-648648- /*649649- * func_frame_offset = ...(1)650650- * bpf_dummy_frame_size + trampoline_frame_size651651- */652652- EMIT(PPC_RAW_LD(_R4, _R1, func_frame_offset));653653- EMIT(PPC_RAW_LD(_R3, _R4, -tailcallinfo_offset));644644+ EMIT(PPC_RAW_LD(_R4, _R1, bpf_frame_size));645645+ /* Refer to trampoline's Generated stack layout */646646+ EMIT(PPC_RAW_LD(_R3, _R4, -BPF_PPC_TAILCALL));654647655648 /*656649 * Setting the tail_call_info in trampoline's frame···651658 */652659 EMIT(PPC_RAW_CMPLWI(_R3, MAX_TAIL_CALL_CNT));653660 PPC_BCC_CONST_SHORT(COND_GT, 8);654654- EMIT(PPC_RAW_ADDI(_R3, _R4, bpf_jit_stack_tailcallinfo_offset(ctx)));661661+ EMIT(PPC_RAW_ADDI(_R3, _R4, -BPF_PPC_TAILCALL));662662+655663 /*656656- * From ...(1) above:657657- * trampoline_frame_bottom = ...(2)658658- * func_frame_offset - bpf_dummy_frame_size659659- *660660- * Using ...(2) derived above:661661- * trampoline_tail_call_info_offset = ...(3)662662- * trampoline_frame_bottom - tailcallinfo_offset663663- *664664- * From ...(3):665665- * Use trampoline_tail_call_info_offset to write reference of main's666666- * tail_call_info in trampoline frame.664664+ * Trampoline's tail_call_info is at the same offset, as that of665665+ * any bpf program, with reference to previous frame. 
Update the666666+ * address of main's tail_call_info in trampoline frame.667667 */668668- EMIT(PPC_RAW_STL(_R3, _R1, (func_frame_offset - bpf_dummy_frame_size)669669- - tailcallinfo_offset));668668+ EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size - BPF_PPC_TAILCALL));670669 } else {671670 /* See bpf_jit_stack_offsetof() and BPF_PPC_TC */672671 EMIT(PPC_RAW_LL(_R4, _R1, r4_off));···666681}667682668683static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_context *ctx,669669- int func_frame_offset, int r4_off)684684+ int bpf_frame_size, int r4_off)670685{671686 if (IS_ENABLED(CONFIG_PPC32)) {672687 /*···677692 }678693}679694680680-static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, int func_frame_offset,681681- int nr_regs, int regs_off)695695+static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx,696696+ int bpf_frame_size, int nr_regs, int regs_off)682697{683698 int param_save_area_offset;684699685685- param_save_area_offset = func_frame_offset; /* the two frames we alloted */700700+ param_save_area_offset = bpf_frame_size;686701 param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */687702688703 for (int i = 0; i < nr_regs; i++) {···705720706721/* Used when we call into the traced function. 
Replicate parameter save area */707722static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context *ctx,708708- int func_frame_offset, int nr_regs, int regs_off)723723+ int bpf_frame_size, int nr_regs, int regs_off)709724{710725 int param_save_area_offset;711726712712- param_save_area_offset = func_frame_offset; /* the two frames we alloted */727727+ param_save_area_offset = bpf_frame_size;713728 param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */714729715730 for (int i = 8; i < nr_regs; i++) {···726741 void *func_addr)727742{728743 int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0;729729- int i, ret, nr_regs, bpf_frame_size = 0, bpf_dummy_frame_size = 0, func_frame_offset;730744 struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];731745 struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];732746 struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];747747+ int i, ret, nr_regs, retaddr_off, bpf_frame_size = 0;733748 struct codegen_context codegen_ctx, *ctx;734749 u32 *image = (u32 *)rw_image;735750 ppc_inst_t branch_insn;···755770 * Generated stack layout:756771 *757772 * func prev back chain [ back chain ]758758- * [ ]759759- * bpf prog redzone/tailcallcnt [ ... ] 64 bytes (64-bit powerpc)760760- * [ ] --761761- * LR save area [ r0 save (64-bit) ] | header762762- * [ r0 save (32-bit) ] |763763- * dummy frame for unwind [ back chain 1 ] --764773 * [ tail_call_info ] optional - 64-bit powerpc765774 * [ padding ] align stack frame766775 * r4_off [ r4 (tailcallcnt) ] optional - 32-bit powerpc767776 * alt_lr_off [ real lr (ool stub)] optional - actual lr777777+ * retaddr_off [ return address ]768778 * [ r26 ]769779 * nvr_off [ r25 ] nvr save area770780 * retval_off [ return value ]771781 * [ reg argN ]772782 * [ ... 
]773773- * regs_off [ reg_arg1 ] prog ctx context774774- * nregs_off [ args count ]775775- * ip_off [ traced function ]783783+ * regs_off [ reg_arg1 ] prog_ctx784784+ * nregs_off [ args count ] ((u64 *)prog_ctx)[-1]785785+ * ip_off [ traced function ] ((u64 *)prog_ctx)[-2]776786 * [ ... ]777787 * run_ctx_off [ bpf_tramp_run_ctx ]778788 * [ reg argN ]···823843 nvr_off = bpf_frame_size;824844 bpf_frame_size += 2 * SZL;825845846846+ /* Save area for return address */847847+ retaddr_off = bpf_frame_size;848848+ bpf_frame_size += SZL;849849+826850 /* Optional save area for actual LR in case of ool ftrace */827851 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {828852 alt_lr_off = bpf_frame_size;···853869 /* Padding to align stack frame, if any */854870 bpf_frame_size = round_up(bpf_frame_size, SZL * 2);855871856856- /* Dummy frame size for proper unwind - includes 64-bytes red zone for 64-bit powerpc */857857- bpf_dummy_frame_size = STACK_FRAME_MIN_SIZE + 64;858858-859859- /* Offset to the traced function's stack frame */860860- func_frame_offset = bpf_dummy_frame_size + bpf_frame_size;861861-862862- /* Create dummy frame for unwind, store original return value */872872+ /* Store original return value */863873 EMIT(PPC_RAW_STL(_R0, _R1, PPC_LR_STKOFF));864864- /* Protect red zone where tail call count goes */865865- EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_dummy_frame_size));866874867875 /* Create our stack frame */868876 EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_frame_size));···869893 if (IS_ENABLED(CONFIG_PPC32) && nr_regs < 2)870894 EMIT(PPC_RAW_STL(_R4, _R1, r4_off));871895872872- bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);896896+ bpf_trampoline_save_args(image, ctx, bpf_frame_size, nr_regs, regs_off);873897874874- /* Save our return address */898898+ /* Save our LR/return address */875899 EMIT(PPC_RAW_MFLR(_R3));876900 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))877901 EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));878902 else879879- EMIT(PPC_RAW_STL(_R3, 
_R1, bpf_frame_size + PPC_LR_STKOFF));903903+ EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off));880904881905 /*882882- * Save ip address of the traced function.883883- * We could recover this from LR, but we will need to address for OOL trampoline,884884- * and optional GEP area.906906+ * Derive IP address of the traced function.907907+ * In case of CONFIG_PPC_FTRACE_OUT_OF_LINE or BPF program, LR points to the instruction908908+ * after the 'bl' instruction in the OOL stub. Refer to ftrace_init_ool_stub() and909909+ * bpf_arch_text_poke() for OOL stub of kernel functions and bpf programs respectively.910910+ * Relevant stub sequence:911911+ *912912+ * bl <tramp>913913+ * LR (R3) => mtlr r0914914+ * b <func_addr+4>915915+ *916916+ * Recover kernel function/bpf program address from the unconditional917917+ * branch instruction at the end of OOL stub.885918 */886919 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {887920 EMIT(PPC_RAW_LWZ(_R4, _R3, 4));888921 EMIT(PPC_RAW_SLWI(_R4, _R4, 6));889922 EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));890923 EMIT(PPC_RAW_ADD(_R3, _R3, _R4));891891- EMIT(PPC_RAW_ADDI(_R3, _R3, 4));892924 }893925894926 if (flags & BPF_TRAMP_F_IP_ARG)895927 EMIT(PPC_RAW_STL(_R3, _R1, ip_off));896928897897- if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))898898- /* Fake our LR for unwind */899899- EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));929929+ if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {930930+ /* Fake our LR for BPF_TRAMP_F_CALL_ORIG case */931931+ EMIT(PPC_RAW_ADDI(_R3, _R3, 4));932932+ EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off));933933+ }900934901935 /* Save function arg count -- see bpf_get_func_arg_cnt() */902936 EMIT(PPC_RAW_LI(_R3, nr_regs));···944958 /* Call the traced function */945959 if (flags & BPF_TRAMP_F_CALL_ORIG) {946960 /*947947- * The address in LR save area points to the correct point in the original function961961+ * retaddr on trampoline stack points to the correct point in the original 
function948962 * with both PPC_FTRACE_OUT_OF_LINE as well as with traditional ftrace instruction949963 * sequence950964 */951951- EMIT(PPC_RAW_LL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));965965+ EMIT(PPC_RAW_LL(_R3, _R1, retaddr_off));952966 EMIT(PPC_RAW_MTCTR(_R3));953967954968 /* Replicate tail_call_cnt before calling the original BPF prog */955969 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)956956- bpf_trampoline_setup_tail_call_info(image, ctx, func_frame_offset,957957- bpf_dummy_frame_size, r4_off);970970+ bpf_trampoline_setup_tail_call_info(image, ctx, bpf_frame_size, r4_off);958971959972 /* Restore args */960960- bpf_trampoline_restore_args_stack(image, ctx, func_frame_offset, nr_regs, regs_off);973973+ bpf_trampoline_restore_args_stack(image, ctx, bpf_frame_size, nr_regs, regs_off);961974962975 /* Restore TOC for 64-bit */963976 if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))···970985971986 /* Restore updated tail_call_cnt */972987 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)973973- bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off);988988+ bpf_trampoline_restore_tail_call_cnt(image, ctx, bpf_frame_size, r4_off);974989975990 /* Reserve space to patch branch instruction to skip fexit progs */976991 if (ro_image) /* image is NULL for dummy pass */···10221037 EMIT(PPC_RAW_LD(_R2, _R1, 24));10231038 if (flags & BPF_TRAMP_F_SKIP_FRAME) {10241039 /* Skip the traced function and return to parent */10251025- EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));10401040+ EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));10261041 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));10271042 EMIT(PPC_RAW_MTLR(_R0));10281043 EMIT(PPC_RAW_BLR());···10301045 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {10311046 EMIT(PPC_RAW_LL(_R0, _R1, alt_lr_off));10321047 EMIT(PPC_RAW_MTLR(_R0));10331033- EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));10481048+ EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));10341049 EMIT(PPC_RAW_LL(_R0, _R1, 
PPC_LR_STKOFF));10351050 EMIT(PPC_RAW_BLR());10361051 } else {10371037- EMIT(PPC_RAW_LL(_R0, _R1, bpf_frame_size + PPC_LR_STKOFF));10521052+ EMIT(PPC_RAW_LL(_R0, _R1, retaddr_off));10381053 EMIT(PPC_RAW_MTCTR(_R0));10391039- EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));10541054+ EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));10401055 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));10411056 EMIT(PPC_RAW_MTLR(_R0));10421057 EMIT(PPC_RAW_BCTR());
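The LWZ/SLWI/SRAWI/ADD sequence in the trampoline above recovers the traced function's address from the unconditional `b` at the end of the OOL stub by sign-extending the instruction's 26-bit offset field. A minimal C sketch of that arithmetic (the helper name and the test encodings are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Recover the target of a powerpc I-form branch ('b' with AA=LK=0),
 * mirroring the JIT's lwz/slwi/srawi/add sequence: shift left by 6,
 * then arithmetic shift right by 6 sign-extends the low 26 bits. */
static uint64_t branch_target(uint32_t insn, uint64_t addr)
{
	int32_t off = ((int32_t)(insn << 6)) >> 6;

	return addr + off;
}
```

The shift pair is exactly what SLWI 6 / SRAWI 6 do in the emitted code; in C the same result comes from the cast-and-shift idiom above.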
+143-38
arch/powerpc/net/bpf_jit_comp64.c
···3232 *3333 * [ prev sp ] <-------------3434 * [ tail_call_info ] 8 |3535- * [ nv gpr save area ] 6*8 + (12*8) |3535+ * [ nv gpr save area ] (6 * 8) |3636+ * [ addl. nv gpr save area] (12 * 8) | <--- exception boundary/callback program3637 * [ local_tmp_var ] 24 |3738 * fp (r31) --> [ ebpf stack space ] upto 512 |3839 * [ frame header ] 32/112 |3940 * sp (r1) ---> [ stack pointer ] --------------4041 *4141- * Additional (12*8) in 'nv gpr save area' only in case of4242- * exception boundary.4242+ * Additional (12 * 8) in 'nv gpr save area' only in case of4343+ * exception boundary/callback.4344 */4545+4646+/* BPF non-volatile registers save area size */4747+#define BPF_PPC_STACK_SAVE (6 * 8)44484549/* for bpf JIT code internal usage */4650#define BPF_PPC_STACK_LOCALS 24···5248 * for additional non volatile registers(r14-r25) to be saved5349 * at exception boundary5450 */5555-#define BPF_PPC_EXC_STACK_SAVE (12*8)5151+#define BPF_PPC_EXC_STACK_SAVE (12 * 8)56525753/* stack frame excluding BPF stack, ensure this is quadword aligned */5854#define BPF_PPC_STACKFRAME (STACK_FRAME_MIN_SIZE + \···129125 * [ ... ] |130126 * sp (r1) ---> [ stack pointer ] --------------131127 * [ tail_call_info ] 8132132- * [ nv gpr save area ] 6*8 + (12*8)128128+ * [ nv gpr save area ] (6 * 8)129129+ * [ addl. 
nv gpr save area] (12 * 8) <--- exception boundary/callback program133130 * [ local_tmp_var ] 24134131 * [ unused red zone ] 224135132 *136136- * Additional (12*8) in 'nv gpr save area' only in case of137137- * exception boundary.133133+ * Additional (12 * 8) in 'nv gpr save area' only in case of134134+ * exception boundary/callback.138135 */139136static int bpf_jit_stack_local(struct codegen_context *ctx)140137{···153148 }154149155150156156-int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx)151151+static int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx)157152{158153 return bpf_jit_stack_local(ctx) + BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE;159154}···242237243238 if (bpf_has_stack_frame(ctx) && !ctx->exception_cb) {244239 /*245245- * exception_cb uses boundary frame after stack walk.246246- * It can simply use redzone, this optimization reduces247247- * stack walk loop by one level.248248- *249240 * We need a stack frame, but we don't necessarily need to250241 * save/restore LR unless we call other functions251242 */···285284 * program(main prog) as third arg286285 */287286 EMIT(PPC_RAW_MR(_R1, _R5));287287+ /*288288+ * Exception callback reuses the stack frame of exception boundary.289289+ * But BPF stack depth of exception callback and exception boundary290290+ * don't have to be the same. If BPF stack depth is different, adjust the291291+ * stack frame size considering BPF stack depth of exception callback.292292+ * The non-volatile register save area remains unchanged. 
These non-293293+ * volatile registers are restored in exception callback's epilogue.294294+ */295295+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), _R5, 0));296296+ EMIT(PPC_RAW_SUB(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_1), _R1));297297+ EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),298298+ -BPF_PPC_EXC_STACKFRAME));299299+ EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), ctx->stack_size));300300+ PPC_BCC_CONST_SHORT(COND_EQ, 12);301301+ EMIT(PPC_RAW_MR(_R1, bpf_to_ppc(TMP_REG_1)));302302+ EMIT(PPC_RAW_STDU(_R1, _R1, -(BPF_PPC_EXC_STACKFRAME + ctx->stack_size)));288303 }289304290305 /*···499482 return 0;500483}501484485485+static int zero_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size)486486+{487487+ switch (size) {488488+ case 1:489489+ /* zero-extend 8 bits into 64 bits */490490+ EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 56));491491+ return 0;492492+ case 2:493493+ /* zero-extend 16 bits into 64 bits */494494+ EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 48));495495+ return 0;496496+ case 4:497497+ /* zero-extend 32 bits into 64 bits */498498+ EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 32));499499+ fallthrough;500500+ case 8:501501+ /* Nothing to do */502502+ return 0;503503+ default:504504+ return -1;505505+ }506506+}507507+508508+static int sign_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size)509509+{510510+ switch (size) {511511+ case 1:512512+ /* sign-extend 8 bits into 64 bits */513513+ EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));514514+ return 0;515515+ case 2:516516+ /* sign-extend 16 bits into 64 bits */517517+ EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));518518+ return 0;519519+ case 4:520520+ /* sign-extend 32 bits into 64 bits */521521+ EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));522522+ fallthrough;523523+ case 8:524524+ /* Nothing to do */525525+ return 0;526526+ default:527527+ return -1;528528+ }529529+}530530+531531+/*532532+ * Handle powerpc ABI expectations from caller:533533+ * - Unsigned 
arguments are zero-extended.534534+ * - Signed arguments are sign-extended.535535+ */536536+static int prepare_for_kfunc_call(const struct bpf_prog *fp, u32 *image,537537+ struct codegen_context *ctx,538538+ const struct bpf_insn *insn)539539+{540540+ const struct btf_func_model *m = bpf_jit_find_kfunc_model(fp, insn);541541+ int i;542542+543543+ if (!m)544544+ return -1;545545+546546+ for (i = 0; i < m->nr_args; i++) {547547+ /* Note that BPF ABI only allows up to 5 args for kfuncs */548548+ u32 reg = bpf_to_ppc(BPF_REG_1 + i), size = m->arg_size[i];549549+550550+ if (!(m->arg_flags[i] & BTF_FMODEL_SIGNED_ARG)) {551551+ if (zero_extend(image, ctx, reg, reg, size))552552+ return -1;553553+ } else {554554+ if (sign_extend(image, ctx, reg, reg, size))555555+ return -1;556556+ }557557+ }558558+559559+ return 0;560560+}561561+502562static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)503563{504564 /*···616522617523 /*618524 * tail_call_info++; <- Actual value of tcc here525525+ * Writeback this updated value only if tailcall succeeds.619526 */620527 EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), 1));528528+529529+ /* prog = array->ptrs[index]; */530530+ EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_2), b2p_index, 8));531531+ EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), b2p_bpf_array));532532+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),533533+ offsetof(struct bpf_array, ptrs)));534534+535535+ /*536536+ * if (prog == NULL)537537+ * goto out;538538+ */539539+ EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), 0));540540+ PPC_BCC_SHORT(COND_EQ, out);541541+542542+ /* goto *(prog->bpf_func + prologue_size); */543543+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),544544+ offsetof(struct bpf_prog, bpf_func)));545545+ EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),546546+ FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size));547547+ 
EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_2)));621548622549 /*623550 * Before writing updated tail_call_info, distinguish if current frame···653538 EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), _R1, bpf_jit_stack_tailcallinfo_offset(ctx)));654539 /* Writeback updated value to tail_call_info */655540 EMIT(PPC_RAW_STD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_2), 0));656656-657657- /* prog = array->ptrs[index]; */658658- EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_1), b2p_index, 8));659659- EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), b2p_bpf_array));660660- EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_array, ptrs)));661661-662662- /*663663- * if (prog == NULL)664664- * goto out;665665- */666666- EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_1), 0));667667- PPC_BCC_SHORT(COND_EQ, out);668668-669669- /* goto *(prog->bpf_func + prologue_size); */670670- EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_prog, bpf_func)));671671- EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1),672672- FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size));673673- EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_1)));674541675542 /* tear down stack, restore NVRs, ... 
*/676543 bpf_jit_emit_common_epilogue(image, ctx);···12201123 /* special mov32 for zext */12211124 EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));12221125 break;12231223- } else if (off == 8) {12241224- EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));12251225- } else if (off == 16) {12261226- EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));12271227- } else if (off == 32) {12281228- EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));12291229- } else if (dst_reg != src_reg)12301230- EMIT(PPC_RAW_MR(dst_reg, src_reg));11261126+ }11271127+ if (off == 0) {11281128+ /* MOV */11291129+ if (dst_reg != src_reg)11301130+ EMIT(PPC_RAW_MR(dst_reg, src_reg));11311131+ } else {11321132+ /* MOVSX: dst = (s8,s16,s32)src (off = 8,16,32) */11331133+ if (sign_extend(image, ctx, src_reg, dst_reg, off / 8))11341134+ return -1;11351135+ }12311136 goto bpf_alu32_trunc;12321137 case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */12331138 case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = (s64) imm */···16961597 &func_addr, &func_addr_fixed);16971598 if (ret < 0)16981599 return ret;16001600+16011601+ /* Take care of powerpc ABI requirements before kfunc call */16021602+ if (insn[i].src_reg == BPF_PSEUDO_KFUNC_CALL) {16031603+ if (prepare_for_kfunc_call(fp, image, ctx, &insn[i]))16041604+ return -1;16051605+ }1699160617001607 ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);17011608 if (ret)
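The `zero_extend()`/`sign_extend()` emitters added above widen sub-64-bit kfunc arguments (RLDICL for zero extension, EXTSB/EXTSH/EXTSW for sign extension) as the powerpc ABI expects. The same semantics expressed as plain C conversions, as a sketch rather than the JIT code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Zero-extend the low 'size' bytes of v to 64 bits (what RLDICL does). */
static uint64_t zext(uint64_t v, unsigned int size)
{
	switch (size) {
	case 1: return (uint8_t)v;
	case 2: return (uint16_t)v;
	case 4: return (uint32_t)v;
	default: return v;
	}
}

/* Sign-extend the low 'size' bytes of v to 64 bits (EXTSB/EXTSH/EXTSW). */
static uint64_t sext(uint64_t v, unsigned int size)
{
	switch (size) {
	case 1: return (uint64_t)(int64_t)(int8_t)v;
	case 2: return (uint64_t)(int64_t)(int16_t)v;
	case 4: return (uint64_t)(int64_t)(int32_t)v;
	default: return v;
	}
}
```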
+5
arch/powerpc/perf/callchain.c
···103103void104104perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)105105{106106+ perf_callchain_store(entry, perf_arch_instruction_pointer(regs));107107+108108+ if (!current->mm)109109+ return;110110+106111 if (!is_32bit_task())107112 perf_callchain_user_64(entry, regs);108113 else
-1
arch/powerpc/perf/callchain_32.c
···142142 next_ip = perf_arch_instruction_pointer(regs);143143 lr = regs->link;144144 sp = regs->gpr[1];145145- perf_callchain_store(entry, next_ip);146145147146 while (entry->nr < entry->max_stack) {148147 fp = (unsigned int __user *) (unsigned long) sp;
-1
arch/powerpc/perf/callchain_64.c
···7777 next_ip = perf_arch_instruction_pointer(regs);7878 lr = regs->link;7979 sp = regs->gpr[1];8080- perf_callchain_store(entry, next_ip);81808281 while (entry->nr < entry->max_stack) {8382 fp = (unsigned long __user *) sp;
+2-2
arch/powerpc/platforms/83xx/km83xx.c
···155155156156/* list of the supported boards */157157static char *board[] __initdata = {158158- "Keymile,KMETER1",159159- "Keymile,kmpbec8321",158158+ "keymile,KMETER1",159159+ "keymile,kmpbec8321",160160 NULL161161};162162
+2-2
arch/powerpc/platforms/Kconfig.cputype
···276276config PPC_E500277277 select FSL_EMB_PERFMON278278 bool279279- select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64279279+ select ARCH_SUPPORTS_HUGETLBFS280280 select PPC_SMP_MUXED_IPI281281 select PPC_DOORBELL282282 select PPC_KUEP···337337config PTE_64BIT338338 bool339339 depends on 44x || PPC_E500 || PPC_86xx340340- default y if PHYS_64BIT340340+ default y if PPC_E500 || PHYS_64BIT341341342342config PHYS_64BIT343343 bool 'Large physical address support' if PPC_E500 || PPC_86xx
···1111#include <linux/irqchip/riscv-imsic.h>1212#include <linux/kvm_host.h>1313#include <linux/uaccess.h>1414+#include <linux/cpufeature.h>14151516static int aia_create(struct kvm_device *dev, u32 type)1617{···22212322 if (irqchip_in_kernel(kvm))2423 return -EEXIST;2424+2525+ if (!riscv_isa_extension_available(NULL, SSAIA))2626+ return -ENODEV;25272628 ret = -EBUSY;2729 if (kvm_trylock_all_vcpus(kvm))···441437442438static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)443439{444444- int nr_vcpus;440440+ int nr_vcpus, r = -ENXIO;445441446442 switch (attr->group) {447443 case KVM_DEV_RISCV_AIA_GRP_CONFIG:···470466 }471467 break;472468 case KVM_DEV_RISCV_AIA_GRP_APLIC:473473- return kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);469469+ mutex_lock(&dev->kvm->lock);470470+ r = kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);471471+ mutex_unlock(&dev->kvm->lock);472472+ break;474473 case KVM_DEV_RISCV_AIA_GRP_IMSIC:475475- return kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);474474+ mutex_lock(&dev->kvm->lock);475475+ r = kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);476476+ mutex_unlock(&dev->kvm->lock);477477+ break;476478 }477479478478- return -ENXIO;480480+ return r;479481}480482481483struct kvm_device_ops kvm_riscv_aia_device_ops = {
+4
arch/riscv/kvm/aia_imsic.c
···908908 int r, rc = KVM_INSN_CONTINUE_NEXT_SEPC;909909 struct imsic *imsic = vcpu->arch.aia_context.imsic_state;910910911911+ /* If IMSIC vCPU state is not initialized then forward to user space */912912+ if (!imsic)913913+ return KVM_INSN_EXIT_TO_USER_SPACE;914914+911915 if (isel == KVM_RISCV_AIA_IMSIC_TOPEI) {912916 /* Read pending and enabled interrupt with highest priority */913917 topei = imsic_mrif_topei(imsic->swfile, imsic->nr_eix,
+5-1
arch/riscv/kvm/mmu.c
···245245bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)246246{247247 struct kvm_gstage gstage;248248+ bool mmu_locked;248249249250 if (!kvm->arch.pgd)250251 return false;···254253 gstage.flags = 0;255254 gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);256255 gstage.pgd = kvm->arch.pgd;256256+ mmu_locked = spin_trylock(&kvm->mmu_lock);257257 kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,258258 (range->end - range->start) << PAGE_SHIFT,259259 range->may_block);260260+ if (mmu_locked)261261+ spin_unlock(&kvm->mmu_lock);260262 return false;261263}262264···539535 goto out_unlock;540536541537 /* Check if we are backed by a THP and thus use block mapping if possible */542542- if (vma_pagesize == PAGE_SIZE)538538+ if (!logging && (vma_pagesize == PAGE_SIZE))543539 vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva, &hfn, &gpa);544540545541 if (writable) {
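The `kvm_unmap_gfn_range()` hunk above takes `mmu_lock` with `spin_trylock()` and releases it only if this call site was the one that acquired it, so the unmap proceeds either way. A userspace sketch of that trylock-guard pattern using POSIX threads (names are illustrative, not from the patch):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int unmapped;

/* Acquire the lock opportunistically; unlock only if we got it,
 * mirroring the mmu_locked/spin_trylock pattern in the hunk. */
static void unmap_range(void)
{
	bool locked = (pthread_mutex_trylock(&lock) == 0);

	unmapped++;	/* the work the lock guards */

	if (locked)
		pthread_mutex_unlock(&lock);
}
```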
···355355 dev->error_state = pci_channel_io_normal;356356 dev->dma_mask = 0xffffffff;357357358358+ /*359359+ * Assume 64-bit addresses for MSI initially. Will be changed to 32-bit360360+ * if MSI (rather than MSI-X) capability does not have361361+ * PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.362362+ */363363+ dev->msi_addr_mask = DMA_BIT_MASK(64);364364+358365 if (of_node_name_eq(node, "pci")) {359366 /* a PCI-PCI bridge */360367 dev->hdr_type = PCI_HEADER_TYPE_BRIDGE;
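The `DMA_BIT_MASK(64)` default above comes from the kernel macro that builds an n-bit mask while special-casing n == 64, since shifting a 64-bit value by 64 is undefined in C. A standalone sketch of its semantics (macro name suffixed to mark it as a local re-statement):

```c
#include <assert.h>

/* Semantics of the kernel's DMA_BIT_MASK(n): a mask of the n low bits,
 * with n == 64 handled separately to avoid an undefined 64-bit shift. */
#define DMA_BIT_MASK_SKETCH(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
```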
···3535#endif3636.endm37373838+/*3939+ * WARNING:4040+ *4141+ * A bug in the libgcc unwinder as of at least gcc 15.2 (2026) means that4242+ * the unwinder fails to recognize the signal frame flag.4343+ *4444+ * There is a hacky legacy fallback path in libgcc which ends up4545+ * getting invoked instead. It happens to work as long as BOTH of the4646+ * following conditions are true:4747+ *4848+ * 1. There is at least one byte before each of the sigreturn4949+ * functions which falls outside any function. This is enforced by5050+ * an explicit nop instruction before the ALIGN.5151+ * 2. The code sequences between the entry point up to and including5252+ * the int $0x80 below need to match EXACTLY. Do not change them5353+ * in any way. The exact byte sequences are:5454+ *5555+ * __kernel_sigreturn:5656+ * 0: 58 pop %eax5757+ * 1: b8 77 00 00 00 mov $0x77,%eax5858+ * 6: cd 80 int $0x805959+ *6060+ * __kernel_rt_sigreturn:6161+ * 0: b8 ad 00 00 00 mov $0xad,%eax6262+ * 5: cd 80 int $0x806363+ *6464+ * For details, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=1240506565+ */3866 .text3967 .globl __kernel_sigreturn4068 .type __kernel_sigreturn,@function6969+ nop /* libgcc hack: see comment above */4170 ALIGN4271__kernel_sigreturn:4372 STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext···81528253 .globl __kernel_rt_sigreturn8354 .type __kernel_rt_sigreturn,@function5555+ nop /* libgcc hack: see comment above */8456 ALIGN8557__kernel_rt_sigreturn:8658 STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
+1-1
arch/x86/include/asm/efi.h
···138138extern int __init efi_reuse_config(u64 tables, int nr_tables);139139extern void efi_delete_dummy_variable(void);140140extern void efi_crash_gracefully_on_page_fault(unsigned long phys_addr);141141-extern void efi_free_boot_services(void);141141+extern void efi_unmap_boot_services(void);142142143143void arch_efi_call_virt_setup(void);144144void arch_efi_call_virt_teardown(void);
···1894189418951895static inline void try_to_enable_x2apic(int remap_mode) { }18961896static inline void __x2apic_enable(void) { }18971897+static inline void __x2apic_disable(void) { }18971898#endif /* !CONFIG_X86_X2APIC */1898189918991900void __init enable_IR_x2apic(void)···24572456 if (x2apic_mode) {24582457 __x2apic_enable();24592458 } else {24592459+ if (x2apic_enabled()) {24602460+ pr_warn_once("x2apic: re-enabled by firmware during resume. Disabling\n");24612461+ __x2apic_disable();24622462+ }24632463+24602464 /*24612465 * Make sure the APICBASE points to the right address24622466 *
+3
arch/x86/kernel/cpu/common.c
···9595unsigned int __max_logical_packages __ro_after_init = 1;9696EXPORT_SYMBOL(__max_logical_packages);97979898+unsigned int __num_nodes_per_package __ro_after_init = 1;9999+EXPORT_SYMBOL(__num_nodes_per_package);100100+98101unsigned int __num_cores_per_package __ro_after_init = 1;99102EXPORT_SYMBOL(__num_cores_per_package);100103
+5-31
arch/x86/kernel/cpu/resctrl/monitor.c
···364364 msr_clear_bit(MSR_RMID_SNC_CONFIG, 0);365365}366366367367-/* CPU models that support MSR_RMID_SNC_CONFIG */367367+/* CPU models that support SNC and MSR_RMID_SNC_CONFIG */368368static const struct x86_cpu_id snc_cpu_ids[] __initconst = {369369 X86_MATCH_VFM(INTEL_ICELAKE_X, 0),370370 X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, 0),···375375 {}376376};377377378378-/*379379- * There isn't a simple hardware bit that indicates whether a CPU is running380380- * in Sub-NUMA Cluster (SNC) mode. Infer the state by comparing the381381- * number of CPUs sharing the L3 cache with CPU0 to the number of CPUs in382382- * the same NUMA node as CPU0.383383- * It is not possible to accurately determine SNC state if the system is384384- * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes385385- * to L3 caches. It will be OK if system is booted with hyperthreading386386- * disabled (since this doesn't affect the ratio).387387- */388378static __init int snc_get_config(void)389379{390390- struct cacheinfo *ci = get_cpu_cacheinfo_level(0, RESCTRL_L3_CACHE);391391- const cpumask_t *node0_cpumask;392392- int cpus_per_node, cpus_per_l3;393393- int ret;380380+ int ret = topology_num_nodes_per_package();394381395395- if (!x86_match_cpu(snc_cpu_ids) || !ci)382382+ if (ret > 1 && !x86_match_cpu(snc_cpu_ids)) {383383+ pr_warn("CoD enabled system? Resctrl not supported\n");396384 return 1;397397-398398- cpus_read_lock();399399- if (num_online_cpus() != num_present_cpus())400400- pr_warn("Some CPUs offline, SNC detection may be incorrect\n");401401- cpus_read_unlock();402402-403403- node0_cpumask = cpumask_of_node(cpu_to_node(0));404404-405405- cpus_per_node = cpumask_weight(node0_cpumask);406406- cpus_per_l3 = cpumask_weight(&ci->shared_cpu_map);407407-408408- if (!cpus_per_node || !cpus_per_l3)409409- return 1;410410-411411- ret = cpus_per_l3 / cpus_per_node;385385+ }412386413387 /* sanity check: Only valid results are 1, 2, 3, 4, 6 */414388 switch (ret) {
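The reworked `snc_get_config()` above now trusts `topology_num_nodes_per_package()` and keeps only the sanity check on the result. A hypothetical helper mirroring that final check, where only 1, 2, 3, 4 or 6 SNC nodes per package are accepted:

```c
#include <assert.h>
#include <stdbool.h>

/* Mirror of the sanity-check switch: valid SNC node counts per
 * package are 1, 2, 3, 4 and 6; anything else is rejected. */
static bool snc_nodes_valid(int nodes)
{
	switch (nodes) {
	case 1: case 2: case 3: case 4: case 6:
		return true;
	default:
		return false;
	}
}
```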
···616616617617 .data618618619619-#if defined(CONFIG_XEN_PV) || defined(CONFIG_PVH)620620-SYM_DATA_START_PTI_ALIGNED(init_top_pgt)621621- .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC622622- .org init_top_pgt + L4_PAGE_OFFSET*8, 0623623- .quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC624624- .org init_top_pgt + L4_START_KERNEL*8, 0625625- /* (2^48-(2*1024*1024*1024))/(2^39) = 511 */626626- .quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC627627- .fill PTI_USER_PGD_FILL,8,0628628-SYM_DATA_END(init_top_pgt)629629-630630-SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt)631631- .quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC632632- .fill 511, 8, 0633633-SYM_DATA_END(level3_ident_pgt)634634-SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt)635635- /*636636- * Since I easily can, map the first 1G.637637- * Don't set NX because code runs from these pages.638638- *639639- * Note: This sets _PAGE_GLOBAL despite whether640640- * the CPU supports it or it is enabled. But,641641- * the CPU should ignore the bit.642642- */643643- PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)644644-SYM_DATA_END(level2_ident_pgt)645645-#else646619SYM_DATA_START_PTI_ALIGNED(init_top_pgt)647620 .fill 512,8,0648621 .fill PTI_USER_PGD_FILL,8,0649622SYM_DATA_END(init_top_pgt)650650-#endif651623652624SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt)653625 .fill 511,8,0
+144-55
arch/x86/kernel/smpboot.c
···468468}469469#endif470470471471-/*472472- * Set if a package/die has multiple NUMA nodes inside.473473- * AMD Magny-Cours, Intel Cluster-on-Die, and Intel474474- * Sub-NUMA Clustering have this.475475- */476476-static bool x86_has_numa_in_package;477477-478471static struct sched_domain_topology_level x86_topology[] = {479472 SDTL_INIT(tl_smt_mask, cpu_smt_flags, SMT),480473#ifdef CONFIG_SCHED_CLUSTER···489496 * PKG domain since the NUMA domains will auto-magically create the490497 * right spanning domains based on the SLIT.491498 */492492- if (x86_has_numa_in_package) {499499+ if (topology_num_nodes_per_package() > 1) {493500 unsigned int pkgdom = ARRAY_SIZE(x86_topology) - 2;494501495502 memset(&x86_topology[pkgdom], 0, sizeof(x86_topology[pkgdom]));···506513}507514508515#ifdef CONFIG_NUMA509509-static int sched_avg_remote_distance;510510-static int avg_remote_numa_distance(void)516516+/*517517+ * Test if the on-trace cluster at (N,N) is symmetric.518518+ * Uses upper triangle iteration to avoid obvious duplicates.519519+ */520520+static bool slit_cluster_symmetric(int N)511521{512512- int i, j;513513- int distance, nr_remote, total_distance;522522+ int u = topology_num_nodes_per_package();514523515515- if (sched_avg_remote_distance > 0)516516- return sched_avg_remote_distance;517517-518518- nr_remote = 0;519519- total_distance = 0;520520- for_each_node_state(i, N_CPU) {521521- for_each_node_state(j, N_CPU) {522522- distance = node_distance(i, j);523523-524524- if (distance >= REMOTE_DISTANCE) {525525- nr_remote++;526526- total_distance += distance;527527- }524524+ for (int k = 0; k < u; k++) {525525+ for (int l = k; l < u; l++) {526526+ if (node_distance(N + k, N + l) !=527527+ node_distance(N + l, N + k))528528+ return false;528529 }529530 }530530- if (nr_remote)531531- sched_avg_remote_distance = total_distance / nr_remote;532532- else533533- sched_avg_remote_distance = REMOTE_DISTANCE;534531535535- return sched_avg_remote_distance;532532+ return 
true;533533+}534534+535535+/*536536+ * Return the package-id of the cluster, or ~0 if indeterminate.537537+ * Each node in the on-trace cluster should have the same package-id.538538+ */539539+static u32 slit_cluster_package(int N)540540+{541541+ int u = topology_num_nodes_per_package();542542+ u32 pkg_id = ~0;543543+544544+ for (int n = 0; n < u; n++) {545545+ const struct cpumask *cpus = cpumask_of_node(N + n);546546+ int cpu;547547+548548+ for_each_cpu(cpu, cpus) {549549+ u32 id = topology_logical_package_id(cpu);550550+551551+ if (pkg_id == ~0)552552+ pkg_id = id;553553+ if (pkg_id != id)554554+ return ~0;555555+ }556556+ }557557+558558+ return pkg_id;559559+}560560+561561+/*562562+ * Validate the SLIT table is of the form expected for SNC, specifically:563563+ *564564+ * - each on-trace cluster should be symmetric,565565+ * - each on-trace cluster should have a unique package-id.566566+ *567567+ * If you NUMA_EMU on top of SNC, you get to keep the pieces.568568+ */569569+static bool slit_validate(void)570570+{571571+ int u = topology_num_nodes_per_package();572572+ u32 pkg_id, prev_pkg_id = ~0;573573+574574+ for (int pkg = 0; pkg < topology_max_packages(); pkg++) {575575+ int n = pkg * u;576576+577577+ /*578578+ * Ensure the on-trace cluster is symmetric and each cluster579579+ * has a different package id.580580+ */581581+ if (!slit_cluster_symmetric(n))582582+ return false;583583+ pkg_id = slit_cluster_package(n);584584+ if (pkg_id == ~0)585585+ return false;586586+ if (pkg && pkg_id == prev_pkg_id)587587+ return false;588588+589589+ prev_pkg_id = pkg_id;590590+ }591591+592592+ return true;593593+}594594+595595+/*596596+ * Compute a sanitized SLIT table for SNC; notably SNC-3 can end up with597597+ * asymmetric off-trace clusters, reflecting physical asymmetries. 
However598598+ * this leads to 'unfortunate' sched_domain configurations.599599+ *600600+ * For example dual socket GNR with SNC-3:601601+ *602602+ * node distances:603603+ * node 0 1 2 3 4 5604604+ * 0: 10 15 17 21 28 26605605+ * 1: 15 10 15 23 26 23606606+ * 2: 17 15 10 26 23 21607607+ * 3: 21 28 26 10 15 17608608+ * 4: 23 26 23 15 10 15609609+ * 5: 26 23 21 17 15 10610610+ *611611+ * Fix things up by averaging out the off-trace clusters; resulting in:612612+ *613613+ * node 0 1 2 3 4 5614614+ * 0: 10 15 17 24 24 24615615+ * 1: 15 10 15 24 24 24616616+ * 2: 17 15 10 24 24 24617617+ * 3: 24 24 24 10 15 17618618+ * 4: 24 24 24 15 10 15619619+ * 5: 24 24 24 17 15 10620620+ */621621+static int slit_cluster_distance(int i, int j)622622+{623623+ static int slit_valid = -1;624624+ int u = topology_num_nodes_per_package();625625+ long d = 0;626626+ int x, y;627627+628628+ if (slit_valid < 0) {629629+ slit_valid = slit_validate();630630+ if (!slit_valid)631631+ pr_err(FW_BUG "SLIT table doesn't have the expected form for SNC -- fixup disabled!\n");632632+ else633633+ pr_info("Fixing up SNC SLIT table.\n");634634+ }635635+636636+ /*637637+ * Is this a unit cluster on the trace?638638+ */639639+ if ((i / u) == (j / u) || !slit_valid)640640+ return node_distance(i, j);641641+642642+ /*643643+ * Off-trace cluster.644644+ *645645+ * Notably average out the symmetric pair of off-trace clusters to646646+ * ensure the resulting SLIT table is symmetric.647647+ */648648+ x = i - (i % u);649649+ y = j - (j % u);650650+651651+ for (i = x; i < x + u; i++) {652652+ for (j = y; j < y + u; j++) {653653+ d += node_distance(i, j);654654+ d += node_distance(j, i);655655+ }656656+ }657657+658658+ return d / (2*u*u);536659}537660538661int arch_sched_node_distance(int from, int to)···658549 switch (boot_cpu_data.x86_vfm) {659550 case INTEL_GRANITERAPIDS_X:660551 case INTEL_ATOM_DARKMONT_X:661661-662662- if (!x86_has_numa_in_package || topology_max_packages() == 1 ||663663- d < 
REMOTE_DISTANCE)552552+ if (topology_max_packages() == 1 ||553553+ topology_num_nodes_per_package() < 3)664554 return d;665555666556 /*667667- * With SNC enabled, there could be too many levels of remote668668- * NUMA node distances, creating NUMA domain levels669669- * including local nodes and partial remote nodes.670670- *671671- * Trim finer distance tuning for NUMA nodes in remote package672672- * for the purpose of building sched domains. Group NUMA nodes673673- * in the remote package in the same sched group.674674- * Simplify NUMA domains and avoid extra NUMA levels including675675- * different remote NUMA nodes and local nodes.676676- *677677- * GNR and CWF don't expect systems with more than 2 packages678678- * and more than 2 hops between packages. Single average remote679679- * distance won't be appropriate if there are more than 2680680- * packages as average distance to different remote packages681681- * could be different.557557+ * Handle SNC-3 asymmetries.682558 */683683- WARN_ONCE(topology_max_packages() > 2,684684- "sched: Expect only up to 2 packages for GNR or CWF, "685685- "but saw %d packages when building sched domains.",686686- topology_max_packages());687687-688688- d = avg_remote_numa_distance();559559+ return slit_cluster_distance(from, to);689560 }690561 return d;691562}···695606 o = &cpu_data(i);696607697608 if (match_pkg(c, o) && !topology_same_node(c, o))698698- x86_has_numa_in_package = true;609609+ WARN_ON_ONCE(topology_num_nodes_per_package() == 1);699610700611 if ((i == cpu) || (has_smt && match_smt(c, o)))701612 link_mask(topology_sibling_cpumask, cpu, i);
···836836 }837837838838 efi_check_for_embedded_firmwares();839839- efi_free_boot_services();839839+ efi_unmap_boot_services();840840841841 if (!efi_is_mixed())842842 efi_native_runtime_setup();
+52-3
arch/x86/platform/efi/quirks.c
···341341342342 /*343343 * Because the following memblock_reserve() is paired344344- * with memblock_free_late() for this region in344344+ * with free_reserved_area() for this region in345345 * efi_free_boot_services(), we must be extremely346346 * careful not to reserve, and subsequently free,347347 * critical regions of memory (like the kernel image) or···404404 pr_err("Failed to unmap VA mapping for 0x%llx\n", va);405405}406406407407-void __init efi_free_boot_services(void)407407+struct efi_freeable_range {408408+ u64 start;409409+ u64 end;410410+};411411+412412+static struct efi_freeable_range *ranges_to_free;413413+414414+void __init efi_unmap_boot_services(void)408415{409416 struct efi_memory_map_data data = { 0 };410417 efi_memory_desc_t *md;411418 int num_entries = 0;419419+ int idx = 0;420420+ size_t sz;412421 void *new, *new_md;413422414423 /* Keep all regions for /sys/kernel/debug/efi */415424 if (efi_enabled(EFI_DBG))416425 return;426426+427427+ sz = sizeof(*ranges_to_free) * (efi.memmap.nr_map + 1);428428+ ranges_to_free = kzalloc(sz, GFP_KERNEL);429429+ if (!ranges_to_free) {430430+ pr_err("Failed to allocate storage for freeable EFI regions\n");431431+ return;432432+ }417433418434 for_each_efi_memory_desc(md) {419435 unsigned long long start = md->phys_addr;···487471 start = SZ_1M;488472 }489473490490- memblock_free_late(start, size);474474+ /*475475+ * With CONFIG_DEFERRED_STRUCT_PAGE_INIT parts of the memory476476+ * map are still not initialized and we can't reliably free477477+ * memory here.478478+ * Queue the ranges to free at a later point.479479+ */480480+ ranges_to_free[idx].start = start;481481+ ranges_to_free[idx].end = start + size;482482+ idx++;491483 }492484493485 if (!num_entries)···535511 return;536512 }537513}514514+515515+static int __init efi_free_boot_services(void)516516+{517517+ struct efi_freeable_range *range = ranges_to_free;518518+ unsigned long freed = 0;519519+520520+ if (!ranges_to_free)521521+ return 0;522522+523523+ while 
(range->start) {524524+ void *start = phys_to_virt(range->start);525525+ void *end = phys_to_virt(range->end);526526+527527+ free_reserved_area(start, end, -1, NULL);528528+ freed += (end - start);529529+ range++;530530+ }531531+ kfree(ranges_to_free);532532+533533+ if (freed)534534+ pr_info("Freeing EFI boot services memory: %ldK\n", freed / SZ_1K);535535+536536+ return 0;537537+}538538+arch_initcall(efi_free_boot_services);538539539540/*540541 * A number of config table entries get remapped to virtual addresses
+1-6
arch/x86/platform/pvh/enlighten.c
···25252626const unsigned int __initconst pvh_start_info_sz = sizeof(pvh_start_info);27272828-static u64 __init pvh_get_root_pointer(void)2929-{3030- return pvh_start_info.rsdp_paddr;3131-}3232-3328/*3429 * Xen guests are able to obtain the memory map from the hypervisor via the3530 * HYPERVISOR_memory_op hypercall.···9095 pvh_bootparams.hdr.version = (2 << 8) | 12;9196 pvh_bootparams.hdr.type_of_loader = ((xen_guest ? 0x9 : 0xb) << 4) | 0;92979393- x86_init.acpi.get_root_pointer = pvh_get_root_pointer;9898+ pvh_bootparams.acpi_rsdp_addr = pvh_start_info.rsdp_paddr;9499}9510096101/*
+1-1
arch/x86/xen/enlighten_pv.c
···392392393393 /*394394 * Xen PV would need some work to support PCID: CR3 handling as well395395- * as xen_flush_tlb_others() would need updating.395395+ * as xen_flush_tlb_multi() would need updating.396396 */397397 setup_clear_cpu_cap(X86_FEATURE_PCID);398398
+9
arch/x86/xen/mmu_pv.c
···105105static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;106106#endif107107108108+static pud_t level3_ident_pgt[PTRS_PER_PUD] __page_aligned_bss;109109+static pmd_t level2_ident_pgt[PTRS_PER_PMD] __page_aligned_bss;110110+108111/*109112 * Protects atomic reservation decrease/increase against concurrent increases.110113 * Also protects non-atomic updates of current_pages and balloon lists.···1779177617801777 /* Zap identity mapping */17811778 init_top_pgt[0] = __pgd(0);17791779+17801780+ init_top_pgt[pgd_index(__PAGE_OFFSET_BASE_L4)].pgd =17811781+ __pa_symbol(level3_ident_pgt) + _KERNPG_TABLE_NOENC;17821782+ init_top_pgt[pgd_index(__START_KERNEL_map)].pgd =17831783+ __pa_symbol(level3_kernel_pgt) + _PAGE_TABLE_NOENC;17841784+ level3_ident_pgt[0].pud = __pa_symbol(level2_ident_pgt) + _KERNPG_TABLE_NOENC;1782178517831786 /* Pre-constructed entries are in pfn, so convert to mfn */17841787 /* L4[273] -> level3_ident_pgt */
+1-2
block/blk-map.c
···398398 if (op_is_write(op))399399 memcpy(page_address(page), p, bytes);400400401401- if (bio_add_page(bio, page, bytes, 0) < bytes)402402- break;401401+ __bio_add_page(bio, page, bytes, 0);403402404403 len -= bytes;405404 p += bytes;
+30-15
block/blk-mq.c
···47934793 }47944794}4795479547964796-static int blk_mq_realloc_tag_set_tags(struct blk_mq_tag_set *set,47974797- int new_nr_hw_queues)47964796+static struct blk_mq_tags **blk_mq_prealloc_tag_set_tags(47974797+ struct blk_mq_tag_set *set,47984798+ int new_nr_hw_queues)47984799{47994800 struct blk_mq_tags **new_tags;48004801 int i;4801480248024803 if (set->nr_hw_queues >= new_nr_hw_queues)48034803- goto done;48044804+ return NULL;4804480548054806 new_tags = kcalloc_node(new_nr_hw_queues, sizeof(struct blk_mq_tags *),48064807 GFP_KERNEL, set->numa_node);48074808 if (!new_tags)48084808- return -ENOMEM;48094809+ return ERR_PTR(-ENOMEM);4809481048104811 if (set->tags)48114812 memcpy(new_tags, set->tags, set->nr_hw_queues *48124813 sizeof(*set->tags));48134813- kfree(set->tags);48144814- set->tags = new_tags;4815481448164815 for (i = set->nr_hw_queues; i < new_nr_hw_queues; i++) {48174817- if (!__blk_mq_alloc_map_and_rqs(set, i)) {48184818- while (--i >= set->nr_hw_queues)48194819- __blk_mq_free_map_and_rqs(set, i);48204820- return -ENOMEM;48164816+ if (blk_mq_is_shared_tags(set->flags)) {48174817+ new_tags[i] = set->shared_tags;48184818+ } else {48194819+ new_tags[i] = blk_mq_alloc_map_and_rqs(set, i,48204820+ set->queue_depth);48214821+ if (!new_tags[i])48224822+ goto out_unwind;48214823 }48224824 cond_resched();48234825 }4824482648254825-done:48264826- set->nr_hw_queues = new_nr_hw_queues;48274827- return 0;48274827+ return new_tags;48284828+out_unwind:48294829+ while (--i >= set->nr_hw_queues) {48304830+ if (!blk_mq_is_shared_tags(set->flags))48314831+ blk_mq_free_map_and_rqs(set, new_tags[i], i);48324832+ }48334833+ kfree(new_tags);48344834+ return ERR_PTR(-ENOMEM);48284835}4829483648304837/*···51205113 unsigned int memflags;51215114 int i;51225115 struct xarray elv_tbl;51165116+ struct blk_mq_tags **new_tags;51235117 bool queues_frozen = false;5124511851255119 lockdep_assert_held(&set->tag_list_lock);···51555147 if (blk_mq_elv_switch_none(q, &elv_tbl))51565148 
goto switch_back;5157514951505150+ new_tags = blk_mq_prealloc_tag_set_tags(set, nr_hw_queues);51515151+ if (IS_ERR(new_tags))51525152+ goto switch_back;51535153+51585154 list_for_each_entry(q, &set->tag_list, tag_set_list)51595155 blk_mq_freeze_queue_nomemsave(q);51605156 queues_frozen = true;51615161- if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)51625162- goto switch_back;51575157+ if (new_tags) {51585158+ kfree(set->tags);51595159+ set->tags = new_tags;51605160+ }51615161+ set->nr_hw_queues = nr_hw_queues;5163516251645163fallback:51655164 blk_mq_update_queue_map(set);
+7-1
block/blk-sysfs.c
···7878 /*7979 * Serialize updating nr_requests with concurrent queue_requests_store()8080 * and switching elevator.8181+ *8282+ * Use trylock to avoid circular lock dependency with kernfs active8383+ * reference during concurrent disk deletion:8484+ * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del)8585+ * kn->active -> update_nr_hwq_lock (via this sysfs write path)8186 */8282- down_write(&set->update_nr_hwq_lock);8787+ if (!down_write_trylock(&set->update_nr_hwq_lock))8888+ return -EBUSY;83898490 if (nr == q->nr_requests)8591 goto unlock;
+11-1
block/elevator.c
···807807 elv_iosched_load_module(ctx.name);808808 ctx.type = elevator_find_get(ctx.name);809809810810- down_read(&set->update_nr_hwq_lock);810810+ /*811811+ * Use trylock to avoid circular lock dependency with kernfs active812812+ * reference during concurrent disk deletion:813813+ * update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del)814814+ * kn->active -> update_nr_hwq_lock (via this sysfs write path)815815+ */816816+ if (!down_read_trylock(&set->update_nr_hwq_lock)) {817817+ ret = -EBUSY;818818+ goto out;819819+ }811820 if (!blk_queue_no_elv_switch(q)) {812821 ret = elevator_change(q, &ctx);813822 if (!ret)···826817 }827818 up_read(&set->update_nr_hwq_lock);828819820820+out:829821 if (ctx.type)830822 elevator_put(ctx.type);831823 return ret;
+10-27
drivers/accel/amdxdna/aie2_ctx.c
···165165166166 trace_xdna_job(&job->base, job->hwctx->name, "signaled fence", job->seq);167167168168- amdxdna_pm_suspend_put(job->hwctx->client->xdna);169168 job->hwctx->priv->completed++;170169 dma_fence_signal(fence);171170···185186 cmd_abo = job->cmd_bo;186187187188 if (unlikely(job->job_timeout)) {188188- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT);189189+ amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT);189190 ret = -EINVAL;190191 goto out;191192 }192193193194 if (unlikely(!data) || unlikely(size != sizeof(u32))) {194194- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT);195195+ amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT);195196 ret = -EINVAL;196197 goto out;197198 }···201202 if (status == AIE2_STATUS_SUCCESS)202203 amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_COMPLETED);203204 else204204- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR);205205+ amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ERROR);205206206207out:207208 aie2_sched_notify(job);···243244 cmd_abo = job->cmd_bo;244245245246 if (unlikely(job->job_timeout)) {246246- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT);247247+ amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT);247248 ret = -EINVAL;248249 goto out;249250 }250251251252 if (unlikely(!data) || unlikely(size != sizeof(u32) * 3)) {252252- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT);253253+ amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT);253254 ret = -EINVAL;254255 goto out;255256 }···269270 fail_cmd_idx, fail_cmd_status);270271271272 if (fail_cmd_status == AIE2_STATUS_SUCCESS) {272272- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT);273273+ amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ABORT);273274 ret = -EINVAL;274274- goto out;275275+ } else {276276+ amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ERROR);275277 }276276- amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR);277278278278- if (amdxdna_cmd_get_op(cmd_abo) 
== ERT_CMD_CHAIN) {279279- struct amdxdna_cmd_chain *cc = amdxdna_cmd_get_payload(cmd_abo, NULL);280280-281281- cc->error_index = fail_cmd_idx;282282- if (cc->error_index >= cc->command_count)283283- cc->error_index = 0;284284- }285279out:286280 aie2_sched_notify(job);287281 return ret;···289297 struct dma_fence *fence;290298 int ret;291299292292- ret = amdxdna_pm_resume_get(hwctx->client->xdna);293293- if (ret)300300+ if (!hwctx->priv->mbox_chann)294301 return NULL;295302296296- if (!hwctx->priv->mbox_chann) {297297- amdxdna_pm_suspend_put(hwctx->client->xdna);298298- return NULL;299299- }300300-301301- if (!mmget_not_zero(job->mm)) {302302- amdxdna_pm_suspend_put(hwctx->client->xdna);303303+ if (!mmget_not_zero(job->mm))303304 return ERR_PTR(-ESRCH);304304- }305305306306 kref_get(&job->refcnt);307307 fence = dma_fence_get(job->fence);···324340325341out:326342 if (ret) {327327- amdxdna_pm_suspend_put(hwctx->client->xdna);328343 dma_fence_put(job->fence);329344 aie2_job_put(job);330345 mmput(job->mm);
+28-8
drivers/accel/amdxdna/aie2_message.c
···4040 return -ENODEV;41414242 ret = xdna_send_msg_wait(xdna, ndev->mgmt_chann, msg);4343- if (ret == -ETIME) {4444- xdna_mailbox_stop_channel(ndev->mgmt_chann);4545- xdna_mailbox_destroy_channel(ndev->mgmt_chann);4646- ndev->mgmt_chann = NULL;4747- }4343+ if (ret == -ETIME)4444+ aie2_destroy_mgmt_chann(ndev);48454946 if (!ret && *hdl->status != AIE2_STATUS_SUCCESS) {5047 XDNA_ERR(xdna, "command opcode 0x%x failed, status 0x%x",···293296 }294297295298 intr_reg = i2x.mb_head_ptr_reg + 4;296296- hwctx->priv->mbox_chann = xdna_mailbox_create_channel(ndev->mbox, &x2i, &i2x,297297- intr_reg, ret);299299+ hwctx->priv->mbox_chann = xdna_mailbox_alloc_channel(ndev->mbox);298300 if (!hwctx->priv->mbox_chann) {299301 XDNA_ERR(xdna, "Not able to create channel");300302 ret = -EINVAL;301303 goto del_ctx_req;304304+ }305305+306306+ ret = xdna_mailbox_start_channel(hwctx->priv->mbox_chann, &x2i, &i2x,307307+ intr_reg, ret);308308+ if (ret) {309309+ XDNA_ERR(xdna, "Not able to start channel");310310+ ret = -EINVAL;311311+ goto free_channel;302312 }303313 ndev->hwctx_num++;304314···314310315311 return 0;316312313313+free_channel:314314+ xdna_mailbox_free_channel(hwctx->priv->mbox_chann);317315del_ctx_req:318316 aie2_destroy_context_req(ndev, hwctx->fw_ctx_id);319317 return ret;···331325332326 xdna_mailbox_stop_channel(hwctx->priv->mbox_chann);333327 ret = aie2_destroy_context_req(ndev, hwctx->fw_ctx_id);334334- xdna_mailbox_destroy_channel(hwctx->priv->mbox_chann);328328+ xdna_mailbox_free_channel(hwctx->priv->mbox_chann);335329 XDNA_DBG(xdna, "Destroyed fw ctx %d", hwctx->fw_ctx_id);336330 hwctx->priv->mbox_chann = NULL;337331 hwctx->fw_ctx_id = -1;···918912 ndev->exec_msg_ops = &npu_exec_message_ops;919913 else920914 ndev->exec_msg_ops = &legacy_exec_message_ops;915915+}916916+917917+void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev)918918+{919919+ struct amdxdna_dev *xdna = ndev->xdna;920920+921921+ drm_WARN_ON(&xdna->ddev, 
!mutex_is_locked(&xdna->dev_lock));922922+923923+ if (!ndev->mgmt_chann)924924+ return;925925+926926+ xdna_mailbox_stop_channel(ndev->mgmt_chann);927927+ xdna_mailbox_free_channel(ndev->mgmt_chann);928928+ ndev->mgmt_chann = NULL;921929}922930923931static inline struct amdxdna_gem_obj *
+37-29
drivers/accel/amdxdna/aie2_pci.c
···330330331331 aie2_runtime_cfg(ndev, AIE2_RT_CFG_CLK_GATING, NULL);332332 aie2_mgmt_fw_fini(ndev);333333- xdna_mailbox_stop_channel(ndev->mgmt_chann);334334- xdna_mailbox_destroy_channel(ndev->mgmt_chann);335335- ndev->mgmt_chann = NULL;333333+ aie2_destroy_mgmt_chann(ndev);336334 drmm_kfree(&xdna->ddev, ndev->mbox);337335 ndev->mbox = NULL;338336 aie2_psp_stop(ndev->psp_hdl);···361363 }362364 pci_set_master(pdev);363365366366+ mbox_res.ringbuf_base = ndev->sram_base;367367+ mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar);368368+ mbox_res.mbox_base = ndev->mbox_base;369369+ mbox_res.mbox_size = MBOX_SIZE(ndev);370370+ mbox_res.name = "xdna_mailbox";371371+ ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res);372372+ if (!ndev->mbox) {373373+ XDNA_ERR(xdna, "failed to create mailbox device");374374+ ret = -ENODEV;375375+ goto disable_dev;376376+ }377377+378378+ ndev->mgmt_chann = xdna_mailbox_alloc_channel(ndev->mbox);379379+ if (!ndev->mgmt_chann) {380380+ XDNA_ERR(xdna, "failed to alloc channel");381381+ ret = -ENODEV;382382+ goto disable_dev;383383+ }384384+364385 ret = aie2_smu_init(ndev);365386 if (ret) {366387 XDNA_ERR(xdna, "failed to init smu, ret %d", ret);367367- goto disable_dev;388388+ goto free_channel;368389 }369390370391 ret = aie2_psp_start(ndev->psp_hdl);···398381 goto stop_psp;399382 }400383401401- mbox_res.ringbuf_base = ndev->sram_base;402402- mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar);403403- mbox_res.mbox_base = ndev->mbox_base;404404- mbox_res.mbox_size = MBOX_SIZE(ndev);405405- mbox_res.name = "xdna_mailbox";406406- ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res);407407- if (!ndev->mbox) {408408- XDNA_ERR(xdna, "failed to create mailbox device");409409- ret = -ENODEV;410410- goto stop_psp;411411- }412412-413384 mgmt_mb_irq = pci_irq_vector(pdev, ndev->mgmt_chan_idx);414385 if (mgmt_mb_irq < 0) {415386 ret = mgmt_mb_irq;···406401 }407402408403 xdna_mailbox_intr_reg = 
ndev->mgmt_i2x.mb_head_ptr_reg + 4;409409- ndev->mgmt_chann = xdna_mailbox_create_channel(ndev->mbox,410410- &ndev->mgmt_x2i,411411- &ndev->mgmt_i2x,412412- xdna_mailbox_intr_reg,413413- mgmt_mb_irq);414414- if (!ndev->mgmt_chann) {415415- XDNA_ERR(xdna, "failed to create management mailbox channel");404404+ ret = xdna_mailbox_start_channel(ndev->mgmt_chann,405405+ &ndev->mgmt_x2i,406406+ &ndev->mgmt_i2x,407407+ xdna_mailbox_intr_reg,408408+ mgmt_mb_irq);409409+ if (ret) {410410+ XDNA_ERR(xdna, "failed to start management mailbox channel");416411 ret = -EINVAL;417412 goto stop_psp;418413 }···420415 ret = aie2_mgmt_fw_init(ndev);421416 if (ret) {422417 XDNA_ERR(xdna, "initial mgmt firmware failed, ret %d", ret);423423- goto destroy_mgmt_chann;418418+ goto stop_fw;424419 }425420426421 ret = aie2_pm_init(ndev);427422 if (ret) {428423 XDNA_ERR(xdna, "failed to init pm, ret %d", ret);429429- goto destroy_mgmt_chann;424424+ goto stop_fw;430425 }431426432427 ret = aie2_mgmt_fw_query(ndev);433428 if (ret) {434429 XDNA_ERR(xdna, "failed to query fw, ret %d", ret);435435- goto destroy_mgmt_chann;430430+ goto stop_fw;436431 }437432438433 ret = aie2_error_async_events_alloc(ndev);439434 if (ret) {440435 XDNA_ERR(xdna, "Allocate async events failed, ret %d", ret);441441- goto destroy_mgmt_chann;436436+ goto stop_fw;442437 }443438444439 ndev->dev_status = AIE2_DEV_START;445440446441 return 0;447442448448-destroy_mgmt_chann:443443+stop_fw:444444+ aie2_suspend_fw(ndev);449445 xdna_mailbox_stop_channel(ndev->mgmt_chann);450450- xdna_mailbox_destroy_channel(ndev->mgmt_chann);451446stop_psp:452447 aie2_psp_stop(ndev->psp_hdl);453448fini_smu:454449 aie2_smu_fini(ndev);450450+free_channel:451451+ xdna_mailbox_free_channel(ndev->mgmt_chann);452452+ ndev->mgmt_chann = NULL;455453disable_dev:456454 pci_disable_device(pdev);457455
···16811681 * Use acpi_os_map_generic_address to pre-map the reset16821682 * register if it's in system memory.16831683 */16841684- void *rv;16841684+ void __iomem *rv;1685168516861686 rv = acpi_os_map_generic_address(&acpi_gbl_FADT.reset_register);16871687 pr_debug("%s: Reset register mapping %s\n", __func__,
···142142 _pin: PhantomPinned,143143}144144145145+// We do not define any ops. For now, used only to check identity of vmas.146146+static BINDER_VM_OPS: bindings::vm_operations_struct = pin_init::zeroed();147147+148148+// To ensure that we do not accidentally install pages into or zap pages from the wrong vma, we149149+// check its vm_ops and private data before using it.150150+fn check_vma(vma: &virt::VmaRef, owner: *const ShrinkablePageRange) -> Option<&virt::VmaMixedMap> {151151+ // SAFETY: Just reading the vm_ops pointer of any active vma is safe.152152+ let vm_ops = unsafe { (*vma.as_ptr()).vm_ops };153153+ if !ptr::eq(vm_ops, &BINDER_VM_OPS) {154154+ return None;155155+ }156156+157157+ // SAFETY: Reading the vm_private_data pointer of a binder-owned vma is safe.158158+ let vm_private_data = unsafe { (*vma.as_ptr()).vm_private_data };159159+ // The ShrinkablePageRange is only dropped when the Process is dropped, which only happens once160160+ // the file's ->release handler is invoked, which means the ShrinkablePageRange outlives any161161+ // VMA associated with it, so there can't be any false positives due to pointer reuse here.162162+ if !ptr::eq(vm_private_data, owner.cast()) {163163+ return None;164164+ }165165+166166+ vma.as_mixedmap_vma()167167+}168168+145169struct Inner {146170 /// Array of pages.147171 ///···332308 inner.size = num_pages;333309 inner.vma_addr = vma.start();334310311311+ // This pointer is only used for comparison - it's not dereferenced.312312+ //313313+ // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on314314+ // `vm_private_data`.315315+ unsafe {316316+ (*vma.as_ptr()).vm_private_data = ptr::from_ref(self).cast_mut().cast::<c_void>()317317+ };318318+319319+ // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on320320+ // `vm_ops`.321321+ unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS };322322+335323 Ok(num_pages)336324 }337325···435399 //436400 // Using `mmput_async` avoids this, 
because then the `mm` cleanup is instead queued to a437401 // workqueue.438438- MmWithUser::into_mmput_async(self.mm.mmget_not_zero().ok_or(ESRCH)?)439439- .mmap_read_lock()440440- .vma_lookup(vma_addr)441441- .ok_or(ESRCH)?442442- .as_mixedmap_vma()443443- .ok_or(ESRCH)?444444- .vm_insert_page(user_page_addr, &new_page)445445- .inspect_err(|err| {446446- pr_warn!(447447- "Failed to vm_insert_page({}): vma_addr:{} i:{} err:{:?}",448448- user_page_addr,449449- vma_addr,450450- i,451451- err452452- )453453- })?;402402+ let mm = MmWithUser::into_mmput_async(self.mm.mmget_not_zero().ok_or(ESRCH)?);403403+ {404404+ let vma_read;405405+ let mmap_read;406406+ let vma = if let Some(ret) = mm.lock_vma_under_rcu(vma_addr) {407407+ vma_read = ret;408408+ check_vma(&vma_read, self)409409+ } else {410410+ mmap_read = mm.mmap_read_lock();411411+ mmap_read412412+ .vma_lookup(vma_addr)413413+ .and_then(|vma| check_vma(vma, self))414414+ };415415+416416+ match vma {417417+ Some(vma) => vma.vm_insert_page(user_page_addr, &new_page)?,418418+ None => return Err(ESRCH),419419+ }420420+ }454421455422 let inner = self.lock.lock();456423···706667 let mmap_read;707668 let mm_mutex;708669 let vma_addr;670670+ let range_ptr;709671710672 {711673 // CAST: The `list_head` field is first in `PageInfo`.712674 let info = item as *mut PageInfo;713675 // SAFETY: The `range` field of `PageInfo` is immutable.714714- let range = unsafe { &*((*info).range) };676676+ range_ptr = unsafe { (*info).range };677677+ // SAFETY: The `range` outlives its `PageInfo` values.678678+ let range = unsafe { &*range_ptr };715679716680 mm = match range.mm.mmget_not_zero() {717681 Some(mm) => MmWithUser::into_mmput_async(mm),···759717 // SAFETY: The lru lock is locked when this method is called.760718 unsafe { bindings::spin_unlock(&raw mut (*lru).lock) };761719762762- if let Some(vma) = mmap_read.vma_lookup(vma_addr) {763763- let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);764764- 
vma.zap_page_range_single(user_page_addr, PAGE_SIZE);720720+ if let Some(unchecked_vma) = mmap_read.vma_lookup(vma_addr) {721721+ if let Some(vma) = check_vma(unchecked_vma, range_ptr) {722722+ let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);723723+ vma.zap_page_range_single(user_page_addr, PAGE_SIZE);724724+ }765725 }766726767727 drop(mmap_read);
+2-1
drivers/android/binder/process.rs
···12951295 }1296129612971297 pub(crate) fn dead_binder_done(&self, cookie: u64, thread: &Thread) {12981298- if let Some(death) = self.inner.lock().pull_delivered_death(cookie) {12981298+ let death = self.inner.lock().pull_delivered_death(cookie);12991299+ if let Some(death) = death {12991300 death.set_notification_done(thread);13001301 }13011302 }
+33-2
drivers/android/binder/range_alloc/array.rs
···118118 size: usize,119119 is_oneway: bool,120120 pid: Pid,121121- ) -> Result<usize> {121121+ ) -> Result<(usize, bool)> {122122 // Compute new value of free_oneway_space, which is set only on success.123123 let new_oneway_space = if is_oneway {124124 match self.free_oneway_space.checked_sub(size) {···146146 .ok()147147 .unwrap();148148149149- Ok(insert_at_offset)149149+ // Start detecting spammers once we have less than 20%150150+ // of async space left (which is less than 10% of total151151+ // buffer size).152152+ //153153+ // (This will short-circuit, so `low_oneway_space` is154154+ // only called when necessary.)155155+ let oneway_spam_detected =156156+ is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);157157+158158+ Ok((insert_at_offset, oneway_spam_detected))159159+ }160160+161161+ /// Find the amount and size of buffers allocated by the current caller.162162+ ///163163+ /// The idea is that once we cross the threshold, whoever is responsible164164+ /// for the low async space is likely to try to send another async transaction,165165+ /// and at some point we'll catch them in the act. This is more efficient166166+ /// than keeping a map per pid.167167+ fn low_oneway_space(&self, calling_pid: Pid) -> bool {168168+ let mut total_alloc_size = 0;169169+ let mut num_buffers = 0;170170+171171+ // Warn if this pid has more than 50 transactions, or more than 50% of172172+ // async space (which is 25% of total buffer size). Oneway spam is only173173+ // detected when the threshold is exceeded.174174+ for range in &self.ranges {175175+ if range.state.is_oneway() && range.state.pid() == calling_pid {176176+ total_alloc_size += range.size;177177+ num_buffers += 1;178178+ }179179+ }180180+ num_buffers > 50 || total_alloc_size > self.size / 4150181 }151182152183 pub(crate) fn reservation_abort(&mut self, offset: usize) -> Result<FreedRange> {
···164164 self.free_oneway_space165165 };166166167167- // Start detecting spammers once we have less than 20%168168- // of async space left (which is less than 10% of total169169- // buffer size).170170- //171171- // (This will short-circut, so `low_oneway_space` is172172- // only called when necessary.)173173- let oneway_spam_detected =174174- is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);175175-176167 let (found_size, found_off, tree_node, free_tree_node) = match self.find_best_match(size) {177168 None => {178169 pr_warn!("ENOSPC from range_alloc.reserve_new - size: {}", size);···193202 self.tree.insert(tree_node);194203 self.free_tree.insert(free_tree_node);195204 }205205+206206+ // Start detecting spammers once we have less than 20%207207+ // of async space left (which is less than 10% of total208208+ // buffer size).209209+ //210210+ // (This will short-circuit, so `low_oneway_space` is211211+ // only called when necessary.)212212+ let oneway_spam_detected =213213+ is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);196214197215 Ok((found_off, oneway_spam_detected))198216 }
+6-11
drivers/android/binder/thread.rs
···1015101510161016 // Copy offsets if there are any.10171017 if offsets_size > 0 {10181018- {10191019- let mut reader =10201020- UserSlice::new(UserPtr::from_addr(trd_data_ptr.offsets as _), offsets_size)10211021- .reader();10221022- alloc.copy_into(&mut reader, aligned_data_size, offsets_size)?;10231023- }10181018+ let mut offsets_reader =10191019+ UserSlice::new(UserPtr::from_addr(trd_data_ptr.offsets as _), offsets_size)10201020+ .reader();1024102110251022 let offsets_start = aligned_data_size;10261023 let offsets_end = aligned_data_size + offsets_size;···10381041 .step_by(size_of::<u64>())10391042 .enumerate()10401043 {10411041- let offset: usize = view10421042- .alloc10431043- .read::<u64>(index_offset)?10441044- .try_into()10451045- .map_err(|_| EINVAL)?;10441044+ let offset = offsets_reader.read::<u64>()?;10451045+ view.alloc.write(index_offset, &offset)?;10461046+ let offset: usize = offset.try_into().map_err(|_| EINVAL)?;1046104710471048 if offset < end_of_previous_object || !is_aligned(offset, size_of::<u32>()) {10481049 pr_warn!("Got transaction with invalid offset.");
+2
drivers/ata/libata-core.c
···41894189 ATA_QUIRK_FIRMWARE_WARN },4190419041914191 /* Seagate disks with LPM issues */41924192+ { "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM },41924193 { "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM },4193419441944195 /* drives which fail FPDMA_AA activation (some may freeze afterwards)···42324231 /* Devices that do not need bridging limits applied */42334232 { "MTRON MSP-SATA*", NULL, ATA_QUIRK_BRIDGE_OK },42344233 { "BUFFALO HD-QSU2/R5", NULL, ATA_QUIRK_BRIDGE_OK },42344234+ { "QEMU HARDDISK", "2.5+", ATA_QUIRK_BRIDGE_OK },4235423542364236 /* Devices which aren't very happy with higher link speeds */42374237 { "WD My Book", NULL, ATA_QUIRK_1_5_GBPS },
+2-1
drivers/ata/libata-eh.c
···647647 break;648648 }649649650650- if (qc == ap->deferred_qc) {650650+ if (i < ATA_MAX_QUEUE && qc == ap->deferred_qc) {651651 /*652652 * This is a deferred command that timed out while653653 * waiting for the command queue to drain. Since the qc···659659 */660660 WARN_ON_ONCE(qc->flags & ATA_QCFLAG_ACTIVE);661661 ap->deferred_qc = NULL;662662+ cancel_work(&ap->deferred_qc_work);662663 set_host_byte(scmd, DID_TIME_OUT);663664 scsi_eh_finish_cmd(scmd, &ap->eh_done_q);664665 } else if (i < ATA_MAX_QUEUE) {
+1-1
drivers/base/dd.c
···928928 bool async_allowed;929929 int ret;930930931931- ret = driver_match_device_locked(drv, dev);931931+ ret = driver_match_device(drv, dev);932932 if (ret == 0) {933933 /* no match */934934 return 0;
+12-4
drivers/block/ublk_drv.c
···4443444344444444 /* Skip partition scan if disabled by user */44454445 if (ub->dev_info.flags & UBLK_F_NO_AUTO_PART_SCAN) {44464446- clear_bit(GD_SUPPRESS_PART_SCAN, &disk->state);44464446+ /* Not clear for unprivileged daemons, see comment above */44474447+ if (!ub->unprivileged_daemons)44484448+ clear_bit(GD_SUPPRESS_PART_SCAN, &disk->state);44474449 } else {44484450 /* Schedule async partition scan for trusted daemons */44494451 if (!ub->unprivileged_daemons)···50085006 return 0;50095007}5010500850115011-static void ublk_ctrl_set_size(struct ublk_device *ub, const struct ublksrv_ctrl_cmd *header)50095009+static int ublk_ctrl_set_size(struct ublk_device *ub, const struct ublksrv_ctrl_cmd *header)50125010{50135011 struct ublk_param_basic *p = &ub->params.basic;50145012 u64 new_size = header->data[0];50135013+ int ret = 0;5015501450165015 mutex_lock(&ub->mutex);50165016+ if (!ub->ub_disk) {50175017+ ret = -ENODEV;50185018+ goto out;50195019+ }50175020 p->dev_sectors = new_size;50185021 set_capacity_and_notify(ub->ub_disk, p->dev_sectors);50225022+out:50195023 mutex_unlock(&ub->mutex);50245024+ return ret;50205025}5021502650225027struct count_busy {···53445335 ret = ublk_ctrl_end_recovery(ub, &header);53455336 break;53465337 case UBLK_CMD_UPDATE_SIZE:53475347- ublk_ctrl_set_size(ub, &header);53485348- ret = 0;53385338+ ret = ublk_ctrl_set_size(ub, &header);53495339 break;53505340 case UBLK_CMD_QUIESCE_DEV:53515341 ret = ublk_ctrl_quiesce_dev(ub, &header);
+12-12
drivers/block/zram/zram_drv.c
···549549 return ret;550550}551551552552-static ssize_t writeback_compressed_store(struct device *dev,552552+static ssize_t compressed_writeback_store(struct device *dev,553553 struct device_attribute *attr,554554 const char *buf, size_t len)555555{···564564 return -EBUSY;565565 }566566567567- zram->wb_compressed = val;567567+ zram->compressed_wb = val;568568569569 return len;570570}571571572572-static ssize_t writeback_compressed_show(struct device *dev,572572+static ssize_t compressed_writeback_show(struct device *dev,573573 struct device_attribute *attr,574574 char *buf)575575{···577577 struct zram *zram = dev_to_zram(dev);578578579579 guard(rwsem_read)(&zram->dev_lock);580580- val = zram->wb_compressed;580580+ val = zram->compressed_wb;581581582582 return sysfs_emit(buf, "%d\n", val);583583}···946946 goto out;947947 }948948949949- if (zram->wb_compressed) {949949+ if (zram->compressed_wb) {950950 /*951951 * ZRAM_WB slots get freed, we need to preserve data required952952 * for read decompression.···960960 set_slot_flag(zram, index, ZRAM_WB);961961 set_slot_handle(zram, index, req->blk_idx);962962963963- if (zram->wb_compressed) {963963+ if (zram->compressed_wb) {964964 if (huge)965965 set_slot_flag(zram, index, ZRAM_HUGE);966966 set_slot_size(zram, index, size);···11001100 */11011101 if (!test_slot_flag(zram, index, ZRAM_PP_SLOT))11021102 goto next;11031103- if (zram->wb_compressed)11031103+ if (zram->compressed_wb)11041104 err = read_from_zspool_raw(zram, req->page, index);11051105 else11061106 err = read_from_zspool(zram, req->page, index);···14291429 *14301430 * Keep the existing behavior for now.14311431 */14321432- if (zram->wb_compressed == false) {14321432+ if (zram->compressed_wb == false) {14331433 /* No decompression needed, complete the parent IO */14341434 bio_endio(req->parent);14351435 bio_put(bio);···15081508 flush_work(&req.work);15091509 destroy_work_on_stack(&req.work);1510151015111511- if (req.error || zram->wb_compressed == false)15111511+ if (req.error || zram->compressed_wb == false)15121512 return req.error;1513151315141514 return decompress_bdev_page(zram, page, index);
···30073007static DEVICE_ATTR_RW(writeback_limit);30083008static DEVICE_ATTR_RW(writeback_limit_enable);30093009static DEVICE_ATTR_RW(writeback_batch_size);30103010-static DEVICE_ATTR_RW(writeback_compressed);30103010+static DEVICE_ATTR_RW(compressed_writeback);30113011#endif30123012#ifdef CONFIG_ZRAM_MULTI_COMP30133013static DEVICE_ATTR_RW(recomp_algorithm);···30313031 &dev_attr_writeback_limit.attr,30323032 &dev_attr_writeback_limit_enable.attr,30333033 &dev_attr_writeback_batch_size.attr,30343034- &dev_attr_writeback_compressed.attr,30343034+ &dev_attr_compressed_writeback.attr,30353035#endif30363036 &dev_attr_io_stat.attr,30373037 &dev_attr_mm_stat.attr,···30913091 init_rwsem(&zram->dev_lock);30923092#ifdef CONFIG_ZRAM_WRITEBACK30933093 zram->wb_batch_size = 32;30943094- zram->wb_compressed = false;30943094+ zram->compressed_wb = false;30953095#endif3096309630973097 /* gendisk structure */
-10
drivers/cpuidle/cpuidle.c
···359359int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,360360 bool *stop_tick)361361{362362- /*363363- * If there is only a single idle state (or none), there is nothing364364- * meaningful for the governor to choose. Skip the governor and365365- * always use state 0 with the tick running.366366- */367367- if (drv->state_count <= 1) {368368- *stop_tick = false;369369- return 0;370370- }371371-372362 return cpuidle_curr_governor->select(drv, dev, stop_tick);373363}374364
+4-6
drivers/crypto/ccp/sev-dev.c
···11051105{11061106 struct psp_device *psp_master = psp_get_master_device();11071107 struct snp_hv_fixed_pages_entry *entry;11081108- struct sev_device *sev;11091108 unsigned int order;11101109 struct page *page;1111111011121112- if (!psp_master || !psp_master->sev_data)11111111+ if (!psp_master)11131112 return NULL;11141114-11151115- sev = psp_master->sev_data;1116111311171114 order = get_order(PMD_SIZE * num_2mb_pages);11181115···11231126 * This API uses SNP_INIT_EX to transition allocated pages to HV_Fixed11241127 * page state, fail if SNP is already initialized.11251128 */11261126- if (sev->snp_initialized)11291129+ if (psp_master->sev_data &&11301130+ ((struct sev_device *)psp_master->sev_data)->snp_initialized)11271131 return NULL;1128113211291133 /* Re-use freed pages that match the request */···11601162 struct psp_device *psp_master = psp_get_master_device();11611163 struct snp_hv_fixed_pages_entry *entry, *nentry;1162116411631163- if (!psp_master || !psp_master->sev_data)11651165+ if (!psp_master)11641166 return;1165116711661168 /*
+18-6
drivers/firmware/cirrus/cs_dsp.c
···16101610 region_name);1611161116121612 if (reg) {16131613+ /*16141614+ * Although we expect the underlying bus does not require16151615+ * physically-contiguous buffers, we pessimistically use16161616+ * a temporary buffer instead of trusting that the16171617+ * alignment of region->data is ok.16181618+ */16131619 region_len = le32_to_cpu(region->len);16141620 if (region_len > buf_len) {16151621 buf_len = round_up(region_len, PAGE_SIZE);16161616- kfree(buf);16171617- buf = kmalloc(buf_len, GFP_KERNEL | GFP_DMA);16221622+ vfree(buf);16231623+ buf = vmalloc(buf_len);16181624 if (!buf) {16191625 ret = -ENOMEM;16201626 goto out_fw;···1649164316501644 ret = 0;16511645out_fw:16521652- kfree(buf);16461646+ vfree(buf);1653164716541648 if (ret == -EOVERFLOW)16551649 cs_dsp_err(dsp, "%s: file content overflows file data\n", file);···23372331 }2338233223392333 if (reg) {23342334+ /*23352335+ * Although we expect the underlying bus does not require23362336+ * physically-contiguous buffers, we pessimistically use23372337+ * a temporary buffer instead of trusting that the23382338+ * alignment of blk->data is ok.23392339+ */23402340 region_len = le32_to_cpu(blk->len);23412341 if (region_len > buf_len) {23422342 buf_len = round_up(region_len, PAGE_SIZE);23432343- kfree(buf);23442344- buf = kmalloc(buf_len, GFP_KERNEL | GFP_DMA);23432343+ vfree(buf);23442344+ buf = vmalloc(buf_len);23452345 if (!buf) {23462346 ret = -ENOMEM;23472347 goto out_fw;···2378236623792367 ret = 0;23802368out_fw:23812381- kfree(buf);23692369+ vfree(buf);2382237023832371 if (ret == -EOVERFLOW)23842372 cs_dsp_err(dsp, "%s: file content overflows file data\n", file);
+1-1
drivers/firmware/efi/mokvar-table.c
···8585 * as an alternative to ordinary EFI variables, due to platform-dependent8686 * limitations. The memory occupied by this table is marked as reserved.8787 *8888- * This routine must be called before efi_free_boot_services() in order8888+ * This routine must be called before efi_unmap_boot_services() in order8989 * to guarantee that it can mark the table as reserved.9090 *9191 * Implicit inputs:
+2
drivers/firmware/stratix10-rsu.c
···768768 rsu_async_status_callback);769769 if (ret) {770770 dev_err(dev, "Error, getting RSU status %i\n", ret);771771+ stratix10_svc_remove_async_client(priv->chan);771772 stratix10_svc_free_channel(priv->chan);773773+ return ret;772774 }773775774776 /* get DCMF version from firmware */
+126-102
drivers/firmware/stratix10-svc.c
···3737 * service layer will return error to FPGA manager when timeout occurs,3838 * timeout is set to 30 seconds (30 * 1000) at Intel Stratix10 SoC.3939 */4040-#define SVC_NUM_DATA_IN_FIFO 324040+#define SVC_NUM_DATA_IN_FIFO 84141#define SVC_NUM_CHANNEL 44242-#define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 2004242+#define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 20004343#define FPGA_CONFIG_STATUS_TIMEOUT_SEC 304444#define BYTE_TO_WORD_SIZE 445454646/* stratix10 service layer clients */4747#define STRATIX10_RSU "stratix10-rsu"4848-#define INTEL_FCS "intel-fcs"49485049/* Maximum number of SDM client IDs. */5150#define MAX_SDM_CLIENT_IDS 16···104105/**105106 * struct stratix10_svc - svc private data106107 * @stratix10_svc_rsu: pointer to stratix10 RSU device107107- * @intel_svc_fcs: pointer to the FCS device108108 */109109struct stratix10_svc {110110 struct platform_device *stratix10_svc_rsu;111111- struct platform_device *intel_svc_fcs;112111};113112114113/**···248251 * @num_active_client: number of active service client249252 * @node: list management250253 * @genpool: memory pool pointing to the memory region251251- * @task: pointer to the thread task which handles SMC or HVC call252252- * @svc_fifo: a queue for storing service message data253254 * @complete_status: state for completion254254- * @svc_fifo_lock: protect access to service message data queue255255 * @invoke_fn: function to issue secure monitor call or hypervisor call256256 * @svc: manages the list of client svc drivers257257+ * @sdm_lock: only allows a single command single response to SDM257258 * @actrl: async control structure258259 *259260 * This struct is used to create communication channels for service clients, to···264269 int num_active_client;265270 struct list_head node;266271 struct gen_pool *genpool;267267- struct task_struct *task;268268- struct kfifo svc_fifo;269272 struct completion complete_status;270270- spinlock_t svc_fifo_lock;271273 svc_invoke_fn *invoke_fn;272274 struct stratix10_svc *svc;275275+ struct mutex sdm_lock;273276 struct stratix10_async_ctrl actrl;274277};275278
···276283 * @ctrl: pointer to service controller which is the provider of this channel277284 * @scl: pointer to service client which owns the channel278285 * @name: service client name associated with the channel286286+ * @task: pointer to the thread task which handles SMC or HVC call287287+ * @svc_fifo: a queue for storing service message data (separate fifo for every channel)288288+ * @svc_fifo_lock: protect access to service message data queue (locking pending fifo)279289 * @lock: protect access to the channel280290 * @async_chan: reference to asynchronous channel object for this channel281291 *···289293 struct stratix10_svc_controller *ctrl;290294 struct stratix10_svc_client *scl;291295 char *name;296296+ struct task_struct *task;297297+ struct kfifo svc_fifo;298298+ spinlock_t svc_fifo_lock;292299 spinlock_t lock;293300 struct stratix10_async_chan *async_chan;294301};···526527 */527528static int svc_normal_to_secure_thread(void *data)528529{529529- struct stratix10_svc_controller530530- *ctrl = (struct stratix10_svc_controller *)data;531531- struct stratix10_svc_data *pdata;532532- struct stratix10_svc_cb_data *cbdata;530530+ struct stratix10_svc_chan *chan = (struct stratix10_svc_chan *)data;531531+ struct stratix10_svc_controller *ctrl = chan->ctrl;532532+ struct stratix10_svc_data *pdata = NULL;533533+ struct stratix10_svc_cb_data *cbdata = NULL;533534 struct arm_smccc_res res;534535 unsigned long a0, a1, a2, a3, a4, a5, a6, a7;535536 int ret_fifo = 0;···554555 a6 = 0;555556 a7 = 0;556557557557- pr_debug("smc_hvc_shm_thread is running\n");558558+ pr_debug("%s: %s: Thread is running!\n", __func__, chan->name);558559559560 while (!kthread_should_stop()) {560560- ret_fifo = kfifo_out_spinlocked(&ctrl->svc_fifo,561561+ ret_fifo = kfifo_out_spinlocked(&chan->svc_fifo,561562 pdata, sizeof(*pdata),562562- &ctrl->svc_fifo_lock);563563+ &chan->svc_fifo_lock);563564564565 if (!ret_fifo)565566 continue;
···568569 (unsigned int)pdata->paddr, pdata->command,569570 (unsigned int)pdata->size);570571572572+ /* SDM can only process one command at a time */573573+ pr_debug("%s: %s: Thread is waiting for mutex!\n",574574+ __func__, chan->name);575575+ if (mutex_lock_interruptible(&ctrl->sdm_lock)) {576576+ /* item already dequeued; notify client to unblock it */577577+ cbdata->status = BIT(SVC_STATUS_ERROR);578578+ cbdata->kaddr1 = NULL;579579+ cbdata->kaddr2 = NULL;580580+ cbdata->kaddr3 = NULL;581581+ if (pdata->chan->scl)582582+ pdata->chan->scl->receive_cb(pdata->chan->scl,583583+ cbdata);584584+ break;585585+ }586586+571587 switch (pdata->command) {572588 case COMMAND_RECONFIG_DATA_CLAIM:573589 svc_thread_cmd_data_claim(ctrl, pdata, cbdata);590590+ mutex_unlock(&ctrl->sdm_lock);574591 continue;575592 case COMMAND_RECONFIG:576593 a0 = INTEL_SIP_SMC_FPGA_CONFIG_START;···715700 break;716701 default:717702 pr_warn("it shouldn't happen\n");718718- break;703703+ mutex_unlock(&ctrl->sdm_lock);704704+ continue;719705 }720720- pr_debug("%s: before SMC call -- a0=0x%016x a1=0x%016x",721721- __func__,706706+ pr_debug("%s: %s: before SMC call -- a0=0x%016x a1=0x%016x",707707+ __func__, chan->name,722708 (unsigned int)a0,723709 (unsigned int)a1);724710 pr_debug(" a2=0x%016x\n", (unsigned int)a2);···728712 pr_debug(" a5=0x%016x\n", (unsigned int)a5);729713 ctrl->invoke_fn(a0, a1, a2, a3, a4, a5, a6, a7, &res);730714731731- pr_debug("%s: after SMC call -- res.a0=0x%016x",732732- __func__, (unsigned int)res.a0);715715+ pr_debug("%s: %s: after SMC call -- res.a0=0x%016x",716716+ __func__, chan->name, (unsigned int)res.a0);733717 pr_debug(" res.a1=0x%016x, res.a2=0x%016x",734718 (unsigned int)res.a1, (unsigned int)res.a2);735719 pr_debug(" res.a3=0x%016x\n", (unsigned int)res.a3);···744728 cbdata->kaddr2 = NULL;745729 cbdata->kaddr3 = NULL;746730 pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata);731731+ mutex_unlock(&ctrl->sdm_lock);747732 continue;748733 }749734
···818801 break;819802820803 }804804+805805+ mutex_unlock(&ctrl->sdm_lock);821806 }822807823808 kfree(cbdata);···17151696 if (!p_data)17161697 return -ENOMEM;1717169817181718- /* first client will create kernel thread */17191719- if (!chan->ctrl->task) {17201720- chan->ctrl->task =17211721- kthread_run_on_cpu(svc_normal_to_secure_thread,17221722- (void *)chan->ctrl,17231723- cpu, "svc_smc_hvc_thread");17241724- if (IS_ERR(chan->ctrl->task)) {16991699+ /* first caller creates the per-channel kthread */17001700+ if (!chan->task) {17011701+ struct task_struct *task;17021702+17031703+ task = kthread_run_on_cpu(svc_normal_to_secure_thread,17041704+ (void *)chan,17051705+ cpu, "svc_smc_hvc_thread");17061706+ if (IS_ERR(task)) {17251707 dev_err(chan->ctrl->dev,17261708 "failed to create svc_smc_hvc_thread\n");17271709 kfree(p_data);17281710 return -EINVAL;17291711 }17121712+17131713+ spin_lock(&chan->lock);17141714+ if (chan->task) {17151715+ /* another caller won the race; discard our thread */17161716+ spin_unlock(&chan->lock);17171717+ kthread_stop(task);17181718+ } else {17191719+ chan->task = task;17201720+ spin_unlock(&chan->lock);17211721+ }17301722 }1731172317321732- pr_debug("%s: sent P-va=%p, P-com=%x, P-size=%u\n", __func__,17331733- p_msg->payload, p_msg->command,17241724+ pr_debug("%s: %s: sent P-va=%p, P-com=%x, P-size=%u\n", __func__,17251725+ chan->name, p_msg->payload, p_msg->command,17341726 (unsigned int)p_msg->payload_length);1735172717361728 if (list_empty(&svc_data_mem)) {···17771747 p_data->arg[2] = p_msg->arg[2];17781748 p_data->size = p_msg->payload_length;17791749 p_data->chan = chan;17801780- pr_debug("%s: put to FIFO pa=0x%016x, cmd=%x, size=%u\n", __func__,17811781- (unsigned int)p_data->paddr, p_data->command,17821782- (unsigned int)p_data->size);17831783- ret = kfifo_in_spinlocked(&chan->ctrl->svc_fifo, p_data,17501750+ pr_debug("%s: %s: put to FIFO pa=0x%016x, cmd=%x, size=%u\n",17511751+ __func__,17521752+ chan->name,17531753+ (unsigned int)p_data->paddr,17541754+ p_data->command,17551755+ (unsigned int)p_data->size);17561756+17571757+ ret = kfifo_in_spinlocked(&chan->svc_fifo, p_data,17841758 sizeof(*p_data),17851785- &chan->ctrl->svc_fifo_lock);17591759+ &chan->svc_fifo_lock);1786176017871761 kfree(p_data);17881762
···18071773 */18081774void stratix10_svc_done(struct stratix10_svc_chan *chan)18091775{18101810- /* stop thread when thread is running AND only one active client */18111811- if (chan->ctrl->task && chan->ctrl->num_active_client <= 1) {18121812- pr_debug("svc_smc_hvc_shm_thread is stopped\n");18131813- kthread_stop(chan->ctrl->task);18141814- chan->ctrl->task = NULL;17761776+ /* stop thread when thread is running */17771777+ if (chan->task) {17781778+ pr_debug("%s: %s: svc_smc_hvc_shm_thread is stopping\n",17791779+ __func__, chan->name);17801780+ kthread_stop(chan->task);17811781+ chan->task = NULL;18151782 }18161783}18171784EXPORT_SYMBOL_GPL(stratix10_svc_done);···18521817 pmem->paddr = pa;18531818 pmem->size = s;18541819 list_add_tail(&pmem->node, &svc_data_mem);18551855- pr_debug("%s: va=%p, pa=0x%016x\n", __func__,18561856- pmem->vaddr, (unsigned int)pmem->paddr);18201820+ pr_debug("%s: %s: va=%p, pa=0x%016x\n", __func__,18211821+ chan->name, pmem->vaddr, (unsigned int)pmem->paddr);1857182218581823 return (void *)va;18591824}···18901855 {},18911856};1892185718581858+static const char * const chan_names[SVC_NUM_CHANNEL] = {18591859+ SVC_CLIENT_FPGA,18601860+ SVC_CLIENT_RSU,18611861+ SVC_CLIENT_FCS,18621862+ SVC_CLIENT_HWMON18631863+};18641864+18931865static int stratix10_svc_drv_probe(struct platform_device *pdev)18941866{18951867 struct device *dev = &pdev->dev;···19041862 struct stratix10_svc_chan *chans;19051863 struct gen_pool *genpool;19061864 struct stratix10_svc_sh_memory *sh_memory;19071907- struct stratix10_svc *svc;18651865+ struct stratix10_svc *svc = NULL;1908186619091867 svc_invoke_fn *invoke_fn;19101868 size_t fifo_size;19111911- int ret;18691869+ int ret, i = 0;1912187019131871 /* get SMC or HVC function */19141872 invoke_fn = get_invoke_func(dev);
···19471905 controller->num_active_client = 0;19481906 controller->chans = chans;19491907 controller->genpool = genpool;19501950- controller->task = NULL;19511908 controller->invoke_fn = invoke_fn;19091909+ INIT_LIST_HEAD(&controller->node);19521910 init_completion(&controller->complete_status);1953191119541912 ret = stratix10_svc_async_init(controller);···19591917 }1960191819611919 fifo_size = sizeof(struct stratix10_svc_data) * SVC_NUM_DATA_IN_FIFO;19621962- ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL);19631963- if (ret) {19641964- dev_err(dev, "failed to allocate FIFO\n");19651965- goto err_async_exit;19201920+ mutex_init(&controller->sdm_lock);19211921+19221922+ for (i = 0; i < SVC_NUM_CHANNEL; i++) {19231923+ chans[i].scl = NULL;19241924+ chans[i].ctrl = controller;19251925+ chans[i].name = (char *)chan_names[i];19261926+ spin_lock_init(&chans[i].lock);19271927+ ret = kfifo_alloc(&chans[i].svc_fifo, fifo_size, GFP_KERNEL);19281928+ if (ret) {19291929+ dev_err(dev, "failed to allocate FIFO %d\n", i);19301930+ goto err_free_fifos;19311931+ }19321932+ spin_lock_init(&chans[i].svc_fifo_lock);19661933 }19671967- spin_lock_init(&controller->svc_fifo_lock);19681968-19691969- chans[0].scl = NULL;19701970- chans[0].ctrl = controller;19711971- chans[0].name = SVC_CLIENT_FPGA;19721972- spin_lock_init(&chans[0].lock);19731973-19741974- chans[1].scl = NULL;19751975- chans[1].ctrl = controller;19761976- chans[1].name = SVC_CLIENT_RSU;19771977- spin_lock_init(&chans[1].lock);19781978-19791979- chans[2].scl = NULL;19801980- chans[2].ctrl = controller;19811981- chans[2].name = SVC_CLIENT_FCS;19821982- spin_lock_init(&chans[2].lock);19831983-19841984- chans[3].scl = NULL;19851985- chans[3].ctrl = controller;19861986- chans[3].name = SVC_CLIENT_HWMON;19871987- spin_lock_init(&chans[3].lock);1988193419891935 list_add_tail(&controller->node, &svc_ctrl);19901936 platform_set_drvdata(pdev, controller);
···19811951 svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL);19821952 if (!svc) {19831953 ret = -ENOMEM;19841984- goto err_free_kfifo;19541954+ goto err_free_fifos;19851955 }19861956 controller->svc = svc;19871957···19891959 if (!svc->stratix10_svc_rsu) {19901960 dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU);19911961 ret = -ENOMEM;19921992- goto err_free_kfifo;19621962+ goto err_free_fifos;19931963 }1994196419951965 ret = platform_device_add(svc->stratix10_svc_rsu);19961996- if (ret) {19971997- platform_device_put(svc->stratix10_svc_rsu);19981998- goto err_free_kfifo;19991999- }20002000-20012001- svc->intel_svc_fcs = platform_device_alloc(INTEL_FCS, 1);20022002- if (!svc->intel_svc_fcs) {20032003- dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);20042004- ret = -ENOMEM;20052005- goto err_unregister_rsu_dev;20062006- }20072007-20082008- ret = platform_device_add(svc->intel_svc_fcs);20092009- if (ret) {20102010- platform_device_put(svc->intel_svc_fcs);20112011- goto err_unregister_rsu_dev;20122012- }19661966+ if (ret)19671967+ goto err_put_device;2013196820141969 ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);20151970 if (ret)20162016- goto err_unregister_fcs_dev;19711971+ goto err_unregister_rsu_dev;2017197220181973 pr_info("Intel Service Layer Driver Initialized\n");2019197420201975 return 0;2021197620222022-err_unregister_fcs_dev:20232023- platform_device_unregister(svc->intel_svc_fcs);20241977err_unregister_rsu_dev:20251978 platform_device_unregister(svc->stratix10_svc_rsu);20262026-err_free_kfifo:20272027- kfifo_free(&controller->svc_fifo);20282028-err_async_exit:19791979+ goto err_free_fifos;19801980+err_put_device:19811981+ platform_device_put(svc->stratix10_svc_rsu);19821982+err_free_fifos:19831983+ /* only remove from list if list_add_tail() was reached */19841984+ if (!list_empty(&controller->node))19851985+ list_del(&controller->node);19861986+ /* free only the FIFOs that were successfully allocated */19871987+ while (i--)19881988+ kfifo_free(&chans[i].svc_fifo);20291989 stratix10_svc_async_exit(controller);20301990err_destroy_pool:20311991 gen_pool_destroy(genpool);19921992+20321993 return ret;20331994}2034199520351996static void stratix10_svc_drv_remove(struct platform_device *pdev)20361997{19981998+ int i;20371999 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);20382000 struct stratix10_svc *svc = ctrl->svc;20392001
···2033201120342012 of_platform_depopulate(ctrl->dev);2035201320362036- platform_device_unregister(svc->intel_svc_fcs);20372014 platform_device_unregister(svc->stratix10_svc_rsu);2038201520392039- kfifo_free(&ctrl->svc_fifo);20402040- if (ctrl->task) {20412041- kthread_stop(ctrl->task);20422042- ctrl->task = NULL;20162016+ for (i = 0; i < SVC_NUM_CHANNEL; i++) {20172017+ if (ctrl->chans[i].task) {20182018+ kthread_stop(ctrl->chans[i].task);20192019+ ctrl->chans[i].task = NULL;20202020+ }20212021+ kfifo_free(&ctrl->chans[i].svc_fifo);20432022 }20232023+20442024 if (ctrl->genpool)20452025 gen_pool_destroy(ctrl->genpool);20462026 list_del(&ctrl->node);
+4-3
drivers/gpib/lpvo_usb_gpib/lpvo_usb_gpib.c
···3838/*3939 * Table of devices that work with this driver.4040 *4141- * Currently, only one device is known to be used in the4242- * lpvo_usb_gpib adapter (FTDI 0403:6001).4141+ * Currently, only one device is known to be used in the lpvo_usb_gpib4242+ * adapter (FTDI 0403:6001) but as this device id is already handled by the4343+ * ftdi_sio USB serial driver the LPVO driver must not bind to it by default.4444+ *4345 * If your adapter uses a different chip, insert a line4446 * in the following table with proper <Vendor-id>, <Product-id>.4547 *···5250 */53515452static const struct usb_device_id skel_table[] = {5555- { USB_DEVICE(0x0403, 0x6001) },5653 { } /* Terminating entry */5754};5855MODULE_DEVICE_TABLE(usb, skel_table);
+5-1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···14391439 *process_info = info;14401440 }1441144114421442- vm->process_info = *process_info;14421442+ if (cmpxchg(&vm->process_info, NULL, *process_info) != NULL) {14431443+ ret = -EINVAL;14441444+ goto already_acquired;14451445+ }1443144614441447 /* Validate page directory and attach eviction fence */14451448 ret = amdgpu_bo_reserve(vm->root.bo, true);···14821479 amdgpu_bo_unreserve(vm->root.bo);14831480reserve_pd_fail:14841481 vm->process_info = NULL;14821482+already_acquired:14851483 if (info) {14861484 dma_fence_put(&info->eviction_fence->base);14871485 *process_info = NULL;
+13-1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···26902690 break;26912691 default:26922692 r = amdgpu_discovery_set_ip_blocks(adev);26932693- if (r)26932693+ if (r) {26942694+ adev->num_ip_blocks = 0;26942695 return r;26962696+ }26952697 break;26962698 }26972699···32493247 i = state == AMD_CG_STATE_GATE ? j : adev->num_ip_blocks - j - 1;32503248 if (!adev->ip_blocks[i].status.late_initialized)32513249 continue;32503250+ if (!adev->ip_blocks[i].version)32513251+ continue;32523252 /* skip CG for GFX, SDMA on S0ix */32533253 if (adev->in_s0ix &&32543254 (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||···32893285 for (j = 0; j < adev->num_ip_blocks; j++) {32903286 i = state == AMD_PG_STATE_GATE ? j : adev->num_ip_blocks - j - 1;32913287 if (!adev->ip_blocks[i].status.late_initialized)32883288+ continue;32893289+ if (!adev->ip_blocks[i].version)32923290 continue;32933291 /* skip PG for GFX, SDMA on S0ix */32943292 if (adev->in_s0ix &&···34993493 int i, r;3500349435013495 for (i = 0; i < adev->num_ip_blocks; i++) {34963496+ if (!adev->ip_blocks[i].version)34973497+ continue;35023498 if (!adev->ip_blocks[i].version->funcs->early_fini)35033499 continue;35043500···35783570 if (!adev->ip_blocks[i].status.sw)35793571 continue;3580357235733573+ if (!adev->ip_blocks[i].version)35743574+ continue;35813575 if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {35823576 amdgpu_ucode_free_bo(adev);35833577 amdgpu_free_static_csa(&adev->virt.csa_obj);···3605359536063596 for (i = adev->num_ip_blocks - 1; i >= 0; i--) {36073597 if (!adev->ip_blocks[i].status.late_initialized)35983598+ continue;35993599+ if (!adev->ip_blocks[i].version)36083600 continue;36093601 if (adev->ip_blocks[i].version->funcs->late_fini)36103602 adev->ip_blocks[i].version->funcs->late_fini(&adev->ip_blocks[i]);
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
···8383{8484 struct amdgpu_device *adev = drm_to_adev(dev);85858686- if (adev == NULL)8686+ if (adev == NULL || !adev->num_ip_blocks)8787 return;88888989 amdgpu_unregister_gpu_instance(adev);
+8-8
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
···368368369369 struct drm_property *plane_ctm_property;370370 /**371371- * @shaper_lut_property: Plane property to set pre-blending shaper LUT372372- * that converts color content before 3D LUT. If373373- * plane_shaper_tf_property != Identity TF, AMD color module will371371+ * @plane_shaper_lut_property: Plane property to set pre-blending372372+ * shaper LUT that converts color content before 3D LUT.373373+ * If plane_shaper_tf_property != Identity TF, AMD color module will374374 * combine the user LUT values with pre-defined TF into the LUT375375 * parameters to be programmed.376376 */377377 struct drm_property *plane_shaper_lut_property;378378 /**379379- * @shaper_lut_size_property: Plane property for the size of379379+ * @plane_shaper_lut_size_property: Plane property for the size of380380 * pre-blending shaper LUT as supported by the driver (read-only).381381 */382382 struct drm_property *plane_shaper_lut_size_property;···400400 */401401 struct drm_property *plane_lut3d_property;402402 /**403403- * @plane_degamma_lut_size_property: Plane property to define the max404404- * size of 3D LUT as supported by the driver (read-only). The max size405405- * is the max size of one dimension and, therefore, the max number of406406- * entries for 3D LUT array is the 3D LUT size cubed;403403+ * @plane_lut3d_size_property: Plane property to define the max size404404+ * of 3D LUT as supported by the driver (read-only). The max size is405405+ * the max size of one dimension and, therefore, the max number of406406+ * entries for 3D LUT array is the 3D LUT size cubed.407407 */408408 struct drm_property *plane_lut3d_size_property;409409 /**
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c (+81, -35)

@@ -446,8 +446,7 @@
 	return ret;
 }

-static void amdgpu_userq_cleanup(struct amdgpu_usermode_queue *queue,
-				 int queue_id)
+static void amdgpu_userq_cleanup(struct amdgpu_usermode_queue *queue)
 {
 	struct amdgpu_userq_mgr *uq_mgr = queue->userq_mgr;
 	struct amdgpu_device *adev = uq_mgr->adev;
@@ -460,19 +461,12 @@
 	uq_funcs->mqd_destroy(queue);
 	amdgpu_userq_fence_driver_free(queue);
 	/* Use interrupt-safe locking since IRQ handlers may access these XArrays */
-	xa_erase_irq(&uq_mgr->userq_xa, (unsigned long)queue_id);
 	xa_erase_irq(&adev->userq_doorbell_xa, queue->doorbell_index);
 	queue->userq_mgr = NULL;
 	list_del(&queue->userq_va_list);
 	kfree(queue);

 	up_read(&adev->reset_domain->sem);
-}
-
-static struct amdgpu_usermode_queue *
-amdgpu_userq_find(struct amdgpu_userq_mgr *uq_mgr, int qid)
-{
-	return xa_load(&uq_mgr->userq_xa, qid);
 }

 void
@@ -617,22 +625,13 @@
 }

 static int
-amdgpu_userq_destroy(struct drm_file *filp, int queue_id)
+amdgpu_userq_destroy(struct amdgpu_userq_mgr *uq_mgr, struct amdgpu_usermode_queue *queue)
 {
-	struct amdgpu_fpriv *fpriv = filp->driver_priv;
-	struct amdgpu_userq_mgr *uq_mgr = &fpriv->userq_mgr;
 	struct amdgpu_device *adev = uq_mgr->adev;
-	struct amdgpu_usermode_queue *queue;
 	int r = 0;

 	cancel_delayed_work_sync(&uq_mgr->resume_work);
 	mutex_lock(&uq_mgr->userq_mutex);
-	queue = amdgpu_userq_find(uq_mgr, queue_id);
-	if (!queue) {
-		drm_dbg_driver(adev_to_drm(uq_mgr->adev), "Invalid queue id to destroy\n");
-		mutex_unlock(&uq_mgr->userq_mutex);
-		return -EINVAL;
-	}
 	amdgpu_userq_wait_for_last_fence(queue);
 	/* Cancel any pending hang detection work and cleanup */
 	if (queue->hang_detect_fence) {
@@ -655,12 +672,43 @@
 		drm_warn(adev_to_drm(uq_mgr->adev), "trying to destroy a HW mapping userq\n");
 		queue->state = AMDGPU_USERQ_STATE_HUNG;
 	}
-	amdgpu_userq_cleanup(queue, queue_id);
+	amdgpu_userq_cleanup(queue);
 	mutex_unlock(&uq_mgr->userq_mutex);

 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);

 	return r;
+}
+
+static void amdgpu_userq_kref_destroy(struct kref *kref)
+{
+	int r;
+	struct amdgpu_usermode_queue *queue =
+		container_of(kref, struct amdgpu_usermode_queue, refcount);
+	struct amdgpu_userq_mgr *uq_mgr = queue->userq_mgr;
+
+	r = amdgpu_userq_destroy(uq_mgr, queue);
+	if (r)
+		drm_file_err(uq_mgr->file, "Failed to destroy usermode queue %d\n", r);
+}
+
+struct amdgpu_usermode_queue *amdgpu_userq_get(struct amdgpu_userq_mgr *uq_mgr, u32 qid)
+{
+	struct amdgpu_usermode_queue *queue;
+
+	xa_lock(&uq_mgr->userq_xa);
+	queue = xa_load(&uq_mgr->userq_xa, qid);
+	if (queue)
+		kref_get(&queue->refcount);
+	xa_unlock(&uq_mgr->userq_xa);
+
+	return queue;
+}
+
+void amdgpu_userq_put(struct amdgpu_usermode_queue *queue)
+{
+	if (queue)
+		kref_put(&queue->refcount, amdgpu_userq_kref_destroy);
 }

 static int amdgpu_userq_priority_permit(struct drm_file *filp,
@@ -848,6 +834,9 @@
 		goto unlock;
 	}

+	/* drop this refcount during queue destroy */
+	kref_init(&queue->refcount);
+
 	/* Wait for mode-1 reset to complete */
 	down_read(&adev->reset_domain->sem);
 	r = xa_err(xa_store_irq(&adev->userq_doorbell_xa, index, queue, GFP_KERNEL));
@@ -1002,7 +985,9 @@
 			  struct drm_file *filp)
 {
 	union drm_amdgpu_userq *args = data;
-	int r;
+	struct amdgpu_fpriv *fpriv = filp->driver_priv;
+	struct amdgpu_usermode_queue *queue;
+	int r = 0;

 	if (!amdgpu_userq_enabled(dev))
 		return -ENOTSUPP;
@@ -1019,11 +1000,16 @@
 			drm_file_err(filp, "Failed to create usermode queue\n");
 		break;

-	case AMDGPU_USERQ_OP_FREE:
-		r = amdgpu_userq_destroy(filp, args->in.queue_id);
-		if (r)
-			drm_file_err(filp, "Failed to destroy usermode queue\n");
+	case AMDGPU_USERQ_OP_FREE: {
+		xa_lock(&fpriv->userq_mgr.userq_xa);
+		queue = __xa_erase(&fpriv->userq_mgr.userq_xa, args->in.queue_id);
+		xa_unlock(&fpriv->userq_mgr.userq_xa);
+		if (!queue)
+			return -ENOENT;
+
+		amdgpu_userq_put(queue);
 		break;
+	}

 	default:
 		drm_dbg_driver(dev, "Invalid user queue op specified: %d\n", args->in.op);
@@ -1047,16 +1023,23 @@

 	/* Resume all the queues for this process */
 	xa_for_each(&uq_mgr->userq_xa, queue_id, queue) {
+		queue = amdgpu_userq_get(uq_mgr, queue_id);
+		if (!queue)
+			continue;
+
 		if (!amdgpu_userq_buffer_vas_mapped(queue)) {
 			drm_file_err(uq_mgr->file,
 				     "trying restore queue without va mapping\n");
 			queue->state = AMDGPU_USERQ_STATE_INVALID_VA;
+			amdgpu_userq_put(queue);
 			continue;
 		}

 		r = amdgpu_userq_restore_helper(queue);
 		if (r)
 			ret = r;
+
+		amdgpu_userq_put(queue);
 	}

 	if (ret)
@@ -1297,9 +1266,13 @@
 	amdgpu_userq_detect_and_reset_queues(uq_mgr);
 	/* Try to unmap all the queues in this process ctx */
 	xa_for_each(&uq_mgr->userq_xa, queue_id, queue) {
+		queue = amdgpu_userq_get(uq_mgr, queue_id);
+		if (!queue)
+			continue;
 		r = amdgpu_userq_preempt_helper(queue);
 		if (r)
 			ret = r;
+		amdgpu_userq_put(queue);
 	}

 	if (ret)
@@ -1336,16 +1301,24 @@
 	int ret;

 	xa_for_each(&uq_mgr->userq_xa, queue_id, queue) {
+		queue = amdgpu_userq_get(uq_mgr, queue_id);
+		if (!queue)
+			continue;
+
 		struct dma_fence *f = queue->last_fence;

-		if (!f || dma_fence_is_signaled(f))
+		if (!f || dma_fence_is_signaled(f)) {
+			amdgpu_userq_put(queue);
 			continue;
+		}
 		ret = dma_fence_wait_timeout(f, true, msecs_to_jiffies(100));
 		if (ret <= 0) {
 			drm_file_err(uq_mgr->file, "Timed out waiting for fence=%llu:%llu\n",
 				     f->context, f->seqno);
+			amdgpu_userq_put(queue);
 			return -ETIMEDOUT;
 		}
+		amdgpu_userq_put(queue);
 	}

 	return 0;
@@ -1404,20 +1361,23 @@
 void amdgpu_userq_mgr_fini(struct amdgpu_userq_mgr *userq_mgr)
 {
 	struct amdgpu_usermode_queue *queue;
-	unsigned long queue_id;
+	unsigned long queue_id = 0;

-	cancel_delayed_work_sync(&userq_mgr->resume_work);
+	for (;;) {
+		xa_lock(&userq_mgr->userq_xa);
+		queue = xa_find(&userq_mgr->userq_xa, &queue_id, ULONG_MAX,
+				XA_PRESENT);
+		if (queue)
+			__xa_erase(&userq_mgr->userq_xa, queue_id);
+		xa_unlock(&userq_mgr->userq_xa);

-	mutex_lock(&userq_mgr->userq_mutex);
-	amdgpu_userq_detect_and_reset_queues(userq_mgr);
-	xa_for_each(&userq_mgr->userq_xa, queue_id, queue) {
-		amdgpu_userq_wait_for_last_fence(queue);
-		amdgpu_userq_unmap_helper(queue);
-		amdgpu_userq_cleanup(queue, queue_id);
+		if (!queue)
+			break;
+
+		amdgpu_userq_put(queue);
 	}

 	xa_destroy(&userq_mgr->userq_xa);
-	mutex_unlock(&userq_mgr->userq_mutex);
 	mutex_destroy(&userq_mgr->userq_mutex);
 }
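The queue lookup/teardown change above follows the standard kref pattern: a lookup takes a reference under the XArray lock, and the release callback runs only when the last reference drops. A minimal userspace sketch of that pattern, with the kernel's `kref` replaced by a plain C11 atomic and all names hypothetical:

```c
#include <assert.h>
#include <stdatomic.h>

/* Minimal userspace analog of the kref pattern used in the patch:
 * lookup takes a reference, the last put runs the release callback. */
struct queue {
	atomic_int refcount;
	int destroyed; /* set by the release callback */
};

static void queue_release(struct queue *q)
{
	q->destroyed = 1; /* kernel code would free resources here */
}

static void queue_init(struct queue *q)
{
	atomic_init(&q->refcount, 1); /* creation holds the initial reference */
	q->destroyed = 0;
}

static void queue_get(struct queue *q)
{
	atomic_fetch_add(&q->refcount, 1);
}

static void queue_put(struct queue *q)
{
	/* Run the release callback only when the last reference drops. */
	if (atomic_fetch_sub(&q->refcount, 1) == 1)
		queue_release(q);
}
```

The patch mirrors this: `amdgpu_userq_get()` is the locked lookup-plus-`kref_get()`, `amdgpu_userq_put()` drops the reference, and `amdgpu_userq_kref_destroy()` is the release callback that performs the actual destroy.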
@@ -765,15 +765,15 @@
 	dm->adev->mode_info.crtcs[crtc_index] = acrtc;

 	/* Don't enable DRM CRTC degamma property for
-	 * 1. Degamma is replaced by color pipeline.
-	 * 2. DCE since it doesn't support programmable degamma anywhere.
-	 * 3. DCN401 since pre-blending degamma LUT doesn't apply to cursor.
+	 * 1. DCE since it doesn't support programmable degamma anywhere.
+	 * 2. DCN401 since pre-blending degamma LUT doesn't apply to cursor.
+	 * Note: DEGAMMA properties are created even if the primary plane has the
+	 * COLOR_PIPELINE property. User space can use either the DEGAMMA properties
+	 * or the COLOR_PIPELINE property. An atomic commit which attempts to enable
+	 * both is rejected.
 	 */
-	if (plane->color_pipeline_property)
-		has_degamma = false;
-	else
-		has_degamma = dm->adev->dm.dc->caps.color.dpp.dcn_arch &&
-			      dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01;
+	has_degamma = dm->adev->dm.dc->caps.color.dpp.dcn_arch &&
+		      dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01;

 	drm_crtc_enable_color_mgmt(&acrtc->base, has_degamma ? MAX_COLOR_LUT_ENTRIES : 0,
 				   true, MAX_COLOR_LUT_ENTRIES);
@@ -1256,6 +1256,14 @@
 	if (ret)
 		return ret;

+	/* Reject commits that attempt to use both COLOR_PIPELINE and CRTC DEGAMMA_LUT */
+	if (new_plane_state->color_pipeline && new_crtc_state->degamma_lut) {
+		drm_dbg_atomic(plane->dev,
+			       "[PLANE:%d:%s] COLOR_PIPELINE and CRTC DEGAMMA_LUT cannot be enabled simultaneously\n",
+			       plane->base.id, plane->name);
+		return -EINVAL;
+	}
+
 	ret = amdgpu_dm_plane_fill_dc_scaling_info(adev, new_plane_state, &scaling_info);
 	if (ret)
 		return ret;
@@ -96,6 +96,25 @@
 	dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk;
 }

+/*
+ * On DCN21 S0i3 resume, BIOS programs MICROSECOND_TIME_BASE_DIV to
+ * 0x00120464 as a marker that golden init has already been done.
+ * dcn21_s0i3_golden_init_wa() reads this marker later in bios_golden_init()
+ * to decide whether to skip golden init.
+ *
+ * dccg2_init() unconditionally overwrites MICROSECOND_TIME_BASE_DIV to
+ * 0x00120264, destroying the marker before it can be read.
+ *
+ * Guard the call: if the S0i3 marker is present, skip dccg2_init() so the
+ * WA can function correctly. bios_golden_init() will handle init in that case.
+ */
+static void dccg21_init(struct dccg *dccg)
+{
+	if (dccg2_is_s0i3_golden_init_wa_done(dccg))
+		return;
+
+	dccg2_init(dccg);
+}

 static const struct dccg_funcs dccg21_funcs = {
 	.update_dpp_dto = dccg21_update_dpp_dto,
@@ -103,7 +122,7 @@
 	.set_fifo_errdet_ovr_en = dccg2_set_fifo_errdet_ovr_en,
 	.otg_add_pixel = dccg2_otg_add_pixel,
 	.otg_drop_pixel = dccg2_otg_drop_pixel,
-	.dccg_init = dccg2_init,
+	.dccg_init = dccg21_init,
 	.refclk_setup = dccg2_refclk_setup, /* Deprecated - for backward compatibility only */
 	.allow_clock_gating = dccg2_allow_clock_gating,
 	.enable_memory_low_power = dccg2_enable_memory_low_power,
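The guard above is a check-marker-before-init pattern: if firmware already left a marker value in a register, the driver must not clobber it with its own default. A hedged sketch, with the MMIO register modeled as a plain variable and helper names invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Marker values mirror the ones named in the patch comment. */
#define GOLDEN_INIT_DONE_MARKER 0x00120464u /* written by BIOS on S0i3 resume */
#define DEFAULT_TIME_BASE_DIV   0x00120264u /* written by the normal init path */

static uint32_t microsecond_time_base_div; /* stands in for the MMIO register */

static int s0i3_golden_init_wa_done(void)
{
	return microsecond_time_base_div == GOLDEN_INIT_DONE_MARKER;
}

static void dccg_init(void)
{
	/* Skip the unconditional write when the BIOS marker is present,
	 * so the later golden-init workaround can still observe it. */
	if (s0i3_golden_init_wa_done())
		return;

	microsecond_time_base_div = DEFAULT_TIME_BASE_DIV;
}
```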
@@ -480,18 +480,8 @@
 		.start = start,
 		.end = end,
 		.pgmap_owner = pagemap->owner,
-		/*
-		 * FIXME: MIGRATE_VMA_SELECT_DEVICE_PRIVATE intermittently
-		 * causes 'xe_exec_system_allocator --r *race*no*' to trigger an
-		 * engine reset and a hard hang due to getting stuck on a folio
-		 * lock. This should work and needs to be root-caused. The only
-		 * downside of not selecting MIGRATE_VMA_SELECT_DEVICE_PRIVATE
-		 * is that device-to-device migrations won't work; instead,
-		 * memory will bounce through system memory. This path should be
-		 * rare and only occur when the madvise attributes of memory are
-		 * changed or atomics are being used.
-		 */
-		.flags = MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_DEVICE_COHERENT,
+		.flags = MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_DEVICE_COHERENT |
+			 MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
 	};
 	unsigned long i, npages = npages_in_range(start, end);
 	unsigned long own_pages = 0, migrated_pages = 0;
@@ -4577,7 +4577,8 @@
 intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector)
 {
 	struct intel_display *display = to_intel_display(intel_dp);
+	int ret;

 	/* this function is meant to be called only once */
 	drm_WARN_ON(display->drm, intel_dp->dpcd[DP_DPCD_REV] != 0);
@@ -4616,6 +4617,12 @@
 	 * available (such as HDR backlight controls)
 	 */
 	intel_dp_init_source_oui(intel_dp);
+
+	/* Read the ALPM DPCD caps */
+	ret = drm_dp_dpcd_read_byte(&intel_dp->aux, DP_RECEIVER_ALPM_CAP,
+				    &intel_dp->alpm_dpcd);
+	if (ret < 0)
+		return false;

 	/*
 	 * This has to be called after intel_dp->edp_dpcd is filled, PSR checks
drivers/gpu/drm/i915/display/intel_psr.c (+56, -15)

@@ -1307,9 +1307,14 @@
 	u16 sink_y_granularity = crtc_state->has_panel_replay ?
 		connector->dp.panel_replay_caps.su_y_granularity :
 		connector->dp.psr_caps.su_y_granularity;
-	u16 sink_w_granularity = crtc_state->has_panel_replay ?
-		connector->dp.panel_replay_caps.su_w_granularity :
-		connector->dp.psr_caps.su_w_granularity;
+	u16 sink_w_granularity;
+
+	if (crtc_state->has_panel_replay)
+		sink_w_granularity = connector->dp.panel_replay_caps.su_w_granularity ==
+			DP_PANEL_REPLAY_FULL_LINE_GRANULARITY ?
+			crtc_hdisplay : connector->dp.panel_replay_caps.su_w_granularity;
+	else
+		sink_w_granularity = connector->dp.psr_caps.su_w_granularity;

 	/* PSR2 HW only send full lines so we only need to validate the width */
 	if (crtc_hdisplay % sink_w_granularity)
@@ -2619,6 +2614,12 @@

 	intel_de_write_dsb(display, dsb, PIPE_SRCSZ_ERLY_TPT(crtc->pipe),
 			   crtc_state->pipe_srcsz_early_tpt);
+
+	if (!crtc_state->dsc.compression_enable)
+		return;
+
+	intel_dsc_su_et_parameters_configure(dsb, encoder, crtc_state,
+					     drm_rect_height(&crtc_state->psr2_su_area));
 }

 static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
@@ -2695,11 +2684,12 @@
 		overlap_damage_area->y2 = damage_area->y2;
 }

-static void intel_psr2_sel_fetch_pipe_alignment(struct intel_crtc_state *crtc_state)
+static bool intel_psr2_sel_fetch_pipe_alignment(struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(crtc_state);
 	const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
 	u16 y_alignment;
+	bool su_area_changed = false;

 	/* ADLP aligns the SU region to vdsc slice height in case dsc is enabled */
 	if (crtc_state->dsc.compression_enable &&
@@ -2709,10 +2697,18 @@
 	else
 		y_alignment = crtc_state->su_y_granularity;

-	crtc_state->psr2_su_area.y1 -= crtc_state->psr2_su_area.y1 % y_alignment;
-	if (crtc_state->psr2_su_area.y2 % y_alignment)
+	if (crtc_state->psr2_su_area.y1 % y_alignment) {
+		crtc_state->psr2_su_area.y1 -= crtc_state->psr2_su_area.y1 % y_alignment;
+		su_area_changed = true;
+	}
+
+	if (crtc_state->psr2_su_area.y2 % y_alignment) {
 		crtc_state->psr2_su_area.y2 = ((crtc_state->psr2_su_area.y2 /
 					       y_alignment) + 1) * y_alignment;
+		su_area_changed = true;
+	}
+
+	return su_area_changed;
 }

 /*
@@ -2854,7 +2834,7 @@
 	struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
 	struct intel_plane_state *new_plane_state, *old_plane_state;
 	struct intel_plane *plane;
-	bool full_update = false, cursor_in_su_area = false;
+	bool full_update = false, su_area_changed;
 	int i, ret;

 	if (!crtc_state->enable_psr2_sel_fetch)
@@ -2961,15 +2941,32 @@
 	if (ret)
 		return ret;

-	/*
-	 * Adjust su area to cover cursor fully as necessary (early
-	 * transport). This needs to be done after
-	 * drm_atomic_add_affected_planes to ensure visible cursor is added into
-	 * affected planes even when cursor is not updated by itself.
-	 */
-	intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area);
-
-	intel_psr2_sel_fetch_pipe_alignment(crtc_state);
+	do {
+		bool cursor_in_su_area;
+
+		/*
+		 * Adjust su area to cover cursor fully as necessary
+		 * (early transport). This needs to be done after
+		 * drm_atomic_add_affected_planes to ensure visible
+		 * cursor is added into affected planes even when
+		 * cursor is not updated by itself.
+		 */
+		intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area);
+
+		su_area_changed = intel_psr2_sel_fetch_pipe_alignment(crtc_state);
+
+		/*
+		 * If the cursor was outside the SU area before
+		 * alignment, the alignment step (which only expands
+		 * SU) may pull the cursor partially inside, so we
+		 * must run ET alignment again to fully cover it. But
+		 * if the cursor was already fully inside before
+		 * alignment, expanding the SU area won't change that,
+		 * so no further work is needed.
+		 */
+		if (cursor_in_su_area)
+			break;
+	} while (su_area_changed);

 	/*
 	 * Now that we have the pipe damaged area check if it intersect with
@@ -3046,6 +3009,10 @@
 	}

 skip_sel_fetch_set_loop:
+	if (full_update)
+		clip_area_update(&crtc_state->psr2_su_area, &crtc_state->pipe_src,
+				 &crtc_state->pipe_src);
+
 	psr2_man_trk_ctl_calc(crtc_state, full_update);
 	crtc_state->pipe_srcsz_early_tpt =
 		psr2_pipe_srcsz_early_tpt_calc(crtc_state, full_update);
@@ -598,6 +598,18 @@
 		return;

 	/*
+	 * Bspec says:
+	 * "(note: VRR needs to be programmed after
+	 * TRANS_DDI_FUNC_CTL and before TRANS_CONF)."
+	 *
+	 * In practice it turns out that ICL can hang if
+	 * TRANS_VRR_VMAX/FLIPLINE are written before
+	 * enabling TRANS_DDI_FUNC_CTL.
+	 */
+	drm_WARN_ON(display->drm,
+		    !(intel_de_read(display, TRANS_DDI_FUNC_CTL(display, cpu_transcoder)) & TRANS_DDI_FUNC_ENABLE));
+
+	/*
 	 * This bit seems to have two meanings depending on the platform:
 	 * TGL: generate VRR "safe window" for DSB vblank waits
 	 * ADL/DG2: make TRANS_SET_CONTEXT_LATENCY effective with VRR
@@ -950,6 +938,8 @@
 void intel_vrr_transcoder_enable(const struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(crtc_state);
+
+	intel_vrr_set_transcoder_timings(crtc_state);

 	if (!intel_vrr_possible(crtc_state))
 		return;
@@ -156,10 +156,12 @@
 	u8 color;
 	u32 lr_pe[4], tb_pe[4];
 	const u32 bytemask = 0xff;
-	u32 offset = ctx->cap->sblk->sspp_rec0_blk.base;
+	u32 offset;

 	if (!ctx || !pe_ext)
 		return;
+
+	offset = ctx->cap->sblk->sspp_rec0_blk.base;

 	c = &ctx->hw;
 	/* program SW pixel extension override for all pipes*/
drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c (+14, -38)

@@ -350,25 +350,27 @@
 	return true;
 }

-static bool dpu_rm_find_lms(struct dpu_rm *rm,
-			    struct dpu_global_state *global_state,
-			    uint32_t crtc_id, bool skip_dspp,
-			    struct msm_display_topology *topology,
-			    int *lm_idx, int *pp_idx, int *dspp_idx)
+static int _dpu_rm_reserve_lms(struct dpu_rm *rm,
+			       struct dpu_global_state *global_state,
+			       uint32_t crtc_id,
+			       struct msm_display_topology *topology)

 {
+	int lm_idx[MAX_BLOCKS];
+	int pp_idx[MAX_BLOCKS];
+	int dspp_idx[MAX_BLOCKS] = {0};
 	int i, lm_count = 0;
+
+	if (!topology->num_lm) {
+		DPU_ERROR("zero LMs in topology\n");
+		return -EINVAL;
+	}

 	/* Find a primary mixer */
 	for (i = 0; i < ARRAY_SIZE(rm->mixer_blks) &&
 	     lm_count < topology->num_lm; i++) {
 		if (!rm->mixer_blks[i])
 			continue;
-
-		if (skip_dspp && to_dpu_hw_mixer(rm->mixer_blks[i])->cap->dspp) {
-			DPU_DEBUG("Skipping LM_%d, skipping LMs with DSPPs\n", i);
-			continue;
-		}

 		/*
 		 * Reset lm_count to an even index. This will drop the previous
@@ -410,38 +408,13 @@
 		}
 	}

-	return lm_count == topology->num_lm;
-}
-
-static int _dpu_rm_reserve_lms(struct dpu_rm *rm,
-			       struct dpu_global_state *global_state,
-			       uint32_t crtc_id,
-			       struct msm_display_topology *topology)
-
-{
-	int lm_idx[MAX_BLOCKS];
-	int pp_idx[MAX_BLOCKS];
-	int dspp_idx[MAX_BLOCKS] = {0};
-	int i;
-	bool found;
-
-	if (!topology->num_lm) {
-		DPU_ERROR("zero LMs in topology\n");
-		return -EINVAL;
-	}
-
-	/* Try using non-DSPP LM blocks first */
-	found = dpu_rm_find_lms(rm, global_state, crtc_id, !topology->num_dspp,
-				topology, lm_idx, pp_idx, dspp_idx);
-	if (!found && !topology->num_dspp)
-		found = dpu_rm_find_lms(rm, global_state, crtc_id, false,
-					topology, lm_idx, pp_idx, dspp_idx);
-	if (!found) {
+	if (lm_count != topology->num_lm) {
 		DPU_DEBUG("unable to find appropriate mixers\n");
 		return -ENAVAIL;
 	}

-	for (i = 0; i < topology->num_lm; i++) {
+	for (i = 0; i < lm_count; i++) {
 		global_state->mixer_to_crtc_id[lm_idx[i]] = crtc_id;
 		global_state->pingpong_to_crtc_id[pp_idx[i]] = crtc_id;
 		global_state->dspp_to_crtc_id[dspp_idx[i]] =
drivers/gpu/drm/msm/dsi/dsi_host.c (+31, -12)

@@ -584,13 +584,32 @@
  * FIXME: Reconsider this if/when CMD mode handling is rewritten to use
  * transfer time and data overhead as a starting point of the calculations.
  */
-static unsigned long dsi_adjust_pclk_for_compression(const struct drm_display_mode *mode,
-						     const struct drm_dsc_config *dsc)
+static unsigned long
+dsi_adjust_pclk_for_compression(const struct drm_display_mode *mode,
+				const struct drm_dsc_config *dsc,
+				bool is_bonded_dsi)
 {
-	int new_hdisplay = DIV_ROUND_UP(mode->hdisplay * drm_dsc_get_bpp_int(dsc),
-					dsc->bits_per_component * 3);
-
-	int new_htotal = mode->htotal - mode->hdisplay + new_hdisplay;
+	int hdisplay, new_hdisplay, new_htotal;
+
+	/*
+	 * For bonded DSI, split hdisplay across two links and round up each
+	 * half separately; passing the full hdisplay would only round up once.
+	 * This also aligns with the hdisplay we program later in
+	 * dsi_timing_setup().
+	 */
+	hdisplay = mode->hdisplay;
+	if (is_bonded_dsi)
+		hdisplay /= 2;
+
+	new_hdisplay = DIV_ROUND_UP(hdisplay * drm_dsc_get_bpp_int(dsc),
+				    dsc->bits_per_component * 3);
+
+	if (is_bonded_dsi)
+		new_hdisplay *= 2;
+
+	new_htotal = mode->htotal - mode->hdisplay + new_hdisplay;

 	return mult_frac(mode->clock * 1000u, new_htotal, mode->htotal);
 }
@@ -620,7 +603,7 @@
 	pclk_rate = mode->clock * 1000u;

 	if (dsc)
-		pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc);
+		pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc, is_bonded_dsi);

 	/*
 	 * For bonded DSI mode, the current DRM mode has the complete width of the
@@ -1010,7 +993,7 @@

 	if (msm_host->dsc) {
 		struct drm_dsc_config *dsc = msm_host->dsc;
-		u32 bytes_per_pclk;
+		u32 bits_per_pclk;

 		/* update dsc params with timing params */
 		if (!dsc || !mode->hdisplay || !mode->vdisplay) {
@@ -1032,9 +1015,11 @@

 		/*
 		 * DPU sends 3 bytes per pclk cycle to DSI. If widebus is
-		 * enabled, bus width is extended to 6 bytes.
+		 * enabled, MDP always sends out 48-bit compressed data per
+		 * pclk and on average, DSI consumes an amount of compressed
+		 * data equivalent to the uncompressed pixel depth per pclk.
 		 *
 		 * Calculate the number of pclks needed to transmit one line of
 		 * the compressed data.
@@ -1046,12 +1027,12 @@
 		 * unused anyway.
 		 */
 		h_total -= hdisplay;
-		if (wide_bus_enabled && !(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO))
-			bytes_per_pclk = 6;
+		if (wide_bus_enabled)
+			bits_per_pclk = mipi_dsi_pixel_format_to_bpp(msm_host->format);
 		else
-			bytes_per_pclk = 3;
+			bits_per_pclk = 24;

-		hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc), bytes_per_pclk);
+		hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc) * 8, bits_per_pclk);

 		h_total += hdisplay;
 		ha_end = ha_start + hdisplay;
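The bonded-DSI change above matters because rounding is not distributive: rounding up each half of hdisplay and doubling can give a larger result than rounding up the whole. A small self-contained sketch of that arithmetic (values and the helper name are illustrative, not from the driver):

```c
#include <assert.h>

/* Per-half rounding for bonded DSI, as in the patch: each link gets
 * hdisplay/2 and is rounded up separately, then the halves are summed. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int compressed_hdisplay(int hdisplay, int bpp_int, int bpc, int bonded)
{
	int h = bonded ? hdisplay / 2 : hdisplay;
	int new_h = DIV_ROUND_UP(h * bpp_int, bpc * 3);

	return bonded ? new_h * 2 : new_h;
}
```

With hdisplay = 8, bpp_int = 8, bpc = 8 the single-link path yields ceil(64/24) = 3, while the bonded path yields 2 * ceil(32/24) = 4, which is why the per-half rounding has to be done explicitly.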
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c (+11, -11)

@@ -51,8 +51,8 @@
 #define DSI_PHY_7NM_QUIRK_V4_3 BIT(3)
 /* Hardware is V5.2 */
 #define DSI_PHY_7NM_QUIRK_V5_2 BIT(4)
-/* Hardware is V7.0 */
-#define DSI_PHY_7NM_QUIRK_V7_0 BIT(5)
+/* Hardware is V7.2 */
+#define DSI_PHY_7NM_QUIRK_V7_2 BIT(5)

 struct dsi_pll_config {
 	bool enable_ssc;
@@ -143,7 +143,7 @@

 	if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1) {
 		config->pll_clock_inverters = 0x28;
-	} else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	} else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		if (pll_freq < 163000000ULL)
 			config->pll_clock_inverters = 0xa0;
 		else if (pll_freq < 175000000ULL)
@@ -284,7 +284,7 @@
 	}

 	if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	    (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		if (pll->vco_current_rate < 1557000000ULL)
 			vco_config_1 = 0x08;
 		else
@@ -699,7 +699,7 @@
 	case MSM_DSI_PHY_MASTER:
 		pll_7nm->slave = pll_7nm_list[(pll_7nm->phy->id + 1) % DSI_MAX];
 		/* v7.0: Enable ATB_EN0 and alternate clock output to external phy */
-		if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)
+		if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)
 			writel(0x07, base + REG_DSI_7nm_PHY_CMN_CTRL_5);
 		break;
 	case MSM_DSI_PHY_SLAVE:
@@ -987,7 +987,7 @@
 	/* Request for REFGEN READY */
 	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_3) ||
 	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		writel(0x1, phy->base + REG_DSI_7nm_PHY_CMN_GLBL_DIGTOP_SPARE10);
 		udelay(500);
 	}
@@ -1021,7 +1021,7 @@
 		lane_ctrl0 = 0x1f;
 	}

-	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		if (phy->cphy_mode) {
 			/* TODO: different for second phy */
 			vreg_ctrl_0 = 0x57;
@@ -1097,7 +1097,7 @@

 	/* program CMN_CTRL_4 for minor_ver 2 chipsets */
 	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0) ||
+	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2) ||
 	    (readl(base + REG_DSI_7nm_PHY_CMN_REVISION_ID0) & (0xf0)) == 0x20)
 		writel(0x04, base + REG_DSI_7nm_PHY_CMN_CTRL_4);

@@ -1213,7 +1213,7 @@
 	/* Turn off REFGEN Vote */
 	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_3) ||
 	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		writel(0x0, base + REG_DSI_7nm_PHY_CMN_GLBL_DIGTOP_SPARE10);
 		wmb();
 		/* Delay to ensure HW removes vote before PHY shut down */
@@ -1502,5 +1502,5 @@
 #endif
 	.io_start = { 0xae95000, 0xae97000 },
 	.num_dsi_phy = 2,
-	.quirks = DSI_PHY_7NM_QUIRK_V7_0,
+	.quirks = DSI_PHY_7NM_QUIRK_V7_2,
 };

const struct msm_dsi_phy_cfg dsi_phy_3nm_kaanapali_cfgs = {
@@ -1525,5 +1525,5 @@
 #endif
 	.io_start = { 0x9ac1000, 0x9ac4000 },
 	.num_dsi_phy = 2,
-	.quirks = DSI_PHY_7NM_QUIRK_V7_0,
+	.quirks = DSI_PHY_7NM_QUIRK_V7_2,
 };
drivers/gpu/drm/nouveau/nouveau_connector.c (+3)

@@ -1230,6 +1230,9 @@
 	u8 size = msg->size;
 	int ret;

+	if (pm_runtime_suspended(nv_connector->base.dev->dev))
+		return -EBUSY;
+
 	nv_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP);
 	if (!nv_encoder)
 		return -ENODEV;
drivers/gpu/drm/panthor/panthor_sched.c (+5, -4)

@@ -893,14 +893,15 @@

 out_sync:
 	/* Make sure the CPU caches are invalidated before the seqno is read.
-	 * drm_gem_shmem_sync() is a NOP if map_wc=true, so no need to check
+	 * panthor_gem_sync() is a NOP if map_wc=true, so no need to check
 	 * it here.
 	 */
-	panthor_gem_sync(&bo->base.base, queue->syncwait.offset,
+	panthor_gem_sync(&bo->base.base,
+			 DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE,
+			 queue->syncwait.offset,
 			 queue->syncwait.sync64 ?
 			 sizeof(struct panthor_syncobj_64b) :
-			 sizeof(struct panthor_syncobj_32b),
-			 DRM_PANTHOR_BO_SYNC_CPU_CACHE_FLUSH_AND_INVALIDATE);
+			 sizeof(struct panthor_syncobj_32b));

 	return queue->syncwait.kmap + queue->syncwait.offset;
drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c (+15, -1)

@@ -1122,6 +1122,7 @@
 			      struct mipi_dsi_device *device)
 {
 	struct rzg2l_mipi_dsi *dsi = host_to_rzg2l_mipi_dsi(host);
+	int bpp;
 	int ret;

 	if (device->lanes > dsi->num_data_lanes) {
@@ -1132,7 +1133,8 @@
 		return -EINVAL;
 	}

-	switch (mipi_dsi_pixel_format_to_bpp(device->format)) {
+	bpp = mipi_dsi_pixel_format_to_bpp(device->format);
+	switch (bpp) {
 	case 24:
 		break;
 	case 18:
@@ -1163,6 +1164,18 @@
 	}

 	drm_bridge_add(&dsi->bridge);
+
+	/*
+	 * Report the required division ratio setting for the MIPI clock dividers.
+	 *
+	 * vclk * bpp = hsclk * 8 * num_lanes
+	 *
+	 * vclk * DSI_AB_divider = hsclk * 16
+	 *
+	 * which simplifies to...
+	 * DSI_AB_divider = bpp * 2 / num_lanes
+	 */
+	rzg2l_cpg_dsi_div_set_divider(bpp * 2 / dsi->lanes, PLL5_TARGET_DSI);

 	return 0;
 }
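The divider relation in the comment above follows by eliminating hsclk/vclk from the two clock equations: from vclk * bpp = hsclk * 8 * num_lanes and vclk * divider = hsclk * 16, dividing the second by the first gives divider = 2 * bpp / num_lanes. A trivial sketch of that computation (the helper name is illustrative, not a driver function):

```c
#include <assert.h>

/* DSI_AB_divider = bpp * 2 / num_lanes, derived from:
 *   vclk * bpp         = hsclk * 8 * num_lanes
 *   vclk * DSI_divider = hsclk * 16
 */
static int dsi_ab_divider(int bpp, int num_lanes)
{
	return bpp * 2 / num_lanes;
}
```

For a typical RGB888 panel on 4 lanes this gives 24 * 2 / 4 = 12.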
drivers/gpu/drm/scheduler/sched_main.c (+1)

@@ -361,6 +361,7 @@
 /**
  * drm_sched_job_done - complete a job
  * @s_job: pointer to the job which is done
+ * @result: 0 on success, -ERRNO on error
  *
  * Finish the job's fence and resubmit the work items.
  */
drivers/gpu/drm/sitronix/st7586.c (+6, -9)

@@ -347,6 +347,12 @@
 	if (ret)
 		return ret;

+	/*
+	 * Override value set by mipi_dbi_spi_init(). This driver is a bit
+	 * non-standard, so best to set it explicitly here.
+	 */
+	dbi->write_memory_bpw = 8;
+
 	/* Cannot read from this controller via SPI */
 	dbi->read_commands = NULL;

@@ -361,15 +367,6 @@
 			       &st7586_mode, rotation, bufsize);
 	if (ret)
 		return ret;
-
-	/*
-	 * we are using 8-bit data, so we are not actually swapping anything,
-	 * but setting mipi->swap_bytes makes mipi_dbi_typec3_command() do the
-	 * right thing and not use 16-bit transfers (which results in swapped
-	 * bytes on little-endian systems and causes out of order data to be
-	 * sent to the display).
-	 */
-	dbi->swap_bytes = true;

 	drm_mode_config_reset(drm);
drivers/gpu/drm/solomon/ssd130x.c (+2, -4)

@@ -737,6 +737,7 @@
 	unsigned int height = drm_rect_height(rect);
 	unsigned int line_length = DIV_ROUND_UP(width, 8);
 	unsigned int page_height = SSD130X_PAGE_HEIGHT;
+	u8 page_start = ssd130x->page_offset + y / page_height;
 	unsigned int pages = DIV_ROUND_UP(height, page_height);
 	struct drm_device *drm = &ssd130x->drm;
 	u32 array_idx = 0;
@@ -775,14 +774,11 @@
 	 */

 	if (!ssd130x->page_address_mode) {
-		u8 page_start;
-
 		/* Set address range for horizontal addressing mode */
 		ret = ssd130x_set_col_range(ssd130x, ssd130x->col_offset + x, width);
 		if (ret < 0)
 			return ret;

-		page_start = ssd130x->page_offset + y / page_height;
 		ret = ssd130x_set_page_range(ssd130x, page_start, pages);
 		if (ret < 0)
 			return ret;
@@ -811,7 +813,7 @@
 		 */
 		if (ssd130x->page_address_mode) {
 			ret = ssd130x_set_page_pos(ssd130x,
-						   ssd130x->page_offset + i,
+						   page_start + i,
 						   ssd130x->col_offset + x);
 			if (ret < 0)
 				return ret;
drivers/gpu/drm/ttm/tests/ttm_bo_test.c (+2, -2)

@@ -222,13 +222,13 @@
 		KUNIT_FAIL(test, "Couldn't create ttm bo reserve task\n");

 	/* Take a lock so the threaded reserve has to wait */
-	mutex_lock(&bo->base.resv->lock.base);
+	dma_resv_lock(bo->base.resv, NULL);

 	wake_up_process(task);
 	msleep(20);
 	err = kthread_stop(task);

-	mutex_unlock(&bo->base.resv->lock.base);
+	dma_resv_unlock(bo->base.resv);

 	KUNIT_ASSERT_EQ(test, err, -ERESTARTSYS);
 }
+5-6
drivers/gpu/drm/ttm/ttm_bo.c
···11071107static s6411081108ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)11091109{11101110- struct ttm_resource *res = bo->resource;11111111- struct ttm_place place = { .mem_type = res->mem_type };11101110+ struct ttm_place place = { .mem_type = bo->resource->mem_type };11121111 struct ttm_bo_swapout_walk *swapout_walk =11131112 container_of(walk, typeof(*swapout_walk), walk);11141113 struct ttm_operation_ctx *ctx = walk->arg.ctx;···11471148 /*11481149 * Move to system cached11491150 */11501150- if (res->mem_type != TTM_PL_SYSTEM) {11511151+ if (bo->resource->mem_type != TTM_PL_SYSTEM) {11511152 struct ttm_resource *evict_mem;11521153 struct ttm_place hop;11531154···1179118011801181 if (ttm_tt_is_populated(tt)) {11811182 spin_lock(&bdev->lru_lock);11821182- ttm_resource_del_bulk_move(res, bo);11831183+ ttm_resource_del_bulk_move(bo->resource, bo);11831184 spin_unlock(&bdev->lru_lock);1184118511851186 ret = ttm_tt_swapout(bdev, tt, swapout_walk->gfp_flags);1186118711871188 spin_lock(&bdev->lru_lock);11881189 if (ret)11891189- ttm_resource_add_bulk_move(res, bo);11901190- ttm_resource_move_to_lru_tail(res);11901190+ ttm_resource_add_bulk_move(bo->resource, bo);11911191+ ttm_resource_move_to_lru_tail(bo->resource);11911192 spin_unlock(&bdev->lru_lock);11921193 }11931194
···266266 return q;267267}268268269269+static void __xe_exec_queue_fini(struct xe_exec_queue *q)270270+{271271+ int i;272272+273273+ q->ops->fini(q);274274+275275+ for (i = 0; i < q->width; ++i)276276+ xe_lrc_put(q->lrc[i]);277277+}278278+269279static int __xe_exec_queue_init(struct xe_exec_queue *q, u32 exec_queue_flags)270280{271281 int i, err;···330320 return 0;331321332322err_lrc:333333- for (i = i - 1; i >= 0; --i)334334- xe_lrc_put(q->lrc[i]);323323+ __xe_exec_queue_fini(q);335324 return err;336336-}337337-338338-static void __xe_exec_queue_fini(struct xe_exec_queue *q)339339-{340340- int i;341341-342342- q->ops->fini(q);343343-344344- for (i = 0; i < q->width; ++i)345345- xe_lrc_put(q->lrc[i]);346325}347326348327struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *vm,
+35-8
drivers/gpu/drm/xe/xe_gsc_proxy.c
···435435 return 0;436436}437437438438-static void xe_gsc_proxy_remove(void *arg)438438+static void xe_gsc_proxy_stop(struct xe_gsc *gsc)439439{440440- struct xe_gsc *gsc = arg;441440 struct xe_gt *gt = gsc_to_gt(gsc);442441 struct xe_device *xe = gt_to_xe(gt);443443-444444- if (!gsc->proxy.component_added)445445- return;446442447443 /* disable HECI2 IRQs */448444 scoped_guard(xe_pm_runtime, xe) {···451455 }452456453457 xe_gsc_wait_for_worker_completion(gsc);458458+ gsc->proxy.started = false;459459+}460460+461461+static void xe_gsc_proxy_remove(void *arg)462462+{463463+ struct xe_gsc *gsc = arg;464464+ struct xe_gt *gt = gsc_to_gt(gsc);465465+ struct xe_device *xe = gt_to_xe(gt);466466+467467+ if (!gsc->proxy.component_added)468468+ return;469469+470470+ /*471471+ * GSC proxy start is an async process that can be ongoing during472472+ * Xe module load/unload. Using a devm-managed action to register473473+ * xe_gsc_proxy_stop could cause issues if Xe module unload has474474+ * already started when the action is registered, potentially leading475475+ * to the cleanup being called at the wrong time. 
Therefore, instead476476+ * of registering a separate devm action to undo what is done in477477+ * proxy start, we call it from here, but only if the start has478478+ * completed successfully (tracked with the 'started' flag).479479+ */480480+ if (gsc->proxy.started)481481+ xe_gsc_proxy_stop(gsc);454482455483 component_del(xe->drm.dev, &xe_gsc_proxy_component_ops);456484 gsc->proxy.component_added = false;···530510 */531511int xe_gsc_proxy_start(struct xe_gsc *gsc)532512{513513+ struct xe_gt *gt = gsc_to_gt(gsc);533514 int err;534515535516 /* enable the proxy interrupt in the GSC shim layer */···542521 */543522 err = xe_gsc_proxy_request_handler(gsc);544523 if (err)545545- return err;524524+ goto err_irq_disable;546525547526 if (!xe_gsc_proxy_init_done(gsc)) {548548- xe_gt_err(gsc_to_gt(gsc), "GSC FW reports proxy init not completed\n");549549- return -EIO;527527+ xe_gt_err(gt, "GSC FW reports proxy init not completed\n");528528+ err = -EIO;529529+ goto err_irq_disable;550530 }551531532532+ gsc->proxy.started = true;552533 return 0;534534+535535+err_irq_disable:536536+ gsc_proxy_irq_toggle(gsc, false);537537+ return err;553538}
+2
drivers/gpu/drm/xe/xe_gsc_types.h
···5858 struct mutex mutex;5959 /** @proxy.component_added: whether the component has been added */6060 bool component_added;6161+ /** @proxy.started: whether the proxy has been started */6262+ bool started;6163 /** @proxy.bo: object to store message to and from the GSC */6264 struct xe_bo *bo;6365 /** @proxy.to_gsc: map of the memory used to send messages to the GSC */
···2233use core::{44 cmp,55- mem,66- sync::atomic::{77- fence,88- Ordering, //99- }, //55+ mem, //106};117128use kernel::{···142146#[repr(C)]143147// There is no struct defined for this in the open-gpu-kernel-source headers.144148// Instead it is defined by code in `GspMsgQueuesInit()`.145145-struct Msgq {149149+// TODO: Revert to private once `IoView` projections replace the `gsp_mem` module.150150+pub(super) struct Msgq {146151 /// Header for sending messages, including the write pointer.147147- tx: MsgqTxHeader,152152+ pub(super) tx: MsgqTxHeader,148153 /// Header for receiving messages, including the read pointer.149149- rx: MsgqRxHeader,154154+ pub(super) rx: MsgqRxHeader,150155 /// The message queue proper.151156 msgq: MsgqData,152157}153158154159/// Structure shared between the driver and the GSP and containing the command and message queues.155160#[repr(C)]156156-struct GspMem {161161+// TODO: Revert to private once `IoView` projections replace the `gsp_mem` module.162162+pub(super) struct GspMem {157163 /// Self-mapping page table entries.158158- ptes: PteArray<{ GSP_PAGE_SIZE / size_of::<u64>() }>,164164+ ptes: PteArray<{ Self::PTE_ARRAY_SIZE }>,159165 /// CPU queue: the driver writes commands here, and the GSP reads them. It also contains the160166 /// write and read pointers that the CPU updates.161167 ///162168 /// This member is read-only for the GSP.163163- cpuq: Msgq,169169+ pub(super) cpuq: Msgq,164170 /// GSP queue: the GSP writes messages here, and the driver reads them. 
It also contains the165171 /// write and read pointers that the GSP updates.166172 ///167173 /// This member is read-only for the driver.168168- gspq: Msgq,174174+ pub(super) gspq: Msgq,175175+}176176+177177+impl GspMem {178178+ const PTE_ARRAY_SIZE: usize = GSP_PAGE_SIZE / size_of::<u64>();169179}170180171181// SAFETY: These structs don't meet the no-padding requirements of AsBytes but···203201204202 let gsp_mem =205203 CoherentAllocation::<GspMem>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?;206206- dma_write!(gsp_mem[0].ptes = PteArray::new(gsp_mem.dma_handle())?)?;207207- dma_write!(gsp_mem[0].cpuq.tx = MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES))?;208208- dma_write!(gsp_mem[0].cpuq.rx = MsgqRxHeader::new())?;204204+205205+ let start = gsp_mem.dma_handle();206206+ // Write values one by one to avoid an on-stack instance of `PteArray`.207207+ for i in 0..GspMem::PTE_ARRAY_SIZE {208208+ dma_write!(gsp_mem, [0]?.ptes.0[i], PteArray::<0>::entry(start, i)?);209209+ }210210+211211+ dma_write!(212212+ gsp_mem,213213+ [0]?.cpuq.tx,214214+ MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES)215215+ );216216+ dma_write!(gsp_mem, [0]?.cpuq.rx, MsgqRxHeader::new());209217210218 Ok(Self(gsp_mem))211219 }···329317 //330318 // - The returned value is between `0` and `MSGQ_NUM_PAGES`.331319 fn gsp_write_ptr(&self) -> u32 {332332- let gsp_mem = self.0.start_ptr();333333-334334- // SAFETY:335335- // - The 'CoherentAllocation' contains at least one object.336336- // - By the invariants of `CoherentAllocation` the pointer is valid.337337- (unsafe { (*gsp_mem).gspq.tx.write_ptr() } % MSGQ_NUM_PAGES)320320+ super::fw::gsp_mem::gsp_write_ptr(&self.0)338321 }339322340323 // Returns the index of the memory page the GSP will read the next command from.···338331 //339332 // - The returned value is between `0` and `MSGQ_NUM_PAGES`.340333 fn gsp_read_ptr(&self) -> u32 {341341- let gsp_mem = self.0.start_ptr();342342-343343- // SAFETY:344344- // - The 
'CoherentAllocation' contains at least one object.345345- // - By the invariants of `CoherentAllocation` the pointer is valid.346346- (unsafe { (*gsp_mem).gspq.rx.read_ptr() } % MSGQ_NUM_PAGES)334334+ super::fw::gsp_mem::gsp_read_ptr(&self.0)347335 }348336349337 // Returns the index of the memory page the CPU can read the next message from.···347345 //348346 // - The returned value is between `0` and `MSGQ_NUM_PAGES`.349347 fn cpu_read_ptr(&self) -> u32 {350350- let gsp_mem = self.0.start_ptr();351351-352352- // SAFETY:353353- // - The ['CoherentAllocation'] contains at least one object.354354- // - By the invariants of CoherentAllocation the pointer is valid.355355- (unsafe { (*gsp_mem).cpuq.rx.read_ptr() } % MSGQ_NUM_PAGES)348348+ super::fw::gsp_mem::cpu_read_ptr(&self.0)356349 }357350358351 // Informs the GSP that it can send `elem_count` new pages into the message queue.359352 fn advance_cpu_read_ptr(&mut self, elem_count: u32) {360360- let rptr = self.cpu_read_ptr().wrapping_add(elem_count) % MSGQ_NUM_PAGES;361361-362362- // Ensure read pointer is properly ordered.363363- fence(Ordering::SeqCst);364364-365365- let gsp_mem = self.0.start_ptr_mut();366366-367367- // SAFETY:368368- // - The 'CoherentAllocation' contains at least one object.369369- // - By the invariants of `CoherentAllocation` the pointer is valid.370370- unsafe { (*gsp_mem).cpuq.rx.set_read_ptr(rptr) };353353+ super::fw::gsp_mem::advance_cpu_read_ptr(&self.0, elem_count)371354 }372355373356 // Returns the index of the memory page the CPU can write the next command to.···361374 //362375 // - The returned value is between `0` and `MSGQ_NUM_PAGES`.363376 fn cpu_write_ptr(&self) -> u32 {364364- let gsp_mem = self.0.start_ptr();365365-366366- // SAFETY:367367- // - The 'CoherentAllocation' contains at least one object.368368- // - By the invariants of `CoherentAllocation` the pointer is valid.369369- (unsafe { (*gsp_mem).cpuq.tx.write_ptr() } % MSGQ_NUM_PAGES)377377+ 
super::fw::gsp_mem::cpu_write_ptr(&self.0)370378 }371379372380 // Informs the GSP that it can process `elem_count` new pages from the command queue.373381 fn advance_cpu_write_ptr(&mut self, elem_count: u32) {374374- let wptr = self.cpu_write_ptr().wrapping_add(elem_count) & MSGQ_NUM_PAGES;375375- let gsp_mem = self.0.start_ptr_mut();376376-377377- // SAFETY:378378- // - The 'CoherentAllocation' contains at least one object.379379- // - By the invariants of `CoherentAllocation` the pointer is valid.380380- unsafe { (*gsp_mem).cpuq.tx.set_write_ptr(wptr) };381381-382382- // Ensure all command data is visible before triggering the GSP read.383383- fence(Ordering::SeqCst);382382+ super::fw::gsp_mem::advance_cpu_write_ptr(&self.0, elem_count)384383 }385384}386385
+69-32
drivers/gpu/nova-core/gsp/fw.rs
···4040 },4141};42424343+// TODO: Replace with `IoView` projections once available; the `unwrap()` calls go away once we4444+// switch to the new `dma::Coherent` API.4545+pub(super) mod gsp_mem {4646+ use core::sync::atomic::{4747+ fence,4848+ Ordering, //4949+ };5050+5151+ use kernel::{5252+ dma::CoherentAllocation,5353+ dma_read,5454+ dma_write,5555+ prelude::*, //5656+ };5757+5858+ use crate::gsp::cmdq::{5959+ GspMem,6060+ MSGQ_NUM_PAGES, //6161+ };6262+6363+ pub(in crate::gsp) fn gsp_write_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {6464+ // PANIC: A `dma::CoherentAllocation` always contains at least one element.6565+ || -> Result<u32> { Ok(dma_read!(qs, [0]?.gspq.tx.0.writePtr) % MSGQ_NUM_PAGES) }().unwrap()6666+ }6767+6868+ pub(in crate::gsp) fn gsp_read_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {6969+ // PANIC: A `dma::CoherentAllocation` always contains at least one element.7070+ || -> Result<u32> { Ok(dma_read!(qs, [0]?.gspq.rx.0.readPtr) % MSGQ_NUM_PAGES) }().unwrap()7171+ }7272+7373+ pub(in crate::gsp) fn cpu_read_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {7474+ // PANIC: A `dma::CoherentAllocation` always contains at least one element.7575+ || -> Result<u32> { Ok(dma_read!(qs, [0]?.cpuq.rx.0.readPtr) % MSGQ_NUM_PAGES) }().unwrap()7676+ }7777+7878+ pub(in crate::gsp) fn advance_cpu_read_ptr(qs: &CoherentAllocation<GspMem>, count: u32) {7979+ let rptr = cpu_read_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;8080+8181+ // Ensure read pointer is properly ordered.8282+ fence(Ordering::SeqCst);8383+8484+ // PANIC: A `dma::CoherentAllocation` always contains at least one element.8585+ || -> Result {8686+ dma_write!(qs, [0]?.cpuq.rx.0.readPtr, rptr);8787+ Ok(())8888+ }()8989+ .unwrap()9090+ }9191+9292+ pub(in crate::gsp) fn cpu_write_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {9393+ // PANIC: A `dma::CoherentAllocation` always contains at least one element.9494+ || -> Result<u32> { Ok(dma_read!(qs, [0]?.cpuq.tx.0.writePtr) % MSGQ_NUM_PAGES) 
}().unwrap()9595+ }9696+9797+ pub(in crate::gsp) fn advance_cpu_write_ptr(qs: &CoherentAllocation<GspMem>, count: u32) {9898+ let wptr = cpu_write_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;9999+100100+ // PANIC: A `dma::CoherentAllocation` always contains at least one element.101101+ || -> Result {102102+ dma_write!(qs, [0]?.cpuq.tx.0.writePtr, wptr);103103+ Ok(())104104+ }()105105+ .unwrap();106106+107107+ // Ensure all command data is visible before triggering the GSP read.108108+ fence(Ordering::SeqCst);109109+ }110110+}111111+43112/// Empty type to group methods related to heap parameters for running the GSP firmware.44113enum GspFwHeapParams {}45114···777708 entryOff: num::usize_into_u32::<GSP_PAGE_SIZE>(),778709 })779710 }780780-781781- /// Returns the value of the write pointer for this queue.782782- pub(crate) fn write_ptr(&self) -> u32 {783783- let ptr = core::ptr::from_ref(&self.0.writePtr);784784-785785- // SAFETY: `ptr` is a valid pointer to a `u32`.786786- unsafe { ptr.read_volatile() }787787- }788788-789789- /// Sets the value of the write pointer for this queue.790790- pub(crate) fn set_write_ptr(&mut self, val: u32) {791791- let ptr = core::ptr::from_mut(&mut self.0.writePtr);792792-793793- // SAFETY: `ptr` is a valid pointer to a `u32`.794794- unsafe { ptr.write_volatile(val) }795795- }796711}797712798713// SAFETY: Padding is explicit and does not contain uninitialized data.···791738 /// Creates a new RX queue header.792739 pub(crate) fn new() -> Self {793740 Self(Default::default())794794- }795795-796796- /// Returns the value of the read pointer for this queue.797797- pub(crate) fn read_ptr(&self) -> u32 {798798- let ptr = core::ptr::from_ref(&self.0.readPtr);799799-800800- // SAFETY: `ptr` is a valid pointer to a `u32`.801801- unsafe { ptr.read_volatile() }802802- }803803-804804- /// Sets the value of the read pointer for this queue.805805- pub(crate) fn set_read_ptr(&mut self, val: u32) {806806- let ptr = core::ptr::from_mut(&mut 
self.0.readPtr);807807-808808- // SAFETY: `ptr` is a valid pointer to a `u32`.809809- unsafe { ptr.write_volatile(val) }810741 }811742}812743
···14521452 hid_warn(pidff->hid, "unknown ramp effect layout\n");1453145314541454 if (PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1)) {14551455- if (test_and_clear_bit(FF_SPRING, dev->ffbit) ||14561456- test_and_clear_bit(FF_DAMPER, dev->ffbit) ||14571457- test_and_clear_bit(FF_FRICTION, dev->ffbit) ||14581458- test_and_clear_bit(FF_INERTIA, dev->ffbit))14551455+ bool test = false;14561456+14571457+ test |= test_and_clear_bit(FF_SPRING, dev->ffbit);14581458+ test |= test_and_clear_bit(FF_DAMPER, dev->ffbit);14591459+ test |= test_and_clear_bit(FF_FRICTION, dev->ffbit);14601460+ test |= test_and_clear_bit(FF_INERTIA, dev->ffbit);14611461+ if (test)14591462 hid_warn(pidff->hid, "unknown condition effect layout\n");14601463 }14611464
+2-14
drivers/hwmon/Kconfig
···1493149314941494config SENSORS_LM7514951495 tristate "National Semiconductor LM75 and compatibles"14961496- depends on I2C14971497- depends on I3C || !I3C14961496+ depends on I3C_OR_I2C14981497 select REGMAP_I2C14991498 select REGMAP_I3C if I3C15001499 help···1925192619261927 This driver can also be built as a module. If so, the module19271928 will be called raspberrypi-hwmon.19281928-19291929-config SENSORS_SA67MCU19301930- tristate "Kontron sa67mcu hardware monitoring driver"19311931- depends on MFD_SL28CPLD || COMPILE_TEST19321932- help19331933- If you say yes here you get support for the voltage and temperature19341934- monitor of the sa67 board management controller.19351935-19361936- This driver can also be built as a module. If so, the module19371937- will be called sa67mcu-hwmon.1938192919391930config SENSORS_SL28CPLD19401931 tristate "Kontron sl28cpld hardware monitoring driver"···2381239223822393config SENSORS_TMP10823832394 tristate "Texas Instruments TMP108"23842384- depends on I2C23852385- depends on I3C || !I3C23952395+ depends on I3C_OR_I2C23862396 select REGMAP_I2C23872397 select REGMAP_I3C if I3C23882398 help
···310310311311 /*312312 * If set to true the host controller registers are reserved for313313- * ACPI AML use.313313+ * ACPI AML use. Needs extra protection by acpi_lock.314314 */315315 bool acpi_reserved;316316+ struct mutex acpi_lock;316317};317318318319#define FEATURE_SMBUS_PEC BIT(0)···895894 int hwpec, ret;896895 struct i801_priv *priv = i2c_get_adapdata(adap);897896898898- if (priv->acpi_reserved)897897+ mutex_lock(&priv->acpi_lock);898898+ if (priv->acpi_reserved) {899899+ mutex_unlock(&priv->acpi_lock);899900 return -EBUSY;901901+ }900902901903 pm_runtime_get_sync(&priv->pci_dev->dev);902904···939935 iowrite8(SMBHSTSTS_INUSE_STS | STATUS_FLAGS, SMBHSTSTS(priv));940936941937 pm_runtime_put_autosuspend(&priv->pci_dev->dev);938938+ mutex_unlock(&priv->acpi_lock);942939 return ret;943940}944941···14701465 * further access from the driver itself. This device is now owned14711466 * by the system firmware.14721467 */14731473- i2c_lock_bus(&priv->adapter, I2C_LOCK_SEGMENT);14681468+ mutex_lock(&priv->acpi_lock);1474146914751470 if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) {14761471 priv->acpi_reserved = true;···14901485 else14911486 status = acpi_os_write_port(address, (u32)*value, bits);1492148714931493- i2c_unlock_bus(&priv->adapter, I2C_LOCK_SEGMENT);14881488+ mutex_unlock(&priv->acpi_lock);1494148914951490 return status;14961491}···15501545 priv->adapter.dev.parent = &dev->dev;15511546 acpi_use_parent_companion(&priv->adapter.dev);15521547 priv->adapter.retries = 3;15481548+ mutex_init(&priv->acpi_lock);1553154915541550 priv->pci_dev = dev;15551551 priv->features = id->driver_data;
+12
drivers/i3c/Kconfig
···2222if I3C2323source "drivers/i3c/master/Kconfig"2424endif # I3C2525+2626+config I3C_OR_I2C2727+ tristate2828+ default m if I3C=m2929+ default I2C3030+ help3131+ Device drivers using module_i3c_i2c_driver() can use either3232+ i2c or i3c hosts, but cannot be built-in for the kernel when3333+ CONFIG_I3C=m.3434+3535+ Add 'depends on I3C_OR_I2C' in Kconfig for those drivers to3636+ get the correct dependencies.
···303303 if (msleep_interruptible(1000))304304 return -EINTR;305305306306- ret = sps30_serial_command(state, SPS30_SERIAL_READ_MEAS, NULL, 0, meas, num * sizeof(num));306306+ ret = sps30_serial_command(state, SPS30_SERIAL_READ_MEAS, NULL, 0, meas, num * sizeof(*meas));307307 if (ret < 0)308308 return ret;309309 /* if measurements aren't ready sensor returns empty frame */
+1-1
drivers/iio/dac/ds4424.c
···140140141141 switch (mask) {142142 case IIO_CHAN_INFO_RAW:143143- if (val < S8_MIN || val > S8_MAX)143143+ if (val <= S8_MIN || val > S8_MAX)144144 return -EINVAL;145145146146 if (val > 0) {
···322322 }323323 case IIO_CHAN_INFO_RAW:324324 /* Resume device */325325- pm_runtime_get_sync(mpu3050->dev);325325+ ret = pm_runtime_resume_and_get(mpu3050->dev);326326+ if (ret)327327+ return ret;326328 mutex_lock(&mpu3050->lock);327329328330 ret = mpu3050_set_8khz_samplerate(mpu3050);···649647static int mpu3050_buffer_preenable(struct iio_dev *indio_dev)650648{651649 struct mpu3050 *mpu3050 = iio_priv(indio_dev);650650+ int ret;652651653653- pm_runtime_get_sync(mpu3050->dev);652652+ ret = pm_runtime_resume_and_get(mpu3050->dev);653653+ if (ret)654654+ return ret;654655655656 /* Unless we have OUR trigger active, run at full speed */656656- if (!mpu3050->hw_irq_trigger)657657- return mpu3050_set_8khz_samplerate(mpu3050);657657+ if (!mpu3050->hw_irq_trigger) {658658+ ret = mpu3050_set_8khz_samplerate(mpu3050);659659+ if (ret)660660+ pm_runtime_put_autosuspend(mpu3050->dev);661661+ }658662659659- return 0;663663+ return ret;660664}661665662666static int mpu3050_buffer_postdisable(struct iio_dev *indio_dev)
+1-2
drivers/iio/gyro/mpu3050-i2c.c
···1919 struct mpu3050 *mpu3050 = i2c_mux_priv(mux);20202121 /* Just power up the device, that is all that is needed */2222- pm_runtime_get_sync(mpu3050->dev);2323- return 0;2222+ return pm_runtime_resume_and_get(mpu3050->dev);2423}25242625static int mpu3050_i2c_bypass_deselect(struct i2c_mux_core *mux, u32 chan_id)
+1-1
drivers/iio/imu/adis.c
···526526527527 adis->spi = spi;528528 adis->data = data;529529- if (!adis->ops->write && !adis->ops->read && !adis->ops->reset)529529+ if (!adis->ops)530530 adis->ops = &adis_default_ops;531531 else if (!adis->ops->write || !adis->ops->read || !adis->ops->reset)532532 return -EINVAL;
···637637 break;638638 }639639640640- if (!open_drain)641641- val |= INV_ICM45600_INT1_CONFIG2_PUSH_PULL;640640+ if (open_drain)641641+ val |= INV_ICM45600_INT1_CONFIG2_OPEN_DRAIN;642642643643 ret = regmap_write(st->map, INV_ICM45600_REG_INT1_CONFIG2, val);644644 if (ret)···744744 */745745 fsleep(5 * USEC_PER_MSEC);746746747747+ /* set pm_runtime active early so vddio can be disabled at resource cleanup */748748+ ret = pm_runtime_set_active(dev);749749+ if (ret)750750+ return ret;751751+747752 ret = inv_icm45600_enable_regulator_vddio(st);748753 if (ret)749754 return ret;···781776 if (ret)782777 return ret;783778784784- ret = devm_pm_runtime_set_active_enabled(dev);779779+ ret = devm_pm_runtime_enable(dev);785780 if (ret)786781 return ret;787782
+8
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
···19431943 irq_type);19441944 return -EINVAL;19451945 }19461946+19471947+ /*19481948+ * Acking interrupts by reading the status register does not work19491949+ * reliably but seems to work when this bit is set.19501950+ */19511951+ if (st->chip_type == INV_MPU9150)19521952+ st->irq_mask |= INV_MPU6050_INT_RD_CLEAR;19531953+19461954 device_set_wakeup_capable(dev, true);1947195519481956 st->vdd_supply = devm_regulator_get(dev, "vdd");
+2
drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
···390390/* enable level triggering */391391#define INV_MPU6050_LATCH_INT_EN 0x20392392#define INV_MPU6050_BIT_BYPASS_EN 0x2393393+/* allow acking interrupts by any register read */394394+#define INV_MPU6050_INT_RD_CLEAR 0x10393395394396/* Allowed timestamp period jitter in percent */395397#define INV_MPU6050_TS_PERIOD_JITTER 4
+4-1
drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
···248248 switch (st->chip_type) {249249 case INV_MPU6000:250250 case INV_MPU6050:251251- case INV_MPU9150:252251 /*253252 * WoM is not supported and interrupt status read seems to be broken for254253 * some chips. Since data ready is the only interrupt, bypass interrupt···256257 wom_bits = 0;257258 int_status = INV_MPU6050_BIT_RAW_DATA_RDY_INT;258259 goto data_ready_interrupt;260260+ case INV_MPU9150:261261+ /* IRQ needs to be acked */262262+ wom_bits = 0;263263+ break;259264 case INV_MPU6500:260265 case INV_MPU6515:261266 case INV_MPU6880:
+4-2
drivers/iio/industrialio-buffer.c
···228228 written = 0;229229 add_wait_queue(&rb->pollq, &wait);230230 do {231231- if (!indio_dev->info)232232- return -ENODEV;231231+ if (!indio_dev->info) {232232+ ret = -ENODEV;233233+ break;234234+ }233235234236 if (!iio_buffer_space_available(rb)) {235237 if (signal_pending(current)) {
+1-1
drivers/iio/light/bh1780.c
···109109 case IIO_LIGHT:110110 pm_runtime_get_sync(&bh1780->client->dev);111111 value = bh1780_read_word(bh1780, BH1780_REG_DLOW);112112+ pm_runtime_put_autosuspend(&bh1780->client->dev);112113 if (value < 0)113114 return value;114114- pm_runtime_put_autosuspend(&bh1780->client->dev);115115 *val = value;116116117117 return IIO_VAL_INT;
+1-2
drivers/iio/magnetometer/Kconfig
···143143 tristate "MEMSIC MMC5633 3-axis magnetic sensor"144144 select REGMAP_I2C145145 select REGMAP_I3C if I3C146146- depends on I2C147147- depends on I3C || !I3C146146+ depends on I3C_OR_I2C148147 help149148 Say yes here to build support for the MEMSIC MMC5633 3-axis150149 magnetic sensor.
+1-1
drivers/iio/magnetometer/tlv493d.c
···171171 switch (ch) {172172 case TLV493D_AXIS_X:173173 val = FIELD_GET(TLV493D_BX_MAG_X_AXIS_MSB, b[TLV493D_RD_REG_BX]) << 4 |174174- FIELD_GET(TLV493D_BX2_MAG_X_AXIS_LSB, b[TLV493D_RD_REG_BX2]) >> 4;174174+ FIELD_GET(TLV493D_BX2_MAG_X_AXIS_LSB, b[TLV493D_RD_REG_BX2]);175175 break;176176 case TLV493D_AXIS_Y:177177 val = FIELD_GET(TLV493D_BY_MAG_Y_AXIS_MSB, b[TLV493D_RD_REG_BY]) << 4 |
···11# SPDX-License-Identifier: GPL-2.0-only22config AMD_SBRMI_I2C33 tristate "AMD side band RMI support"44- depends on I2C44+ depends on I3C_OR_I2C55 depends on ARM || ARM64 || COMPILE_TEST66 select REGMAP_I2C77- depends on I3C || !I3C87 select REGMAP_I3C if I3C98 help109 Side band RMI over I2C/I3C support for AMD out of band management.
+63-7
drivers/net/bonding/bond_main.c
···15091509 return features;15101510}1511151115121512+static int bond_header_create(struct sk_buff *skb, struct net_device *bond_dev,15131513+ unsigned short type, const void *daddr,15141514+ const void *saddr, unsigned int len)15151515+{15161516+ struct bonding *bond = netdev_priv(bond_dev);15171517+ const struct header_ops *slave_ops;15181518+ struct slave *slave;15191519+ int ret = 0;15201520+15211521+ rcu_read_lock();15221522+ slave = rcu_dereference(bond->curr_active_slave);15231523+ if (slave) {15241524+ slave_ops = READ_ONCE(slave->dev->header_ops);15251525+ if (slave_ops && slave_ops->create)15261526+ ret = slave_ops->create(skb, slave->dev,15271527+ type, daddr, saddr, len);15281528+ }15291529+ rcu_read_unlock();15301530+ return ret;15311531+}15321532+15331533+static int bond_header_parse(const struct sk_buff *skb, unsigned char *haddr)15341534+{15351535+ struct bonding *bond = netdev_priv(skb->dev);15361536+ const struct header_ops *slave_ops;15371537+ struct slave *slave;15381538+ int ret = 0;15391539+15401540+ rcu_read_lock();15411541+ slave = rcu_dereference(bond->curr_active_slave);15421542+ if (slave) {15431543+ slave_ops = READ_ONCE(slave->dev->header_ops);15441544+ if (slave_ops && slave_ops->parse)15451545+ ret = slave_ops->parse(skb, haddr);15461546+ }15471547+ rcu_read_unlock();15481548+ return ret;15491549+}15501550+15511551+static const struct header_ops bond_header_ops = {15521552+ .create = bond_header_create,15531553+ .parse = bond_header_parse,15541554+};15551555+15121556static void bond_setup_by_slave(struct net_device *bond_dev,15131557 struct net_device *slave_dev)15141558{···1560151615611517 dev_close(bond_dev);1562151815631563- bond_dev->header_ops = slave_dev->header_ops;15191519+ bond_dev->header_ops = slave_dev->header_ops ?15201520+ &bond_header_ops : NULL;1564152115651522 bond_dev->type = slave_dev->type;15661523 bond_dev->hard_header_len = slave_dev->hard_header_len;···2846280128472802 continue;2848280328042804+ case 
BOND_LINK_FAIL:28052805+ case BOND_LINK_BACK:28062806+ slave_dbg(bond->dev, slave->dev, "link_new_state %d on slave\n",28072807+ slave->link_new_state);28082808+ continue;28092809+28492810 default:28502850- slave_err(bond->dev, slave->dev, "invalid new link %d on slave\n",28112811+ slave_err(bond->dev, slave->dev, "invalid link_new_state %d on slave\n",28512812 slave->link_new_state);28522813 bond_propose_link_state(slave, BOND_LINK_NOCHANGE);28532814···34283377 } else if (is_arp) {34293378 return bond_arp_rcv(skb, bond, slave);34303379#if IS_ENABLED(CONFIG_IPV6)34313431- } else if (is_ipv6) {33803380+ } else if (is_ipv6 && likely(ipv6_mod_enabled())) {34323381 return bond_na_rcv(skb, bond, slave);34333382#endif34343383 } else {···51205069{51215070 struct bond_up_slave *usable, *all;5122507151235123- usable = rtnl_dereference(bond->usable_slaves);51245124- rcu_assign_pointer(bond->usable_slaves, usable_slaves);51255125- kfree_rcu(usable, rcu);51265126-51275072 all = rtnl_dereference(bond->all_slaves);51285073 rcu_assign_pointer(bond->all_slaves, all_slaves);51295074 kfree_rcu(all, rcu);50755075+50765076+ if (BOND_MODE(bond) == BOND_MODE_BROADCAST) {50775077+ kfree_rcu(usable_slaves, rcu);50785078+ return;50795079+ }50805080+50815081+ usable = rtnl_dereference(bond->usable_slaves);50825082+ rcu_assign_pointer(bond->usable_slaves, usable_slaves);50835083+ kfree_rcu(usable, rcu);51305084}5131508551325086static void bond_reset_slave_arr(struct bonding *bond)
···88#include <linux/units.h>99#include <linux/can/dev.h>10101111-#define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */1111+#define CAN_CALC_MAX_ERROR 500 /* max error 5% */12121313/* CiA recommended sample points for Non Return to Zero encoding. */1414static int can_calc_sample_point_nrz(const struct can_bittiming *bt)
···12711271 if (ret)12721272 goto err_napi;1273127312741274+ /* Reset the phy settings */12751275+ ret = xgbe_phy_reset(pdata);12761276+ if (ret)12771277+ goto err_irqs;12781278+12791279+ /* Start the phy */12741280 ret = phy_if->phy_start(pdata);12751281 if (ret)12761282 goto err_irqs;1277128312781284 hw_if->enable_tx(pdata);12791285 hw_if->enable_rx(pdata);12861286+ /* Synchronize flag with hardware state after enabling TX/RX.12871287+ * This prevents stale state after device restart cycles.12881288+ */12891289+ pdata->data_path_stopped = false;1280129012811291 udp_tunnel_nic_reset_ntf(netdev);12821282-12831283- /* Reset the phy settings */12841284- ret = xgbe_phy_reset(pdata);12851285- if (ret)12861286- goto err_txrx;1287129212881293 netif_tx_start_all_queues(netdev);12891294···12981293 clear_bit(XGBE_STOPPED, &pdata->dev_state);1299129413001295 return 0;13011301-13021302-err_txrx:13031303- hw_if->disable_rx(pdata);13041304- hw_if->disable_tx(pdata);1305129613061297err_irqs:13071298 xgbe_free_irqs(pdata);
+75-7
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
···19421942static void xgbe_rx_adaptation(struct xgbe_prv_data *pdata)19431943{19441944 struct xgbe_phy_data *phy_data = pdata->phy_data;19451945- unsigned int reg;19451945+ int reg;1946194619471947 /* step 2: force PCS to send RX_ADAPT Req to PHY */19481948 XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4,···1964196419651965 /* Step 4: Check for Block lock */1966196619671967- /* Link status is latched low, so read once to clear19681968- * and then read again to get current state19671967+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19681968+ if (reg < 0)19691969+ goto set_mode;19701970+19711971+ /* Link status is latched low so that momentary link drops19721972+ * can be detected. If link was already down read again19731973+ * to get the latest state.19691974 */19701970- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19711971- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19751975+ if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) {19761976+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19771977+ if (reg < 0)19781978+ goto set_mode;19791979+ }19801980+19721981 if (reg & MDIO_STAT1_LSTATUS) {19731982 /* If the block lock is found, update the helpers19741983 * and declare the link up···2015200620162007 /* perform rx adaptation */20172008 xgbe_rx_adaptation(pdata);20092009+}20102010+20112011+/*20122012+ * xgbe_phy_stop_data_path - Stop TX/RX to prevent packet corruption20132013+ * @pdata: driver private data20142014+ *20152015+ * This function stops the data path (TX and RX) to prevent packet20162016+ * corruption during critical PHY operations like RX adaptation.20172017+ * Must be called before initiating RX adaptation when link goes down.20182018+ */20192019+static void xgbe_phy_stop_data_path(struct xgbe_prv_data *pdata)20202020+{20212021+ if (pdata->data_path_stopped)20222022+ return;20232023+20242024+ /* Stop TX/RX to prevent packet corruption during RX adaptation */20252025+ pdata->hw_if.disable_tx(pdata);20262026+ 
pdata->hw_if.disable_rx(pdata);20272027+ pdata->data_path_stopped = true;20282028+20292029+ netif_dbg(pdata, link, pdata->netdev,20302030+ "stopping data path for RX adaptation\n");20312031+}20322032+20332033+/*20342034+ * xgbe_phy_start_data_path - Re-enable TX/RX after RX adaptation20352035+ * @pdata: driver private data20362036+ *20372037+ * This function re-enables the data path (TX and RX) after RX adaptation20382038+ * has completed successfully. Only called when link is confirmed up.20392039+ */20402040+static void xgbe_phy_start_data_path(struct xgbe_prv_data *pdata)20412041+{20422042+ if (!pdata->data_path_stopped)20432043+ return;20442044+20452045+ pdata->hw_if.enable_rx(pdata);20462046+ pdata->hw_if.enable_tx(pdata);20472047+ pdata->data_path_stopped = false;20482048+20492049+ netif_dbg(pdata, link, pdata->netdev,20502050+ "restarting data path after RX adaptation\n");20182051}2019205220202053static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)···28522801 if (pdata->en_rx_adap) {28532802 /* if the link is available and adaptation is done,28542803 * declare link up28042804+ *28052805+ * Note: When link is up and adaptation is done, we can28062806+ * safely re-enable the data path if it was stopped28072807+ * for adaptation.28552808 */28562856- if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)28092809+ if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) {28102810+ xgbe_phy_start_data_path(pdata);28572811 return 1;28122812+ }28582813 /* If either link is not available or adaptation is not done,28592814 * retrigger the adaptation logic. (if the mode is not set,28602815 * then issue mailbox command first)28612816 */28172817+28182818+ /* CRITICAL: Stop data path BEFORE triggering RX adaptation28192819+ * to prevent CRC errors from packets corrupted during28202820+ * the adaptation process. 
This is especially important28212821+ * when AN is OFF in 10G KR mode.28222822+ */28232823+ xgbe_phy_stop_data_path(pdata);28242824+28622825 if (pdata->mode_set) {28632826 xgbe_phy_rx_adaptation(pdata);28642827 } else {···28802815 xgbe_phy_set_mode(pdata, phy_data->cur_mode);28812816 }2882281728832883- if (pdata->rx_adapt_done)28182818+ if (pdata->rx_adapt_done) {28192819+ /* Adaptation complete, safe to re-enable data path */28202820+ xgbe_phy_start_data_path(pdata);28842821 return 1;28222822+ }28852823 } else if (reg & MDIO_STAT1_LSTATUS)28862824 return 1;28872825
+4
drivers/net/ethernet/amd/xgbe/xgbe.h
···12431243 bool en_rx_adap;12441244 int rx_adapt_retries;12451245 bool rx_adapt_done;12461246+ /* Flag to track if data path (TX/RX) was stopped for RX adaptation.12471247+ * This prevents packet corruption during the adaptation window.12481248+ */12491249+ bool data_path_stopped;12461250 bool mode_set;12471251 bool sph;12481252};
+11
drivers/net/ethernet/arc/emac_main.c
···934934 /* Set poll rate so that it polls every 1 ms */935935 arc_reg_set(priv, R_POLLRATE, clock_frequency / 1000000);936936937937+ /*938938+ * Put the device into a known quiescent state before requesting939939+ * the IRQ. Clear only EMAC interrupt status bits here; leave the940940+ * MDIO completion bit alone and avoid writing TXPL_MASK, which is941941+ * used to force TX polling rather than acknowledge interrupts.942942+ */943943+ arc_reg_set(priv, R_ENABLE, 0);944944+ arc_reg_set(priv, R_STATUS, RXINT_MASK | TXINT_MASK | ERR_MASK |945945+ TXCH_MASK | MSER_MASK | RXCR_MASK |946946+ RXFR_MASK | RXFL_MASK);947947+937948 ndev->irq = irq;938949 dev_info(dev, "IRQ is %d\n", ndev->irq);939950
+2-2
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
···979979980980 if (bnxt_get_nr_rss_ctxs(bp, req_rx_rings) !=981981 bnxt_get_nr_rss_ctxs(bp, bp->rx_nr_rings) &&982982- netif_is_rxfh_configured(dev)) {983983- netdev_warn(dev, "RSS table size change required, RSS table entries must be default to proceed\n");982982+ (netif_is_rxfh_configured(dev) || bp->num_rss_ctx)) {983983+ netdev_warn(dev, "RSS table size change required, RSS table entries must be default (with no additional RSS contexts present) to proceed\n");984984 return -EINVAL;985985 }986986
+12-19
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···13421342 }13431343}1344134413451345-void bcmgenet_eee_enable_set(struct net_device *dev, bool enable,13461346- bool tx_lpi_enabled)13451345+void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)13471346{13481347 struct bcmgenet_priv *priv = netdev_priv(dev);13491348 u32 off = priv->hw_params->tbuf_offset + TBUF_ENERGY_CTRL;···1362136313631364 /* Enable EEE and switch to a 27Mhz clock automatically */13641365 reg = bcmgenet_readl(priv->base + off);13651365- if (tx_lpi_enabled)13661366+ if (enable)13661367 reg |= TBUF_EEE_EN | TBUF_PM_EN;13671368 else13681369 reg &= ~(TBUF_EEE_EN | TBUF_PM_EN);···13811382 priv->clk_eee_enabled = false;13821383 }1383138413841384- priv->eee.eee_enabled = enable;13851385- priv->eee.tx_lpi_enabled = tx_lpi_enabled;13861385}1387138613881387static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_keee *e)13891388{13901389 struct bcmgenet_priv *priv = netdev_priv(dev);13911391- struct ethtool_keee *p = &priv->eee;13901390+ int ret;1392139113931392 if (GENET_IS_V1(priv))13941393 return -EOPNOTSUPP;···13941397 if (!dev->phydev)13951398 return -ENODEV;1396139913971397- e->tx_lpi_enabled = p->tx_lpi_enabled;14001400+ ret = phy_ethtool_get_eee(dev->phydev, e);14011401+ if (ret)14021402+ return ret;14031403+14041404+ /* tx_lpi_timer is maintained by the MAC hardware register; the14051405+ * PHY-level eee_cfg timer is not set for GENET.14061406+ */13981407 e->tx_lpi_timer = bcmgenet_umac_readl(priv, UMAC_EEE_LPI_TIMER);1399140814001400- return phy_ethtool_get_eee(dev->phydev, e);14091409+ return 0;14011410}1402141114031412static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_keee *e)14041413{14051414 struct bcmgenet_priv *priv = netdev_priv(dev);14061406- struct ethtool_keee *p = &priv->eee;14071407- bool active;1408141514091416 if (GENET_IS_V1(priv))14101417 return -EOPNOTSUPP;···14161415 if (!dev->phydev)14171416 return -ENODEV;1418141714191419- p->eee_enabled = e->eee_enabled;14201420-14211421- if 
(!p->eee_enabled) {14221422- bcmgenet_eee_enable_set(dev, false, false);14231423- } else {14241424- active = phy_init_eee(dev->phydev, false) >= 0;14251425- bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER);14261426- bcmgenet_eee_enable_set(dev, active, e->tx_lpi_enabled);14271427- }14181418+ bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER);1428141914291420 return phy_ethtool_set_eee(dev->phydev, e);14301421}
···333333334334 mdio_node = of_get_child_by_name(np, "mdio");335335 if (!mdio_node)336336- return 0;336336+ return -ENODEV;337337338338 phy_node = of_get_next_child(mdio_node, NULL);339339- if (!phy_node)339339+ if (!phy_node) {340340+ err = -ENODEV;340341 goto of_put_mdio_node;342342+ }341343342344 err = of_property_read_u32(phy_node, "reg", &addr);343345 if (err)···425423426424 addr = netc_get_phy_addr(gchild);427425 if (addr < 0) {426426+ if (addr == -ENODEV)427427+ continue;428428+428429 dev_err(dev, "Failed to get PHY address\n");429430 return addr;430431 }···437432 "Find same PHY address in EMDIO and ENETC node\n");438433 return -EINVAL;439434 }440440-441441- /* The default value of LaBCR[MDIO_PHYAD_PRTAD ] is442442- * 0, so no need to set the register.443443- */444444- if (!addr)445445- continue;446435447436 switch (bus_devfn) {448437 case IMX95_ENETC0_BUS_DEVFN:···577578578579 addr = netc_get_phy_addr(np);579580 if (addr < 0) {581581+ if (addr == -ENODEV)582582+ return 0;583583+580584 dev_err(dev, "Failed to get PHY address\n");581585 return addr;582586 }583583-584584- /* The default value of LaBCR[MDIO_PHYAD_PRTAD] is 0,585585- * so no need to set the register.586586- */587587- if (!addr)588588- return 0;589587590588 if (phy_mask & BIT(addr)) {591589 dev_err(dev,
···492492{493493 struct iavf_adapter *adapter = netdev_priv(netdev);494494 u32 new_rx_count, new_tx_count;495495- int ret = 0;496495497496 if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))498497 return -EINVAL;···536537 }537538538539 if (netif_running(netdev)) {539539- iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);540540- ret = iavf_wait_for_reset(adapter);541541- if (ret)542542- netdev_warn(netdev, "Changing ring parameters timeout or interrupted waiting for reset");540540+ adapter->flags |= IAVF_FLAG_RESET_NEEDED;541541+ iavf_reset_step(adapter);543542 }544543545545- return ret;544544+ return 0;546545}547546548547/**···17201723{17211724 struct iavf_adapter *adapter = netdev_priv(netdev);17221725 u32 num_req = ch->combined_count;17231723- int ret = 0;1724172617251727 if ((adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) &&17261728 adapter->num_tc) {···1741174517421746 adapter->num_req_queues = num_req;17431747 adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED;17441744- iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);17481748+ adapter->flags |= IAVF_FLAG_RESET_NEEDED;17491749+ iavf_reset_step(adapter);1745175017461746- ret = iavf_wait_for_reset(adapter);17471747- if (ret)17481748- netdev_warn(netdev, "Changing channel count timeout or interrupted waiting for reset");17491749-17501750- return ret;17511751+ return 0;17511752}1752175317531754/**
+28-53
drivers/net/ethernet/intel/iavf/iavf_main.c
···186186}187187188188/**189189- * iavf_wait_for_reset - Wait for reset to finish.190190- * @adapter: board private structure191191- *192192- * Returns 0 if reset finished successfully, negative on timeout or interrupt.193193- */194194-int iavf_wait_for_reset(struct iavf_adapter *adapter)195195-{196196- int ret = wait_event_interruptible_timeout(adapter->reset_waitqueue,197197- !iavf_is_reset_in_progress(adapter),198198- msecs_to_jiffies(5000));199199-200200- /* If ret < 0 then it means wait was interrupted.201201- * If ret == 0 then it means we got a timeout while waiting202202- * for reset to finish.203203- * If ret > 0 it means reset has finished.204204- */205205- if (ret > 0)206206- return 0;207207- else if (ret < 0)208208- return -EINTR;209209- else210210- return -EBUSY;211211-}212212-213213-/**214189 * iavf_allocate_dma_mem_d - OS specific memory alloc for shared code215190 * @hw: pointer to the HW structure216191 * @mem: ptr to mem struct to fill out···3011303630123037 adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;3013303830393039+ iavf_ptp_release(adapter);30403040+30143041 /* We don't use netif_running() because it may be true prior to30153042 * ndo_open() returning, so we can't assume it means all our open30163043 * tasks have finished, since we're not holding the rtnl_lock here.···30883111}3089311230903113/**30913091- * iavf_reset_task - Call-back task to handle hardware reset30923092- * @work: pointer to work_struct31143114+ * iavf_reset_step - Perform the VF reset sequence31153115+ * @adapter: board private structure30933116 *30943094- * During reset we need to shut down and reinitialize the admin queue30953095- * before we can use it to communicate with the PF again. We also clear30963096- * and reinit the rings because that context is lost as well.30973097- **/30983098-static void iavf_reset_task(struct work_struct *work)31173117+ * Requests a reset from PF, polls for completion, and reconfigures31183118+ * the driver. 
Caller must hold the netdev instance lock.31193119+ *31203120+ * This can sleep for several seconds while polling HW registers.31213121+ */31223122+void iavf_reset_step(struct iavf_adapter *adapter)30993123{31003100- struct iavf_adapter *adapter = container_of(work,31013101- struct iavf_adapter,31023102- reset_task);31033124 struct virtchnl_vf_resource *vfres = adapter->vf_res;31043125 struct net_device *netdev = adapter->netdev;31053126 struct iavf_hw *hw = &adapter->hw;···31083133 int i = 0, err;31093134 bool running;3110313531113111- netdev_lock(netdev);31363136+ netdev_assert_locked(netdev);3112313731133138 iavf_misc_irq_disable(adapter);31143139 if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {···31533178 dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",31543179 reg_val);31553180 iavf_disable_vf(adapter);31563156- netdev_unlock(netdev);31573181 return; /* Do not attempt to reinit. It's dead, Jim. */31583182 }31593183···31643190 iavf_startup(adapter);31653191 queue_delayed_work(adapter->wq, &adapter->watchdog_task,31663192 msecs_to_jiffies(30));31673167- netdev_unlock(netdev);31683193 return;31693194 }31703195···3183321031843211 iavf_change_state(adapter, __IAVF_RESETTING);31853212 adapter->flags &= ~IAVF_FLAG_RESET_PENDING;32133213+32143214+ iavf_ptp_release(adapter);3186321531873216 /* free the Tx/Rx rings and descriptors, might be better to just31883217 * re-use them sometime in the future···3306333133073332 adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;3308333333093309- wake_up(&adapter->reset_waitqueue);33103310- netdev_unlock(netdev);33113311-33123334 return;33133335reset_err:33143336 if (running) {···33143342 }33153343 iavf_disable_vf(adapter);3316334433173317- netdev_unlock(netdev);33183345 dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");33463346+}33473347+33483348+static void iavf_reset_task(struct work_struct *work)33493349+{33503350+ struct iavf_adapter *adapter = container_of(work,33513351+ struct 
iavf_adapter,33523352+ reset_task);33533353+ struct net_device *netdev = adapter->netdev;33543354+33553355+ netdev_lock(netdev);33563356+ iavf_reset_step(adapter);33573357+ netdev_unlock(netdev);33193358}3320335933213360/**···45944611static int iavf_change_mtu(struct net_device *netdev, int new_mtu)45954612{45964613 struct iavf_adapter *adapter = netdev_priv(netdev);45974597- int ret = 0;4598461445994615 netdev_dbg(netdev, "changing MTU from %d to %d\n",46004616 netdev->mtu, new_mtu);46014617 WRITE_ONCE(netdev->mtu, new_mtu);4602461846034619 if (netif_running(netdev)) {46044604- iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);46054605- ret = iavf_wait_for_reset(adapter);46064606- if (ret < 0)46074607- netdev_warn(netdev, "MTU change interrupted waiting for reset");46084608- else if (ret)46094609- netdev_warn(netdev, "MTU change timed out waiting for reset");46204620+ adapter->flags |= IAVF_FLAG_RESET_NEEDED;46214621+ iavf_reset_step(adapter);46104622 }4611462346124612- return ret;46244624+ return 0;46134625}4614462646154627/**···5408543054095431 /* Setup the wait queue for indicating transition to down status */54105432 init_waitqueue_head(&adapter->down_waitqueue);54115411-54125412- /* Setup the wait queue for indicating transition to running state */54135413- init_waitqueue_head(&adapter->reset_waitqueue);5414543354155434 /* Setup the wait queue for indicating virtchannel events */54165435 init_waitqueue_head(&adapter->vc_waitqueue);
-1
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
···27362736 case VIRTCHNL_OP_ENABLE_QUEUES:27372737 /* enable transmits */27382738 iavf_irq_enable(adapter, true);27392739- wake_up(&adapter->reset_waitqueue);27402739 adapter->flags &= ~IAVF_FLAG_QUEUES_DISABLED;27412740 break;27422741 case VIRTCHNL_OP_DISABLE_QUEUES:
···18291829 if ((dev->driver_info->flags & FLAG_NOARP) != 0)18301830 net->flags |= IFF_NOARP;1831183118321832- if (net->max_mtu > (dev->hard_mtu - net->hard_header_len))18321832+ if ((dev->driver_info->flags & FLAG_NOMAXMTU) == 0 &&18331833+ net->max_mtu > (dev->hard_mtu - net->hard_header_len))18331834 net->max_mtu = dev->hard_mtu - net->hard_header_len;1834183518351835- if (net->mtu > net->max_mtu)18361836- net->mtu = net->max_mtu;18361836+ if (net->mtu > (dev->hard_mtu - net->hard_header_len))18371837+ net->mtu = dev->hard_mtu - net->hard_header_len;1837183818381839 } else if (!info->in || !info->out)18391840 status = usbnet_get_endpoints(dev, udev);
+17-17
drivers/net/xen-netfront.c
···1646164616471647 /* avoid the race with XDP headroom adjustment */16481648 wait_event(module_wq,16491649- xenbus_read_driver_state(np->xbdev->otherend) ==16491649+ xenbus_read_driver_state(np->xbdev, np->xbdev->otherend) ==16501650 XenbusStateReconfigured);16511651 np->netfront_xdp_enabled = true;16521652···17641764 do {17651765 xenbus_switch_state(dev, XenbusStateInitialising);17661766 err = wait_event_timeout(module_wq,17671767- xenbus_read_driver_state(dev->otherend) !=17671767+ xenbus_read_driver_state(dev, dev->otherend) !=17681768 XenbusStateClosed &&17691769- xenbus_read_driver_state(dev->otherend) !=17691769+ xenbus_read_driver_state(dev, dev->otherend) !=17701770 XenbusStateUnknown, XENNET_TIMEOUT);17711771 } while (!err);17721772···26262626{26272627 int ret;2628262826292629- if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)26292629+ if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed)26302630 return;26312631 do {26322632 xenbus_switch_state(dev, XenbusStateClosing);26332633 ret = wait_event_timeout(module_wq,26342634- xenbus_read_driver_state(dev->otherend) ==26352635- XenbusStateClosing ||26362636- xenbus_read_driver_state(dev->otherend) ==26372637- XenbusStateClosed ||26382638- xenbus_read_driver_state(dev->otherend) ==26392639- XenbusStateUnknown,26402640- XENNET_TIMEOUT);26342634+ xenbus_read_driver_state(dev, dev->otherend) ==26352635+ XenbusStateClosing ||26362636+ xenbus_read_driver_state(dev, dev->otherend) ==26372637+ XenbusStateClosed ||26382638+ xenbus_read_driver_state(dev, dev->otherend) ==26392639+ XenbusStateUnknown,26402640+ XENNET_TIMEOUT);26412641 } while (!ret);2642264226432643- if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)26432643+ if (xenbus_read_driver_state(dev, dev->otherend) == XenbusStateClosed)26442644 return;2645264526462646 do {26472647 xenbus_switch_state(dev, XenbusStateClosed);26482648 ret = wait_event_timeout(module_wq,26492649- 
xenbus_read_driver_state(dev->otherend) ==26502650- XenbusStateClosed ||26512651- xenbus_read_driver_state(dev->otherend) ==26522652- XenbusStateUnknown,26532653- XENNET_TIMEOUT);26492649+ xenbus_read_driver_state(dev, dev->otherend) ==26502650+ XenbusStateClosed ||26512651+ xenbus_read_driver_state(dev, dev->otherend) ==26522652+ XenbusStateUnknown,26532653+ XENNET_TIMEOUT);26542654 } while (!ret);26552655}26562656
+13-18
drivers/nvme/host/core.c
···20462046 if (id->nabspf)20472047 boundary = (le16_to_cpu(id->nabspf) + 1) * bs;20482048 } else {20492049- /*20502050- * Use the controller wide atomic write unit. This sucks20512051- * because the limit is defined in terms of logical blocks while20522052- * namespaces can have different formats, and because there is20532053- * no clear language in the specification prohibiting different20542054- * values for different controllers in the subsystem.20552055- */20562056- atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;20492049+ if (ns->ctrl->awupf)20502050+ dev_info_once(ns->ctrl->device,20512051+ "AWUPF ignored, only NAWUPF accepted\n");20522052+ atomic_bs = bs;20572053 }2058205420592055 lim->atomic_write_hw_max = atomic_bs;···32183222 memcpy(subsys->model, id->mn, sizeof(subsys->model));32193223 subsys->vendor_id = le16_to_cpu(id->vid);32203224 subsys->cmic = id->cmic;32213221- subsys->awupf = le16_to_cpu(id->awupf);3222322532233226 /* Versions prior to 1.4 don't necessarily report a valid type */32243227 if (id->cntrltype == NVME_CTRL_DISC ||···36503655 dev_pm_qos_expose_latency_tolerance(ctrl->device);36513656 else if (!ctrl->apst_enabled && prev_apst_enabled)36523657 dev_pm_qos_hide_latency_tolerance(ctrl->device);36583658+ ctrl->awupf = le16_to_cpu(id->awupf);36533659out_free:36543660 kfree(id);36553661 return ret;···4181418541824186 nvme_mpath_add_disk(ns, info->anagrpid);41834187 nvme_fault_inject_init(&ns->fault_inject, ns->disk->disk_name);41844184-41854185- /*41864186- * Set ns->disk->device->driver_data to ns so we can access41874187- * ns->head->passthru_err_log_enabled in41884188- * nvme_io_passthru_err_log_enabled_[store | show]().41894189- */41904190- dev_set_drvdata(disk_to_dev(ns->disk), ns);4191418841924189 return;41934190···48344845int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,48354846 const struct blk_mq_ops *ops, unsigned int cmd_size)48364847{48374837- struct queue_limits lim = {};48384848 int 
ret;4839484948404850 memset(set, 0, sizeof(*set));···48534865 if (ret)48544866 return ret;4855486748564856- ctrl->admin_q = blk_mq_alloc_queue(set, &lim, NULL);48684868+ /*48694869+ * If a previous admin queue exists (e.g., from before a reset),48704870+ * put it now before allocating a new one to avoid orphaning it.48714871+ */48724872+ if (ctrl->admin_q)48734873+ blk_put_queue(ctrl->admin_q);48744874+48754875+ ctrl->admin_q = blk_mq_alloc_queue(set, NULL, NULL);48574876 if (IS_ERR(ctrl->admin_q)) {48584877 ret = PTR_ERR(ctrl->admin_q);48594878 goto out_free_tagset;
···13001300 mutex_lock(&head->subsys->lock);13011301 /*13021302 * We are called when all paths have been removed, and at that point13031303- * head->list is expected to be empty. However, nvme_remove_ns() and13031303+ * head->list is expected to be empty. However, nvme_ns_remove() and13041304 * nvme_init_ns_head() can run concurrently and so if head->delayed_13051305 * removal_secs is configured, it is possible that by the time we reach13061306 * this point, head->list may no longer be empty. Therefore, we recheck···13101310 if (!list_empty(&head->list))13111311 goto out;1312131213131313- if (head->delayed_removal_secs) {13141314- /*13151315- * Ensure that no one could remove this module while the head13161316- * remove work is pending.13171317- */13181318- if (!try_module_get(THIS_MODULE))13191319- goto out;13131313+ /*13141314+ * Ensure that no one could remove this module while the head13151315+ * remove work is pending.13161316+ */13171317+ if (head->delayed_removal_secs && try_module_get(THIS_MODULE)) {13201318 mod_delayed_work(nvme_wq, &head->remove_work,13211319 head->delayed_removal_secs * HZ);13221320 } else {
+56-1
drivers/nvme/host/nvme.h
···180180 NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22),181181};182182183183+static inline char *nvme_quirk_name(enum nvme_quirks q)184184+{185185+ switch (q) {186186+ case NVME_QUIRK_STRIPE_SIZE:187187+ return "stripe_size";188188+ case NVME_QUIRK_IDENTIFY_CNS:189189+ return "identify_cns";190190+ case NVME_QUIRK_DEALLOCATE_ZEROES:191191+ return "deallocate_zeroes";192192+ case NVME_QUIRK_DELAY_BEFORE_CHK_RDY:193193+ return "delay_before_chk_rdy";194194+ case NVME_QUIRK_NO_APST:195195+ return "no_apst";196196+ case NVME_QUIRK_NO_DEEPEST_PS:197197+ return "no_deepest_ps";198198+ case NVME_QUIRK_QDEPTH_ONE:199199+ return "qdepth_one";200200+ case NVME_QUIRK_MEDIUM_PRIO_SQ:201201+ return "medium_prio_sq";202202+ case NVME_QUIRK_IGNORE_DEV_SUBNQN:203203+ return "ignore_dev_subnqn";204204+ case NVME_QUIRK_DISABLE_WRITE_ZEROES:205205+ return "disable_write_zeroes";206206+ case NVME_QUIRK_SIMPLE_SUSPEND:207207+ return "simple_suspend";208208+ case NVME_QUIRK_SINGLE_VECTOR:209209+ return "single_vector";210210+ case NVME_QUIRK_128_BYTES_SQES:211211+ return "128_bytes_sqes";212212+ case NVME_QUIRK_SHARED_TAGS:213213+ return "shared_tags";214214+ case NVME_QUIRK_NO_TEMP_THRESH_CHANGE:215215+ return "no_temp_thresh_change";216216+ case NVME_QUIRK_NO_NS_DESC_LIST:217217+ return "no_ns_desc_list";218218+ case NVME_QUIRK_DMA_ADDRESS_BITS_48:219219+ return "dma_address_bits_48";220220+ case NVME_QUIRK_SKIP_CID_GEN:221221+ return "skip_cid_gen";222222+ case NVME_QUIRK_BOGUS_NID:223223+ return "bogus_nid";224224+ case NVME_QUIRK_NO_SECONDARY_TEMP_THRESH:225225+ return "no_secondary_temp_thresh";226226+ case NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND:227227+ return "force_no_simple_suspend";228228+ case NVME_QUIRK_BROKEN_MSI:229229+ return "broken_msi";230230+ case NVME_QUIRK_DMAPOOL_ALIGN_512:231231+ return "dmapool_align_512";232232+ }233233+234234+ return "unknown";235235+}236236+183237/*184238 * Common request structure for NVMe passthrough. 
All drivers must have185239 * this structure as the first member of their request-private data.···464410465411 enum nvme_ctrl_type cntrltype;466412 enum nvme_dctype dctype;413413+414414+ u16 awupf; /* 0's based value. */467415};468416469417static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)···498442 u8 cmic;499443 enum nvme_subsys_type subtype;500444 u16 vendor_id;501501- u16 awupf; /* 0's based value. */502445 struct ida ns_ida;503446#ifdef CONFIG_NVME_MULTIPATH504447 enum nvme_iopolicy iopolicy;
+189-5
drivers/nvme/host/pci.c
···7272static_assert(MAX_PRP_RANGE / NVME_CTRL_PAGE_SIZE <=7373 (1 /* prp1 */ + NVME_MAX_NR_DESCRIPTORS * PRPS_PER_PAGE));74747575+struct quirk_entry {7676+ u16 vendor_id;7777+ u16 dev_id;7878+ u32 enabled_quirks;7979+ u32 disabled_quirks;8080+};8181+7582static int use_threaded_interrupts;7683module_param(use_threaded_interrupts, int, 0444);7784···108101static unsigned int io_queue_depth = 1024;109102module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);110103MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2 and < 4096");104104+105105+static struct quirk_entry *nvme_pci_quirk_list;106106+static unsigned int nvme_pci_quirk_count;107107+108108+/* Helper to parse individual quirk names */109109+static int nvme_parse_quirk_names(char *quirk_str, struct quirk_entry *entry)110110+{111111+ int i;112112+ size_t field_len;113113+ bool disabled, found;114114+ char *p = quirk_str, *field;115115+116116+ while ((field = strsep(&p, ",")) && *field) {117117+ disabled = false;118118+ found = false;119119+120120+ if (*field == '^') {121121+ /* Skip the '^' character */122122+ disabled = true;123123+ field++;124124+ }125125+126126+ field_len = strlen(field);127127+ for (i = 0; i < 32; i++) {128128+ unsigned int bit = 1U << i;129129+ char *q_name = nvme_quirk_name(bit);130130+ size_t q_len = strlen(q_name);131131+132132+ if (!strcmp(q_name, "unknown"))133133+ break;134134+135135+ if (!strcmp(q_name, field) &&136136+ q_len == field_len) {137137+ if (disabled)138138+ entry->disabled_quirks |= bit;139139+ else140140+ entry->enabled_quirks |= bit;141141+ found = true;142142+ break;143143+ }144144+ }145145+146146+ if (!found) {147147+ pr_err("nvme: unrecognized quirk %s\n", field);148148+ return -EINVAL;149149+ }150150+ }151151+ return 0;152152+}153153+154154+/* Helper to parse a single VID:DID:quirk_names entry */155155+static int nvme_parse_quirk_entry(char *s, struct quirk_entry *entry)156156+{157157+ char *field;158158+159159+ field = strsep(&s, 
":");160160+ if (!field || kstrtou16(field, 16, &entry->vendor_id))161161+ return -EINVAL;162162+163163+ field = strsep(&s, ":");164164+ if (!field || kstrtou16(field, 16, &entry->dev_id))165165+ return -EINVAL;166166+167167+ field = strsep(&s, ":");168168+ if (!field)169169+ return -EINVAL;170170+171171+ return nvme_parse_quirk_names(field, entry);172172+}173173+174174+static int quirks_param_set(const char *value, const struct kernel_param *kp)175175+{176176+ int count, err, i;177177+ struct quirk_entry *qlist;178178+ char *field, *val, *sep_ptr;179179+180180+ err = param_set_copystring(value, kp);181181+ if (err)182182+ return err;183183+184184+ val = kstrdup(value, GFP_KERNEL);185185+ if (!val)186186+ return -ENOMEM;187187+188188+ if (!*val)189189+ goto out_free_val;190190+191191+ count = 1;192192+ for (i = 0; val[i]; i++) {193193+ if (val[i] == '-')194194+ count++;195195+ }196196+197197+ qlist = kcalloc(count, sizeof(*qlist), GFP_KERNEL);198198+ if (!qlist) {199199+ err = -ENOMEM;200200+ goto out_free_val;201201+ }202202+203203+ i = 0;204204+ sep_ptr = val;205205+ while ((field = strsep(&sep_ptr, "-"))) {206206+ if (nvme_parse_quirk_entry(field, &qlist[i])) {207207+ pr_err("nvme: failed to parse quirk string %s\n",208208+ value);209209+ goto out_free_qlist;210210+ }211211+212212+ i++;213213+ }214214+215215+ kfree(nvme_pci_quirk_list);216216+ nvme_pci_quirk_count = count;217217+ nvme_pci_quirk_list = qlist;218218+ goto out_free_val;219219+220220+out_free_qlist:221221+ kfree(qlist);222222+out_free_val:223223+ kfree(val);224224+ return err;225225+}226226+227227+static char quirks_param[128];228228+static const struct kernel_param_ops quirks_param_ops = {229229+ .set = quirks_param_set,230230+ .get = param_get_string,231231+};232232+233233+static struct kparam_string quirks_param_string = {234234+ .maxlen = sizeof(quirks_param),235235+ .string = quirks_param,236236+};237237+238238+module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 
0444);239239+MODULE_PARM_DESC(quirks, "Enable/disable NVMe quirks by specifying "240240+ "quirks=VID:DID:quirk_names");111241112242static int io_queue_count_set(const char *val, const struct kernel_param *kp)113243{···544400 /* Free memory and continue on */545401 nvme_dbbuf_dma_free(dev);546402547547- for (i = 1; i <= dev->online_queues; i++)403403+ for (i = 1; i < dev->online_queues; i++)548404 nvme_dbbuf_free(&dev->queues[i]);549405 }550406}···16251481static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)16261482{16271483 struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);14841484+ int irq;1628148516291486 WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));1630148716311631- disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));14881488+ irq = pci_irq_vector(pdev, nvmeq->cq_vector);14891489+ disable_irq(irq);16321490 spin_lock(&nvmeq->cq_poll_lock);16331491 nvme_poll_cq(nvmeq, NULL);16341492 spin_unlock(&nvmeq->cq_poll_lock);16351635- enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));14931493+ enable_irq(irq);16361494}1637149516381496static int nvme_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)···16421496 struct nvme_queue *nvmeq = hctx->driver_data;16431497 bool found;1644149816451645- if (!nvme_cqe_pending(nvmeq))14991499+ if (!test_bit(NVMEQ_POLLED, &nvmeq->flags) ||15001500+ !nvme_cqe_pending(nvmeq))16461501 return 0;1647150216481503 spin_lock(&nvmeq->cq_poll_lock);···29212774 dev->nr_write_queues = write_queues;29222775 dev->nr_poll_queues = poll_queues;2923277629242924- nr_io_queues = dev->nr_allocated_queues - 1;27772777+ if (dev->ctrl.tagset) {27782778+ /*27792779+ * The set's maps are allocated only once at initialization27802780+ * time. 
We can't add special queues later if their mq_map27812781+ * wasn't preallocated.27822782+ */27832783+ if (dev->ctrl.tagset->nr_maps < 3)27842784+ dev->nr_poll_queues = 0;27852785+ if (dev->ctrl.tagset->nr_maps < 2)27862786+ dev->nr_write_queues = 0;27872787+ }27882788+27892789+ /*27902790+ * The initial number of allocated queue slots may be too large if the27912791+ * user reduced the special queue parameters. Cap the value to the27922792+ * number we need for this round.27932793+ */27942794+ nr_io_queues = min(nvme_max_io_queues(dev),27952795+ dev->nr_allocated_queues - 1);29252796 result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues);29262797 if (result < 0)29272798 return result;···36233458 return 0;36243459}3625346034613461+static struct quirk_entry *detect_dynamic_quirks(struct pci_dev *pdev)34623462+{34633463+ int i;34643464+34653465+ for (i = 0; i < nvme_pci_quirk_count; i++)34663466+ if (pdev->vendor == nvme_pci_quirk_list[i].vendor_id &&34673467+ pdev->device == nvme_pci_quirk_list[i].dev_id)34683468+ return &nvme_pci_quirk_list[i];34693469+34703470+ return NULL;34713471+}34723472+36263473static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,36273474 const struct pci_device_id *id)36283475{36293476 unsigned long quirks = id->driver_data;36303477 int node = dev_to_node(&pdev->dev);36313478 struct nvme_dev *dev;34793479+ struct quirk_entry *qentry;36323480 int ret = -ENOMEM;3633348136343482 dev = kzalloc_node(struct_size(dev, descriptor_pools, nr_node_ids),···36723494 dev_info(&pdev->dev,36733495 "platform quirk: setting simple suspend\n");36743496 quirks |= NVME_QUIRK_SIMPLE_SUSPEND;34973497+ }34983498+ qentry = detect_dynamic_quirks(pdev);34993499+ if (qentry) {35003500+ quirks |= qentry->enabled_quirks;35013501+ quirks &= ~qentry->disabled_quirks;36753502 }36763503 ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops,36773504 quirks);···4278409542794096static void __exit nvme_exit(void)42804097{40984098+ 
kfree(nvme_pci_quirk_list);42814099 pci_unregister_driver(&nvme_driver);42824100 flush_workqueue(nvme_wq);42834101}
···25252626struct nvme_tcp_queue;27272828-/* Define the socket priority to use for connections were it is desirable2828+/*2929+ * Define the socket priority to use for connections where it is desirable2930 * that the NIC consider performing optimized packet processing or filtering.3031 * A non-zero value being sufficient to indicate general consideration of any3132 * possible optimization. Making it a module param allows for alternative···927926 req->curr_bio = req->curr_bio->bi_next;928927929928 /*930930- * If we don`t have any bios it means that controller929929+ * If we don't have any bios it means the controller931930 * sent more data than we requested, hence error932931 */933932 if (!req->curr_bio) {
···27272828struct workqueue_struct *nvmet_wq;2929EXPORT_SYMBOL_GPL(nvmet_wq);3030+struct workqueue_struct *nvmet_aen_wq;3131+EXPORT_SYMBOL_GPL(nvmet_aen_wq);30323133/*3234 * This read/write semaphore is used to synchronize access to configuration···208206 list_add_tail(&aen->entry, &ctrl->async_events);209207 mutex_unlock(&ctrl->lock);210208211211- queue_work(nvmet_wq, &ctrl->async_event_work);209209+ queue_work(nvmet_aen_wq, &ctrl->async_event_work);212210}213211214212static void nvmet_add_to_changed_ns_log(struct nvmet_ctrl *ctrl, __le32 nsid)···19581956 if (!nvmet_wq)19591957 goto out_free_buffered_work_queue;1960195819591959+ nvmet_aen_wq = alloc_workqueue("nvmet-aen-wq",19601960+ WQ_MEM_RECLAIM | WQ_UNBOUND, 0);19611961+ if (!nvmet_aen_wq)19621962+ goto out_free_nvmet_work_queue;19631963+19611964 error = nvmet_init_debugfs();19621965 if (error)19631963- goto out_free_nvmet_work_queue;19661966+ goto out_free_nvmet_aen_work_queue;1964196719651968 error = nvmet_init_discovery();19661969 if (error)···19811974 nvmet_exit_discovery();19821975out_exit_debugfs:19831976 nvmet_exit_debugfs();19771977+out_free_nvmet_aen_work_queue:19781978+ destroy_workqueue(nvmet_aen_wq);19841979out_free_nvmet_work_queue:19851980 destroy_workqueue(nvmet_wq);19861981out_free_buffered_work_queue:···20001991 nvmet_exit_discovery();20011992 nvmet_exit_debugfs();20021993 ida_destroy(&cntlid_ida);19941994+ destroy_workqueue(nvmet_aen_wq);20031995 destroy_workqueue(nvmet_wq);20041996 destroy_workqueue(buffered_io_wq);20051997 destroy_workqueue(zbd_wq);
+11-4
drivers/nvme/target/fcloop.c
···491491 struct fcloop_rport *rport = remoteport->private;492492 struct nvmet_fc_target_port *targetport = rport->targetport;493493 struct fcloop_tport *tport;494494+ int ret = 0;494495495496 if (!targetport) {496497 /*···501500 * We end up here from delete association exchange:502501 * nvmet_fc_xmt_disconnect_assoc sends an async request.503502 *504504- * Return success because this is what LLDDs do; silently505505- * drop the response.503503+ * Return success when remoteport is still online because this504504+ * is what LLDDs do and silently drop the response. Otherwise,505505+ * return with error to signal upper layer to perform the lsrsp506506+ * resource cleanup.506507 */507507- lsrsp->done(lsrsp);508508+ if (remoteport->port_state == FC_OBJSTATE_ONLINE)509509+ lsrsp->done(lsrsp);510510+ else511511+ ret = -ENODEV;512512+508513 kmem_cache_free(lsreq_cache, tls_req);509509- return 0;514514+ return ret;510515 }511516512517 memcpy(lsreq->rspaddr, lsrsp->rspbuf,
···9393 if (ret < 0)9494 goto out;95959696- print_hex_dump_bytes("set new password data: ", DUMP_PREFIX_NONE, buffer, buffer_size);9796 ret = call_password_interface(wmi_priv.password_attr_wdev, buffer, buffer_size);9897 /* on success copy the new password to current password */9998 if (!ret)
···223223 *con_id = "avdd";224224 *gpio_flags = GPIO_ACTIVE_HIGH;225225 break;226226+ case INT3472_GPIO_TYPE_DOVDD:227227+ *con_id = "dovdd";228228+ *gpio_flags = GPIO_ACTIVE_HIGH;229229+ break;226230 case INT3472_GPIO_TYPE_HANDSHAKE:227231 *con_id = "dvdd";228232 *gpio_flags = GPIO_ACTIVE_HIGH;···255251 * 0x0b Power enable256252 * 0x0c Clock enable257253 * 0x0d Privacy LED254254+ * 0x10 DOVDD (digital I/O voltage)258255 * 0x13 Hotplug detect259256 *260257 * There are some known platform specific quirks where that does not quite···337332 case INT3472_GPIO_TYPE_CLK_ENABLE:338333 case INT3472_GPIO_TYPE_PRIVACY_LED:339334 case INT3472_GPIO_TYPE_POWER_ENABLE:335335+ case INT3472_GPIO_TYPE_DOVDD:340336 case INT3472_GPIO_TYPE_HANDSHAKE:341337 gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, con_id, gpio_flags);342338 if (IS_ERR(gpio)) {···362356 case INT3472_GPIO_TYPE_POWER_ENABLE:363357 second_sensor = int3472->quirks.avdd_second_sensor;364358 fallthrough;359359+ case INT3472_GPIO_TYPE_DOVDD:365360 case INT3472_GPIO_TYPE_HANDSHAKE:366361 ret = skl_int3472_register_regulator(int3472, gpio, enable_time_us,367362 con_id, second_sensor);
+4-2
drivers/platform/x86/lenovo/thinkpad_acpi.c
···95259525{95269526 switch (what) {95279527 case THRESHOLD_START:95289528- if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery))95289528+ if (!battery_info.batteries[battery].start_support ||95299529+ ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_START, ret, battery)))95299530 return -ENODEV;9530953195319532 /* The value is in the low 8 bits of the response */95329533 *ret = *ret & 0xFF;95339534 return 0;95349535 case THRESHOLD_STOP:95359535- if ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery))95369536+ if (!battery_info.batteries[battery].stop_support ||95379537+ ACPI_FAILURE(tpacpi_battery_acpi_eval(GET_STOP, ret, battery)))95369538 return -ENODEV;95379539 /* Value is in lower 8 bits */95389540 *ret = *ret & 0xFF;
···809809 }810810811811 ret = devm_request_threaded_irq(pf9453->dev, pf9453->irq, NULL, pf9453_irq_handler,812812- (IRQF_TRIGGER_FALLING | IRQF_ONESHOT),812812+ IRQF_ONESHOT,813813 "pf9453-irq", pf9453);814814 if (ret)815815 return dev_err_probe(pf9453->dev, ret, "Failed to request IRQ: %d\n", pf9453->irq);
+1-1
drivers/remoteproc/imx_rproc.c
···617617618618 err = of_reserved_mem_region_to_resource(np, i++, &res);619619 if (err)620620- return 0;620620+ break;621621622622 /*623623 * Ignore the first memory region which will be used vdev buffer.
+39
drivers/remoteproc/mtk_scp.c
···15921592};15931593MODULE_DEVICE_TABLE(of, mtk_scp_of_match);1594159415951595+static int __maybe_unused scp_suspend(struct device *dev)15961596+{15971597+ struct mtk_scp *scp = dev_get_drvdata(dev);15981598+ struct rproc *rproc = scp->rproc;15991599+16001600+ /*16011601+ * Only unprepare if the SCP is running and holding the clock.16021602+ *16031603+ * Note: `scp_ops` doesn't implement .attach() callback, hence16041604+ * `rproc->state` can never be RPROC_ATTACHED. Otherwise, it16051605+ * should also be checked here.16061606+ */16071607+ if (rproc->state == RPROC_RUNNING)16081608+ clk_unprepare(scp->clk);16091609+ return 0;16101610+}16111611+16121612+static int __maybe_unused scp_resume(struct device *dev)16131613+{16141614+ struct mtk_scp *scp = dev_get_drvdata(dev);16151615+ struct rproc *rproc = scp->rproc;16161616+16171617+ /*16181618+ * Only prepare if the SCP was running and holding the clock.16191619+ *16201620+ * Note: `scp_ops` doesn't implement .attach() callback, hence16211621+ * `rproc->state` can never be RPROC_ATTACHED. Otherwise, it16221622+ * should also be checked here.16231623+ */16241624+ if (rproc->state == RPROC_RUNNING)16251625+ return clk_prepare(scp->clk);16261626+ return 0;16271627+}16281628+16291629+static const struct dev_pm_ops scp_pm_ops = {16301630+ SET_SYSTEM_SLEEP_PM_OPS(scp_suspend, scp_resume)16311631+};16321632+15951633static struct platform_driver mtk_scp_driver = {15961634 .probe = scp_probe,15971635 .remove = scp_remove,15981636 .driver = {15991637 .name = "mtk-scp",16001638 .of_match_table = mtk_scp_of_match,16391639+ .pm = &scp_pm_ops,16011640 },16021641};16031642
···61356135static int dasd_eckd_copy_pair_swap(struct dasd_device *device, char *prim_busid,61366136 char *sec_busid)61376137{61386138+ struct dasd_eckd_private *prim_priv, *sec_priv;61386139 struct dasd_device *primary, *secondary;61396140 struct dasd_copy_relation *copy;61406141 struct dasd_block *block;···61556154 secondary = copy_relation_find_device(copy, sec_busid);61566155 if (!secondary)61576156 return DASD_COPYPAIRSWAP_SECONDARY;61576157+61586158+ prim_priv = primary->private;61596159+ sec_priv = secondary->private;6158616061596161 /*61606162 * usually the device should be quiesced for swap···61856181 dev_name(&primary->cdev->dev),61866182 dev_name(&secondary->cdev->dev), rc);61876183 }61846184+61856185+ if (primary->stopped & DASD_STOPPED_QUIESCE) {61866186+ dasd_device_set_stop_bits(secondary, DASD_STOPPED_QUIESCE);61876187+ dasd_device_remove_stop_bits(primary, DASD_STOPPED_QUIESCE);61886188+ }61896189+61906190+ /*61916191+ * The secondary device never got through format detection, but since it61926192+ * is a copy of the primary device, the format is exactly the same;61936193+ * therefore, the detected layout can simply be copied.61946194+ */61956195+ sec_priv->uses_cdl = prim_priv->uses_cdl;6188619661896197 /* re-enable device */61906198 dasd_device_remove_stop_bits(primary, DASD_STOPPED_PPRC);
+7-5
drivers/s390/crypto/zcrypt_ccamisc.c
···1639163916401640 memset(ci, 0, sizeof(*ci));1641164116421642- /* get first info from zcrypt device driver about this apqn */16431643- rc = zcrypt_device_status_ext(cardnr, domain, &devstat);16441644- if (rc)16451645- return rc;16461646- ci->hwtype = devstat.hwtype;16421642+ /* if specific domain given, fetch status and hw info for this apqn */16431643+ if (domain != AUTOSEL_DOM) {16441644+ rc = zcrypt_device_status_ext(cardnr, domain, &devstat);16451645+ if (rc)16461646+ return rc;16471647+ ci->hwtype = devstat.hwtype;16481648+ }1647164916481650 /*16491651 * Prep memory for rule array and var array use.
···360360 * default device queue depth to figure out sbitmap shift361361 * since we use this queue depth most of times.362362 */363363- if (scsi_realloc_sdev_budget_map(sdev, depth)) {364364- put_device(&starget->dev);365365- kfree(sdev);366366- goto out;367367- }363363+ if (scsi_realloc_sdev_budget_map(sdev, depth))364364+ goto out_device_destroy;368365369366 scsi_change_queue_depth(sdev, depth);370367
+1-1
drivers/scsi/xen-scsifront.c
···11751175 return;11761176 }1177117711781178- if (xenbus_read_driver_state(dev->nodename) ==11781178+ if (xenbus_read_driver_state(dev, dev->nodename) ==11791179 XenbusStateInitialised)11801180 scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN);11811181
···711711 }712712 }713713714714- ret = devm_spi_register_controller(dev, host);714714+ ret = spi_register_controller(host);715715 if (ret)716716 goto err_register;717717
···36363737 pr_info("mmio phyAddr = %lx\n", sm750_dev->vidreg_start);38383939- /*4040- * reserve the vidreg space of smi adaptor4141- * if you do this, you need to add release region code4242- * in lynxfb_remove, or memory will not be mapped again4343- * successfully4444- */3939+ /* reserve the vidreg space of smi adaptor */4540 ret = pci_request_region(pdev, 1, "sm750fb");4641 if (ret) {4742 pr_err("Can not request PCI regions.\n");4848- goto exit;4343+ return ret;4944 }50455146 /* now map mmio and vidmem */···4954 if (!sm750_dev->pvReg) {5055 pr_err("mmio failed\n");5156 ret = -EFAULT;5252- goto exit;5757+ goto err_release_region;5358 }5459 pr_info("mmio virtual addr = %p\n", sm750_dev->pvReg);5560···7479 sm750_dev->pvMem =7580 ioremap_wc(sm750_dev->vidmem_start, sm750_dev->vidmem_size);7681 if (!sm750_dev->pvMem) {7777- iounmap(sm750_dev->pvReg);7882 pr_err("Map video memory failed\n");7983 ret = -EFAULT;8080- goto exit;8484+ goto err_unmap_reg;8185 }8286 pr_info("video memory vaddr = %p\n", sm750_dev->pvMem);8383-exit:8787+8888+ return 0;8989+9090+err_unmap_reg:9191+ iounmap(sm750_dev->pvReg);9292+err_release_region:9393+ pci_release_region(pdev, 1);8494 return ret;8595}8696
+6-9
drivers/target/target_core_configfs.c
···108108 const char *page, size_t count)109109{110110 ssize_t read_bytes;111111- struct file *fp;112111 ssize_t r = -EINVAL;112112+ struct path path = {};113113114114 mutex_lock(&target_devices_lock);115115 if (target_devices) {···131131 db_root_stage[read_bytes - 1] = '\0';132132133133 /* validate new db root before accepting it */134134- fp = filp_open(db_root_stage, O_RDONLY, 0);135135- if (IS_ERR(fp)) {134134+ r = kern_path(db_root_stage, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path);135135+ if (r) {136136 pr_err("db_root: cannot open: %s\n", db_root_stage);137137+ if (r == -ENOTDIR)138138+ pr_err("db_root: not a directory: %s\n", db_root_stage);137139 goto unlock;138140 }139139- if (!S_ISDIR(file_inode(fp)->i_mode)) {140140- filp_close(fp, NULL);141141- pr_err("db_root: not a directory: %s\n", db_root_stage);142142- goto unlock;143143- }144144- filp_close(fp, NULL);141141+ path_put(&path);145142146143 strscpy(db_root, db_root_stage);147144 pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);
···927927 dev->descriptor.bNumConfigurations = ncfg = USB_MAXCONFIG;928928 }929929930930- if (ncfg < 1) {930930+ if (ncfg < 1 && dev->quirks & USB_QUIRK_FORCE_ONE_CONFIG) {931931+ dev_info(ddev, "Device claims zero configurations, forcing to 1\n");932932+ dev->descriptor.bNumConfigurations = 1;933933+ ncfg = 1;934934+ } else if (ncfg < 1) {931935 dev_err(ddev, "no configurations\n");932936 return -EINVAL;933937 }
+79-21
drivers/usb/core/message.c
···424243434444/*4545- * Starts urb and waits for completion or timeout. Note that this call4646- * is NOT interruptible. Many device driver i/o requests should be4747- * interruptible and therefore these drivers should implement their4848- * own interruptible routines.4545+ * Starts urb and waits for completion or timeout.4646+ * Whether or not the wait is killable depends on the flag passed in.4747+ * For example, compare usb_bulk_msg() and usb_bulk_msg_killable().4848+ *4949+ * For non-killable waits, we enforce a maximum limit on the timeout value.4950 */5050-static int usb_start_wait_urb(struct urb *urb, int timeout, int *actual_length)5151+static int usb_start_wait_urb(struct urb *urb, int timeout, int *actual_length,5252+ bool killable)5153{5254 struct api_context ctx;5355 unsigned long expire;5456 int retval;5757+ long rc;55585659 init_completion(&ctx.done);5760 urb->context = &ctx;···6360 if (unlikely(retval))6461 goto out;65626666- expire = timeout ? msecs_to_jiffies(timeout) : MAX_SCHEDULE_TIMEOUT;6767- if (!wait_for_completion_timeout(&ctx.done, expire)) {6363+ if (!killable && (timeout <= 0 || timeout > USB_MAX_SYNCHRONOUS_TIMEOUT))6464+ timeout = USB_MAX_SYNCHRONOUS_TIMEOUT;6565+ expire = (timeout > 0) ? msecs_to_jiffies(timeout) : MAX_SCHEDULE_TIMEOUT;6666+ if (killable)6767+ rc = wait_for_completion_killable_timeout(&ctx.done, expire);6868+ else6969+ rc = wait_for_completion_timeout(&ctx.done, expire);7070+ if (rc <= 0) {6871 usb_kill_urb(urb);6969- retval = (ctx.status == -ENOENT ? -ETIMEDOUT : ctx.status);7272+ if (ctx.status != -ENOENT)7373+ retval = ctx.status;7474+ else if (rc == 0)7575+ retval = -ETIMEDOUT;7676+ else7777+ retval = rc;70787179 dev_dbg(&urb->dev->dev,7272- "%s timed out on ep%d%s len=%u/%u\n",8080+ "%s timed out or killed on ep%d%s len=%u/%u\n",7381 current->comm,7482 usb_endpoint_num(&urb->ep->desc),7583 usb_urb_dir_in(urb) ? 
"in" : "out",···114100 usb_fill_control_urb(urb, usb_dev, pipe, (unsigned char *)cmd, data,115101 len, usb_api_blocking_completion, NULL);116102117117- retv = usb_start_wait_urb(urb, timeout, &length);103103+ retv = usb_start_wait_urb(urb, timeout, &length, false);118104 if (retv < 0)119105 return retv;120106 else···131117 * @index: USB message index value132118 * @data: pointer to the data to send133119 * @size: length in bytes of the data to send134134- * @timeout: time in msecs to wait for the message to complete before timing135135- * out (if 0 the wait is forever)120120+ * @timeout: time in msecs to wait for the message to complete before timing out136121 *137122 * Context: task context, might sleep.138123 *···186173 * @index: USB message index value187174 * @driver_data: pointer to the data to send188175 * @size: length in bytes of the data to send189189- * @timeout: time in msecs to wait for the message to complete before timing190190- * out (if 0 the wait is forever)176176+ * @timeout: time in msecs to wait for the message to complete before timing out191177 * @memflags: the flags for memory allocation for buffers192178 *193179 * Context: !in_interrupt ()···244232 * @index: USB message index value245233 * @driver_data: pointer to the data to be filled in by the message246234 * @size: length in bytes of the data to be received247247- * @timeout: time in msecs to wait for the message to complete before timing248248- * out (if 0 the wait is forever)235235+ * @timeout: time in msecs to wait for the message to complete before timing out249236 * @memflags: the flags for memory allocation for buffers250237 *251238 * Context: !in_interrupt ()···315304 * @len: length in bytes of the data to send316305 * @actual_length: pointer to a location to put the actual length transferred317306 * in bytes318318- * @timeout: time in msecs to wait for the message to complete before319319- * timing out (if 0 the wait is forever)307307+ * @timeout: time in msecs to wait for the 
message to complete before timing out320308 *321309 * Context: task context, might sleep.322310 *···347337 * @len: length in bytes of the data to send348338 * @actual_length: pointer to a location to put the actual length transferred349339 * in bytes350350- * @timeout: time in msecs to wait for the message to complete before351351- * timing out (if 0 the wait is forever)340340+ * @timeout: time in msecs to wait for the message to complete before timing out352341 *353342 * Context: task context, might sleep.354343 *···394385 usb_fill_bulk_urb(urb, usb_dev, pipe, data, len,395386 usb_api_blocking_completion, NULL);396387397397- return usb_start_wait_urb(urb, timeout, actual_length);388388+ return usb_start_wait_urb(urb, timeout, actual_length, false);398389}399390EXPORT_SYMBOL_GPL(usb_bulk_msg);391391+392392+/**393393+ * usb_bulk_msg_killable - Builds a bulk urb, sends it off and waits for completion in a killable state394394+ * @usb_dev: pointer to the usb device to send the message to395395+ * @pipe: endpoint "pipe" to send the message to396396+ * @data: pointer to the data to send397397+ * @len: length in bytes of the data to send398398+ * @actual_length: pointer to a location to put the actual length transferred399399+ * in bytes400400+ * @timeout: time in msecs to wait for the message to complete before401401+ * timing out (if <= 0, the wait is as long as possible)402402+ *403403+ * Context: task context, might sleep.404404+ *405405+ * This function is just like usb_bulk_msg(), except that it waits in a406406+ * killable state and there is no limit on the timeout length.407407+ *408408+ * Return:409409+ * If successful, 0. Otherwise a negative error number. 
The number of actual410410+ * bytes transferred will be stored in the @actual_length parameter.411411+ *412412+ */413413+int usb_bulk_msg_killable(struct usb_device *usb_dev, unsigned int pipe,414414+ void *data, int len, int *actual_length, int timeout)415415+{416416+ struct urb *urb;417417+ struct usb_host_endpoint *ep;418418+419419+ ep = usb_pipe_endpoint(usb_dev, pipe);420420+ if (!ep || len < 0)421421+ return -EINVAL;422422+423423+ urb = usb_alloc_urb(0, GFP_KERNEL);424424+ if (!urb)425425+ return -ENOMEM;426426+427427+ if ((ep->desc.bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==428428+ USB_ENDPOINT_XFER_INT) {429429+ pipe = (pipe & ~(3 << 30)) | (PIPE_INTERRUPT << 30);430430+ usb_fill_int_urb(urb, usb_dev, pipe, data, len,431431+ usb_api_blocking_completion, NULL,432432+ ep->desc.bInterval);433433+ } else434434+ usb_fill_bulk_urb(urb, usb_dev, pipe, data, len,435435+ usb_api_blocking_completion, NULL);436436+437437+ return usb_start_wait_urb(urb, timeout, actual_length, true);438438+}439439+EXPORT_SYMBOL_GPL(usb_bulk_msg_killable);400440401441/*-------------------------------------------------------------------*/402442
···38383939struct eth_dev;40404141-/**4242- * struct gether_opts - Options for Ethernet gadget function instances4343- * @name: Pattern for the network interface name (e.g., "usb%d").4444- * Used to generate the net device name.4545- * @qmult: Queue length multiplier for high/super speed.4646- * @host_mac: The MAC address to be used by the host side.4747- * @dev_mac: The MAC address to be used by the device side.4848- * @ifname_set: True if the interface name pattern has been set by userspace.4949- * @addr_assign_type: The method used for assigning the device MAC address5050- * (e.g., NET_ADDR_RANDOM, NET_ADDR_SET).5151- *5252- * This structure caches network-related settings provided through configfs5353- * before the net_device is fully instantiated. This allows for early5454- * configuration while deferring net_device allocation until the function5555- * is bound.5656- */5757-struct gether_opts {5858- char name[IFNAMSIZ];5959- unsigned int qmult;6060- u8 host_mac[ETH_ALEN];6161- u8 dev_mac[ETH_ALEN];6262- bool ifname_set;6363- unsigned char addr_assign_type;6464-};6565-6641/*6742 * This represents the USB side of an "ethernet" link, managed by a USB6843 * function which provides control and (maybe) framing. Two functions···151176void gether_set_gadget(struct net_device *net, struct usb_gadget *g);152177153178/**179179+ * gether_attach_gadget - Reparent net_device to the gadget device.180180+ * @net: The network device to reparent.181181+ * @g: The target USB gadget device to parent to.182182+ *183183+ * This function moves the network device to be a child of the USB gadget184184+ * device in the device hierarchy. 
This is typically done when the function185185+ * is bound to a configuration.186186+ *187187+ * Returns 0 on success, or a negative error code on failure.188188+ */189189+int gether_attach_gadget(struct net_device *net, struct usb_gadget *g);190190+191191+/**192192+ * gether_detach_gadget - Detach net_device from its gadget parent.193193+ * @net: The network device to detach.194194+ *195195+ * This function moves the network device to be a child of the virtual196196+ * devices parent, effectively detaching it from the USB gadget device197197+ * hierarchy. This is typically done when the function is unbound198198+ * from a configuration but the instance is not yet freed.199199+ */200200+void gether_detach_gadget(struct net_device *net);201201+202202+DEFINE_FREE(detach_gadget, struct net_device *, if (_T) gether_detach_gadget(_T))203203+204204+/**154205 * gether_set_dev_addr - initialize an ethernet-over-usb link with eth address155206 * @net: device representing this link156207 * @dev_addr: eth address of this device···284283int gether_set_ifname(struct net_device *net, const char *name, int len);285284286285void gether_cleanup(struct eth_dev *dev);287287-void gether_unregister_free_netdev(struct net_device *net);288288-DEFINE_FREE(free_gether_netdev, struct net_device *, gether_unregister_free_netdev(_T));289289-290290-void gether_setup_opts_default(struct gether_opts *opts, const char *name);291291-void gether_apply_opts(struct net_device *net, struct gether_opts *opts);292286293287void gether_suspend(struct gether *link);294288void gether_resume(struct gether *link);
···386386static int xhci_portli_show(struct seq_file *s, void *unused)387387{388388 struct xhci_port *port = s->private;389389- struct xhci_hcd *xhci = hcd_to_xhci(port->rhub->hcd);389389+ struct xhci_hcd *xhci;390390 u32 portli;391391392392 portli = readl(&port->port_reg->portli);393393+394394+ /* port without protocol capability isn't added to a roothub */395395+ if (!port->rhub) {396396+ seq_printf(s, "0x%08x\n", portli);397397+ return 0;398398+ }399399+400400+ xhci = hcd_to_xhci(port->rhub->hcd);393401394402 /* PORTLI fields are valid if port is a USB3 or eUSB2V2 port */395403 if (port->rhub == &xhci->usb3_rhub)
···707707 if (signal_pending (current)) 708708 {709709 mutex_unlock(&mdc800->io_lock);710710- return -EINTR;710710+ return len == left ? -EINTR : len-left;711711 }712712713713 sts=left > (mdc800->out_count-mdc800->out_ptr)?mdc800->out_count-mdc800->out_ptr:left;···730730 mutex_unlock(&mdc800->io_lock);731731 return len-left;732732 }733733- wait_event_timeout(mdc800->download_wait,733733+ retval = wait_event_timeout(mdc800->download_wait,734734 mdc800->downloaded,735735 msecs_to_jiffies(TO_DOWNLOAD_GET_READY));736736+ if (!retval)737737+ usb_kill_urb(mdc800->download_urb);736738 mdc800->downloaded = 0;737739 if (mdc800->download_urb->status != 0)738740 {
+1-1
drivers/usb/misc/uss720.c
···736736 ret = get_1284_register(pp, 0, ®, GFP_KERNEL);737737 dev_dbg(&intf->dev, "reg: %7ph\n", priv->reg);738738 if (ret < 0)739739- return ret;739739+ goto probe_abort;740740741741 ret = usb_find_last_int_in_endpoint(interface, &epd);742742 if (!ret) {
+1-1
drivers/usb/misc/yurex.c
···272272 dev->int_buffer, YUREX_BUF_SIZE, yurex_interrupt,273273 dev, 1);274274 dev->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;275275+ dev->bbu = -1;275276 if (usb_submit_urb(dev->urb, GFP_KERNEL)) {276277 retval = -EIO;277278 dev_err(&interface->dev, "Could not submitting URB\n");···281280282281 /* save our data pointer in this interface device */283282 usb_set_intfdata(interface, dev);284284- dev->bbu = -1;285283286284 /* we can register the device now, as it is ready */287285 retval = usb_register_dev(interface, &yurex_class);
+9
drivers/usb/renesas_usbhs/common.c
···815815816816 usbhs_platform_call(priv, hardware_exit, pdev);817817 reset_control_assert(priv->rsts);818818+819819+ /*820820+ * Explicitly free the IRQ to ensure the interrupt handler is821821+ * disabled and synchronized before freeing resources.822822+ * devm_free_irq() calls free_irq() which waits for any running823823+ * ISR to complete, preventing UAF.824824+ */825825+ devm_free_irq(&pdev->dev, priv->irq, priv);826826+818827 usbhs_mod_remove(priv);819828 usbhs_fifo_remove(priv);820829 usbhs_pipe_remove(priv);
···100100{101101 u8 pin_assign = 0;102102 u32 conf;103103+ u32 signal;103104104105 /* DP Signalling */105105- conf = (dp->data.conf & DP_CONF_SIGNALLING_MASK) >> DP_CONF_SIGNALLING_SHIFT;106106+ signal = DP_CAP_DP_SIGNALLING(dp->port->vdo) & DP_CAP_DP_SIGNALLING(dp->alt->vdo);107107+ if (dp->plug_prime)108108+ signal &= DP_CAP_DP_SIGNALLING(dp->plug_prime->vdo);109109+110110+ conf = signal << DP_CONF_SIGNALLING_SHIFT;106111107112 switch (con) {108113 case DP_STATUS_CON_DISABLED:
+1-1
drivers/usb/typec/tcpm/tcpm.c
···78907890 port->partner_desc.identity = &port->partner_ident;7891789178927892 port->role_sw = fwnode_usb_role_switch_get(tcpc->fwnode);78937893- if (IS_ERR_OR_NULL(port->role_sw))78937893+ if (!port->role_sw)78947894 port->role_sw = usb_role_switch_get(port->dev);78957895 if (IS_ERR(port->role_sw)) {78967896 err = PTR_ERR(port->role_sw);
+6-2
drivers/video/fbdev/au1100fb.c
···380380#define panel_is_color(panel) (panel->control_base & LCD_CONTROL_PC)381381#define panel_swap_rgb(panel) (panel->control_base & LCD_CONTROL_CCO)382382383383-#if defined(CONFIG_COMPILE_TEST) && !defined(CONFIG_MIPS)384384-/* This is only defined to be able to compile this driver on non-mips platforms */383383+#if defined(CONFIG_COMPILE_TEST) && (!defined(CONFIG_MIPS) || defined(CONFIG_64BIT))384384+/*385385+ * KSEG1ADDR() is defined in arch/mips/include/asm/addrspace.h386386+ * for 32 bit configurations. Provide a stub for compile testing387387+ * on other platforms.388388+ */385389#define KSEG1ADDR(x) (x)386390#endif387391
+2-5
drivers/xen/xen-acpi-processor.c
···378378 acpi_psd[acpi_id].domain);379379 }380380381381- status = acpi_evaluate_object(handle, "_CST", NULL, &buffer);382382- if (ACPI_FAILURE(status)) {383383- if (!pblk)384384- return AE_OK;385385- }381381+ if (!pblk && !acpi_has_method(handle, "_CST"))382382+ return AE_OK;386383 /* .. and it has a C-state */387384 __set_bit(acpi_id, acpi_id_cst_present);388385
+5-5
drivers/xen/xen-pciback/xenbus.c
···149149150150 mutex_lock(&pdev->dev_lock);151151 /* Make sure we only do this setup once */152152- if (xenbus_read_driver_state(pdev->xdev->nodename) !=152152+ if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) !=153153 XenbusStateInitialised)154154 goto out;155155156156 /* Wait for frontend to state that it has published the configuration */157157- if (xenbus_read_driver_state(pdev->xdev->otherend) !=157157+ if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->otherend) !=158158 XenbusStateInitialised)159159 goto out;160160···374374 dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n");375375376376 mutex_lock(&pdev->dev_lock);377377- if (xenbus_read_driver_state(pdev->xdev->nodename) != state)377377+ if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) != state)378378 goto out;379379380380 err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d",···572572 /* It's possible we could get the call to setup twice, so make sure573573 * we're not already connected.574574 */575575- if (xenbus_read_driver_state(pdev->xdev->nodename) !=575575+ if (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename) !=576576 XenbusStateInitWait)577577 goto out;578578···662662 struct xen_pcibk_device *pdev =663663 container_of(watch, struct xen_pcibk_device, be_watch);664664665665- switch (xenbus_read_driver_state(pdev->xdev->nodename)) {665665+ switch (xenbus_read_driver_state(pdev->xdev, pdev->xdev->nodename)) {666666 case XenbusStateInitWait:667667 xen_pcibk_setup_backend(pdev);668668 break;
+14-3
drivers/xen/xenbus/xenbus_client.c
···226226 struct xenbus_transaction xbt;227227 int current_state;228228 int err, abort;229229+ bool vanished = false;229230230230- if (state == dev->state)231231+ if (state == dev->state || dev->vanished)231232 return 0;232233233234again:···243242 err = xenbus_scanf(xbt, dev->nodename, "state", "%d", ¤t_state);244243 if (err != 1)245244 goto abort;245245+ if (current_state != dev->state && current_state == XenbusStateInitialising) {246246+ vanished = true;247247+ goto abort;248248+ }246249247250 err = xenbus_printf(xbt, dev->nodename, "state", "%d", state);248251 if (err) {···261256 if (err == -EAGAIN && !abort)262257 goto again;263258 xenbus_switch_fatal(dev, depth, err, "ending transaction");264264- } else259259+ } else if (!vanished)265260 dev->state = state;266261267262 return 0;···936931937932/**938933 * xenbus_read_driver_state - read state from a store path934934+ * @dev: xenbus device pointer939935 * @path: path for driver940936 *941937 * Returns: the state of the driver rooted at the given store path, or942938 * XenbusStateUnknown if no state can be read.943939 */944944-enum xenbus_state xenbus_read_driver_state(const char *path)940940+enum xenbus_state xenbus_read_driver_state(const struct xenbus_device *dev,941941+ const char *path)945942{946943 enum xenbus_state result;944944+945945+ if (dev && dev->vanished)946946+ return XenbusStateUnknown;947947+947948 int err = xenbus_gather(XBT_NIL, path, "state", "%d", &result, NULL);948949 if (err)949950 result = XenbusStateUnknown;
+39-3
drivers/xen/xenbus/xenbus_probe.c
···191191 return;192192 }193193194194- state = xenbus_read_driver_state(dev->otherend);194194+ state = xenbus_read_driver_state(dev, dev->otherend);195195196196 dev_dbg(&dev->dev, "state is %d, (%s), %s, %s\n",197197 state, xenbus_strstate(state), dev->otherend_watch.node, path);···364364 * closed.365365 */366366 if (!drv->allow_rebind ||367367- xenbus_read_driver_state(dev->nodename) == XenbusStateClosing)367367+ xenbus_read_driver_state(dev, dev->nodename) == XenbusStateClosing)368368 xenbus_switch_state(dev, XenbusStateClosed);369369}370370EXPORT_SYMBOL_GPL(xenbus_dev_remove);···444444 info.dev = NULL;445445 bus_for_each_dev(bus, NULL, &info, cleanup_dev);446446 if (info.dev) {447447+ dev_warn(&info.dev->dev,448448+ "device forcefully removed from xenstore\n");449449+ info.dev->vanished = true;447450 device_unregister(&info.dev->dev);448451 put_device(&info.dev->dev);449452 }···517514 size_t stringlen;518515 char *tmpstring;519516520520- enum xenbus_state state = xenbus_read_driver_state(nodename);517517+ enum xenbus_state state = xenbus_read_driver_state(NULL, nodename);521518522519 if (state != XenbusStateInitialising) {523520 /* Device is not new, so ignore it. This can happen if a···662659 return;663660664661 dev = xenbus_device_find(root, &bus->bus);662662+ /*663663+ * Backend domain crash results in not coordinated frontend removal,664664+ * without going through XenbusStateClosing. If this is a new instance665665+ * of the same device Xen tools will have reset the state to666666+ * XenbusStateInitializing.667667+ * It might be that the backend crashed early during the init phase of668668+ * device setup, in which case the known state would have been669669+ * XenbusStateInitializing. So test the backend domid to match the670670+ * saved one. 
In case the new backend happens to have the same domid as671671+	 * the old one, we can just carry on, as there is no inconsistency672672+	 * in this case.673673+	 */674674+	if (dev && !strcmp(bus->root, "device")) {675675+		enum xenbus_state state = xenbus_read_driver_state(dev, dev->nodename);676676+		unsigned int backend = xenbus_read_unsigned(root, "backend-id",677677+							    dev->otherend_id);678678+679679+		if (state == XenbusStateInitialising &&680680+		    (state != dev->state || backend != dev->otherend_id)) {681681+			/*682682+			 * State has been reset, assume the old one vanished683683+			 * and a new one needs to be probed.684684+			 */685685+			dev_warn(&dev->dev,686686+				 "state reset occurred, reconnecting\n");687687+			dev->vanished = true;688688+		}689689+		if (dev->vanished) {690690+			device_unregister(&dev->dev);691691+			put_device(&dev->dev);692692+			dev = NULL;693693+		}694694+	}665695	if (!dev)666696		xenbus_probe_node(bus, type, root);667697	else
+1-1
drivers/xen/xenbus/xenbus_probe_frontend.c
···253253 } else if (xendev->state < XenbusStateConnected) {254254 enum xenbus_state rstate = XenbusStateUnknown;255255 if (xendev->otherend)256256- rstate = xenbus_read_driver_state(xendev->otherend);256256+ rstate = xenbus_read_driver_state(xendev, xendev->otherend);257257 pr_warn("Timeout connecting to device: %s (local state %d, remote state %d)\n",258258 xendev->nodename, xendev->state, rstate);259259 }
+4-4
fs/afs/addr_list.c
···298298 srx.transport.sin.sin_addr.s_addr = xdr;299299300300 peer = rxrpc_kernel_lookup_peer(net->socket, &srx, GFP_KERNEL);301301- if (!peer)302302- return -ENOMEM;301301+ if (IS_ERR(peer))302302+ return PTR_ERR(peer);303303304304 for (i = 0; i < alist->nr_ipv4; i++) {305305 if (peer == alist->addrs[i].peer) {···342342 memcpy(&srx.transport.sin6.sin6_addr, xdr, 16);343343344344 peer = rxrpc_kernel_lookup_peer(net->socket, &srx, GFP_KERNEL);345345- if (!peer)346346- return -ENOMEM;345345+ if (IS_ERR(peer))346346+ return PTR_ERR(peer);347347348348 for (i = alist->nr_ipv4; i < alist->nr_addrs; i++) {349349 if (peer == alist->addrs[i].peer) {
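The afs hunk above swaps a NULL check for the error-pointer idiom, because rxrpc_kernel_lookup_peer() reports failure as an ERR_PTR() value rather than NULL, so `!peer` never fires and the hard-coded -ENOMEM hid the real errno. A simplified userspace sketch of the encoding (modeled on the kernel's include/linux/err.h helpers; the real ones carry compiler annotations this sketch omits):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ERRNO 4095

/* Simplified userspace mock-ups of the kernel's error-pointer helpers:
 * a negative errno is stored directly in the pointer's bit pattern,
 * landing in the (never-mapped) top MAX_ERRNO addresses. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

Since ERR_PTR(-ENOMEM) is non-NULL, a plain `if (!ptr)` test lets an error pointer through as if it were valid, which is exactly the bug class this fix addresses.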
+6-1
fs/btrfs/disk-io.c
···35943594 }35953595 }3596359635973597- btrfs_zoned_reserve_data_reloc_bg(fs_info);35983597 btrfs_free_zone_cache(fs_info);3599359836003599 btrfs_check_active_zone_reservation(fs_info);···36203621 ret = PTR_ERR(fs_info->transaction_kthread);36213622 goto fail_cleaner;36223623 }36243624+36253625+ /*36263626+ * Starts a transaction, must be called after the transaction kthread36273627+ * is initialized.36283628+ */36293629+ btrfs_zoned_reserve_data_reloc_bg(fs_info);3623363036243631 ret = btrfs_read_qgroup_config(fs_info);36253632 if (ret)
···66126612 int ret;66136613 bool xa_reserved = false;6614661466156615+ if (!args->orphan && !args->subvol) {66166616+ /*66176617+ * Before anything else, check if we can add the name to the66186618+ * parent directory. We want to avoid a dir item overflow in66196619+ * case we have an existing dir item due to existing name66206620+ * hash collisions. We do this check here before we call66216621+ * btrfs_add_link() down below so that we can avoid a66226622+ * transaction abort (which could be exploited by malicious66236623+ * users).66246624+ *66256625+ * For subvolumes we already do this in btrfs_mksubvol().66266626+ */66276627+ ret = btrfs_check_dir_item_collision(BTRFS_I(dir)->root,66286628+ btrfs_ino(BTRFS_I(dir)),66296629+ name);66306630+ if (ret < 0)66316631+ return ret;66326632+ }66336633+66156634 path = btrfs_alloc_path();66166635 if (!path)66176636 return -ENOMEM;
+28-4
fs/btrfs/ioctl.c
···672672 goto out;673673 }674674675675+ /*676676+ * Subvolumes have orphans cleaned on first dentry lookup. A new677677+ * subvolume cannot have any orphans, so we should set the bit before we678678+ * add the subvolume dentry to the dentry cache, so that it is in the679679+ * same state as a subvolume after first lookup.680680+ */681681+ set_bit(BTRFS_ROOT_ORPHAN_CLEANUP, &new_root->state);675682 d_instantiate_new(dentry, new_inode_args.inode);676683 new_inode_args.inode = NULL;677684···38593852 goto out;38603853 }3861385438553855+ received_uuid_changed = memcmp(root_item->received_uuid, sa->uuid,38563856+ BTRFS_UUID_SIZE);38573857+38583858+ /*38593859+ * Before we attempt to add the new received uuid, check if we have room38603860+ * for it in case there's already an item. If the size of the existing38613861+ * item plus this root's ID (u64) exceeds the maximum item size, we can38623862+ * return here without the need to abort a transaction. If we don't do38633863+ * this check, the btrfs_uuid_tree_add() call below would fail with38643864+ * -EOVERFLOW and result in a transaction abort. 
Malicious users could38653865+ * exploit this to turn the fs into RO mode.38663866+ */38673867+ if (received_uuid_changed && !btrfs_is_empty_uuid(sa->uuid)) {38683868+ ret = btrfs_uuid_tree_check_overflow(fs_info, sa->uuid,38693869+ BTRFS_UUID_KEY_RECEIVED_SUBVOL);38703870+ if (ret < 0)38713871+ goto out;38723872+ }38733873+38623874 /*38633875 * 1 - root item38643876 * 2 - uuid items (received uuid + subvol uuid)···38933867 sa->rtime.sec = ct.tv_sec;38943868 sa->rtime.nsec = ct.tv_nsec;3895386938963896- received_uuid_changed = memcmp(root_item->received_uuid, sa->uuid,38973897- BTRFS_UUID_SIZE);38983870 if (received_uuid_changed &&38993871 !btrfs_is_empty_uuid(root_item->received_uuid)) {39003872 ret = btrfs_uuid_tree_remove(trans, root_item->received_uuid,39013873 BTRFS_UUID_KEY_RECEIVED_SUBVOL,39023874 btrfs_root_id(root));39033875 if (unlikely(ret && ret != -ENOENT)) {39043904- btrfs_abort_transaction(trans, ret);39053876 btrfs_end_transaction(trans);39063877 goto out;39073878 }···3913389039143891 ret = btrfs_update_root(trans, fs_info->tree_root,39153892 &root->root_key, &root->root_item);39163916- if (ret < 0) {38933893+ if (unlikely(ret < 0)) {38943894+ btrfs_abort_transaction(trans, ret);39173895 btrfs_end_transaction(trans);39183896 goto out;39193897 }
···21942194 if (!btrfs_should_periodic_reclaim(space_info))21952195 continue;21962196 for (raid = 0; raid < BTRFS_NR_RAID_TYPES; raid++) {21972197- if (do_reclaim_sweep(space_info, raid))21972197+ if (do_reclaim_sweep(space_info, raid)) {21982198+ spin_lock(&space_info->lock);21982199 btrfs_set_periodic_reclaim_ready(space_info, false);22002200+ spin_unlock(&space_info->lock);22012201+ }21992202 }22002203 }22012204}
+16
fs/btrfs/transaction.c
···19051905		ret = btrfs_uuid_tree_add(trans, new_root_item->received_uuid,19061906					  BTRFS_UUID_KEY_RECEIVED_SUBVOL,19071907					  objectid);19081908+		/*19091909+		 * We are creating a lot of snapshots of the same root that was19101910+		 * received (has a received UUID) and its uuid item reached a19111911+		 * leaf's size limit. We can safely ignore this and avoid a19121912+		 * transaction abort. A deletion of this snapshot will still19131913+		 * work since we ignore if an item with a19141914+		 * BTRFS_UUID_KEY_RECEIVED_SUBVOL key is missing (see19151915+		 * btrfs_delete_subvolume()). Send/receive will work too since19161916+		 * it peeks the first root id from the existing item (it could19171917+		 * peek any), and in case it's missing it falls back to19181918+		 * searching by BTRFS_UUID_KEY_SUBVOL keys.19191919+		 * Creation of a snapshot does not require CAP_SYS_ADMIN, so19201920+		 * we don't want users triggering transaction aborts, either19211921+		 * intentionally or not.19221922+		 */19231923+		if (ret == -EOVERFLOW)19241924+			ret = 0;19081925		if (unlikely(ret && ret != -EEXIST)) {19091926			btrfs_abort_transaction(trans, ret);19101927			goto fail;
+38
fs/btrfs/uuid-tree.c
···199199	return 0;200200}201201202202+/*203203+ * Check if we can add one root ID to a UUID key.204204+ * If the key does not yet exist, we can; otherwise only if the extended item205205+ * does not exceed the maximum item size permitted by the leaf size.206206+ *207207+ * Returns 0 if the root ID fits, -EOVERFLOW if the item would grow too large,208208+ * or another negative value on error.209209+ */210210+int btrfs_uuid_tree_check_overflow(struct btrfs_fs_info *fs_info,211211+				   const u8 *uuid, u8 type)212212+{213213+	BTRFS_PATH_AUTO_FREE(path);214214+	int ret;215215+	u32 item_size;216216+	struct btrfs_key key;217217+218218+	if (WARN_ON_ONCE(!fs_info->uuid_root))219219+		return -EINVAL;220220+221221+	path = btrfs_alloc_path();222222+	if (!path)223223+		return -ENOMEM;224224+225225+	btrfs_uuid_to_key(uuid, type, &key);226226+	ret = btrfs_search_slot(NULL, fs_info->uuid_root, &key, path, 0, 0);227227+	if (ret < 0)228228+		return ret;229229+	if (ret > 0)230230+		return 0;231231+232232+	item_size = btrfs_item_size(path->nodes[0], path->slots[0]);233233+234234+	if (sizeof(struct btrfs_item) + item_size + sizeof(u64) >235235+	    BTRFS_LEAF_DATA_SIZE(fs_info))236236+		return -EOVERFLOW;237237+238238+	return 0;239239+}240240+202241static int btrfs_uuid_iter_rem(struct btrfs_root *uuid_root, u8 *uuid, u8 type,203242			       u64 subid)204243{
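The size test in btrfs_uuid_tree_check_overflow() above is plain arithmetic: one more u64 sub-id fits only while the per-item header plus the grown payload still fits in the leaf data area. A standalone sketch with assumed constants (16 KiB nodesize, 101-byte leaf header, 25-byte item header; these mirror common btrfs values but are hard-coded here purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed constants for illustration (not taken from the btrfs headers). */
#define NODESIZE	16384u			/* common mkfs.btrfs default   */
#define HEADER_SIZE	101u			/* leaf header (btrfs_header)  */
#define ITEM_SIZE	25u			/* per-item header (btrfs_item) */
#define LEAF_DATA_SIZE	(NODESIZE - HEADER_SIZE)

/* Returns 0 when one more u64 sub-id fits in the leaf, or -75
 * (-EOVERFLOW on Linux) otherwise, mirroring the check in the hunk. */
static int uuid_item_can_grow(uint32_t item_size)
{
	if (ITEM_SIZE + item_size + sizeof(uint64_t) > LEAF_DATA_SIZE)
		return -75;
	return 0;
}
```

With these numbers the cutoff is an existing payload of 16250 bytes, i.e. roughly two thousand sub-ids, after which a caller can bail out with -EOVERFLOW before ever opening a transaction.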
···13391339	struct ceph_client *cl = fsc->client;13401340	struct ceph_mds_client *mdsc = fsc->mdsc;13411341	struct inode *inode = d_inode(dentry);13421342+	struct ceph_inode_info *ci = ceph_inode(inode);13421343	struct ceph_mds_request *req;13431344	bool try_async = ceph_test_mount_opt(fsc, ASYNC_DIROPS);13441345	struct dentry *dn;···13641363		if (!dn) {13651364			try_async = false;13661365		} else {13671367-			struct ceph_path_info path_info;13661366+			struct ceph_path_info path_info = {0};13681367			path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);13691368			if (IS_ERR(path)) {13701369				try_async = false;···14251424		 * We have enough caps, so we assume that the unlink14261425		 * will succeed. Fix up the target inode and dcache.14271426		 */14281428-		drop_nlink(inode);14271427+14281428+		/*14291429+		 * Protect the i_nlink update with i_ceph_lock14301430+		 * to prevent racing against ceph_fill_inode()14311431+		 * handling our completion on a worker thread,14321432+		 * and don't decrement if i_nlink has already14331433+		 * been updated to zero by this completion.14341434+		 */14351435+		spin_lock(&ci->i_ceph_lock);14361436+		if (inode->i_nlink > 0)14371437+			drop_nlink(inode);14381438+		spin_unlock(&ci->i_ceph_lock);14391439+14291440		d_delete(dentry);14301441	} else {14311442		spin_lock(&fsc->async_unlink_conflict_lock);
···8787 space programs which can be found in the Linux nfs-utils package,8888 available from http://linux-nfs.org/.89899090- If unsure, say Y.9090+ If unsure, say N.91919292config NFS_SWAP9393 bool "Provide swap over NFS support"···100100config NFS_V4_0101101 bool "NFS client support for NFSv4.0"102102 depends on NFS_V4103103+ default y103104 help104105 This option enables support for minor version 0 of the NFSv4 protocol105106 (RFC 3530) in the kernel's NFS client.
+6-1
fs/nfs/nfs3proc.c
···392392 if (status != 0)393393 goto out_release_acls;394394395395- if (d_alias)395395+ if (d_alias) {396396+ if (d_is_dir(d_alias)) {397397+ status = -EISDIR;398398+ goto out_dput;399399+ }396400 dentry = d_alias;401401+ }397402398403 /* When we created the file with exclusive semantics, make399404 * sure we set the attributes afterwards. */
···332332333333 /*334334 * We need to release all dentries for the cached directories335335- * before we kill the sb.335335+ * and close all deferred file handles before we kill the sb.336336 */337337 if (cifs_sb->root) {338338 close_all_cached_dirs(cifs_sb);339339+ cifs_close_all_deferred_files_sb(cifs_sb);340340+341341+ /* Wait for all pending oplock breaks to complete */342342+ flush_workqueue(cifsoplockd_wq);339343340344 /* finally release root dentry */341345 dput(cifs_sb->root);···872868 spin_unlock(&tcon->tc_lock);873869 spin_unlock(&cifs_tcp_ses_lock);874870875875- cifs_close_all_deferred_files(tcon);876871 /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */877872 /* cancel_notify_requests(tcon); */878873 if (tcon->ses && tcon->ses->server) {···12691266 struct cifsFileInfo *writeable_srcfile;12701267 int rc = -EINVAL;1271126812721272- writeable_srcfile = find_writable_file(src_cifsi, FIND_WR_FSUID_ONLY);12691269+ writeable_srcfile = find_writable_file(src_cifsi, FIND_FSUID_ONLY);12731270 if (writeable_srcfile) {12741271 if (src_tcon->ses->server->ops->set_file_size)12751272 rc = src_tcon->ses->server->ops->set_file_size(
+17-6
fs/smb/client/cifsglob.h
···2020#include <linux/utsname.h>2121#include <linux/sched/mm.h>2222#include <linux/netfs.h>2323+#include <linux/fcntl.h>2324#include "cifs_fs_sb.h"2425#include "cifsacl.h"2526#include <crypto/internal/hash.h>···18851884}188618851887188618881888-/* cifs_get_writable_file() flags */18891889-enum cifs_writable_file_flags {18901890- FIND_WR_ANY = 0U,18911891- FIND_WR_FSUID_ONLY = (1U << 0),18921892- FIND_WR_WITH_DELETE = (1U << 1),18931893- FIND_WR_NO_PENDING_DELETE = (1U << 2),18871887+enum cifs_find_flags {18881888+ FIND_ANY = 0U,18891889+ FIND_FSUID_ONLY = (1U << 0),18901890+ FIND_WITH_DELETE = (1U << 1),18911891+ FIND_NO_PENDING_DELETE = (1U << 2),18921892+ FIND_OPEN_FLAGS = (1U << 3),18941893};1895189418961895#define MID_FREE 0···23742373static inline bool cifs_forced_shutdown(const struct cifs_sb_info *sbi)23752374{23762375 return cifs_sb_flags(sbi) & CIFS_MOUNT_SHUTDOWN;23762376+}23772377+23782378+static inline int cifs_open_create_options(unsigned int oflags, int opts)23792379+{23802380+ /* O_SYNC also has bit for O_DSYNC so following check picks up either */23812381+ if (oflags & O_SYNC)23822382+ opts |= CREATE_WRITE_THROUGH;23832383+ if (oflags & O_DIRECT)23842384+ opts |= CREATE_NO_BUFFER;23852385+ return opts;23772386}2378238723792388#endif /* _CIFS_GLOB_H */
···187187 const char *full_path;188188 void *page = alloc_dentry_path();189189 struct inode *newinode = NULL;190190- unsigned int sbflags;190190+ unsigned int sbflags = cifs_sb_flags(cifs_sb);191191 int disposition;192192 struct TCP_Server_Info *server = tcon->ses->server;193193 struct cifs_open_parms oparms;···308308 goto out;309309 }310310311311+ create_options |= cifs_open_create_options(oflags, create_options);311312 /*312313 * if we're not using unix extensions, see if we need to set313314 * ATTR_READONLY on the create call···368367 * If Open reported that we actually created a file then we now have to369368 * set the mode if possible.370369 */371371- sbflags = cifs_sb_flags(cifs_sb);372370 if ((tcon->unix_ext) && (*oplock & CIFS_CREATE_ACTION)) {373371 struct cifs_unix_set_info_args args = {374372 .mode = mode,
+69-71
fs/smb/client/file.c
···255255 struct cifs_io_request *req = container_of(wreq, struct cifs_io_request, rreq);256256 int ret;257257258258- ret = cifs_get_writable_file(CIFS_I(wreq->inode), FIND_WR_ANY, &req->cfile);258258+ ret = cifs_get_writable_file(CIFS_I(wreq->inode), FIND_ANY, &req->cfile);259259 if (ret) {260260 cifs_dbg(VFS, "No writable handle in writepages ret=%d\n", ret);261261 return;···584584 *********************************************************************/585585586586 disposition = cifs_get_disposition(f_flags);587587-588587 /* BB pass O_SYNC flag through on file attributes .. BB */589589-590590- /* O_SYNC also has bit for O_DSYNC so following check picks up either */591591- if (f_flags & O_SYNC)592592- create_options |= CREATE_WRITE_THROUGH;593593-594594- if (f_flags & O_DIRECT)595595- create_options |= CREATE_NO_BUFFER;588588+ create_options |= cifs_open_create_options(f_flags, create_options);596589597590retry_open:598591 oparms = (struct cifs_open_parms) {···704711 mutex_init(&cfile->fh_mutex);705712 spin_lock_init(&cfile->file_info_lock);706713707707- cifs_sb_active(inode->i_sb);708708-709714 /*710715 * If the server returned a read oplock and we have mandatory brlocks,711716 * set oplock level to None.···758767 struct inode *inode = d_inode(cifs_file->dentry);759768 struct cifsInodeInfo *cifsi = CIFS_I(inode);760769 struct cifsLockInfo *li, *tmp;761761- struct super_block *sb = inode->i_sb;762770763771 /*764772 * Delete any outstanding lock records. 
We'll lose them when the file···775785776786 cifs_put_tlink(cifs_file->tlink);777787 dput(cifs_file->dentry);778778- cifs_sb_deactive(sb);779788 kfree(cifs_file->symlink_target);780789 kfree(cifs_file);781790}···956967 return tcon->ses->server->ops->flush(xid, tcon,957968 &cfile->fid);958969 }959959- rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, &cfile);970970+ rc = cifs_get_writable_file(CIFS_I(inode), FIND_ANY, &cfile);960971 if (!rc) {961972 tcon = tlink_tcon(cfile->tlink);962973 rc = tcon->ses->server->ops->flush(xid, tcon, &cfile->fid);···981992 return -ERESTARTSYS;982993 mapping_set_error(inode->i_mapping, rc);983994984984- cfile = find_writable_file(cinode, FIND_WR_FSUID_ONLY);995995+ cfile = find_writable_file(cinode, FIND_FSUID_ONLY);985996 rc = cifs_file_flush(xid, inode, cfile);986997 if (!rc) {987998 if (cfile) {···1061107210621073 /* Get the cached handle as SMB2 close is deferred */10631074 if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) {10641064- rc = cifs_get_writable_path(tcon, full_path,10651065- FIND_WR_FSUID_ONLY |10661066- FIND_WR_NO_PENDING_DELETE,10671067- &cfile);10751075+ rc = __cifs_get_writable_file(CIFS_I(inode),10761076+ FIND_FSUID_ONLY |10771077+ FIND_NO_PENDING_DELETE |10781078+ FIND_OPEN_FLAGS,10791079+ file->f_flags, &cfile);10681080 } else {10691069- rc = cifs_get_readable_path(tcon, full_path, &cfile);10811081+ cfile = __find_readable_file(CIFS_I(inode),10821082+ FIND_NO_PENDING_DELETE |10831083+ FIND_OPEN_FLAGS,10841084+ file->f_flags);10851085+ rc = cfile ? 
0 : -ENOENT;10701086 }10711087 if (rc == 0) {10721072- unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);10731073- unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);10741074-10751075- if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) &&10761076- (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) {10771077- file->private_data = cfile;10781078- spin_lock(&CIFS_I(inode)->deferred_lock);10791079- cifs_del_deferred_close(cfile);10801080- spin_unlock(&CIFS_I(inode)->deferred_lock);10811081- goto use_cache;10821082- }10831083- _cifsFileInfo_put(cfile, true, false);10841084- } else {10851085- /* hard link on the defeered close file */10861086- rc = cifs_get_hardlink_path(tcon, inode, file);10871087- if (rc)10881088- cifs_close_deferred_file(CIFS_I(inode));10881088+ file->private_data = cfile;10891089+ spin_lock(&CIFS_I(inode)->deferred_lock);10901090+ cifs_del_deferred_close(cfile);10911091+ spin_unlock(&CIFS_I(inode)->deferred_lock);10921092+ goto use_cache;10891093 }10941094+ /* hard link on the deferred close file */10951095+ rc = cifs_get_hardlink_path(tcon, inode, file);10961096+ if (rc)10971097+ cifs_close_deferred_file(CIFS_I(inode));1090109810911099 if (server->oplocks)10921100 oplock = REQ_OPLOCK;···13041318 rdwr_for_fscache = 1;1305131913061320 desired_access = cifs_convert_flags(cfile->f_flags, rdwr_for_fscache);13071307-13081308- /* O_SYNC also has bit for O_DSYNC so following check picks up either */13091309- if (cfile->f_flags & O_SYNC)13101310- create_options |= CREATE_WRITE_THROUGH;13111311-13121312- if (cfile->f_flags & O_DIRECT)13131313- create_options |= CREATE_NO_BUFFER;13211321+ create_options |= cifs_open_create_options(cfile->f_flags,13221322+ create_options);1314132313151324 if (server->ops->get_lease_key)13161325 server->ops->get_lease_key(inode, &cfile->fid);···25092528 netfs_write_subrequest_terminated(&wdata->subreq, result);25102529}2511253025122512-struct cifsFileInfo 
*find_readable_file(struct cifsInodeInfo *cifs_inode,25132513- bool fsuid_only)25312531+static bool open_flags_match(struct cifsInodeInfo *cinode,25322532+ unsigned int oflags, unsigned int cflags)25332533+{25342534+ struct inode *inode = &cinode->netfs.inode;25352535+ int crw = 0, orw = 0;25362536+25372537+ oflags &= ~(O_CREAT | O_EXCL | O_TRUNC);25382538+ cflags &= ~(O_CREAT | O_EXCL | O_TRUNC);25392539+25402540+ if (cifs_fscache_enabled(inode)) {25412541+ if (OPEN_FMODE(cflags) & FMODE_WRITE)25422542+ crw = 1;25432543+ if (OPEN_FMODE(oflags) & FMODE_WRITE)25442544+ orw = 1;25452545+ }25462546+ if (cifs_convert_flags(oflags, orw) != cifs_convert_flags(cflags, crw))25472547+ return false;25482548+25492549+ return (oflags & (O_SYNC | O_DIRECT)) == (cflags & (O_SYNC | O_DIRECT));25502550+}25512551+25522552+struct cifsFileInfo *__find_readable_file(struct cifsInodeInfo *cifs_inode,25532553+ unsigned int find_flags,25542554+ unsigned int open_flags)25142555{25152556 struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode);25572557+ bool fsuid_only = find_flags & FIND_FSUID_ONLY;25162558 struct cifsFileInfo *open_file = NULL;2517255925182560 /* only filter by fsuid on multiuser mounts */···25482544 have a close pending, we go through the whole list */25492545 list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {25502546 if (fsuid_only && !uid_eq(open_file->uid, current_fsuid()))25472547+ continue;25482548+ if ((find_flags & FIND_NO_PENDING_DELETE) &&25492549+ open_file->status_file_deleted)25502550+ continue;25512551+ if ((find_flags & FIND_OPEN_FLAGS) &&25522552+ !open_flags_match(cifs_inode, open_flags,25532553+ open_file->f_flags))25512554 continue;25522555 if (OPEN_FMODE(open_file->f_flags) & FMODE_READ) {25532556 if ((!open_file->invalidHandle)) {···25742563}2575256425762565/* Return -EBADF if no handle is found and general rc otherwise */25772577-int25782578-cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags,25792579- struct cifsFileInfo 
**ret_file)25662566+int __cifs_get_writable_file(struct cifsInodeInfo *cifs_inode,25672567+ unsigned int find_flags, unsigned int open_flags,25682568+ struct cifsFileInfo **ret_file)25802569{25812570 struct cifsFileInfo *open_file, *inv_file = NULL;25822571 struct cifs_sb_info *cifs_sb;25832572 bool any_available = false;25842573 int rc = -EBADF;25852574 unsigned int refind = 0;25862586- bool fsuid_only = flags & FIND_WR_FSUID_ONLY;25872587- bool with_delete = flags & FIND_WR_WITH_DELETE;25752575+ bool fsuid_only = find_flags & FIND_FSUID_ONLY;25762576+ bool with_delete = find_flags & FIND_WITH_DELETE;25882577 *ret_file = NULL;2589257825902579 /*···26182607 continue;26192608 if (with_delete && !(open_file->fid.access & DELETE))26202609 continue;26212621- if ((flags & FIND_WR_NO_PENDING_DELETE) &&26102610+ if ((find_flags & FIND_NO_PENDING_DELETE) &&26222611 open_file->status_file_deleted)26122612+ continue;26132613+ if ((find_flags & FIND_OPEN_FLAGS) &&26142614+ !open_flags_match(cifs_inode, open_flags,26152615+ open_file->f_flags))26232616 continue;26242617 if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {26252618 if (!open_file->invalidHandle) {···27412726 cinode = CIFS_I(d_inode(cfile->dentry));27422727 spin_unlock(&tcon->open_file_lock);27432728 free_dentry_path(page);27442744- *ret_file = find_readable_file(cinode, 0);27452745- if (*ret_file) {27462746- spin_lock(&cinode->open_file_lock);27472747- if ((*ret_file)->status_file_deleted) {27482748- spin_unlock(&cinode->open_file_lock);27492749- cifsFileInfo_put(*ret_file);27502750- *ret_file = NULL;27512751- } else {27522752- spin_unlock(&cinode->open_file_lock);27532753- }27542754- }27292729+ *ret_file = find_readable_file(cinode, FIND_ANY);27552730 return *ret_file ? 
0 : -ENOENT;27562731 }27572732···28132808 }2814280928152810 if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {28162816- smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);28112811+ smbfile = find_writable_file(CIFS_I(inode), FIND_ANY);28172812 if (smbfile) {28182813 rc = server->ops->flush(xid, tcon, &smbfile->fid);28192814 cifsFileInfo_put(smbfile);···31683163 __u64 persistent_fid, volatile_fid;31693164 __u16 net_fid;3170316531713171- /*31723172- * Hold a reference to the superblock to prevent it and its inodes from31733173- * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()31743174- * may release the last reference to the sb and trigger inode eviction.31753175- */31763176- cifs_sb_active(sb);31773166 wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,31783167 TASK_UNINTERRUPTIBLE);31793168···32523253 cifs_put_tlink(tlink);32533254out:32543255 cifs_done_oplock_break(cinode);32553255- cifs_sb_deactive(sb);32563256}3257325732583258static int cifs_swap_activate(struct swap_info_struct *sis,
+1-1
fs/smb/client/fs_context.c
···19971997 ctx->backupuid_specified = false; /* no backup intent for a user */19981998 ctx->backupgid_specified = false; /* no backup intent for a group */1999199920002000- ctx->retrans = 1;20002000+ ctx->retrans = 0;20012001 ctx->reparse_type = CIFS_REPARSE_TYPE_DEFAULT;20022002 ctx->symlink_type = CIFS_SYMLINK_TYPE_DEFAULT;20032003 ctx->nonativesocket = 0;
···2828#include "fs_context.h"2929#include "cached_dir.h"30303131+struct tcon_list {3232+ struct list_head entry;3333+ struct cifs_tcon *tcon;3434+};3535+3136/* The xid serves as a useful identifier for each incoming vfs request,3237 in a similar way to the mid which is useful to track each sent smb,3338 and CurrentXid can also provide a running counter (although it···555550 list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) {556551 _cifsFileInfo_put(tmp_list->cfile, true, false);557552 list_del(&tmp_list->list);553553+ kfree(tmp_list);554554+ }555555+}556556+557557+void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb)558558+{559559+ struct rb_root *root = &cifs_sb->tlink_tree;560560+ struct rb_node *node;561561+ struct cifs_tcon *tcon;562562+ struct tcon_link *tlink;563563+ struct tcon_list *tmp_list, *q;564564+ LIST_HEAD(tcon_head);565565+566566+ spin_lock(&cifs_sb->tlink_tree_lock);567567+ for (node = rb_first(root); node; node = rb_next(node)) {568568+ tlink = rb_entry(node, struct tcon_link, tl_rbnode);569569+ tcon = tlink_tcon(tlink);570570+ if (IS_ERR(tcon))571571+ continue;572572+ tmp_list = kmalloc_obj(struct tcon_list, GFP_ATOMIC);573573+ if (tmp_list == NULL)574574+ break;575575+ tmp_list->tcon = tcon;576576+ /* Take a reference on tcon to prevent it from being freed */577577+ spin_lock(&tcon->tc_lock);578578+ ++tcon->tc_count;579579+ trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,580580+ netfs_trace_tcon_ref_get_close_defer_files);581581+ spin_unlock(&tcon->tc_lock);582582+ list_add_tail(&tmp_list->entry, &tcon_head);583583+ }584584+ spin_unlock(&cifs_sb->tlink_tree_lock);585585+586586+ list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) {587587+ cifs_close_all_deferred_files(tmp_list->tcon);588588+ list_del(&tmp_list->entry);589589+ cifs_put_tcon(tmp_list->tcon, netfs_trace_tcon_ref_put_close_defer_files);558590 kfree(tmp_list);559591 }560592}
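cifs_close_all_deferred_files_sb() above uses a common kernel pattern: while holding the lock that protects the shared structure, pin each object with a reference and queue it on a private list, then drop the lock and do the heavyweight work (here, closing deferred files) without it. A userspace sketch of the pattern (names and structures are illustrative, not the cifs ones):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative object: refcounted, linked into a shared list. */
struct obj {
	int refcount;
	struct obj *next;	/* link in the shared list	   */
	struct obj *work_next;	/* link in the collector's private list */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Walk the shared list under the lock, taking a reference on each
 * object and moving it to a private work list. The caller processes
 * the returned list, and drops the references, with the lock dropped. */
static struct obj *collect_refs(struct obj *shared_head)
{
	struct obj *o, *work = NULL;

	pthread_mutex_lock(&list_lock);
	for (o = shared_head; o; o = o->next) {
		o->refcount++;		/* pin the object...	      */
		o->work_next = work;	/* ...and queue it privately  */
		work = o;
	}
	pthread_mutex_unlock(&list_lock);
	return work;
}
```

The reference taken under the lock is what keeps each object alive once the lock is released, which is why the hunk above bumps tc_count before dropping tlink_tree_lock.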
+2-1
fs/smb/client/smb1encrypt.c
···11111212#include <linux/fips.h>1313#include <crypto/md5.h>1414+#include <crypto/utils.h>1415#include "cifsproto.h"1516#include "smb1proto.h"1617#include "cifs_debug.h"···132131/* cifs_dump_mem("what we think it should be: ",133132 what_we_think_sig_should_be, 16); */134133135135- if (memcmp(server_response_sig, what_we_think_sig_should_be, 8))134134+ if (crypto_memneq(server_response_sig, what_we_think_sig_should_be, 8))136135 return -EACCES;137136 else138137 return 0;
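The smb1 signing check above moves from memcmp() to crypto_memneq() because memcmp() may return at the first mismatching byte, leaking the position of the mismatch through timing. A sketch of the constant-time idea (the kernel's crypto_memneq() additionally defends against compiler optimizations; this simplified version does not):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified constant-time inequality check: accumulate XOR differences
 * instead of returning at the first mismatching byte, so the run time
 * does not depend on where (or whether) the buffers differ. */
static int ct_memneq(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];
	return diff != 0;	/* 1 iff the buffers differ */
}
```

For MAC or signature verification this matters: an early-exit compare lets an attacker guess the signature one byte at a time by measuring response latency.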
+1-1
fs/smb/client/smb1ops.c
···960960 struct cifs_tcon *tcon;961961962962 /* if the file is already open for write, just use that fileid */963963- open_file = find_writable_file(cinode, FIND_WR_FSUID_ONLY);963963+ open_file = find_writable_file(cinode, FIND_FSUID_ONLY);964964965965 if (open_file) {966966 fid.netfid = open_file->fid.netfid;
···14391439 return 0;1440144014411441out_abort:14421442+ /*14431443+ * Shut down the log before removing the dquot item from the AIL.14441444+ * Otherwise, the log tail may advance past this item's LSN while14451445+ * log writes are still in progress, making these unflushed changes14461446+ * unrecoverable on the next mount.14471447+ */14481448+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);14421449 dqp->q_flags &= ~XFS_DQFLAG_DIRTY;14431450 xfs_trans_ail_delete(lip, 0);14441444- xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);14451451 xfs_dqfunlock(dqp);14461452 return error;14471453}
+10-7
fs/xfs/xfs_healthmon.c
···141141 hm->mount_cookie = DETACHED_MOUNT_COOKIE;142142 spin_unlock(&xfs_healthmon_lock);143143144144+ /*145145+ * Wake up any readers that might remain. This can happen if unmount146146+ * races with the healthmon fd owner entering ->read_iter, having147147+ * already emptied the event queue.148148+ *149149+ * In the ->release case there shouldn't be any readers because the150150+ * only users of the waiter are read and poll.151151+ */152152+ wake_up_all(&hm->wait);153153+144154 trace_xfs_healthmon_detach(hm);145155 xfs_healthmon_put(hm);146156}···10371027 * process can create another health monitor file.10381028 */10391029 xfs_healthmon_detach(hm);10401040-10411041- /*10421042- * Wake up any readers that might be left. There shouldn't be any10431043- * because the only users of the waiter are read and poll.10441044- */10451045- wake_up_all(&hm->wait);10461046-10471030 xfs_healthmon_put(hm);10481031 return 0;10491032}
···3535 * otherwise. It may also return error code if determining that3636 * the driver supports the device is not possible. In case of3737 * -EPROBE_DEFER it will queue the device for deferred probing.3838+ * Note: This callback may be invoked with or without the device3939+ * lock held.3840 * @uevent: Called when a device is added, removed, or a few other things3941 * that generate uevents to add the environment variables.4042 * @probe: Called when a new device or driver add to this bus, and callback
···6868 * timeout value used in Stratix10 FPGA manager driver.6969 * timeout value used in RSU driver7070 */7171-#define SVC_RECONFIG_REQUEST_TIMEOUT_MS 3007272-#define SVC_RECONFIG_BUFFER_TIMEOUT_MS 7207373-#define SVC_RSU_REQUEST_TIMEOUT_MS 3007171+#define SVC_RECONFIG_REQUEST_TIMEOUT_MS 50007272+#define SVC_RECONFIG_BUFFER_TIMEOUT_MS 50007373+#define SVC_RSU_REQUEST_TIMEOUT_MS 20007474#define SVC_FCS_REQUEST_TIMEOUT_MS 20007575#define SVC_COMPLETED_TIMEOUT_MS 300007676-#define SVC_HWMON_REQUEST_TIMEOUT_MS 3007676+#define SVC_HWMON_REQUEST_TIMEOUT_MS 200077777878struct stratix10_svc_chan;7979
+6
include/linux/hid.h
···836836 * raw_event and event should return negative on error, any other value will837837 * pass the event on to .event() typically return 0 for success.838838 *839839+ * report_fixup must return a report descriptor pointer whose lifetime is at840840+ * least that of the input rdesc. This is usually done by mutating the input841841+ * rdesc and returning it or a sub-portion of it. In case a new buffer is842842+ * allocated and returned, the implementation of report_fixup is responsible for843843+ * freeing it later.844844+ *839845 * input_mapping shall return a negative value to completely ignore this usage840846 * (e.g. doubled or invalid usage), zero to continue with parsing of this841847 * usage by generic code (no special handling needed) or positive to skip
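The report_fixup lifetime rule documented above is most easily satisfied by mutating the caller's buffer and returning it, so no separately allocated descriptor needs tracking. A hypothetical, userspace-testable sketch of such a callback (hid_device is opaque here, and the byte values and the remap are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned char u8;
struct hid_device;	/* opaque; unused in this sketch */

/* Hypothetical report_fixup honoring the contract above: patch the
 * descriptor in place and return the same buffer, whose lifetime is
 * managed by the caller. */
static const u8 *my_report_fixup(struct hid_device *hdev, u8 *rdesc,
				 unsigned int *rsize)
{
	(void)hdev;
	/* Invented example quirk: rewrite the second descriptor byte. */
	if (*rsize >= 2 && rdesc[0] == 0x05 && rdesc[1] == 0x0d)
		rdesc[1] = 0x01;
	return rdesc;
}
```

A driver that instead kmalloc()s and returns a fresh descriptor takes on the freeing responsibility the comment describes.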
···234234};235235236236/**237237- * struct mmu_interval_notifier_ops237237+ * struct mmu_interval_notifier_ops - callback for range notification238238 * @invalidate: Upon return the caller must stop using any SPTEs within this239239 * range. This function can sleep. Return false only if sleeping240240 * was required but mmu_notifier_range_blockable(range) is false.···309309310310/**311311 * mmu_interval_set_seq - Save the invalidation sequence312312- * @interval_sub - The subscription passed to invalidate313313- * @cur_seq - The cur_seq passed to the invalidate() callback312312+ * @interval_sub: The subscription passed to invalidate313313+ * @cur_seq: The cur_seq passed to the invalidate() callback314314 *315315 * This must be called unconditionally from the invalidate callback of a316316 * struct mmu_interval_notifier_ops under the same lock that is used to call···329329330330/**331331 * mmu_interval_read_retry - End a read side critical section against a VA range332332- * interval_sub: The subscription333333- * seq: The return of the paired mmu_interval_read_begin()332332+ * @interval_sub: The subscription333333+ * @seq: The return of the paired mmu_interval_read_begin()334334 *335335 * This MUST be called under a user provided lock that is also held336336 * unconditionally by op->invalidate() when it calls mmu_interval_set_seq().···338338 * Each call should be paired with a single mmu_interval_read_begin() and339339 * should be used to conclude the read side.340340 *341341- * Returns true if an invalidation collided with this critical section, and341341+ * Returns: true if an invalidation collided with this critical section, and342342 * the caller should retry.343343 */344344static inline bool···350350351351/**352352 * mmu_interval_check_retry - Test if a collision has occurred353353- * interval_sub: The subscription354354- * seq: The return of the matching mmu_interval_read_begin()353353+ * @interval_sub: The subscription354354+ * @seq: The return of the 
matching mmu_interval_read_begin()355355 *356356 * This can be used in the critical section between mmu_interval_read_begin()357357- * and mmu_interval_read_retry(). A return of true indicates an invalidation358358- * has collided with this critical region and a future359359- * mmu_interval_read_retry() will return true.360360- *361361- * False is not reliable and only suggests a collision may not have362362- * occurred. It can be called many times and does not have to hold the user363363- * provided lock.357357+ * and mmu_interval_read_retry().364358 *365359 * This call can be used as part of loops and other expensive operations to366360 * expedite a retry.361361+ * It can be called many times and does not have to hold the user362362+ * provided lock.363363+ *364364+ * Returns: true indicates an invalidation has collided with this critical365365+ * region and a future mmu_interval_read_retry() will return true.366366+ * False is not reliable and only suggests a collision may not have367367+ * occurred.367368 */368369static inline bool369370mmu_interval_check_retry(struct mmu_interval_notifier *interval_sub,
···1313/**1414 * enum mlxreg_wdt_type - type of HW watchdog1515 *1616- * TYPE1 HW watchdog implementation exist in old systems.1717- * All new systems have TYPE2 HW watchdog.1818- * TYPE3 HW watchdog can exist on all systems with new CPLD.1919- * TYPE3 is selected by WD capability bit.1616+ * @MLX_WDT_TYPE1: HW watchdog implementation in old systems.1717+ * @MLX_WDT_TYPE2: All new systems have TYPE2 HW watchdog.1818+ * @MLX_WDT_TYPE3: HW watchdog that can exist on all systems with new CPLD.1919+ * TYPE3 is selected by WD capability bit.2020 */2121enum mlxreg_wdt_type {2222 MLX_WDT_TYPE1,···3535 * @MLXREG_HOTPLUG_LC_SYNCED: entry for line card synchronization events, coming3636 * after hardware-firmware synchronization handshake;3737 * @MLXREG_HOTPLUG_LC_READY: entry for line card ready events, indicating line card3838- PHYs ready / unready state;3838+ * PHYs ready / unready state;3939 * @MLXREG_HOTPLUG_LC_ACTIVE: entry for line card active events, indicating firmware4040 * availability / unavailability for the ports on line card;4141 * @MLXREG_HOTPLUG_LC_THERMAL: entry for line card thermal shutdown events, positive···123123 * @reg_pwr: attribute power register;124124 * @reg_ena: attribute enable register;125125 * @mode: access mode;126126- * @np - pointer to node platform associated with attribute;127127- * @hpdev - hotplug device data;126126+ * @np: pointer to node platform associated with attribute;127127+ * @hpdev: hotplug device data;128128 * @notifier: pointer to event notifier block;129129 * @health_cntr: dynamic device health indication counter;130130 * @attached: true if device has been attached after good health indication;
+3-2
include/linux/platform_data/x86/int3472.h
···2626#define INT3472_GPIO_TYPE_POWER_ENABLE 0x0b2727#define INT3472_GPIO_TYPE_CLK_ENABLE 0x0c2828#define INT3472_GPIO_TYPE_PRIVACY_LED 0x0d2929+#define INT3472_GPIO_TYPE_DOVDD 0x102930#define INT3472_GPIO_TYPE_HANDSHAKE 0x123031#define INT3472_GPIO_TYPE_HOTPLUG_DETECT 0x133132···3433#define INT3472_MAX_SENSOR_GPIOS 33534#define INT3472_MAX_REGULATORS 336353737-/* E.g. "avdd\0" */3838-#define GPIO_SUPPLY_NAME_LENGTH 53636+/* E.g. "dovdd\0" */3737+#define GPIO_SUPPLY_NAME_LENGTH 63938/* 12 chars for acpi_dev_name() + "-", e.g. "ABCD1234:00-" */4039#define GPIO_REGULATOR_NAME_LENGTH (12 + GPIO_SUPPLY_NAME_LENGTH)4140/* lower- and upper-case mapping */
+5-1
include/linux/rseq_types.h
···133133 * @active: MM CID is active for the task134134 * @cid: The CID associated to the task either permanently or135135 * borrowed from the CPU136136+ * @node: Queued in the per MM MMCID list136137 */137138struct sched_mm_cid {138139 unsigned int active;139140 unsigned int cid;141141+ struct hlist_node node;140142};141143142144/**···159157 * @work: Regular work to handle the affinity mode change case160158 * @lock: Spinlock to protect against affinity setting which can't take @mutex161159 * @mutex: Mutex to serialize forks and exits related to this mm160160+ * @user_list: List of the MM CID users of a MM162161 * @nr_cpus_allowed: The number of CPUs in the per MM allowed CPUs map. The map163162 * is growth only.164163 * @users: The number of tasks sharing this MM. Separate from mm::mm_users···180177181178 raw_spinlock_t lock;182179 struct mutex mutex;180180+ struct hlist_head user_list;183181184182 /* Low frequency modified */185183 unsigned int nr_cpus_allowed;186184 unsigned int users;187185 unsigned int pcpu_thrs;188186 unsigned int update_deferred;189189-}____cacheline_aligned_in_smp;187187+} ____cacheline_aligned;190188#else /* CONFIG_SCHED_MM_CID */191189struct mm_mm_cid { };192190struct sched_mm_cid { };
···792792793793/**794794 * scoped_user_rw_access_size - Start a scoped user read/write access with given size795795- * @uptr Pointer to the user space address to read from and write to795795+ * @uptr: Pointer to the user space address to read from and write to796796 * @size: Size of the access starting from @uptr797797 * @elbl: Error label to goto when the access region is rejected798798 *···803803804804/**805805 * scoped_user_rw_access - Start a scoped user read/write access806806- * @uptr Pointer to the user space address to read from and write to806806+ * @uptr: Pointer to the user space address to read from and write to807807 * @elbl: Error label to goto when the access region is rejected808808 *809809 * The size of the access starting from @uptr is determined via sizeof(*@uptr)).
+6-2
include/linux/usb.h
···18621862 * SYNCHRONOUS CALL SUPPORT *18631863 *-------------------------------------------------------------------*/1864186418651865+/* Maximum value allowed for timeout in synchronous routines below */18661866+#define USB_MAX_SYNCHRONOUS_TIMEOUT 60000 /* ms */18671867+18651868extern int usb_control_msg(struct usb_device *dev, unsigned int pipe,18661869 __u8 request, __u8 requesttype, __u16 value, __u16 index,18671870 void *data, __u16 size, int timeout);18681871extern int usb_interrupt_msg(struct usb_device *usb_dev, unsigned int pipe,18691872 void *data, int len, int *actual_length, int timeout);18701873extern int usb_bulk_msg(struct usb_device *usb_dev, unsigned int pipe,18711871- void *data, int len, int *actual_length,18721872- int timeout);18741874+ void *data, int len, int *actual_length, int timeout);18751875+extern int usb_bulk_msg_killable(struct usb_device *usb_dev, unsigned int pipe,18761876+ void *data, int len, int *actual_length, int timeout);1873187718741878/* wrappers around usb_control_msg() for the most common standard requests */18751879int usb_control_msg_send(struct usb_device *dev, __u8 endpoint, __u8 request,
···132132#define FLAG_MULTI_PACKET 0x2000133133#define FLAG_RX_ASSEMBLE 0x4000 /* rx packets may span >1 frames */134134#define FLAG_NOARP 0x8000 /* device can't do ARP */135135+#define FLAG_NOMAXMTU 0x10000 /* allow max_mtu above hard_mtu */135136136137 /* init device ... can sleep, or cause probe() failure */137138 int (*bind)(struct usbnet *, struct usb_interface *);
+14
include/net/ip6_tunnel.h
···156156{157157 int pkt_len, err;158158159159+ if (unlikely(dev_recursion_level() > IP_TUNNEL_RECURSION_LIMIT)) {160160+ if (dev) {161161+ net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",162162+ dev->name);163163+ DEV_STATS_INC(dev, tx_errors);164164+ }165165+ kfree_skb(skb);166166+ return;167167+ }168168+169169+ dev_xmit_recursion_inc();170170+159171 memset(skb->cb, 0, sizeof(struct inet6_skb_parm));160172 IP6CB(skb)->flags = ip6cb_flags;161173 pkt_len = skb->len - skb_inner_network_offset(skb);···178166 pkt_len = -1;179167 iptunnel_xmit_stats(dev, pkt_len);180168 }169169+170170+ dev_xmit_recursion_dec();181171}182172#endif183173#endif
+7
include/net/ip_tunnels.h
···2727#include <net/ip6_route.h>2828#endif29293030+/* Recursion limit for tunnel xmit to detect routing loops.3131+ * Unlike XMIT_RECURSION_LIMIT (8) used in the no-qdisc path, tunnel3232+ * recursion involves route lookups and full IP output, consuming much3333+ * more stack per level, so a lower limit is needed.3434+ */3535+#define IP_TUNNEL_RECURSION_LIMIT 43636+3037/* Keep error state on tunnel for 30 sec */3138#define IPTUNNEL_ERR_TIMEO (30*HZ)3239
···188188/*189189 * If COOP_TASKRUN is set, get notified if task work is available for190190 * running and a kernel transition would be needed to run it. This sets191191- * IORING_SQ_TASKRUN in the sq ring flags. Not valid with COOP_TASKRUN.191191+ * IORING_SQ_TASKRUN in the sq ring flags. Not valid without COOP_TASKRUN192192+ * or DEFER_TASKRUN.192193 */193194#define IORING_SETUP_TASKRUN_FLAG (1U << 9)194195#define IORING_SETUP_SQE128 (1U << 10) /* SQEs are 128 byte */
···19021902 default n19031903 depends on IO_URING19041904 help19051905- Enable mock files for io_uring subststem testing. The ABI might19051905+ Enable mock files for io_uring subsystem testing. The ABI might19061906 still change, so it's still experimental and should only be enabled19071907 for specific test purposes.19081908
+1-1
io_uring/bpf_filter.c
···8585 do {8686 if (filter == &dummy_filter)8787 return -EACCES;8888- ret = bpf_prog_run(filter->prog, &bpf_ctx);8888+ ret = bpf_prog_run_pin_on_cpu(filter->prog, &bpf_ctx);8989 if (!ret)9090 return -EACCES;9191 filter = filter->next;
+7-3
io_uring/eventfd.c
···7676{7777 bool skip = false;7878 struct io_ev_fd *ev_fd;7979-8080- if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)8181- return;7979+ struct io_rings *rings;82808381 guard(rcu)();8282+8383+ rings = rcu_dereference(ctx->rings_rcu);8484+ if (!rings)8585+ return;8686+ if (READ_ONCE(rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)8787+ return;8488 ev_fd = rcu_dereference(ctx->io_ev_fd);8589 /*8690 * Check again if ev_fd exists in case an io_eventfd_unregister call
+3-1
io_uring/io_uring.c
···17451745 * well as 2 contiguous entries.17461746 */17471747 if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||17481748- !(ctx->cached_sq_head & (ctx->sq_entries - 1)))17481748+ (unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)17491749 return io_init_fail_req(req, -EINVAL);17501750 /*17511751 * A 128b operation on a mixed SQ uses two entries, so we have···20662066 io_free_region(ctx->user, &ctx->sq_region);20672067 io_free_region(ctx->user, &ctx->ring_region);20682068 ctx->rings = NULL;20692069+ RCU_INIT_POINTER(ctx->rings_rcu, NULL);20692070 ctx->sq_sqes = NULL;20702071}20712072···27042703 if (ret)27052704 return ret;27062705 ctx->rings = rings = io_region_get_ptr(&ctx->ring_region);27062706+ rcu_assign_pointer(ctx->rings_rcu, rings);27072707 if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))27082708 ctx->sq_array = (u32 *)((char *)rings + rl->sq_array_offset);27092709
+11-2
io_uring/kbuf.c
···111111112112 buf = req->kbuf;113113 bl = io_buffer_get_list(ctx, buf->bgid);114114- list_add(&buf->list, &bl->buf_list);115115- bl->nbufs++;114114+ /*115115+ * If the buffer list was upgraded to a ring-based one, or removed,116116+ * while the request was in-flight in io-wq, drop it.117117+ */118118+ if (bl && !(bl->flags & IOBL_BUF_RING)) {119119+ list_add(&buf->list, &bl->buf_list);120120+ bl->nbufs++;121121+ } else {122122+ kfree(buf);123123+ }116124 req->flags &= ~REQ_F_BUFFER_SELECTED;125125+ req->kbuf = NULL;117126118127 io_ring_submit_unlock(ctx, issue_flags);119128 return true;
···202202 return -EPERM;203203 /*204204 * Similar to seccomp, disallow setting a filter if task_no_new_privs205205- * is true and we're not CAP_SYS_ADMIN.205205+ * is false and we're not CAP_SYS_ADMIN.206206 */207207 if (!task_no_new_privs(current) &&208208 !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))···238238239239 /*240240 * Similar to seccomp, disallow setting a filter if task_no_new_privs241241- * is true and we're not CAP_SYS_ADMIN.241241+ * is false and we're not CAP_SYS_ADMIN.242242 */243243 if (!task_no_new_privs(current) &&244244 !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))···633633 ctx->sq_entries = p->sq_entries;634634 ctx->cq_entries = p->cq_entries;635635636636+ /*637637+ * Just mark any flag we may have missed and that the application638638+ * should act on unconditionally. Worst case it'll be an extra639639+ * syscall.640640+ */641641+ atomic_or(IORING_SQ_TASKRUN | IORING_SQ_NEED_WAKEUP, &n.rings->sq_flags);636642 ctx->rings = n.rings;643643+ rcu_assign_pointer(ctx->rings_rcu, n.rings);644644+637645 ctx->sq_sqes = n.sq_sqes;638646 swap_old(ctx, o, n, ring_region);639647 swap_old(ctx, o, n, sq_region);···650642out:651643 spin_unlock(&ctx->completion_lock);652644 mutex_unlock(&ctx->mmap_lock);645645+ /* Wait for concurrent io_ctx_mark_taskrun() */646646+ if (to_free == &o)647647+ synchronize_rcu_expedited();653648 io_register_free_rings(ctx, to_free);654649655650 if (ctx->sq_data)
+20-2
io_uring/tw.c
···152152 WARN_ON_ONCE(ret);153153}154154155155+/*156156+ * Sets IORING_SQ_TASKRUN in the sq_flags shared with userspace, using the157157+ * RCU protected rings pointer to be safe against concurrent ring resizing.158158+ */159159+static void io_ctx_mark_taskrun(struct io_ring_ctx *ctx)160160+{161161+ lockdep_assert_in_rcu_read_lock();162162+163163+ if (ctx->flags & IORING_SETUP_TASKRUN_FLAG) {164164+ struct io_rings *rings = rcu_dereference(ctx->rings_rcu);165165+166166+ atomic_or(IORING_SQ_TASKRUN, &rings->sq_flags);167167+ }168168+}169169+155170void io_req_local_work_add(struct io_kiocb *req, unsigned flags)156171{157172 struct io_ring_ctx *ctx = req->ctx;···221206 */222207223208 if (!head) {224224- if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)225225- atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);209209+ io_ctx_mark_taskrun(ctx);226210 if (ctx->has_evfd)227211 io_eventfd_signal(ctx, false);228212 }···245231 if (!llist_add(&req->io_task_work.node, &tctx->task_list))246232 return;247233234234+ /*235235+ * Doesn't need to use ->rings_rcu, as resizing isn't supported for236236+ * !DEFER_TASKRUN.237237+ */248238 if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)249239 atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);250240
···10021002 mutex_lock(&tr->mutex);1003100310041004 shim_link = cgroup_shim_find(tr, bpf_func);10051005- if (shim_link) {10051005+ if (shim_link && !IS_ERR(bpf_link_inc_not_zero(&shim_link->link.link))) {10061006 /* Reusing existing shim attached by the other program. */10071007- bpf_link_inc(&shim_link->link.link);10081008-10091007 mutex_unlock(&tr->mutex);10101008 bpf_trampoline_put(tr); /* bpf_trampoline_get above */10111009 return 0;
+34-4
kernel/bpf/verifier.c
···25112511 if ((u32)reg->s32_min_value <= (u32)reg->s32_max_value) {25122512 reg->u32_min_value = max_t(u32, reg->s32_min_value, reg->u32_min_value);25132513 reg->u32_max_value = min_t(u32, reg->s32_max_value, reg->u32_max_value);25142514+ } else {25152515+ if (reg->u32_max_value < (u32)reg->s32_min_value) {25162516+ /* See __reg64_deduce_bounds() for detailed explanation.25172517+ * Refine ranges in the following situation:25182518+ *25192519+ * 0 U32_MAX25202520+ * | [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx] |25212521+ * |----------------------------|----------------------------|25222522+ * |xxxxx s32 range xxxxxxxxx] [xxxxxxx|25232523+ * 0 S32_MAX S32_MIN -125242524+ */25252525+ reg->s32_min_value = (s32)reg->u32_min_value;25262526+ reg->u32_max_value = min_t(u32, reg->u32_max_value, reg->s32_max_value);25272527+ } else if ((u32)reg->s32_max_value < reg->u32_min_value) {25282528+ /*25292529+ * 0 U32_MAX25302530+ * | [xxxxxxxxxxxxxx u32 range xxxxxxxxxxxxxx] |25312531+ * |----------------------------|----------------------------|25322532+ * |xxxxxxxxx] [xxxxxxxxxxxx s32 range |25332533+ * 0 S32_MAX S32_MIN -125342534+ */25352535+ reg->s32_max_value = (s32)reg->u32_max_value;25362536+ reg->u32_min_value = max_t(u32, reg->u32_min_value, reg->s32_min_value);25372537+ }25142538 }25152539}25162540···1735917335 * in verifier state, save R in linked_regs if R->id == id.1736017336 * If there are too many Rs sharing same id, reset id for leftover Rs.1736117337 */1736217362-static void collect_linked_regs(struct bpf_verifier_state *vstate, u32 id,1733817338+static void collect_linked_regs(struct bpf_verifier_env *env,1733917339+ struct bpf_verifier_state *vstate,1734017340+ u32 id,1736317341 struct linked_regs *linked_regs)1736417342{1734317343+ struct bpf_insn_aux_data *aux = env->insn_aux_data;1736517344 struct bpf_func_state *func;1736617345 struct bpf_reg_state *reg;1734617346+ u16 live_regs;1736717347 int i, j;17368173481736917349 id = id & ~BPF_ADD_CONST;1737017350 
for (i = vstate->curframe; i >= 0; i--) {1735117351+ live_regs = aux[frame_insn_idx(vstate, i)].live_regs_before;1737117352 func = vstate->frame[i];1737217353 for (j = 0; j < BPF_REG_FP; j++) {1735417354+ if (!(live_regs & BIT(j)))1735517355+ continue;1737317356 reg = &func->regs[j];1737417357 __collect_linked_regs(linked_regs, reg, id, i, j, true);1737517358 }···1759117560 * if parent state is created.1759217561 */1759317562 if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id)1759417594- collect_linked_regs(this_branch, src_reg->id, &linked_regs);1756317563+ collect_linked_regs(env, this_branch, src_reg->id, &linked_regs);1759517564 if (dst_reg->type == SCALAR_VALUE && dst_reg->id)1759617596- collect_linked_regs(this_branch, dst_reg->id, &linked_regs);1756517565+ collect_linked_regs(env, this_branch, dst_reg->id, &linked_regs);1759717566 if (linked_regs.cnt > 1) {1759817567 err = push_jmp_history(env, this_branch, 0, linked_regs_pack(&linked_regs));1759917568 if (err)···2529225261BTF_ID(func, do_exit)2529325262BTF_ID(func, do_group_exit)2529425263BTF_ID(func, kthread_complete_and_exit)2529525295-BTF_ID(func, kthread_exit)2529625264BTF_ID(func, make_task_dead)2529725265BTF_SET_END(noreturn_deny)2529825266
+6
kernel/cgroup/cgroup.c
···51095109 return;5110511051115111 task = list_entry(it->task_pos, struct task_struct, cg_list);51125112+ /*51135113+ * Hide tasks that are exiting but not yet removed. Keep zombie51145114+ * leaders with live threads visible.51155115+ */51165116+ if ((task->flags & PF_EXITING) && !atomic_read(&task->signal->live))51175117+ goto repeat;5112511851135119 if (it->flags & CSS_TASK_ITER_PROCS) {51145120 /* if PROCS, skip over tasks which aren't group leaders */
+31-28
kernel/cgroup/cpuset.c
···879879 /*880880 * Cgroup v2 doesn't support domain attributes, just set all of them881881 * to SD_ATTR_INIT. Also non-isolating partition root CPUs are a882882- * subset of HK_TYPE_DOMAIN housekeeping CPUs.882882+ * subset of HK_TYPE_DOMAIN_BOOT housekeeping CPUs.883883 */884884 for (i = 0; i < ndoms; i++) {885885 /*···888888 */889889 if (!csa || csa[i] == &top_cpuset)890890 cpumask_and(doms[i], top_cpuset.effective_cpus,891891- housekeeping_cpumask(HK_TYPE_DOMAIN));891891+ housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));892892 else893893 cpumask_copy(doms[i], csa[i]->effective_cpus);894894 if (dattr)···13291329}1330133013311331/*13321332- * update_hk_sched_domains - Update HK cpumasks & rebuild sched domains13321332+ * cpuset_update_sd_hk_unlock - Rebuild sched domains, update HK & unlock13331333 *13341334- * Update housekeeping cpumasks and rebuild sched domains if necessary.13351335- * This should be called at the end of cpuset or hotplug actions.13341334+ * Update housekeeping cpumasks and rebuild sched domains if necessary and13351335+ * then do a cpuset_full_unlock().13361336+ * This should be called at the end of cpuset operation.13361337 */13371337-static void update_hk_sched_domains(void)13381338+static void cpuset_update_sd_hk_unlock(void)13391339+ __releases(&cpuset_mutex)13401340+ __releases(&cpuset_top_mutex)13381341{13421342+ /* force_sd_rebuild will be cleared in rebuild_sched_domains_locked() */13431343+ if (force_sd_rebuild)13441344+ rebuild_sched_domains_locked();13451345+13391346 if (update_housekeeping) {13401340- /* Updating HK cpumasks implies rebuild sched domains */13411347 update_housekeeping = false;13421342- force_sd_rebuild = true;13431348 cpumask_copy(isolated_hk_cpus, isolated_cpus);1344134913451350 /*···13551350 mutex_unlock(&cpuset_mutex);13561351 cpus_read_unlock();13571352 WARN_ON_ONCE(housekeeping_update(isolated_hk_cpus));13581358- cpus_read_lock();13591359- mutex_lock(&cpuset_mutex);13531353+ 
mutex_unlock(&cpuset_top_mutex);13541354+ } else {13551355+ cpuset_full_unlock();13601356 }13611361- /* force_sd_rebuild will be cleared in rebuild_sched_domains_locked() */13621362- if (force_sd_rebuild)13631363- rebuild_sched_domains_locked();13641357}1365135813661359/*13671367- * Work function to invoke update_hk_sched_domains()13601360+ * Work function to invoke cpuset_update_sd_hk_unlock()13681361 */13691362static void hk_sd_workfn(struct work_struct *work)13701363{13711364 cpuset_full_lock();13721372- update_hk_sched_domains();13731373- cpuset_full_unlock();13651365+ cpuset_update_sd_hk_unlock();13741366}1375136713761368/**···3232323032333231 free_cpuset(trialcs);32343232out_unlock:32353235- update_hk_sched_domains();32363236- cpuset_full_unlock();32333233+ cpuset_update_sd_hk_unlock();32373234 if (of_cft(of)->private == FILE_MEMLIST)32383235 schedule_flush_migrate_mm();32393236 return retval ?: nbytes;···33393338 cpuset_full_lock();33403339 if (is_cpuset_online(cs))33413340 retval = update_prstate(cs, val);33423342- update_hk_sched_domains();33433343- cpuset_full_unlock();33413341+ cpuset_update_sd_hk_unlock();33443342 return retval ?: nbytes;33453343}33463344···35133513 /* Reset valid partition back to member */35143514 if (is_partition_valid(cs))35153515 update_prstate(cs, PRS_MEMBER);35163516- update_hk_sched_domains();35173517- cpuset_full_unlock();35163516+ cpuset_update_sd_hk_unlock();35183517}3519351835203519static void cpuset_css_free(struct cgroup_subsys_state *css)···39223923 rcu_read_unlock();39233924 }3924392539253925-39263926 /*39273927- * Queue a work to call housekeeping_update() & rebuild_sched_domains()39283928- * There will be a slight delay before the HK_TYPE_DOMAIN housekeeping39293929- * cpumask can correctly reflect what is in isolated_cpus.39273927+ * rebuild_sched_domains() will always be called directly if needed39283928+ * to make sure that newly added or removed CPU will be reflected in39293929+ * the sched domains. 
However, if isolated partition invalidation39303930+ * or recreation is being done (update_housekeeping set), a work item39313931+ * will be queued to call housekeeping_update() to update the39323932+ * corresponding housekeeping cpumasks after some slight delay.39303933 *39313934 * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work item that39323935 * is still pending. Before the pending bit is cleared, the work data···39373936 * previously queued work. Since hk_sd_workfn() doesn't use the work39383937 * item at all, this is not a problem.39393938 */39403940- if (update_housekeeping || force_sd_rebuild)39413941- queue_work(system_unbound_wq, &hk_sd_work);39393939+ if (force_sd_rebuild)39403940+ rebuild_sched_domains_cpuslocked();39413941+ if (update_housekeeping)39423942+ queue_work(system_dfl_wq, &hk_sd_work);3942394339433944 free_tmpmasks(ptmp);39443945}
···11441144 lockdep_assert_held(&kprobe_mutex);1145114511461146 ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 0, 0);11471147- if (WARN_ONCE(ret < 0, "Failed to arm kprobe-ftrace at %pS (error %d)\n", p->addr, ret))11471147+ if (ret < 0)11481148 return ret;1149114911501150 if (*cnt == 0) {11511151 ret = register_ftrace_function(ops);11521152- if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) {11521152+ if (ret < 0) {11531153 /*11541154 * At this point, since ops is not registered, we should be safe from11551155 * registering empty filter.···11781178 int ret;1179117911801180 lockdep_assert_held(&kprobe_mutex);11811181+ if (unlikely(kprobe_ftrace_disabled)) {11821182+ /* Now ftrace is disabled forever, disarm is already done. */11831183+ return 0;11841184+ }1181118511821186 if (*cnt == 1) {11831187 ret = unregister_ftrace_function(ops);
+29-52
kernel/sched/core.c
···47244724 scx_cancel_fork(p);47254725}4726472647274727+static void sched_mm_cid_fork(struct task_struct *t);47284728+47274729void sched_post_fork(struct task_struct *p)47284730{47314731+ sched_mm_cid_fork(p);47294732 uclamp_post_fork(p);47304733 scx_post_fork(p);47314734}···1061510612 }1061610613}10617106141061810618-static bool mm_cid_fixup_task_to_cpu(struct task_struct *t, struct mm_struct *mm)1061510615+static void mm_cid_fixup_task_to_cpu(struct task_struct *t, struct mm_struct *mm)1061910616{1062010617 /* Remote access to mm::mm_cid::pcpu requires rq_lock */1062110618 guard(task_rq_lock)(t);1062210622- /* If the task is not active it is not in the users count */1062310623- if (!t->mm_cid.active)1062410624- return false;1062510619 if (cid_on_task(t->mm_cid.cid)) {1062610620 /* If running on the CPU, put the CID in transit mode, otherwise drop it */1062710621 if (task_rq(t)->curr == t)···1062610626 else1062710627 mm_unset_cid_on_task(t);1062810628 }1062910629- return true;1063010630-}1063110631-1063210632-static void mm_cid_do_fixup_tasks_to_cpus(struct mm_struct *mm)1063310633-{1063410634- struct task_struct *p, *t;1063510635- unsigned int users;1063610636-1063710637- /*1063810638- * This can obviously race with a concurrent affinity change, which1063910639- * increases the number of allowed CPUs for this mm, but that does1064010640- * not affect the mode and only changes the CID constraints. A1064110641- * possible switch back to per task mode happens either in the1064210642- * deferred handler function or in the next fork()/exit().1064310643- *1064410644- * The caller has already transferred. 
The newly incoming task is1064510645- * already accounted for, but not yet visible.1064610646- */1064710647- users = mm->mm_cid.users - 2;1064810648- if (!users)1064910649- return;1065010650-1065110651- guard(rcu)();1065210652- for_other_threads(current, t) {1065310653- if (mm_cid_fixup_task_to_cpu(t, mm))1065410654- users--;1065510655- }1065610656-1065710657- if (!users)1065810658- return;1065910659-1066010660- /* Happens only for VM_CLONE processes. */1066110661- for_each_process_thread(p, t) {1066210662- if (t == current || t->mm != mm)1066310663- continue;1066410664- if (mm_cid_fixup_task_to_cpu(t, mm)) {1066510665- if (--users == 0)1066610666- return;1066710667- }1066810668- }1066910629}10670106301067110631static void mm_cid_fixup_tasks_to_cpus(void)1067210632{1067310633 struct mm_struct *mm = current->mm;1063410634+ struct task_struct *t;10674106351067510675- mm_cid_do_fixup_tasks_to_cpus(mm);1063610636+ lockdep_assert_held(&mm->mm_cid.mutex);1063710637+1063810638+ hlist_for_each_entry(t, &mm->mm_cid.user_list, mm_cid.node) {1063910639+ /* Current has already transferred before invoking the fixup. 
*/1064010640+ if (t != current)1064110641+ mm_cid_fixup_task_to_cpu(t, mm);1064210642+ }1064310643+1067610644 mm_cid_complete_transit(mm, MM_CID_ONCPU);1067710645}10678106461067910647static bool sched_mm_cid_add_user(struct task_struct *t, struct mm_struct *mm)1068010648{1064910649+ lockdep_assert_held(&mm->mm_cid.lock);1065010650+1068110651 t->mm_cid.active = 1;1065210652+ hlist_add_head(&t->mm_cid.node, &mm->mm_cid.user_list);1068210653 mm->mm_cid.users++;1068310654 return mm_update_max_cids(mm);1068410655}10685106561068610686-void sched_mm_cid_fork(struct task_struct *t)1065710657+static void sched_mm_cid_fork(struct task_struct *t)1068710658{1068810659 struct mm_struct *mm = t->mm;1068910660 bool percpu;10690106611069110691- WARN_ON_ONCE(!mm || t->mm_cid.cid != MM_CID_UNSET);1066210662+ if (!mm)1066310663+ return;1066410664+1066510665+ WARN_ON_ONCE(t->mm_cid.cid != MM_CID_UNSET);10692106661069310667 guard(mutex)(&mm->mm_cid.mutex);1069410668 scoped_guard(raw_spinlock_irq, &mm->mm_cid.lock) {···10701107271070210728static bool sched_mm_cid_remove_user(struct task_struct *t)1070310729{1073010730+ lockdep_assert_held(&t->mm->mm_cid.lock);1073110731+1070410732 t->mm_cid.active = 0;1070510705- scoped_guard(preempt) {1070610706- /* Clear the transition bit */1070710707- t->mm_cid.cid = cid_from_transit_cid(t->mm_cid.cid);1070810708- mm_unset_cid_on_task(t);1070910709- }1073310733+ /* Clear the transition bit */1073410734+ t->mm_cid.cid = cid_from_transit_cid(t->mm_cid.cid);1073510735+ mm_unset_cid_on_task(t);1073610736+ hlist_del_init(&t->mm_cid.node);1071010737 t->mm->mm_cid.users--;1071110738 return mm_update_max_cids(t->mm);1071210739}···1085010875 mutex_init(&mm->mm_cid.mutex);1085110876 mm->mm_cid.irq_work = IRQ_WORK_INIT_HARD(mm_cid_irq_work);1085210877 INIT_WORK(&mm->mm_cid.work, mm_cid_work_fn);1087810878+ INIT_HLIST_HEAD(&mm->mm_cid.user_list);1085310879 cpumask_copy(mm_cpus_allowed(mm), &p->cpus_mask);1085410880 bitmap_zero(mm_cidmask(mm), 
num_possible_cpus());1085510881}1085610882#else /* CONFIG_SCHED_MM_CID */1085710883static inline void mm_update_cpus_allowed(struct mm_struct *mm, const struct cpumask *affmsk) { }1088410884+static inline void sched_mm_cid_fork(struct task_struct *t) { }1085810885#endif /* !CONFIG_SCHED_MM_CID */10859108861086010887static DEFINE_PER_CPU(struct sched_change_ctx, sched_change_ctx);
+11-11
kernel/sched/ext.c
···11031103 }1104110411051105 /* seq records the order tasks are queued, used by BPF DSQ iterator */11061106- dsq->seq++;11061106+ WRITE_ONCE(dsq->seq, dsq->seq + 1);11071107 p->scx.dsq_seq = dsq->seq;1108110811091109 dsq_mod_nr(dsq, 1);···14701470 p->scx.flags |= SCX_TASK_RESET_RUNNABLE_AT;14711471}1472147214731473-static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int enq_flags)14731473+static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int core_enq_flags)14741474{14751475 struct scx_sched *sch = scx_root;14761476 int sticky_cpu = p->scx.sticky_cpu;14771477+ u64 enq_flags = core_enq_flags | rq->scx.extra_enq_flags;1477147814781479 if (enq_flags & ENQUEUE_WAKEUP)14791480 rq->scx.flags |= SCX_RQ_IN_WAKEUP;14801480-14811481- enq_flags |= rq->scx.extra_enq_flags;1482148114831482 if (sticky_cpu >= 0)14841483 p->scx.sticky_cpu = -1;···39073908 * consider offloading iff the total queued duration is over the39083909 * threshold.39093910 */39103910- min_delta_us = scx_bypass_lb_intv_us / SCX_BYPASS_LB_MIN_DELTA_DIV;39113911- if (delta < DIV_ROUND_UP(min_delta_us, scx_slice_bypass_us))39113911+ min_delta_us = READ_ONCE(scx_bypass_lb_intv_us) / SCX_BYPASS_LB_MIN_DELTA_DIV;39123912+ if (delta < DIV_ROUND_UP(min_delta_us, READ_ONCE(scx_slice_bypass_us)))39123913 return 0;3913391439143915 raw_spin_rq_lock_irq(rq);···41364137 WARN_ON_ONCE(scx_bypass_depth <= 0);41374138 if (scx_bypass_depth != 1)41384139 goto unlock;41394139- WRITE_ONCE(scx_slice_dfl, scx_slice_bypass_us * NSEC_PER_USEC);41404140+ WRITE_ONCE(scx_slice_dfl, READ_ONCE(scx_slice_bypass_us) * NSEC_PER_USEC);41404141 bypass_timestamp = ktime_get_ns();41414142 if (sch)41424143 scx_add_event(sch, SCX_EV_BYPASS_ACTIVATE, 1);···52585259 if (!READ_ONCE(helper)) {52595260 mutex_lock(&helper_mutex);52605261 if (!helper) {52615261- helper = kthread_run_worker(0, "scx_enable_helper");52625262- if (IS_ERR_OR_NULL(helper)) {52635263- helper = NULL;52625262+ struct kthread_worker *w =52635263+ 
kthread_run_worker(0, "scx_enable_helper");52645264+ if (IS_ERR_OR_NULL(w)) {52645265 mutex_unlock(&helper_mutex);52655266 return -ENOMEM;52665267 }52675267- sched_set_fifo(helper->task);52685268+ sched_set_fifo(w->task);52695269+ WRITE_ONCE(helper, w);52685270 }52695271 mutex_unlock(&helper_mutex);52705272 }
+98-16
kernel/sched/ext_internal.h
···
 };
 
 /*
- * sched_ext_entity->ops_state
+ * Task Ownership State Machine (sched_ext_entity->ops_state)
  *
- * Used to track the task ownership between the SCX core and the BPF scheduler.
- * State transitions look as follows:
+ * The sched_ext core uses this state machine to track task ownership
+ * between the SCX core and the BPF scheduler. This allows the BPF
+ * scheduler to dispatch tasks without strict ordering requirements, while
+ * the SCX core safely rejects invalid dispatches.
  *
- * NONE -> QUEUEING -> QUEUED -> DISPATCHING
- *   ^              |                 |
- *   |              v                 v
- *   \-------------------------------/
+ * State Transitions
  *
- * QUEUEING and DISPATCHING states can be waited upon. See wait_ops_state() call
- * sites for explanations on the conditions being waited upon and why they are
- * safe. Transitions out of them into NONE or QUEUED must store_release and the
- * waiters should load_acquire.
+ *    .------------> NONE (owned by SCX core)
+ *    |               |  ^
+ *    |       enqueue |  | direct dispatch
+ *    |               v  |
+ *    |           QUEUEING -'
+ *    |               |
+ *    |       enqueue |
+ *    |     completes |
+ *    |               v
+ *    |            QUEUED (owned by BPF scheduler)
+ *    |               |
+ *    |      dispatch |
+ *    |               |
+ *    |               v
+ *    |          DISPATCHING
+ *    |               |
+ *    |      dispatch |
+ *    |     completes |
+ *    `---------------'
  *
- * Tracking scx_ops_state enables sched_ext core to reliably determine whether
- * any given task can be dispatched by the BPF scheduler at all times and thus
- * relaxes the requirements on the BPF scheduler. This allows the BPF scheduler
- * to try to dispatch any task anytime regardless of its state as the SCX core
- * can safely reject invalid dispatches.
+ * State Descriptions
+ *
+ * - %SCX_OPSS_NONE:
+ *   Task is owned by the SCX core. It's either on a run queue, running,
+ *   or being manipulated by the core scheduler. The BPF scheduler has no
+ *   claim on this task.
+ *
+ * - %SCX_OPSS_QUEUEING:
+ *   Transitional state while transferring a task from the SCX core to
+ *   the BPF scheduler. The task's rq lock is held during this state.
+ *   Since QUEUEING is both entered and exited under the rq lock, dequeue
+ *   can never observe this state (it would be a BUG). When finishing a
+ *   dispatch, if the task is still in %SCX_OPSS_QUEUEING the completion
+ *   path busy-waits for it to leave this state (via wait_ops_state())
+ *   before retrying.
+ *
+ * - %SCX_OPSS_QUEUED:
+ *   Task is owned by the BPF scheduler. It's on a DSQ (dispatch queue)
+ *   and the BPF scheduler is responsible for dispatching it. A QSEQ
+ *   (queue sequence number) is embedded in this state to detect
+ *   dispatch/dequeue races: if a task is dequeued and re-enqueued, the
+ *   QSEQ changes and any in-flight dispatch operations targeting the old
+ *   QSEQ are safely ignored.
+ *
+ * - %SCX_OPSS_DISPATCHING:
+ *   Transitional state while transferring a task from the BPF scheduler
+ *   back to the SCX core. This state indicates the BPF scheduler has
+ *   selected the task for execution. When dequeue needs to take the task
+ *   off a DSQ and it is still in %SCX_OPSS_DISPATCHING, the dequeue path
+ *   busy-waits for it to leave this state (via wait_ops_state()) before
+ *   proceeding. Exits to %SCX_OPSS_NONE when dispatch completes.
+ *
+ * Memory Ordering
+ *
+ * Transitions out of %SCX_OPSS_QUEUEING and %SCX_OPSS_DISPATCHING into
+ * %SCX_OPSS_NONE or %SCX_OPSS_QUEUED must use atomic_long_set_release()
+ * and waiters must use atomic_long_read_acquire(). This ensures proper
+ * synchronization between concurrent operations.
+ *
+ * Cross-CPU Task Migration
+ *
+ * When moving a task in the %SCX_OPSS_DISPATCHING state, we can't simply
+ * grab the target CPU's rq lock because a concurrent dequeue might be
+ * waiting on %SCX_OPSS_DISPATCHING while holding the source rq lock
+ * (deadlock).
+ *
+ * The sched_ext core uses a "lock dancing" protocol coordinated by
+ * p->scx.holding_cpu. When moving a task to a different rq:
+ *
+ * 1. Verify task can be moved (CPU affinity, migration_disabled, etc.)
+ * 2. Set p->scx.holding_cpu to the current CPU
+ * 3. Set task state to %SCX_OPSS_NONE; dequeue waits while DISPATCHING
+ *    is set, so clearing DISPATCHING first prevents the circular wait
+ *    (safe to lock the rq we need)
+ * 4. Unlock the current CPU's rq
+ * 5. Lock src_rq (where the task currently lives)
+ * 6. Verify p->scx.holding_cpu == current CPU; if not, dequeue won the
+ *    race (dequeue clears holding_cpu to -1 when it takes the task), in
+ *    which case migration is aborted
+ * 7. If src_rq == dst_rq: clear holding_cpu and enqueue directly
+ *    into dst_rq's local DSQ (no lock swap needed)
+ * 8. Otherwise: call move_remote_task_to_local_dsq(), which releases
+ *    src_rq, locks dst_rq, and performs the deactivate/activate
+ *    migration cycle (dst_rq is held on return)
+ * 9. Unlock dst_rq and re-lock the current CPU's rq to restore
+ *    the lock state expected by the caller
+ *
+ * If any verification fails, abort the migration.
+ *
+ * This state tracking allows the BPF scheduler to try to dispatch any task
+ * at any time regardless of its state. The SCX core can safely
+ * reject/ignore invalid dispatches, simplifying the BPF scheduler
+ * implementation.
  */
 enum scx_ops_state {
 	SCX_OPSS_NONE,		/* owned by the SCX core */
+10-1
kernel/sched/idle.c
···
 
 		next_state = cpuidle_find_deepest_state(drv, dev, max_latency_ns);
 		call_cpuidle(drv, dev, next_state);
-	} else {
+	} else if (drv->state_count > 1) {
 		bool stop_tick = true;
 
 		/*
···
 		 * Give the governor an opportunity to reflect on the outcome
 		 */
 		cpuidle_reflect(dev, entered_state);
+	} else {
+		tick_nohz_idle_retain_tick();
+
+		/*
+		 * If there is only a single idle state (or none), there is
+		 * nothing meaningful for the governor to choose. Skip the
+		 * governor and always use state 0.
+		 */
+		call_cpuidle(drv, dev, 0);
 	}
 
 exit_idle:
+30
kernel/sched/syscalls.c
···
 		      uid_eq(cred->euid, pcred->uid));
 }
 
+#ifdef CONFIG_RT_MUTEXES
+static inline void __setscheduler_dl_pi(int newprio, int policy,
+					struct task_struct *p,
+					struct sched_change_ctx *scope)
+{
+	/*
+	 * In case a DEADLINE task (either proper or boosted) gets
+	 * setscheduled to a lower priority class, check if it needs to
+	 * inherit parameters from a potential pi_task. In that case make
+	 * sure replenishment happens with the next enqueue.
+	 */
+
+	if (dl_prio(newprio) && !dl_policy(policy)) {
+		struct task_struct *pi_task = rt_mutex_get_top_task(p);
+
+		if (pi_task) {
+			p->dl.pi_se = pi_task->dl.pi_se;
+			scope->flags |= ENQUEUE_REPLENISH;
+		}
+	}
+}
+#else /* !CONFIG_RT_MUTEXES */
+static inline void __setscheduler_dl_pi(int newprio, int policy,
+					struct task_struct *p,
+					struct sched_change_ctx *scope)
+{
+}
+#endif /* !CONFIG_RT_MUTEXES */
+
 #ifdef CONFIG_UCLAMP_TASK
 
 static int uclamp_validate(struct task_struct *p,
···
 		__setscheduler_params(p, attr);
 		p->sched_class = next_class;
 		p->prio = newprio;
+		__setscheduler_dl_pi(newprio, policy, p, scope);
 	}
 	__setscheduler_uclamp(p, attr);
 
+1-1
kernel/time/time.c
···
  *
  * Return: jiffies_64 value converted to 64-bit "clock_t" (CLOCKS_PER_SEC)
  */
-u64 jiffies_64_to_clock_t(u64 x)
+notrace u64 jiffies_64_to_clock_t(u64 x)
 {
 #if (TICK_NSEC % (NSEC_PER_SEC / USER_HZ)) == 0
 # if HZ < USER_HZ
+4-2
kernel/time/timekeeping.c
···
 
 	if (aux_clock) {
 		/* Auxiliary clocks are similar to TAI and do not have leap seconds */
-		if (txc->status & (STA_INS | STA_DEL))
+		if (txc->modes & ADJ_STATUS &&
+		    txc->status & (STA_INS | STA_DEL))
 			return -EINVAL;
 
 		/* No TAI offset setting */
···
 			return -EINVAL;
 
 		/* No PPS support either */
-		if (txc->status & (STA_PPSFREQ | STA_PPSTIME))
+		if (txc->modes & ADJ_STATUS &&
+		    txc->status & (STA_PPSFREQ | STA_PPSTIME))
 			return -EINVAL;
 	}
 
+1-2
kernel/trace/blktrace.c
···
 	cpu = raw_smp_processor_id();
 
 	if (blk_tracer) {
-		tracing_record_cmdline(current);
-
 		buffer = blk_tr->array_buffer.buffer;
 		trace_ctx = tracing_gen_ctx_flags(0);
 		switch (bt->version) {
···
 	if (!event)
 		return;
 
+	tracing_record_cmdline(current);
 	switch (bt->version) {
 	case 1:
 		record_blktrace_event(ring_buffer_event_data(event),
+2
kernel/trace/ftrace.c
···
 			new_filter_hash = old_filter_hash;
 		}
 	} else {
+		guard(mutex)(&ftrace_lock);
 		err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
 		/*
 		 * new_filter_hash is dup-ed, so we need to release it anyway,
···
 			ops->func_hash->filter_hash = NULL;
 		}
 	} else {
+		guard(mutex)(&ftrace_lock);
 		err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
 		/*
 		 * new_filter_hash is dup-ed, so we need to release it anyway,
+3-3
kernel/trace/trace.c
···
 }
 
 static int
-allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size)
+allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, unsigned long size)
 {
 	enum ring_buffer_flags rb_flags;
 	struct trace_scratch *tscratch;
···
 	}
 }
 
-static int allocate_trace_buffers(struct trace_array *tr, int size)
+static int allocate_trace_buffers(struct trace_array *tr, unsigned long size)
 {
 	int ret;
 
···
 
 __init static int tracer_alloc_buffers(void)
 {
-	int ring_buf_size;
+	unsigned long ring_buf_size;
 	int ret = -ENOMEM;
 
···
 
 void trigger_data_free(struct event_trigger_data *data)
 {
+	if (!data)
+		return;
+
 	if (data->cmd_ops->set_filter)
 		data->cmd_ops->set_filter(NULL, data, NULL);
 
+28-27
kernel/workqueue.c
···
 	int			id;		/* I: pool ID */
 	unsigned int		flags;		/* L: flags */
 
-	unsigned long		watchdog_ts;	/* L: watchdog timestamp */
+	unsigned long		last_progress_ts; /* L: last forward progress timestamp */
 	bool			cpu_stall;	/* WD: stalled cpu bound pool */
 
 	/*
···
 	WARN_ON_ONCE(!(*wdb & WORK_STRUCT_INACTIVE));
 	trace_workqueue_activate_work(work);
 	if (list_empty(&pwq->pool->worklist))
-		pwq->pool->watchdog_ts = jiffies;
+		pwq->pool->last_progress_ts = jiffies;
 	move_linked_works(work, &pwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_INACTIVE_BIT, wdb);
 }
···
 	 */
 	if (list_empty(&pwq->inactive_works) && pwq_tryinc_nr_active(pwq, false)) {
 		if (list_empty(&pool->worklist))
-			pool->watchdog_ts = jiffies;
+			pool->last_progress_ts = jiffies;
 
 		trace_workqueue_activate_work(work);
 		insert_work(pwq, work, &pool->worklist, work_flags);
···
 	worker->current_pwq = pwq;
 	if (worker->task)
 		worker->current_at = worker->task->se.sum_exec_runtime;
+	worker->current_start = jiffies;
 	work_data = *work_data_bits(work);
 	worker->current_color = get_work_color(work_data);
 
···
 	while ((work = list_first_entry_or_null(&worker->scheduled,
 						struct work_struct, entry))) {
 		if (first) {
-			worker->pool->watchdog_ts = jiffies;
+			worker->pool->last_progress_ts = jiffies;
 			first = false;
 		}
 		process_one_work(worker, work);
···
 	pool->cpu = -1;
 	pool->node = NUMA_NO_NODE;
 	pool->flags |= POOL_DISASSOCIATED;
-	pool->watchdog_ts = jiffies;
+	pool->last_progress_ts = jiffies;
 	INIT_LIST_HEAD(&pool->worklist);
 	INIT_LIST_HEAD(&pool->idle_list);
 	hash_init(pool->busy_hash);
···
 {
 	struct worker_pool *pool = worker->pool;
 
-	if (pool->flags & WQ_BH)
+	if (pool->flags & POOL_BH)
 		pr_cont("bh%s",
 			pool->attrs->nice == HIGHPRI_NICE_LEVEL ? "-hi" : "");
 	else
···
 		pr_cont(" %s", comma ? "," : "");
 		pr_cont_worker_id(worker);
 		pr_cont(":%ps", worker->current_func);
+		pr_cont(" for %us",
+			jiffies_to_msecs(jiffies - worker->current_start) / 1000);
 		list_for_each_entry(work, &worker->scheduled, entry)
 			pr_cont_work(false, work, &pcws);
 		pr_cont_work_flush(comma, (work_func_t)-1L, &pcws);
···
 
 	/* How long the first pending work is waiting for a worker. */
 	if (!list_empty(&pool->worklist))
-		hung = jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000;
+		hung = jiffies_to_msecs(jiffies - pool->last_progress_ts) / 1000;
 
 	/*
 	 * Defer printing to avoid deadlocks in console drivers that
···
 
 /*
  * Show workers that might prevent the processing of pending work items.
- * The only candidates are CPU-bound workers in the running state.
- * Pending work items should be handled by another idle worker
- * in all other situations.
+ * A busy worker that is not running on the CPU (e.g. sleeping in
+ * wait_event_idle() with PF_WQ_WORKER cleared) can stall the pool just as
+ * effectively as a CPU-bound one, so dump every in-flight worker.
  */
-static void show_cpu_pool_hog(struct worker_pool *pool)
+static void show_cpu_pool_busy_workers(struct worker_pool *pool)
 {
 	struct worker *worker;
 	unsigned long irq_flags;
···
 	raw_spin_lock_irqsave(&pool->lock, irq_flags);
 
 	hash_for_each(pool->busy_hash, bkt, worker, hentry) {
-		if (task_is_running(worker->task)) {
-			/*
-			 * Defer printing to avoid deadlocks in console
-			 * drivers that queue work while holding locks
-			 * also taken in their write paths.
-			 */
-			printk_deferred_enter();
+		/*
+		 * Defer printing to avoid deadlocks in console
+		 * drivers that queue work while holding locks
+		 * also taken in their write paths.
+		 */
+		printk_deferred_enter();
 
-			pr_info("pool %d:\n", pool->id);
-			sched_show_task(worker->task);
+		pr_info("pool %d:\n", pool->id);
+		sched_show_task(worker->task);
 
-			printk_deferred_exit();
-		}
+		printk_deferred_exit();
 	}
 
 	raw_spin_unlock_irqrestore(&pool->lock, irq_flags);
 }
 
-static void show_cpu_pools_hogs(void)
+static void show_cpu_pools_busy_workers(void)
 {
 	struct worker_pool *pool;
 	int pi;
 
-	pr_info("Showing backtraces of running workers in stalled CPU-bound worker pools:\n");
+	pr_info("Showing backtraces of busy workers in stalled worker pools:\n");
 
 	rcu_read_lock();
 
 	for_each_pool(pool, pi) {
 		if (pool->cpu_stall)
-			show_cpu_pool_hog(pool);
+			show_cpu_pool_busy_workers(pool);
 
 	}
 
···
 		touched = READ_ONCE(per_cpu(wq_watchdog_touched_cpu, pool->cpu));
 	else
 		touched = READ_ONCE(wq_watchdog_touched);
-	pool_ts = READ_ONCE(pool->watchdog_ts);
+	pool_ts = READ_ONCE(pool->last_progress_ts);
 
 	if (time_after(pool_ts, touched))
 		ts = pool_ts;
···
 		show_all_workqueues();
 
 	if (cpu_pool_stall)
-		show_cpu_pools_hogs();
+		show_cpu_pools_busy_workers();
 
 	if (lockup_detected)
 		panic_on_wq_watchdog(max_stall_time);
+1
kernel/workqueue_internal.h
···
 	work_func_t		current_func;	/* K: function */
 	struct pool_workqueue	*current_pwq;	/* K: pwq */
 	u64			current_at;	/* K: runtime at start or last wakeup */
+	unsigned long		current_start;	/* K: start time of current work item */
 	unsigned int		current_color;	/* K: color */
 
 	int			sleeping;	/* S: is worker sleeping? */
+3-3
lib/bootconfig.c
···
 			       depth ? "." : "");
 		if (ret < 0)
 			return ret;
-		if (ret > size) {
+		if (ret >= size) {
 			size = 0;
 		} else {
 			size -= ret;
···
 static int __init __xbc_open_brace(char *p)
 {
 	/* Push the last key as open brace */
-	open_brace[brace_index++] = xbc_node_index(last_parent);
 	if (brace_index >= XBC_DEPTH_MAX)
 		return xbc_parse_error("Exceed max depth of braces", p);
+	open_brace[brace_index++] = xbc_node_index(last_parent);
 
 	return 0;
 }
···
 
 	/* Brace closing */
 	if (brace_index) {
-		n = &xbc_nodes[open_brace[brace_index]];
+		n = &xbc_nodes[open_brace[brace_index - 1]];
 		return xbc_parse_error("Brace is not closed",
 				       xbc_node_get_data(n));
 	}
+125-106
lib/kunit/test.c
···
 	unsigned long total;
 };
 
-static bool kunit_should_print_stats(struct kunit_result_stats stats)
+static bool kunit_should_print_stats(struct kunit_result_stats *stats)
 {
 	if (kunit_stats_enabled == 0)
 		return false;
···
 	if (kunit_stats_enabled == 2)
 		return true;
 
-	return (stats.total > 1);
+	return (stats->total > 1);
 }
 
 static void kunit_print_test_stats(struct kunit *test,
-				   struct kunit_result_stats stats)
+				   struct kunit_result_stats *stats)
 {
 	if (!kunit_should_print_stats(stats))
 		return;
···
 		  KUNIT_SUBTEST_INDENT
 		  "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
 		  test->name,
-		  stats.passed,
-		  stats.failed,
-		  stats.skipped,
-		  stats.total);
+		  stats->passed,
+		  stats->failed,
+		  stats->skipped,
+		  stats->total);
 }
 
 /* Append formatted message to log. */
···
 }
 
 static void kunit_print_suite_stats(struct kunit_suite *suite,
-				    struct kunit_result_stats suite_stats,
-				    struct kunit_result_stats param_stats)
+				    struct kunit_result_stats *suite_stats,
+				    struct kunit_result_stats *param_stats)
 {
 	if (kunit_should_print_stats(suite_stats)) {
 		kunit_log(KERN_INFO, suite,
 			  "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
 			  suite->name,
-			  suite_stats.passed,
-			  suite_stats.failed,
-			  suite_stats.skipped,
-			  suite_stats.total);
+			  suite_stats->passed,
+			  suite_stats->failed,
+			  suite_stats->skipped,
+			  suite_stats->total);
 	}
 
 	if (kunit_should_print_stats(param_stats)) {
 		kunit_log(KERN_INFO, suite,
 			  "# Totals: pass:%lu fail:%lu skip:%lu total:%lu",
-			  param_stats.passed,
-			  param_stats.failed,
-			  param_stats.skipped,
-			  param_stats.total);
+			  param_stats->passed,
+			  param_stats->failed,
+			  param_stats->skipped,
+			  param_stats->total);
 	}
 }
 
···
 	}
 }
 
-int kunit_run_tests(struct kunit_suite *suite)
+static noinline_for_stack void
+kunit_run_param_test(struct kunit_suite *suite, struct kunit_case *test_case,
+		     struct kunit *test,
+		     struct kunit_result_stats *suite_stats,
+		     struct kunit_result_stats *total_stats,
+		     struct kunit_result_stats *param_stats)
 {
 	char param_desc[KUNIT_PARAM_DESC_SIZE];
+	const void *curr_param;
+
+	kunit_init_parent_param_test(test_case, test);
+	if (test_case->status == KUNIT_FAILURE) {
+		kunit_update_stats(param_stats, test->status);
+		return;
+	}
+	/* Get initial param. */
+	param_desc[0] = '\0';
+	/* TODO: Make generate_params try-catch */
+	curr_param = test_case->generate_params(test, NULL, param_desc);
+	test_case->status = KUNIT_SKIPPED;
+	kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+		  "KTAP version 1\n");
+	kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+		  "# Subtest: %s", test_case->name);
+	if (test->params_array.params &&
+	    test_case->generate_params == kunit_array_gen_params) {
+		kunit_log(KERN_INFO, test, KUNIT_SUBTEST_INDENT
+			  KUNIT_SUBTEST_INDENT "1..%zd\n",
+			  test->params_array.num_params);
+	}
+
+	while (curr_param) {
+		struct kunit param_test = {
+			.param_value = curr_param,
+			.param_index = ++test->param_index,
+			.parent = test,
+		};
+		kunit_init_test(&param_test, test_case->name, NULL);
+		param_test.log = test_case->log;
+		kunit_run_case_catch_errors(suite, test_case, &param_test);
+
+		if (param_desc[0] == '\0') {
+			snprintf(param_desc, sizeof(param_desc),
+				 "param-%d", param_test.param_index);
+		}
+
+		kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM,
+				      param_test.status,
+				      param_test.param_index,
+				      param_desc,
+				      param_test.status_comment);
+
+		kunit_update_stats(param_stats, param_test.status);
+
+		/* Get next param. */
+		param_desc[0] = '\0';
+		curr_param = test_case->generate_params(test, curr_param,
+							param_desc);
+	}
+	/*
+	 * TODO: Put into a try catch. Since we don't need suite->exit
+	 * for it we can't reuse kunit_try_run_cleanup for this yet.
+	 */
+	if (test_case->param_exit)
+		test_case->param_exit(test);
+	/* TODO: Put this kunit_cleanup into a try-catch. */
+	kunit_cleanup(test);
+}
+
+static noinline_for_stack void
+kunit_run_one_test(struct kunit_suite *suite, struct kunit_case *test_case,
+		   struct kunit_result_stats *suite_stats,
+		   struct kunit_result_stats *total_stats)
+{
+	struct kunit test = { .param_value = NULL, .param_index = 0 };
+	struct kunit_result_stats param_stats = { 0 };
+
+	kunit_init_test(&test, test_case->name, test_case->log);
+	if (test_case->status == KUNIT_SKIPPED) {
+		/* Test marked as skip */
+		test.status = KUNIT_SKIPPED;
+		kunit_update_stats(&param_stats, test.status);
+	} else if (!test_case->generate_params) {
+		/* Non-parameterised test. */
+		test_case->status = KUNIT_SKIPPED;
+		kunit_run_case_catch_errors(suite, test_case, &test);
+		kunit_update_stats(&param_stats, test.status);
+	} else {
+		kunit_run_param_test(suite, test_case, &test, suite_stats,
+				     total_stats, &param_stats);
+	}
+	kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE);
+
+	kunit_print_test_stats(&test, &param_stats);
+
+	kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status,
+			      kunit_test_case_num(suite, test_case),
+			      test_case->name,
+			      test.status_comment);
+
+	kunit_update_stats(suite_stats, test_case->status);
+	kunit_accumulate_stats(total_stats, param_stats);
+}
+
+
+int kunit_run_tests(struct kunit_suite *suite)
+{
 	struct kunit_case *test_case;
 	struct kunit_result_stats suite_stats = { 0 };
 	struct kunit_result_stats total_stats = { 0 };
-	const void *curr_param;
 
 	/* Taint the kernel so we know we've run tests. */
 	add_taint(TAINT_TEST, LOCKDEP_STILL_OK);
···
 
 	kunit_print_suite_start(suite);
 
-	kunit_suite_for_each_test_case(suite, test_case) {
-		struct kunit test = { .param_value = NULL, .param_index = 0 };
-		struct kunit_result_stats param_stats = { 0 };
-
-		kunit_init_test(&test, test_case->name, test_case->log);
-		if (test_case->status == KUNIT_SKIPPED) {
-			/* Test marked as skip */
-			test.status = KUNIT_SKIPPED;
-			kunit_update_stats(&param_stats, test.status);
-		} else if (!test_case->generate_params) {
-			/* Non-parameterised test. */
-			test_case->status = KUNIT_SKIPPED;
-			kunit_run_case_catch_errors(suite, test_case, &test);
-			kunit_update_stats(&param_stats, test.status);
-		} else {
-			kunit_init_parent_param_test(test_case, &test);
-			if (test_case->status == KUNIT_FAILURE) {
-				kunit_update_stats(&param_stats, test.status);
-				goto test_case_end;
-			}
-			/* Get initial param. */
-			param_desc[0] = '\0';
-			/* TODO: Make generate_params try-catch */
-			curr_param = test_case->generate_params(&test, NULL, param_desc);
-			test_case->status = KUNIT_SKIPPED;
-			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
-				  "KTAP version 1\n");
-			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
-				  "# Subtest: %s", test_case->name);
-			if (test.params_array.params &&
-			    test_case->generate_params == kunit_array_gen_params) {
-				kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT
-					  KUNIT_SUBTEST_INDENT "1..%zd\n",
-					  test.params_array.num_params);
-			}
-
-			while (curr_param) {
-				struct kunit param_test = {
-					.param_value = curr_param,
-					.param_index = ++test.param_index,
-					.parent = &test,
-				};
-				kunit_init_test(&param_test, test_case->name, NULL);
-				param_test.log = test_case->log;
-				kunit_run_case_catch_errors(suite, test_case, &param_test);
-
-				if (param_desc[0] == '\0') {
-					snprintf(param_desc, sizeof(param_desc),
-						 "param-%d", param_test.param_index);
-				}
-
-				kunit_print_ok_not_ok(&param_test, KUNIT_LEVEL_CASE_PARAM,
-						      param_test.status,
-						      param_test.param_index,
-						      param_desc,
-						      param_test.status_comment);
-
-				kunit_update_stats(&param_stats, param_test.status);
-
-				/* Get next param. */
-				param_desc[0] = '\0';
-				curr_param = test_case->generate_params(&test, curr_param,
-									param_desc);
-			}
-			/*
-			 * TODO: Put into a try catch. Since we don't need suite->exit
-			 * for it we can't reuse kunit_try_run_cleanup for this yet.
-			 */
-			if (test_case->param_exit)
-				test_case->param_exit(&test);
-			/* TODO: Put this kunit_cleanup into a try-catch. */
-			kunit_cleanup(&test);
-		}
-test_case_end:
-		kunit_print_attr((void *)test_case, true, KUNIT_LEVEL_CASE);
-
-		kunit_print_test_stats(&test, param_stats);
-
-		kunit_print_ok_not_ok(&test, KUNIT_LEVEL_CASE, test_case->status,
-				      kunit_test_case_num(suite, test_case),
-				      test_case->name,
-				      test.status_comment);
-
-		kunit_update_stats(&suite_stats, test_case->status);
-		kunit_accumulate_stats(&total_stats, param_stats);
-	}
+	kunit_suite_for_each_test_case(suite, test_case)
+		kunit_run_one_test(suite, test_case, &suite_stats, &total_stats);
 
 	if (suite->suite_exit)
 		suite->suite_exit(suite);
 
-	kunit_print_suite_stats(suite, suite_stats, total_stats);
+	kunit_print_suite_stats(suite, &suite_stats, &total_stats);
 suite_end:
 	kunit_print_suite_end(suite);
 
+4-1
mm/cma.c
···
 			   unsigned long count)
 {
 	struct cma_memrange *cmr;
+	unsigned long ret = 0;
 	unsigned long i, pfn;
 
 	cmr = find_cma_memrange(cma, pages, count);
···
 
 	pfn = page_to_pfn(pages);
 	for (i = 0; i < count; i++, pfn++)
-		VM_WARN_ON(!put_page_testzero(pfn_to_page(pfn)));
+		ret += !put_page_testzero(pfn_to_page(pfn));
+
+	WARN(ret, "%lu pages are still in use!\n", ret);
 
 	__cma_release_frozen(cma, cmr, pages, count);
 
+6-1
mm/damon/core.c
···
 	}
 	ctx->walk_control = control;
 	mutex_unlock(&ctx->walk_control_lock);
-	if (!damon_is_running(ctx))
+	if (!damon_is_running(ctx)) {
+		mutex_lock(&ctx->walk_control_lock);
+		if (ctx->walk_control == control)
+			ctx->walk_control = NULL;
+		mutex_unlock(&ctx->walk_control_lock);
 		return -EINVAL;
+	}
 	wait_for_completion(&control->completion);
 	if (control->canceled)
 		return -ECANCELED;
+10-5
mm/filemap.c
···
 
 #ifdef CONFIG_MIGRATION
 /**
- * migration_entry_wait_on_locked - Wait for a migration entry to be removed
- * @entry: migration swap entry.
+ * softleaf_entry_wait_on_locked - Wait for a migration entry or
+ * device_private entry to be removed.
+ * @entry: migration or device_private swap entry.
  * @ptl: already locked ptl. This function will drop the lock.
  *
- * Wait for a migration entry referencing the given page to be removed. This is
+ * Wait for a migration entry referencing the given page, or device_private
+ * entry referencing a device_private page, to be unlocked. This is
  * equivalent to folio_put_wait_locked(folio, TASK_UNINTERRUPTIBLE) except
  * this can be called without taking a reference on the page. Instead this
- * should be called while holding the ptl for the migration entry referencing
+ * should be called while holding the ptl for @entry referencing
  * the page.
  *
···
 * This follows the same logic as folio_wait_bit_common() so see the comments
 * there.
 */
-void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
+void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
 	__releases(ptl)
 {
 	struct wait_page_queue wait_page;
···
 	 * If a migration entry exists for the page the migration path must hold
 	 * a valid reference to the page, and it must take the ptl to remove the
 	 * migration entry. So the page is valid until the ptl is dropped.
+	 * Similarly any path attempting to drop the last reference to a
+	 * device-private page needs to grab the ptl to remove the device-private
+	 * entry.
 	 */
 	spin_unlock(ptl);
 
+9-4
mm/huge_memory.c
···
 	const bool is_anon = folio_test_anon(folio);
 	int old_order = folio_order(folio);
 	int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
+	struct folio *old_folio = folio;
 	int split_order;
 
 	/*
···
 		 * uniform split has xas_split_alloc() called before
 		 * irq is disabled to allocate enough memory, whereas
 		 * non-uniform split can handle ENOMEM.
+		 * Use the to-be-split folio, so that a parallel
+		 * folio_try_get() waits on it until xarray is updated
+		 * with after-split folios and the original one is
+		 * unfrozen.
 		 */
-		if (split_type == SPLIT_TYPE_UNIFORM)
-			xas_split(xas, folio, old_order);
-		else {
+		if (split_type == SPLIT_TYPE_UNIFORM) {
+			xas_split(xas, old_folio, old_order);
+		} else {
 			xas_set_order(xas, folio->index, split_order);
-			xas_try_split(xas, folio, old_order);
+			xas_try_split(xas, old_folio, old_order);
 			if (xas_error(xas))
 				return xas_error(xas);
 		}
+2-2
mm/hugetlb.c
···
 	 * extract the actual node first.
 	 */
 	if (m)
-		listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
+		listnode = early_pfn_to_nid(PHYS_PFN(__pa(m)));
 	}
 
 	if (m) {
···
 	 * The head struct page is used to get folio information by the HugeTLB
 	 * subsystem like zone id and node id.
 	 */
-	memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
+	memblock_reserved_mark_noinit(__pa((void *)m + PAGE_SIZE),
 				      huge_page_size(h) - PAGE_SIZE);
 
 	return 1;
···
 	for (i = 0; i < nr_folios; i++) {
 		struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
 		struct folio *folio = folios[i];
-		unsigned int flags = 0;
 
 		err = kho_preserve_folio(folio);
 		if (err)
 			goto err_unpreserve;
 
-		if (folio_test_dirty(folio))
-			flags |= MEMFD_LUO_FOLIO_DIRTY;
-		if (folio_test_uptodate(folio))
-			flags |= MEMFD_LUO_FOLIO_UPTODATE;
+		folio_lock(folio);
+
+		/*
+		 * A dirty folio is one which has been written to. A clean folio
+		 * is its opposite. Since a clean folio does not carry user
+		 * data, it can be freed by page reclaim under memory pressure.
+		 *
+		 * Saving the dirty flag at prepare() time doesn't work since it
+		 * can change later. Saving it at freeze() also won't work
+		 * because the dirty bit is normally synced at unmap and there
+		 * might still be a mapping of the file at freeze().
+		 *
+		 * To see why this is a problem, say a folio is clean at
+		 * preserve, but gets dirtied later. The pfolio flags will mark
+		 * it as clean. After retrieve, the next kernel might try to
+		 * reclaim this folio under memory pressure, losing user data.
+		 *
+		 * Unconditionally mark it dirty to avoid this problem. This
+		 * comes at the cost of making clean folios un-reclaimable after
+		 * live update.
+		 */
+		folio_mark_dirty(folio);
+
+		/*
+		 * If the folio is not uptodate, it was fallocated but never
+		 * used. Saving this flag at prepare() doesn't work since it
+		 * might change later when someone uses the folio.
+		 *
+		 * Since we have taken the performance penalty of allocating,
+		 * zeroing, and pinning all the folios in the holes, take a bit
+		 * more and zero all non-uptodate folios too.
+		 *
+		 * NOTE: For someone looking to improve preserve performance,
+		 * this is a good place to look.
+		 */
+		if (!folio_test_uptodate(folio)) {
+			folio_zero_range(folio, 0, folio_size(folio));
+			flush_dcache_folio(folio);
+			folio_mark_uptodate(folio);
+		}
+
+		folio_unlock(folio);
 
 		pfolio->pfn = folio_pfn(folio);
-		pfolio->flags = flags;
+		pfolio->flags = MEMFD_LUO_FOLIO_DIRTY | MEMFD_LUO_FOLIO_UPTODATE;
 		pfolio->index = folio->index;
 	}
 
+2-1
mm/memory.c
···47634763 unlock_page(vmf->page);47644764 put_page(vmf->page);47654765 } else {47664766- pte_unmap_unlock(vmf->pte, vmf->ptl);47664766+ pte_unmap(vmf->pte);47674767+ softleaf_entry_wait_on_locked(entry, vmf->ptl);47674768 }47684769 } else if (softleaf_is_hwpoison(entry)) {47694770 ret = VM_FAULT_HWPOISON;
+4-4
mm/migrate.c
···500500 if (!softleaf_is_migration(entry))501501 goto out;502502503503- migration_entry_wait_on_locked(entry, ptl);503503+ softleaf_entry_wait_on_locked(entry, ptl);504504 return;505505out:506506 spin_unlock(ptl);···532532 * If migration entry existed, safe to release vma lock533533 * here because the pgtable page won't be freed without the534534 * pgtable lock released. See comment right above pgtable535535- * lock release in migration_entry_wait_on_locked().535535+ * lock release in softleaf_entry_wait_on_locked().536536 */537537 hugetlb_vma_unlock_read(vma);538538- migration_entry_wait_on_locked(entry, ptl);538538+ softleaf_entry_wait_on_locked(entry, ptl);539539 return;540540 }541541···553553 ptl = pmd_lock(mm, pmd);554554 if (!pmd_is_migration_entry(*pmd))555555 goto unlock;556556- migration_entry_wait_on_locked(softleaf_from_pmd(*pmd), ptl);556556+ softleaf_entry_wait_on_locked(softleaf_from_pmd(*pmd), ptl);557557 return;558558unlock:559559 spin_unlock(ptl);
···5959 * to save memory. In case ->stride field is not available,6060 * such optimizations are disabled.6161 */6262- unsigned short stride;6262+ unsigned int stride;6363#endif6464 };6565 };···559559}560560561561#ifdef CONFIG_64BIT562562-static inline void slab_set_stride(struct slab *slab, unsigned short stride)562562+static inline void slab_set_stride(struct slab *slab, unsigned int stride)563563{564564 slab->stride = stride;565565}566566-static inline unsigned short slab_get_stride(struct slab *slab)566566+static inline unsigned int slab_get_stride(struct slab *slab)567567{568568 return slab->stride;569569}570570#else571571-static inline void slab_set_stride(struct slab *slab, unsigned short stride)571571+static inline void slab_set_stride(struct slab *slab, unsigned int stride)572572{573573 VM_WARN_ON_ONCE(stride != sizeof(struct slabobj_ext));574574}575575-static inline unsigned short slab_get_stride(struct slab *slab)575575+static inline unsigned int slab_get_stride(struct slab *slab)576576{577577 return sizeof(struct slabobj_ext);578578}
+51-29
mm/slub.c
···21192119 size_t sz = sizeof(struct slabobj_ext) * slab->objects;21202120 struct kmem_cache *obj_exts_cache;2121212121222122- /*21232123- * slabobj_ext array for KMALLOC_CGROUP allocations21242124- * are served from KMALLOC_NORMAL caches.21252125- */21262126- if (!mem_alloc_profiling_enabled())21272127- return sz;21282128-21292122 if (sz > KMALLOC_MAX_CACHE_SIZE)21302123 return sz;21312124···27902797 if (s->flags & SLAB_KMALLOC)27912798 mark_obj_codetag_empty(sheaf);2792279928002800+ VM_WARN_ON_ONCE(sheaf->size > 0);27932801 kfree(sheaf);2794280227952803 stat(s, SHEAF_FREE);···28222828 return 0;28232829}2824283028312831+static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf);2825283228262833static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp)28272834{···28322837 return NULL;2833283828342839 if (refill_sheaf(s, sheaf, gfp | __GFP_NOMEMALLOC | __GFP_NOWARN)) {28402840+ sheaf_flush_unused(s, sheaf);28352841 free_empty_sheaf(s, sheaf);28362842 return NULL;28372843 }···28542858 * object pointers are moved to a on-stack array under the lock. 
To bound the28552859 * stack usage, limit each batch to PCS_BATCH_MAX.28562860 *28572857- * returns true if at least partially flushed28612861+ * Must be called with s->cpu_sheaves->lock locked, returns with the lock28622862+ * unlocked.28632863+ *28642864+ * Returns the number of objects remaining to be flushed28582865 */28592859-static bool sheaf_flush_main(struct kmem_cache *s)28662866+static unsigned int __sheaf_flush_main_batch(struct kmem_cache *s)28602867{28612868 struct slub_percpu_sheaves *pcs;28622869 unsigned int batch, remaining;28632870 void *objects[PCS_BATCH_MAX];28642871 struct slab_sheaf *sheaf;28652865- bool ret = false;2866287228672867-next_batch:28682868- if (!local_trylock(&s->cpu_sheaves->lock))28692869- return ret;28732873+ lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));2870287428712875 pcs = this_cpu_ptr(s->cpu_sheaves);28722876 sheaf = pcs->main;···2884288828852889 stat_add(s, SHEAF_FLUSH, batch);2886289028872887- ret = true;28912891+ return remaining;28922892+}2888289328892889- if (remaining)28902890- goto next_batch;28942894+static void sheaf_flush_main(struct kmem_cache *s)28952895+{28962896+ unsigned int remaining;28972897+28982898+ do {28992899+ local_lock(&s->cpu_sheaves->lock);29002900+29012901+ remaining = __sheaf_flush_main_batch(s);29022902+29032903+ } while (remaining);29042904+}29052905+29062906+/*29072907+ * Returns true if the main sheaf was at least partially flushed.29082908+ */29092909+static bool sheaf_try_flush_main(struct kmem_cache *s)29102910+{29112911+ unsigned int remaining;29122912+ bool ret = false;29132913+29142914+ do {29152915+ if (!local_trylock(&s->cpu_sheaves->lock))29162916+ return ret;29172917+29182918+ ret = true;29192919+ remaining = __sheaf_flush_main_batch(s);29202920+29212921+ } while (remaining);2891292228922923 return ret;28932924}···45634540 struct slab_sheaf *empty = NULL;45644541 struct slab_sheaf *full;45654542 struct node_barn *barn;45664566- bool can_alloc;45434543+ bool 
allow_spin;4567454445684545 lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));45694546···45844561 return NULL;45854562 }4586456345874587- full = barn_replace_empty_sheaf(barn, pcs->main,45884588- gfpflags_allow_spinning(gfp));45644564+ allow_spin = gfpflags_allow_spinning(gfp);45654565+45664566+ full = barn_replace_empty_sheaf(barn, pcs->main, allow_spin);4589456745904568 if (full) {45914569 stat(s, BARN_GET);···4596457245974573 stat(s, BARN_GET_FAIL);4598457445994599- can_alloc = gfpflags_allow_blocking(gfp);46004600-46014601- if (can_alloc) {45754575+ if (allow_spin) {46024576 if (pcs->spare) {46034577 empty = pcs->spare;46044578 pcs->spare = NULL;···46064584 }4607458546084586 local_unlock(&s->cpu_sheaves->lock);45874587+ pcs = NULL;4609458846104610- if (!can_alloc)45894589+ if (!allow_spin)46114590 return NULL;4612459146134592 if (empty) {···46194596 * we must be very low on memory so don't bother46204597 * with the barn46214598 */45994599+ sheaf_flush_unused(s, empty);46224600 free_empty_sheaf(s, empty);46234601 }46244602 } else {···46294605 if (!full)46304606 return NULL;4631460746324632- /*46334633- * we can reach here only when gfpflags_allow_blocking46344634- * so this must not be an irq46354635- */46364636- local_lock(&s->cpu_sheaves->lock);46084608+ if (!local_trylock(&s->cpu_sheaves->lock))46094609+ goto barn_put;46374610 pcs = this_cpu_ptr(s->cpu_sheaves);4638461146394612 /*···46614640 return pcs;46624641 }4663464246434643+barn_put:46644644 barn_put_full_sheaf(barn, full);46654645 stat(s, BARN_PUT);46664646···57265704 if (put_fail)57275705 stat(s, BARN_PUT_FAIL);5728570657295729- if (!sheaf_flush_main(s))57075707+ if (!sheaf_try_flush_main(s))57305708 return NULL;5731570957325710 if (!local_trylock(&s->cpu_sheaves->lock))
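The split of `sheaf_flush_main()` above factors one locked batch step out of two wrappers: an unconditional locking loop and a trylock variant that backs off under contention. A minimal userspace sketch of that shape (all names and the fake lock are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

#define BATCH_MAX 4

static int main_sheaf_size;	/* stands in for the per-CPU main sheaf */
static bool lock_held;		/* stands in for the local lock */
static bool contended;		/* simulates local_trylock() failure */

/* Flush one batch; caller holds the lock, we return with it dropped. */
static int flush_main_batch(void)
{
	int batch = main_sheaf_size < BATCH_MAX ? main_sheaf_size : BATCH_MAX;

	assert(lock_held);
	main_sheaf_size -= batch;	/* objects moved out and freed */
	lock_held = false;		/* lock dropped while freeing */
	return main_sheaf_size;		/* remaining objects */
}

/* Unconditional variant: keep re-taking the lock until empty. */
static void flush_main(void)
{
	do {
		lock_held = true;	/* local_lock() */
	} while (flush_main_batch());
}

/* Trylock variant: give up under contention, report partial progress. */
static bool try_flush_main(void)
{
	bool ret = false;

	do {
		if (contended)
			return ret;	/* local_trylock() failed */
		lock_held = true;
		ret = true;
	} while (flush_main_batch());
	return ret;
}
```

The batch step never loops with the lock held, so both wrappers bound lock hold time to one batch.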
+4-2
net/ceph/auth.c
···205205 s32 result;206206 u64 global_id;207207 void *payload, *payload_end;208208- int payload_len;208208+ u32 payload_len;209209 char *result_msg;210210- int result_msg_len;210210+ u32 result_msg_len;211211 int ret = -EINVAL;212212213213 mutex_lock(&ac->mutex);···217217 result = ceph_decode_32(&p);218218 global_id = ceph_decode_64(&p);219219 payload_len = ceph_decode_32(&p);220220+ ceph_decode_need(&p, end, payload_len, bad);220221 payload = p;221222 p += payload_len;222223 ceph_decode_need(&p, end, sizeof(u32), bad);223224 result_msg_len = ceph_decode_32(&p);225225+ ceph_decode_need(&p, end, result_msg_len, bad);224226 result_msg = p;225227 p += result_msg_len;226228 if (p != end)
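The `ceph_decode_need()` calls added above follow a general rule for wire decoders: validate every length field against the remaining buffer before advancing the cursor past it. A self-contained userspace sketch of the pattern (names are illustrative, not the libceph API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns nonzero if at least n bytes remain between p and end. */
static int decode_need(const uint8_t *p, const uint8_t *end, size_t n)
{
	return (size_t)(end - p) >= n;
}

/* Read a 32-bit value and advance the cursor (host-endian for brevity). */
static uint32_t decode_32(const uint8_t **p)
{
	uint32_t v;

	memcpy(&v, *p, sizeof(v));
	*p += sizeof(v);
	return v;
}

/* Decode two length-prefixed blobs; -1 if any length overruns the buffer. */
static int decode_two_blobs(const uint8_t *buf, size_t buf_len)
{
	const uint8_t *p = buf, *end = buf + buf_len;
	uint32_t len;
	int i;

	for (i = 0; i < 2; i++) {
		if (!decode_need(p, end, sizeof(uint32_t)))
			return -1;
		len = decode_32(&p);
		if (!decode_need(p, end, len))	/* the check the fix adds */
			return -1;
		p += len;
	}
	return p == end ? 0 : -1;
}
```

Without the second `decode_need()`, an attacker-controlled `len` would march the cursor arbitrarily far past the end of the buffer.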
+21-10
net/ceph/messenger_v2.c
···392392 int head_len;393393 int rem_len;394394395395- BUG_ON(ctrl_len < 0 || ctrl_len > CEPH_MSG_MAX_CONTROL_LEN);395395+ BUG_ON(ctrl_len < 1 || ctrl_len > CEPH_MSG_MAX_CONTROL_LEN);396396397397 if (secure) {398398 head_len = CEPH_PREAMBLE_SECURE_LEN;···401401 head_len += padded_len(rem_len) + CEPH_GCM_TAG_LEN;402402 }403403 } else {404404- head_len = CEPH_PREAMBLE_PLAIN_LEN;405405- if (ctrl_len)406406- head_len += ctrl_len + CEPH_CRC_LEN;404404+ head_len = CEPH_PREAMBLE_PLAIN_LEN + ctrl_len + CEPH_CRC_LEN;407405 }408406 return head_len;409407}···526528 desc->fd_aligns[i] = ceph_decode_16(&p);527529 }528530529529- if (desc->fd_lens[0] < 0 ||531531+ /*532532+ * This would fire for FRAME_TAG_WAIT (it has one empty533533+ * segment), but we should never get it as client.534534+ */535535+ if (desc->fd_lens[0] < 1 ||530536 desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) {531537 pr_err("bad control segment length %d\n", desc->fd_lens[0]);532538 return -EINVAL;533539 }540540+534541 if (desc->fd_lens[1] < 0 ||535542 desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) {536543 pr_err("bad front segment length %d\n", desc->fd_lens[1]);···552549 return -EINVAL;553550 }554551555555- /*556556- * This would fire for FRAME_TAG_WAIT (it has one empty557557- * segment), but we should never get it as client.558558- */559552 if (!desc->fd_lens[desc->fd_seg_cnt - 1]) {560553 pr_err("last segment empty, segment count %d\n",561554 desc->fd_seg_cnt);···28322833 void *p, void *end)28332834{28342835 struct ceph_frame_desc *desc = &con->v2.in_desc;28352835- struct ceph_msg_header2 *hdr2 = p;28362836+ struct ceph_msg_header2 *hdr2;28362837 struct ceph_msg_header hdr;28372838 int skip;28382839 int ret;28392840 u64 seq;28412841+28422842+ ceph_decode_need(&p, end, sizeof(*hdr2), bad);28432843+ hdr2 = p;2840284428412845 /* verify seq# */28422846 seq = le64_to_cpu(hdr2->seq);···28712869 WARN_ON(!con->in_msg);28722870 WARN_ON(con->in_msg->con != con);28732871 return 1;28722872+28732873+bad:28742874+ 
pr_err("failed to decode message header\n");28752875+ return -EINVAL;28742876}2875287728762878static int process_message(struct ceph_connection *con)···2903289729042898 if (con->v2.in_desc.fd_tag != FRAME_TAG_MESSAGE)29052899 return process_control(con, p, end);29002900+29012901+ if (con->state != CEPH_CON_S_OPEN) {29022902+ con->error_msg = "protocol error, unexpected message";29032903+ return -EINVAL;29042904+ }2906290529072906 ret = process_message_header(con, p, end);29082907 if (ret < 0)
···124124125125#include <trace/events/sock.h>126126127127+/* Keep the definition of disable_ipv6_mod here for now, to avoid linker128128+ * issues in case CONFIG_IPV6=m.129129+ */130130+int disable_ipv6_mod;131131+EXPORT_SYMBOL(disable_ipv6_mod);132132+127133/* The inetsw table contains everything that inet_create needs to128134 * build a new socket.129135 */
+15
net/ipv4/ip_tunnel_core.c
···5858 struct iphdr *iph;5959 int err;60606161+ if (unlikely(dev_recursion_level() > IP_TUNNEL_RECURSION_LIMIT)) {6262+ if (dev) {6363+ net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",6464+ dev->name);6565+ DEV_STATS_INC(dev, tx_errors);6666+ }6767+ ip_rt_put(rt);6868+ kfree_skb(skb);6969+ return;7070+ }7171+7272+ dev_xmit_recursion_inc();7373+6174 skb_scrub_packet(skb, xnet);62756376 skb_clear_hash_if_not_l4(skb);···10188 pkt_len = 0;10289 iptunnel_xmit_stats(dev, pkt_len);10390 }9191+9292+ dev_xmit_recursion_dec();10493}10594EXPORT_SYMBOL_GPL(iptunnel_xmit);10695
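The guard added to `iptunnel_xmit()` above bounds tunnel-in-tunnel recursion with a per-CPU depth counter bumped around each transmit. A userspace sketch of the same idea, with an ordinary global and an illustrative limit standing in for `dev_xmit_recursion_level()` and the drop path:

```c
#include <assert.h>

#define RECURSION_LIMIT 8	/* illustrative, not the kernel's value */

static int xmit_depth;		/* stands in for the per-CPU recursion level */
static int dropped;

static void tunnel_xmit(int remaining_hops)
{
	/* Too deep: drop instead of overflowing the stack in a device loop. */
	if (xmit_depth > RECURSION_LIMIT) {
		dropped++;	/* corresponds to kfree_skb() + tx_errors */
		return;
	}
	xmit_depth++;		/* dev_xmit_recursion_inc() */
	if (remaining_hops > 0)
		tunnel_xmit(remaining_hops - 1);	/* tunnel nested in tunnel */
	xmit_depth--;		/* dev_xmit_recursion_dec() */
}
```

A misconfigured loop of stacked tunnels then costs one dropped packet per entry rather than a stack overflow.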
+11-3
net/ipv4/nexthop.c
···20022002}2003200320042004static void remove_nh_grp_entry(struct net *net, struct nh_grp_entry *nhge,20052005- struct nl_info *nlinfo)20052005+ struct nl_info *nlinfo,20062006+ struct list_head *deferred_free)20062007{20072008 struct nh_grp_entry *nhges, *new_nhges;20082009 struct nexthop *nhp = nhge->nh_parent;···20632062 rcu_assign_pointer(nhp->nh_grp, newg);2064206320652064 list_del(&nhge->nh_list);20662066- free_percpu(nhge->stats);20672065 nexthop_put(nhge->nh);20662066+ list_add(&nhge->nh_list, deferred_free);2068206720692068 /* Removal of a NH from a resilient group is notified through20702069 * bucket notifications.···20842083 struct nl_info *nlinfo)20852084{20862085 struct nh_grp_entry *nhge, *tmp;20862086+ LIST_HEAD(deferred_free);2087208720882088 /* If there is nothing to do, let's avoid the costly call to20892089 * synchronize_net()···20932091 return;2094209220952093 list_for_each_entry_safe(nhge, tmp, &nh->grp_list, nh_list)20962096- remove_nh_grp_entry(net, nhge, nlinfo);20942094+ remove_nh_grp_entry(net, nhge, nlinfo, &deferred_free);2097209520982096 /* make sure all see the newly published array before releasing rtnl */20992097 synchronize_net();20982098+20992099+ /* Now safe to free percpu stats, as all RCU readers have finished */21002100+ list_for_each_entry_safe(nhge, tmp, &deferred_free, nh_list) {21012101+ list_del(&nhge->nh_list);21022102+ free_percpu(nhge->stats);21032103+ }21002104}2101210521022106static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo)
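The nexthop change above is the classic RCU-style deferred-free shape: unpublish entries first, queue their resources on a local list, and free them only after the grace period (`synchronize_net()`) guarantees no reader can still see them. A userspace sketch of the two phases (illustrative names, plain `malloc`/`free` in place of percpu allocations):

```c
#include <assert.h>
#include <stdlib.h>

struct entry {
	struct entry *next;
	int *stats;		/* stands in for the percpu stats */
};

/* Phase 1: unpublish entries, but keep resources readers may still use. */
static void unlink_all(struct entry **live, struct entry **deferred)
{
	struct entry *e;

	while ((e = *live)) {
		*live = e->next;
		e->next = *deferred;	/* queue for after the grace period */
		*deferred = e;
	}
}

/* Phase 2: after the grace period it is safe to free; returns the count. */
static int free_deferred(struct entry **deferred)
{
	struct entry *e;
	int freed = 0;

	while ((e = *deferred)) {
		*deferred = e->next;
		free(e->stats);
		free(e);
		freed++;
	}
	return freed;
}
```

Freeing `stats` in phase 1, as the old code did, would hand readers a dangling pointer for the duration of the grace period.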
-8
net/ipv6/af_inet6.c
···8686 .autoconf = 1,8787};88888989-static int disable_ipv6_mod;9090-9189module_param_named(disable, disable_ipv6_mod, int, 0444);9290MODULE_PARM_DESC(disable, "Disable IPv6 module such that it is non-functional");9391···94969597module_param_named(autoconf, ipv6_defaults.autoconf, int, 0444);9698MODULE_PARM_DESC(autoconf, "Enable IPv6 address autoconfiguration on all interfaces");9797-9898-bool ipv6_mod_enabled(void)9999-{100100- return disable_ipv6_mod == 0;101101-}102102-EXPORT_SYMBOL_GPL(ipv6_mod_enabled);10399104100static struct ipv6_pinfo *inet6_sk_generic(struct sock *sk)105101{
···344344 break;345345 case NETDEV_REGISTER:346346 /* NOP if not matching or already registered */347347- if (!match || (changename && ops))347347+ if (!match || ops)348348 continue;349349350350 ops = kmemdup(&basechain->ops,
+2-1
net/netfilter/nft_set_pipapo.c
···16401640 int i;1641164116421642 nft_pipapo_for_each_field(f, i, m) {16431643+ bool last = i == m->field_count - 1;16431644 int g;1644164516451646 for (g = 0; g < f->groups; g++) {···16601659 }1661166016621661 pipapo_unmap(f->mt, f->rules, rulemap[i].to, rulemap[i].n,16631663- rulemap[i + 1].n, i == m->field_count - 1);16621662+ last ? 0 : rulemap[i + 1].n, last);16641663 if (pipapo_resize(f, f->rules, f->rules - rulemap[i].n)) {16651664 /* We can ignore this, a failure to shrink tables down16661665 * doesn't make tables invalid.
+6
net/netfilter/xt_IDLETIMER.c
···318318319319 info->timer = __idletimer_tg_find_by_label(info->label);320320 if (info->timer) {321321+ if (info->timer->timer_type & XT_IDLETIMER_ALARM) {322322+ pr_debug("Adding/Replacing rule with same label and different timer type is not allowed\n");323323+ mutex_unlock(&list_mutex);324324+ return -EINVAL;325325+ }326326+321327 info->timer->refcnt++;322328 mod_timer(&info->timer->timer,323329 secs_to_jiffies(info->timeout) + jiffies);
+2-2
net/netfilter/xt_dccp.c
···6262 return true;6363 }64646565- if (op[i] < 2)6565+ if (op[i] < 2 || i == optlen - 1)6666 i++;6767 else6868- i += op[i+1]?:1;6868+ i += op[i + 1] ? : 1;6969 }70707171 spin_unlock_bh(&dccp_buflock);
+4-2
net/netfilter/xt_tcpudp.c
···59596060 for (i = 0; i < optlen; ) {6161 if (op[i] == option) return !invert;6262- if (op[i] < 2) i++;6363- else i += op[i+1]?:1;6262+ if (op[i] < 2 || i == optlen - 1)6363+ i++;6464+ else6565+ i += op[i + 1] ? : 1;6466 }65676668 return invert;
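Both option-walk fixes above (xt_dccp and xt_tcpudp) close the same off-by-one: when the kind byte occupies the last position there is no length octet, so reading `op[i + 1]` would run past the buffer. A userspace sketch of the corrected walk:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Walk TCP/DCCP-style options: single-byte kinds below 2, otherwise
 * kind + length + data. The `i == optlen - 1` guard is the fix above:
 * a kind byte on the last position has no length octet to read.
 */
static bool match_option(const uint8_t *op, unsigned int optlen,
			 uint8_t option)
{
	unsigned int i;

	for (i = 0; i < optlen; ) {
		if (op[i] == option)
			return true;
		if (op[i] < 2 || i == optlen - 1)
			i++;
		else
			i += op[i + 1] ? op[i + 1] : 1;
	}
	return false;
}
```

A zero length octet is treated as 1 so a malformed option cannot stall the loop.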
+5-3
net/rxrpc/af_rxrpc.c
···267267 * Lookup or create a remote transport endpoint record for the specified268268 * address.269269 *270270- * Return: The peer record found with a reference, %NULL if no record is found271271- * or a negative error code if the address is invalid or unsupported.270270+ * Return: The peer record found with a reference or a negative error code if271271+ * the address is invalid or unsupported.272272 */273273struct rxrpc_peer *rxrpc_kernel_lookup_peer(struct socket *sock,274274 struct sockaddr_rxrpc *srx, gfp_t gfp)275275{276276+ struct rxrpc_peer *peer;276277 struct rxrpc_sock *rx = rxrpc_sk(sock->sk);277278 int ret;278279···281280 if (ret < 0)282281 return ERR_PTR(ret);283282284284- return rxrpc_lookup_peer(rx->local, srx, gfp);283283+ peer = rxrpc_lookup_peer(rx->local, srx, gfp);284284+ return peer ?: ERR_PTR(-ENOMEM);285285}286286EXPORT_SYMBOL(rxrpc_kernel_lookup_peer);287287
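The return-value change above relies on the kernel's `ERR_PTR` convention: an errno is encoded into the top, never-valid range of pointer values so callers see exactly one kind of failure instead of checking both `NULL` and error pointers. A userspace rendition of those helpers (mirroring `include/linux/err.h` in spirit, not the kernel headers themselves):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO	4095

/* Fold an errno into a pointer value in the last, unmapped page. */
static inline void *ERR_PTR(long error)
{
	return (void *)(uintptr_t)error;
}

/* Recover the errno from an error pointer. */
static inline long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

/* An error pointer lives in the top MAX_ERRNO values of the address space. */
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

With `peer ?: ERR_PTR(-ENOMEM)`, `rxrpc_kernel_lookup_peer()` never returns `NULL`, so its callers only need `IS_ERR()`/`PTR_ERR()`.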
+1
net/sched/sch_teql.c
···315315 if (__netif_tx_trylock(slave_txq)) {316316 unsigned int length = qdisc_pkt_len(skb);317317318318+ skb->dev = slave;318319 if (!netif_xmit_frozen_or_stopped(slave_txq) &&319320 netdev_start_xmit(skb, slave, slave_txq, false) ==320321 NETDEV_TX_OK) {
···401401/// ```402402/// use kernel::cpufreq::{DEFAULT_TRANSITION_LATENCY_NS, Policy};403403///404404+/// #[allow(clippy::double_parens, reason = "False positive before 1.92.0")]404405/// fn update_policy(policy: &mut Policy) {405406/// policy406407/// .set_dvfs_possible_from_any_cpu(true)
+50-64
rust/kernel/dma.rs
···461461 self.count * core::mem::size_of::<T>()462462 }463463464464+ /// Returns the raw pointer to the allocated region in the CPU's virtual address space.465465+ #[inline]466466+ pub fn as_ptr(&self) -> *const [T] {467467+ core::ptr::slice_from_raw_parts(self.cpu_addr.as_ptr(), self.count)468468+ }469469+470470+ /// Returns the raw pointer to the allocated region in the CPU's virtual address space as471471+ /// a mutable pointer.472472+ #[inline]473473+ pub fn as_mut_ptr(&self) -> *mut [T] {474474+ core::ptr::slice_from_raw_parts_mut(self.cpu_addr.as_ptr(), self.count)475475+ }476476+464477 /// Returns the base address to the allocated region in the CPU's virtual address space.465478 pub fn start_ptr(&self) -> *const T {466479 self.cpu_addr.as_ptr()···594581 Ok(())595582 }596583597597- /// Returns a pointer to an element from the region with bounds checking. `offset` is in598598- /// units of `T`, not the number of bytes.599599- ///600600- /// Public but hidden since it should only be used from [`dma_read`] and [`dma_write`] macros.601601- #[doc(hidden)]602602- pub fn item_from_index(&self, offset: usize) -> Result<*mut T> {603603- if offset >= self.count {604604- return Err(EINVAL);605605- }606606- // SAFETY:607607- // - The pointer is valid due to type invariant on `CoherentAllocation`608608- // and we've just checked that the range and index is within bounds.609609- // - `offset` can't overflow since it is smaller than `self.count` and we've checked610610- // that `self.count` won't overflow early in the constructor.611611- Ok(unsafe { self.cpu_addr.as_ptr().add(offset) })612612- }613613-614584 /// Reads the value of `field` and ensures that its type is [`FromBytes`].615585 ///616586 /// # Safety···666670667671/// Reads a field of an item from an allocated region of structs.668672///673673+/// The syntax is of the form `kernel::dma_read!(dma, proj)` where `dma` is an expression evaluating674674+/// to a [`CoherentAllocation`] and `proj` is a [projection 
specification](kernel::ptr::project!).675675+///669676/// # Examples670677///671678/// ```···683684/// unsafe impl kernel::transmute::AsBytes for MyStruct{};684685///685686/// # fn test(alloc: &kernel::dma::CoherentAllocation<MyStruct>) -> Result {686686-/// let whole = kernel::dma_read!(alloc[2]);687687-/// let field = kernel::dma_read!(alloc[1].field);687687+/// let whole = kernel::dma_read!(alloc, [2]?);688688+/// let field = kernel::dma_read!(alloc, [1]?.field);688689/// # Ok::<(), Error>(()) }689690/// ```690691#[macro_export]691692macro_rules! dma_read {692692- ($dma:expr, $idx: expr, $($field:tt)*) => {{693693- (|| -> ::core::result::Result<_, $crate::error::Error> {694694- let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?;695695- // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be696696- // dereferenced. The compiler also further validates the expression on whether `field`697697- // is a member of `item` when expanded by the macro.698698- unsafe {699699- let ptr_field = ::core::ptr::addr_of!((*item) $($field)*);700700- ::core::result::Result::Ok(701701- $crate::dma::CoherentAllocation::field_read(&$dma, ptr_field)702702- )703703- }704704- })()693693+ ($dma:expr, $($proj:tt)*) => {{694694+ let dma = &$dma;695695+ let ptr = $crate::ptr::project!(696696+ $crate::dma::CoherentAllocation::as_ptr(dma), $($proj)*697697+ );698698+ // SAFETY: The pointer created by the projection is within the DMA region.699699+ unsafe { $crate::dma::CoherentAllocation::field_read(dma, ptr) }705700 }};706706- ($dma:ident [ $idx:expr ] $($field:tt)* ) => {707707- $crate::dma_read!($dma, $idx, $($field)*)708708- };709709- ($($dma:ident).* [ $idx:expr ] $($field:tt)* ) => {710710- $crate::dma_read!($($dma).*, $idx, $($field)*)711711- };712701}713702714703/// Writes to a field of an item from an allocated region of structs.704704+///705705+/// The syntax is of the form `kernel::dma_write!(dma, proj, val)` where `dma` is an 
expression706706+/// evaluating to a [`CoherentAllocation`], `proj` is a707707+/// [projection specification](kernel::ptr::project!), and `val` is the value to be written to the708708+/// projected location.715709///716710/// # Examples717711///···720728/// unsafe impl kernel::transmute::AsBytes for MyStruct{};721729///722730/// # fn test(alloc: &kernel::dma::CoherentAllocation<MyStruct>) -> Result {723723-/// kernel::dma_write!(alloc[2].member = 0xf);724724-/// kernel::dma_write!(alloc[1] = MyStruct { member: 0xf });731731+/// kernel::dma_write!(alloc, [2]?.member, 0xf);732732+/// kernel::dma_write!(alloc, [1]?, MyStruct { member: 0xf });725733/// # Ok::<(), Error>(()) }726734/// ```727735#[macro_export]728736macro_rules! dma_write {729729- ($dma:ident [ $idx:expr ] $($field:tt)*) => {{730730- $crate::dma_write!($dma, $idx, $($field)*)737737+ (@parse [$dma:expr] [$($proj:tt)*] [, $val:expr]) => {{738738+ let dma = &$dma;739739+ let ptr = $crate::ptr::project!(740740+ mut $crate::dma::CoherentAllocation::as_mut_ptr(dma), $($proj)*741741+ );742742+ let val = $val;743743+ // SAFETY: The pointer created by the projection is within the DMA region.744744+ unsafe { $crate::dma::CoherentAllocation::field_write(dma, ptr, val) }731745 }};732732- ($($dma:ident).* [ $idx:expr ] $($field:tt)* ) => {{733733- $crate::dma_write!($($dma).*, $idx, $($field)*)734734- }};735735- ($dma:expr, $idx: expr, = $val:expr) => {736736- (|| -> ::core::result::Result<_, $crate::error::Error> {737737- let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?;738738- // SAFETY: `item_from_index` ensures that `item` is always a valid item.739739- unsafe { $crate::dma::CoherentAllocation::field_write(&$dma, item, $val) }740740- ::core::result::Result::Ok(())741741- })()746746+ (@parse [$dma:expr] [$($proj:tt)*] [.$field:tt $($rest:tt)*]) => {747747+ $crate::dma_write!(@parse [$dma] [$($proj)* .$field] [$($rest)*])742748 };743743- ($dma:expr, $idx: expr, $(.$field:ident)* = 
$val:expr) => {744744- (|| -> ::core::result::Result<_, $crate::error::Error> {745745- let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?;746746- // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be747747- // dereferenced. The compiler also further validates the expression on whether `field`748748- // is a member of `item` when expanded by the macro.749749- unsafe {750750- let ptr_field = ::core::ptr::addr_of_mut!((*item) $(.$field)*);751751- $crate::dma::CoherentAllocation::field_write(&$dma, ptr_field, $val)752752- }753753- ::core::result::Result::Ok(())754754- })()749749+ (@parse [$dma:expr] [$($proj:tt)*] [[$index:expr]? $($rest:tt)*]) => {750750+ $crate::dma_write!(@parse [$dma] [$($proj)* [$index]?] [$($rest)*])751751+ };752752+ (@parse [$dma:expr] [$($proj:tt)*] [[$index:expr] $($rest:tt)*]) => {753753+ $crate::dma_write!(@parse [$dma] [$($proj)* [$index]] [$($rest)*])754754+ };755755+ ($dma:expr, $($rest:tt)*) => {756756+ $crate::dma_write!(@parse [$dma] [] [$($rest)*])755757 };756758}
+8
rust/kernel/kunit.rs
···1414/// Public but hidden since it should only be used from KUnit generated code.1515#[doc(hidden)]1616pub fn err(args: fmt::Arguments<'_>) {1717+ // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning.1818+ #[cfg(not(CONFIG_PRINTK))]1919+ let _ = args;2020+1721 // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we1822 // are passing.1923 #[cfg(CONFIG_PRINTK)]···3430/// Public but hidden since it should only be used from KUnit generated code.3531#[doc(hidden)]3632pub fn info(args: fmt::Arguments<'_>) {3333+ // `args` is unused if `CONFIG_PRINTK` is not set - this avoids a build-time warning.3434+ #[cfg(not(CONFIG_PRINTK))]3535+ let _ = args;3636+3737 // SAFETY: The format string is null-terminated and the `%pA` specifier matches the argument we3838 // are passing.3939 #[cfg(CONFIG_PRINTK)]
+4
rust/kernel/lib.rs
···2020#![feature(generic_nonzero)]2121#![feature(inline_const)]2222#![feature(pointer_is_aligned)]2323+#![feature(slice_ptr_len)]2324//2425// Stable since Rust 1.80.0.2526#![feature(slice_flatten)]···3736#![feature(const_option)]3837#![feature(const_ptr_write)]3938#![feature(const_refs_to_cell)]3939+//4040+// Stable since Rust 1.84.0.4141+#![feature(strict_provenance)]4042//4143// Expected to become stable.4244#![feature(arbitrary_self_types)]
+29-1
rust/kernel/ptr.rs
···2233//! Types and functions to work with pointers and addresses.4455-use core::mem::align_of;55+pub mod projection;66+pub use crate::project_pointer as project;77+88+use core::mem::{99+ align_of,1010+ size_of, //1111+};612use core::num::NonZero;713814/// Type representing an alignment, which is always a power of two.···231225}232226233227impl_alignable_uint!(u8, u16, u32, u64, usize);228228+229229+/// Trait to represent compile-time known size information.230230+///231231+/// This is a generalization of [`size_of`] that works for dynamically sized types.232232+pub trait KnownSize {233233+ /// Get the size of an object of this type in bytes, with the metadata of the given pointer.234234+ fn size(p: *const Self) -> usize;235235+}236236+237237+impl<T> KnownSize for T {238238+ #[inline(always)]239239+ fn size(_: *const Self) -> usize {240240+ size_of::<T>()241241+ }242242+}243243+244244+impl<T> KnownSize for [T] {245245+ #[inline(always)]246246+ fn size(p: *const Self) -> usize {247247+ p.len() * size_of::<T>()248248+ }249249+}
+305
rust/kernel/ptr/projection.rs
11+// SPDX-License-Identifier: GPL-2.022+33+//! Infrastructure for handling projections.44+55+use core::{66+ mem::MaybeUninit,77+ ops::Deref, //88+};99+1010+use crate::prelude::*;1111+1212+/// Error raised when a projection is attempted on an array or slice out of bounds.1313+pub struct OutOfBound;1414+1515+impl From<OutOfBound> for Error {1616+ #[inline(always)]1717+ fn from(_: OutOfBound) -> Self {1818+ ERANGE1919+ }2020+}2121+2222+/// A helper trait to perform index projection.2323+///2424+/// This is similar to [`core::slice::SliceIndex`], but operates on raw pointers safely and2525+/// fallibly.2626+///2727+/// # Safety2828+///2929+/// The implementation of `index` and `get` (if [`Some`] is returned) must ensure that, given the3030+/// input pointer `slice` and the returned pointer `output`:3131+/// - `output` has the same provenance as `slice`;3232+/// - `output.byte_offset_from(slice)` is between 0 and3333+/// `KnownSize::size(slice) - KnownSize::size(output)`.3434+///3535+/// This means that if the input pointer is valid, then the pointer returned by `get` or `index` is3636+/// also valid.3737+#[diagnostic::on_unimplemented(message = "`{Self}` cannot be used to index `{T}`")]3838+#[doc(hidden)]3939+pub unsafe trait ProjectIndex<T: ?Sized>: Sized {4040+ type Output: ?Sized;4141+4242+ /// Returns an index-projected pointer, if in bounds.4343+ fn get(self, slice: *mut T) -> Option<*mut Self::Output>;4444+4545+ /// Returns an index-projected pointer; fail the build if it cannot be proved to be in bounds.4646+ #[inline(always)]4747+ fn index(self, slice: *mut T) -> *mut Self::Output {4848+ Self::get(self, slice).unwrap_or_else(|| build_error!())4949+ }5050+}5151+5252+// Forward array impl to slice impl.5353+//5454+// SAFETY: Safety requirement guaranteed by the forwarded impl.5555+unsafe impl<T, I, const N: usize> ProjectIndex<[T; N]> for I5656+where5757+ I: ProjectIndex<[T]>,5858+{5959+ type Output = <I as ProjectIndex<[T]>>::Output;6060+6161+ 
#[inline(always)]6262+ fn get(self, slice: *mut [T; N]) -> Option<*mut Self::Output> {6363+ <I as ProjectIndex<[T]>>::get(self, slice)6464+ }6565+6666+ #[inline(always)]6767+ fn index(self, slice: *mut [T; N]) -> *mut Self::Output {6868+ <I as ProjectIndex<[T]>>::index(self, slice)6969+ }7070+}7171+7272+// SAFETY: `get`-returned pointer has the same provenance as `slice` and the offset is checked to7373+// not exceed the required bound.7474+unsafe impl<T> ProjectIndex<[T]> for usize {7575+ type Output = T;7676+7777+ #[inline(always)]7878+ fn get(self, slice: *mut [T]) -> Option<*mut T> {7979+ if self >= slice.len() {8080+ None8181+ } else {8282+ Some(slice.cast::<T>().wrapping_add(self))8383+ }8484+ }8585+}8686+8787+// SAFETY: `get`-returned pointer has the same provenance as `slice` and the offset is checked to8888+// not exceed the required bound.8989+unsafe impl<T> ProjectIndex<[T]> for core::ops::Range<usize> {9090+ type Output = [T];9191+9292+ #[inline(always)]9393+ fn get(self, slice: *mut [T]) -> Option<*mut [T]> {9494+ let new_len = self.end.checked_sub(self.start)?;9595+ if self.end > slice.len() {9696+ return None;9797+ }9898+ Some(core::ptr::slice_from_raw_parts_mut(9999+ slice.cast::<T>().wrapping_add(self.start),100100+ new_len,101101+ ))102102+ }103103+}104104+105105+// SAFETY: Safety requirement guaranteed by the forwarded impl.106106+unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeTo<usize> {107107+ type Output = [T];108108+109109+ #[inline(always)]110110+ fn get(self, slice: *mut [T]) -> Option<*mut [T]> {111111+ (0..self.end).get(slice)112112+ }113113+}114114+115115+// SAFETY: Safety requirement guaranteed by the forwarded impl.116116+unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeFrom<usize> {117117+ type Output = [T];118118+119119+ #[inline(always)]120120+ fn get(self, slice: *mut [T]) -> Option<*mut [T]> {121121+ (self.start..slice.len()).get(slice)122122+ }123123+}124124+125125+// SAFETY: `get` returned the pointer as is, so it 
always has the same provenance and an offset of 0.126126+unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeFull {127127+ type Output = [T];128128+129129+ #[inline(always)]130130+ fn get(self, slice: *mut [T]) -> Option<*mut [T]> {131131+ Some(slice)132132+ }133133+}134134+135135+/// A helper trait to perform field projection.136136+///137137+/// This trait has a `DEREF` generic parameter so it can be implemented twice for types that138138+/// implement [`Deref`]. This will cause an ambiguity error and thus block [`Deref`] types from139139+/// being used as the base of a projection, as they can inject unsoundness. Users therefore must140140+/// not specify `DEREF` and should always leave it to be inferred.141141+///142142+/// # Safety143143+///144144+/// `proj` may only invoke `f` with a valid allocation, as the documentation of [`Self::proj`]145145+/// describes.146146+#[doc(hidden)]147147+pub unsafe trait ProjectField<const DEREF: bool> {148148+ /// Project a pointer to a type to a pointer to one of its fields.149149+ ///150150+ /// `f` may only be invoked with a valid allocation so it can safely obtain raw pointers to151151+ /// fields using `&raw mut`.152152+ ///153153+ /// This is needed because `base` might not point to a valid allocation, while `&raw mut`154154+ /// requires pointers to be in bounds of a valid allocation.155155+ ///156156+ /// # Safety157157+ ///158158+ /// `f` must return a pointer in bounds of the provided pointer.159159+ unsafe fn proj<F>(base: *mut Self, f: impl FnOnce(*mut Self) -> *mut F) -> *mut F;160160+}161161+162162+// NOTE: in theory, this API should work for `T: ?Sized` and `F: ?Sized`, too. 
However, we cannot163163+// currently support that as we need to obtain a valid allocation that `&raw const` can operate on.164164+//165165+// SAFETY: `proj` invokes `f` with valid allocation.166166+unsafe impl<T> ProjectField<false> for T {167167+ #[inline(always)]168168+ unsafe fn proj<F>(base: *mut Self, f: impl FnOnce(*mut Self) -> *mut F) -> *mut F {169169+ // Create a valid allocation to start projection, as `base` is not necessarily so. The170170+ // memory is never actually used so it will be optimized out, so it should work even for171171+ // very large `T` (`memoffset` crate also relies on this). To be extra certain, we also172172+ // annotate `f` closure with `#[inline(always)]` in the macro.173173+ let mut place = MaybeUninit::uninit();174174+ let place_base = place.as_mut_ptr();175175+ let field = f(place_base);176176+ // SAFETY: `field` is in bounds from `base` per safety requirement.177177+ let offset = unsafe { field.byte_offset_from(place_base) };178178+ // Use `wrapping_byte_offset` as `base` does not need to be of valid allocation.179179+ base.wrapping_byte_offset(offset).cast()180180+ }181181+}182182+183183+// SAFETY: Vacuously satisfied.184184+unsafe impl<T: Deref> ProjectField<true> for T {185185+ #[inline(always)]186186+ unsafe fn proj<F>(_: *mut Self, _: impl FnOnce(*mut Self) -> *mut F) -> *mut F {187187+ build_error!("this function is a guard against `Deref` impl and is never invoked");188188+ }189189+}190190+191191+/// Create a projection from a raw pointer.192192+///193193+/// The projected pointer is within the memory region marked by the input pointer. There is no194194+/// requirement that the input raw pointer needs to be valid, so this macro may be used for195195+/// projecting pointers outside normal address space, e.g. I/O pointers. 
However, if the input196196+/// pointer is valid, the projected pointer is also valid.197197+///198198+/// Supported projections include field projections and index projections.199199+/// It is not allowed to project into types that implement custom [`Deref`] or200200+/// [`Index`](core::ops::Index).201201+///202202+/// The macro has the basic syntax `kernel::ptr::project!(ptr, projection)`, where `ptr` is an203203+/// expression that evaluates to a raw pointer which serves as the base of the projection.204204+/// `projection` can be a projection expression of the form `.field` (normally an identifier, or a205205+/// numeral in the case of tuple structs) or of the form `[index]`.206206+///207207+/// If a mutable pointer is needed, the macro input can be prefixed with the `mut` keyword, i.e.208208+/// `kernel::ptr::project!(mut ptr, projection)`. By default, a const pointer is created.209209+///210210+/// The `ptr::project!` macro can perform both fallible indexing and build-time checked indexing.211211+/// The `[index]` form performs build-time bounds checking; if the compiler fails to prove that212212+/// `[index]` is in bounds, compilation will fail. 
`[index]?` can be used to perform runtime bounds checking;213213+/// an `OutOfBound` error is raised via `?` if the index is out of bounds.214214+///215215+/// # Examples216216+///217217+/// Field projections are performed with `.field_name`:218218+///219219+/// ```220220+/// struct MyStruct { field: u32, }221221+/// let ptr: *const MyStruct = core::ptr::dangling();222222+/// let field_ptr: *const u32 = kernel::ptr::project!(ptr, .field);223223+///224224+/// struct MyTupleStruct(u32, u32);225225+///226226+/// fn proj(ptr: *const MyTupleStruct) {227227+/// let field_ptr: *const u32 = kernel::ptr::project!(ptr, .1);228228+/// }229229+/// ```230230+///231231+/// Index projections are performed with `[index]`:232232+///233233+/// ```234234+/// fn proj(ptr: *const [u8; 32]) -> Result {235235+/// let field_ptr: *const u8 = kernel::ptr::project!(ptr, [1]);236236+/// // The following invocation, if uncommented, would fail the build.237237+/// //238238+/// // kernel::ptr::project!(ptr, [128]);239239+///240240+/// // This will raise an `OutOfBound` error (which is convertible to `ERANGE`).241241+/// kernel::ptr::project!(ptr, [128]?);242242+/// Ok(())243243+/// }244244+/// ```245245+///246246+/// If you need to match on the error instead of propagating it, put the invocation inside a closure:247247+///248248+/// ```249249+/// let ptr: *const [u8; 32] = core::ptr::dangling();250250+/// let field_ptr: Result<*const u8> = (|| -> Result<_> {251251+/// Ok(kernel::ptr::project!(ptr, [128]?))252252+/// })();253253+/// assert!(field_ptr.is_err());254254+/// ```255255+///256256+/// For mutable pointers, put `mut` as the first token of the macro invocation.257257+///258258+/// ```259259+/// let ptr: *mut [(u8, u16); 32] = core::ptr::dangling_mut();260260+/// let field_ptr: *mut u16 = kernel::ptr::project!(mut ptr, [1].1);261261+/// ```262262+#[macro_export]263263+macro_rules! project_pointer {264264+ (@gen $ptr:ident, ) => {};265265+ // Field projection. 
`$field` needs to be `tt` to support tuple index like `.0`.266266+ (@gen $ptr:ident, .$field:tt $($rest:tt)*) => {267267+ // SAFETY: The provided closure always returns an in-bounds pointer.268268+ let $ptr = unsafe {269269+ $crate::ptr::projection::ProjectField::proj($ptr, #[inline(always)] |ptr| {270270+ // Check unaligned field. Not all users (e.g. DMA) can handle unaligned271271+ // projections.272272+ if false {273273+ let _ = &(*ptr).$field;274274+ }275275+ // SAFETY: `$field` is in bounds, and no implicit `Deref` is possible (if the276276+ // type implements `Deref`, Rust cannot infer the generic parameter `DEREF`).277277+ &raw mut (*ptr).$field278278+ })279279+ };280280+ $crate::ptr::project!(@gen $ptr, $($rest)*)281281+ };282282+ // Fallible index projection.283283+ (@gen $ptr:ident, [$index:expr]? $($rest:tt)*) => {284284+ let $ptr = $crate::ptr::projection::ProjectIndex::get($index, $ptr)285285+ .ok_or($crate::ptr::projection::OutOfBound)?;286286+ $crate::ptr::project!(@gen $ptr, $($rest)*)287287+ };288288+ // Build-time checked index projection.289289+ (@gen $ptr:ident, [$index:expr] $($rest:tt)*) => {290290+ let $ptr = $crate::ptr::projection::ProjectIndex::index($index, $ptr);291291+ $crate::ptr::project!(@gen $ptr, $($rest)*)292292+ };293293+ (mut $ptr:expr, $($proj:tt)*) => {{294294+ let ptr: *mut _ = $ptr;295295+ $crate::ptr::project!(@gen ptr, $($proj)*);296296+ ptr297297+ }};298298+ ($ptr:expr, $($proj:tt)*) => {{299299+ let ptr = <*const _>::cast_mut($ptr);300300+ // We currently always project using mutable pointer, as it is not decided whether `&raw301301+ // const` allows the resulting pointer to be mutated (see documentation of `addr_of!`).302302+ $crate::ptr::project!(@gen ptr, $($proj)*);303303+ ptr.cast_const()304304+ }};305305+}
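The `ProjectField<false>` impl above measures a field's offset inside a throwaway `MaybeUninit` allocation, then re-applies that offset to the (possibly invalid) base pointer. A minimal userspace sketch of the same technique; the `Regs` struct and `project_status` helper are illustrative, not part of the patch:

```rust
use core::mem::MaybeUninit;
use core::ptr::addr_of_mut;

// Hypothetical register block; only used to have a field to project to.
#[repr(C)]
struct Regs {
    _ctrl: u32,
    status: u32,
}

// Sketch of the `ProjectField::proj` technique: measure the field offset
// inside a stack `MaybeUninit` dummy (a valid allocation that `addr_of_mut!`
// may point into), then apply the offset to the possibly-invalid base with
// `wrapping_byte_offset`, which has no in-bounds requirement.
fn project_status(base: *mut Regs) -> *mut u32 {
    let mut place = MaybeUninit::<Regs>::uninit();
    let place_base = place.as_mut_ptr();
    // SAFETY: `place_base` points into a real (uninitialized) allocation, so
    // taking the raw address of a field is in bounds; nothing is read.
    let field = unsafe { addr_of_mut!((*place_base).status) };
    // SAFETY: both pointers are derived from the same allocation.
    let offset = unsafe { field.cast::<u8>().offset_from(place_base.cast::<u8>()) };
    base.wrapping_byte_offset(offset).cast::<u32>()
}

fn main() {
    // Works on a dangling "I/O-style" pointer: only an address is computed.
    let mmio = 0x1000 as *mut Regs;
    assert_eq!(project_status(mmio) as usize, 0x1004);

    // On a real allocation the projected pointer is valid and usable.
    let mut regs = Regs { _ctrl: 0, status: 7 };
    let status = project_status(&mut regs);
    // SAFETY: `status` points at `regs.status`.
    assert_eq!(unsafe { *status }, 7);
}
```

The dummy allocation is never read or written, so the optimizer removes it; only the offset arithmetic survives.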
+2-2
rust/kernel/str.rs
···664664///665665/// * The first byte of `buffer` is always zero.666666/// * The length of `buffer` is at least 1.667667-pub(crate) struct NullTerminatedFormatter<'a> {667667+pub struct NullTerminatedFormatter<'a> {668668 buffer: &'a mut [u8],669669}670670671671impl<'a> NullTerminatedFormatter<'a> {672672 /// Create a new [`Self`] instance.673673- pub(crate) fn new(buffer: &'a mut [u8]) -> Option<NullTerminatedFormatter<'a>> {673673+ pub fn new(buffer: &'a mut [u8]) -> Option<NullTerminatedFormatter<'a>> {674674 *(buffer.first_mut()?) = 0;675675676676 // INVARIANT:
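The invariant that `NullTerminatedFormatter` documents (length at least 1, buffer NUL-terminated at all times, terminator written before any formatting) can be sketched in userspace roughly as follows; `NulFormatter` and its fields are illustrative stand-ins, not the kernel implementation:

```rust
use core::fmt::{self, Write};

// Illustrative formatter that keeps its buffer NUL-terminated at all times.
struct NulFormatter<'a> {
    buffer: &'a mut [u8],
    pos: usize,
}

impl<'a> NulFormatter<'a> {
    fn new(buffer: &'a mut [u8]) -> Option<Self> {
        // Establish the invariant up front: first byte is the terminator.
        *buffer.first_mut()? = 0;
        Some(Self { buffer, pos: 0 })
    }
}

impl Write for NulFormatter<'_> {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        let bytes = s.as_bytes();
        // Refuse writes that would leave no room for the trailing NUL.
        if self.pos + bytes.len() >= self.buffer.len() {
            return Err(fmt::Error);
        }
        self.buffer[self.pos..self.pos + bytes.len()].copy_from_slice(bytes);
        self.pos += bytes.len();
        self.buffer[self.pos] = 0; // re-establish the invariant
        Ok(())
    }
}

fn main() {
    let mut buf = [0xffu8; 8];
    let mut f = NulFormatter::new(&mut buf).unwrap();
    write!(f, "pid{}", 42).unwrap();
    assert_eq!(&buf[..6], &b"pid42\0"[..]);

    // An overflowing write fails but keeps the buffer terminated.
    let mut small = [0xffu8; 4];
    let mut f = NulFormatter::new(&mut small).unwrap();
    assert!(write!(f, "toolong").is_err());
    assert_eq!(small[0], 0);
}
```

Keeping the terminator written after every `write_str` is what makes the buffer safe to hand to C string consumers at any point, even after a failed write.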
+23-46
rust/pin-init/internal/src/init.rs
···62626363enum InitializerAttribute {6464 DefaultError(DefaultErrorAttribute),6565- DisableInitializedFieldAccess,6665}67666867struct DefaultErrorAttribute {···8586 let error = error.map_or_else(8687 || {8788 if let Some(default_error) = attrs.iter().fold(None, |acc, attr| {8989+ #[expect(irrefutable_let_patterns)]8890 if let InitializerAttribute::DefaultError(DefaultErrorAttribute { ty }) = attr {8991 Some(ty.clone())9092 } else {···145145 };146146 // `mixed_site` ensures that the data is not accessible to the user-controlled code.147147 let data = Ident::new("__data", Span::mixed_site());148148- let init_fields = init_fields(149149- &fields,150150- pinned,151151- !attrs152152- .iter()153153- .any(|attr| matches!(attr, InitializerAttribute::DisableInitializedFieldAccess)),154154- &data,155155- &slot,156156- );148148+ let init_fields = init_fields(&fields, pinned, &data, &slot);157149 let field_check = make_field_check(&fields, init_kind, &path);158150 Ok(quote! {{159159- // We do not want to allow arbitrary returns, so we declare this type as the `Ok` return160160- // type and shadow it later when we insert the arbitrary user code. 
That way there will be161161- // no possibility of returning without `unsafe`.162162- struct __InitOk;163163-164151 // Get the data about fields from the supplied type.165152 // SAFETY: TODO166153 let #data = unsafe {···157170 #path::#get_data()158171 };159172 // Ensure that `#data` really is of type `#data` and help with type inference:160160- let init = ::pin_init::__internal::#data_trait::make_closure::<_, __InitOk, #error>(173173+ let init = ::pin_init::__internal::#data_trait::make_closure::<_, #error>(161174 #data,162175 move |slot| {163163- {164164- // Shadow the structure so it cannot be used to return early.165165- struct __InitOk;166166- #zeroable_check167167- #this168168- #init_fields169169- #field_check170170- }171171- Ok(__InitOk)176176+ #zeroable_check177177+ #this178178+ #init_fields179179+ #field_check180180+ // SAFETY: we are the `init!` macro that is allowed to call this.181181+ Ok(unsafe { ::pin_init::__internal::InitOk::new() })172182 }173183 );174184 let init = move |slot| -> ::core::result::Result<(), #error> {···220236fn init_fields(221237 fields: &Punctuated<InitializerField, Token![,]>,222238 pinned: bool,223223- generate_initialized_accessors: bool,224239 data: &Ident,225240 slot: &Ident,226241) -> TokenStream {···243260 });244261 // Again span for better diagnostics245262 let write = quote_spanned!(ident.span()=> ::core::ptr::write);263263+ // NOTE: the field accessor ensures that the initialized field is properly aligned.264264+ // Unaligned fields will cause the compiler to emit E0793. We do not support265265+ // unaligned fields since `Init::__init` requires an aligned pointer; the call to266266+ // `ptr::write` below has the same requirement.246267 let accessor = if pinned {247268 let project_ident = format_ident!("__project_{ident}");248269 quote! {···259272 unsafe { &mut (*#slot).#ident }260273 }261274 };262262- let accessor = generate_initialized_accessors.then(|| {263263- quote! 
{264264- #(#cfgs)*265265- #[allow(unused_variables)]266266- let #ident = #accessor;267267- }268268- });269275 quote! {270276 #(#attrs)*271277 {···266286 // SAFETY: TODO267287 unsafe { #write(::core::ptr::addr_of_mut!((*#slot).#ident), #value_ident) };268288 }269269- #accessor289289+ #(#cfgs)*290290+ #[allow(unused_variables)]291291+ let #ident = #accessor;270292 }271293 }272294 InitializerKind::Init { ident, value, .. } => {273295 // Again span for better diagnostics274296 let init = format_ident!("init", span = value.span());297297+ // NOTE: the field accessor ensures that the initialized field is properly aligned.298298+ // Unaligned fields will cause the compiler to emit E0793. We do not support299299+ // unaligned fields since `Init::__init` requires an aligned pointer; the call to300300+ // `ptr::write` below has the same requirement.275301 let (value_init, accessor) = if pinned {276302 let project_ident = format_ident!("__project_{ident}");277303 (···312326 },313327 )314328 };315315- let accessor = generate_initialized_accessors.then(|| {316316- quote! {317317- #(#cfgs)*318318- #[allow(unused_variables)]319319- let #ident = #accessor;320320- }321321- });322329 quote! {323330 #(#attrs)*324331 {325332 let #init = #value;326333 #value_init327334 }328328- #accessor335335+ #(#cfgs)*336336+ #[allow(unused_variables)]337337+ let #ident = #accessor;329338 }330339 }331340 InitializerKind::Code { block: value, .. } => quote! {···447466 if a.path().is_ident("default_error") {448467 a.parse_args::<DefaultErrorAttribute>()449468 .map(InitializerAttribute::DefaultError)450450- } else if a.path().is_ident("disable_initialized_field_access") {451451- a.meta452452- .require_path_only()453453- .map(|_| InitializerAttribute::DisableInitializedFieldAccess)454469 } else {455470 Err(syn::Error::new_spanned(a, "unknown initializer attribute"))456471 }
+24-4
rust/pin-init/src/__internal.rs
···4646 }4747}48484949+/// Token type to signify successful initialization.5050+///5151+/// Can only be constructed via the unsafe [`Self::new`] function. The initializer macros use this5252+/// token type to prevent returning `Ok` from an initializer without initializing all fields.5353+pub struct InitOk(());5454+5555+impl InitOk {5656+ /// Creates a new token.5757+ ///5858+ /// # Safety5959+ ///6060+ /// This function may only be called from the `init!` macro in `../internal/src/init.rs`.6161+ #[inline(always)]6262+ pub unsafe fn new() -> Self {6363+ Self(())6464+ }6565+}6666+4967/// This trait is only implemented via the `#[pin_data]` proc-macro. It is used to facilitate5068/// the pin projections within the initializers.5169///···8668 type Datee: ?Sized + HasPinData;87698870 /// Type inference helper function.8989- fn make_closure<F, O, E>(self, f: F) -> F7171+ #[inline(always)]7272+ fn make_closure<F, E>(self, f: F) -> F9073 where9191- F: FnOnce(*mut Self::Datee) -> Result<O, E>,7474+ F: FnOnce(*mut Self::Datee) -> Result<InitOk, E>,9275 {9376 f9477 }···11798 type Datee: ?Sized + HasInitData;11899119100 /// Type inference helper function.120120- fn make_closure<F, O, E>(self, f: F) -> F101101+ #[inline(always)]102102+ fn make_closure<F, E>(self, f: F) -> F121103 where122122- F: FnOnce(*mut Self::Datee) -> Result<O, E>,104104+ F: FnOnce(*mut Self::Datee) -> Result<InitOk, E>,123105 {124106 f125107 }
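The pattern here is a token type: the initializer closure's `Ok` payload is a type that user code cannot construct, so a bare early `return Ok(...)` no longer type-checks, and only the generated tail that mints the token can complete successfully. A self-contained sketch of the idea (module and function names are illustrative):

```rust
// Standalone sketch of the `InitOk` token pattern.
mod internal {
    // The `Ok` payload of an initializer; its field is private, so code
    // outside this module cannot write `Ok(InitOk(..))` by hand.
    pub struct InitOk(());

    impl InitOk {
        /// # Safety
        /// In the real code this is reserved for the `init!` macro expansion.
        pub unsafe fn new() -> Self {
            InitOk(())
        }
    }
}

// Stand-in for the machinery that runs an initializer closure: success is
// only expressible by producing the token.
fn run_initializer<E>(
    f: impl FnOnce() -> Result<internal::InitOk, E>,
) -> Result<(), E> {
    f().map(|_| ())
}

fn main() {
    let res: Result<(), ()> = run_initializer(|| {
        // ... field initialization would go here ...
        // SAFETY: stands in for the `init!` macro's generated tail.
        Ok(unsafe { internal::InitOk::new() })
    });
    assert!(res.is_ok());
}
```

Compared with the previous approach of shadowing a local `__InitOk` type, the unconstructible token needs no shadowing tricks: the type system alone rules out returning `Ok` before all fields are initialized.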
+16-14
samples/rust/rust_dma.rs
···6868 CoherentAllocation::alloc_coherent(pdev.as_ref(), TEST_VALUES.len(), GFP_KERNEL)?;69697070 for (i, value) in TEST_VALUES.into_iter().enumerate() {7171- kernel::dma_write!(ca[i] = MyStruct::new(value.0, value.1))?;7171+ kernel::dma_write!(ca, [i]?, MyStruct::new(value.0, value.1));7272 }73737474 let size = 4 * page::PAGE_SIZE;···8585 }8686}87878888+impl DmaSampleDriver {8989+ fn check_dma(&self) -> Result {9090+ for (i, value) in TEST_VALUES.into_iter().enumerate() {9191+ let val0 = kernel::dma_read!(self.ca, [i]?.h);9292+ let val1 = kernel::dma_read!(self.ca, [i]?.b);9393+9494+ assert_eq!(val0, value.0);9595+ assert_eq!(val1, value.1);9696+ }9797+9898+ Ok(())9999+ }100100+}101101+88102#[pinned_drop]89103impl PinnedDrop for DmaSampleDriver {90104 fn drop(self: Pin<&mut Self>) {91105 dev_info!(self.pdev, "Unload DMA test driver.\n");921069393- for (i, value) in TEST_VALUES.into_iter().enumerate() {9494- let val0 = kernel::dma_read!(self.ca[i].h);9595- let val1 = kernel::dma_read!(self.ca[i].b);9696- assert!(val0.is_ok());9797- assert!(val1.is_ok());9898-9999- if let Ok(val0) = val0 {100100- assert_eq!(val0, value.0);101101- }102102- if let Ok(val1) = val1 {103103- assert_eq!(val1, value.1);104104- }105105- }107107+ assert!(self.check_dma().is_ok());106108107109 for (i, entry) in self.sgt.iter().enumerate() {108110 dev_info!(
···11+// SPDX-License-Identifier: GPL-2.022+/*33+ * wq_stall - Test module for the workqueue stall detector.44+ *55+ * Deliberately creates a workqueue stall so the watchdog fires and66+ * prints diagnostic output. Useful for verifying that the stall77+ * detector correctly identifies stuck workers and produces useful88+ * backtraces.99+ *1010+ * The stall is triggered by clearing PF_WQ_WORKER before sleeping,1111+ * which hides the worker from the concurrency manager. A second1212+ * work item queued on the same pool then sits in the worklist with1313+ * no worker available to process it.1414+ *1515+ * After ~30s the workqueue watchdog fires:1616+ * BUG: workqueue lockup - pool cpus=N ...1717+ *1818+ * Build:1919+ * make -C <kernel tree> M=samples/workqueue/stall_detector modules2020+ *2121+ * Copyright (c) 2026 Meta Platforms, Inc. and affiliates.2222+ * Copyright (c) 2026 Breno Leitao <leitao@debian.org>2323+ */2424+2525+#include <linux/module.h>2626+#include <linux/workqueue.h>2727+#include <linux/wait.h>2828+#include <linux/atomic.h>2929+#include <linux/sched.h>3030+3131+static DECLARE_WAIT_QUEUE_HEAD(stall_wq_head);3232+static atomic_t wake_condition = ATOMIC_INIT(0);3333+static struct work_struct stall_work1;3434+static struct work_struct stall_work2;3535+3636+static void stall_work2_fn(struct work_struct *work)3737+{3838+ pr_info("wq_stall: second work item finally ran\n");3939+}4040+4141+static void stall_work1_fn(struct work_struct *work)4242+{4343+ pr_info("wq_stall: first work item running on cpu %d\n",4444+ raw_smp_processor_id());4545+4646+ /*4747+ * Queue second item while we're still counted as running4848+ * (pool->nr_running > 0). Since schedule_work() on a per-CPU4949+ * workqueue targets raw_smp_processor_id(), item 2 lands on the5050+ * same pool. 
__queue_work -> kick_pool -> need_more_worker()5151+ * sees nr_running > 0 and does NOT wake a new worker.5252+ */5353+ schedule_work(&stall_work2);5454+5555+ /*5656+ * Hide from the workqueue concurrency manager. Without5757+ * PF_WQ_WORKER, schedule() won't call wq_worker_sleeping(),5858+ * so nr_running is never decremented and no replacement5959+ * worker is created. Item 2 stays stuck in pool->worklist.6060+ */6161+ current->flags &= ~PF_WQ_WORKER;6262+6363+ pr_info("wq_stall: entering wait_event_idle (PF_WQ_WORKER cleared)\n");6464+ pr_info("wq_stall: expect 'BUG: workqueue lockup' in ~30-60s\n");6565+ wait_event_idle(stall_wq_head, atomic_read(&wake_condition) != 0);6666+6767+ /* Restore so process_one_work() cleanup works correctly */6868+ current->flags |= PF_WQ_WORKER;6969+ pr_info("wq_stall: woke up, PF_WQ_WORKER restored\n");7070+}7171+7272+static int __init wq_stall_init(void)7373+{7474+ pr_info("wq_stall: loading\n");7575+7676+ INIT_WORK(&stall_work1, stall_work1_fn);7777+ INIT_WORK(&stall_work2, stall_work2_fn);7878+ schedule_work(&stall_work1);7979+8080+ return 0;8181+}8282+8383+static void __exit wq_stall_exit(void)8484+{8585+ pr_info("wq_stall: unloading\n");8686+ atomic_set(&wake_condition, 1);8787+ wake_up(&stall_wq_head);8888+ flush_work(&stall_work1);8989+ flush_work(&stall_work2);9090+ pr_info("wq_stall: all work flushed, module unloaded\n");9191+}9292+9393+module_init(wq_stall_init);9494+module_exit(wq_stall_exit);9595+9696+MODULE_LICENSE("GPL");9797+MODULE_DESCRIPTION("Reproduce workqueue stall caused by PF_WQ_WORKER misuse");9898+MODULE_AUTHOR("Breno Leitao <leitao@debian.org>");
+3-1
scripts/Makefile.build
···310310311311# The features in this list are the ones allowed for non-`rust/` code.312312#313313+# - Stable since Rust 1.79.0: `feature(slice_ptr_len)`.313314# - Stable since Rust 1.81.0: `feature(lint_reasons)`.314315# - Stable since Rust 1.82.0: `feature(asm_const)`,315316# `feature(offset_of_nested)`, `feature(raw_ref_op)`.317317+# - Stable since Rust 1.84.0: `feature(strict_provenance)`.316318# - Stable since Rust 1.87.0: `feature(asm_goto)`.317319# - Expected to become stable: `feature(arbitrary_self_types)`.318320# - To be determined: `feature(used_with_arg)`.319321#320322# Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on321323# the unstable features in use.322322-rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,used_with_arg324324+rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,slice_ptr_len,strict_provenance,used_with_arg323325324326# `--out-dir` is required to avoid temporaries being created by `rustc` in the325327# current working directory, which may be not accessible in the out-of-tree
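A small sketch of what the two newly allowed (now stable) features provide, assuming a rustc of at least 1.84:

```rust
fn main() {
    // `feature(slice_ptr_len)` landed as `<*mut [T]>::len` / `is_empty` in
    // Rust 1.79: the length is read from the fat-pointer metadata without
    // dereferencing, so it is safe even on a dangling pointer. This is what
    // lets `ProjectIndex::get` bounds-check a raw `*mut [T]`.
    let p: *mut [u8] =
        core::ptr::slice_from_raw_parts_mut(core::ptr::dangling_mut::<u8>(), 32);
    assert_eq!(p.len(), 32);
    assert!(!p.is_empty());

    // `feature(strict_provenance)` became stable in 1.84: `addr()` and
    // `without_provenance()` make address-only pointer manipulation explicit,
    // e.g. for I/O addresses that never point into a Rust allocation.
    let q: *const u8 = core::ptr::without_provenance(0x40);
    assert_eq!(q.addr(), 0x40);
}
```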
+2-2
scripts/genksyms/parse.y
···325325 { $$ = $4; }326326 | direct_declarator BRACKET_PHRASE327327 { $$ = $2; }328328- | '(' declarator ')'329329- { $$ = $3; }328328+ | '(' attribute_opt declarator ')'329329+ { $$ = $4; }330330 ;331331332332/* Nested declarators differ from regular declarators in that they do
···3232#include "include/crypto.h"3333#include "include/ipc.h"3434#include "include/label.h"3535+#include "include/lib.h"3536#include "include/policy.h"3637#include "include/policy_ns.h"3738#include "include/resource.h"···6362 * securityfs and apparmorfs filesystems.6463 */65646565+#define IREF_POISON 10166666767/*6868 * support fns···8179 if (!private)8280 return;83818484- aa_put_loaddata(private->loaddata);8282+ aa_put_i_loaddata(private->loaddata);8583 kvfree(private);8684}8785···155153 return 0;156154}157155156156+static struct aa_ns *get_ns_common_ref(struct aa_common_ref *ref)157157+{158158+ if (ref) {159159+ struct aa_label *reflabel = container_of(ref, struct aa_label,160160+ count);161161+ return aa_get_ns(labels_ns(reflabel));162162+ }163163+164164+ return NULL;165165+}166166+167167+static struct aa_proxy *get_proxy_common_ref(struct aa_common_ref *ref)168168+{169169+ if (ref)170170+ return aa_get_proxy(container_of(ref, struct aa_proxy, count));171171+172172+ return NULL;173173+}174174+175175+static struct aa_loaddata *get_loaddata_common_ref(struct aa_common_ref *ref)176176+{177177+ if (ref)178178+ return aa_get_i_loaddata(container_of(ref, struct aa_loaddata,179179+ count));180180+ return NULL;181181+}182182+183183+static void aa_put_common_ref(struct aa_common_ref *ref)184184+{185185+ if (!ref)186186+ return;187187+188188+ switch (ref->reftype) {189189+ case REF_RAWDATA:190190+ aa_put_i_loaddata(container_of(ref, struct aa_loaddata,191191+ count));192192+ break;193193+ case REF_PROXY:194194+ aa_put_proxy(container_of(ref, struct aa_proxy,195195+ count));196196+ break;197197+ case REF_NS:198198+ /* ns count is held on its unconfined label */199199+ aa_put_ns(labels_ns(container_of(ref, struct aa_label, count)));200200+ break;201201+ default:202202+ AA_BUG(true, "unknown refcount type");203203+ break;204204+ }205205+}206206+207207+static void aa_get_common_ref(struct aa_common_ref *ref)208208+{209209+ kref_get(&ref->count);210210+}211211+212212+static 
void aafs_evict(struct inode *inode)213213+{214214+ struct aa_common_ref *ref = inode->i_private;215215+216216+ clear_inode(inode);217217+ aa_put_common_ref(ref);218218+ inode->i_private = (void *) IREF_POISON;219219+}220220+158221static void aafs_free_inode(struct inode *inode)159222{160223 if (S_ISLNK(inode->i_mode))···229162230163static const struct super_operations aafs_super_ops = {231164 .statfs = simple_statfs,165165+ .evict_inode = aafs_evict,232166 .free_inode = aafs_free_inode,233167 .show_path = aafs_show_path,234168};···330262 * aafs_remove(). Will return ERR_PTR on failure.331263 */332264static struct dentry *aafs_create(const char *name, umode_t mode,333333- struct dentry *parent, void *data, void *link,265265+ struct dentry *parent,266266+ struct aa_common_ref *data, void *link,334267 const struct file_operations *fops,335268 const struct inode_operations *iops)336269{···368299 goto fail_dentry;369300 inode_unlock(dir);370301302302+ if (data)303303+ aa_get_common_ref(data);304304+371305 return dentry;372306373307fail_dentry:···395323 * see aafs_create396324 */397325static struct dentry *aafs_create_file(const char *name, umode_t mode,398398- struct dentry *parent, void *data,326326+ struct dentry *parent,327327+ struct aa_common_ref *data,399328 const struct file_operations *fops)400329{401330 return aafs_create(name, mode, parent, data, NULL, fops, NULL);···482409483410 data->size = copy_size;484411 if (copy_from_user(data->data, userbuf, copy_size)) {485485- aa_put_loaddata(data);412412+ /* trigger free - don't need to put pcount */413413+ aa_put_i_loaddata(data);486414 return ERR_PTR(-EFAULT);487415 }488416···491417}492418493419static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,494494- loff_t *pos, struct aa_ns *ns)420420+ loff_t *pos, struct aa_ns *ns,421421+ const struct cred *ocred)495422{496423 struct aa_loaddata *data;497424 struct aa_label *label;···503428 /* high level check about policy management - fine grained 
in504429 * below after unpack505430 */506506- error = aa_may_manage_policy(current_cred(), label, ns, mask);431431+ error = aa_may_manage_policy(current_cred(), label, ns, ocred, mask);507432 if (error)508433 goto end_section;509434···511436 error = PTR_ERR(data);512437 if (!IS_ERR(data)) {513438 error = aa_replace_profiles(ns, label, mask, data);514514- aa_put_loaddata(data);439439+ /* put pcount, which will put count and free if no440440+ * profiles referencing it.441441+ */442442+ aa_put_profile_loaddata(data);515443 }516444end_section:517445 end_current_label_crit_section(label);···526448static ssize_t profile_load(struct file *f, const char __user *buf, size_t size,527449 loff_t *pos)528450{529529- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);530530- int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns);451451+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);452452+ int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns,453453+ f->f_cred);531454532455 aa_put_ns(ns);533456···544465static ssize_t profile_replace(struct file *f, const char __user *buf,545466 size_t size, loff_t *pos)546467{547547- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);468468+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);548469 int error = policy_update(AA_MAY_LOAD_POLICY | AA_MAY_REPLACE_POLICY,549549- buf, size, pos, ns);470470+ buf, size, pos, ns, f->f_cred);550471 aa_put_ns(ns);551472552473 return error;···564485 struct aa_loaddata *data;565486 struct aa_label *label;566487 ssize_t error;567567- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);488488+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);568489569490 label = begin_current_label_crit_section();570491 /* high level check about policy management - fine grained in571492 * below after unpack572493 */573494 error = aa_may_manage_policy(current_cred(), label, ns,574574- AA_MAY_REMOVE_POLICY);495495+ f->f_cred, AA_MAY_REMOVE_POLICY);575496 if (error)576497 
goto out;577498···585506 if (!IS_ERR(data)) {586507 data->data[size] = 0;587508 error = aa_remove_profiles(ns, label, data->data, size);588588- aa_put_loaddata(data);509509+ aa_put_profile_loaddata(data);589510 }590511 out:591512 end_current_label_crit_section(label);···654575 if (!rev)655576 return -ENOMEM;656577657657- rev->ns = aa_get_ns(inode->i_private);578578+ rev->ns = get_ns_common_ref(inode->i_private);658579 if (!rev->ns)659580 rev->ns = aa_get_current_ns();660581 file->private_data = rev;···11401061static int seq_profile_open(struct inode *inode, struct file *file,11411062 int (*show)(struct seq_file *, void *))11421063{11431143- struct aa_proxy *proxy = aa_get_proxy(inode->i_private);10641064+ struct aa_proxy *proxy = get_proxy_common_ref(inode->i_private);11441065 int error = single_open(file, show, proxy);1145106611461067 if (error) {···13321253static int seq_rawdata_open(struct inode *inode, struct file *file,13331254 int (*show)(struct seq_file *, void *))13341255{13351335- struct aa_loaddata *data = __aa_get_loaddata(inode->i_private);12561256+ struct aa_loaddata *data = get_loaddata_common_ref(inode->i_private);13361257 int error;1337125813381259 if (!data)13391339- /* lost race this ent is being reaped */13401260 return -ENOENT;1341126113421262 error = single_open(file, show, data);13431263 if (error) {13441264 AA_BUG(file->private_data &&13451265 ((struct seq_file *)file->private_data)->private);13461346- aa_put_loaddata(data);12661266+ aa_put_i_loaddata(data);13471267 }1348126813491269 return error;···13531275 struct seq_file *seq = (struct seq_file *) file->private_data;1354127613551277 if (seq)13561356- aa_put_loaddata(seq->private);12781278+ aa_put_i_loaddata(seq->private);1357127913581280 return single_release(inode, file);13591281}···14651387 if (!aa_current_policy_view_capable(NULL))14661388 return -EACCES;1467138914681468- loaddata = __aa_get_loaddata(inode->i_private);13901390+ loaddata = 
get_loaddata_common_ref(inode->i_private);14691391 if (!loaddata)14701470- /* lost race: this entry is being reaped */14711392 return -ENOENT;1472139314731394 private = rawdata_f_data_alloc(loaddata->size);···14911414 return error;1492141514931416fail_private_alloc:14941494- aa_put_loaddata(loaddata);14171417+ aa_put_i_loaddata(loaddata);14951418 return error;14961419}14971420···1508143115091432 for (i = 0; i < AAFS_LOADDATA_NDENTS; i++) {15101433 if (!IS_ERR_OR_NULL(rawdata->dents[i])) {15111511- /* no refcounts on i_private */15121434 aafs_remove(rawdata->dents[i]);15131435 rawdata->dents[i] = NULL;15141436 }···15501474 return PTR_ERR(dir);15511475 rawdata->dents[AAFS_LOADDATA_DIR] = dir;1552147615531553- dent = aafs_create_file("abi", S_IFREG | 0444, dir, rawdata,14771477+ dent = aafs_create_file("abi", S_IFREG | 0444, dir, &rawdata->count,15541478 &seq_rawdata_abi_fops);15551479 if (IS_ERR(dent))15561480 goto fail;15571481 rawdata->dents[AAFS_LOADDATA_ABI] = dent;1558148215591559- dent = aafs_create_file("revision", S_IFREG | 0444, dir, rawdata,15601560- &seq_rawdata_revision_fops);14831483+ dent = aafs_create_file("revision", S_IFREG | 0444, dir,14841484+ &rawdata->count,14851485+ &seq_rawdata_revision_fops);15611486 if (IS_ERR(dent))15621487 goto fail;15631488 rawdata->dents[AAFS_LOADDATA_REVISION] = dent;1564148915651490 if (aa_g_hash_policy) {15661491 dent = aafs_create_file("sha256", S_IFREG | 0444, dir,15671567- rawdata, &seq_rawdata_hash_fops);14921492+ &rawdata->count,14931493+ &seq_rawdata_hash_fops);15681494 if (IS_ERR(dent))15691495 goto fail;15701496 rawdata->dents[AAFS_LOADDATA_HASH] = dent;15711497 }1572149815731499 dent = aafs_create_file("compressed_size", S_IFREG | 0444, dir,15741574- rawdata,15001500+ &rawdata->count,15751501 &seq_rawdata_compressed_size_fops);15761502 if (IS_ERR(dent))15771503 goto fail;15781504 rawdata->dents[AAFS_LOADDATA_COMPRESSED_SIZE] = dent;1579150515801580- dent = aafs_create_file("raw_data", S_IFREG | 0444,15811581- 
dir, rawdata, &rawdata_fops);15061506+ dent = aafs_create_file("raw_data", S_IFREG | 0444, dir,15071507+ &rawdata->count, &rawdata_fops);15821508 if (IS_ERR(dent))15831509 goto fail;15841510 rawdata->dents[AAFS_LOADDATA_DATA] = dent;···1588151015891511 rawdata->ns = aa_get_ns(ns);15901512 list_add(&rawdata->list, &ns->rawdata_list);15911591- /* no refcount on inode rawdata */1592151315931514 return 0;1594151515951516fail:15961517 remove_rawdata_dents(rawdata);15971597-15981518 return PTR_ERR(dent);15991519}16001520#endif /* CONFIG_SECURITY_APPARMOR_EXPORT_BINARY */···16161540 __aafs_profile_rmdir(child);1617154116181542 for (i = AAFS_PROF_SIZEOF - 1; i >= 0; --i) {16191619- struct aa_proxy *proxy;16201543 if (!profile->dents[i])16211544 continue;1622154516231623- proxy = d_inode(profile->dents[i])->i_private;16241546 aafs_remove(profile->dents[i]);16251625- aa_put_proxy(proxy);16261547 profile->dents[i] = NULL;16271548 }16281549}···16531580 struct aa_profile *profile,16541581 const struct file_operations *fops)16551582{16561656- struct aa_proxy *proxy = aa_get_proxy(profile->label.proxy);16571657- struct dentry *dent;16581658-16591659- dent = aafs_create_file(name, S_IFREG | 0444, dir, proxy, fops);16601660- if (IS_ERR(dent))16611661- aa_put_proxy(proxy);16621662-16631663- return dent;15831583+ return aafs_create_file(name, S_IFREG | 0444, dir, &profile->label.proxy->count, fops);16641584}1665158516661586#ifdef CONFIG_SECURITY_APPARMOR_EXPORT_BINARY···17031637 struct delayed_call *done,17041638 const char *name)17051639{17061706- struct aa_proxy *proxy = inode->i_private;16401640+ struct aa_common_ref *ref = inode->i_private;16411641+ struct aa_proxy *proxy = container_of(ref, struct aa_proxy, count);17071642 struct aa_label *label;17081643 struct aa_profile *profile;17091644 char *target;···18461779 if (profile->rawdata) {18471780 if (aa_g_hash_policy) {18481781 dent = aafs_create("raw_sha256", S_IFLNK | 0444, dir,18491849- profile->label.proxy, NULL, 
NULL,18501850- &rawdata_link_sha256_iops);17821782+ &profile->label.proxy->count, NULL,17831783+ NULL, &rawdata_link_sha256_iops);18511784 if (IS_ERR(dent))18521785 goto fail;18531853- aa_get_proxy(profile->label.proxy);18541786 profile->dents[AAFS_PROF_RAW_HASH] = dent;18551787 }18561788 dent = aafs_create("raw_abi", S_IFLNK | 0444, dir,18571857- profile->label.proxy, NULL, NULL,17891789+ &profile->label.proxy->count, NULL, NULL,18581790 &rawdata_link_abi_iops);18591791 if (IS_ERR(dent))18601792 goto fail;18611861- aa_get_proxy(profile->label.proxy);18621793 profile->dents[AAFS_PROF_RAW_ABI] = dent;1863179418641795 dent = aafs_create("raw_data", S_IFLNK | 0444, dir,18651865- profile->label.proxy, NULL, NULL,17961796+ &profile->label.proxy->count, NULL, NULL,18661797 &rawdata_link_data_iops);18671798 if (IS_ERR(dent))18681799 goto fail;18691869- aa_get_proxy(profile->label.proxy);18701800 profile->dents[AAFS_PROF_RAW_DATA] = dent;18711801 }18721802#endif /*CONFIG_SECURITY_APPARMOR_EXPORT_BINARY */···18941830 int error;1895183118961832 label = begin_current_label_crit_section();18971897- error = aa_may_manage_policy(current_cred(), label, NULL,18331833+ error = aa_may_manage_policy(current_cred(), label, NULL, NULL,18981834 AA_MAY_LOAD_POLICY);18991835 end_current_label_crit_section(label);19001836 if (error)19011837 return ERR_PTR(error);1902183819031903- parent = aa_get_ns(dir->i_private);18391839+ parent = get_ns_common_ref(dir->i_private);19041840 AA_BUG(d_inode(ns_subns_dir(parent)) != dir);1905184119061842 /* we have to unlock and then relock to get locking order right···19441880 int error;1945188119461882 label = begin_current_label_crit_section();19471947- error = aa_may_manage_policy(current_cred(), label, NULL,18831883+ error = aa_may_manage_policy(current_cred(), label, NULL, NULL,19481884 AA_MAY_LOAD_POLICY);19491885 end_current_label_crit_section(label);19501886 if (error)19511887 return error;1952188819531953- parent = 
aa_get_ns(dir->i_private);18891889+ parent = get_ns_common_ref(dir->i_private);19541890 /* rmdir calls the generic securityfs functions to remove files19551891 * from the apparmor dir. It is up to the apparmor ns locking19561892 * to avoid races.···2020195620211957 __aa_fs_list_remove_rawdata(ns);2022195820232023- if (ns_subns_dir(ns)) {20242024- sub = d_inode(ns_subns_dir(ns))->i_private;20252025- aa_put_ns(sub);20262026- }20272027- if (ns_subload(ns)) {20282028- sub = d_inode(ns_subload(ns))->i_private;20292029- aa_put_ns(sub);20302030- }20312031- if (ns_subreplace(ns)) {20322032- sub = d_inode(ns_subreplace(ns))->i_private;20332033- aa_put_ns(sub);20342034- }20352035- if (ns_subremove(ns)) {20362036- sub = d_inode(ns_subremove(ns))->i_private;20372037- aa_put_ns(sub);20382038- }20392039- if (ns_subrevision(ns)) {20402040- sub = d_inode(ns_subrevision(ns))->i_private;20412041- aa_put_ns(sub);20422042- }20432043-20441959 for (i = AAFS_NS_SIZEOF - 1; i >= 0; --i) {20451960 aafs_remove(ns->dents[i]);20461961 ns->dents[i] = NULL;···20442001 return PTR_ERR(dent);20452002 ns_subdata_dir(ns) = dent;2046200320472047- dent = aafs_create_file("revision", 0444, dir, ns,20042004+ dent = aafs_create_file("revision", 0444, dir,20052005+ &ns->unconfined->label.count,20482006 &aa_fs_ns_revision_fops);20492007 if (IS_ERR(dent))20502008 return PTR_ERR(dent);20512051- aa_get_ns(ns);20522009 ns_subrevision(ns) = dent;2053201020542054- dent = aafs_create_file(".load", 0640, dir, ns,20552055- &aa_fs_profile_load);20112011+ dent = aafs_create_file(".load", 0640, dir,20122012+ &ns->unconfined->label.count,20132013+ &aa_fs_profile_load);20562014 if (IS_ERR(dent))20572015 return PTR_ERR(dent);20582058- aa_get_ns(ns);20592016 ns_subload(ns) = dent;2060201720612061- dent = aafs_create_file(".replace", 0640, dir, ns,20622062- &aa_fs_profile_replace);20182018+ dent = aafs_create_file(".replace", 0640, dir,20192019+ &ns->unconfined->label.count,20202020+ &aa_fs_profile_replace);20632021 if 
(IS_ERR(dent))20642022 return PTR_ERR(dent);20652065- aa_get_ns(ns);20662023 ns_subreplace(ns) = dent;2067202420682068- dent = aafs_create_file(".remove", 0640, dir, ns,20692069- &aa_fs_profile_remove);20252025+ dent = aafs_create_file(".remove", 0640, dir,20262026+ &ns->unconfined->label.count,20272027+ &aa_fs_profile_remove);20702028 if (IS_ERR(dent))20712029 return PTR_ERR(dent);20722072- aa_get_ns(ns);20732030 ns_subremove(ns) = dent;2074203120752032 /* use create_dentry so we can supply private data */20762076- dent = aafs_create("namespaces", S_IFDIR | 0755, dir, ns, NULL, NULL,20772077- &ns_dir_inode_operations);20332033+ dent = aafs_create("namespaces", S_IFDIR | 0755, dir,20342034+ &ns->unconfined->label.count,20352035+ NULL, NULL, &ns_dir_inode_operations);20782036 if (IS_ERR(dent))20792037 return PTR_ERR(dent);20802080- aa_get_ns(ns);20812038 ns_subns_dir(ns) = dent;2082203920832040 return 0;
+8-8
security/apparmor/include/label.h
···102102103103struct aa_label;104104struct aa_proxy {105105- struct kref count;105105+ struct aa_common_ref count;106106 struct aa_label __rcu *label;107107};108108···125125 * vec: vector of profiles comprising the compound label126126 */127127struct aa_label {128128- struct kref count;128128+ struct aa_common_ref count;129129 struct rb_node node;130130 struct rcu_head rcu;131131 struct aa_proxy *proxy;···357357 */358358static inline struct aa_label *__aa_get_label(struct aa_label *l)359359{360360- if (l && kref_get_unless_zero(&l->count))360360+ if (l && kref_get_unless_zero(&l->count.count))361361 return l;362362363363 return NULL;···366366static inline struct aa_label *aa_get_label(struct aa_label *l)367367{368368 if (l)369369- kref_get(&(l->count));369369+ kref_get(&(l->count.count));370370371371 return l;372372}···386386 rcu_read_lock();387387 do {388388 c = rcu_dereference(*l);389389- } while (c && !kref_get_unless_zero(&c->count));389389+ } while (c && !kref_get_unless_zero(&c->count.count));390390 rcu_read_unlock();391391392392 return c;···426426static inline void aa_put_label(struct aa_label *l)427427{428428 if (l)429429- kref_put(&l->count, aa_label_kref);429429+ kref_put(&l->count.count, aa_label_kref);430430}431431432432/* wrapper fn to indicate semantics of the check */···443443static inline struct aa_proxy *aa_get_proxy(struct aa_proxy *proxy)444444{445445 if (proxy)446446- kref_get(&(proxy->count));446446+ kref_get(&(proxy->count.count));447447448448 return proxy;449449}···451451static inline void aa_put_proxy(struct aa_proxy *proxy)452452{453453 if (proxy)454454- kref_put(&proxy->count, aa_proxy_kref);454454+ kref_put(&proxy->count.count, aa_proxy_kref);455455}456456457457void __aa_proxy_redirect(struct aa_label *orig, struct aa_label *new);
+12
security/apparmor/include/lib.h
···102102/* Security blob offsets */103103extern struct lsm_blob_sizes apparmor_blob_sizes;104104105105+enum reftype {106106+	REF_NS,107107+	REF_PROXY,108108+	REF_RAWDATA,109109+};110110+111111+/* common reference count used by data that shows up in aafs */112112+struct aa_common_ref {113113+	struct kref count;114114+	enum reftype reftype;115115+};116116+105117/**106118 * aa_strneq - compare null terminated @str to a non null terminated substring107119 * @str: a null terminated string
···1818#include "label.h"1919#include "policy.h"20202121+/* Match max depth of user namespaces */2222+#define MAX_NS_DEPTH 3221232224/* struct aa_ns_acct - accounting of profiles in namespace2325 * @max_size: maximum space allowed for all profiles in namespace
+49-34
security/apparmor/include/policy_unpack.h
···8787 u32 version;8888};89899090-/*9191- * struct aa_loaddata - buffer of policy raw_data set9090+/* struct aa_loaddata - buffer of policy raw_data set9191+ * @count: inode/filesystem refcount - use aa_get_i_loaddata()9292+ * @pcount: profile refcount - use aa_get_profile_loaddata()9393+ * @list: list the loaddata is on9494+ * @work: used to do a delayed cleanup9595+ * @dents: refs to dents created in aafs9696+ * @ns: the namespace this loaddata was loaded into9797+ * @name:9898+ * @size: the size of the data that was loaded9999+ * @compressed_size: the size of the data when it is compressed100100+ * @revision: unique revision count that this data was loaded as101101+ * @abi: the abi number the loaddata uses102102+ * @hash: a hash of the loaddata, used to help dedup data92103 *9393- * there is no loaddata ref for being on ns list, nor a ref from9494- * d_inode(@dentry) when grab a ref from these, @ns->lock must be held9595- * && __aa_get_loaddata() needs to be used, and the return value9696- * checked, if NULL the loaddata is already being reaped and should be9797- * considered dead.104104+ * There is no loaddata ref for being on ns->rawdata_list, so105105+ * @ns->lock must be held when walking the list. 
Dentries and106106+ * inode opens hold refs on @count; profiles hold refs on @pcount.107107+ * When the last @pcount drops, do_ploaddata_rmfs() removes the108108+ * fs entries and drops the associated @count ref.98109 */99110struct aa_loaddata {100100- struct kref count;111111+ struct aa_common_ref count;112112+ struct kref pcount;101113 struct list_head list;102114 struct work_struct work;103115 struct dentry *dents[AAFS_LOADDATA_NDENTS];···131119int aa_unpack(struct aa_loaddata *udata, struct list_head *lh, const char **ns);132120133121/**134134- * __aa_get_loaddata - get a reference count to uncounted data reference135135- * @data: reference to get a count on136136- *137137- * Returns: pointer to reference OR NULL if race is lost and reference is138138- * being repeated.139139- * Requires: @data->ns->lock held, and the return code MUST be checked140140- *141141- * Use only from inode->i_private and @data->list found references142142- */143143-static inline struct aa_loaddata *144144-__aa_get_loaddata(struct aa_loaddata *data)145145-{146146- if (data && kref_get_unless_zero(&(data->count)))147147- return data;148148-149149- return NULL;150150-}151151-152152-/**153122 * aa_get_loaddata - get a reference count from a counted data reference154123 * @data: reference to get a count on155124 *156156- * Returns: point to reference125125+ * Returns: pointer to reference157126 * Requires: @data to have a valid reference count on it. 
It is a bug158127 * if the race to reap can be encountered when it is used.159128 */160129static inline struct aa_loaddata *161161-aa_get_loaddata(struct aa_loaddata *data)130130+aa_get_i_loaddata(struct aa_loaddata *data)162131{163163- struct aa_loaddata *tmp = __aa_get_loaddata(data);164132165165- AA_BUG(data && !tmp);133133+ if (data)134134+ kref_get(&(data->count.count));135135+ return data;136136+}166137167167- return tmp;138138+139139+/**140140+ * aa_get_profile_loaddata - get a profile reference count on loaddata141141+ * @data: reference to get a count on142142+ *143143+ * Returns: pointer to reference144144+ * Requires: @data to have a valid reference count on it.145145+ */146146+static inline struct aa_loaddata *147147+aa_get_profile_loaddata(struct aa_loaddata *data)148148+{149149+ if (data)150150+ kref_get(&(data->pcount));151151+ return data;168152}169153170154void __aa_loaddata_update(struct aa_loaddata *data, long revision);171155bool aa_rawdata_eq(struct aa_loaddata *l, struct aa_loaddata *r);172156void aa_loaddata_kref(struct kref *kref);157157+void aa_ploaddata_kref(struct kref *kref);173158struct aa_loaddata *aa_loaddata_alloc(size_t size);174174-static inline void aa_put_loaddata(struct aa_loaddata *data)159159+static inline void aa_put_i_loaddata(struct aa_loaddata *data)175160{176161 if (data)177177- kref_put(&data->count, aa_loaddata_kref);162162+ kref_put(&data->count.count, aa_loaddata_kref);163163+}164164+165165+static inline void aa_put_profile_loaddata(struct aa_loaddata *data)166166+{167167+ if (data)168168+ kref_put(&data->pcount, aa_ploaddata_kref);178169}179170180171#if IS_ENABLED(CONFIG_KUNIT)
···160160 if (state_count == 0)161161 goto out;162162 for (i = 0; i < state_count; i++) {163163- if (!(BASE_TABLE(dfa)[i] & MATCH_FLAG_DIFF_ENCODE) &&164164- (DEFAULT_TABLE(dfa)[i] >= state_count))163163+ if (DEFAULT_TABLE(dfa)[i] >= state_count) {164164+ pr_err("AppArmor DFA default state out of bounds");165165 goto out;166166+ }166167 if (BASE_TABLE(dfa)[i] & MATCH_FLAGS_INVALID) {167168 pr_err("AppArmor DFA state with invalid match flags");168169 goto out;···202201 size_t j, k;203202204203 for (j = i;205205- (BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE) &&206206- !(BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE);204204+ ((BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE) &&205205+ !(BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE_VERIFIED));207206 j = k) {207207+ if (BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE)208208+ /* loop in current chain */209209+ goto out;208210 k = DEFAULT_TABLE(dfa)[j];209211 if (j == k)212212+ /* self loop */210213 goto out;211211- if (k < j)212212- break; /* already verified */213214 BASE_TABLE(dfa)[j] |= MARK_DIFF_ENCODE;215215+ }216216+ /* move mark to verified */217217+ for (j = i;218218+ (BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE);219219+ j = k) {220220+ k = DEFAULT_TABLE(dfa)[j];221221+ if (j < i)222222+ /* jumps to state/chain that has been223223+ * verified224224+ */225225+ break;226226+ BASE_TABLE(dfa)[j] &= ~MARK_DIFF_ENCODE;227227+ BASE_TABLE(dfa)[j] |= MARK_DIFF_ENCODE_VERIFIED;214228 }215229 }216230 error = 0;···479463 if (dfa->tables[YYTD_ID_EC]) {480464 /* Equivalence class table defined */481465 u8 *equiv = EQUIV_TABLE(dfa);482482- for (; len; len--)483483- match_char(state, def, base, next, check,484484- equiv[(u8) *str++]);466466+ for (; len; len--) {467467+ u8 c = equiv[(u8) *str];468468+469469+ match_char(state, def, base, next, check, c);470470+ str++;471471+ }485472 } else {486473 /* default is direct to next state */487487- for (; len; len--)488488- match_char(state, def, base, next, check, (u8) *str++);474474+ for (; len; len--) {475475+ 
match_char(state, def, base, next, check, (u8) *str);476476+ str++;477477+ }489478 }490479491480 return state;···524503 /* Equivalence class table defined */525504 u8 *equiv = EQUIV_TABLE(dfa);526505 /* default is direct to next state */527527- while (*str)528528- match_char(state, def, base, next, check,529529- equiv[(u8) *str++]);506506+ while (*str) {507507+ u8 c = equiv[(u8) *str];508508+509509+ match_char(state, def, base, next, check, c);510510+ str++;511511+ }530512 } else {531513 /* default is direct to next state */532532- while (*str)533533- match_char(state, def, base, next, check, (u8) *str++);514514+ while (*str) {515515+ match_char(state, def, base, next, check, (u8) *str);516516+ str++;517517+ }534518 }535519536520 return state;
+67-10
security/apparmor/policy.c
···191191}192192193193/**194194- * __remove_profile - remove old profile, and children195195- * @profile: profile to be replaced (NOT NULL)194194+ * __remove_profile - remove profile, and children195195+ * @profile: profile to be removed (NOT NULL)196196 *197197 * Requires: namespace list lock be held, or list not be shared198198 */199199static void __remove_profile(struct aa_profile *profile)200200{201201+ struct aa_profile *curr, *to_remove;202202+201203 AA_BUG(!profile);202204 AA_BUG(!profile->ns);203205 AA_BUG(!mutex_is_locked(&profile->ns->lock));204206205207 /* release any children lists first */206206- __aa_profile_list_release(&profile->base.profiles);208208+ if (!list_empty(&profile->base.profiles)) {209209+ curr = list_first_entry(&profile->base.profiles, struct aa_profile, base.list);210210+211211+ while (curr != profile) {212212+213213+ while (!list_empty(&curr->base.profiles))214214+ curr = list_first_entry(&curr->base.profiles,215215+ struct aa_profile, base.list);216216+217217+ to_remove = curr;218218+ if (!list_is_last(&to_remove->base.list,219219+ &aa_deref_parent(curr)->base.profiles))220220+ curr = list_next_entry(to_remove, base.list);221221+ else222222+ curr = aa_deref_parent(curr);223223+224224+ /* released by free_profile */225225+ aa_label_remove(&to_remove->label);226226+ __aafs_profile_rmdir(to_remove);227227+ __list_remove_profile(to_remove);228228+ }229229+ }230230+207231 /* released by free_profile */208232 aa_label_remove(&profile->label);209233 __aafs_profile_rmdir(profile);···350326 }351327352328 kfree_sensitive(profile->hash);353353- aa_put_loaddata(profile->rawdata);329329+ aa_put_profile_loaddata(profile->rawdata);354330 aa_label_destroy(&profile->label);355331356332 kfree_sensitive(profile);···942918 return res;943919}944920921921+static bool is_subset_of_obj_privilege(const struct cred *cred,922922+ struct aa_label *label,923923+ const struct cred *ocred)924924+{925925+ if (cred == ocred)926926+ return true;927927+928928+ if 
(!aa_label_is_subset(label, cred_label(ocred)))929929+ return false;930930+ /* don't allow crossing userns for now */931931+ if (cred->user_ns != ocred->user_ns)932932+ return false;933933+ if (!cap_issubset(cred->cap_inheritable, ocred->cap_inheritable))934934+ return false;935935+ if (!cap_issubset(cred->cap_permitted, ocred->cap_permitted))936936+ return false;937937+ if (!cap_issubset(cred->cap_effective, ocred->cap_effective))938938+ return false;939939+ if (!cap_issubset(cred->cap_bset, ocred->cap_bset))940940+ return false;941941+ if (!cap_issubset(cred->cap_ambient, ocred->cap_ambient))942942+ return false;943943+ return true;944944+}945945+946946+945947/**946948 * aa_may_manage_policy - can the current task manage policy947949 * @subj_cred: subjects cred948950 * @label: label to check if it can manage policy949951 * @ns: namespace being managed by @label (may be NULL if @label's ns)952952+ * @ocred: object cred if request is coming from an open object950953 * @mask: contains the policy manipulation operation being done951954 *952955 * Returns: 0 if the task is allowed to manipulate policy else error953956 */954957int aa_may_manage_policy(const struct cred *subj_cred, struct aa_label *label,955955- struct aa_ns *ns, u32 mask)958958+ struct aa_ns *ns, const struct cred *ocred, u32 mask)956959{957960 const char *op;958961···993942 /* check if loading policy is locked out */994943 if (aa_g_lock_policy)995944 return audit_policy(label, op, NULL, NULL, "policy_locked",945945+ -EACCES);946946+947947+ if (ocred && !is_subset_of_obj_privilege(subj_cred, label, ocred))948948+ return audit_policy(label, op, NULL, NULL,949949+ "not privileged for target profile",996950 -EACCES);997951998952 if (!aa_policy_admin_capable(subj_cred, label, ns))···11711115 LIST_HEAD(lh);1172111611731117 op = mask & AA_MAY_REPLACE_POLICY ? 
OP_PROF_REPL : OP_PROF_LOAD;11741174- aa_get_loaddata(udata);11181118+ aa_get_profile_loaddata(udata);11751119 /* released below */11761120 error = aa_unpack(udata, &lh, &ns_name);11771121 if (error)···11981142 goto fail;11991143 }12001144 ns_name = ent->ns_name;11451145+ ent->ns_name = NULL;12011146 } else12021147 count++;12031148 }···12231166 if (aa_rawdata_eq(rawdata_ent, udata)) {12241167 struct aa_loaddata *tmp;1225116812261226- tmp = __aa_get_loaddata(rawdata_ent);11691169+ tmp = aa_get_profile_loaddata(rawdata_ent);12271170 /* check we didn't fail the race */12281171 if (tmp) {12291229- aa_put_loaddata(udata);11721172+ aa_put_profile_loaddata(udata);12301173 udata = tmp;12311174 break;12321175 }···12391182 struct aa_profile *p;1240118312411184 if (aa_g_export_binary)12421242- ent->new->rawdata = aa_get_loaddata(udata);11851185+ ent->new->rawdata = aa_get_profile_loaddata(udata);12431186 error = __lookup_replace(ns, ent->new->base.hname,12441187 !(mask & AA_MAY_REPLACE_POLICY),12451188 &ent->old, &info);···1372131513731316out:13741317 aa_put_ns(ns);13751375- aa_put_loaddata(udata);13181318+ aa_put_profile_loaddata(udata);13761319 kfree(ns_name);1377132013781321 if (error)
+2
security/apparmor/policy_ns.c
···223223 AA_BUG(!name);224224 AA_BUG(!mutex_is_locked(&parent->lock));225225226226+ if (parent->level > MAX_NS_DEPTH)227227+ return ERR_PTR(-ENOSPC);226228 ns = alloc_ns(parent->base.hname, name);227229 if (!ns)228230 return ERR_PTR(-ENOMEM);
+45-20
security/apparmor/policy_unpack.c
···109109 return memcmp(l->data, r->data, r->compressed_size ?: r->size) == 0;110110}111111112112-/*113113- * need to take the ns mutex lock which is NOT safe most places that114114- * put_loaddata is called, so we have to delay freeing it115115- */116116-static void do_loaddata_free(struct work_struct *work)112112+static void do_loaddata_free(struct aa_loaddata *d)117113{118118- struct aa_loaddata *d = container_of(work, struct aa_loaddata, work);119119- struct aa_ns *ns = aa_get_ns(d->ns);120120-121121- if (ns) {122122- mutex_lock_nested(&ns->lock, ns->level);123123- __aa_fs_remove_rawdata(d);124124- mutex_unlock(&ns->lock);125125- aa_put_ns(ns);126126- }127127-128114 kfree_sensitive(d->hash);129115 kfree_sensitive(d->name);130116 kvfree(d->data);···119133120134void aa_loaddata_kref(struct kref *kref)121135{122122- struct aa_loaddata *d = container_of(kref, struct aa_loaddata, count);136136+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata,137137+ count.count);138138+139139+ do_loaddata_free(d);140140+}141141+142142+/*143143+ * need to take the ns mutex lock which is NOT safe most places that144144+ * put_loaddata is called, so we have to delay freeing it145145+ */146146+static void do_ploaddata_rmfs(struct work_struct *work)147147+{148148+ struct aa_loaddata *d = container_of(work, struct aa_loaddata, work);149149+ struct aa_ns *ns = aa_get_ns(d->ns);150150+151151+ if (ns) {152152+ mutex_lock_nested(&ns->lock, ns->level);153153+ /* remove fs ref to loaddata */154154+ __aa_fs_remove_rawdata(d);155155+ mutex_unlock(&ns->lock);156156+ aa_put_ns(ns);157157+ }158158+ /* called by dropping last pcount, so drop its associated icount */159159+ aa_put_i_loaddata(d);160160+}161161+162162+void aa_ploaddata_kref(struct kref *kref)163163+{164164+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata, pcount);123165124166 if (d) {125125- INIT_WORK(&d->work, do_loaddata_free);167167+ INIT_WORK(&d->work, do_ploaddata_rmfs);126168 
schedule_work(&d->work);127169 }128170}···167153 kfree(d);168154 return ERR_PTR(-ENOMEM);169155 }170170- kref_init(&d->count);156156+ kref_init(&d->count.count);157157+ d->count.reftype = REF_RAWDATA;158158+ kref_init(&d->pcount);171159 INIT_LIST_HEAD(&d->list);172160173161 return d;···10261010 if (!aa_unpack_u32(e, &pdb->start[AA_CLASS_FILE], "dfa_start")) {10271011 /* default start state for xmatch and file dfa */10281012 pdb->start[AA_CLASS_FILE] = DFA_START;10291029- } /* setup class index */10131013+ }10141014+10151015+ size_t state_count = pdb->dfa->tables[YYTD_ID_BASE]->td_lolen;10161016+10171017+ if (pdb->start[0] >= state_count ||10181018+ pdb->start[AA_CLASS_FILE] >= state_count) {10191019+ *info = "invalid dfa start state";10201020+ goto fail;10211021+ }10221022+10231023+ /* setup class index */10301024 for (i = AA_CLASS_FILE + 1; i <= AA_CLASS_LAST; i++) {10311025 pdb->start[i] = aa_dfa_next(pdb->dfa, pdb->start[0],10321026 i);···14351409{14361410 int error = -EPROTONOSUPPORT;14371411 const char *name = NULL;14381438- *ns = NULL;1439141214401413 /* get the interface version */14411414 if (!aa_unpack_u32(e, &e->version, "version")) {
+16-3
sound/core/pcm_native.c
···21442144 for (;;) {21452145 long tout;21462146 struct snd_pcm_runtime *to_check;21472147+ unsigned int drain_rate;21482148+ snd_pcm_uframes_t drain_bufsz;21492149+ bool drain_no_period_wakeup;21502150+21472151 if (signal_pending(current)) {21482152 result = -ERESTARTSYS;21492153 break;···21672163 snd_pcm_group_unref(group, substream);21682164 if (!to_check)21692165 break; /* all drained */21662166+ /*21672167+ * Cache the runtime fields needed after unlock.21682168+ * A concurrent close() on the linked stream may free21692169+ * its runtime via snd_pcm_detach_substream() once we21702170+ * release the stream lock below.21712171+ */21722172+ drain_no_period_wakeup = to_check->no_period_wakeup;21732173+ drain_rate = to_check->rate;21742174+ drain_bufsz = to_check->buffer_size;21702175 init_waitqueue_entry(&wait, current);21712176 set_current_state(TASK_INTERRUPTIBLE);21722177 add_wait_queue(&to_check->sleep, &wait);21732178 snd_pcm_stream_unlock_irq(substream);21742174- if (runtime->no_period_wakeup)21792179+ if (drain_no_period_wakeup)21752180 tout = MAX_SCHEDULE_TIMEOUT;21762181 else {21772182 tout = 100;21782178- if (runtime->rate) {21792179- long t = runtime->buffer_size * 1100 / runtime->rate;21832183+ if (drain_rate) {21842184+ long t = drain_bufsz * 1100 / drain_rate;21802185 tout = max(t, tout);21812186 }21822187 tout = msecs_to_jiffies(tout);
···11561156 if (!terminal->is_dataport) {11571157 const char *type_name = sdca_find_terminal_name(terminal->type);1158115811591159- if (type_name)11591159+ if (type_name) {11601160 entity->label = devm_kasprintf(dev, GFP_KERNEL, "%s %s",11611161 entity->label, type_name);11621162+ if (!entity->label)11631163+ return -ENOMEM;11641164+ }11621165 }1163116611641167 ret = fwnode_property_read_u32(entity_node,
+8-3
sound/soc/soc-core.c
···462462463463 list_del(&rtd->list);464464465465- if (delayed_work_pending(&rtd->delayed_work))466466- flush_delayed_work(&rtd->delayed_work);465465+ flush_delayed_work(&rtd->delayed_work);467466 snd_soc_pcm_component_free(rtd);468467469468 /*···1863186418641865/*18651866 * Check if a DMI field is valid, i.e. not containing any string18661866- * in the black list.18671867+ * in the black list and not the empty string.18671868 */18681869static int is_dmi_valid(const char *field)18691870{18701871 int i = 0;18721872+18731873+ if (!field[0])18741874+ return 0;1871187518721876 while (dmi_blacklist[i]) {18731877 if (strstr(field, dmi_blacklist[i]))···21242122 for_each_card_rtds(card, rtd)21252123 if (rtd->initialized)21262124 snd_soc_link_exit(rtd);21252125+ /* flush delayed work before removing DAIs and DAPM widgets */21262126+ snd_soc_flush_all_delayed_work(card);21272127+21272128 /* remove and free each DAI */21282129 soc_remove_link_dais(card);21292130 soc_remove_link_components(card);
···110110 __u64 ld_op:1, /* 0: load op */111111 st_op:1, /* 1: store op */112112 dc_l1tlb_miss:1, /* 2: data cache L1TLB miss */113113- dc_l2tlb_miss:1, /* 3: data cache L2TLB hit in 2M page */113113+ dc_l2tlb_miss:1, /* 3: data cache L2TLB miss in 2M page */114114 dc_l1tlb_hit_2m:1, /* 4: data cache L1TLB hit in 2M page */115115 dc_l1tlb_hit_1g:1, /* 5: data cache L1TLB hit in 1G page */116116 dc_l2tlb_hit_2m:1, /* 6: data cache L2TLB hit in 2M page */
+3-1
tools/arch/x86/include/asm/cpufeatures.h
···8484#define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */8585#define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */8686#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */8787-#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */8787+#define X86_FEATURE_SYSFAST32 ( 3*32+15) /* sysenter/syscall in IA32 userspace */8888#define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */8989#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */9090#define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear CPU buffers using VERW */···326326#define X86_FEATURE_AMX_FP16 (12*32+21) /* AMX fp16 Support */327327#define X86_FEATURE_AVX_IFMA (12*32+23) /* Support for VPMADD52[H,L]UQ */328328#define X86_FEATURE_LAM (12*32+26) /* "lam" Linear Address Masking */329329+#define X86_FEATURE_MOVRS (12*32+31) /* MOVRS instructions */329330330331/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */331332#define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */···473472#define X86_FEATURE_GP_ON_USER_CPUID (20*32+17) /* User CPUID faulting */474473475474#define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */475475+#define X86_FEATURE_ERAPS (20*32+24) /* Enhanced Return Address Predictor Security */476476#define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */477477#define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */478478#define X86_FEATURE_SRSO_NO (20*32+29) /* CPU is not affected by SRSO */
···2222#define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2))23232424/*2525- * Below are the definition of bit offsets for perf option, and works as2626- * arbitrary values for all ETM versions.2727- *2828- * Most of them are orignally from ETMv3.5/PTM's ETMCR config, therefore,2929- * ETMv3.5/PTM doesn't define ETMCR config bits with prefix "ETM3_" and3030- * directly use below macros as config bits.3131- */3232-#define ETM_OPT_BRANCH_BROADCAST 83333-#define ETM_OPT_CYCACC 123434-#define ETM_OPT_CTXTID 143535-#define ETM_OPT_CTXTID2 153636-#define ETM_OPT_TS 283737-#define ETM_OPT_RETSTK 293838-3939-/* ETMv4 CONFIGR programming bits for the ETM OPTs */4040-#define ETM4_CFG_BIT_BB 34141-#define ETM4_CFG_BIT_CYCACC 44242-#define ETM4_CFG_BIT_CTXTID 64343-#define ETM4_CFG_BIT_VMID 74444-#define ETM4_CFG_BIT_TS 114545-#define ETM4_CFG_BIT_RETSTK 124646-#define ETM4_CFG_BIT_VMID_OPT 154747-4848-/*4925 * Interpretation of the PERF_RECORD_AUX_OUTPUT_HW_ID payload.5026 * Used to associate a CPU with the CoreSight Trace ID.5127 * [07:00] - Trace ID - uses 8 bits to make value easy to read in file.
+4
tools/include/linux/gfp.h
···55#include <linux/types.h>66#include <linux/gfp_types.h>7788+/* Helper macro that defaults the gfp flags to GFP_KERNEL when none are passed */99+#define __default_gfp(a,...) a1010+#define default_gfp(...) __default_gfp(__VA_ARGS__ __VA_OPT__(,) GFP_KERNEL)1111+812static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)913{1014 return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
+7-2
tools/include/linux/gfp_types.h
···139139 * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.140140 *141141 * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.142142+ * mark_obj_codetag_empty() should be called upon freeing for objects allocated143143+ * with this flag to indicate that their NULL tags are expected and normal.142144 */143145#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)144146#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE)···311309 *312310 * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower313311 * watermark is applied to allow access to "atomic reserves".314314- * The current implementation doesn't support NMI and few other strict315315- * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT.312312+ * The current implementation doesn't support NMI, nor contexts that disable313313+ * preemption under PREEMPT_RT. This includes raw_spin_lock() and plain314314+ * preempt_disable() - see "Memory allocation" in315315+ * Documentation/core-api/real-time/differences.rst for more info.316316 *317317 * %GFP_KERNEL is typical for kernel-internal allocations. The caller requires318318 * %ZONE_NORMAL or a lower zone for direct access but can direct reclaim.···325321 * %GFP_NOWAIT is for kernel allocations that should not stall for direct326322 * reclaim, start physical IO or use any filesystem callback. It is very327323 * likely to fail to allocate memory, even for very small allocations.324324+ * The same restrictions on calling contexts apply as for %GFP_ATOMIC.328325 *329326 * %GFP_NOIO will use direct reclaim to discard clean pages or slab pages330327 * that do not require the starting of any physical IO.
+19
tools/include/linux/overflow.h
···6969})70707171/**7272+ * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX7373+ * @factor1: first factor7474+ * @factor2: second factor7575+ *7676+ * Returns: calculate @factor1 * @factor2, both promoted to size_t,7777+ * with any overflow causing the return value to be SIZE_MAX. The7878+ * lvalue must be size_t to avoid implicit type conversion.7979+ */8080+static inline size_t __must_check size_mul(size_t factor1, size_t factor2)8181+{8282+ size_t bytes;8383+8484+ if (check_mul_overflow(factor1, factor2, &bytes))8585+ return SIZE_MAX;8686+8787+ return bytes;8888+}8989+9090+/**7291 * array_size() - Calculate size of 2-dimensional array.7392 *7493 * @a: dimension one
···860860#define __NR_listns 470861861__SYSCALL(__NR_listns, sys_listns)862862863863+#define __NR_rseq_slice_yield 471864864+__SYSCALL(__NR_rseq_slice_yield, sys_rseq_slice_yield)865865+863866#undef __NR_syscalls864864-#define __NR_syscalls 471867867+#define __NR_syscalls 472865868866869/*867870 * 32 bit systems traditionally used different
···107107#define ERROR_ELF(format, ...) __WARN_ELF(ERROR_STR, format, ##__VA_ARGS__)108108#define ERROR_GLIBC(format, ...) __WARN_GLIBC(ERROR_STR, format, ##__VA_ARGS__)109109#define ERROR_FUNC(sec, offset, format, ...) __WARN_FUNC(ERROR_STR, sec, offset, format, ##__VA_ARGS__)110110-#define ERROR_INSN(insn, format, ...) WARN_FUNC(insn->sec, insn->offset, format, ##__VA_ARGS__)110110+#define ERROR_INSN(insn, format, ...) ERROR_FUNC(insn->sec, insn->offset, format, ##__VA_ARGS__)111111112112extern bool debug;113113extern int indent;
+26-13
tools/objtool/klp-diff.c
···13341334 * be applied after static branch/call init, resulting in code corruption.13351335 *13361336 * Validate a special section entry to avoid that. Note that an inert13371337- * tracepoint is harmless enough, in that case just skip the entry and print a13381338- * warning. Otherwise, return an error.13371337+ * tracepoint or pr_debug() is harmless enough, in that case just skip the13381338+ * entry and print a warning. Otherwise, return an error.13391339 *13401340- * This is only a temporary limitation which will be fixed when livepatch adds13411341- * support for submodules: fully self-contained modules which are embedded in13421342- * the top-level livepatch module's data and which can be loaded on demand when13431343- * their corresponding to-be-patched module gets loaded. Then klp relocs can13441344- * be retired.13401340+ * TODO: This is only a temporary limitation which will be fixed when livepatch13411341+ * adds support for submodules: fully self-contained modules which are embedded13421342+ * in the top-level livepatch module's data and which can be loaded on demand13431343+ * when their corresponding to-be-patched module gets loaded. 
Then klp relocs13441344+ * can be retired.13451345 *13461346 * Return:13471347 * -1: error: validation failed13481348- * 1: warning: tracepoint skipped13481348+ * 1: warning: disabled tracepoint or pr_debug()13491349 * 0: success13501350 */13511351static int validate_special_section_klp_reloc(struct elfs *e, struct symbol *sym)13521352{13531353 bool static_branch = !strcmp(sym->sec->name, "__jump_table");13541354 bool static_call = !strcmp(sym->sec->name, ".static_call_sites");13551355- struct symbol *code_sym = NULL;13551355+ const char *code_sym = NULL;13561356 unsigned long code_offset = 0;13571357 struct reloc *reloc;13581358 int ret = 0;···13641364 const char *sym_modname;13651365 struct export *export;1366136613671367+ if (convert_reloc_sym(e->patched, reloc))13681368+ continue;13691369+13671370 /* Static branch/call keys are always STT_OBJECT */13681371 if (reloc->sym->type != STT_OBJECT) {1369137213701373 /* Save code location which can be printed below */13711374 if (reloc->sym->type == STT_FUNC && !code_sym) {13721372- code_sym = reloc->sym;13751375+ code_sym = reloc->sym->name;13731376 code_offset = reloc_addend(reloc);13741377 }13751378···13951392 if (!strcmp(sym_modname, "vmlinux"))13961393 continue;1397139413951395+ if (!code_sym)13961396+ code_sym = "<unknown>";13971397+13981398 if (static_branch) {13991399 if (strstarts(reloc->sym->name, "__tracepoint_")) {14001400 WARN("%s: disabling unsupported tracepoint %s",14011401- code_sym->name, reloc->sym->name + 13);14011401+ code_sym, reloc->sym->name + 13);14021402+ ret = 1;14031403+ continue;14041404+ }14051405+14061406+ if (strstr(reloc->sym->name, "__UNIQUE_ID_ddebug_")) {14071407+ WARN("%s: disabling unsupported pr_debug()",14081408+ code_sym);14021409 ret = 1;14031410 continue;14041411 }1405141214061413 ERROR("%s+0x%lx: unsupported static branch key %s. 
Use static_key_enabled() instead",14071407- code_sym->name, code_offset, reloc->sym->name);14141414+ code_sym, code_offset, reloc->sym->name);14081415 return -1;14091416 }14101417···14251412 }1426141314271414 ERROR("%s()+0x%lx: unsupported static call key %s. Use KLP_STATIC_CALL() instead",14281428- code_sym->name, code_offset, reloc->sym->name);14151415+ code_sym, code_offset, reloc->sym->name);14291416 return -1;14301417 }14311418
···274274 PYLINT := $(shell which pylint 2> /dev/null)275275endif276276277277-export srctree OUTPUT RM CC CXX RUSTC LD AR CFLAGS CXXFLAGS V BISON FLEX AWK277277+export srctree OUTPUT RM CC CXX RUSTC LD AR CFLAGS CXXFLAGS RUST_FLAGS V BISON FLEX AWK278278export HOSTCC HOSTLD HOSTAR HOSTCFLAGS SHELLCHECK MYPY PYLINT279279280280include $(srctree)/tools/build/Makefile.include
+1
tools/perf/arch/arm/entry/syscalls/syscall.tbl
···485485468 common file_getattr sys_file_getattr486486469 common file_setattr sys_file_setattr487487470 common listns sys_listns488488+471 common rseq_slice_yield sys_rseq_slice_yield
···561561468 common file_getattr sys_file_getattr562562469 common file_setattr sys_file_setattr563563470 common listns sys_listns564564+471 nospu rseq_slice_yield sys_rseq_slice_yield
+392-467
tools/perf/arch/s390/entry/syscalls/syscall.tbl
···33# System call table for s39044#55# Format:66+# <nr> <abi> <syscall> <entry>67#77-# <nr> <abi> <syscall> <entry-64bit> <compat-entry>88-#99-# where <abi> can be common, 64, or 3288+# <abi> is always common.1091111-1 common exit sys_exit sys_exit1212-2 common fork sys_fork sys_fork1313-3 common read sys_read compat_sys_s390_read1414-4 common write sys_write compat_sys_s390_write1515-5 common open sys_open compat_sys_open1616-6 common close sys_close sys_close1717-7 common restart_syscall sys_restart_syscall sys_restart_syscall1818-8 common creat sys_creat sys_creat1919-9 common link sys_link sys_link2020-10 common unlink sys_unlink sys_unlink2121-11 common execve sys_execve compat_sys_execve2222-12 common chdir sys_chdir sys_chdir2323-13 32 time - sys_time322424-14 common mknod sys_mknod sys_mknod2525-15 common chmod sys_chmod sys_chmod2626-16 32 lchown - sys_lchown162727-19 common lseek sys_lseek compat_sys_lseek2828-20 common getpid sys_getpid sys_getpid2929-21 common mount sys_mount sys_mount3030-22 common umount sys_oldumount sys_oldumount3131-23 32 setuid - sys_setuid163232-24 32 getuid - sys_getuid163333-25 32 stime - sys_stime323434-26 common ptrace sys_ptrace compat_sys_ptrace3535-27 common alarm sys_alarm sys_alarm3636-29 common pause sys_pause sys_pause3737-30 common utime sys_utime sys_utime323838-33 common access sys_access sys_access3939-34 common nice sys_nice sys_nice4040-36 common sync sys_sync sys_sync4141-37 common kill sys_kill sys_kill4242-38 common rename sys_rename sys_rename4343-39 common mkdir sys_mkdir sys_mkdir4444-40 common rmdir sys_rmdir sys_rmdir4545-41 common dup sys_dup sys_dup4646-42 common pipe sys_pipe sys_pipe4747-43 common times sys_times compat_sys_times4848-45 common brk sys_brk sys_brk4949-46 32 setgid - sys_setgid165050-47 32 getgid - sys_getgid165151-48 common signal sys_signal sys_signal5252-49 32 geteuid - sys_geteuid165353-50 32 getegid - sys_getegid165454-51 common acct sys_acct sys_acct5555-52 common umount2 
sys_umount sys_umount5656-54 common ioctl sys_ioctl compat_sys_ioctl5757-55 common fcntl sys_fcntl compat_sys_fcntl5858-57 common setpgid sys_setpgid sys_setpgid5959-60 common umask sys_umask sys_umask6060-61 common chroot sys_chroot sys_chroot6161-62 common ustat sys_ustat compat_sys_ustat6262-63 common dup2 sys_dup2 sys_dup26363-64 common getppid sys_getppid sys_getppid6464-65 common getpgrp sys_getpgrp sys_getpgrp6565-66 common setsid sys_setsid sys_setsid6666-67 common sigaction sys_sigaction compat_sys_sigaction6767-70 32 setreuid - sys_setreuid166868-71 32 setregid - sys_setregid166969-72 common sigsuspend sys_sigsuspend sys_sigsuspend7070-73 common sigpending sys_sigpending compat_sys_sigpending7171-74 common sethostname sys_sethostname sys_sethostname7272-75 common setrlimit sys_setrlimit compat_sys_setrlimit7373-76 32 getrlimit - compat_sys_old_getrlimit7474-77 common getrusage sys_getrusage compat_sys_getrusage7575-78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday7676-79 common settimeofday sys_settimeofday compat_sys_settimeofday7777-80 32 getgroups - sys_getgroups167878-81 32 setgroups - sys_setgroups167979-83 common symlink sys_symlink sys_symlink8080-85 common readlink sys_readlink sys_readlink8181-86 common uselib sys_uselib sys_uselib8282-87 common swapon sys_swapon sys_swapon8383-88 common reboot sys_reboot sys_reboot8484-89 common readdir - compat_sys_old_readdir8585-90 common mmap sys_old_mmap compat_sys_s390_old_mmap8686-91 common munmap sys_munmap sys_munmap8787-92 common truncate sys_truncate compat_sys_truncate8888-93 common ftruncate sys_ftruncate compat_sys_ftruncate8989-94 common fchmod sys_fchmod sys_fchmod9090-95 32 fchown - sys_fchown169191-96 common getpriority sys_getpriority sys_getpriority9292-97 common setpriority sys_setpriority sys_setpriority9393-99 common statfs sys_statfs compat_sys_statfs9494-100 common fstatfs sys_fstatfs compat_sys_fstatfs9595-101 32 ioperm - -9696-102 common socketcall sys_socketcall 
compat_sys_socketcall9797-103 common syslog sys_syslog sys_syslog9898-104 common setitimer sys_setitimer compat_sys_setitimer9999-105 common getitimer sys_getitimer compat_sys_getitimer100100-106 common stat sys_newstat compat_sys_newstat101101-107 common lstat sys_newlstat compat_sys_newlstat102102-108 common fstat sys_newfstat compat_sys_newfstat103103-110 common lookup_dcookie - -104104-111 common vhangup sys_vhangup sys_vhangup105105-112 common idle - -106106-114 common wait4 sys_wait4 compat_sys_wait4107107-115 common swapoff sys_swapoff sys_swapoff108108-116 common sysinfo sys_sysinfo compat_sys_sysinfo109109-117 common ipc sys_s390_ipc compat_sys_s390_ipc110110-118 common fsync sys_fsync sys_fsync111111-119 common sigreturn sys_sigreturn compat_sys_sigreturn112112-120 common clone sys_clone sys_clone113113-121 common setdomainname sys_setdomainname sys_setdomainname114114-122 common uname sys_newuname sys_newuname115115-124 common adjtimex sys_adjtimex sys_adjtimex_time32116116-125 common mprotect sys_mprotect sys_mprotect117117-126 common sigprocmask sys_sigprocmask compat_sys_sigprocmask118118-127 common create_module - -119119-128 common init_module sys_init_module sys_init_module120120-129 common delete_module sys_delete_module sys_delete_module121121-130 common get_kernel_syms - -122122-131 common quotactl sys_quotactl sys_quotactl123123-132 common getpgid sys_getpgid sys_getpgid124124-133 common fchdir sys_fchdir sys_fchdir125125-134 common bdflush sys_ni_syscall sys_ni_syscall126126-135 common sysfs sys_sysfs sys_sysfs127127-136 common personality sys_s390_personality sys_s390_personality128128-137 common afs_syscall - -129129-138 32 setfsuid - sys_setfsuid16130130-139 32 setfsgid - sys_setfsgid16131131-140 32 _llseek - sys_llseek132132-141 common getdents sys_getdents compat_sys_getdents133133-142 32 _newselect - compat_sys_select134134-142 64 select sys_select -135135-143 common flock sys_flock sys_flock136136-144 common msync sys_msync 
sys_msync137137-145 common readv sys_readv sys_readv138138-146 common writev sys_writev sys_writev139139-147 common getsid sys_getsid sys_getsid140140-148 common fdatasync sys_fdatasync sys_fdatasync141141-149 common _sysctl - -142142-150 common mlock sys_mlock sys_mlock143143-151 common munlock sys_munlock sys_munlock144144-152 common mlockall sys_mlockall sys_mlockall145145-153 common munlockall sys_munlockall sys_munlockall146146-154 common sched_setparam sys_sched_setparam sys_sched_setparam147147-155 common sched_getparam sys_sched_getparam sys_sched_getparam148148-156 common sched_setscheduler sys_sched_setscheduler sys_sched_setscheduler149149-157 common sched_getscheduler sys_sched_getscheduler sys_sched_getscheduler150150-158 common sched_yield sys_sched_yield sys_sched_yield151151-159 common sched_get_priority_max sys_sched_get_priority_max sys_sched_get_priority_max152152-160 common sched_get_priority_min sys_sched_get_priority_min sys_sched_get_priority_min153153-161 common sched_rr_get_interval sys_sched_rr_get_interval sys_sched_rr_get_interval_time32154154-162 common nanosleep sys_nanosleep sys_nanosleep_time32155155-163 common mremap sys_mremap sys_mremap156156-164 32 setresuid - sys_setresuid16157157-165 32 getresuid - sys_getresuid16158158-167 common query_module - -159159-168 common poll sys_poll sys_poll160160-169 common nfsservctl - -161161-170 32 setresgid - sys_setresgid16162162-171 32 getresgid - sys_getresgid16163163-172 common prctl sys_prctl sys_prctl164164-173 common rt_sigreturn sys_rt_sigreturn compat_sys_rt_sigreturn165165-174 common rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction166166-175 common rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask167167-176 common rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending168168-177 common rt_sigtimedwait sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time32169169-178 common rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo170170-179 common rt_sigsuspend 
sys_rt_sigsuspend compat_sys_rt_sigsuspend171171-180 common pread64 sys_pread64 compat_sys_s390_pread64172172-181 common pwrite64 sys_pwrite64 compat_sys_s390_pwrite64173173-182 32 chown - sys_chown16174174-183 common getcwd sys_getcwd sys_getcwd175175-184 common capget sys_capget sys_capget176176-185 common capset sys_capset sys_capset177177-186 common sigaltstack sys_sigaltstack compat_sys_sigaltstack178178-187 common sendfile sys_sendfile64 compat_sys_sendfile179179-188 common getpmsg - -180180-189 common putpmsg - -181181-190 common vfork sys_vfork sys_vfork182182-191 32 ugetrlimit - compat_sys_getrlimit183183-191 64 getrlimit sys_getrlimit -184184-192 32 mmap2 - compat_sys_s390_mmap2185185-193 32 truncate64 - compat_sys_s390_truncate64186186-194 32 ftruncate64 - compat_sys_s390_ftruncate64187187-195 32 stat64 - compat_sys_s390_stat64188188-196 32 lstat64 - compat_sys_s390_lstat64189189-197 32 fstat64 - compat_sys_s390_fstat64190190-198 32 lchown32 - sys_lchown191191-198 64 lchown sys_lchown -192192-199 32 getuid32 - sys_getuid193193-199 64 getuid sys_getuid -194194-200 32 getgid32 - sys_getgid195195-200 64 getgid sys_getgid -196196-201 32 geteuid32 - sys_geteuid197197-201 64 geteuid sys_geteuid -198198-202 32 getegid32 - sys_getegid199199-202 64 getegid sys_getegid -200200-203 32 setreuid32 - sys_setreuid201201-203 64 setreuid sys_setreuid -202202-204 32 setregid32 - sys_setregid203203-204 64 setregid sys_setregid -204204-205 32 getgroups32 - sys_getgroups205205-205 64 getgroups sys_getgroups -206206-206 32 setgroups32 - sys_setgroups207207-206 64 setgroups sys_setgroups -208208-207 32 fchown32 - sys_fchown209209-207 64 fchown sys_fchown -210210-208 32 setresuid32 - sys_setresuid211211-208 64 setresuid sys_setresuid -212212-209 32 getresuid32 - sys_getresuid213213-209 64 getresuid sys_getresuid -214214-210 32 setresgid32 - sys_setresgid215215-210 64 setresgid sys_setresgid -216216-211 32 getresgid32 - sys_getresgid217217-211 64 getresgid sys_getresgid 
-218218-212 32 chown32 - sys_chown219219-212 64 chown sys_chown -220220-213 32 setuid32 - sys_setuid221221-213 64 setuid sys_setuid -222222-214 32 setgid32 - sys_setgid223223-214 64 setgid sys_setgid -224224-215 32 setfsuid32 - sys_setfsuid225225-215 64 setfsuid sys_setfsuid -226226-216 32 setfsgid32 - sys_setfsgid227227-216 64 setfsgid sys_setfsgid -228228-217 common pivot_root sys_pivot_root sys_pivot_root229229-218 common mincore sys_mincore sys_mincore230230-219 common madvise sys_madvise sys_madvise231231-220 common getdents64 sys_getdents64 sys_getdents64232232-221 32 fcntl64 - compat_sys_fcntl64233233-222 common readahead sys_readahead compat_sys_s390_readahead234234-223 32 sendfile64 - compat_sys_sendfile64235235-224 common setxattr sys_setxattr sys_setxattr236236-225 common lsetxattr sys_lsetxattr sys_lsetxattr237237-226 common fsetxattr sys_fsetxattr sys_fsetxattr238238-227 common getxattr sys_getxattr sys_getxattr239239-228 common lgetxattr sys_lgetxattr sys_lgetxattr240240-229 common fgetxattr sys_fgetxattr sys_fgetxattr241241-230 common listxattr sys_listxattr sys_listxattr242242-231 common llistxattr sys_llistxattr sys_llistxattr243243-232 common flistxattr sys_flistxattr sys_flistxattr244244-233 common removexattr sys_removexattr sys_removexattr245245-234 common lremovexattr sys_lremovexattr sys_lremovexattr246246-235 common fremovexattr sys_fremovexattr sys_fremovexattr247247-236 common gettid sys_gettid sys_gettid248248-237 common tkill sys_tkill sys_tkill249249-238 common futex sys_futex sys_futex_time32250250-239 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity251251-240 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity252252-241 common tgkill sys_tgkill sys_tgkill253253-243 common io_setup sys_io_setup compat_sys_io_setup254254-244 common io_destroy sys_io_destroy sys_io_destroy255255-245 common io_getevents sys_io_getevents sys_io_getevents_time32256256-246 common io_submit sys_io_submit 
compat_sys_io_submit257257-247 common io_cancel sys_io_cancel sys_io_cancel258258-248 common exit_group sys_exit_group sys_exit_group259259-249 common epoll_create sys_epoll_create sys_epoll_create260260-250 common epoll_ctl sys_epoll_ctl sys_epoll_ctl261261-251 common epoll_wait sys_epoll_wait sys_epoll_wait262262-252 common set_tid_address sys_set_tid_address sys_set_tid_address263263-253 common fadvise64 sys_fadvise64_64 compat_sys_s390_fadvise64264264-254 common timer_create sys_timer_create compat_sys_timer_create265265-255 common timer_settime sys_timer_settime sys_timer_settime32266266-256 common timer_gettime sys_timer_gettime sys_timer_gettime32267267-257 common timer_getoverrun sys_timer_getoverrun sys_timer_getoverrun268268-258 common timer_delete sys_timer_delete sys_timer_delete269269-259 common clock_settime sys_clock_settime sys_clock_settime32270270-260 common clock_gettime sys_clock_gettime sys_clock_gettime32271271-261 common clock_getres sys_clock_getres sys_clock_getres_time32272272-262 common clock_nanosleep sys_clock_nanosleep sys_clock_nanosleep_time32273273-264 32 fadvise64_64 - compat_sys_s390_fadvise64_64274274-265 common statfs64 sys_statfs64 compat_sys_statfs64275275-266 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64276276-267 common remap_file_pages sys_remap_file_pages sys_remap_file_pages277277-268 common mbind sys_mbind sys_mbind278278-269 common get_mempolicy sys_get_mempolicy sys_get_mempolicy279279-270 common set_mempolicy sys_set_mempolicy sys_set_mempolicy280280-271 common mq_open sys_mq_open compat_sys_mq_open281281-272 common mq_unlink sys_mq_unlink sys_mq_unlink282282-273 common mq_timedsend sys_mq_timedsend sys_mq_timedsend_time32283283-274 common mq_timedreceive sys_mq_timedreceive sys_mq_timedreceive_time32284284-275 common mq_notify sys_mq_notify compat_sys_mq_notify285285-276 common mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr286286-277 common kexec_load sys_kexec_load compat_sys_kexec_load287287-278 
common add_key sys_add_key sys_add_key288288-279 common request_key sys_request_key sys_request_key289289-280 common keyctl sys_keyctl compat_sys_keyctl290290-281 common waitid sys_waitid compat_sys_waitid291291-282 common ioprio_set sys_ioprio_set sys_ioprio_set292292-283 common ioprio_get sys_ioprio_get sys_ioprio_get293293-284 common inotify_init sys_inotify_init sys_inotify_init294294-285 common inotify_add_watch sys_inotify_add_watch sys_inotify_add_watch295295-286 common inotify_rm_watch sys_inotify_rm_watch sys_inotify_rm_watch296296-287 common migrate_pages sys_migrate_pages sys_migrate_pages297297-288 common openat sys_openat compat_sys_openat298298-289 common mkdirat sys_mkdirat sys_mkdirat299299-290 common mknodat sys_mknodat sys_mknodat300300-291 common fchownat sys_fchownat sys_fchownat301301-292 common futimesat sys_futimesat sys_futimesat_time32302302-293 32 fstatat64 - compat_sys_s390_fstatat64303303-293 64 newfstatat sys_newfstatat -304304-294 common unlinkat sys_unlinkat sys_unlinkat305305-295 common renameat sys_renameat sys_renameat306306-296 common linkat sys_linkat sys_linkat307307-297 common symlinkat sys_symlinkat sys_symlinkat308308-298 common readlinkat sys_readlinkat sys_readlinkat309309-299 common fchmodat sys_fchmodat sys_fchmodat310310-300 common faccessat sys_faccessat sys_faccessat311311-301 common pselect6 sys_pselect6 compat_sys_pselect6_time32312312-302 common ppoll sys_ppoll compat_sys_ppoll_time32313313-303 common unshare sys_unshare sys_unshare314314-304 common set_robust_list sys_set_robust_list compat_sys_set_robust_list315315-305 common get_robust_list sys_get_robust_list compat_sys_get_robust_list316316-306 common splice sys_splice sys_splice317317-307 common sync_file_range sys_sync_file_range compat_sys_s390_sync_file_range318318-308 common tee sys_tee sys_tee319319-309 common vmsplice sys_vmsplice sys_vmsplice320320-310 common move_pages sys_move_pages sys_move_pages321321-311 common getcpu sys_getcpu 
sys_getcpu322322-312 common epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait323323-313 common utimes sys_utimes sys_utimes_time32324324-314 common fallocate sys_fallocate compat_sys_s390_fallocate325325-315 common utimensat sys_utimensat sys_utimensat_time32326326-316 common signalfd sys_signalfd compat_sys_signalfd327327-317 common timerfd - -328328-318 common eventfd sys_eventfd sys_eventfd329329-319 common timerfd_create sys_timerfd_create sys_timerfd_create330330-320 common timerfd_settime sys_timerfd_settime sys_timerfd_settime32331331-321 common timerfd_gettime sys_timerfd_gettime sys_timerfd_gettime32332332-322 common signalfd4 sys_signalfd4 compat_sys_signalfd4333333-323 common eventfd2 sys_eventfd2 sys_eventfd2334334-324 common inotify_init1 sys_inotify_init1 sys_inotify_init1335335-325 common pipe2 sys_pipe2 sys_pipe2336336-326 common dup3 sys_dup3 sys_dup3337337-327 common epoll_create1 sys_epoll_create1 sys_epoll_create1338338-328 common preadv sys_preadv compat_sys_preadv339339-329 common pwritev sys_pwritev compat_sys_pwritev340340-330 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo341341-331 common perf_event_open sys_perf_event_open sys_perf_event_open342342-332 common fanotify_init sys_fanotify_init sys_fanotify_init343343-333 common fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark344344-334 common prlimit64 sys_prlimit64 sys_prlimit64345345-335 common name_to_handle_at sys_name_to_handle_at sys_name_to_handle_at346346-336 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at347347-337 common clock_adjtime sys_clock_adjtime sys_clock_adjtime32348348-338 common syncfs sys_syncfs sys_syncfs349349-339 common setns sys_setns sys_setns350350-340 common process_vm_readv sys_process_vm_readv sys_process_vm_readv351351-341 common process_vm_writev sys_process_vm_writev sys_process_vm_writev352352-342 common s390_runtime_instr sys_s390_runtime_instr sys_s390_runtime_instr353353-343 common kcmp 
sys_kcmp sys_kcmp354354-344 common finit_module sys_finit_module sys_finit_module355355-345 common sched_setattr sys_sched_setattr sys_sched_setattr356356-346 common sched_getattr sys_sched_getattr sys_sched_getattr357357-347 common renameat2 sys_renameat2 sys_renameat2358358-348 common seccomp sys_seccomp sys_seccomp359359-349 common getrandom sys_getrandom sys_getrandom360360-350 common memfd_create sys_memfd_create sys_memfd_create361361-351 common bpf sys_bpf sys_bpf362362-352 common s390_pci_mmio_write sys_s390_pci_mmio_write sys_s390_pci_mmio_write363363-353 common s390_pci_mmio_read sys_s390_pci_mmio_read sys_s390_pci_mmio_read364364-354 common execveat sys_execveat compat_sys_execveat365365-355 common userfaultfd sys_userfaultfd sys_userfaultfd366366-356 common membarrier sys_membarrier sys_membarrier367367-357 common recvmmsg sys_recvmmsg compat_sys_recvmmsg_time32368368-358 common sendmmsg sys_sendmmsg compat_sys_sendmmsg369369-359 common socket sys_socket sys_socket370370-360 common socketpair sys_socketpair sys_socketpair371371-361 common bind sys_bind sys_bind372372-362 common connect sys_connect sys_connect373373-363 common listen sys_listen sys_listen374374-364 common accept4 sys_accept4 sys_accept4375375-365 common getsockopt sys_getsockopt sys_getsockopt376376-366 common setsockopt sys_setsockopt sys_setsockopt377377-367 common getsockname sys_getsockname sys_getsockname378378-368 common getpeername sys_getpeername sys_getpeername379379-369 common sendto sys_sendto sys_sendto380380-370 common sendmsg sys_sendmsg compat_sys_sendmsg381381-371 common recvfrom sys_recvfrom compat_sys_recvfrom382382-372 common recvmsg sys_recvmsg compat_sys_recvmsg383383-373 common shutdown sys_shutdown sys_shutdown384384-374 common mlock2 sys_mlock2 sys_mlock2385385-375 common copy_file_range sys_copy_file_range sys_copy_file_range386386-376 common preadv2 sys_preadv2 compat_sys_preadv2387387-377 common pwritev2 sys_pwritev2 compat_sys_pwritev2388388-378 common 
s390_guarded_storage sys_s390_guarded_storage sys_s390_guarded_storage389389-379 common statx sys_statx sys_statx390390-380 common s390_sthyi sys_s390_sthyi sys_s390_sthyi391391-381 common kexec_file_load sys_kexec_file_load sys_kexec_file_load392392-382 common io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents393393-383 common rseq sys_rseq sys_rseq394394-384 common pkey_mprotect sys_pkey_mprotect sys_pkey_mprotect395395-385 common pkey_alloc sys_pkey_alloc sys_pkey_alloc396396-386 common pkey_free sys_pkey_free sys_pkey_free1010+1 common exit sys_exit1111+2 common fork sys_fork1212+3 common read sys_read1313+4 common write sys_write1414+5 common open sys_open1515+6 common close sys_close1616+7 common restart_syscall sys_restart_syscall1717+8 common creat sys_creat1818+9 common link sys_link1919+10 common unlink sys_unlink2020+11 common execve sys_execve2121+12 common chdir sys_chdir2222+14 common mknod sys_mknod2323+15 common chmod sys_chmod2424+19 common lseek sys_lseek2525+20 common getpid sys_getpid2626+21 common mount sys_mount2727+22 common umount sys_oldumount2828+26 common ptrace sys_ptrace2929+27 common alarm sys_alarm3030+29 common pause sys_pause3131+30 common utime sys_utime3232+33 common access sys_access3333+34 common nice sys_nice3434+36 common sync sys_sync3535+37 common kill sys_kill3636+38 common rename sys_rename3737+39 common mkdir sys_mkdir3838+40 common rmdir sys_rmdir3939+41 common dup sys_dup4040+42 common pipe sys_pipe4141+43 common times sys_times4242+45 common brk sys_brk4343+48 common signal sys_signal4444+51 common acct sys_acct4545+52 common umount2 sys_umount4646+54 common ioctl sys_ioctl4747+55 common fcntl sys_fcntl4848+57 common setpgid sys_setpgid4949+60 common umask sys_umask5050+61 common chroot sys_chroot5151+62 common ustat sys_ustat5252+63 common dup2 sys_dup25353+64 common getppid sys_getppid5454+65 common getpgrp sys_getpgrp5555+66 common setsid sys_setsid5656+67 common sigaction sys_sigaction5757+72 common 
sigsuspend sys_sigsuspend5858+73 common sigpending sys_sigpending5959+74 common sethostname sys_sethostname6060+75 common setrlimit sys_setrlimit6161+77 common getrusage sys_getrusage6262+78 common gettimeofday sys_gettimeofday6363+79 common settimeofday sys_settimeofday6464+83 common symlink sys_symlink6565+85 common readlink sys_readlink6666+86 common uselib sys_uselib6767+87 common swapon sys_swapon6868+88 common reboot sys_reboot6969+89 common readdir sys_ni_syscall7070+90 common mmap sys_old_mmap7171+91 common munmap sys_munmap7272+92 common truncate sys_truncate7373+93 common ftruncate sys_ftruncate7474+94 common fchmod sys_fchmod7575+96 common getpriority sys_getpriority7676+97 common setpriority sys_setpriority7777+99 common statfs sys_statfs7878+100 common fstatfs sys_fstatfs7979+102 common socketcall sys_socketcall8080+103 common syslog sys_syslog8181+104 common setitimer sys_setitimer8282+105 common getitimer sys_getitimer8383+106 common stat sys_newstat8484+107 common lstat sys_newlstat8585+108 common fstat sys_newfstat8686+110 common lookup_dcookie sys_ni_syscall8787+111 common vhangup sys_vhangup8888+112 common idle sys_ni_syscall8989+114 common wait4 sys_wait49090+115 common swapoff sys_swapoff9191+116 common sysinfo sys_sysinfo9292+117 common ipc sys_s390_ipc9393+118 common fsync sys_fsync9494+119 common sigreturn sys_sigreturn9595+120 common clone sys_clone9696+121 common setdomainname sys_setdomainname9797+122 common uname sys_newuname9898+124 common adjtimex sys_adjtimex9999+125 common mprotect sys_mprotect100100+126 common sigprocmask sys_sigprocmask101101+127 common create_module sys_ni_syscall102102+128 common init_module sys_init_module103103+129 common delete_module sys_delete_module104104+130 common get_kernel_syms sys_ni_syscall105105+131 common quotactl sys_quotactl106106+132 common getpgid sys_getpgid107107+133 common fchdir sys_fchdir108108+134 common bdflush sys_ni_syscall109109+135 common sysfs sys_sysfs110110+136 common personality 
sys_s390_personality111111+137 common afs_syscall sys_ni_syscall112112+141 common getdents sys_getdents113113+142 common select sys_select114114+143 common flock sys_flock115115+144 common msync sys_msync116116+145 common readv sys_readv117117+146 common writev sys_writev118118+147 common getsid sys_getsid119119+148 common fdatasync sys_fdatasync120120+149 common _sysctl sys_ni_syscall121121+150 common mlock sys_mlock122122+151 common munlock sys_munlock123123+152 common mlockall sys_mlockall124124+153 common munlockall sys_munlockall125125+154 common sched_setparam sys_sched_setparam126126+155 common sched_getparam sys_sched_getparam127127+156 common sched_setscheduler sys_sched_setscheduler128128+157 common sched_getscheduler sys_sched_getscheduler129129+158 common sched_yield sys_sched_yield130130+159 common sched_get_priority_max sys_sched_get_priority_max131131+160 common sched_get_priority_min sys_sched_get_priority_min132132+161 common sched_rr_get_interval sys_sched_rr_get_interval133133+162 common nanosleep sys_nanosleep134134+163 common mremap sys_mremap135135+167 common query_module sys_ni_syscall136136+168 common poll sys_poll137137+169 common nfsservctl sys_ni_syscall138138+172 common prctl sys_prctl139139+173 common rt_sigreturn sys_rt_sigreturn140140+174 common rt_sigaction sys_rt_sigaction141141+175 common rt_sigprocmask sys_rt_sigprocmask142142+176 common rt_sigpending sys_rt_sigpending143143+177 common rt_sigtimedwait sys_rt_sigtimedwait144144+178 common rt_sigqueueinfo sys_rt_sigqueueinfo145145+179 common rt_sigsuspend sys_rt_sigsuspend146146+180 common pread64 sys_pread64147147+181 common pwrite64 sys_pwrite64148148+183 common getcwd sys_getcwd149149+184 common capget sys_capget150150+185 common capset sys_capset151151+186 common sigaltstack sys_sigaltstack152152+187 common sendfile sys_sendfile64153153+188 common getpmsg sys_ni_syscall154154+189 common putpmsg sys_ni_syscall155155+190 common vfork sys_vfork156156+191 common getrlimit 
sys_getrlimit157157+198 common lchown sys_lchown158158+199 common getuid sys_getuid159159+200 common getgid sys_getgid160160+201 common geteuid sys_geteuid161161+202 common getegid sys_getegid162162+203 common setreuid sys_setreuid163163+204 common setregid sys_setregid164164+205 common getgroups sys_getgroups165165+206 common setgroups sys_setgroups166166+207 common fchown sys_fchown167167+208 common setresuid sys_setresuid168168+209 common getresuid sys_getresuid169169+210 common setresgid sys_setresgid170170+211 common getresgid sys_getresgid171171+212 common chown sys_chown172172+213 common setuid sys_setuid173173+214 common setgid sys_setgid174174+215 common setfsuid sys_setfsuid175175+216 common setfsgid sys_setfsgid176176+217 common pivot_root sys_pivot_root177177+218 common mincore sys_mincore178178+219 common madvise sys_madvise179179+220 common getdents64 sys_getdents64180180+222 common readahead sys_readahead181181+224 common setxattr sys_setxattr182182+225 common lsetxattr sys_lsetxattr183183+226 common fsetxattr sys_fsetxattr184184+227 common getxattr sys_getxattr185185+228 common lgetxattr sys_lgetxattr186186+229 common fgetxattr sys_fgetxattr187187+230 common listxattr sys_listxattr188188+231 common llistxattr sys_llistxattr189189+232 common flistxattr sys_flistxattr190190+233 common removexattr sys_removexattr191191+234 common lremovexattr sys_lremovexattr192192+235 common fremovexattr sys_fremovexattr193193+236 common gettid sys_gettid194194+237 common tkill sys_tkill195195+238 common futex sys_futex196196+239 common sched_setaffinity sys_sched_setaffinity197197+240 common sched_getaffinity sys_sched_getaffinity198198+241 common tgkill sys_tgkill199199+243 common io_setup sys_io_setup200200+244 common io_destroy sys_io_destroy201201+245 common io_getevents sys_io_getevents202202+246 common io_submit sys_io_submit203203+247 common io_cancel sys_io_cancel204204+248 common exit_group sys_exit_group205205+249 common epoll_create 
sys_epoll_create
206206+250 common epoll_ctl sys_epoll_ctl
207207+251 common epoll_wait sys_epoll_wait
208208+252 common set_tid_address sys_set_tid_address
209209+253 common fadvise64 sys_fadvise64_64
210210+254 common timer_create sys_timer_create
211211+255 common timer_settime sys_timer_settime
212212+256 common timer_gettime sys_timer_gettime
213213+257 common timer_getoverrun sys_timer_getoverrun
214214+258 common timer_delete sys_timer_delete
215215+259 common clock_settime sys_clock_settime
216216+260 common clock_gettime sys_clock_gettime
217217+261 common clock_getres sys_clock_getres
218218+262 common clock_nanosleep sys_clock_nanosleep
219219+265 common statfs64 sys_statfs64
220220+266 common fstatfs64 sys_fstatfs64
221221+267 common remap_file_pages sys_remap_file_pages
222222+268 common mbind sys_mbind
223223+269 common get_mempolicy sys_get_mempolicy
224224+270 common set_mempolicy sys_set_mempolicy
225225+271 common mq_open sys_mq_open
226226+272 common mq_unlink sys_mq_unlink
227227+273 common mq_timedsend sys_mq_timedsend
228228+274 common mq_timedreceive sys_mq_timedreceive
229229+275 common mq_notify sys_mq_notify
230230+276 common mq_getsetattr sys_mq_getsetattr
231231+277 common kexec_load sys_kexec_load
232232+278 common add_key sys_add_key
233233+279 common request_key sys_request_key
234234+280 common keyctl sys_keyctl
235235+281 common waitid sys_waitid
236236+282 common ioprio_set sys_ioprio_set
237237+283 common ioprio_get sys_ioprio_get
238238+284 common inotify_init sys_inotify_init
239239+285 common inotify_add_watch sys_inotify_add_watch
240240+286 common inotify_rm_watch sys_inotify_rm_watch
241241+287 common migrate_pages sys_migrate_pages
242242+288 common openat sys_openat
243243+289 common mkdirat sys_mkdirat
244244+290 common mknodat sys_mknodat
245245+291 common fchownat sys_fchownat
246246+292 common futimesat sys_futimesat
247247+293 common newfstatat sys_newfstatat
248248+294 common unlinkat sys_unlinkat
249249+295 common renameat sys_renameat
250250+296 common linkat sys_linkat
251251+297 common symlinkat sys_symlinkat
252252+298 common readlinkat sys_readlinkat
253253+299 common fchmodat sys_fchmodat
254254+300 common faccessat sys_faccessat
255255+301 common pselect6 sys_pselect6
256256+302 common ppoll sys_ppoll
257257+303 common unshare sys_unshare
258258+304 common set_robust_list sys_set_robust_list
259259+305 common get_robust_list sys_get_robust_list
260260+306 common splice sys_splice
261261+307 common sync_file_range sys_sync_file_range
262262+308 common tee sys_tee
263263+309 common vmsplice sys_vmsplice
264264+310 common move_pages sys_move_pages
265265+311 common getcpu sys_getcpu
266266+312 common epoll_pwait sys_epoll_pwait
267267+313 common utimes sys_utimes
268268+314 common fallocate sys_fallocate
269269+315 common utimensat sys_utimensat
270270+316 common signalfd sys_signalfd
271271+317 common timerfd sys_ni_syscall
272272+318 common eventfd sys_eventfd
273273+319 common timerfd_create sys_timerfd_create
274274+320 common timerfd_settime sys_timerfd_settime
275275+321 common timerfd_gettime sys_timerfd_gettime
276276+322 common signalfd4 sys_signalfd4
277277+323 common eventfd2 sys_eventfd2
278278+324 common inotify_init1 sys_inotify_init1
279279+325 common pipe2 sys_pipe2
280280+326 common dup3 sys_dup3
281281+327 common epoll_create1 sys_epoll_create1
282282+328 common preadv sys_preadv
283283+329 common pwritev sys_pwritev
284284+330 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo
285285+331 common perf_event_open sys_perf_event_open
286286+332 common fanotify_init sys_fanotify_init
287287+333 common fanotify_mark sys_fanotify_mark
288288+334 common prlimit64 sys_prlimit64
289289+335 common name_to_handle_at sys_name_to_handle_at
290290+336 common open_by_handle_at sys_open_by_handle_at
291291+337 common clock_adjtime sys_clock_adjtime
292292+338 common syncfs sys_syncfs
293293+339 common setns sys_setns
294294+340 common process_vm_readv sys_process_vm_readv
295295+341 common process_vm_writev sys_process_vm_writev
296296+342 common s390_runtime_instr sys_s390_runtime_instr
297297+343 common kcmp sys_kcmp
298298+344 common finit_module sys_finit_module
299299+345 common sched_setattr sys_sched_setattr
300300+346 common sched_getattr sys_sched_getattr
301301+347 common renameat2 sys_renameat2
302302+348 common seccomp sys_seccomp
303303+349 common getrandom sys_getrandom
304304+350 common memfd_create sys_memfd_create
305305+351 common bpf sys_bpf
306306+352 common s390_pci_mmio_write sys_s390_pci_mmio_write
307307+353 common s390_pci_mmio_read sys_s390_pci_mmio_read
308308+354 common execveat sys_execveat
309309+355 common userfaultfd sys_userfaultfd
310310+356 common membarrier sys_membarrier
311311+357 common recvmmsg sys_recvmmsg
312312+358 common sendmmsg sys_sendmmsg
313313+359 common socket sys_socket
314314+360 common socketpair sys_socketpair
315315+361 common bind sys_bind
316316+362 common connect sys_connect
317317+363 common listen sys_listen
318318+364 common accept4 sys_accept4
319319+365 common getsockopt sys_getsockopt
320320+366 common setsockopt sys_setsockopt
321321+367 common getsockname sys_getsockname
322322+368 common getpeername sys_getpeername
323323+369 common sendto sys_sendto
324324+370 common sendmsg sys_sendmsg
325325+371 common recvfrom sys_recvfrom
326326+372 common recvmsg sys_recvmsg
327327+373 common shutdown sys_shutdown
328328+374 common mlock2 sys_mlock2
329329+375 common copy_file_range sys_copy_file_range
330330+376 common preadv2 sys_preadv2
331331+377 common pwritev2 sys_pwritev2
332332+378 common s390_guarded_storage sys_s390_guarded_storage
333333+379 common statx sys_statx
334334+380 common s390_sthyi sys_s390_sthyi
335335+381 common kexec_file_load sys_kexec_file_load
336336+382 common io_pgetevents sys_io_pgetevents
337337+383 common rseq sys_rseq
338338+384 common pkey_mprotect sys_pkey_mprotect
339339+385 common pkey_alloc sys_pkey_alloc
340340+386 common pkey_free sys_pkey_free
397341# room for arch specific syscalls
398398-392 64 semtimedop sys_semtimedop -
399399-393 common semget sys_semget sys_semget
400400-394 common semctl sys_semctl compat_sys_semctl
401401-395 common shmget sys_shmget sys_shmget
402402-396 common shmctl sys_shmctl compat_sys_shmctl
403403-397 common shmat sys_shmat compat_sys_shmat
404404-398 common shmdt sys_shmdt sys_shmdt
405405-399 common msgget sys_msgget sys_msgget
406406-400 common msgsnd sys_msgsnd compat_sys_msgsnd
407407-401 common msgrcv sys_msgrcv compat_sys_msgrcv
408408-402 common msgctl sys_msgctl compat_sys_msgctl
409409-403 32 clock_gettime64 - sys_clock_gettime
410410-404 32 clock_settime64 - sys_clock_settime
411411-405 32 clock_adjtime64 - sys_clock_adjtime
412412-406 32 clock_getres_time64 - sys_clock_getres
413413-407 32 clock_nanosleep_time64 - sys_clock_nanosleep
414414-408 32 timer_gettime64 - sys_timer_gettime
415415-409 32 timer_settime64 - sys_timer_settime
416416-410 32 timerfd_gettime64 - sys_timerfd_gettime
417417-411 32 timerfd_settime64 - sys_timerfd_settime
418418-412 32 utimensat_time64 - sys_utimensat
419419-413 32 pselect6_time64 - compat_sys_pselect6_time64
420420-414 32 ppoll_time64 - compat_sys_ppoll_time64
421421-416 32 io_pgetevents_time64 - compat_sys_io_pgetevents_time64
422422-417 32 recvmmsg_time64 - compat_sys_recvmmsg_time64
423423-418 32 mq_timedsend_time64 - sys_mq_timedsend
424424-419 32 mq_timedreceive_time64 - sys_mq_timedreceive
425425-420 32 semtimedop_time64 - sys_semtimedop
426426-421 32 rt_sigtimedwait_time64 - compat_sys_rt_sigtimedwait_time64
427427-422 32 futex_time64 - sys_futex
428428-423 32 sched_rr_get_interval_time64 - sys_sched_rr_get_interval
429429-424 common pidfd_send_signal sys_pidfd_send_signal sys_pidfd_send_signal
430430-425 common io_uring_setup sys_io_uring_setup sys_io_uring_setup
431431-426 common io_uring_enter sys_io_uring_enter sys_io_uring_enter
432432-427 common io_uring_register sys_io_uring_register sys_io_uring_register
433433-428 common open_tree sys_open_tree sys_open_tree
434434-429 common move_mount sys_move_mount sys_move_mount
435435-430 common fsopen sys_fsopen sys_fsopen
436436-431 common fsconfig sys_fsconfig sys_fsconfig
437437-432 common fsmount sys_fsmount sys_fsmount
438438-433 common fspick sys_fspick sys_fspick
439439-434 common pidfd_open sys_pidfd_open sys_pidfd_open
440440-435 common clone3 sys_clone3 sys_clone3
441441-436 common close_range sys_close_range sys_close_range
442442-437 common openat2 sys_openat2 sys_openat2
443443-438 common pidfd_getfd sys_pidfd_getfd sys_pidfd_getfd
444444-439 common faccessat2 sys_faccessat2 sys_faccessat2
445445-440 common process_madvise sys_process_madvise sys_process_madvise
446446-441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
447447-442 common mount_setattr sys_mount_setattr sys_mount_setattr
448448-443 common quotactl_fd sys_quotactl_fd sys_quotactl_fd
449449-444 common landlock_create_ruleset sys_landlock_create_ruleset sys_landlock_create_ruleset
450450-445 common landlock_add_rule sys_landlock_add_rule sys_landlock_add_rule
451451-446 common landlock_restrict_self sys_landlock_restrict_self sys_landlock_restrict_self
452452-447 common memfd_secret sys_memfd_secret sys_memfd_secret
453453-448 common process_mrelease sys_process_mrelease sys_process_mrelease
454454-449 common futex_waitv sys_futex_waitv sys_futex_waitv
455455-450 common set_mempolicy_home_node sys_set_mempolicy_home_node sys_set_mempolicy_home_node
456456-451 common cachestat sys_cachestat sys_cachestat
457457-452 common fchmodat2 sys_fchmodat2 sys_fchmodat2
458458-453 common map_shadow_stack sys_map_shadow_stack sys_map_shadow_stack
459459-454 common futex_wake sys_futex_wake sys_futex_wake
460460-455 common futex_wait sys_futex_wait sys_futex_wait
461461-456 common futex_requeue sys_futex_requeue sys_futex_requeue
462462-457 common statmount sys_statmount sys_statmount
463463-458 common listmount sys_listmount sys_listmount
464464-459 common lsm_get_self_attr sys_lsm_get_self_attr sys_lsm_get_self_attr
465465-460 common lsm_set_self_attr sys_lsm_set_self_attr sys_lsm_set_self_attr
466466-461 common lsm_list_modules sys_lsm_list_modules sys_lsm_list_modules
467467-462 common mseal sys_mseal sys_mseal
468468-463 common setxattrat sys_setxattrat sys_setxattrat
469469-464 common getxattrat sys_getxattrat sys_getxattrat
470470-465 common listxattrat sys_listxattrat sys_listxattrat
471471-466 common removexattrat sys_removexattrat sys_removexattrat
472472-467 common open_tree_attr sys_open_tree_attr sys_open_tree_attr
473473-468 common file_getattr sys_file_getattr sys_file_getattr
474474-469 common file_setattr sys_file_setattr sys_file_setattr
475475-470 common listns sys_listns sys_listns
342342+392 common semtimedop sys_semtimedop
343343+393 common semget sys_semget
344344+394 common semctl sys_semctl
345345+395 common shmget sys_shmget
346346+396 common shmctl sys_shmctl
347347+397 common shmat sys_shmat
348348+398 common shmdt sys_shmdt
349349+399 common msgget sys_msgget
350350+400 common msgsnd sys_msgsnd
351351+401 common msgrcv sys_msgrcv
352352+402 common msgctl sys_msgctl
353353+424 common pidfd_send_signal sys_pidfd_send_signal
354354+425 common io_uring_setup sys_io_uring_setup
355355+426 common io_uring_enter sys_io_uring_enter
356356+427 common io_uring_register sys_io_uring_register
357357+428 common open_tree sys_open_tree
358358+429 common move_mount sys_move_mount
359359+430 common fsopen sys_fsopen
360360+431 common fsconfig sys_fsconfig
361361+432 common fsmount sys_fsmount
362362+433 common fspick sys_fspick
363363+434 common pidfd_open sys_pidfd_open
364364+435 common clone3 sys_clone3
365365+436 common close_range sys_close_range
366366+437 common openat2 sys_openat2
367367+438 common pidfd_getfd sys_pidfd_getfd
368368+439 common faccessat2 sys_faccessat2
369369+440 common process_madvise sys_process_madvise
370370+441 common epoll_pwait2 sys_epoll_pwait2
371371+442 common mount_setattr sys_mount_setattr
372372+443 common quotactl_fd sys_quotactl_fd
373373+444 common landlock_create_ruleset sys_landlock_create_ruleset
374374+445 common landlock_add_rule sys_landlock_add_rule
375375+446 common landlock_restrict_self sys_landlock_restrict_self
376376+447 common memfd_secret sys_memfd_secret
377377+448 common process_mrelease sys_process_mrelease
378378+449 common futex_waitv sys_futex_waitv
379379+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
380380+451 common cachestat sys_cachestat
381381+452 common fchmodat2 sys_fchmodat2
382382+453 common map_shadow_stack sys_map_shadow_stack
383383+454 common futex_wake sys_futex_wake
384384+455 common futex_wait sys_futex_wait
385385+456 common futex_requeue sys_futex_requeue
386386+457 common statmount sys_statmount
387387+458 common listmount sys_listmount
388388+459 common lsm_get_self_attr sys_lsm_get_self_attr
389389+460 common lsm_set_self_attr sys_lsm_set_self_attr
390390+461 common lsm_list_modules sys_lsm_list_modules
391391+462 common mseal sys_mseal
392392+463 common setxattrat sys_setxattrat
393393+464 common getxattrat sys_getxattrat
394394+465 common listxattrat sys_listxattrat
395395+466 common removexattrat sys_removexattrat
396396+467 common open_tree_attr sys_open_tree_attr
397397+468 common file_getattr sys_file_getattr
398398+469 common file_setattr sys_file_setattr
399399+470 common listns sys_listns
400400+471 common rseq_slice_yield sys_rseq_slice_yield
+1
tools/perf/arch/sh/entry/syscalls/syscall.tbl
···
474474468 common file_getattr sys_file_getattr
475475469 common file_setattr sys_file_setattr
476476470 common listns sys_listns
477477+471 common rseq_slice_yield sys_rseq_slice_yield
+2-1
tools/perf/arch/sparc/entry/syscalls/syscall.tbl
···
480480432 common fsmount sys_fsmount
481481433 common fspick sys_fspick
482482434 common pidfd_open sys_pidfd_open
483483-# 435 reserved for clone3
483483+435 common clone3 __sys_clone3
484484436 common close_range sys_close_range
485485437 common openat2 sys_openat2
486486438 common pidfd_getfd sys_pidfd_getfd
···
516516468 common file_getattr sys_file_getattr
517517469 common file_setattr sys_file_setattr
518518470 common listns sys_listns
519519+471 common rseq_slice_yield sys_rseq_slice_yield
···
395395468 common file_getattr sys_file_getattr
396396469 common file_setattr sys_file_setattr
397397470 common listns sys_listns
398398+471 common rseq_slice_yield sys_rseq_slice_yield
398399
399400#
400401# Due to a historical design error, certain syscalls are numbered differently
+1
tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
···
441441468 common file_getattr sys_file_getattr
442442469 common file_setattr sys_file_setattr
443443470 common listns sys_listns
444444+471 common rseq_slice_yield sys_rseq_slice_yield
···
214214quiet_cmd_rm = RM $^
215215
216216prune_orphans: $(ORPHAN_FILES)
217217-	$(Q)$(call echo-cmd,rm)rm -f $^
217217+	# The list of files can be long. Use xargs to prevent issues.
218218+	$(Q)$(call echo-cmd,rm)echo "$^" | xargs rm -f
218219
219220JEVENTS_DEPS += prune_orphans
220221endif
···
7777 */
7878#define IRQ_WORK_VECTOR 0xf6
7979
8080+/* IRQ vector for PMIs when running a guest with a mediated PMU. */
8081#define PERF_GUEST_MEDIATED_PMI_VECTOR 0xf5
8182
8283#define DEFERRED_ERROR_VECTOR 0xf4
+1
tools/perf/trace/beauty/include/uapi/linux/fs.h
···
253253#define FS_XFLAG_FILESTREAM 0x00004000 /* use filestream allocator */
254254#define FS_XFLAG_DAX 0x00008000 /* use DAX for IO */
255255#define FS_XFLAG_COWEXTSIZE 0x00010000 /* CoW extent size allocator hint */
256256+#define FS_XFLAG_VERITY 0x00020000 /* fs-verity enabled */
256257#define FS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */
257258
258259/* the read-only stuff doesn't really belong here, but any other place is
···
6161/*
6262 * open_tree() flags.
6363 */
6464-#define OPEN_TREE_CLONE 1 /* Clone the target tree and attach the clone */
6464+#define OPEN_TREE_CLONE (1 << 0) /* Clone the target tree and attach the clone */
6565+#define OPEN_TREE_NAMESPACE (1 << 1) /* Clone the target tree into a new mount namespace */
6566#define OPEN_TREE_CLOEXEC O_CLOEXEC /* Close the file on execve() */
6667
6768/*
···
198197 */
199198struct mnt_id_req {
200199	__u32 size;
201201-	__u32 mnt_ns_fd;
200200+	union {
201201+		__u32 mnt_ns_fd;
202202+		__u32 mnt_fd;
203203+	};
202204	__u64 mnt_id;
203205	__u64 param;
204206	__u64 mnt_ns_id;
···
235231 */
236232#define LSMT_ROOT 0xffffffffffffffff /* root mount */
237233#define LISTMOUNT_REVERSE (1 << 0) /* List later mounts first */
234234+
235235+/*
236236+ * @flag bits for statmount(2)
237237+ */
238238+#define STATMOUNT_BY_FD 0x00000001U /* want mountinfo for given fd */
238239
239240#endif /* _UAPI_LINUX_MOUNT_H */
···
386386# define PR_FUTEX_HASH_SET_SLOTS 1
387387# define PR_FUTEX_HASH_GET_SLOTS 2
388388
389389+/* RSEQ time slice extensions */
390390+#define PR_RSEQ_SLICE_EXTENSION 79
391391+# define PR_RSEQ_SLICE_EXTENSION_GET 1
392392+# define PR_RSEQ_SLICE_EXTENSION_SET 2
393393+/*
394394+ * Bits for RSEQ_SLICE_EXTENSION_GET/SET
395395+ * PR_RSEQ_SLICE_EXT_ENABLE: Enable
396396+ */
397397+# define PR_RSEQ_SLICE_EXT_ENABLE 0x01
398398+
399399+/*
400400+ * Get the current indirect branch tracking configuration for the current
401401+ * thread, this will be the value configured via PR_SET_INDIR_BR_LP_STATUS.
402402+ */
403403+#define PR_GET_INDIR_BR_LP_STATUS 80
404404+
405405+/*
406406+ * Set the indirect branch tracking configuration. PR_INDIR_BR_LP_ENABLE will
407407+ * enable cpu feature for user thread, to track all indirect branches and ensure
408408+ * they land on arch defined landing pad instruction.
409409+ * x86 - If enabled, an indirect branch must land on an ENDBRANCH instruction.
410410+ * arch64 - If enabled, an indirect branch must land on a BTI instruction.
411411+ * riscv - If enabled, an indirect branch must land on an lpad instruction.
412412+ * PR_INDIR_BR_LP_DISABLE will disable feature for user thread and indirect
413413+ * branches will no more be tracked by cpu to land on arch defined landing pad
414414+ * instruction.
415415+ */
416416+#define PR_SET_INDIR_BR_LP_STATUS 81
417417+# define PR_INDIR_BR_LP_ENABLE (1UL << 0)
418418+
419419+/*
420420+ * Prevent further changes to the specified indirect branch tracking
421421+ * configuration. All bits may be locked via this call, including
422422+ * undefined bits.
423423+ */
424424+#define PR_LOCK_INDIR_BR_LP_STATUS 82
425425+
389426#endif /* _LINUX_PRCTL_H */
···
549549	/*
550550	 * Process the PE_CONTEXT packets if we have a valid contextID or VMID.
551551	 * If the kernel is running at EL2, the PID is traced in CONTEXTIDR_EL2
552552-	 * as VMID, Bit ETM_OPT_CTXTID2 is set in this case.
552552+	 * as VMID, Format attribute 'contextid2' is set in this case.
553553	 */
554554	switch (cs_etm__get_pid_fmt(etmq)) {
555555	case CS_ETM_PIDFMT_CTXTID:
+13-23
tools/perf/util/cs-etm.c
···
194194 * CS_ETM_PIDFMT_CTXTID2: CONTEXTIDR_EL2 is traced.
195195 * CS_ETM_PIDFMT_NONE: No context IDs
196196 *
197197- * It's possible that the two bits ETM_OPT_CTXTID and ETM_OPT_CTXTID2
197197+ * It's possible that the two format attributes 'contextid1' and 'contextid2'
198198 * are enabled at the same time when the session runs on an EL2 kernel.
199199 * This means the CONTEXTIDR_EL1 and CONTEXTIDR_EL2 both will be
200200 * recorded in the trace data, the tool will selectively use
···
210210	if (metadata[CS_ETM_MAGIC] == __perf_cs_etmv3_magic) {
211211		val = metadata[CS_ETM_ETMCR];
212212		/* CONTEXTIDR is traced */
213213-		if (val & BIT(ETM_OPT_CTXTID))
213213+		if (val & ETMCR_CTXTID)
214214			return CS_ETM_PIDFMT_CTXTID;
215215	} else {
216216		val = metadata[CS_ETMV4_TRCCONFIGR];
217217		/* CONTEXTIDR_EL2 is traced */
218218-		if (val & (BIT(ETM4_CFG_BIT_VMID) | BIT(ETM4_CFG_BIT_VMID_OPT)))
218218+		if (val & (TRCCONFIGR_VMID | TRCCONFIGR_VMIDOPT))
219219			return CS_ETM_PIDFMT_CTXTID2;
220220		/* CONTEXTIDR_EL1 is traced */
221221-		else if (val & BIT(ETM4_CFG_BIT_CTXTID))
221221+		else if (val & TRCCONFIGR_CID)
222222			return CS_ETM_PIDFMT_CTXTID;
223223	}
224224
···
29142914	return 0;
29152915}
29162916
29172917-static int cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm)
29172917+static void cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm)
29182918{
29192919-	struct evsel *evsel;
29202920-	struct evlist *evlist = etm->session->evlist;
29192919+	/* Take first ETM as all options will be the same for all ETMs */
29202920+	u64 *metadata = etm->metadata[0];
29212921
29222922	/* Override timeless mode with user input from --itrace=Z */
29232923	if (etm->synth_opts.timeless_decoding) {
29242924		etm->timeless_decoding = true;
29252925-		return 0;
29252925+		return;
29262926	}
29272927
29282928-	/*
29292929-	 * Find the cs_etm evsel and look at what its timestamp setting was
29302930-	 */
29312931-	evlist__for_each_entry(evlist, evsel)
29322932-		if (cs_etm__evsel_is_auxtrace(etm->session, evsel)) {
29332933-			etm->timeless_decoding =
29342934-				!(evsel->core.attr.config & BIT(ETM_OPT_TS));
29352935-			return 0;
29362936-		}
29372937-
29382938-	pr_err("CS ETM: Couldn't find ETM evsel\n");
29392939-	return -EINVAL;
29282928+	if (metadata[CS_ETM_MAGIC] == __perf_cs_etmv3_magic)
29292929+		etm->timeless_decoding = !(metadata[CS_ETM_ETMCR] & ETMCR_TIMESTAMP_EN);
29302930+	else
29312931+		etm->timeless_decoding = !(metadata[CS_ETMV4_TRCCONFIGR] & TRCCONFIGR_TS);
29402932}
29412933
29422934/*
···
34913499	etm->auxtrace.evsel_is_auxtrace = cs_etm__evsel_is_auxtrace;
34923500	session->auxtrace = &etm->auxtrace;
34933501
34943494-	err = cs_etm__setup_timeless_decoding(etm);
34953495-	if (err)
34963496-		return err;
35023502+	cs_etm__setup_timeless_decoding(etm);
34973503
34983504	etm->tc.time_shift = tc->time_shift;
34993505	etm->tc.time_mult = tc->time_mult;
+15
tools/perf/util/cs-etm.h
···
230230/* CoreSight trace ID is currently the bottom 7 bits of the value */
231231#define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0)
232232
233233+/* ETMv4 CONFIGR register bits */
234234+#define TRCCONFIGR_BB BIT(3)
235235+#define TRCCONFIGR_CCI BIT(4)
236236+#define TRCCONFIGR_CID BIT(6)
237237+#define TRCCONFIGR_VMID BIT(7)
238238+#define TRCCONFIGR_TS BIT(11)
239239+#define TRCCONFIGR_RS BIT(12)
240240+#define TRCCONFIGR_VMIDOPT BIT(15)
241241+
242242+/* ETMv3 ETMCR register bits */
243243+#define ETMCR_CYC_ACC BIT(12)
244244+#define ETMCR_CTXTID BIT(14)
245245+#define ETMCR_TIMESTAMP_EN BIT(28)
246246+#define ETMCR_RETURN_STACK BIT(29)
247247+
233248int cs_etm__process_auxtrace_info(union perf_event *event,
234249				  struct perf_session *session);
235250void cs_etm_get_default_config(const struct perf_pmu *pmu, struct perf_event_attr *attr);
+1-1
tools/perf/util/disasm.c
···
384384	start = map__unmap_ip(map, sym->start);
385385	end = map__unmap_ip(map, sym->end);
386386
387387-	ops->target.outside = target.addr < start || target.addr > end;
387387+	ops->target.outside = target.addr < start || target.addr >= end;
388388
389389	/*
390390	 * FIXME: things like this in _cpp_lex_token (gcc's cc1 program):
···
3030# its policy for the relative importance of performance versus energy savings to
3131# the processor. See man CPUPOWER-SET(1) for additional details
3232#PERF_BIAS=
3333+
3434+# Set the Energy Performance Preference
3535+# Available options can be read from
3636+# /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences
3737+#EPP=
+6
tools/power/cpupower/cpupower.sh
···
2323	cpupower set -b "$PERF_BIAS" > /dev/null || ESTATUS=1
2424fi
2525
2626+# apply Energy Performance Preference
2727+if test -n "$EPP"
2828+then
2929+	cpupower set -e "$EPP" > /dev/null || ESTATUS=1
3030+fi
3131+
2632exit $ESTATUS
+5-1
tools/power/cpupower/utils/cpupower-set.c
···
124124	}
125125
126126	if (params.turbo_boost) {
127127-		ret = cpupower_set_turbo_boost(turbo_boost);
127127+		if (cpupower_cpu_info.vendor == X86_VENDOR_INTEL)
128128+			ret = cpupower_set_intel_turbo_boost(turbo_boost);
129129+		else
130130+			ret = cpupower_set_generic_turbo_boost(turbo_boost);
131131+
128132		if (ret)
129133			fprintf(stderr, "Error setting turbo-boost\n");
130134	}
+4-1
tools/power/cpupower/utils/helpers/helpers.h
···
104104/* cpuid and cpuinfo helpers **************************/
105105
106106int cpufreq_has_generic_boost_support(bool *active);
107107-int cpupower_set_turbo_boost(int turbo_boost);
107107+int cpupower_set_generic_turbo_boost(int turbo_boost);
108108
109109/* X86 ONLY ****************************************/
110110#if defined(__i386__) || defined(__x86_64__)
···
143143
144144int cpufreq_has_x86_boost_support(unsigned int cpu, int *support,
145145				  int *active, int *states);
146146+int cpupower_set_intel_turbo_boost(int turbo_boost);
146147
147148/* AMD P-State stuff **************************/
148149bool cpupower_amd_pstate_enabled(void);
···
189188
190189static inline int cpufreq_has_x86_boost_support(unsigned int cpu, int *support,
191190					       int *active, int *states)
191191+{ return -1; }
192192+static inline int cpupower_set_intel_turbo_boost(int turbo_boost)
192193{ return -1; }
193194
194195static inline bool cpupower_amd_pstate_enabled(void)
+39-2
tools/power/cpupower/utils/helpers/misc.c
···
1919{
2020	int ret;
2121	unsigned long long val;
2222+	char linebuf[MAX_LINE_LEN];
2323+	char path[SYSFS_PATH_MAX];
2424+	char *endp;
2225
2326	*support = *active = *states = 0;
2427
···
4542	}
4643	} else if (cpupower_cpu_info.caps & CPUPOWER_CAP_AMD_PSTATE) {
4744		amd_pstate_boost_init(cpu, support, active);
4848-	} else if (cpupower_cpu_info.caps & CPUPOWER_CAP_INTEL_IDA)
4545+	} else if (cpupower_cpu_info.caps & CPUPOWER_CAP_INTEL_IDA) {
4946		*support = *active = 1;
4747+
4848+		snprintf(path, sizeof(path), PATH_TO_CPU "intel_pstate/no_turbo");
4949+
5050+		if (!is_valid_path(path))
5151+			return 0;
5252+
5353+		if (cpupower_read_sysfs(path, linebuf, MAX_LINE_LEN) == 0)
5454+			return -1;
5555+
5656+		val = strtol(linebuf, &endp, 0);
5757+		if (endp == linebuf || errno == ERANGE)
5858+			return -1;
5959+
6060+		*active = !val;
6161+	}
6262+	return 0;
6363+}
6464+
6565+int cpupower_set_intel_turbo_boost(int turbo_boost)
6666+{
6767+	char path[SYSFS_PATH_MAX];
6868+	char linebuf[2] = {};
6969+
7070+	snprintf(path, sizeof(path), PATH_TO_CPU "intel_pstate/no_turbo");
7171+
7272+	/* Fallback to generic solution when intel_pstate driver not running */
7373+	if (!is_valid_path(path))
7474+		return cpupower_set_generic_turbo_boost(turbo_boost);
7575+
7676+	snprintf(linebuf, sizeof(linebuf), "%d", !turbo_boost);
7777+
7878+	if (cpupower_write_sysfs(path, linebuf, 2) <= 0)
7979+		return -1;
8080+
5081	return 0;
5182}
5283
···
311274	}
312275}
313276
314314-int cpupower_set_turbo_boost(int turbo_boost)
277277+int cpupower_set_generic_turbo_boost(int turbo_boost)
315278{
316279	char path[SYSFS_PATH_MAX];
317280	char linebuf[2] = {};
+2-2
tools/power/cpupower/utils/powercap-info.c
···
3838	printf(" (%s)\n", mode ? "enabled" : "disabled");
3939
4040	if (zone->has_power_uw)
4141-		printf(_("%sPower can be monitored in micro Jules\n"),
4141+		printf(_("%sPower can be monitored in micro Watts\n"),
4242			pr_prefix);
4343
4444	if (zone->has_energy_uj)
4545-		printf(_("%sPower can be monitored in micro Watts\n"),
4545+		printf(_("%sPower can be monitored in micro Jules\n"),
4646			pr_prefix);
4747
4848	printf("\n");
+1
tools/scripts/syscall.tbl
···
411411468 common file_getattr sys_file_getattr
412412469 common file_setattr sys_file_setattr
413413470 common listns sys_listns
414414+471 common rseq_slice_yield sys_rseq_slice_yield
+4-2
tools/testing/kunit/kunit_kernel.py
···
346346		return self.validate_config(build_dir)
347347
348348	def run_kernel(self, args: Optional[List[str]]=None, build_dir: str='', filter_glob: str='', filter: str='', filter_action: Optional[str]=None, timeout: Optional[int]=None) -> Iterator[str]:
349349-		if not args:
350350-			args = []
349349+		# Copy to avoid mutating the caller-supplied list. exec_tests() reuses
350350+		# the same args across repeated run_kernel() calls (e.g. --run_isolated),
351351+		# so appending to the original would accumulate stale flags on each call.
352352+		args = list(args) if args else []
351353		if filter_glob:
352354			args.append('kunit.filter_glob=' + filter_glob)
353355		if filter:
+26
tools/testing/kunit/kunit_tool_test.py
···
503503		with open(kunit_kernel.get_outfile_path(build_dir), 'rt') as outfile:
504504			self.assertEqual(outfile.read(), 'hi\nbye\n', msg='Missing some output')
505505
506506+	def test_run_kernel_args_not_mutated(self):
507507+		"""Verify run_kernel() copies args so callers can reuse them."""
508508+		start_calls = []
509509+
510510+		def fake_start(start_args, unused_build_dir):
511511+			start_calls.append(list(start_args))
512512+			return subprocess.Popen(['printf', 'KTAP version 1\n'],
513513+						text=True, stdout=subprocess.PIPE)
514514+
515515+		with tempfile.TemporaryDirectory('') as build_dir:
516516+			tree = kunit_kernel.LinuxSourceTree(build_dir,
517517+							   kunitconfig_paths=[os.devnull])
518518+			with mock.patch.object(tree._ops, 'start', side_effect=fake_start), \
519519+			     mock.patch.object(kunit_kernel.subprocess, 'call'):
520520+				kernel_args = ['mem=1G']
521521+				for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir,
522522+							 filter_glob='suite.test1'):
523523+					pass
524524+				for _ in tree.run_kernel(args=kernel_args, build_dir=build_dir,
525525+							 filter_glob='suite.test2'):
526526+					pass
527527+		self.assertEqual(kernel_args, ['mem=1G'],
528528+				 'run_kernel() should not modify caller args')
529529+		self.assertIn('kunit.filter_glob=suite.test1', start_calls[0])
530530+		self.assertIn('kunit.filter_glob=suite.test2', start_calls[1])
531531+
506532	def test_build_reconfig_no_config(self):
507533		with tempfile.TemporaryDirectory('') as build_dir:
508534			with open(kunit_kernel.get_kunitconfig_path(build_dir), 'w') as f:
···
409409		CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \
410410		LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \
411411		EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \
412412+		HOSTPKG_CONFIG=$(PKG_CONFIG) \
412413		OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ)
413414
414415# Get Clang's default includes on this system, as opposed to those seen by
···
363363	__sink(path[0]);
364364}
365365
366366+void dummy_calls(void)
367367+{
368368+	bpf_iter_num_new(0, 0, 0);
369369+	bpf_iter_num_next(0);
370370+	bpf_iter_num_destroy(0);
371371+}
372372+
373373+SEC("socket")
374374+__success
375375+__flag(BPF_F_TEST_STATE_FREQ)
376376+int spurious_precision_marks(void *ctx)
377377+{
378378+	struct bpf_iter_num iter;
379379+
380380+	asm volatile(
381381+		"r1 = %[iter];"
382382+		"r2 = 0;"
383383+		"r3 = 10;"
384384+		"call %[bpf_iter_num_new];"
385385+	"1:"
386386+		"r1 = %[iter];"
387387+		"call %[bpf_iter_num_next];"
388388+		"if r0 == 0 goto 4f;"
389389+		"r7 = *(u32 *)(r0 + 0);"
390390+		"r8 = *(u32 *)(r0 + 0);"
391391+		/* This jump can't be predicted and does not change r7 or r8 state. */
392392+		"if r7 > r8 goto 2f;"
393393+		/* Branch explored first ties r2 and r7 as having the same id. */
394394+		"r2 = r7;"
395395+		"goto 3f;"
396396+	"2:"
397397+		/* Branch explored second does not tie r2 and r7 but has a function call. */
398398+		"call %[bpf_get_prandom_u32];"
399399+	"3:"
400400+		/*
401401+		 * A checkpoint.
402402+		 * When first branch is explored, this would inject linked registers
403403+		 * r2 and r7 into the jump history.
404404+		 * When second branch is explored, this would be a cache hit point,
405405+		 * triggering propagate_precision().
406406+		 */
407407+		"if r7 <= 42 goto +0;"
408408+		/*
409409+		 * Mark r7 as precise using an if condition that is always true.
410410+		 * When reached via the second branch, this triggered a bug in the backtrack_insn()
411411+		 * because r2 (tied to r7) was propagated as precise to a call.
412412+		 */
413413+		"if r7 <= 0xffffFFFF goto +0;"
414414+		"goto 1b;"
415415+	"4:"
416416+		"r1 = %[iter];"
417417+		"call %[bpf_iter_num_destroy];"
418418+		:
419419+		: __imm_ptr(iter),
420420+		  __imm(bpf_iter_num_new),
421421+		  __imm(bpf_iter_num_next),
422422+		  __imm(bpf_iter_num_destroy),
423423+		  __imm(bpf_get_prandom_u32)
424424+		: __clobber_common, "r7", "r8"
425425+	);
426426+
427427+	return 0;
428428+}
429429+
366430char _license[] SEC("license") = "GPL";
···
598598        if unit_set:
599599            assert required[usage].contains(field)
600600
601601-    def test_prop_direct(self):
602602-        """
603603-        Todo: Verify that INPUT_PROP_DIRECT is set on display devices.
604604-        """
605605-        pass
606606-
607607-    def test_prop_pointer(self):
608608-        """
609609-        Todo: Verify that INPUT_PROP_POINTER is set on opaque devices.
610610-        """
611611-        pass
612612-
613601
614602class PenTabletTest(BaseTest.TestTablet):
615603    def assertName(self, uhdev):
···
664676        self.sync_and_assert_events(
665677            uhdev.event(130, 240, pressure=0), [], auto_syn=False, strict=True
666678        )
679679+
680680+    def test_prop_pointer(self):
681681+        """
682682+        Verify that INPUT_PROP_POINTER is set and INPUT_PROP_DIRECT
683683+        is not set on opaque devices.
684684+        """
685685+        evdev = self.uhdev.get_evdev()
686686+        assert libevdev.INPUT_PROP_POINTER in evdev.properties
687687+        assert libevdev.INPUT_PROP_DIRECT not in evdev.properties
667688
668689
669690class TestOpaqueCTLTablet(TestOpaqueTablet):
···
859862        )
860863
861864
862862-class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest):
865865+class DirectTabletTest():
866866+    def test_prop_direct(self):
867867+        """
868868+        Verify that INPUT_PROP_DIRECT is set and INPUT_PROP_POINTER
869869+        is not set on display devices.
870870+        """
871871+        evdev = self.uhdev.get_evdev()
872872+        assert libevdev.INPUT_PROP_DIRECT in evdev.properties
873873+        assert libevdev.INPUT_PROP_POINTER not in evdev.properties
874874+
875875+
876876+class TestDTH2452Tablet(test_multitouch.BaseTest.TestMultitouch, TouchTabletTest, DirectTabletTest):
863877    ContactIds = namedtuple("ContactIds", "contact_id, tracking_id, slot_num")
864878
865879    def create_device(self):
···
2828	kci_test_fdb_get
2929	kci_test_fdb_del
3030	kci_test_neigh_get
3131+	kci_test_neigh_update
3132	kci_test_bridge_parent_id
3233	kci_test_address_proto
3334	kci_test_enslave_bonding
···
11591158	fi
11601159
11611160	end_test "PASS: neigh get"
11611161+}
11621162+
11631163+kci_test_neigh_update()
11641164+{
11651165+	dstip=10.0.2.4
11661166+	dstmac=de:ad:be:ef:13:37
11671167+	local ret=0
11681168+
11691169+	for proxy in "" "proxy" ; do
11701170+		# add a neighbour entry without any flags
11711171+		run_cmd ip neigh add $proxy $dstip dev "$devdummy" lladdr $dstmac nud permanent
11721172+		run_cmd_grep $dstip ip neigh show $proxy
11731173+		run_cmd_grep_fail "$dstip dev $devdummy .*\(managed\|use\|router\|extern\)" ip neigh show $proxy
11741174+
11751175+		# set the extern_learn flag, but no other
11761176+		run_cmd ip neigh change $proxy $dstip dev "$devdummy" extern_learn
11771177+		run_cmd_grep "$dstip dev $devdummy .* extern_learn" ip neigh show $proxy
11781178+		run_cmd_grep_fail "$dstip dev $devdummy .* \(managed\|use\|router\)" ip neigh show $proxy
11791179+
11801180+		# flags are reset when not provided
11811181+		run_cmd ip neigh change $proxy $dstip dev "$devdummy"
11821182+		run_cmd_grep $dstip ip neigh show $proxy
11831183+		run_cmd_grep_fail "$dstip dev $devdummy .* extern_learn" ip neigh show $proxy
11841184+
11851185+		# add a protocol
11861186+		run_cmd ip neigh change $proxy $dstip dev "$devdummy" protocol boot
11871187+		run_cmd_grep "$dstip dev $devdummy .* proto boot" ip neigh show $proxy
11881188+
11891189+		# protocol is retained when not provided
11901190+		run_cmd ip neigh change $proxy $dstip dev "$devdummy"
11911191+		run_cmd_grep "$dstip dev $devdummy .* proto boot" ip neigh show $proxy
11921192+
11931193+		# change protocol
11941194+		run_cmd ip neigh change $proxy $dstip dev "$devdummy" protocol static
11951195+		run_cmd_grep "$dstip dev $devdummy .* proto static" ip neigh show $proxy
11961196+
11971197+		# also check an extended flag for non-proxy neighs
11981198+		if [ "$proxy" = "" ]; then
11991199+			run_cmd ip neigh change $proxy $dstip dev "$devdummy" managed
12001200+			run_cmd_grep "$dstip dev $devdummy managed" ip neigh show $proxy
12011201+
12021202+			run_cmd ip neigh change $proxy $dstip dev "$devdummy" lladdr $dstmac
12031203+			run_cmd_grep_fail "$dstip dev $devdummy managed" ip neigh show $proxy
12041204+		fi
12051205+
12061206+		run_cmd ip neigh del $proxy $dstip dev "$devdummy"
12071207+	done
12081208+
12091209+	if [ $ret -ne 0 ];then
12101210+		end_test "FAIL: neigh update"
12111211+		return 1
12121212+	fi
12131213+
12141214+	end_test "PASS: neigh update"
11621215}
11631216
11641217kci_test_bridge_parent_id()
···
11#include <asm/ppc_asm.h>
22
33-FUNC_START(enter_vmx_usercopy)
44-	li r3,1
55-	blr
66-
77-FUNC_START(exit_vmx_usercopy)
88-	li r3,0
99-	blr
1010-
113FUNC_START(enter_vmx_ops)
124	li r3,1
135	blr