···
 	The algorithm_params file is write-only and is used to setup
 	compression algorithm parameters.
 
-What:		/sys/block/zram<id>/writeback_compressed
+What:		/sys/block/zram<id>/compressed_writeback
 Date:		December 2025
 Contact:	Richard Chang <richardycc@google.com>
 Description:
-		The writeback_compressed device atrribute toggles compressed
+		The compressed_writeback device attribute toggles compressed
 		writeback feature.
 
 What:		/sys/block/zram<id>/writeback_batch_size
+3-3
Documentation/admin-guide/blockdev/zram.rst
···
 writeback_limit_enable	RW	show and set writeback_limit feature
 writeback_batch_size	RW	show and set maximum number of in-flight
 				writeback operations
-writeback_compressed	RW	show and set compressed writeback feature
+compressed_writeback	RW	show and set compressed writeback feature
 comp_algorithm		RW	show and change the compression algorithm
 algorithm_params	WO	setup compression algorithm parameters
 compact			WO	trigger memory compaction
···
 By default zram stores written back pages in decompressed (raw) form, which
 means that writeback operation involves decompression of the page before
 writing it to the backing device. This behavior can be changed by enabling
-`writeback_compressed` feature, which causes zram to write compressed pages
+`compressed_writeback` feature, which causes zram to write compressed pages
 to the backing device, thus avoiding decompression overhead. To enable
 this feature, execute::
 
-	$ echo yes > /sys/block/zramX/writeback_compressed
+	$ echo yes > /sys/block/zramX/compressed_writeback
 
 Note that this feature should be configured before the `zramX` device is
 initialized.
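The ordering constraint above can be made concrete with a short shell sketch. This is an illustrative sequence, not from the patch: the device name `zram0`, backing device `/dev/sdb1`, and the `4G` size are assumptions, and `backing_dev` must also be configured before initialization.

```shell
# Hypothetical zram setup order: toggle compressed_writeback before the
# device is initialized, i.e. before disksize is written.
echo /dev/sdb1 > /sys/block/zram0/backing_dev           # assumed backing device
echo yes > /sys/block/zram0/compressed_writeback        # must precede disksize
echo 4G > /sys/block/zram0/disksize                     # initializes the device
```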
+3
Documentation/admin-guide/kernel-parameters.txt
···
 			p = USB_QUIRK_SHORT_SET_ADDRESS_REQ_TIMEOUT
 			    (Reduce timeout of the SET_ADDRESS
 			    request from 5000 ms to 500 ms);
+			q = USB_QUIRK_FORCE_ONE_CONFIG (Device
+			    claims zero configurations,
+			    forcing to 1);
 			Example: quirks=0781:5580:bk,0a5c:5834:gij
 
 	usbhid.mousepoll=
+2
Documentation/dev-tools/kunit/run_wrapper.rst
···
 - ``--list_tests_attr``: If set, lists all tests that will be run and all of their
   attributes.
 
+- ``--list_suites``: If set, lists all suites that will be run.
+
 Command-line completion
 ==============================
···
     offset from voltage set to regulator.
 
   regulator-uv-protection-microvolt:
-    description: Set over under voltage protection limit. This is a limit where
+    description: Set under voltage protection limit. This is a limit where
       hardware performs emergency shutdown. Zero can be passed to disable
       protection and value '1' indicates that protection should be enabled but
       limit setting can be omitted. Limit is given as microvolt offset from
···
     is given as microvolt offset from voltage set to regulator.
 
   regulator-uv-warn-microvolt:
-    description: Set over under voltage warning limit. This is a limit where
+    description: Set under voltage warning limit. This is a limit where
       hardware is assumed still to be functional but approaching limit where
       it gets damaged. Recovery actions should be initiated. Zero can be passed
       to disable detection and value '1' indicates that detection should
···
 When a driver is removed, the list of devices that it supports is
 iterated over, and the driver's remove callback is called for each
 one. The device is removed from that list and the symlinks removed.
+
+
+Driver Override
+~~~~~~~~~~~~~~~
+
+Userspace may override the standard matching by writing a driver name to
+a device's ``driver_override`` sysfs attribute. When set, only a driver
+whose name matches the override will be considered during binding. This
+bypasses all bus-specific matching (OF, ACPI, ID tables, etc.).
+
+The override may be cleared by writing an empty string, which returns
+the device to standard matching rules. Writing to ``driver_override``
+does not automatically unbind the device from its current driver or
+make any attempt to load the specified driver.
+
+Buses opt into this mechanism by setting the ``driver_override`` flag in
+their ``struct bus_type``::
+
+	const struct bus_type example_bus_type = {
+		...
+		.driver_override = true,
+	};
+
+When the flag is set, the driver core automatically creates the
+``driver_override`` sysfs attribute for every device on that bus.
+
+The bus's ``match()`` callback should check the override before performing
+its own matching, using ``device_match_driver_override()``::
+
+	static int example_match(struct device *dev, const struct device_driver *drv)
+	{
+		int ret;
+
+		ret = device_match_driver_override(dev, drv);
+		if (ret >= 0)
+			return ret;
+
+		/* Fall through to bus-specific matching... */
+	}
+
+``device_match_driver_override()`` returns > 0 if the override matches
+the given driver, 0 if the override is set but does not match, or < 0 if
+no override is set at all.
+
+Additional helpers are available:
+
+- ``device_set_driver_override()`` - set or clear the override from kernel code.
+- ``device_has_driver_override()`` - check whether an override is set.
···
 	CONFIG_DEBUG_INFO_BTF=y
 	CONFIG_BPF_JIT_ALWAYS_ON=y
 	CONFIG_BPF_JIT_DEFAULT_ON=y
-	CONFIG_PAHOLE_HAS_BTF_TAG=y
 
 sched_ext is used only when the BPF scheduler is loaded and running.
···
 However, when the BPF scheduler is loaded and ``SCX_OPS_SWITCH_PARTIAL`` is
 set in ``ops->flags``, only tasks with the ``SCHED_EXT`` policy are scheduled
 by sched_ext, while tasks with ``SCHED_NORMAL``, ``SCHED_BATCH`` and
-``SCHED_IDLE`` policies are scheduled by the fair-class scheduler.
+``SCHED_IDLE`` policies are scheduled by the fair-class scheduler which has
+higher sched_class precedence than ``SCHED_EXT``.
 
 Terminating the sched_ext scheduler program, triggering `SysRq-S`, or
 detection of any internal error including stalled runnable tasks aborts the
···
   The functions prefixed with ``scx_bpf_`` can be called from the BPF
   scheduler.
 
+* ``kernel/sched/ext_idle.c`` contains the built-in idle CPU selection policy.
+
 * ``tools/sched_ext/`` hosts example BPF scheduler implementations.
 
   * ``scx_simple[.bpf].c``: Minimal global FIFO scheduler example using a
···
   * ``scx_qmap[.bpf].c``: A multi-level FIFO scheduler supporting five
     levels of priority implemented with ``BPF_MAP_TYPE_QUEUE``.
 
+  * ``scx_central[.bpf].c``: A central FIFO scheduler where all scheduling
+    decisions are made on one CPU, demonstrating ``LOCAL_ON`` dispatching,
+    tickless operation, and kthread preemption.
+
+  * ``scx_cpu0[.bpf].c``: A scheduler that queues all tasks to a shared DSQ
+    and only dispatches them on CPU0 in FIFO order. Useful for testing bypass
+    behavior.
+
+  * ``scx_flatcg[.bpf].c``: A flattened cgroup hierarchy scheduler
+    implementing hierarchical weight-based cgroup CPU control by compounding
+    each cgroup's share at every level into a single flat scheduling layer.
+
+  * ``scx_pair[.bpf].c``: A core-scheduling example that always makes
+    sibling CPU pairs execute tasks from the same CPU cgroup.
+
+  * ``scx_sdt[.bpf].c``: A variation of ``scx_simple`` demonstrating BPF
+    arena memory management for per-task data.
+
+  * ``scx_userland[.bpf].c``: A minimal scheduler demonstrating user space
+    scheduling. Tasks with CPU affinity are direct-dispatched in FIFO order;
+    all others are scheduled in user space by a simple vruntime scheduler.
+
 ABI Instability
 ===============
 
 The APIs provided by sched_ext to BPF scheduler programs have no stability
 guarantees. This includes the ops table callbacks and constants defined in
 ``include/linux/sched/ext.h``, as well as the ``scx_bpf_`` kfuncs defined in
-``kernel/sched/ext.c``.
+``kernel/sched/ext.c`` and ``kernel/sched/ext_idle.c``.
 
 While we will attempt to provide a relatively stable API surface when
 possible, they are subject to change without warning between kernel
+107-99
Documentation/virt/kvm/api.rst
···
 
 The valid bits in cap.args[0] are:
 
-=================================== ============================================
- KVM_X86_QUIRK_LINT0_REENABLED      By default, the reset value for the LVT
-                                    LINT0 register is 0x700 (APIC_MODE_EXTINT).
-                                    When this quirk is disabled, the reset value
-                                    is 0x10000 (APIC_LVT_MASKED).
+======================================== ================================================
+KVM_X86_QUIRK_LINT0_REENABLED            By default, the reset value for the LVT
+                                         LINT0 register is 0x700 (APIC_MODE_EXTINT).
+                                         When this quirk is disabled, the reset value
+                                         is 0x10000 (APIC_LVT_MASKED).
 
- KVM_X86_QUIRK_CD_NW_CLEARED        By default, KVM clears CR0.CD and CR0.NW on
-                                    AMD CPUs to workaround buggy guest firmware
-                                    that runs in perpetuity with CR0.CD, i.e.
-                                    with caches in "no fill" mode.
+KVM_X86_QUIRK_CD_NW_CLEARED              By default, KVM clears CR0.CD and CR0.NW on
+                                         AMD CPUs to workaround buggy guest firmware
+                                         that runs in perpetuity with CR0.CD, i.e.
+                                         with caches in "no fill" mode.
 
-                                    When this quirk is disabled, KVM does not
-                                    change the value of CR0.CD and CR0.NW.
+                                         When this quirk is disabled, KVM does not
+                                         change the value of CR0.CD and CR0.NW.
 
- KVM_X86_QUIRK_LAPIC_MMIO_HOLE      By default, the MMIO LAPIC interface is
-                                    available even when configured for x2APIC
-                                    mode. When this quirk is disabled, KVM
-                                    disables the MMIO LAPIC interface if the
-                                    LAPIC is in x2APIC mode.
+KVM_X86_QUIRK_LAPIC_MMIO_HOLE            By default, the MMIO LAPIC interface is
+                                         available even when configured for x2APIC
+                                         mode. When this quirk is disabled, KVM
+                                         disables the MMIO LAPIC interface if the
+                                         LAPIC is in x2APIC mode.
 
- KVM_X86_QUIRK_OUT_7E_INC_RIP       By default, KVM pre-increments %rip before
-                                    exiting to userspace for an OUT instruction
-                                    to port 0x7e. When this quirk is disabled,
-                                    KVM does not pre-increment %rip before
-                                    exiting to userspace.
+KVM_X86_QUIRK_OUT_7E_INC_RIP             By default, KVM pre-increments %rip before
+                                         exiting to userspace for an OUT instruction
+                                         to port 0x7e. When this quirk is disabled,
+                                         KVM does not pre-increment %rip before
+                                         exiting to userspace.
 
- KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT When this quirk is disabled, KVM sets
-                                    CPUID.01H:ECX[bit 3] (MONITOR/MWAIT) if
-                                    IA32_MISC_ENABLE[bit 18] (MWAIT) is set.
-                                    Additionally, when this quirk is disabled,
-                                    KVM clears CPUID.01H:ECX[bit 3] if
-                                    IA32_MISC_ENABLE[bit 18] is cleared.
+KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT       When this quirk is disabled, KVM sets
+                                         CPUID.01H:ECX[bit 3] (MONITOR/MWAIT) if
+                                         IA32_MISC_ENABLE[bit 18] (MWAIT) is set.
+                                         Additionally, when this quirk is disabled,
+                                         KVM clears CPUID.01H:ECX[bit 3] if
+                                         IA32_MISC_ENABLE[bit 18] is cleared.
 
- KVM_X86_QUIRK_FIX_HYPERCALL_INSN   By default, KVM rewrites guest
-                                    VMMCALL/VMCALL instructions to match the
-                                    vendor's hypercall instruction for the
-                                    system. When this quirk is disabled, KVM
-                                    will no longer rewrite invalid guest
-                                    hypercall instructions. Executing the
-                                    incorrect hypercall instruction will
-                                    generate a #UD within the guest.
+KVM_X86_QUIRK_FIX_HYPERCALL_INSN         By default, KVM rewrites guest
+                                         VMMCALL/VMCALL instructions to match the
+                                         vendor's hypercall instruction for the
+                                         system. When this quirk is disabled, KVM
+                                         will no longer rewrite invalid guest
+                                         hypercall instructions. Executing the
+                                         incorrect hypercall instruction will
+                                         generate a #UD within the guest.
 
-KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS By default, KVM emulates MONITOR/MWAIT (if
-                                    they are intercepted) as NOPs regardless of
-                                    whether or not MONITOR/MWAIT are supported
-                                    according to guest CPUID. When this quirk
-                                    is disabled and KVM_X86_DISABLE_EXITS_MWAIT
-                                    is not set (MONITOR/MWAIT are intercepted),
-                                    KVM will inject a #UD on MONITOR/MWAIT if
-                                    they're unsupported per guest CPUID. Note,
-                                    KVM will modify MONITOR/MWAIT support in
-                                    guest CPUID on writes to MISC_ENABLE if
-                                    KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
-                                    disabled.
+KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS      By default, KVM emulates MONITOR/MWAIT (if
+                                         they are intercepted) as NOPs regardless of
+                                         whether or not MONITOR/MWAIT are supported
+                                         according to guest CPUID. When this quirk
+                                         is disabled and KVM_X86_DISABLE_EXITS_MWAIT
+                                         is not set (MONITOR/MWAIT are intercepted),
+                                         KVM will inject a #UD on MONITOR/MWAIT if
+                                         they're unsupported per guest CPUID. Note,
+                                         KVM will modify MONITOR/MWAIT support in
+                                         guest CPUID on writes to MISC_ENABLE if
+                                         KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
+                                         disabled.
 
-KVM_X86_QUIRK_SLOT_ZAP_ALL          By default, for KVM_X86_DEFAULT_VM VMs, KVM
-                                    invalidates all SPTEs in all memslots and
-                                    address spaces when a memslot is deleted or
-                                    moved. When this quirk is disabled (or the
-                                    VM type isn't KVM_X86_DEFAULT_VM), KVM only
-                                    ensures the backing memory of the deleted
-                                    or moved memslot isn't reachable, i.e KVM
-                                    _may_ invalidate only SPTEs related to the
-                                    memslot.
+KVM_X86_QUIRK_SLOT_ZAP_ALL               By default, for KVM_X86_DEFAULT_VM VMs, KVM
+                                         invalidates all SPTEs in all memslots and
+                                         address spaces when a memslot is deleted or
+                                         moved. When this quirk is disabled (or the
+                                         VM type isn't KVM_X86_DEFAULT_VM), KVM only
+                                         ensures the backing memory of the deleted
+                                         or moved memslot isn't reachable, i.e KVM
+                                         _may_ invalidate only SPTEs related to the
+                                         memslot.
 
-KVM_X86_QUIRK_STUFF_FEATURE_MSRS    By default, at vCPU creation, KVM sets the
-                                    vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
-                                    MSR_IA32_ARCH_CAPABILITIES (0x10a),
-                                    MSR_PLATFORM_INFO (0xce), and all VMX MSRs
-                                    (0x480..0x492) to the maximal capabilities
-                                    supported by KVM. KVM also sets
-                                    MSR_IA32_UCODE_REV (0x8b) to an arbitrary
-                                    value (which is different for Intel vs.
-                                    AMD). Lastly, when guest CPUID is set (by
-                                    userspace), KVM modifies select VMX MSR
-                                    fields to force consistency between guest
-                                    CPUID and L2's effective ISA. When this
-                                    quirk is disabled, KVM zeroes the vCPU's MSR
-                                    values (with two exceptions, see below),
-                                    i.e. treats the feature MSRs like CPUID
-                                    leaves and gives userspace full control of
-                                    the vCPU model definition. This quirk does
-                                    not affect VMX MSRs CR0/CR4_FIXED1 (0x487
-                                    and 0x489), as KVM does now allow them to
-                                    be set by userspace (KVM sets them based on
-                                    guest CPUID, for safety purposes).
+KVM_X86_QUIRK_STUFF_FEATURE_MSRS         By default, at vCPU creation, KVM sets the
+                                         vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
+                                         MSR_IA32_ARCH_CAPABILITIES (0x10a),
+                                         MSR_PLATFORM_INFO (0xce), and all VMX MSRs
+                                         (0x480..0x492) to the maximal capabilities
+                                         supported by KVM. KVM also sets
+                                         MSR_IA32_UCODE_REV (0x8b) to an arbitrary
+                                         value (which is different for Intel vs.
+                                         AMD). Lastly, when guest CPUID is set (by
+                                         userspace), KVM modifies select VMX MSR
+                                         fields to force consistency between guest
+                                         CPUID and L2's effective ISA. When this
+                                         quirk is disabled, KVM zeroes the vCPU's MSR
+                                         values (with two exceptions, see below),
+                                         i.e. treats the feature MSRs like CPUID
+                                         leaves and gives userspace full control of
+                                         the vCPU model definition. This quirk does
+                                         not affect VMX MSRs CR0/CR4_FIXED1 (0x487
+                                         and 0x489), as KVM does not allow them to
+                                         be set by userspace (KVM sets them based on
+                                         guest CPUID, for safety purposes).
 
-KVM_X86_QUIRK_IGNORE_GUEST_PAT      By default, on Intel platforms, KVM ignores
-                                    guest PAT and forces the effective memory
-                                    type to WB in EPT. The quirk is not available
-                                    on Intel platforms which are incapable of
-                                    safely honoring guest PAT (i.e., without CPU
-                                    self-snoop, KVM always ignores guest PAT and
-                                    forces effective memory type to WB). It is
-                                    also ignored on AMD platforms or, on Intel,
-                                    when a VM has non-coherent DMA devices
-                                    assigned; KVM always honors guest PAT in
-                                    such case. The quirk is needed to avoid
-                                    slowdowns on certain Intel Xeon platforms
-                                    (e.g. ICX, SPR) where self-snoop feature is
-                                    supported but UC is slow enough to cause
-                                    issues with some older guests that use
-                                    UC instead of WC to map the video RAM.
-                                    Userspace can disable the quirk to honor
-                                    guest PAT if it knows that there is no such
-                                    guest software, for example if it does not
-                                    expose a bochs graphics device (which is
-                                    known to have had a buggy driver).
-=================================== ============================================
+KVM_X86_QUIRK_IGNORE_GUEST_PAT           By default, on Intel platforms, KVM ignores
+                                         guest PAT and forces the effective memory
+                                         type to WB in EPT. The quirk is not available
+                                         on Intel platforms which are incapable of
+                                         safely honoring guest PAT (i.e., without CPU
+                                         self-snoop, KVM always ignores guest PAT and
+                                         forces effective memory type to WB). It is
+                                         also ignored on AMD platforms or, on Intel,
+                                         when a VM has non-coherent DMA devices
+                                         assigned; KVM always honors guest PAT in
+                                         such case. The quirk is needed to avoid
+                                         slowdowns on certain Intel Xeon platforms
+                                         (e.g. ICX, SPR) where self-snoop feature is
+                                         supported but UC is slow enough to cause
+                                         issues with some older guests that use
+                                         UC instead of WC to map the video RAM.
+                                         Userspace can disable the quirk to honor
+                                         guest PAT if it knows that there is no such
+                                         guest software, for example if it does not
+                                         expose a bochs graphics device (which is
+                                         known to have had a buggy driver).
+
+KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM By default, KVM relaxes the consistency
+                                         check for GUEST_IA32_DEBUGCTL in vmcs12
+                                         to allow FREEZE_IN_SMM to be set. When
+                                         this quirk is disabled, KVM requires this
+                                         bit to be cleared. Note that the vmcs02
+                                         bit is still completely controlled by the
+                                         host, regardless of the quirk setting.
+======================================== ================================================
 
 7.32 KVM_CAP_MAX_VCPU_ID
 ------------------------
+2
Documentation/virt/kvm/locking.rst
···
 
 - kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock
 
+- vcpu->mutex is taken outside kvm->slots_lock and kvm->slots_arch_lock
+
 - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
   them together is quite rare.
 
+33-21
MAINTAINERS
···
 ASYMMETRIC KEYS
 M:	David Howells <dhowells@redhat.com>
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 L:	keyrings@vger.kernel.org
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
···
 ASYMMETRIC KEYS - ECDSA
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 R:	Stefan Berger <stefanb@linux.ibm.com>
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
···
 ASYMMETRIC KEYS - GOST
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 L:	linux-crypto@vger.kernel.org
 S:	Odd fixes
 F:	crypto/ecrdsa*
 
 ASYMMETRIC KEYS - RSA
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	crypto/rsa*
···
 F:	drivers/gpu/drm/tiny/hx8357d.c
 
 DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE
-M:	Deepak Rawat <drawat.floss@gmail.com>
+M:	Dexuan Cui <decui@microsoft.com>
+M:	Long Li <longli@microsoft.com>
+M:	Saurabh Sengar <ssengar@linux.microsoft.com>
 L:	linux-hyperv@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
···
 F:	include/uapi/drm/lima_drm.h
 
 DRM DRIVERS FOR LOONGSON
-M:	Sui Jingfeng <suijingfeng@loongson.cn>
 L:	dri-devel@lists.freedesktop.org
-S:	Supported
+S:	Orphan
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/loongson/
···
 
 MEDIATEK T7XX 5G WWAN MODEM DRIVER
 M:	Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
-R:	Chiranjeevi Rapolu <chiranjeevi.rapolu@linux.intel.com>
 R:	Liu Haijun <haijun.liu@mediatek.com>
 R:	Ricardo Martinez <ricardo.martinez@linux.intel.com>
 L:	netdev@vger.kernel.org
···
 MEMORY MANAGEMENT - CORE
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Mike Rapoport <rppt@kernel.org>
···
 MEMORY MANAGEMENT - MISC
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Mike Rapoport <rppt@kernel.org>
···
 R:	Michal Hocko <mhocko@kernel.org>
 R:	Qi Zheng <zhengqi.arch@bytedance.com>
 R:	Shakeel Butt <shakeel.butt@linux.dev>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	mm/vmscan.c
···
 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Rik van Riel <riel@surriel.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Vlastimil Babka <vbabka@kernel.org>
···
 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@kernel.org>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Zi Yan <ziy@nvidia.com>
 R:	Baolin Wang <baolin.wang@linux.alibaba.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
···
 
 MEMORY MANAGEMENT - RUST
 M:	Alice Ryhl <aliceryhl@google.com>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 L:	linux-mm@kvack.org
 L:	rust-for-linux@vger.kernel.org
···
 MEMORY MAPPING
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Jann Horn <jannh@google.com>
 R:	Pedro Falcato <pfalcato@suse.de>
···
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Suren Baghdasaryan <surenb@google.com>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Shakeel Butt <shakeel.butt@linux.dev>
 L:	linux-mm@kvack.org
···
 MEMORY MAPPING - MADVISE (MEMORY ADVICE)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Liam R. Howlett <Liam.Howlett@oracle.com>
-M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+M:	Lorenzo Stoakes <ljs@kernel.org>
 M:	David Hildenbrand <david@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Jann Horn <jannh@google.com>
···
 
 RADOS BLOCK DEVICE (RBD)
 M:	Ilya Dryomov <idryomov@gmail.com>
-R:	Dongsheng Yang <dongsheng.yang@easystack.cn>
+R:	Dongsheng Yang <dongsheng.yang@linux.dev>
 L:	ceph-devel@vger.kernel.org
 S:	Supported
 W:	http://ceph.com/
···
 L:	linux-wireless@vger.kernel.org
 S:	Orphan
 F:	drivers/net/wireless/rsi/
+
+RELAY
+M:	Andrew Morton <akpm@linux-foundation.org>
+M:	Jens Axboe <axboe@kernel.dk>
+M:	Jason Xing <kernelxing@tencent.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	Documentation/filesystems/relay.rst
+F:	include/linux/relay.h
+F:	kernel/relay.c
 
 REGISTER MAP ABSTRACTION
 M:	Mark Brown <broonie@kernel.org>
···
 RUST [ALLOC]
 M:	Danilo Krummrich <dakr@kernel.org>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+R:	Lorenzo Stoakes <ljs@kernel.org>
 R:	Vlastimil Babka <vbabka@kernel.org>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Uladzislau Rezki <urezki@gmail.com>
···
 SLAB ALLOCATOR
 M:	Vlastimil Babka <vbabka@kernel.org>
+M:	Harry Yoo <harry.yoo@oracle.com>
 M:	Andrew Morton <akpm@linux-foundation.org>
+R:	Hao Li <hao.li@linux.dev>
 R:	Christoph Lameter <cl@gentwo.org>
 R:	David Rientjes <rientjes@google.com>
 R:	Roman Gushchin <roman.gushchin@linux.dev>
-R:	Harry Yoo <harry.yoo@oracle.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
···
 F:	drivers/pinctrl/spear/
 
 SPI NOR SUBSYSTEM
-M:	Tudor Ambarus <tudor.ambarus@linaro.org>
 M:	Pratyush Yadav <pratyush@kernel.org>
 M:	Michael Walle <mwalle@kernel.org>
+R:	Takahiro Kuwano <takahiro.kuwano@infineon.com>
 L:	linux-mtd@lists.infradead.org
 S:	Maintained
 W:	http://www.linux-mtd.infradead.org/
···
 F:	include/net/pkt_sched.h
 F:	include/net/sch_priv.h
 F:	include/net/tc_act/
+F:	include/net/tc_wrapper.h
 F:	include/uapi/linux/pkt_cls.h
 F:	include/uapi/linux/pkt_sched.h
 F:	include/uapi/linux/tc_act/
+5-1
Makefile
···
 VERSION = 7
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
···
 export rust_common_flags := --edition=2021 \
 			    -Zbinary_dep_depinfo=y \
 			    -Astable_features \
+			    -Aunused_features \
 			    -Dnon_ascii_idents \
 			    -Dunsafe_op_in_unsafe_fn \
 			    -Wmissing_docs \
···
 # change __FILE__ to the relative path to the source directory
 ifdef building_out_of_srctree
 KBUILD_CPPFLAGS += -fmacro-prefix-map=$(srcroot)/=
+ifeq ($(call rustc-option-yn, --remap-path-scope=macro),y)
+KBUILD_RUSTFLAGS += --remap-path-prefix=$(srcroot)/= --remap-path-scope=macro
+endif
 endif
 
 # include additional Makefiles when needed
···
 	/* Number of debug breakpoints/watchpoints for this CPU (minus 1) */
 	unsigned int debug_brps;
 	unsigned int debug_wrps;
+
+	/* Last vgic_irq part of the AP list recorded in an LR */
+	struct vgic_irq *last_lr_irq;
 };
 
 struct kvm_host_psci_config {
+9
arch/arm64/kernel/cpufeature.c
···
 	    !is_midr_in_range_list(has_vgic_v3))
 		return false;
 
+	/*
+	 * pKVM prevents late onlining of CPUs. This means that whatever
+	 * state the capability is in after deprivilege cannot be affected
+	 * by a new CPU booting -- this is guaranteed to be a CPU we have
+	 * already seen, and the cap is therefore unchanged.
+	 */
+	if (system_capabilities_finalized() && is_protected_kvm_enabled())
+		return cpus_have_final_cap(ARM64_HAS_ICH_HCR_EL2_TDIR);
+
 	if (is_kernel_in_hyp_mode())
 		res.a1 = read_sysreg_s(SYS_ICH_VTR_EL2);
 	else
+8
arch/arm64/kernel/pi/patch-scs.c
···
 		size -= 2;
 		break;
 
+	case DW_CFA_advance_loc4:
+		loc += *opcode++ * code_alignment_factor;
+		loc += (*opcode++ << 8) * code_alignment_factor;
+		loc += (*opcode++ << 16) * code_alignment_factor;
+		loc += (*opcode++ << 24) * code_alignment_factor;
+		size -= 4;
+		break;
+
 	case DW_CFA_def_cfa:
 	case DW_CFA_offset_extended:
 		size = skip_xleb128(&opcode, size);
···
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
 	u32 eoicount = FIELD_GET(ICH_HCR_EL2_EOIcount, cpuif->vgic_hcr);
-	struct vgic_irq *irq;
+	struct vgic_irq *irq = *host_data_ptr(last_lr_irq);
 
 	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
 
···
 	/*
 	 * EOIMode=0: use EOIcount to emulate deactivation. We are
 	 * guaranteed to deactivate in reverse order of the activation, so
-	 * just pick one active interrupt after the other in the ap_list,
-	 * and replay the deactivation as if the CPU was doing it. We also
-	 * rely on priority drop to have taken place, and the list to be
-	 * sorted by priority.
+	 * just pick one active interrupt after the other in the tail part
+	 * of the ap_list, past the LRs, and replay the deactivation as if
+	 * the CPU was doing it. We also rely on priority drop to have taken
+	 * place, and the list to be sorted by priority.
 	 */
-	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
+	list_for_each_entry_continue(irq, &vgic_cpu->ap_list_head, ap_list) {
 		u64 lr;
 
 		/*
···
-/* T4240 Interlaken LAC Portal device tree stub with 24 portals.
- *
- * Copyright 2012 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#address-cells = <0x1>;
-#size-cells = <0x1>;
-compatible = "fsl,interlaken-lac-portals";
-
-lportal0: lac-portal@0 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x0 0x1000>;
-};
-
-lportal1: lac-portal@1000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x1000 0x1000>;
-};
-
-lportal2: lac-portal@2000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x2000 0x1000>;
-};
-
-lportal3: lac-portal@3000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x3000 0x1000>;
-};
-
-lportal4: lac-portal@4000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x4000 0x1000>;
-};
-
-lportal5: lac-portal@5000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x5000 0x1000>;
-};
-
-lportal6: lac-portal@6000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x6000 0x1000>;
-};
-
-lportal7: lac-portal@7000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x7000 0x1000>;
-};
-
-lportal8: lac-portal@8000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x8000 0x1000>;
-};
-
-lportal9: lac-portal@9000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0x9000 0x1000>;
-};
-
-lportal10: lac-portal@A000 {
-	compatible = "fsl,interlaken-lac-portal-v1.0";
-	reg = <0xA000
0x1000>;9191-};9292-9393-lportal11: lac-portal@B000 {9494- compatible = "fsl,interlaken-lac-portal-v1.0";9595- reg = <0xB000 0x1000>;9696-};9797-9898-lportal12: lac-portal@C000 {9999- compatible = "fsl,interlaken-lac-portal-v1.0";100100- reg = <0xC000 0x1000>;101101-};102102-103103-lportal13: lac-portal@D000 {104104- compatible = "fsl,interlaken-lac-portal-v1.0";105105- reg = <0xD000 0x1000>;106106-};107107-108108-lportal14: lac-portal@E000 {109109- compatible = "fsl,interlaken-lac-portal-v1.0";110110- reg = <0xE000 0x1000>;111111-};112112-113113-lportal15: lac-portal@F000 {114114- compatible = "fsl,interlaken-lac-portal-v1.0";115115- reg = <0xF000 0x1000>;116116-};117117-118118-lportal16: lac-portal@10000 {119119- compatible = "fsl,interlaken-lac-portal-v1.0";120120- reg = <0x10000 0x1000>;121121-};122122-123123-lportal17: lac-portal@11000 {124124- compatible = "fsl,interlaken-lac-portal-v1.0";125125- reg = <0x11000 0x1000>;126126-};127127-128128-lportal18: lac-portal@1200 {129129- compatible = "fsl,interlaken-lac-portal-v1.0";130130- reg = <0x12000 0x1000>;131131-};132132-133133-lportal19: lac-portal@13000 {134134- compatible = "fsl,interlaken-lac-portal-v1.0";135135- reg = <0x13000 0x1000>;136136-};137137-138138-lportal20: lac-portal@14000 {139139- compatible = "fsl,interlaken-lac-portal-v1.0";140140- reg = <0x14000 0x1000>;141141-};142142-143143-lportal21: lac-portal@15000 {144144- compatible = "fsl,interlaken-lac-portal-v1.0";145145- reg = <0x15000 0x1000>;146146-};147147-148148-lportal22: lac-portal@16000 {149149- compatible = "fsl,interlaken-lac-portal-v1.0";150150- reg = <0x16000 0x1000>;151151-};152152-153153-lportal23: lac-portal@17000 {154154- compatible = "fsl,interlaken-lac-portal-v1.0";155155- reg = <0x17000 0x1000>;156156-};
-45
arch/powerpc/boot/dts/fsl/interlaken-lac.dtsi
···11-/*22- * T4 Interlaken Look-aside Controller (LAC) device tree stub33- *44- * Copyright 2012 Freescale Semiconductor Inc.55- *66- * Redistribution and use in source and binary forms, with or without77- * modification, are permitted provided that the following conditions are met:88- * * Redistributions of source code must retain the above copyright99- * notice, this list of conditions and the following disclaimer.1010- * * Redistributions in binary form must reproduce the above copyright1111- * notice, this list of conditions and the following disclaimer in the1212- * documentation and/or other materials provided with the distribution.1313- * * Neither the name of Freescale Semiconductor nor the1414- * names of its contributors may be used to endorse or promote products1515- * derived from this software without specific prior written permission.1616- *1717- *1818- * ALTERNATIVELY, this software may be distributed under the terms of the1919- * GNU General Public License ("GPL") as published by the Free Software2020- * Foundation, either version 2 of that License or (at your option) any2121- * later version.2222- *2323- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY2424- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2525- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2626- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2727- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2828- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2929- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND3030- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3131- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3232- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3333- */3434-3535-lac: lac@229000 {3636- compatible = "fsl,interlaken-lac";3737- reg = <0x229000 0x1000>;3838- interrupts = <16 2 1 18>;3939-};4040-4141-lac-hv@228000 {4242- compatible = "fsl,interlaken-lac-hv";4343- reg = <0x228000 0x1000>;4444- fsl,non-hv-node = <&lac>;4545-};
-43
arch/powerpc/boot/dts/fsl/pq3-mpic-message-B.dtsi
···11-/*22- * PQ3 MPIC Message (Group B) device tree stub [ controller @ offset 0x42400 ]33- *44- * Copyright 2012 Freescale Semiconductor Inc.55- *66- * Redistribution and use in source and binary forms, with or without77- * modification, are permitted provided that the following conditions are met:88- * * Redistributions of source code must retain the above copyright99- * notice, this list of conditions and the following disclaimer.1010- * * Redistributions in binary form must reproduce the above copyright1111- * notice, this list of conditions and the following disclaimer in the1212- * documentation and/or other materials provided with the distribution.1313- * * Neither the name of Freescale Semiconductor nor the1414- * names of its contributors may be used to endorse or promote products1515- * derived from this software without specific prior written permission.1616- *1717- *1818- * ALTERNATIVELY, this software may be distributed under the terms of the1919- * GNU General Public License ("GPL") as published by the Free Software2020- * Foundation, either version 2 of that License or (at your option) any2121- * later version.2222- *2323- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY2424- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2525- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2626- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2727- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2828- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2929- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND3030- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3131- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3232- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3333- */3434-3535-message@42400 {3636- compatible = "fsl,mpic-v3.1-msgr";3737- reg = <0x42400 0x200>;3838- interrupts = <3939- 0xb4 2 0 04040- 0xb5 2 0 04141- 0xb6 2 0 04242- 0xb7 2 0 0>;4343-};
···11-/*22- * QorIQ FMan v3 1g port #1 device tree stub [ controller @ offset 0x400000 ]33- *44- * Copyright 2012 - 2015 Freescale Semiconductor Inc.55- *66- * Redistribution and use in source and binary forms, with or without77- * modification, are permitted provided that the following conditions are met:88- * * Redistributions of source code must retain the above copyright99- * notice, this list of conditions and the following disclaimer.1010- * * Redistributions in binary form must reproduce the above copyright1111- * notice, this list of conditions and the following disclaimer in the1212- * documentation and/or other materials provided with the distribution.1313- * * Neither the name of Freescale Semiconductor nor the1414- * names of its contributors may be used to endorse or promote products1515- * derived from this software without specific prior written permission.1616- *1717- *1818- * ALTERNATIVELY, this software may be distributed under the terms of the1919- * GNU General Public License ("GPL") as published by the Free Software2020- * Foundation, either version 2 of that License or (at your option) any2121- * later version.2222- *2323- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY2424- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED2525- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE2626- * DISCLAIMED. 
IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY2727- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES2828- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;2929- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND3030- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT3131- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS3232- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.3333- */3434-3535-fman@400000 {3636- fman0_rx_0x09: port@89000 {3737- cell-index = <0x9>;3838- compatible = "fsl,fman-v3-port-rx";3939- reg = <0x89000 0x1000>;4040- fsl,fman-10g-port;4141- fsl,fman-best-effort-port;4242- };4343-4444- fman0_tx_0x29: port@a9000 {4545- cell-index = <0x29>;4646- compatible = "fsl,fman-v3-port-tx";4747- reg = <0xa9000 0x1000>;4848- fsl,fman-10g-port;4949- fsl,fman-best-effort-port;5050- };5151-5252- ethernet@e2000 {5353- cell-index = <1>;5454- compatible = "fsl,fman-memac";5555- reg = <0xe2000 0x1000>;5656- fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;5757- ptp-timer = <&ptp_timer0>;5858- pcsphy-handle = <&pcsphy1>, <&qsgmiia_pcs1>;5959- pcs-handle-names = "sgmii", "qsgmii";6060- };6161-6262- mdio@e1000 {6363- qsgmiia_pcs1: ethernet-pcs@1 {6464- compatible = "fsl,lynx-pcs";6565- reg = <1>;6666- };6767- };6868-6969- mdio@e3000 {7070- #address-cells = <1>;7171- #size-cells = <0>;7272- compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";7373- reg = <0xe3000 0x1000>;7474- fsl,erratum-a011043; /* must ignore read errors */7575-7676- pcsphy1: ethernet-phy@0 {7777- reg = <0x0>;7878- };7979- };8080-};
···28932893 for (node = 0; prom_next_node(&node); ) {28942894 type[0] = '\0';28952895 prom_getprop(node, "device_type", type, sizeof(type));28962896- if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))28962896+ if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s") &&28972897+ prom_strcmp(type, "media-bay"))28972898 continue;2898289928992900 if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
-10
arch/powerpc/kernel/setup-common.c
···3535#include <linux/of_irq.h>3636#include <linux/hugetlb.h>3737#include <linux/pgtable.h>3838-#include <asm/kexec.h>3938#include <asm/io.h>4039#include <asm/paca.h>4140#include <asm/processor.h>···993994 smp_release_cpus();994995995996 initmem_init();996996-997997- /*998998- * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and999999- * hugetlb. These must be called after initmem_init(), so that10001000- * pageblock_order is initialised.10011001- */10021002- fadump_cma_init();10031003- kdump_cma_reserve();10041004- kvm_cma_reserve();10059971006998 early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);1007999
+22-4
arch/powerpc/kernel/trace/ftrace.c
···3737 if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end)3838 return 0;39394040- if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) &&4141- !IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {4242- addr += MCOUNT_INSN_SIZE;4343- if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))4040+ if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) {4141+ if (!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {4442 addr += MCOUNT_INSN_SIZE;4343+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))4444+ addr += MCOUNT_INSN_SIZE;4545+ } else if (IS_ENABLED(CONFIG_CC_IS_CLANG) && IS_ENABLED(CONFIG_PPC64)) {4646+ /*4747+ * addr points to global entry point though the NOP was emitted at local4848+ * entry point due to https://github.com/llvm/llvm-project/issues/1637064949+ * Handle that here with ppc_function_entry() for kernel symbols while5050+ * adjusting module addresses in the else case, by looking for the below5151+ * module global entry point sequence:5252+ * ld r2, -8(r12)5353+ * add r2, r2, r125454+ */5555+ if (is_kernel_text(addr) || is_kernel_inittext(addr))5656+ addr = ppc_function_entry((void *)addr);5757+ else if ((ppc_inst_val(ppc_inst_read((u32 *)addr)) ==5858+ PPC_RAW_LD(_R2, _R12, -8)) &&5959+ (ppc_inst_val(ppc_inst_read((u32 *)(addr+4))) ==6060+ PPC_RAW_ADD(_R2, _R2, _R12)))6161+ addr += 8;6262+ }4563 }46644765 return addr;
···450450 kbuf->buffer = headers;451451 kbuf->mem = KEXEC_BUF_MEM_UNKNOWN;452452 kbuf->bufsz = headers_sz;453453+454454+ /*455455+ * Account for extra space required to accommodate additional memory456456+ * ranges in elfcorehdr due to memory hotplug events.457457+ */453458 kbuf->memsz = headers_sz + kdump_extra_elfcorehdr_size(cmem);454459 kbuf->top_down = false;455460···465460 }466461467462 image->elf_load_addr = kbuf->mem;468468- image->elf_headers_sz = headers_sz;463463+464464+ /*465465+ * If CONFIG_CRASH_HOTPLUG is enabled, the elfcorehdr kexec segment466466+ * memsz can be larger than bufsz. Always initialize elf_headers_sz467467+ * with memsz. This ensures the correct size is reserved for elfcorehdr468468+ * memory in the FDT prepared for kdump.469469+ */470470+ image->elf_headers_sz = kbuf->memsz;469471 image->elf_headers = headers;470472out:471473 kfree(cmem);
···189189{190190 struct kvm_book3e_206_tlb_entry *gtlbe =191191 get_entry(vcpu_e500, tlbsel, esel);192192- struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[tlbsel][esel].ref;192192+ struct tlbe_priv *tlbe = &vcpu_e500->gtlb_priv[tlbsel][esel];193193194194 /* Don't bother with unmapped entries */195195- if (!(ref->flags & E500_TLB_VALID)) {196196- WARN(ref->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),197197- "%s: flags %x\n", __func__, ref->flags);195195+ if (!(tlbe->flags & E500_TLB_VALID)) {196196+ WARN(tlbe->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),197197+ "%s: flags %x\n", __func__, tlbe->flags);198198 WARN_ON(tlbsel == 1 && vcpu_e500->g2h_tlb1_map[esel]);199199 }200200201201- if (tlbsel == 1 && ref->flags & E500_TLB_BITMAP) {201201+ if (tlbsel == 1 && tlbe->flags & E500_TLB_BITMAP) {202202 u64 tmp = vcpu_e500->g2h_tlb1_map[esel];203203 int hw_tlb_indx;204204 unsigned long flags;···216216 }217217 mb();218218 vcpu_e500->g2h_tlb1_map[esel] = 0;219219- ref->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);219219+ tlbe->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);220220 local_irq_restore(flags);221221 }222222223223- if (tlbsel == 1 && ref->flags & E500_TLB_TLB0) {223223+ if (tlbsel == 1 && tlbe->flags & E500_TLB_TLB0) {224224 /*225225 * TLB1 entry is backed by 4k pages. This should happen226226 * rarely and is not worth optimizing. 
Invalidate everything.227227 */228228 kvmppc_e500_tlbil_all(vcpu_e500);229229- ref->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);229229+ tlbe->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);230230 }231231232232 /*233233 * If TLB entry is still valid then it's a TLB0 entry, and thus234234 * backed by at most one host tlbe per shadow pid235235 */236236- if (ref->flags & E500_TLB_VALID)236236+ if (tlbe->flags & E500_TLB_VALID)237237 kvmppc_e500_tlbil_one(vcpu_e500, gtlbe);238238239239 /* Mark the TLB as not backed by the host anymore */240240- ref->flags = 0;240240+ tlbe->flags = 0;241241}242242243243static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)···245245 return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);246246}247247248248-static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,249249- struct kvm_book3e_206_tlb_entry *gtlbe,250250- kvm_pfn_t pfn, unsigned int wimg,251251- bool writable)248248+static inline void kvmppc_e500_tlbe_setup(struct tlbe_priv *tlbe,249249+ struct kvm_book3e_206_tlb_entry *gtlbe,250250+ kvm_pfn_t pfn, unsigned int wimg,251251+ bool writable)252252{253253- ref->pfn = pfn;254254- ref->flags = E500_TLB_VALID;253253+ tlbe->pfn = pfn;254254+ tlbe->flags = E500_TLB_VALID;255255 if (writable)256256- ref->flags |= E500_TLB_WRITABLE;256256+ tlbe->flags |= E500_TLB_WRITABLE;257257258258 /* Use guest supplied MAS2_G and MAS2_E */259259- ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;259259+ tlbe->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;260260}261261262262-static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)262262+static inline void kvmppc_e500_tlbe_release(struct tlbe_priv *tlbe)263263{264264- if (ref->flags & E500_TLB_VALID) {264264+ if (tlbe->flags & E500_TLB_VALID) {265265 /* FIXME: don't log bogus pfn for TLB1 */266266- trace_kvm_booke206_ref_release(ref->pfn, ref->flags);267267- ref->flags = 0;266266+ trace_kvm_booke206_ref_release(tlbe->pfn, tlbe->flags);267267+ tlbe->flags = 0;268268 
}269269}270270···284284 int i;285285286286 for (tlbsel = 0; tlbsel <= 1; tlbsel++) {287287- for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) {288288- struct tlbe_ref *ref =289289- &vcpu_e500->gtlb_priv[tlbsel][i].ref;290290- kvmppc_e500_ref_release(ref);291291- }287287+ for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++)288288+ kvmppc_e500_tlbe_release(&vcpu_e500->gtlb_priv[tlbsel][i]);292289 }293290}294291···301304static void kvmppc_e500_setup_stlbe(302305 struct kvm_vcpu *vcpu,303306 struct kvm_book3e_206_tlb_entry *gtlbe,304304- int tsize, struct tlbe_ref *ref, u64 gvaddr,307307+ int tsize, struct tlbe_priv *tlbe, u64 gvaddr,305308 struct kvm_book3e_206_tlb_entry *stlbe)306309{307307- kvm_pfn_t pfn = ref->pfn;310310+ kvm_pfn_t pfn = tlbe->pfn;308311 u32 pr = vcpu->arch.shared->msr & MSR_PR;309309- bool writable = !!(ref->flags & E500_TLB_WRITABLE);312312+ bool writable = !!(tlbe->flags & E500_TLB_WRITABLE);310313311311- BUG_ON(!(ref->flags & E500_TLB_VALID));314314+ BUG_ON(!(tlbe->flags & E500_TLB_VALID));312315313316 /* Force IPROT=0 for all guest mappings. 
*/314317 stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;315315- stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_MAS2_ATTR);318318+ stlbe->mas2 = (gvaddr & MAS2_EPN) | (tlbe->flags & E500_TLB_MAS2_ATTR);316319 stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |317320 e500_shadow_mas3_attrib(gtlbe->mas7_3, writable, pr);318321}···320323static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,321324 u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,322325 int tlbsel, struct kvm_book3e_206_tlb_entry *stlbe,323323- struct tlbe_ref *ref)326326+ struct tlbe_priv *tlbe)324327{325328 struct kvm_memory_slot *slot;326329 unsigned int psize;···452455 }453456 }454457455455- kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg, writable);458458+ kvmppc_e500_tlbe_setup(tlbe, gtlbe, pfn, wimg, writable);456459 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,457457- ref, gvaddr, stlbe);460460+ tlbe, gvaddr, stlbe);458461 writable = tlbe_is_writable(stlbe);459462460463 /* Clear i-cache for new pages */···471474 struct kvm_book3e_206_tlb_entry *stlbe)472475{473476 struct kvm_book3e_206_tlb_entry *gtlbe;474474- struct tlbe_ref *ref;477477+ struct tlbe_priv *tlbe;475478 int stlbsel = 0;476479 int sesel = 0;477480 int r;478481479482 gtlbe = get_entry(vcpu_e500, 0, esel);480480- ref = &vcpu_e500->gtlb_priv[0][esel].ref;483483+ tlbe = &vcpu_e500->gtlb_priv[0][esel];481484482485 r = kvmppc_e500_shadow_map(vcpu_e500, get_tlb_eaddr(gtlbe),483486 get_tlb_raddr(gtlbe) >> PAGE_SHIFT,484484- gtlbe, 0, stlbe, ref);487487+ gtlbe, 0, stlbe, tlbe);485488 if (r)486489 return r;487490···491494}492495493496static int kvmppc_e500_tlb1_map_tlb1(struct kvmppc_vcpu_e500 *vcpu_e500,494494- struct tlbe_ref *ref,497497+ struct tlbe_priv *tlbe,495498 int esel)496499{497500 unsigned int sesel = vcpu_e500->host_tlb1_nv++;···504507 vcpu_e500->g2h_tlb1_map[idx] &= ~(1ULL << sesel);505508 }506509507507- vcpu_e500->gtlb_priv[1][esel].ref.flags |= 
E500_TLB_BITMAP;510510+ vcpu_e500->gtlb_priv[1][esel].flags |= E500_TLB_BITMAP;508511 vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel;509512 vcpu_e500->h2g_tlb1_rmap[sesel] = esel + 1;510510- WARN_ON(!(ref->flags & E500_TLB_VALID));513513+ WARN_ON(!(tlbe->flags & E500_TLB_VALID));511514512515 return sesel;513516}···519522 u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,520523 struct kvm_book3e_206_tlb_entry *stlbe, int esel)521524{522522- struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[1][esel].ref;525525+ struct tlbe_priv *tlbe = &vcpu_e500->gtlb_priv[1][esel];523526 int sesel;524527 int r;525528526529 r = kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe,527527- ref);530530+ tlbe);528531 if (r)529532 return r;530533531534 /* Use TLB0 when we can only map a page with 4k */532535 if (get_tlb_tsize(stlbe) == BOOK3E_PAGESZ_4K) {533533- vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_TLB0;536536+ vcpu_e500->gtlb_priv[1][esel].flags |= E500_TLB_TLB0;534537 write_stlbe(vcpu_e500, gtlbe, stlbe, 0, 0);535538 return 0;536539 }537540538541 /* Otherwise map into TLB1 */539539- sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, ref, esel);542542+ sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, tlbe, esel);540543 write_stlbe(vcpu_e500, gtlbe, stlbe, 1, sesel);541544542545 return 0;···558561 priv = &vcpu_e500->gtlb_priv[tlbsel][esel];559562560563 /* Triggers after clear_tlb_privs or on initial mapping */561561- if (!(priv->ref.flags & E500_TLB_VALID)) {564564+ if (!(priv->flags & E500_TLB_VALID)) {562565 kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);563566 } else {564567 kvmppc_e500_setup_stlbe(vcpu, gtlbe, BOOK3E_PAGESZ_4K,565565- &priv->ref, eaddr, &stlbe);568568+ priv, eaddr, &stlbe);566569 write_stlbe(vcpu_e500, gtlbe, &stlbe, 0, 0);567570 }568571 break;
+1
arch/powerpc/lib/copyuser_64.S
···562562 li r5,4096563563 b .Ldst_aligned564564EXPORT_SYMBOL(__copy_tofrom_user)565565+EXPORT_SYMBOL(__copy_tofrom_user_base)
+15-30
arch/powerpc/lib/copyuser_power7.S
···55 *66 * Author: Anton Blanchard <anton@au.ibm.com>77 */88+#include <linux/export.h>89#include <asm/ppc_asm.h>99-1010-#ifndef SELFTEST_CASE1111-/* 0 == don't use VMX, 1 == use VMX */1212-#define SELFTEST_CASE 01313-#endif14101511#ifdef __BIG_ENDIAN__1612#define LVS(VRT,RA,RB) lvsl VRT,RA,RB···4347 ld r15,STK_REG(R15)(r1)4448 ld r14,STK_REG(R14)(r1)4549.Ldo_err3:4646- bl CFUNC(exit_vmx_usercopy)5050+ ld r6,STK_REG(R31)(r1) /* original destination pointer */5151+ ld r5,STK_REG(R29)(r1) /* original number of bytes */5252+ subf r7,r6,r3 /* #bytes copied */5353+ subf r3,r7,r5 /* #bytes not copied in r3 */4754 ld r0,STACKFRAMESIZE+16(r1)4855 mtlr r04949- b .Lexit5656+ addi r1,r1,STACKFRAMESIZE5757+ blr5058#endif /* CONFIG_ALTIVEC */51595260.Ldo_err2:···74747575_GLOBAL(__copy_tofrom_user_power7)7676 cmpldi r5,167777- cmpldi cr1,r5,332878777978 std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)8079 std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)···81828283 blt .Lshort_copy83848484-#ifdef CONFIG_ALTIVEC8585-test_feature = SELFTEST_CASE8686-BEGIN_FTR_SECTION8787- bgt cr1,.Lvmx_copy8888-END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)8989-#endif90859186.Lnonvmx_copy:9287 /* Get the source 8B aligned */···25626315: li r3,0257264 blr258265259259-.Lunwind_stack_nonvmx_copy:260260- addi r1,r1,STACKFRAMESIZE261261- b .Lnonvmx_copy262262-263263-.Lvmx_copy:264266#ifdef CONFIG_ALTIVEC267267+_GLOBAL(__copy_tofrom_user_power7_vmx)265268 mflr r0266269 std r0,16(r1)267270 stdu r1,-STACKFRAMESIZE(r1)268268- bl CFUNC(enter_vmx_usercopy)269269- cmpwi cr1,r3,0270270- ld r0,STACKFRAMESIZE+16(r1)271271- ld r3,STK_REG(R31)(r1)272272- ld r4,STK_REG(R30)(r1)273273- ld r5,STK_REG(R29)(r1)274274- mtlr r0275271272272+ std r3,STK_REG(R31)(r1)273273+ std r5,STK_REG(R29)(r1)276274 /*277275 * We prefetch both the source and destination using enhanced touch278276 * instructions. 
We use a stream ID of 0 for the load side and···283299 ori r10,r7,1 /* stream=1 */284300285301 DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)286286-287287- beq cr1,.Lunwind_stack_nonvmx_copy288302289303 /*290304 * If source and destination are not relatively aligned we use a···460478err3; stb r0,0(r3)46147946248015: addi r1,r1,STACKFRAMESIZE463463- b CFUNC(exit_vmx_usercopy) /* tail call optimise */481481+ li r3,0482482+ blr464483465484.Lvmx_unaligned_copy:466485 /* Get the destination 16B aligned */···664681err3; stb r0,0(r3)66568266668315: addi r1,r1,STACKFRAMESIZE667667- b CFUNC(exit_vmx_usercopy) /* tail call optimise */684684+ li r3,0685685+ blr686686+EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)668687#endif /* CONFIG_ALTIVEC */
+2
arch/powerpc/lib/vmx-helper.c
···27272828 return 1;2929}3030+EXPORT_SYMBOL(enter_vmx_usercopy);30313132/*3233 * This function must return 0 because we tail call optimise when calling···5049 set_dec(1);5150 return 0;5251}5252+EXPORT_SYMBOL(exit_vmx_usercopy);53535454int enter_vmx_ops(void)5555{
+14
arch/powerpc/mm/mem.c
···3030#include <asm/setup.h>3131#include <asm/fixmap.h>32323333+#include <asm/fadump.h>3434+#include <asm/kexec.h>3535+#include <asm/kvm_ppc.h>3636+3337#include <mm/mmu_decl.h>34383539unsigned long long memory_limit __initdata;···272268273269void __init arch_mm_preinit(void)274270{271271+272272+ /*273273+ * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM274274+ * and hugetlb. These must be called after pageblock_order is275275+ * initialised.276276+ */277277+ fadump_cma_init();278278+ kdump_cma_reserve();279279+ kvm_cma_reserve();280280+275281 /*276282 * book3s is limited to 16 page sizes due to encoding this in277283 * a 4-bit field for slices.
-5
arch/powerpc/net/bpf_jit.h
···81818282#ifdef CONFIG_PPC6483838484-/* for gpr non volatile registers BPG_REG_6 to 10 */8585-#define BPF_PPC_STACK_SAVE (6 * 8)8686-8784/* If dummy pass (!image), account for maximum possible instructions */8885#define PPC_LI64(d, i) do { \8986 if (!image) \···216219int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass,217220 struct codegen_context *ctx, int insn_idx,218221 int jmp_off, int dst_reg, u32 code);219219-220220-int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx);221222#endif222223223224#endif
+56-71
arch/powerpc/net/bpf_jit_comp.c
···450450451451bool bpf_jit_supports_kfunc_call(void)452452{453453- return true;453453+ return IS_ENABLED(CONFIG_PPC64);454454}455455456456bool bpf_jit_supports_arena(void)···638638 * for the traced function (BPF subprog/callee) to fetch it.639639 */640640static void bpf_trampoline_setup_tail_call_info(u32 *image, struct codegen_context *ctx,641641- int func_frame_offset,642642- int bpf_dummy_frame_size, int r4_off)641641+ int bpf_frame_size, int r4_off)643642{644643 if (IS_ENABLED(CONFIG_PPC64)) {645645- /* See Generated stack layout */646646- int tailcallinfo_offset = BPF_PPC_TAILCALL;647647-648648- /*649649- * func_frame_offset = ...(1)650650- * bpf_dummy_frame_size + trampoline_frame_size651651- */652652- EMIT(PPC_RAW_LD(_R4, _R1, func_frame_offset));653653- EMIT(PPC_RAW_LD(_R3, _R4, -tailcallinfo_offset));644644+ EMIT(PPC_RAW_LD(_R4, _R1, bpf_frame_size));645645+ /* Refer to trampoline's Generated stack layout */646646+ EMIT(PPC_RAW_LD(_R3, _R4, -BPF_PPC_TAILCALL));654647655648 /*656649 * Setting the tail_call_info in trampoline's frame···651658 */652659 EMIT(PPC_RAW_CMPLWI(_R3, MAX_TAIL_CALL_CNT));653660 PPC_BCC_CONST_SHORT(COND_GT, 8);654654- EMIT(PPC_RAW_ADDI(_R3, _R4, bpf_jit_stack_tailcallinfo_offset(ctx)));661661+ EMIT(PPC_RAW_ADDI(_R3, _R4, -BPF_PPC_TAILCALL));662662+655663 /*656656- * From ...(1) above:657657- * trampoline_frame_bottom = ...(2)658658- * func_frame_offset - bpf_dummy_frame_size659659- *660660- * Using ...(2) derived above:661661- * trampoline_tail_call_info_offset = ...(3)662662- * trampoline_frame_bottom - tailcallinfo_offset663663- *664664- * From ...(3):665665- * Use trampoline_tail_call_info_offset to write reference of main's666666- * tail_call_info in trampoline frame.664664+ * Trampoline's tail_call_info is at the same offset, as that of665665+ * any bpf program, with reference to previous frame. 
Update the666666+ * address of main's tail_call_info in trampoline frame.667667 */668668- EMIT(PPC_RAW_STL(_R3, _R1, (func_frame_offset - bpf_dummy_frame_size)669669- - tailcallinfo_offset));668668+ EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size - BPF_PPC_TAILCALL));670669 } else {671670 /* See bpf_jit_stack_offsetof() and BPF_PPC_TC */672671 EMIT(PPC_RAW_LL(_R4, _R1, r4_off));···666681}667682668683static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_context *ctx,669669- int func_frame_offset, int r4_off)684684+ int bpf_frame_size, int r4_off)670685{671686 if (IS_ENABLED(CONFIG_PPC32)) {672687 /*···677692 }678693}679694680680-static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, int func_frame_offset,681681- int nr_regs, int regs_off)695695+static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx,696696+ int bpf_frame_size, int nr_regs, int regs_off)682697{683698 int param_save_area_offset;684699685685- param_save_area_offset = func_frame_offset; /* the two frames we alloted */700700+ param_save_area_offset = bpf_frame_size;686701 param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */687702688703 for (int i = 0; i < nr_regs; i++) {···705720706721/* Used when we call into the traced function. 
Replicate parameter save area */707722static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context *ctx,708708- int func_frame_offset, int nr_regs, int regs_off)723723+ int bpf_frame_size, int nr_regs, int regs_off)709724{710725 int param_save_area_offset;711726712712- param_save_area_offset = func_frame_offset; /* the two frames we alloted */727727+ param_save_area_offset = bpf_frame_size;713728 param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */714729715730 for (int i = 8; i < nr_regs; i++) {···726741 void *func_addr)727742{728743 int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0;729729- int i, ret, nr_regs, bpf_frame_size = 0, bpf_dummy_frame_size = 0, func_frame_offset;730744 struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];731745 struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];732746 struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];747747+ int i, ret, nr_regs, retaddr_off, bpf_frame_size = 0;733748 struct codegen_context codegen_ctx, *ctx;734749 u32 *image = (u32 *)rw_image;735750 ppc_inst_t branch_insn;···755770 * Generated stack layout:756771 *757772 * func prev back chain [ back chain ]758758- * [ ]759759- * bpf prog redzone/tailcallcnt [ ... ] 64 bytes (64-bit powerpc)760760- * [ ] --761761- * LR save area [ r0 save (64-bit) ] | header762762- * [ r0 save (32-bit) ] |763763- * dummy frame for unwind [ back chain 1 ] --764773 * [ tail_call_info ] optional - 64-bit powerpc765774 * [ padding ] align stack frame766775 * r4_off [ r4 (tailcallcnt) ] optional - 32-bit powerpc767776 * alt_lr_off [ real lr (ool stub)] optional - actual lr777777+ * retaddr_off [ return address ]768778 * [ r26 ]769779 * nvr_off [ r25 ] nvr save area770780 * retval_off [ return value ]771781 * [ reg argN ]772782 * [ ... 
]773773- * regs_off [ reg_arg1 ] prog ctx context774774- * nregs_off [ args count ]775775- * ip_off [ traced function ]783783+ * regs_off [ reg_arg1 ] prog_ctx784784+ * nregs_off [ args count ] ((u64 *)prog_ctx)[-1]785785+ * ip_off [ traced function ] ((u64 *)prog_ctx)[-2]776786 * [ ... ]777787 * run_ctx_off [ bpf_tramp_run_ctx ]778788 * [ reg argN ]···823843 nvr_off = bpf_frame_size;824844 bpf_frame_size += 2 * SZL;825845846846+ /* Save area for return address */847847+ retaddr_off = bpf_frame_size;848848+ bpf_frame_size += SZL;849849+826850 /* Optional save area for actual LR in case of ool ftrace */827851 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {828852 alt_lr_off = bpf_frame_size;···853869 /* Padding to align stack frame, if any */854870 bpf_frame_size = round_up(bpf_frame_size, SZL * 2);855871856856- /* Dummy frame size for proper unwind - includes 64-bytes red zone for 64-bit powerpc */857857- bpf_dummy_frame_size = STACK_FRAME_MIN_SIZE + 64;858858-859859- /* Offset to the traced function's stack frame */860860- func_frame_offset = bpf_dummy_frame_size + bpf_frame_size;861861-862862- /* Create dummy frame for unwind, store original return value */872872+ /* Store original return value */863873 EMIT(PPC_RAW_STL(_R0, _R1, PPC_LR_STKOFF));864864- /* Protect red zone where tail call count goes */865865- EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_dummy_frame_size));866874867875 /* Create our stack frame */868876 EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_frame_size));···869893 if (IS_ENABLED(CONFIG_PPC32) && nr_regs < 2)870894 EMIT(PPC_RAW_STL(_R4, _R1, r4_off));871895872872- bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);896896+ bpf_trampoline_save_args(image, ctx, bpf_frame_size, nr_regs, regs_off);873897874874- /* Save our return address */898898+ /* Save our LR/return address */875899 EMIT(PPC_RAW_MFLR(_R3));876900 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))877901 EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));878902 else879879- EMIT(PPC_RAW_STL(_R3, 
_R1, bpf_frame_size + PPC_LR_STKOFF));903903+ EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off));880904881905 /*882882- * Save ip address of the traced function.883883- * We could recover this from LR, but we will need to address for OOL trampoline,884884- * and optional GEP area.906906+ * Derive IP address of the traced function.907907+ * In case of CONFIG_PPC_FTRACE_OUT_OF_LINE or BPF program, LR points to the instruction908908+ * after the 'bl' instruction in the OOL stub. Refer to ftrace_init_ool_stub() and909909+ * bpf_arch_text_poke() for OOL stub of kernel functions and bpf programs respectively.910910+ * Relevant stub sequence:911911+ *912912+ * bl <tramp>913913+ * LR (R3) => mtlr r0914914+ * b <func_addr+4>915915+ *916916+ * Recover kernel function/bpf program address from the unconditional917917+ * branch instruction at the end of OOL stub.885918 */886919 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {887920 EMIT(PPC_RAW_LWZ(_R4, _R3, 4));888921 EMIT(PPC_RAW_SLWI(_R4, _R4, 6));889922 EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));890923 EMIT(PPC_RAW_ADD(_R3, _R3, _R4));891891- EMIT(PPC_RAW_ADDI(_R3, _R3, 4));892924 }893925894926 if (flags & BPF_TRAMP_F_IP_ARG)895927 EMIT(PPC_RAW_STL(_R3, _R1, ip_off));896928897897- if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))898898- /* Fake our LR for unwind */899899- EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));929929+ if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {930930+ /* Fake our LR for BPF_TRAMP_F_CALL_ORIG case */931931+ EMIT(PPC_RAW_ADDI(_R3, _R3, 4));932932+ EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off));933933+ }900934901935 /* Save function arg count -- see bpf_get_func_arg_cnt() */902936 EMIT(PPC_RAW_LI(_R3, nr_regs));···944958 /* Call the traced function */945959 if (flags & BPF_TRAMP_F_CALL_ORIG) {946960 /*947947- * The address in LR save area points to the correct point in the original function961961+ * retaddr on trampoline stack points to the correct point in the original 
function948962 * with both PPC_FTRACE_OUT_OF_LINE as well as with traditional ftrace instruction949963 * sequence950964 */951951- EMIT(PPC_RAW_LL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));965965+ EMIT(PPC_RAW_LL(_R3, _R1, retaddr_off));952966 EMIT(PPC_RAW_MTCTR(_R3));953967954968 /* Replicate tail_call_cnt before calling the original BPF prog */955969 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)956956- bpf_trampoline_setup_tail_call_info(image, ctx, func_frame_offset,957957- bpf_dummy_frame_size, r4_off);970970+ bpf_trampoline_setup_tail_call_info(image, ctx, bpf_frame_size, r4_off);958971959972 /* Restore args */960960- bpf_trampoline_restore_args_stack(image, ctx, func_frame_offset, nr_regs, regs_off);973973+ bpf_trampoline_restore_args_stack(image, ctx, bpf_frame_size, nr_regs, regs_off);961974962975 /* Restore TOC for 64-bit */963976 if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))···970985971986 /* Restore updated tail_call_cnt */972987 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)973973- bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off);988988+ bpf_trampoline_restore_tail_call_cnt(image, ctx, bpf_frame_size, r4_off);974989975990 /* Reserve space to patch branch instruction to skip fexit progs */976991 if (ro_image) /* image is NULL for dummy pass */···10221037 EMIT(PPC_RAW_LD(_R2, _R1, 24));10231038 if (flags & BPF_TRAMP_F_SKIP_FRAME) {10241039 /* Skip the traced function and return to parent */10251025- EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));10401040+ EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));10261041 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));10271042 EMIT(PPC_RAW_MTLR(_R0));10281043 EMIT(PPC_RAW_BLR());···10301045 if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {10311046 EMIT(PPC_RAW_LL(_R0, _R1, alt_lr_off));10321047 EMIT(PPC_RAW_MTLR(_R0));10331033- EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));10481048+ EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));10341049 EMIT(PPC_RAW_LL(_R0, _R1, 
PPC_LR_STKOFF));10351050 EMIT(PPC_RAW_BLR());10361051 } else {10371037- EMIT(PPC_RAW_LL(_R0, _R1, bpf_frame_size + PPC_LR_STKOFF));10521052+ EMIT(PPC_RAW_LL(_R0, _R1, retaddr_off));10381053 EMIT(PPC_RAW_MTCTR(_R0));10391039- EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));10541054+ EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));10401055 EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));10411056 EMIT(PPC_RAW_MTLR(_R0));10421057 EMIT(PPC_RAW_BCTR());
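The lwz/slwi-6/srawi-6/add sequence in the trampoline above recovers the target of the unconditional `b` at the end of the OOL stub by sign-extending the 26-bit displacement of a PowerPC I-form branch. A host-side C sketch of the same arithmetic (the encode helper is illustrative, not a kernel API; a real `b` keeps AA=LK=0, so bits 0..25 hold the byte offset directly):

```c
#include <assert.h>
#include <stdint.h>

/* Encode a PowerPC I-form branch: primary opcode 18, byte displacement in
 * bits 2..25, AA=LK=0. Because the low two bits are zero for a plain 'b',
 * bits 0..25 taken together are the (to-be-sign-extended) byte offset. */
static uint32_t encode_b(int32_t byte_off)
{
	return (18u << 26) | ((uint32_t)byte_off & 0x03FFFFFCu);
}

/* Mirror of the emitted slwi 6 / srawi 6 pair: shift the displacement to
 * the top of the register, then arithmetic-shift back to sign-extend the
 * 26-bit field. */
static int32_t branch_disp(uint32_t insn)
{
	return (int32_t)(insn << 6) >> 6;
}

static uint64_t branch_target(uint64_t insn_addr, uint32_t insn)
{
	return insn_addr + (uint64_t)(int64_t)branch_disp(insn);
}
```

Negative displacements (branching backwards from the stub to the traced function) round-trip through the same shift pair, which is why the JIT needs no separate sign check.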
+143-38
arch/powerpc/net/bpf_jit_comp64.c
···3232 *3333 * [ prev sp ] <-------------3434 * [ tail_call_info ] 8 |3535- * [ nv gpr save area ] 6*8 + (12*8) |3535+ * [ nv gpr save area ] (6 * 8) |3636+ * [ addl. nv gpr save area] (12 * 8) | <--- exception boundary/callback program3637 * [ local_tmp_var ] 24 |3738 * fp (r31) --> [ ebpf stack space ] upto 512 |3839 * [ frame header ] 32/112 |3940 * sp (r1) ---> [ stack pointer ] --------------4041 *4141- * Additional (12*8) in 'nv gpr save area' only in case of4242- * exception boundary.4242+ * Additional (12 * 8) in 'nv gpr save area' only in case of4343+ * exception boundary/callback.4344 */4545+4646+/* BPF non-volatile registers save area size */4747+#define BPF_PPC_STACK_SAVE (6 * 8)44484549/* for bpf JIT code internal usage */4650#define BPF_PPC_STACK_LOCALS 24···5248 * for additional non volatile registers(r14-r25) to be saved5349 * at exception boundary5450 */5555-#define BPF_PPC_EXC_STACK_SAVE (12*8)5151+#define BPF_PPC_EXC_STACK_SAVE (12 * 8)56525753/* stack frame excluding BPF stack, ensure this is quadword aligned */5854#define BPF_PPC_STACKFRAME (STACK_FRAME_MIN_SIZE + \···129125 * [ ... ] |130126 * sp (r1) ---> [ stack pointer ] --------------131127 * [ tail_call_info ] 8132132- * [ nv gpr save area ] 6*8 + (12*8)128128+ * [ nv gpr save area ] (6 * 8)129129+ * [ addl. 
nv gpr save area] (12 * 8) <--- exception boundary/callback program133130 * [ local_tmp_var ] 24134131 * [ unused red zone ] 224135132 *136136- * Additional (12*8) in 'nv gpr save area' only in case of137137- * exception boundary.133133+ * Additional (12 * 8) in 'nv gpr save area' only in case of134134+ * exception boundary/callback.138135 */139136static int bpf_jit_stack_local(struct codegen_context *ctx)140137{···153148 }154149}155150156156-int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx)151151+static int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx)157152{158153 return bpf_jit_stack_local(ctx) + BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE;159154}···242237243238 if (bpf_has_stack_frame(ctx) && !ctx->exception_cb) {244239 /*245245- * exception_cb uses boundary frame after stack walk.246246- * It can simply use redzone, this optimization reduces247247- * stack walk loop by one level.248248- *249240 * We need a stack frame, but we don't necessarily need to250241 * save/restore LR unless we call other functions251242 */···285284 * program(main prog) as third arg286285 */287286 EMIT(PPC_RAW_MR(_R1, _R5));287287+ /*288288+ * Exception callback reuses the stack frame of exception boundary.289289+ * But BPF stack depth of exception callback and exception boundary290290+ * don't have to be same. If BPF stack depth is different, adjust the291291+ * stack frame size considering BPF stack depth of exception callback.292292+ * The non-volatile register save area remains unchanged. 
These non-293293+ * volatile registers are restored in exception callback's epilogue.294294+ */295295+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), _R5, 0));296296+ EMIT(PPC_RAW_SUB(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_1), _R1));297297+ EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),298298+ -BPF_PPC_EXC_STACKFRAME));299299+ EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), ctx->stack_size));300300+ PPC_BCC_CONST_SHORT(COND_EQ, 12);301301+ EMIT(PPC_RAW_MR(_R1, bpf_to_ppc(TMP_REG_1)));302302+ EMIT(PPC_RAW_STDU(_R1, _R1, -(BPF_PPC_EXC_STACKFRAME + ctx->stack_size)));288303 }289304290305 /*···499482 return 0;500483}501484485485+static int zero_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size)486486+{487487+ switch (size) {488488+ case 1:489489+ /* zero-extend 8 bits into 64 bits */490490+ EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 56));491491+ return 0;492492+ case 2:493493+ /* zero-extend 16 bits into 64 bits */494494+ EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 48));495495+ return 0;496496+ case 4:497497+ /* zero-extend 32 bits into 64 bits */498498+ EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 32));499499+ fallthrough;500500+ case 8:501501+ /* Nothing to do */502502+ return 0;503503+ default:504504+ return -1;505505+ }506506+}507507+508508+static int sign_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size)509509+{510510+ switch (size) {511511+ case 1:512512+ /* sign-extend 8 bits into 64 bits */513513+ EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));514514+ return 0;515515+ case 2:516516+ /* sign-extend 16 bits into 64 bits */517517+ EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));518518+ return 0;519519+ case 4:520520+ /* sign-extend 32 bits into 64 bits */521521+ EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));522522+ fallthrough;523523+ case 8:524524+ /* Nothing to do */525525+ return 0;526526+ default:527527+ return -1;528528+ }529529+}530530+531531+/*532532+ * Handle powerpc ABI expectations from caller:533533+ * - Unsigned 
arguments are zero-extended.534534+ * - Signed arguments are sign-extended.535535+ */536536+static int prepare_for_kfunc_call(const struct bpf_prog *fp, u32 *image,537537+ struct codegen_context *ctx,538538+ const struct bpf_insn *insn)539539+{540540+ const struct btf_func_model *m = bpf_jit_find_kfunc_model(fp, insn);541541+ int i;542542+543543+ if (!m)544544+ return -1;545545+546546+ for (i = 0; i < m->nr_args; i++) {547547+ /* Note that BPF ABI only allows up to 5 args for kfuncs */548548+ u32 reg = bpf_to_ppc(BPF_REG_1 + i), size = m->arg_size[i];549549+550550+ if (!(m->arg_flags[i] & BTF_FMODEL_SIGNED_ARG)) {551551+ if (zero_extend(image, ctx, reg, reg, size))552552+ return -1;553553+ } else {554554+ if (sign_extend(image, ctx, reg, reg, size))555555+ return -1;556556+ }557557+ }558558+559559+ return 0;560560+}561561+502562static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)503563{504564 /*···616522617523 /*618524 * tail_call_info++; <- Actual value of tcc here525525+ * Writeback this updated value only if tailcall succeeds.619526 */620527 EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), 1));528528+529529+ /* prog = array->ptrs[index]; */530530+ EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_2), b2p_index, 8));531531+ EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), b2p_bpf_array));532532+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),533533+ offsetof(struct bpf_array, ptrs)));534534+535535+ /*536536+ * if (prog == NULL)537537+ * goto out;538538+ */539539+ EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), 0));540540+ PPC_BCC_SHORT(COND_EQ, out);541541+542542+ /* goto *(prog->bpf_func + prologue_size); */543543+ EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),544544+ offsetof(struct bpf_prog, bpf_func)));545545+ EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),546546+ FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size));547547+ 
EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_2)));621548622549 /*623550 * Before writing updated tail_call_info, distinguish if current frame···653538 EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), _R1, bpf_jit_stack_tailcallinfo_offset(ctx)));654539 /* Writeback updated value to tail_call_info */655540 EMIT(PPC_RAW_STD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_2), 0));656656-657657- /* prog = array->ptrs[index]; */658658- EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_1), b2p_index, 8));659659- EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), b2p_bpf_array));660660- EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_array, ptrs)));661661-662662- /*663663- * if (prog == NULL)664664- * goto out;665665- */666666- EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_1), 0));667667- PPC_BCC_SHORT(COND_EQ, out);668668-669669- /* goto *(prog->bpf_func + prologue_size); */670670- EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_prog, bpf_func)));671671- EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1),672672- FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size));673673- EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_1)));674541675542 /* tear down stack, restore NVRs, ... 
*/676543 bpf_jit_emit_common_epilogue(image, ctx);···12201123 /* special mov32 for zext */12211124 EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));12221125 break;12231223- } else if (off == 8) {12241224- EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));12251225- } else if (off == 16) {12261226- EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));12271227- } else if (off == 32) {12281228- EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));12291229- } else if (dst_reg != src_reg)12301230- EMIT(PPC_RAW_MR(dst_reg, src_reg));11261126+ }11271127+ if (off == 0) {11281128+ /* MOV */11291129+ if (dst_reg != src_reg)11301130+ EMIT(PPC_RAW_MR(dst_reg, src_reg));11311131+ } else {11321132+ /* MOVSX: dst = (s8,s16,s32)src (off = 8,16,32) */11331133+ if (sign_extend(image, ctx, src_reg, dst_reg, off / 8))11341134+ return -1;11351135+ }12311136 goto bpf_alu32_trunc;12321137 case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */12331138 case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = (s64) imm */···16961597 &func_addr, &func_addr_fixed);16971598 if (ret < 0)16981599 return ret;16001600+16011601+ /* Take care of powerpc ABI requirements before kfunc call */16021602+ if (insn[i].src_reg == BPF_PSEUDO_KFUNC_CALL) {16031603+ if (prepare_for_kfunc_call(fp, image, ctx, &insn[i]))16041604+ return -1;16051605+ }1699160617001607 ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);17011608 if (ret)
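The new `zero_extend()`/`sign_extend()` helpers emit RLDICL or EXTSB/EXTSH/EXTSW depending on the argument width, so that sub-word kfunc arguments meet the powerpc ABI on entry. A plain-C sketch of the same width dispatch (returns 0 on success, -1 for an unsupported size, like the JIT helpers):

```c
#include <assert.h>
#include <stdint.h>

/* Zero-extend a 1/2/4/8-byte value held in a 64-bit "register",
 * mirroring what the JIT emits with RLDICL. */
static int zext64(uint64_t *reg, unsigned int size)
{
	switch (size) {
	case 1: *reg &= 0xffull;       return 0;
	case 2: *reg &= 0xffffull;     return 0;
	case 4: *reg &= 0xffffffffull; return 0;
	case 8: return 0; /* nothing to do */
	default: return -1;
	}
}

/* Sign-extend, mirroring EXTSB/EXTSH/EXTSW. */
static int sext64(uint64_t *reg, unsigned int size)
{
	switch (size) {
	case 1: *reg = (uint64_t)(int64_t)(int8_t)*reg;  return 0;
	case 2: *reg = (uint64_t)(int64_t)(int16_t)*reg; return 0;
	case 4: *reg = (uint64_t)(int64_t)(int32_t)*reg; return 0;
	case 8: return 0; /* nothing to do */
	default: return -1;
	}
}
```

As in `prepare_for_kfunc_call()`, the choice between the two is driven by whether the BTF function model marks the argument signed.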
+5
arch/powerpc/perf/callchain.c
···103103void104104perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)105105{106106+ perf_callchain_store(entry, perf_arch_instruction_pointer(regs));107107+108108+ if (!current->mm)109109+ return;110110+106111 if (!is_32bit_task())107112 perf_callchain_user_64(entry, regs);108113 else
-1
arch/powerpc/perf/callchain_32.c
···142142 next_ip = perf_arch_instruction_pointer(regs);143143 lr = regs->link;144144 sp = regs->gpr[1];145145- perf_callchain_store(entry, next_ip);146145147146 while (entry->nr < entry->max_stack) {148147 fp = (unsigned int __user *) (unsigned long) sp;
-1
arch/powerpc/perf/callchain_64.c
···7777 next_ip = perf_arch_instruction_pointer(regs);7878 lr = regs->link;7979 sp = regs->gpr[1];8080- perf_callchain_store(entry, next_ip);81808281 while (entry->nr < entry->max_stack) {8382 fp = (unsigned long __user *) sp;
+2-2
arch/powerpc/platforms/83xx/km83xx.c
···155155156156/* list of the supported boards */157157static char *board[] __initdata = {158158- "Keymile,KMETER1",159159- "Keymile,kmpbec8321",158158+ "keymile,KMETER1",159159+ "keymile,kmpbec8321",160160 NULL161161};162162
+2-2
arch/powerpc/platforms/Kconfig.cputype
···276276config PPC_E500277277 select FSL_EMB_PERFMON278278 bool279279- select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64279279+ select ARCH_SUPPORTS_HUGETLBFS280280 select PPC_SMP_MUXED_IPI281281 select PPC_DOORBELL282282 select PPC_KUEP···337337config PTE_64BIT338338 bool339339 depends on 44x || PPC_E500 || PPC_86xx340340- default y if PHYS_64BIT340340+ default y if PPC_E500 || PHYS_64BIT341341342342config PHYS_64BIT343343 bool 'Large physical address support' if PPC_E500 || PPC_86xx
···1111#include <linux/irqchip/riscv-imsic.h>1212#include <linux/kvm_host.h>1313#include <linux/uaccess.h>1414+#include <linux/cpufeature.h>14151516static int aia_create(struct kvm_device *dev, u32 type)1617{···22212322 if (irqchip_in_kernel(kvm))2423 return -EEXIST;2424+2525+ if (!riscv_isa_extension_available(NULL, SSAIA))2626+ return -ENODEV;25272628 ret = -EBUSY;2729 if (kvm_trylock_all_vcpus(kvm))···441437442438static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)443439{444444- int nr_vcpus;440440+ int nr_vcpus, r = -ENXIO;445441446442 switch (attr->group) {447443 case KVM_DEV_RISCV_AIA_GRP_CONFIG:···470466 }471467 break;472468 case KVM_DEV_RISCV_AIA_GRP_APLIC:473473- return kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);469469+ mutex_lock(&dev->kvm->lock);470470+ r = kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);471471+ mutex_unlock(&dev->kvm->lock);472472+ break;474473 case KVM_DEV_RISCV_AIA_GRP_IMSIC:475475- return kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);474474+ mutex_lock(&dev->kvm->lock);475475+ r = kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);476476+ mutex_unlock(&dev->kvm->lock);477477+ break;476478 }477479478478- return -ENXIO;480480+ return r;479481}480482481483struct kvm_device_ops kvm_riscv_aia_device_ops = {
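The `aia_has_attr()` change serializes the APLIC/IMSIC attribute probes under `kvm->lock` and funnels every path through a single captured return value instead of returning with the lock held. A userspace pthread sketch of that shape (names and the stand-in probe are hypothetical):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for kvm_riscv_aia_aplic_has_attr(): pretend only attr 0 exists. */
static int aplic_has_attr(unsigned int attr)
{
	return attr == 0 ? 0 : -6 /* -ENXIO */;
}

/* Pattern from aia_has_attr(): initialize r to the error case, take the
 * lock around the probe, capture the result, unlock, and leave through a
 * single exit point so no path can return with the lock held. */
static int has_attr(unsigned int attr)
{
	int r;

	pthread_mutex_lock(&dev_lock);
	r = aplic_has_attr(attr);
	pthread_mutex_unlock(&dev_lock);

	return r;
}
```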
+4
arch/riscv/kvm/aia_imsic.c
···908908 int r, rc = KVM_INSN_CONTINUE_NEXT_SEPC;909909 struct imsic *imsic = vcpu->arch.aia_context.imsic_state;910910911911+ /* If IMSIC vCPU state not initialized then forward to user space */912912+ if (!imsic)913913+ return KVM_INSN_EXIT_TO_USER_SPACE;914914+911915 if (isel == KVM_RISCV_AIA_IMSIC_TOPEI) {912916 /* Read pending and enabled interrupt with highest priority */913917 topei = imsic_mrif_topei(imsic->swfile, imsic->nr_eix,
+5-1
arch/riscv/kvm/mmu.c
···245245bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)246246{247247 struct kvm_gstage gstage;248248+ bool mmu_locked;248249249250 if (!kvm->arch.pgd)250251 return false;···254253 gstage.flags = 0;255254 gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);256255 gstage.pgd = kvm->arch.pgd;256256+ mmu_locked = spin_trylock(&kvm->mmu_lock);257257 kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,258258 (range->end - range->start) << PAGE_SHIFT,259259 range->may_block);260260+ if (mmu_locked)261261+ spin_unlock(&kvm->mmu_lock);260262 return false;261263}262264···539535 goto out_unlock;540536541537 /* Check if we are backed by a THP and thus use block mapping if possible */542542- if (vma_pagesize == PAGE_SIZE)538538+ if (!logging && (vma_pagesize == PAGE_SIZE))543539 vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva, &hfn, &gpa);544540545541 if (writable) {
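`kvm_unmap_gfn_range()` can now be reached both with and without `mmu_lock` held, so the hunk records whether `spin_trylock()` succeeded and only releases the lock in that case; when the caller already holds it, trylock fails and the unmap proceeds under the caller's critical section. A pthread sketch of the pattern (a default, non-recursive mutex returns EBUSY from trylock when already held):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;
static int unmap_calls;

/* Pattern from kvm_unmap_gfn_range(): if this call acquired the lock,
 * it must release it; if trylock failed because the caller holds it,
 * skip the unlock and rely on the caller's critical section. */
static void unmap_range(void)
{
	bool locked = pthread_mutex_trylock(&range_lock) == 0;

	unmap_calls++;		/* the actual unmap work would go here */

	if (locked)
		pthread_mutex_unlock(&range_lock);
}
```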
···2626 struct platform_device *pdev = to_platform_device(dev);2727 struct platform_driver *pdrv = to_platform_driver(drv);28282929- /* When driver_override is set, only bind to the matching driver */3030- if (pdev->driver_override)3131- return !strcmp(pdev->driver_override, drv->name);3232-3329 /* Then try to match against the id table */3430 if (pdrv->id_table)3531 return platform_match_id(pdrv->id_table, pdev) != NULL;
···13721372 else if (i < n_running)13731373 continue;1374137413751375- if (hwc->state & PERF_HES_ARCH)13751375+ cpuc->events[hwc->idx] = event;13761376+13771377+ if (hwc->state & PERF_HES_ARCH) {13781378+ static_call(x86_pmu_set_period)(event);13761379 continue;13801380+ }1377138113781382 /*13791383 * if cpuc->enabled = 0, then no wrmsr as13801384 * per x86_pmu_enable_event()13811385 */13821382- cpuc->events[hwc->idx] = event;13831386 x86_pmu_start(event, PERF_EF_RELOAD);13841387 }13851388 cpuc->n_added = 0;
+21-10
arch/x86/events/intel/core.c
···46284628 event->hw.dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);46294629}4630463046314631+static inline int intel_set_branch_counter_constr(struct perf_event *event,46324632+ int *num)46334633+{46344634+ if (branch_sample_call_stack(event))46354635+ return -EINVAL;46364636+ if (branch_sample_counters(event)) {46374637+ (*num)++;46384638+ event->hw.dyn_constraint &= x86_pmu.lbr_counters;46394639+ }46404640+46414641+ return 0;46424642+}46434643+46314644static int intel_pmu_hw_config(struct perf_event *event)46324645{46334646 int ret = x86_pmu_hw_config(event);···47114698 * group, which requires the extra space to store the counters.47124699 */47134700 leader = event->group_leader;47144714- if (branch_sample_call_stack(leader))47014701+ if (intel_set_branch_counter_constr(leader, &num))47154702 return -EINVAL;47164716- if (branch_sample_counters(leader)) {47174717- num++;47184718- leader->hw.dyn_constraint &= x86_pmu.lbr_counters;47194719- }47204703 leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;4721470447224705 for_each_sibling_event(sibling, leader) {47234723- if (branch_sample_call_stack(sibling))47064706+ if (intel_set_branch_counter_constr(sibling, &num))47244707 return -EINVAL;47254725- if (branch_sample_counters(sibling)) {47264726- num++;47274727- sibling->hw.dyn_constraint &= x86_pmu.lbr_counters;47284728- }47084708+ }47094709+47104710+ /* event isn't installed as a sibling yet. */47114711+ if (event != leader) {47124712+ if (intel_set_branch_counter_constr(event, &num))47134713+ return -EINVAL;47294714 }4730471547314716 if (num > fls(x86_pmu.lbr_counters))
+7-4
arch/x86/events/intel/ds.c
···345345 if (omr.omr_remote)346346 val |= REM;347347348348- val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);349349-350348 if (omr.omr_source == 0x2) {351351- u8 snoop = omr.omr_snoop | omr.omr_promoted;349349+ u8 snoop = omr.omr_snoop | (omr.omr_promoted << 1);352350353353- if (snoop == 0x0)351351+ if (omr.omr_hitm)352352+ val |= P(SNOOP, HITM);353353+ else if (snoop == 0x0)354354 val |= P(SNOOP, NA);355355 else if (snoop == 0x1)356356 val |= P(SNOOP, MISS);···359359 else if (snoop == 0x3)360360 val |= P(SNOOP, NONE);361361 } else if (omr.omr_source > 0x2 && omr.omr_source < 0x7) {362362+ val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);362363 val |= omr.omr_snoop ? P(SNOOPX, FWD) : 0;364364+ } else {365365+ val |= P(SNOOP, NONE);363366 }364367365368 return val;
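The ds.c fix composes the two single-bit OMR fields into the 2-bit snoop response code as `omr_snoop | (omr_promoted << 1)`, where the old code OR'd both bits into bit 0 and could never produce codes 2 or 3. A minimal sketch of the corrected composition:

```c
#include <assert.h>

/* Bit 0 = omr_snoop, bit 1 = omr_promoted, matching the corrected hunk.
 * The four resulting codes map to HIT / MISS / HIT / NONE in the
 * surrounding switch (with HITM checked separately first). */
static unsigned int snoop_code(unsigned int omr_snoop,
			       unsigned int omr_promoted)
{
	return (omr_snoop & 1) | ((omr_promoted & 1) << 1);
}
```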
+61-57
arch/x86/hyperv/hv_crash.c
···107107 cpu_relax();108108}109109110110-/* This cannot be inlined as it needs stack */111111-static noinline __noclone void hv_crash_restore_tss(void)110110+static void hv_crash_restore_tss(void)112111{113112 load_TR_desc();114113}115114116116-/* This cannot be inlined as it needs stack */117117-static noinline void hv_crash_clear_kernpt(void)115115+static void hv_crash_clear_kernpt(void)118116{119117 pgd_t *pgd;120118 p4d_t *p4d;···123125 native_p4d_clear(p4d);124126}125127126126-/*127127- * This is the C entry point from the asm glue code after the disable hypercall.128128- * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel129129- * page tables with our below 4G page identity mapped, but using a temporary130130- * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not131131- * available. We restore kernel GDT, and rest of the context, and continue132132- * to kexec.133133- */134134-static asmlinkage void __noreturn hv_crash_c_entry(void)128128+129129+static void __noreturn hv_crash_handle(void)135130{136136- struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;137137-138138- /* first thing, restore kernel gdt */139139- native_load_gdt(&ctxt->gdtr);140140-141141- asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));142142- asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));143143-144144- asm volatile("movw %%ax, %%ds" : : "a"(ctxt->ds));145145- asm volatile("movw %%ax, %%es" : : "a"(ctxt->es));146146- asm volatile("movw %%ax, %%fs" : : "a"(ctxt->fs));147147- asm volatile("movw %%ax, %%gs" : : "a"(ctxt->gs));148148-149149- native_wrmsrq(MSR_IA32_CR_PAT, ctxt->pat);150150- asm volatile("movq %0, %%cr0" : : "r"(ctxt->cr0));151151-152152- asm volatile("movq %0, %%cr8" : : "r"(ctxt->cr8));153153- asm volatile("movq %0, %%cr4" : : "r"(ctxt->cr4));154154- asm volatile("movq %0, %%cr2" : : "r"(ctxt->cr4));155155-156156- native_load_idt(&ctxt->idtr);157157- native_wrmsrq(MSR_GS_BASE, ctxt->gsbase);158158- native_wrmsrq(MSR_EFER, 
ctxt->efer);159159-160160- /* restore the original kernel CS now via far return */161161- asm volatile("movzwq %0, %%rax\n\t"162162- "pushq %%rax\n\t"163163- "pushq $1f\n\t"164164- "lretq\n\t"165165- "1:nop\n\t" : : "m"(ctxt->cs) : "rax");166166-167167- /* We are in asmlinkage without stack frame, hence make C function168168- * calls which will buy stack frames.169169- */170131 hv_crash_restore_tss();171132 hv_crash_clear_kernpt();172133···134177135178 hv_panic_timeout_reboot();136179}137137-/* Tell gcc we are using lretq long jump in the above function intentionally */180180+181181+/*182182+ * __naked functions do not permit function calls, not even to __always_inline183183+ * functions that only contain asm() blocks themselves. So use a macro instead.184184+ */185185+#define hv_wrmsr(msr, val) \186186+ asm volatile("wrmsr" :: "c"(msr), "a"((u32)val), "d"((u32)(val >> 32)) : "memory")187187+188188+/*189189+ * This is the C entry point from the asm glue code after the disable hypercall.190190+ * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel191191+ * page tables with our below 4G page identity mapped, but using a temporary192192+ * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not193193+ * available. 
We restore kernel GDT, and rest of the context, and continue194194+ * to kexec.195195+ */196196+static void __naked hv_crash_c_entry(void)197197+{198198+ /* first thing, restore kernel gdt */199199+ asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));200200+201201+ asm volatile("movw %0, %%ss\n\t"202202+ "movq %1, %%rsp"203203+ :: "m"(hv_crash_ctxt.ss), "m"(hv_crash_ctxt.rsp));204204+205205+ asm volatile("movw %0, %%ds" : : "m"(hv_crash_ctxt.ds));206206+ asm volatile("movw %0, %%es" : : "m"(hv_crash_ctxt.es));207207+ asm volatile("movw %0, %%fs" : : "m"(hv_crash_ctxt.fs));208208+ asm volatile("movw %0, %%gs" : : "m"(hv_crash_ctxt.gs));209209+210210+ hv_wrmsr(MSR_IA32_CR_PAT, hv_crash_ctxt.pat);211211+ asm volatile("movq %0, %%cr0" : : "r"(hv_crash_ctxt.cr0));212212+213213+ asm volatile("movq %0, %%cr8" : : "r"(hv_crash_ctxt.cr8));214214+ asm volatile("movq %0, %%cr4" : : "r"(hv_crash_ctxt.cr4));215215+ asm volatile("movq %0, %%cr2" : : "r"(hv_crash_ctxt.cr2));216216+217217+ asm volatile("lidt %0" : : "m" (hv_crash_ctxt.idtr));218218+ hv_wrmsr(MSR_GS_BASE, hv_crash_ctxt.gsbase);219219+ hv_wrmsr(MSR_EFER, hv_crash_ctxt.efer);220220+221221+ /* restore the original kernel CS now via far return */222222+ asm volatile("pushq %q0\n\t"223223+ "pushq %q1\n\t"224224+ "lretq"225225+ :: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));226226+}227227+/* Tell objtool we are using lretq long jump in the above function intentionally */138228STACK_FRAME_NON_STANDARD(hv_crash_c_entry);139229140230static void hv_mark_tss_not_busy(void)···199195{200196 struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;201197202202- asm volatile("movq %%rsp,%0" : "=m"(ctxt->rsp));198198+ ctxt->rsp = current_stack_pointer;203199204200 ctxt->cr0 = native_read_cr0();205201 ctxt->cr4 = native_read_cr4();206202207207- asm volatile("movq %%cr2, %0" : "=a"(ctxt->cr2));208208- asm volatile("movq %%cr8, %0" : "=a"(ctxt->cr8));203203+ asm volatile("movq %%cr2, %0" : "=r"(ctxt->cr2));204204+ asm volatile("movq %%cr8, %0" 
: "=r"(ctxt->cr8));209205210210- asm volatile("movl %%cs, %%eax" : "=a"(ctxt->cs));211211- asm volatile("movl %%ss, %%eax" : "=a"(ctxt->ss));212212- asm volatile("movl %%ds, %%eax" : "=a"(ctxt->ds));213213- asm volatile("movl %%es, %%eax" : "=a"(ctxt->es));214214- asm volatile("movl %%fs, %%eax" : "=a"(ctxt->fs));215215- asm volatile("movl %%gs, %%eax" : "=a"(ctxt->gs));206206+ asm volatile("movw %%cs, %0" : "=m"(ctxt->cs));207207+ asm volatile("movw %%ss, %0" : "=m"(ctxt->ss));208208+ asm volatile("movw %%ds, %0" : "=m"(ctxt->ds));209209+ asm volatile("movw %%es, %0" : "=m"(ctxt->es));210210+ asm volatile("movw %%fs, %0" : "=m"(ctxt->fs));211211+ asm volatile("movw %%gs, %0" : "=m"(ctxt->gs));216212217213 native_store_gdt(&ctxt->gdtr);218214 store_idt(&ctxt->idtr);
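The `hv_wrmsr()` macro above feeds WRMSR its 64-bit value split across EDX:EAX via `"a"((u32)val)` and `"d"((u32)(val >> 32))`. A small host-side check that this split and the corresponding rejoin round-trip correctly (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Split a 64-bit MSR value the way WRMSR consumes it: low half in EAX,
 * high half in EDX, exactly as hv_wrmsr() passes its operands. */
static void msr_split(uint64_t val, uint32_t *eax, uint32_t *edx)
{
	*eax = (uint32_t)val;
	*edx = (uint32_t)(val >> 32);
}

/* Rejoin as RDMSR would report it. */
static uint64_t msr_join(uint32_t eax, uint32_t edx)
{
	return (uint64_t)edx << 32 | eax;
}
```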
···1894189418951895static inline void try_to_enable_x2apic(int remap_mode) { }18961896static inline void __x2apic_enable(void) { }18971897+static inline void __x2apic_disable(void) { }18971898#endif /* !CONFIG_X86_X2APIC */1898189918991900void __init enable_IR_x2apic(void)···24572456 if (x2apic_mode) {24582457 __x2apic_enable();24592458 } else {24592459+ if (x2apic_enabled()) {24602460+ pr_warn_once("x2apic: re-enabled by firmware during resume. Disabling\n");24612461+ __x2apic_disable();24622462+ }24632463+24602464 /*24612465 * Make sure the APICBASE points to the right address24622466 *
+16-2
arch/x86/kernel/apic/x2apic_uv_x.c
···17081708 struct uv_hub_info_s *new_hub;1709170917101710 /* Allocate & fill new per hub info list */17111711- new_hub = (bid == 0) ? &uv_hub_info_node017121712- : kzalloc_node(bytes, GFP_KERNEL, uv_blade_to_node(bid));17111711+ if (bid == 0) {17121712+ new_hub = &uv_hub_info_node0;17131713+ } else {17141714+ int nid;17151715+17161716+ /*17171717+ * Deconfigured sockets are mapped to SOCK_EMPTY. Use17181718+ * NUMA_NO_NODE to allocate on a valid node.17191719+ */17201720+ nid = uv_blade_to_node(bid);17211721+ if (nid == SOCK_EMPTY)17221722+ nid = NUMA_NO_NODE;17231723+17241724+ new_hub = kzalloc_node(bytes, GFP_KERNEL, nid);17251725+ }17261726+17131727 if (WARN_ON_ONCE(!new_hub)) {17141728 /* do not kfree() bid 0, which is statically allocated */17151729 while (--bid > 0)
+11-6
arch/x86/kernel/cpu/mce/amd.c
···875875{876876 amd_reset_thr_limit(m->bank);877877878878- /* Clear MCA_DESTAT for all deferred errors even those logged in MCA_STATUS. */879879- if (m->status & MCI_STATUS_DEFERRED)880880- mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);878878+ if (mce_flags.smca) {879879+ /*880880+ * Clear MCA_DESTAT for all deferred errors even those881881+ * logged in MCA_STATUS.882882+ */883883+ if (m->status & MCI_STATUS_DEFERRED)884884+ mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);881885882882- /* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */883883- if (m->kflags & MCE_CHECK_DFR_REGS)884884- return;886886+ /* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */887887+ if (m->kflags & MCE_CHECK_DFR_REGS)888888+ return;889889+ }885890886891 mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);887892}
···451451452452 {{"_DSM",453453 METHOD_4ARGS(ACPI_TYPE_BUFFER, ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER,454454- ACPI_TYPE_ANY | ACPI_TYPE_PACKAGE) |454454+ ACPI_TYPE_PACKAGE | ACPI_TYPE_ANY) |455455 ARG_COUNT_IS_MINIMUM,456456 METHOD_RETURNS(ACPI_RTYPE_ALL)}}, /* Must return a value, but it can be of any type */457457
-3
drivers/acpi/bus.c
···818818 if (list_empty(&adev->pnp.ids))819819 return NULL;820820821821- if (adev->pnp.type.backlight)822822- return adev;823823-824821 return acpi_primary_dev_companion(adev, dev);825822}826823
+1-1
drivers/acpi/osl.c
···16811681 * Use acpi_os_map_generic_address to pre-map the reset16821682 * register if it's in system memory.16831683 */16841684- void *rv;16841684+ void __iomem *rv;1685168516861686 rv = acpi_os_map_generic_address(&acpi_gbl_FADT.reset_register);16871687 pr_debug("%s: Reset register mapping %s\n", __func__,
···142142 _pin: PhantomPinned,143143}144144145145+// We do not define any ops. For now, used only to check identity of vmas.146146+static BINDER_VM_OPS: bindings::vm_operations_struct = pin_init::zeroed();147147+148148+// To ensure that we do not accidentally install pages into or zap pages from the wrong vma, we149149+// check its vm_ops and private data before using it.150150+fn check_vma(vma: &virt::VmaRef, owner: *const ShrinkablePageRange) -> Option<&virt::VmaMixedMap> {151151+ // SAFETY: Just reading the vm_ops pointer of any active vma is safe.152152+ let vm_ops = unsafe { (*vma.as_ptr()).vm_ops };153153+ if !ptr::eq(vm_ops, &BINDER_VM_OPS) {154154+ return None;155155+ }156156+157157+ // SAFETY: Reading the vm_private_data pointer of a binder-owned vma is safe.158158+ let vm_private_data = unsafe { (*vma.as_ptr()).vm_private_data };159159+ // The ShrinkablePageRange is only dropped when the Process is dropped, which only happens once160160+ // the file's ->release handler is invoked, which means the ShrinkablePageRange outlives any161161+ // VMA associated with it, so there can't be any false positives due to pointer reuse here.162162+ if !ptr::eq(vm_private_data, owner.cast()) {163163+ return None;164164+ }165165+166166+ vma.as_mixedmap_vma()167167+}168168+145169struct Inner {146170 /// Array of pages.147171 ///···332308 inner.size = num_pages;333309 inner.vma_addr = vma.start();334310311311+ // This pointer is only used for comparison - it's not dereferenced.312312+ //313313+ // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on314314+ // `vm_private_data`.315315+ unsafe {316316+ (*vma.as_ptr()).vm_private_data = ptr::from_ref(self).cast_mut().cast::<c_void>()317317+ };318318+319319+ // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on320320+ // `vm_ops`.321321+ unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS };322322+335323 Ok(num_pages)336324 }337325···435399 //436400 // Using `mmput_async` avoids this, 
because then the `mm` cleanup is instead queued to a437401 // workqueue.438438- MmWithUser::into_mmput_async(self.mm.mmget_not_zero().ok_or(ESRCH)?)439439- .mmap_read_lock()440440- .vma_lookup(vma_addr)441441- .ok_or(ESRCH)?442442- .as_mixedmap_vma()443443- .ok_or(ESRCH)?444444- .vm_insert_page(user_page_addr, &new_page)445445- .inspect_err(|err| {446446- pr_warn!(447447- "Failed to vm_insert_page({}): vma_addr:{} i:{} err:{:?}",448448- user_page_addr,449449- vma_addr,450450- i,451451- err452452- )453453- })?;402402+ let mm = MmWithUser::into_mmput_async(self.mm.mmget_not_zero().ok_or(ESRCH)?);403403+ {404404+ let vma_read;405405+ let mmap_read;406406+ let vma = if let Some(ret) = mm.lock_vma_under_rcu(vma_addr) {407407+ vma_read = ret;408408+ check_vma(&vma_read, self)409409+ } else {410410+ mmap_read = mm.mmap_read_lock();411411+ mmap_read412412+ .vma_lookup(vma_addr)413413+ .and_then(|vma| check_vma(vma, self))414414+ };415415+416416+ match vma {417417+ Some(vma) => vma.vm_insert_page(user_page_addr, &new_page)?,418418+ None => return Err(ESRCH),419419+ }420420+ }454421455422 let inner = self.lock.lock();456423···706667 let mmap_read;707668 let mm_mutex;708669 let vma_addr;670670+ let range_ptr;709671710672 {711673 // CAST: The `list_head` field is first in `PageInfo`.712674 let info = item as *mut PageInfo;713675 // SAFETY: The `range` field of `PageInfo` is immutable.714714- let range = unsafe { &*((*info).range) };676676+ range_ptr = unsafe { (*info).range };677677+ // SAFETY: The `range` outlives its `PageInfo` values.678678+ let range = unsafe { &*range_ptr };715679716680 mm = match range.mm.mmget_not_zero() {717681 Some(mm) => MmWithUser::into_mmput_async(mm),···759717 // SAFETY: The lru lock is locked when this method is called.760718 unsafe { bindings::spin_unlock(&raw mut (*lru).lock) };761719762762- if let Some(vma) = mmap_read.vma_lookup(vma_addr) {763763- let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);764764- 
vma.zap_page_range_single(user_page_addr, PAGE_SIZE);720720+ if let Some(unchecked_vma) = mmap_read.vma_lookup(vma_addr) {721721+ if let Some(vma) = check_vma(unchecked_vma, range_ptr) {722722+ let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);723723+ vma.zap_page_range_single(user_page_addr, PAGE_SIZE);724724+ }765725 }766726767727 drop(mmap_read);
+2-1
drivers/android/binder/process.rs
···12951295 }1296129612971297 pub(crate) fn dead_binder_done(&self, cookie: u64, thread: &Thread) {12981298- if let Some(death) = self.inner.lock().pull_delivered_death(cookie) {12981298+ let death = self.inner.lock().pull_delivered_death(cookie);12991299+ if let Some(death) = death {12991300 death.set_notification_done(thread);13001301 }13011302 }
+33-2
drivers/android/binder/range_alloc/array.rs
···118118 size: usize,119119 is_oneway: bool,120120 pid: Pid,121121- ) -> Result<usize> {121121+ ) -> Result<(usize, bool)> {122122 // Compute new value of free_oneway_space, which is set only on success.123123 let new_oneway_space = if is_oneway {124124 match self.free_oneway_space.checked_sub(size) {···146146 .ok()147147 .unwrap();148148149149- Ok(insert_at_offset)149149+ // Start detecting spammers once we have less than 20%150150+ // of async space left (which is less than 10% of total151151+ // buffer size).152152+ //153153+ // (This will short-circuit, so `low_oneway_space` is154154+ // only called when necessary.)155155+ let oneway_spam_detected =156156+ is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);157157+158158+ Ok((insert_at_offset, oneway_spam_detected))159159+ }160160+161161+ /// Find the amount and size of buffers allocated by the current caller.162162+ ///163163+ /// The idea is that once we cross the threshold, whoever is responsible164164+ /// for the low async space is likely to try to send another async transaction,165165+ /// and at some point we'll catch them in the act. This is more efficient166166+ /// than keeping a map per pid.167167+ fn low_oneway_space(&self, calling_pid: Pid) -> bool {168168+ let mut total_alloc_size = 0;169169+ let mut num_buffers = 0;170170+171171+ // Warn if this pid has more than 50 transactions, or more than 50% of172172+ // async space (which is 25% of total buffer size). Oneway spam is only173173+ // detected when the threshold is exceeded.174174+ for range in &self.ranges {175175+ if range.state.is_oneway() && range.state.pid() == calling_pid {176176+ total_alloc_size += range.size;177177+ num_buffers += 1;178178+ }179179+ }180180+ num_buffers > 50 || total_alloc_size > self.size / 4150181 }151182152183 pub(crate) fn reservation_abort(&mut self, offset: usize) -> Result<FreedRange> {
···164164 self.free_oneway_space165165 };166166167167- // Start detecting spammers once we have less than 20%168168- // of async space left (which is less than 10% of total169169- // buffer size).170170- //171171- // (This will short-circut, so `low_oneway_space` is172172- // only called when necessary.)173173- let oneway_spam_detected =174174- is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);175175-176167 let (found_size, found_off, tree_node, free_tree_node) = match self.find_best_match(size) {177168 None => {178169 pr_warn!("ENOSPC from range_alloc.reserve_new - size: {}", size);···193202 self.tree.insert(tree_node);194203 self.free_tree.insert(free_tree_node);195204 }205205+206206+ // Start detecting spammers once we have less than 20%207207+ // of async space left (which is less than 10% of total208208+ // buffer size).209209+ //210210+ // (This will short-circuit, so `low_oneway_space` is211211+ // only called when necessary.)212212+ let oneway_spam_detected =213213+ is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);196214197215 Ok((found_off, oneway_spam_detected))198216 }
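The oneway-spam policy moved around in the two hunks above (flag a caller once it holds more than 50 oneway buffers, or more than 25% of the total pool) is plain threshold arithmetic. A hypothetical userspace C sketch, with a stand-in `struct range` in place of the in-kernel range-allocator types:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for one allocated range owned by a pid. */
struct range {
	int pid;
	bool is_oneway;
	size_t size;
};

/*
 * Mirror of the detection policy: flag a caller once it holds more
 * than 50 oneway buffers, or more than 25% of the total pool size
 * (i.e. more than 50% of the async half of the buffer).
 */
static bool low_oneway_space(const struct range *ranges, size_t n,
			     size_t pool_size, int calling_pid)
{
	size_t total_alloc_size = 0;
	size_t num_buffers = 0;

	for (size_t i = 0; i < n; i++) {
		if (ranges[i].is_oneway && ranges[i].pid == calling_pid) {
			total_alloc_size += ranges[i].size;
			num_buffers++;
		}
	}
	return num_buffers > 50 || total_alloc_size > pool_size / 4;
}
```

As in the patch, the check is only reached after the cheaper `new_oneway_space < size / 10` test short-circuits, so the linear scan over ranges runs only when async space is already low.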
+6-11
drivers/android/binder/thread.rs
···1015101510161016 // Copy offsets if there are any.10171017 if offsets_size > 0 {10181018- {10191019- let mut reader =10201020- UserSlice::new(UserPtr::from_addr(trd_data_ptr.offsets as _), offsets_size)10211021- .reader();10221022- alloc.copy_into(&mut reader, aligned_data_size, offsets_size)?;10231023- }10181018+ let mut offsets_reader =10191019+ UserSlice::new(UserPtr::from_addr(trd_data_ptr.offsets as _), offsets_size)10201020+ .reader();1024102110251022 let offsets_start = aligned_data_size;10261023 let offsets_end = aligned_data_size + offsets_size;···10381041 .step_by(size_of::<u64>())10391042 .enumerate()10401043 {10411041- let offset: usize = view10421042- .alloc10431043- .read::<u64>(index_offset)?10441044- .try_into()10451045- .map_err(|_| EINVAL)?;10441044+ let offset = offsets_reader.read::<u64>()?;10451045+ view.alloc.write(index_offset, &offset)?;10461046+ let offset: usize = offset.try_into().map_err(|_| EINVAL)?;1046104710471048 if offset < end_of_previous_object || !is_aligned(offset, size_of::<u32>()) {10481049 pr_warn!("Got transaction with invalid offset.");
···381381}382382__exitcall(deferred_probe_exit);383383384384+int __device_set_driver_override(struct device *dev, const char *s, size_t len)385385+{386386+ const char *new, *old;387387+ char *cp;388388+389389+ if (!s)390390+ return -EINVAL;391391+392392+ /*393393+ * The stored value will be used in sysfs show callback (sysfs_emit()),394394+ * which has a length limit of PAGE_SIZE and adds a trailing newline.395395+ * Thus we can store one character less to avoid truncation during sysfs396396+ * show.397397+ */398398+ if (len >= (PAGE_SIZE - 1))399399+ return -EINVAL;400400+401401+ /*402402+ * Compute the real length of the string in case userspace sends us a403403+ * bunch of \0 characters like python likes to do.404404+ */405405+ len = strlen(s);406406+407407+ if (!len) {408408+ /* Empty string passed - clear override */409409+ spin_lock(&dev->driver_override.lock);410410+ old = dev->driver_override.name;411411+ dev->driver_override.name = NULL;412412+ spin_unlock(&dev->driver_override.lock);413413+ kfree(old);414414+415415+ return 0;416416+ }417417+418418+ cp = strnchr(s, len, '\n');419419+ if (cp)420420+ len = cp - s;421421+422422+ new = kstrndup(s, len, GFP_KERNEL);423423+ if (!new)424424+ return -ENOMEM;425425+426426+ spin_lock(&dev->driver_override.lock);427427+ old = dev->driver_override.name;428428+ if (cp != s) {429429+ dev->driver_override.name = new;430430+ spin_unlock(&dev->driver_override.lock);431431+ } else {432432+ /* "\n" passed - clear override */433433+ dev->driver_override.name = NULL;434434+ spin_unlock(&dev->driver_override.lock);435435+436436+ kfree(new);437437+ }438438+ kfree(old);439439+440440+ return 0;441441+}442442+EXPORT_SYMBOL_GPL(__device_set_driver_override);443443+384444/**385445 * device_is_bound() - Check if device is bound to a driver386446 * @dev: device to check
+5-32
drivers/base/platform.c
···603603 kfree(pa->pdev.dev.platform_data);604604 kfree(pa->pdev.mfd_cell);605605 kfree(pa->pdev.resource);606606- kfree(pa->pdev.driver_override);607606 kfree(pa);608607}609608···13051306}13061307static DEVICE_ATTR_RO(numa_node);1307130813081308-static ssize_t driver_override_show(struct device *dev,13091309- struct device_attribute *attr, char *buf)13101310-{13111311- struct platform_device *pdev = to_platform_device(dev);13121312- ssize_t len;13131313-13141314- device_lock(dev);13151315- len = sysfs_emit(buf, "%s\n", pdev->driver_override);13161316- device_unlock(dev);13171317-13181318- return len;13191319-}13201320-13211321-static ssize_t driver_override_store(struct device *dev,13221322- struct device_attribute *attr,13231323- const char *buf, size_t count)13241324-{13251325- struct platform_device *pdev = to_platform_device(dev);13261326- int ret;13271327-13281328- ret = driver_set_override(dev, &pdev->driver_override, buf, count);13291329- if (ret)13301330- return ret;13311331-13321332- return count;13331333-}13341334-static DEVICE_ATTR_RW(driver_override);13351335-13361309static struct attribute *platform_dev_attrs[] = {13371310 &dev_attr_modalias.attr,13381311 &dev_attr_numa_node.attr,13391339- &dev_attr_driver_override.attr,13401312 NULL,13411313};13421314···13471377{13481378 struct platform_device *pdev = to_platform_device(dev);13491379 struct platform_driver *pdrv = to_platform_driver(drv);13801380+ int ret;1350138113511382 /* When driver_override is set, only bind to the matching driver */13521352- if (pdev->driver_override)13531353- return !strcmp(pdev->driver_override, drv->name);13831383+ ret = device_match_driver_override(dev, drv);13841384+ if (ret >= 0)13851385+ return ret;1354138613551387 /* Attempt an OF style match first */13561388 if (of_driver_match_device(dev, drv))···14881516const struct bus_type platform_bus_type = {14891517 .name = "platform",14901518 .dev_groups = platform_dev_groups,15191519+ .driver_override = true,14911520 .match = 
platform_match,14921521 .uevent = platform_uevent,14931522 .probe = platform_probe,
···3636 * that's not listed in simple_pm_bus_of_match. We don't want to do any3737 * of the simple-pm-bus tasks for these devices, so return early.3838 */3939- if (pdev->driver_override)3939+ if (device_has_driver_override(&pdev->dev))4040 return 0;41414242 match = of_match_device(dev->driver->of_match_table, dev);···7878{7979 const void *data = of_device_get_match_data(&pdev->dev);80808181- if (pdev->driver_override || data)8181+ if (device_has_driver_override(&pdev->dev) || data)8282 return;83838484 dev_dbg(&pdev->dev, "%s\n", __func__);
+2-2
drivers/cache/ax45mp_cache.c
···178178179179static int __init ax45mp_cache_init(void)180180{181181- struct device_node *np;182181 struct resource res;183182 int ret;184183185185- np = of_find_matching_node(NULL, ax45mp_cache_ids);184184+ struct device_node *np __free(device_node) =185185+ of_find_matching_node(NULL, ax45mp_cache_ids);186186 if (!of_device_is_available(np))187187 return -ENODEV;188188
+2-2
drivers/cache/starfive_starlink_cache.c
···102102103103static int __init starlink_cache_init(void)104104{105105- struct device_node *np;106105 u32 block_size;107106 int ret;108107109109- np = of_find_matching_node(NULL, starlink_cache_ids);108108+ struct device_node *np __free(device_node) =109109+ of_find_matching_node(NULL, starlink_cache_ids);110110 if (!of_device_is_available(np))111111 return -ENODEV;112112
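The `__free(device_node)` conversions in the two cache drivers above rely on the kernel's scope-based cleanup machinery, which is built on the compiler's `cleanup` variable attribute. A hypothetical userspace analogue (GCC/Clang extension; `free_ptr`, `use_buffer`, and the local `__free` macro are illustrative, not kernel API):

```c
#include <stdlib.h>

static int frees;

static void free_ptr(void *p)
{
	free(*(void **)p);
	frees++;
}

/* Userspace analogue of the kernel's __free() scope-based cleanup. */
#define __free(fn) __attribute__((cleanup(fn)))

static int use_buffer(int fail)
{
	char *buf __free(free_ptr) = malloc(32);

	if (!buf)
		return -1;
	if (fail)
		return -1;	/* buf is still freed on this early return */
	return 0;		/* ...and on the normal return path too */
}
```

This is why the patches can drop the explicit `of_node_put()` on the early `-ENODEV` path: the reference is released automatically when `np` goes out of scope, on every exit from the function.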
+1-2
drivers/clk/imx/clk-scu.c
···706706 if (ret)707707 goto put_device;708708709709- ret = driver_set_override(&pdev->dev, &pdev->driver_override,710710- "imx-scu-clk", strlen("imx-scu-clk"));709709+ ret = device_set_driver_override(&pdev->dev, "imx-scu-clk");711710 if (ret)712711 goto put_device;713712
-10
drivers/cpuidle/cpuidle.c
···359359int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,360360 bool *stop_tick)361361{362362- /*363363- * If there is only a single idle state (or none), there is nothing364364- * meaningful for the governor to choose. Skip the governor and365365- * always use state 0 with the tick running.366366- */367367- if (drv->state_count <= 1) {368368- *stop_tick = false;369369- return 0;370370- }371371-372362 return cpuidle_curr_governor->select(drv, dev, stop_tick);373363}374364
+1-3
drivers/crypto/ccp/sev-dev.c
···24082408 * in Firmware state on failure. Use snp_reclaim_pages() to24092409 * transition either case back to Hypervisor-owned state.24102410 */24112411- if (snp_reclaim_pages(__pa(data), 1, true)) {24122412- snp_leak_pages(__page_to_pfn(status_page), 1);24112411+ if (snp_reclaim_pages(__pa(data), 1, true))24132412 return -EFAULT;24142414- }24152413 }2416241424172415 if (ret)
+7
drivers/crypto/padlock-sha.c
···332332 if (!x86_match_cpu(padlock_sha_ids) || !boot_cpu_has(X86_FEATURE_PHE_EN))333333 return -ENODEV;334334335335+ /*336336+ * Skip family 0x07 and newer used by Zhaoxin processors,337337+ * as the driver's self-tests fail on these CPUs.338338+ */339339+ if (c->x86 >= 0x07)340340+ return -ENODEV;341341+335342 /* Register the newly added algorithm module if on *336343 * VIA Nano processor, or else just do as before */337344 if (c->x86_model < 0x0f) {
···205205 return 0;206206}207207208208-static int ffa_rxtx_unmap(u16 vm_id)208208+static int ffa_rxtx_unmap(void)209209{210210 ffa_value_t ret;211211212212 invoke_ffa_fn((ffa_value_t){213213- .a0 = FFA_RXTX_UNMAP, .a1 = PACK_TARGET_INFO(vm_id, 0),213213+ .a0 = FFA_RXTX_UNMAP,214214 }, &ret);215215216216 if (ret.a0 == FFA_ERROR)···2097209720982098 pr_err("failed to setup partitions\n");20992099 ffa_notifications_cleanup();21002100- ffa_rxtx_unmap(drv_info->vm_id);21002100+ ffa_rxtx_unmap();21012101free_pages:21022102 if (drv_info->tx_buffer)21032103 free_pages_exact(drv_info->tx_buffer, rxtx_bufsz);···21122112{21132113 ffa_notifications_cleanup();21142114 ffa_partitions_cleanup();21152115- ffa_rxtx_unmap(drv_info->vm_id);21152115+ ffa_rxtx_unmap();21162116 free_pages_exact(drv_info->tx_buffer, drv_info->rxtx_bufsz);21172117 free_pages_exact(drv_info->rx_buffer, drv_info->rxtx_bufsz);21182118 kfree(drv_info);
+2-2
drivers/firmware/arm_scmi/notify.c
···10661066 * since at creation time we usually want to have all setup and ready before10671067 * events really start flowing.10681068 *10691069- * Return: A properly refcounted handler on Success, NULL on Failure10691069+ * Return: A properly refcounted handler on Success, ERR_PTR on Failure10701070 */10711071static inline struct scmi_event_handler *10721072__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,···11131113 }11141114 mutex_unlock(&ni->pending_mtx);1115111511161116- return hndl;11161116+ return hndl ?: ERR_PTR(-ENODEV);11171117}1118111811191119static struct scmi_event_handler *
+2-2
drivers/firmware/arm_scmi/protocols.h
···189189190190/**191191 * struct scmi_iterator_state - Iterator current state descriptor192192- * @desc_index: Starting index for the current mulit-part request.192192+ * @desc_index: Starting index for the current multi-part request.193193 * @num_returned: Number of returned items in the last multi-part reply.194194 * @num_remaining: Number of remaining items in the multi-part message.195195 * @max_resources: Maximum acceptable number of items, configured by the caller196196 * depending on the underlying resources that it is querying.197197 * @loop_idx: The iterator loop index in the current multi-part reply.198198- * @rx_len: Size in bytes of the currenly processed message; it can be used by198198+ * @rx_len: Size in bytes of the currently processed message; it can be used by199199 * the user of the iterator to verify a reply size.200200 * @priv: Optional pointer to some additional state-related private data setup201201 * by the caller during the iterations.
+3-2
drivers/firmware/arm_scpi.c
···18181919#include <linux/bitmap.h>2020#include <linux/bitfield.h>2121+#include <linux/cleanup.h>2122#include <linux/device.h>2223#include <linux/err.h>2324#include <linux/export.h>···941940 int idx = scpi_drvinfo->num_chans;942941 struct scpi_chan *pchan = scpi_drvinfo->channels + idx;943942 struct mbox_client *cl = &pchan->cl;944944- struct device_node *shmem = of_parse_phandle(np, "shmem", idx);943943+ struct device_node *shmem __free(device_node) =944944+ of_parse_phandle(np, "shmem", idx);945945946946 if (!of_match_node(shmem_of_match, shmem))947947 return -ENXIO;948948949949 ret = of_address_to_resource(shmem, 0, &res);950950- of_node_put(shmem);951950 if (ret) {952951 dev_err(dev, "failed to get SCPI payload mem resource\n");953952 return ret;
+18-6
drivers/firmware/cirrus/cs_dsp.c
···16101610 region_name);1611161116121612 if (reg) {16131613+ /*16141614+ * Although we expect the underlying bus does not require16151615+ * physically-contiguous buffers, we pessimistically use16161616+ * a temporary buffer instead of trusting that the16171617+ * alignment of region->data is ok.16181618+ */16131619 region_len = le32_to_cpu(region->len);16141620 if (region_len > buf_len) {16151621 buf_len = round_up(region_len, PAGE_SIZE);16161616- kfree(buf);16171617- buf = kmalloc(buf_len, GFP_KERNEL | GFP_DMA);16221622+ vfree(buf);16231623+ buf = vmalloc(buf_len);16181624 if (!buf) {16191625 ret = -ENOMEM;16201626 goto out_fw;···1649164316501644 ret = 0;16511645out_fw:16521652- kfree(buf);16461646+ vfree(buf);1653164716541648 if (ret == -EOVERFLOW)16551649 cs_dsp_err(dsp, "%s: file content overflows file data\n", file);···23372331 }2338233223392333 if (reg) {23342334+ /*23352335+ * Although we expect the underlying bus does not require23362336+ * physically-contiguous buffers, we pessimistically use23372337+ * a temporary buffer instead of trusting that the23382338+ * alignment of blk->data is ok.23392339+ */23402340 region_len = le32_to_cpu(blk->len);23412341 if (region_len > buf_len) {23422342 buf_len = round_up(region_len, PAGE_SIZE);23432343- kfree(buf);23442344- buf = kmalloc(buf_len, GFP_KERNEL | GFP_DMA);23432343+ vfree(buf);23442344+ buf = vmalloc(buf_len);23452345 if (!buf) {23462346 ret = -ENOMEM;23472347 goto out_fw;···2378236623792367 ret = 0;23802368out_fw:23812381- kfree(buf);23692369+ vfree(buf);2382237023832371 if (ret == -EOVERFLOW)23842372 cs_dsp_err(dsp, "%s: file content overflows file data\n", file);
+2
drivers/firmware/stratix10-rsu.c
···768768 rsu_async_status_callback);769769 if (ret) {770770 dev_err(dev, "Error, getting RSU status %i\n", ret);771771+ stratix10_svc_remove_async_client(priv->chan);771772 stratix10_svc_free_channel(priv->chan);773773+ return ret;772774 }773775774776 /* get DCMF version from firmware */
+126-102
drivers/firmware/stratix10-svc.c
···3737 * service layer will return error to FPGA manager when timeout occurs,3838 * timeout is set to 30 seconds (30 * 1000) at Intel Stratix10 SoC.3939 */4040-#define SVC_NUM_DATA_IN_FIFO 324040+#define SVC_NUM_DATA_IN_FIFO 84141#define SVC_NUM_CHANNEL 44242-#define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 2004242+#define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 20004343#define FPGA_CONFIG_STATUS_TIMEOUT_SEC 304444#define BYTE_TO_WORD_SIZE 445454646/* stratix10 service layer clients */4747#define STRATIX10_RSU "stratix10-rsu"4848-#define INTEL_FCS "intel-fcs"49485049/* Maximum number of SDM client IDs. */5150#define MAX_SDM_CLIENT_IDS 16···104105/**105106 * struct stratix10_svc - svc private data106107 * @stratix10_svc_rsu: pointer to stratix10 RSU device107107- * @intel_svc_fcs: pointer to the FCS device108108 */109109struct stratix10_svc {110110 struct platform_device *stratix10_svc_rsu;111111- struct platform_device *intel_svc_fcs;112111};113112114113/**···248251 * @num_active_client: number of active service client249252 * @node: list management250253 * @genpool: memory pool pointing to the memory region251251- * @task: pointer to the thread task which handles SMC or HVC call252252- * @svc_fifo: a queue for storing service message data253254 * @complete_status: state for completion254254- * @svc_fifo_lock: protect access to service message data queue255255 * @invoke_fn: function to issue secure monitor call or hypervisor call256256 * @svc: manages the list of client svc drivers257257+ * @sdm_lock: only allows a single command single response to SDM257258 * @actrl: async control structure258259 *259260 * This struct is used to create communication channels for service clients, to···264269 int num_active_client;265270 struct list_head node;266271 struct gen_pool *genpool;267267- struct task_struct *task;268268- struct kfifo svc_fifo;269272 struct completion complete_status;270270- spinlock_t svc_fifo_lock;271273 svc_invoke_fn *invoke_fn;272274 struct stratix10_svc 
*svc;275275+ struct mutex sdm_lock;273276 struct stratix10_async_ctrl actrl;274277};275278···276283 * @ctrl: pointer to service controller which is the provider of this channel277284 * @scl: pointer to service client which owns the channel278285 * @name: service client name associated with the channel286286+ * @task: pointer to the thread task which handles SMC or HVC call287287+ * @svc_fifo: a queue for storing service message data (separate fifo for every channel)288288+ * @svc_fifo_lock: protect access to service message data queue (locking pending fifo)279289 * @lock: protect access to the channel280290 * @async_chan: reference to asynchronous channel object for this channel281291 *···289293 struct stratix10_svc_controller *ctrl;290294 struct stratix10_svc_client *scl;291295 char *name;296296+ struct task_struct *task;297297+ struct kfifo svc_fifo;298298+ spinlock_t svc_fifo_lock;292299 spinlock_t lock;293300 struct stratix10_async_chan *async_chan;294301};···526527 */527528static int svc_normal_to_secure_thread(void *data)528529{529529- struct stratix10_svc_controller530530- *ctrl = (struct stratix10_svc_controller *)data;531531- struct stratix10_svc_data *pdata;532532- struct stratix10_svc_cb_data *cbdata;530530+ struct stratix10_svc_chan *chan = (struct stratix10_svc_chan *)data;531531+ struct stratix10_svc_controller *ctrl = chan->ctrl;532532+ struct stratix10_svc_data *pdata = NULL;533533+ struct stratix10_svc_cb_data *cbdata = NULL;533534 struct arm_smccc_res res;534535 unsigned long a0, a1, a2, a3, a4, a5, a6, a7;535536 int ret_fifo = 0;···554555 a6 = 0;555556 a7 = 0;556557557557- pr_debug("smc_hvc_shm_thread is running\n");558558+ pr_debug("%s: %s: Thread is running!\n", __func__, chan->name);558559559560 while (!kthread_should_stop()) {560560- ret_fifo = kfifo_out_spinlocked(&ctrl->svc_fifo,561561+ ret_fifo = kfifo_out_spinlocked(&chan->svc_fifo,561562 pdata, sizeof(*pdata),562562- &ctrl->svc_fifo_lock);563563+ &chan->svc_fifo_lock);563564564565 if 
(!ret_fifo)565566 continue;···568569 (unsigned int)pdata->paddr, pdata->command,569570 (unsigned int)pdata->size);570571572572+ /* SDM can only process one command at a time */573573+ pr_debug("%s: %s: Thread is waiting for mutex!\n",574574+ __func__, chan->name);575575+ if (mutex_lock_interruptible(&ctrl->sdm_lock)) {576576+ /* item already dequeued; notify client to unblock it */577577+ cbdata->status = BIT(SVC_STATUS_ERROR);578578+ cbdata->kaddr1 = NULL;579579+ cbdata->kaddr2 = NULL;580580+ cbdata->kaddr3 = NULL;581581+ if (pdata->chan->scl)582582+ pdata->chan->scl->receive_cb(pdata->chan->scl,583583+ cbdata);584584+ break;585585+ }586586+571587 switch (pdata->command) {572588 case COMMAND_RECONFIG_DATA_CLAIM:573589 svc_thread_cmd_data_claim(ctrl, pdata, cbdata);590590+ mutex_unlock(&ctrl->sdm_lock);574591 continue;575592 case COMMAND_RECONFIG:576593 a0 = INTEL_SIP_SMC_FPGA_CONFIG_START;···715700 break;716701 default:717702 pr_warn("it shouldn't happen\n");718718- break;703703+ mutex_unlock(&ctrl->sdm_lock);704704+ continue;719705 }720720- pr_debug("%s: before SMC call -- a0=0x%016x a1=0x%016x",721721- __func__,706706+ pr_debug("%s: %s: before SMC call -- a0=0x%016x a1=0x%016x",707707+ __func__, chan->name,722708 (unsigned int)a0,723709 (unsigned int)a1);724710 pr_debug(" a2=0x%016x\n", (unsigned int)a2);···728712 pr_debug(" a5=0x%016x\n", (unsigned int)a5);729713 ctrl->invoke_fn(a0, a1, a2, a3, a4, a5, a6, a7, &res);730714731731- pr_debug("%s: after SMC call -- res.a0=0x%016x",732732- __func__, (unsigned int)res.a0);715715+ pr_debug("%s: %s: after SMC call -- res.a0=0x%016x",716716+ __func__, chan->name, (unsigned int)res.a0);733717 pr_debug(" res.a1=0x%016x, res.a2=0x%016x",734718 (unsigned int)res.a1, (unsigned int)res.a2);735719 pr_debug(" res.a3=0x%016x\n", (unsigned int)res.a3);···744728 cbdata->kaddr2 = NULL;745729 cbdata->kaddr3 = NULL;746730 pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata);731731+ mutex_unlock(&ctrl->sdm_lock);747732 
continue;748733 }749734···818801 break;819802820803 }804804+805805+ mutex_unlock(&ctrl->sdm_lock);821806 }822807823808 kfree(cbdata);···17151696 if (!p_data)17161697 return -ENOMEM;1717169817181718- /* first client will create kernel thread */17191719- if (!chan->ctrl->task) {17201720- chan->ctrl->task =17211721- kthread_run_on_cpu(svc_normal_to_secure_thread,17221722- (void *)chan->ctrl,17231723- cpu, "svc_smc_hvc_thread");17241724- if (IS_ERR(chan->ctrl->task)) {16991699+ /* first caller creates the per-channel kthread */17001700+ if (!chan->task) {17011701+ struct task_struct *task;17021702+17031703+ task = kthread_run_on_cpu(svc_normal_to_secure_thread,17041704+ (void *)chan,17051705+ cpu, "svc_smc_hvc_thread");17061706+ if (IS_ERR(task)) {17251707 dev_err(chan->ctrl->dev,17261708 "failed to create svc_smc_hvc_thread\n");17271709 kfree(p_data);17281710 return -EINVAL;17291711 }17121712+17131713+ spin_lock(&chan->lock);17141714+ if (chan->task) {17151715+ /* another caller won the race; discard our thread */17161716+ spin_unlock(&chan->lock);17171717+ kthread_stop(task);17181718+ } else {17191719+ chan->task = task;17201720+ spin_unlock(&chan->lock);17211721+ }17301722 }1731172317321732- pr_debug("%s: sent P-va=%p, P-com=%x, P-size=%u\n", __func__,17331733- p_msg->payload, p_msg->command,17241724+ pr_debug("%s: %s: sent P-va=%p, P-com=%x, P-size=%u\n", __func__,17251725+ chan->name, p_msg->payload, p_msg->command,17341726 (unsigned int)p_msg->payload_length);1735172717361728 if (list_empty(&svc_data_mem)) {···17771747 p_data->arg[2] = p_msg->arg[2];17781748 p_data->size = p_msg->payload_length;17791749 p_data->chan = chan;17801780- pr_debug("%s: put to FIFO pa=0x%016x, cmd=%x, size=%u\n", __func__,17811781- (unsigned int)p_data->paddr, p_data->command,17821782- (unsigned int)p_data->size);17831783- ret = kfifo_in_spinlocked(&chan->ctrl->svc_fifo, p_data,17501750+ pr_debug("%s: %s: put to FIFO pa=0x%016x, cmd=%x, size=%u\n",17511751+ __func__,17521752+ 
chan->name,
+			  (unsigned int)p_data->paddr,
+			  p_data->command,
+			  (unsigned int)p_data->size);
+
+	ret = kfifo_in_spinlocked(&chan->svc_fifo, p_data,
 				  sizeof(*p_data),
-				  &chan->ctrl->svc_fifo_lock);
+				  &chan->svc_fifo_lock);
 
 	kfree(p_data);
 
···
  */
 void stratix10_svc_done(struct stratix10_svc_chan *chan)
 {
-	/* stop thread when thread is running AND only one active client */
-	if (chan->ctrl->task && chan->ctrl->num_active_client <= 1) {
-		pr_debug("svc_smc_hvc_shm_thread is stopped\n");
-		kthread_stop(chan->ctrl->task);
-		chan->ctrl->task = NULL;
+	/* stop thread when thread is running */
+	if (chan->task) {
+		pr_debug("%s: %s: svc_smc_hvc_shm_thread is stopping\n",
+			 __func__, chan->name);
+		kthread_stop(chan->task);
+		chan->task = NULL;
 	}
 }
 EXPORT_SYMBOL_GPL(stratix10_svc_done);
···
 	pmem->paddr = pa;
 	pmem->size = s;
 	list_add_tail(&pmem->node, &svc_data_mem);
-	pr_debug("%s: va=%p, pa=0x%016x\n", __func__,
-		 pmem->vaddr, (unsigned int)pmem->paddr);
+	pr_debug("%s: %s: va=%p, pa=0x%016x\n", __func__,
+		 chan->name, pmem->vaddr, (unsigned int)pmem->paddr);
 
 	return (void *)va;
 }
···
 	{},
 };
 
+static const char * const chan_names[SVC_NUM_CHANNEL] = {
+	SVC_CLIENT_FPGA,
+	SVC_CLIENT_RSU,
+	SVC_CLIENT_FCS,
+	SVC_CLIENT_HWMON
+};
+
 static int stratix10_svc_drv_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
···
 	struct stratix10_svc_chan *chans;
 	struct gen_pool *genpool;
 	struct stratix10_svc_sh_memory *sh_memory;
-	struct stratix10_svc *svc;
+	struct stratix10_svc *svc = NULL;
 
 	svc_invoke_fn *invoke_fn;
 	size_t fifo_size;
-	int ret;
+	int ret, i = 0;
 
 	/* get SMC or HVC function */
 	invoke_fn = get_invoke_func(dev);
···
 	controller->num_active_client = 0;
 	controller->chans = chans;
 	controller->genpool = genpool;
-	controller->task = NULL;
 	controller->invoke_fn = invoke_fn;
+	INIT_LIST_HEAD(&controller->node);
 	init_completion(&controller->complete_status);
 
 	ret = stratix10_svc_async_init(controller);
···
 	}
 
 	fifo_size = sizeof(struct stratix10_svc_data) * SVC_NUM_DATA_IN_FIFO;
-	ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL);
-	if (ret) {
-		dev_err(dev, "failed to allocate FIFO\n");
-		goto err_async_exit;
+	mutex_init(&controller->sdm_lock);
+
+	for (i = 0; i < SVC_NUM_CHANNEL; i++) {
+		chans[i].scl = NULL;
+		chans[i].ctrl = controller;
+		chans[i].name = (char *)chan_names[i];
+		spin_lock_init(&chans[i].lock);
+		ret = kfifo_alloc(&chans[i].svc_fifo, fifo_size, GFP_KERNEL);
+		if (ret) {
+			dev_err(dev, "failed to allocate FIFO %d\n", i);
+			goto err_free_fifos;
+		}
+		spin_lock_init(&chans[i].svc_fifo_lock);
 	}
-	spin_lock_init(&controller->svc_fifo_lock);
-
-	chans[0].scl = NULL;
-	chans[0].ctrl = controller;
-	chans[0].name = SVC_CLIENT_FPGA;
-	spin_lock_init(&chans[0].lock);
-
-	chans[1].scl = NULL;
-	chans[1].ctrl = controller;
-	chans[1].name = SVC_CLIENT_RSU;
-	spin_lock_init(&chans[1].lock);
-
-	chans[2].scl = NULL;
-	chans[2].ctrl = controller;
-	chans[2].name = SVC_CLIENT_FCS;
-	spin_lock_init(&chans[2].lock);
-
-	chans[3].scl = NULL;
-	chans[3].ctrl = controller;
-	chans[3].name = SVC_CLIENT_HWMON;
-	spin_lock_init(&chans[3].lock);
 
 	list_add_tail(&controller->node, &svc_ctrl);
 	platform_set_drvdata(pdev, controller);
···
 	svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL);
 	if (!svc) {
 		ret = -ENOMEM;
-		goto err_free_kfifo;
+		goto err_free_fifos;
 	}
 	controller->svc = svc;
 
···
 	if (!svc->stratix10_svc_rsu) {
 		dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU);
 		ret = -ENOMEM;
-		goto err_free_kfifo;
+		goto err_free_fifos;
 	}
 
 	ret = platform_device_add(svc->stratix10_svc_rsu);
-	if (ret) {
-		platform_device_put(svc->stratix10_svc_rsu);
-		goto err_free_kfifo;
-	}
-
-	svc->intel_svc_fcs = platform_device_alloc(INTEL_FCS, 1);
-	if (!svc->intel_svc_fcs) {
-		dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);
-		ret = -ENOMEM;
-		goto err_unregister_rsu_dev;
-	}
-
-	ret = platform_device_add(svc->intel_svc_fcs);
-	if (ret) {
-		platform_device_put(svc->intel_svc_fcs);
-		goto err_unregister_rsu_dev;
-	}
+	if (ret)
+		goto err_put_device;
 
 	ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);
 	if (ret)
-		goto err_unregister_fcs_dev;
+		goto err_unregister_rsu_dev;
 
 	pr_info("Intel Service Layer Driver Initialized\n");
 
 	return 0;
 
-err_unregister_fcs_dev:
-	platform_device_unregister(svc->intel_svc_fcs);
 err_unregister_rsu_dev:
 	platform_device_unregister(svc->stratix10_svc_rsu);
-err_free_kfifo:
-	kfifo_free(&controller->svc_fifo);
-err_async_exit:
+	goto err_free_fifos;
+err_put_device:
+	platform_device_put(svc->stratix10_svc_rsu);
+err_free_fifos:
+	/* only remove from list if list_add_tail() was reached */
+	if (!list_empty(&controller->node))
+		list_del(&controller->node);
+	/* free only the FIFOs that were successfully allocated */
+	while (i--)
+		kfifo_free(&chans[i].svc_fifo);
 	stratix10_svc_async_exit(controller);
 err_destroy_pool:
 	gen_pool_destroy(genpool);
+
 	return ret;
 }
 
 static void stratix10_svc_drv_remove(struct platform_device *pdev)
 {
+	int i;
 	struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);
 	struct stratix10_svc *svc = ctrl->svc;
 
···
 
 	of_platform_depopulate(ctrl->dev);
 
-	platform_device_unregister(svc->intel_svc_fcs);
 	platform_device_unregister(svc->stratix10_svc_rsu);
 
-	kfifo_free(&ctrl->svc_fifo);
-	if (ctrl->task) {
-		kthread_stop(ctrl->task);
-		ctrl->task = NULL;
+	for (i = 0; i < SVC_NUM_CHANNEL; i++) {
+		if (ctrl->chans[i].task) {
+			kthread_stop(ctrl->chans[i].task);
+			ctrl->chans[i].task = NULL;
+		}
+		kfifo_free(&ctrl->chans[i].svc_fifo);
 	}
+
 	if (ctrl->genpool)
 		gen_pool_destroy(ctrl->genpool);
 	list_del(&ctrl->node);
+4-3
drivers/gpib/lpvo_usb_gpib/lpvo_usb_gpib.c
···
 /*
  * Table of devices that work with this driver.
  *
- * Currently, only one device is known to be used in the
- * lpvo_usb_gpib adapter (FTDI 0403:6001).
+ * Currently, only one device is known to be used in the lpvo_usb_gpib
+ * adapter (FTDI 0403:6001), but as this device ID is already handled by the
+ * ftdi_sio USB serial driver, the LPVO driver must not bind to it by default.
+ *
  * If your adapter uses a different chip, insert a line
  * in the following table with proper <Vendor-id>, <Product-id>.
  *
···
  */
 
 static const struct usb_device_id skel_table[] = {
-	{ USB_DEVICE(0x0403, 0x6001) },
 	{ } /* Terminating entry */
 };
 MODULE_DEVICE_TABLE(usb, skel_table);
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
···
 
 #define AMDGPU_BO_LIST_MAX_PRIORITY	32u
 #define AMDGPU_BO_LIST_NUM_BUCKETS	(AMDGPU_BO_LIST_MAX_PRIORITY + 1)
+#define AMDGPU_BO_LIST_MAX_ENTRIES	(128 * 1024)
 
 static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
 {
···
 	const uint32_t bo_info_size = in->bo_info_size;
 	const uint32_t bo_number = in->bo_number;
 	struct drm_amdgpu_bo_list_entry *info;
+
+	if (bo_number > AMDGPU_BO_LIST_MAX_ENTRIES)
+		return -EINVAL;
 
 	/* copy the handle array from userspace to a kernel buffer */
 	if (likely(info_size == bo_info_size)) {
+13-1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 		break;
 	default:
 		r = amdgpu_discovery_set_ip_blocks(adev);
-		if (r)
+		if (r) {
+			adev->num_ip_blocks = 0;
 			return r;
+		}
 		break;
 	}
 
···
 		i = state == AMD_CG_STATE_GATE ? j : adev->num_ip_blocks - j - 1;
 		if (!adev->ip_blocks[i].status.late_initialized)
 			continue;
+		if (!adev->ip_blocks[i].version)
+			continue;
 		/* skip CG for GFX, SDMA on S0ix */
 		if (adev->in_s0ix &&
 		    (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||
···
 	for (j = 0; j < adev->num_ip_blocks; j++) {
 		i = state == AMD_PG_STATE_GATE ? j : adev->num_ip_blocks - j - 1;
 		if (!adev->ip_blocks[i].status.late_initialized)
+			continue;
+		if (!adev->ip_blocks[i].version)
 			continue;
 		/* skip PG for GFX, SDMA on S0ix */
 		if (adev->in_s0ix &&
···
 	int i, r;
 
 	for (i = 0; i < adev->num_ip_blocks; i++) {
+		if (!adev->ip_blocks[i].version)
+			continue;
 		if (!adev->ip_blocks[i].version->funcs->early_fini)
 			continue;
 
···
 		if (!adev->ip_blocks[i].status.sw)
 			continue;
 
+		if (!adev->ip_blocks[i].version)
+			continue;
 		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
 			amdgpu_ucode_free_bo(adev);
 			amdgpu_free_static_csa(&adev->virt.csa_obj);
···
 
 	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
 		if (!adev->ip_blocks[i].status.late_initialized)
+			continue;
+		if (!adev->ip_blocks[i].version)
 			continue;
 		if (adev->ip_blocks[i].version->funcs->late_fini)
 			adev->ip_blocks[i].version->funcs->late_fini(&adev->ip_blocks[i]);
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
···
 {
 	struct amdgpu_device *adev = drm_to_adev(dev);
 
-	if (adev == NULL)
+	if (adev == NULL || !adev->num_ip_blocks)
 		return;
 
 	amdgpu_unregister_gpu_instance(adev);
+8-8
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
···
 
 	struct drm_property *plane_ctm_property;
 	/**
-	 * @shaper_lut_property: Plane property to set pre-blending shaper LUT
-	 * that converts color content before 3D LUT. If
-	 * plane_shaper_tf_property != Identity TF, AMD color module will
+	 * @plane_shaper_lut_property: Plane property to set pre-blending
+	 * shaper LUT that converts color content before 3D LUT.
+	 * If plane_shaper_tf_property != Identity TF, AMD color module will
 	 * combine the user LUT values with pre-defined TF into the LUT
 	 * parameters to be programmed.
 	 */
 	struct drm_property *plane_shaper_lut_property;
 	/**
-	 * @shaper_lut_size_property: Plane property for the size of
+	 * @plane_shaper_lut_size_property: Plane property for the size of
 	 * pre-blending shaper LUT as supported by the driver (read-only).
 	 */
 	struct drm_property *plane_shaper_lut_size_property;
···
 	 */
 	struct drm_property *plane_lut3d_property;
 	/**
-	 * @plane_degamma_lut_size_property: Plane property to define the max
-	 * size of 3D LUT as supported by the driver (read-only). The max size
-	 * is the max size of one dimension and, therefore, the max number of
-	 * entries for 3D LUT array is the 3D LUT size cubed;
+	 * @plane_lut3d_size_property: Plane property to define the max size
+	 * of 3D LUT as supported by the driver (read-only). The max size is
+	 * the max size of one dimension and, therefore, the max number of
+	 * entries for 3D LUT array is the 3D LUT size cubed.
 	 */
 	struct drm_property *plane_lut3d_size_property;
 	/**
+6-1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···
 	}
 
 	/* Prepare a TLB flush fence to be attached to PTs */
-	if (!params->unlocked) {
+	/* The check for need_tlb_fence should be dropped once we
+	 * sort out the issues with KIQ/MES TLB invalidation timeouts.
+	 */
+	if (!params->unlocked && vm->need_tlb_fence) {
 		amdgpu_vm_tlb_fence_create(params->adev, vm, fence);
 
 		/* Makes sure no PD/PT is freed before the flush */
···
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);
 
 	vm->is_compute_context = false;
+	vm->need_tlb_fence = amdgpu_userq_enabled(&adev->ddev);
 
 	vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
 				    AMDGPU_VM_USE_CPU_FOR_GFX);
···
 	dma_fence_put(vm->last_update);
 	vm->last_update = dma_fence_get_stub();
 	vm->is_compute_context = true;
+	vm->need_tlb_fence = true;
 
 unreserve_bo:
 	amdgpu_bo_unreserve(vm->root.bo);
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
···
 	struct ttm_lru_bulk_move lru_bulk_move;
 	/* Flag to indicate if VM is used for compute */
 	bool is_compute_context;
+	/* Flag to indicate if VM needs a TLB fence (KFD or KGD) */
+	bool need_tlb_fence;
 
 	/* Memory partition number, -1 means any partition */
 	int8_t mem_id;
···
 	if (!pdev)
 		return -EINVAL;
 
-	if (!dev->type->name) {
+	if (!dev->type || !dev->type->name) {
 		drm_dbg(&adev->ddev, "Invalid device type to add\n");
 		goto exit;
 	}
···
 	if (!pdev)
 		return -EINVAL;
 
-	if (!dev->type->name) {
+	if (!dev->type || !dev->type->name) {
 		drm_dbg(&adev->ddev, "Invalid device type to remove\n");
 		goto exit;
 	}
+4-1
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
···
 	int i;
 	struct amdgpu_device *adev = mes->adev;
 	union MESAPI_SET_HW_RESOURCES mes_set_hw_res_pkt;
+	uint32_t mes_rev = (pipe == AMDGPU_MES_SCHED_PIPE) ?
+			   (mes->sched_version & AMDGPU_MES_VERSION_MASK) :
+			   (mes->kiq_version & AMDGPU_MES_VERSION_MASK);
 
 	memset(&mes_set_hw_res_pkt, 0, sizeof(mes_set_hw_res_pkt));
 
···
 	 * handling support, other queue will not use the oversubscribe timer.
	 * handling mode - 0: disabled; 1: basic version; 2: basic+ version
 	 */
-	mes_set_hw_res_pkt.oversubscription_timer = 50;
+	mes_set_hw_res_pkt.oversubscription_timer = mes_rev < 0x8b ? 0 : 50;
 	mes_set_hw_res_pkt.unmapped_doorbell_handling = 1;
 
 	if (amdgpu_mes_log_enable) {
···
 	dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk;
 }
 
+/*
+ * On DCN21 S0i3 resume, BIOS programs MICROSECOND_TIME_BASE_DIV to
+ * 0x00120464 as a marker that golden init has already been done.
+ * dcn21_s0i3_golden_init_wa() reads this marker later in bios_golden_init()
+ * to decide whether to skip golden init.
+ *
+ * dccg2_init() unconditionally overwrites MICROSECOND_TIME_BASE_DIV to
+ * 0x00120264, destroying the marker before it can be read.
+ *
+ * Guard the call: if the S0i3 marker is present, skip dccg2_init() so the
+ * WA can function correctly. bios_golden_init() will handle init in that case.
+ */
+static void dccg21_init(struct dccg *dccg)
+{
+	if (dccg2_is_s0i3_golden_init_wa_done(dccg))
+		return;
+
+	dccg2_init(dccg);
+}
 
 static const struct dccg_funcs dccg21_funcs = {
 	.update_dpp_dto = dccg21_update_dpp_dto,
···
 	.set_fifo_errdet_ovr_en = dccg2_set_fifo_errdet_ovr_en,
 	.otg_add_pixel = dccg2_otg_add_pixel,
 	.otg_drop_pixel = dccg2_otg_drop_pixel,
-	.dccg_init = dccg2_init,
+	.dccg_init = dccg21_init,
 	.refclk_setup = dccg2_refclk_setup, /* Deprecated - for backward compatibility only */
 	.allow_clock_gating = dccg2_allow_clock_gating,
 	.enable_memory_low_power = dccg2_enable_memory_low_power,
···
 intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector)
 {
 	struct intel_display *display = to_intel_display(intel_dp);
+	int ret;
 
 	/* this function is meant to be called only once */
 	drm_WARN_ON(display->drm, intel_dp->dpcd[DP_DPCD_REV] != 0);
···
 	 * available (such as HDR backlight controls)
 	 */
 	intel_dp_init_source_oui(intel_dp);
+
+	/* Read the ALPM DPCD caps */
+	ret = drm_dp_dpcd_read_byte(&intel_dp->aux, DP_RECEIVER_ALPM_CAP,
+				    &intel_dp->alpm_dpcd);
+	if (ret < 0)
+		return false;
 
 	/*
 	 * This has to be called after intel_dp->edp_dpcd is filled, PSR checks
+53-14
drivers/gpu/drm/i915/display/intel_psr.c
···
 	entry_setup_frames = intel_psr_entry_setup_frames(intel_dp, conn_state, adjusted_mode);
 
 	if (entry_setup_frames >= 0) {
-		intel_dp->psr.entry_setup_frames = entry_setup_frames;
+		crtc_state->entry_setup_frames = entry_setup_frames;
 	} else {
 		crtc_state->no_psr_reason = "PSR setup timing not met";
 		drm_dbg_kms(display->drm,
···
 {
 	struct intel_display *display = to_intel_display(intel_dp);
 
-	return (DISPLAY_VER(display) == 20 && intel_dp->psr.entry_setup_frames > 0 &&
+	return (DISPLAY_VER(display) == 20 && crtc_state->entry_setup_frames > 0 &&
 		!crtc_state->has_sel_update);
 }
 
···
 	intel_dp->psr.pkg_c_latency_used = crtc_state->pkg_c_latency_used;
 	intel_dp->psr.io_wake_lines = crtc_state->alpm_state.io_wake_lines;
 	intel_dp->psr.fast_wake_lines = crtc_state->alpm_state.fast_wake_lines;
+	intel_dp->psr.entry_setup_frames = crtc_state->entry_setup_frames;
 
 	if (!psr_interrupt_error_check(intel_dp))
 		return;
···
 
 	intel_de_write_dsb(display, dsb, PIPE_SRCSZ_ERLY_TPT(crtc->pipe),
 			   crtc_state->pipe_srcsz_early_tpt);
+
+	if (!crtc_state->dsc.compression_enable)
+		return;
+
+	intel_dsc_su_et_parameters_configure(dsb, encoder, crtc_state,
+					     drm_rect_height(&crtc_state->psr2_su_area));
 }
 
 static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
···
 	overlap_damage_area->y2 = damage_area->y2;
 }
 
-static void intel_psr2_sel_fetch_pipe_alignment(struct intel_crtc_state *crtc_state)
+static bool intel_psr2_sel_fetch_pipe_alignment(struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(crtc_state);
 	const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
 	u16 y_alignment;
+	bool su_area_changed = false;
 
 	/* ADLP aligns the SU region to vdsc slice height in case dsc is enabled */
 	if (crtc_state->dsc.compression_enable &&
···
 	else
 		y_alignment = crtc_state->su_y_granularity;
 
-	crtc_state->psr2_su_area.y1 -= crtc_state->psr2_su_area.y1 % y_alignment;
-	if (crtc_state->psr2_su_area.y2 % y_alignment)
+	if (crtc_state->psr2_su_area.y1 % y_alignment) {
+		crtc_state->psr2_su_area.y1 -= crtc_state->psr2_su_area.y1 % y_alignment;
+		su_area_changed = true;
+	}
+
+	if (crtc_state->psr2_su_area.y2 % y_alignment) {
 		crtc_state->psr2_su_area.y2 = ((crtc_state->psr2_su_area.y2 /
 						y_alignment) + 1) * y_alignment;
+		su_area_changed = true;
+	}
+
+	return su_area_changed;
 }
 
 /*
···
 	struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
 	struct intel_plane_state *new_plane_state, *old_plane_state;
 	struct intel_plane *plane;
-	bool full_update = false, cursor_in_su_area = false;
+	bool full_update = false, su_area_changed;
 	int i, ret;
 
 	if (!crtc_state->enable_psr2_sel_fetch)
···
 	if (ret)
 		return ret;
 
-	/*
-	 * Adjust su area to cover cursor fully as necessary (early
-	 * transport). This needs to be done after
-	 * drm_atomic_add_affected_planes to ensure visible cursor is added into
-	 * affected planes even when cursor is not updated by itself.
-	 */
-	intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area);
+	do {
+		bool cursor_in_su_area;
 
-	intel_psr2_sel_fetch_pipe_alignment(crtc_state);
+		/*
+		 * Adjust su area to cover cursor fully as necessary
+		 * (early transport). This needs to be done after
+		 * drm_atomic_add_affected_planes to ensure visible
+		 * cursor is added into affected planes even when
+		 * cursor is not updated by itself.
+		 */
+		intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area);
+
+		su_area_changed = intel_psr2_sel_fetch_pipe_alignment(crtc_state);
+
+		/*
+		 * If the cursor was outside the SU area before
+		 * alignment, the alignment step (which only expands
+		 * SU) may pull the cursor partially inside, so we
+		 * must run ET alignment again to fully cover it. But
+		 * if the cursor was already fully inside before
+		 * alignment, expanding the SU area won't change that,
+		 * so no further work is needed.
+		 */
+		if (cursor_in_su_area)
+			break;
+	} while (su_area_changed);
 
 	/*
 	 * Now that we have the pipe damaged area check if it intersect with
···
 	}
 
 skip_sel_fetch_set_loop:
+	if (full_update)
+		clip_area_update(&crtc_state->psr2_su_area, &crtc_state->pipe_src,
+				 &crtc_state->pipe_src);
+
 	psr2_man_trk_ctl_calc(crtc_state, full_update);
 	crtc_state->pipe_srcsz_early_tpt =
 		psr2_pipe_srcsz_early_tpt_calc(crtc_state, full_update);
···
 	 * - Display WA #1136: skl, bxt
 	 */
 	if (intel_crtc_needs_modeset(new_crtc_state) ||
+	    new_crtc_state->update_m_n ||
+	    new_crtc_state->update_lrr ||
 	    !new_crtc_state->has_psr ||
 	    !new_crtc_state->active_planes ||
 	    new_crtc_state->has_sel_update != psr->sel_update_enabled ||
···
 		return;
 
 	/*
+	 * Bspec says:
+	 * "(note: VRR needs to be programmed after
+	 * TRANS_DDI_FUNC_CTL and before TRANS_CONF)."
+	 *
+	 * In practice it turns out that ICL can hang if
+	 * TRANS_VRR_VMAX/FLIPLINE are written before
+	 * enabling TRANS_DDI_FUNC_CTL.
+	 */
+	drm_WARN_ON(display->drm,
+		    !(intel_de_read(display, TRANS_DDI_FUNC_CTL(display, cpu_transcoder)) & TRANS_DDI_FUNC_ENABLE));
+
+	/*
 	 * This bit seems to have two meanings depending on the platform:
 	 * TGL: generate VRR "safe window" for DSB vblank waits
 	 * ADL/DG2: make TRANS_SET_CONTEXT_LATENCY effective with VRR
···
 void intel_vrr_transcoder_enable(const struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(crtc_state);
+
+	intel_vrr_set_transcoder_timings(crtc_state);
 
 	if (!intel_vrr_possible(crtc_state))
 		return;
···
 		if (engine->sanitize)
 			engine->sanitize(engine);
 
-		engine->set_default_submission(engine);
+		if (engine->set_default_submission)
+			engine->set_default_submission(engine);
 	}
 }
 
-17
drivers/gpu/drm/imagination/pvr_device.c
···
 	}
 
 	if (pvr_dev->has_safety_events) {
-		int err;
-
-		/*
-		 * Ensure the GPU is powered on since some safety events (such
-		 * as ECC faults) can happen outside of job submissions, which
-		 * are otherwise the only time a power reference is held.
-		 */
-		err = pvr_power_get(pvr_dev);
-		if (err) {
-			drm_err_ratelimited(drm_dev,
-					    "%s: could not take power reference (%d)\n",
-					    __func__, err);
-			return ret;
-		}
-
 		while (pvr_device_safety_irq_pending(pvr_dev)) {
 			pvr_device_safety_irq_clear(pvr_dev);
 			pvr_device_handle_safety_events(pvr_dev);
 
 			ret = IRQ_HANDLED;
 		}
-
-		pvr_power_put(pvr_dev);
 	}
 
 	return ret;
+39-12
drivers/gpu/drm/imagination/pvr_power.c
···
 }
 
 static int
-pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
+pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset, bool rpm_suspend)
 {
-	if (!hard_reset) {
-		int err;
+	int err;
 
+	if (!hard_reset) {
 		cancel_delayed_work_sync(&pvr_dev->watchdog.work);
 
 		err = pvr_power_request_idle(pvr_dev);
···
 			return err;
 	}
 
-	return pvr_fw_stop(pvr_dev);
+	if (rpm_suspend) {
+		/* This also waits for late processing of GPU or firmware IRQs in other cores */
+		disable_irq(pvr_dev->irq);
+	}
+
+	err = pvr_fw_stop(pvr_dev);
+	if (err && rpm_suspend)
+		enable_irq(pvr_dev->irq);
+
+	return err;
 }
 
 static int
-pvr_power_fw_enable(struct pvr_device *pvr_dev)
+pvr_power_fw_enable(struct pvr_device *pvr_dev, bool rpm_resume)
 {
 	int err;
 
+	if (rpm_resume)
+		enable_irq(pvr_dev->irq);
+
 	err = pvr_fw_start(pvr_dev);
 	if (err)
-		return err;
+		goto out;
 
 	err = pvr_wait_for_fw_boot(pvr_dev);
 	if (err) {
 		drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
 		pvr_fw_stop(pvr_dev);
-		return err;
+		goto out;
 	}
 
 	queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
 			   msecs_to_jiffies(WATCHDOG_TIME_MS));
 
 	return 0;
+
+out:
+	if (rpm_resume)
+		disable_irq(pvr_dev->irq);
+
+	return err;
 }
 
 bool
···
 		return -EIO;
 
 	if (pvr_dev->fw_dev.booted) {
-		err = pvr_power_fw_disable(pvr_dev, false);
+		err = pvr_power_fw_disable(pvr_dev, false, true);
 		if (err)
 			goto err_drm_dev_exit;
 	}
···
 		goto err_drm_dev_exit;
 
 	if (pvr_dev->fw_dev.booted) {
-		err = pvr_power_fw_enable(pvr_dev);
+		err = pvr_power_fw_enable(pvr_dev, true);
 		if (err)
 			goto err_power_off;
 	}
···
 	}
 
 	/* Disable IRQs for the duration of the reset. */
-	disable_irq(pvr_dev->irq);
+	if (hard_reset) {
+		disable_irq(pvr_dev->irq);
+	} else {
+		/*
+		 * Soft reset is triggered as a response to a FW command to the Host and is
+		 * processed from the threaded IRQ handler. This code cannot (nor needs to)
+		 * wait for any IRQ processing to complete.
+		 */
+		disable_irq_nosync(pvr_dev->irq);
+	}
 
 	do {
 		if (hard_reset) {
···
 			queues_disabled = true;
 		}
 
-		err = pvr_power_fw_disable(pvr_dev, hard_reset);
+		err = pvr_power_fw_disable(pvr_dev, hard_reset, false);
 		if (!err) {
 			if (hard_reset) {
 				pvr_dev->fw_dev.booted = false;
···
 
 			pvr_fw_irq_clear(pvr_dev);
 
-			err = pvr_power_fw_enable(pvr_dev);
+			err = pvr_power_fw_enable(pvr_dev, false);
 		}
 
 		if (err && hard_reset)
···
 	u8 color;
 	u32 lr_pe[4], tb_pe[4];
 	const u32 bytemask = 0xff;
-	u32 offset = ctx->cap->sblk->sspp_rec0_blk.base;
+	u32 offset;
 
 	if (!ctx || !pe_ext)
 		return;
+
+	offset = ctx->cap->sblk->sspp_rec0_blk.base;
 
 	c = &ctx->hw;
 	/* program SW pixel extension override for all pipes*/
+14-38
drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
···
 	return true;
 }
 
-static bool dpu_rm_find_lms(struct dpu_rm *rm,
-			    struct dpu_global_state *global_state,
-			    uint32_t crtc_id, bool skip_dspp,
-			    struct msm_display_topology *topology,
-			    int *lm_idx, int *pp_idx, int *dspp_idx)
+static int _dpu_rm_reserve_lms(struct dpu_rm *rm,
+			       struct dpu_global_state *global_state,
+			       uint32_t crtc_id,
+			       struct msm_display_topology *topology)
 
 {
+	int lm_idx[MAX_BLOCKS];
+	int pp_idx[MAX_BLOCKS];
+	int dspp_idx[MAX_BLOCKS] = {0};
 	int i, lm_count = 0;
+
+	if (!topology->num_lm) {
+		DPU_ERROR("zero LMs in topology\n");
+		return -EINVAL;
+	}
 
 	/* Find a primary mixer */
 	for (i = 0; i < ARRAY_SIZE(rm->mixer_blks) &&
 			lm_count < topology->num_lm; i++) {
 		if (!rm->mixer_blks[i])
 			continue;
-
-		if (skip_dspp && to_dpu_hw_mixer(rm->mixer_blks[i])->cap->dspp) {
-			DPU_DEBUG("Skipping LM_%d, skipping LMs with DSPPs\n", i);
-			continue;
-		}
 
 		/*
 		 * Reset lm_count to an even index. This will drop the previous
···
 		}
 	}
 
-	return lm_count == topology->num_lm;
-}
-
-static int _dpu_rm_reserve_lms(struct dpu_rm *rm,
-			       struct dpu_global_state *global_state,
-			       uint32_t crtc_id,
-			       struct msm_display_topology *topology)
-
-{
-	int lm_idx[MAX_BLOCKS];
-	int pp_idx[MAX_BLOCKS];
-	int dspp_idx[MAX_BLOCKS] = {0};
-	int i;
-	bool found;
-
-	if (!topology->num_lm) {
-		DPU_ERROR("zero LMs in topology\n");
-		return -EINVAL;
-	}
-
-	/* Try using non-DSPP LM blocks first */
-	found = dpu_rm_find_lms(rm, global_state, crtc_id, !topology->num_dspp,
-				topology, lm_idx, pp_idx, dspp_idx);
-	if (!found && !topology->num_dspp)
-		found = dpu_rm_find_lms(rm, global_state, crtc_id, false,
-					topology, lm_idx, pp_idx, dspp_idx);
-	if (!found) {
+	if (lm_count != topology->num_lm) {
 		DPU_DEBUG("unable to find appropriate mixers\n");
 		return -ENAVAIL;
 	}
 
-	for (i = 0; i < topology->num_lm; i++) {
+	for (i = 0; i < lm_count; i++) {
 		global_state->mixer_to_crtc_id[lm_idx[i]] = crtc_id;
 		global_state->pingpong_to_crtc_id[pp_idx[i]] = crtc_id;
 		global_state->dspp_to_crtc_id[dspp_idx[i]] =
+31-12
drivers/gpu/drm/msm/dsi/dsi_host.c
···
  * FIXME: Reconsider this if/when CMD mode handling is rewritten to use
  * transfer time and data overhead as a starting point of the calculations.
  */
-static unsigned long dsi_adjust_pclk_for_compression(const struct drm_display_mode *mode,
-						     const struct drm_dsc_config *dsc)
+static unsigned long
+dsi_adjust_pclk_for_compression(const struct drm_display_mode *mode,
+				const struct drm_dsc_config *dsc,
+				bool is_bonded_dsi)
 {
-	int new_hdisplay = DIV_ROUND_UP(mode->hdisplay * drm_dsc_get_bpp_int(dsc),
-					dsc->bits_per_component * 3);
+	int hdisplay, new_hdisplay, new_htotal;
 
-	int new_htotal = mode->htotal - mode->hdisplay + new_hdisplay;
+	/*
+	 * For bonded DSI, split hdisplay across two links and round up each
+	 * half separately; passing the full hdisplay would only round up once.
+	 * This also aligns with the hdisplay we program later in
+	 * dsi_timing_setup().
+	 */
+	hdisplay = mode->hdisplay;
+	if (is_bonded_dsi)
+		hdisplay /= 2;
+
+	new_hdisplay = DIV_ROUND_UP(hdisplay * drm_dsc_get_bpp_int(dsc),
+				    dsc->bits_per_component * 3);
+
+	if (is_bonded_dsi)
+		new_hdisplay *= 2;
+
+	new_htotal = mode->htotal - mode->hdisplay + new_hdisplay;
 
 	return mult_frac(mode->clock * 1000u, new_htotal, mode->htotal);
 }
···
 	pclk_rate = mode->clock * 1000u;
 
 	if (dsc)
-		pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc);
+		pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc, is_bonded_dsi);
 
 	/*
 	 * For bonded DSI mode, the current DRM mode has the complete width of the
···
 
 	if (msm_host->dsc) {
 		struct drm_dsc_config *dsc = msm_host->dsc;
-		u32 bytes_per_pclk;
+		u32 bits_per_pclk;
 
 		/* update dsc params with timing params */
 		if (!dsc || !mode->hdisplay || !mode->vdisplay) {
···
 
 		/*
 		 * DPU sends 3 bytes per pclk cycle to DSI. If widebus is
-		 * enabled, bus width is extended to 6 bytes.
+		 * enabled, MDP always sends out 48-bit compressed data per
+		 * pclk and on average, DSI consumes an amount of compressed
+		 * data equivalent to the uncompressed pixel depth per pclk.
 		 *
 		 * Calculate the number of pclks needed to transmit one line of
 		 * the compressed data.
···
 		 * unused anyway.
 		 */
 		h_total -= hdisplay;
-		if (wide_bus_enabled && !(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO))
-			bytes_per_pclk = 6;
+		if (wide_bus_enabled)
+			bits_per_pclk = mipi_dsi_pixel_format_to_bpp(msm_host->format);
 		else
-			bytes_per_pclk = 3;
+			bits_per_pclk = 24;
 
-		hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc), bytes_per_pclk);
+		hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc) * 8, bits_per_pclk);
 
 		h_total += hdisplay;
 		ha_end = ha_start + hdisplay;
+11-11
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
···
 #define DSI_PHY_7NM_QUIRK_V4_3		BIT(3)
 /* Hardware is V5.2 */
 #define DSI_PHY_7NM_QUIRK_V5_2		BIT(4)
-/* Hardware is V7.0 */
-#define DSI_PHY_7NM_QUIRK_V7_0		BIT(5)
+/* Hardware is V7.2 */
+#define DSI_PHY_7NM_QUIRK_V7_2		BIT(5)
 
 struct dsi_pll_config {
 	bool enable_ssc;
···
 
 	if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1) {
 		config->pll_clock_inverters = 0x28;
-	} else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	} else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		if (pll_freq < 163000000ULL)
 			config->pll_clock_inverters = 0xa0;
 		else if (pll_freq < 175000000ULL)
···
 	}
 
 	if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	    (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		if (pll->vco_current_rate < 1557000000ULL)
 			vco_config_1 = 0x08;
 		else
···
 	case MSM_DSI_PHY_MASTER:
 		pll_7nm->slave = pll_7nm_list[(pll_7nm->phy->id + 1) % DSI_MAX];
 		/* v7.0: Enable ATB_EN0 and alternate clock output to external phy */
-		if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)
+		if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)
 			writel(0x07, base + REG_DSI_7nm_PHY_CMN_CTRL_5);
 		break;
 	case MSM_DSI_PHY_SLAVE:
···
 	/* Request for REFGEN READY */
 	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_3) ||
 	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		writel(0x1, phy->base + REG_DSI_7nm_PHY_CMN_GLBL_DIGTOP_SPARE10);
 		udelay(500);
 	}
···
 		lane_ctrl0 = 0x1f;
 	}
 
-	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		if (phy->cphy_mode) {
 			/* TODO: different for second phy */
 			vreg_ctrl_0 = 0x57;
···
 
 	/* program CMN_CTRL_4 for minor_ver 2 chipsets*/
 	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0) ||
+	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2) ||
 	    (readl(base + REG_DSI_7nm_PHY_CMN_REVISION_ID0) & (0xf0)) == 0x20)
 		writel(0x04, base + REG_DSI_7nm_PHY_CMN_CTRL_4);
 
···
 	/* Turn off REFGEN Vote */
 	if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_3) ||
 	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) ||
-	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) {
+	    (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) {
 		writel(0x0, base + REG_DSI_7nm_PHY_CMN_GLBL_DIGTOP_SPARE10);
 		wmb();
 		/* Delay to ensure HW removes vote before PHY shut down */
···
 #endif
 	.io_start = { 0xae95000, 0xae97000 },
 	.num_dsi_phy = 2,
-	.quirks = DSI_PHY_7NM_QUIRK_V7_0,
+	.quirks = DSI_PHY_7NM_QUIRK_V7_2,
 };
 
 const struct msm_dsi_phy_cfg dsi_phy_3nm_kaanapali_cfgs = {
···
 #endif
 	.io_start = { 0x9ac1000, 0x9ac4000 },
 	.num_dsi_phy = 2,
-	.quirks = DSI_PHY_7NM_QUIRK_V7_0,
+	.quirks = DSI_PHY_7NM_QUIRK_V7_2,
 };
···
 	if (ret)
 		return ret;
 
+	/*
+	 * Override value set by mipi_dbi_spi_init(). This driver is a bit
+	 * non-standard, so best to set it explicitly here.
+	 */
+	dbi->write_memory_bpw = 8;
+
 	/* Cannot read from this controller via SPI */
 	dbi->read_commands = NULL;
 
···
 				   &st7586_mode, rotation, bufsize);
 	if (ret)
 		return ret;
-
-	/*
-	 * we are using 8-bit data, so we are not actually swapping anything,
-	 * but setting mipi->swap_bytes makes mipi_dbi_typec3_command() do the
-	 * right thing and not use 16-bit transfers (which results in swapped
-	 * bytes on little-endian systems and causes out of order data to be
-	 * sent to the display).
-	 */
-	dbi->swap_bytes = true;
 
 	drm_mode_config_reset(drm);
 
+57-36
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
···
 
 struct vmw_res_func;
 
+struct vmw_bo;
+struct vmw_bo;
+struct vmw_resource_dirty;
+
 /**
- * struct vmw-resource - base class for hardware resources
+ * struct vmw_resource - base class for hardware resources
  *
  * @kref: For refcounting.
  * @dev_priv: Pointer to the device private for this resource. Immutable.
  * @id: Device id. Protected by @dev_priv::resource_lock.
+ * @used_prio: Priority for this resource.
  * @guest_memory_size: Guest memory buffer size. Immutable.
  * @res_dirty: Resource contains data not yet in the guest memory buffer.
  * Protected by resource reserved.
···
  * pin-count greater than zero. It is not on the resource LRU lists and its
  * guest memory buffer is pinned. Hence it can't be evicted.
  * @func: Method vtable for this resource. Immutable.
- * @mob_node; Node for the MOB guest memory rbtree. Protected by
+ * @mob_node: Node for the MOB guest memory rbtree. Protected by
  * @guest_memory_bo reserved.
  * @lru_head: List head for the LRU list. Protected by @dev_priv::resource_lock.
  * @binding_head: List head for the context binding list. Protected by
  * the @dev_priv::binding_mutex
+ * @dirty: resource's dirty tracker
  * @res_free: The resource destructor.
  * @hw_destroy: Callback to destroy the resource on the device, as part of
  * resource destruction.
  */
-struct vmw_bo;
-struct vmw_bo;
-struct vmw_resource_dirty;
 struct vmw_resource {
 	struct kref kref;
 	struct vmw_private *dev_priv;
···
  * @quality_level: Quality level.
  * @autogen_filter: Filter for automatically generated mipmaps.
  * @array_size: Number of array elements for a 1D/2D texture. For cubemap
- texture number of faces * array_size. This should be 0 for pre
- SM4 device.
+ * texture number of faces * array_size. This should be 0 for pre
+ * SM4 device.
  * @buffer_byte_stride: Buffer byte stride.
  * @num_sizes: Size of @sizes. For GB surface this should always be 1.
  * @base_size: Surface dimension.
···
 struct vmw_res_cache_entry {
 	uint32_t handle;
 	struct vmw_resource *res;
+	/* private: */
 	void *private;
+	/* public: */
 	unsigned short valid_handle;
 	unsigned short valid;
 };
 
 /**
  * enum vmw_dma_map_mode - indicate how to perform TTM page dma mappings.
+ * @vmw_dma_alloc_coherent: Use TTM coherent pages
+ * @vmw_dma_map_populate: Unmap from DMA just after unpopulate
+ * @vmw_dma_map_bind: Unmap from DMA just before unbind
  */
 enum vmw_dma_map_mode {
-	vmw_dma_alloc_coherent, /* Use TTM coherent pages */
-	vmw_dma_map_populate,   /* Unmap from DMA just after unpopulate */
-	vmw_dma_map_bind,       /* Unmap from DMA just before unbind */
+	vmw_dma_alloc_coherent,
+	vmw_dma_map_populate,
+	vmw_dma_map_bind,
+	/* private: */
 	vmw_dma_map_max
 };
 
···
  * struct vmw_sg_table - Scatter/gather table for binding, with additional
  * device-specific information.
  *
+ * @mode: which page mapping mode to use
+ * @pages: Array of page pointers to the pages.
+ * @addrs: DMA addresses to the pages if coherent pages are used.
  * @sgt: Pointer to a struct sg_table with binding information
- * @num_regions: Number of regions with device-address contiguous pages
+ * @num_pages: Number of @pages
  */
 struct vmw_sg_table {
 	enum vmw_dma_map_mode mode;
···
  * than from user-space
  * @fp: If @kernel is false, points to the file of the client. Otherwise
  * NULL
+ * @filp: DRM state for this file
  * @cmd_bounce: Command bounce buffer used for command validation before
  * copying to fifo space
  * @cmd_bounce_size: Current command bounce buffer size
···
 bool vmwgfx_supported(struct vmw_private *vmw);
 
 
-/**
+/*
  * GMR utilities - vmwgfx_gmr.c
  */
 
···
 			 int gmr_id);
 extern void vmw_gmr_unbind(struct vmw_private *dev_priv, int gmr_id);
 
-/**
+/*
  * User handles
  */
 struct vmw_user_object {
···
 void vmw_user_object_unmap(struct vmw_user_object *uo);
 bool vmw_user_object_is_mapped(struct vmw_user_object *uo);
 
-/**
+/*
  * Resource utilities - vmwgfx_resource.c
  */
 struct vmw_user_resource_conv;
···
 	return !RB_EMPTY_NODE(&res->mob_node);
 }
 
-/**
+/*
  * GEM related functionality - vmwgfx_gem.c
  */
 struct vmw_bo_params;
···
 			       struct drm_file *filp);
 extern void vmw_debugfs_gem_init(struct vmw_private *vdev);
 
-/**
+/*
  * Misc Ioctl functionality - vmwgfx_ioctl.c
  */
 
···
 extern int vmw_present_readback_ioctl(struct drm_device *dev, void *data,
 				      struct drm_file *file_priv);
 
-/**
+/*
  * Fifo utilities - vmwgfx_fifo.c
  */
 
···
 
 
 /**
- * vmw_fifo_caps - Returns the capabilities of the FIFO command
+ * vmw_fifo_caps - Get the capabilities of the FIFO command
  * queue or 0 if fifo memory isn't present.
  * @dev_priv: The device private context
+ *
+ * Returns: capabilities of the FIFO command or %0 if fifo memory not present
  */
 static inline uint32_t vmw_fifo_caps(const struct vmw_private *dev_priv)
 {
···
 
 
 /**
- * vmw_is_cursor_bypass3_enabled - Returns TRUE iff Cursor Bypass 3
- * is enabled in the FIFO.
+ * vmw_is_cursor_bypass3_enabled - check Cursor Bypass 3 enabled setting
+ * in the FIFO.
  * @dev_priv: The device private context
+ *
+ * Returns: %true iff Cursor Bypass 3 is enabled in the FIFO
  */
 static inline bool
 vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv)
···
 	return (vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_CURSOR_BYPASS_3) != 0;
 }
 
-/**
+/*
  * TTM buffer object driver - vmwgfx_ttm_buffer.c
  */
 
···
  *
  * @viter: Pointer to the iterator to advance.
  *
- * Returns false if past the list of pages, true otherwise.
+ * Returns: false if past the list of pages, true otherwise.
  */
 static inline bool vmw_piter_next(struct vmw_piter *viter)
 {
···
  *
  * @viter: Pointer to the iterator
  *
- * Returns the DMA address of the page pointed to by @viter.
+ * Returns: the DMA address of the page pointed to by @viter.
  */
 static inline dma_addr_t vmw_piter_dma_addr(struct vmw_piter *viter)
 {
···
  *
  * @viter: Pointer to the iterator
  *
- * Returns the DMA address of the page pointed to by @viter.
+ * Returns: the DMA address of the page pointed to by @viter.
  */
 static inline struct page *vmw_piter_page(struct vmw_piter *viter)
 {
 	return viter->pages[viter->i];
 }
 
-/**
+/*
  * Command submission - vmwgfx_execbuf.c
  */
 
···
 				      int32_t out_fence_fd);
 bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd);
 
-/**
+/*
  * IRQs and wating - vmwgfx_irq.c
  */
 
···
 bool vmw_generic_waiter_remove(struct vmw_private *dev_priv,
 			       u32 flag, int *waiter_count);
 
-/**
+/*
  * Kernel modesetting - vmwgfx_kms.c
  */
 
···
 extern void vmw_resource_unpin(struct vmw_resource *res);
 extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res);
 
-/**
+/*
  * Overlay control - vmwgfx_overlay.c
  */
 
···
 int vmw_overlay_num_overlays(struct vmw_private *dev_priv);
 int vmw_overlay_num_free_overlays(struct vmw_private *dev_priv);
 
-/**
+/*
  * GMR Id manager
  */
 
 int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type);
 void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type);
 
-/**
+/*
  * System memory manager
  */
 int vmw_sys_man_init(struct vmw_private *dev_priv);
 void vmw_sys_man_fini(struct vmw_private *dev_priv);
 
-/**
+/*
  * Prime - vmwgfx_prime.c
  */
 
···
  * @line: The current line of the blit.
  * @line_offset: Offset of the current line segment.
  * @cpp: Bytes per pixel (granularity information).
- * @memcpy: Which memcpy function to use.
+ * @do_cpy: Which memcpy function to use.
  */
 struct vmw_diff_cpy {
 	struct drm_rect rect;
···
 
 /**
  * VMW_DEBUG_KMS - Debug output for kernel mode-setting
+ * @fmt: format string for the args
  *
  * This macro is for debugging vmwgfx mode-setting code.
  */
 #define VMW_DEBUG_KMS(fmt, ...) \
 	DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
 
-/**
+/*
  * Inline helper functions
  */
 
···
 
 /**
  * vmw_fifo_mem_read - Perform a MMIO read from the fifo memory
- *
+ * @vmw: The device private structure
  * @fifo_reg: The fifo register to read from
  *
  * This function is intended to be equivalent to ioread32() on
  * memremap'd memory, but without byteswapping.
+ *
+ * Returns: the value read
  */
 static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg)
 {
···
 
 /**
  * vmw_fifo_mem_write - Perform a MMIO write to volatile memory
- *
- * @addr: The fifo register to write to
+ * @vmw: The device private structure
+ * @fifo_reg: The fifo register to write to
+ * @value: The value to write
  *
  * This function is intended to be equivalent to iowrite32 on
  * memremap'd memory, but without byteswapping.
+2-1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···
 		ret = vmw_bo_dirty_add(bo);
 		if (!ret && surface && surface->res.func->dirty_alloc) {
 			surface->res.coherent = true;
-			ret = surface->res.func->dirty_alloc(&surface->res);
+			if (surface->res.dirty == NULL)
+				ret = surface->res.func->dirty_alloc(&surface->res);
 		}
 		ttm_bo_unreserve(&bo->tbo);
 	}
···
 	/** @size: Total usable size of this GGTT */
 	u64 size;
 
-#define XE_GGTT_FLAGS_64K BIT(0)
+#define XE_GGTT_FLAGS_64K	BIT(0)
+#define XE_GGTT_FLAGS_ONLINE	BIT(1)
 	/**
 	 * @flags: Flags for this GGTT
 	 * Acceptable flags:
 	 * - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K.
+	 * - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock
+	 *   after init
 	 */
 	unsigned int flags;
 	/** @scratch: Internal object allocation used as a scratch page */
···
 
 #define XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN	6
 
+static int guc_submit_reset_prepare(struct xe_guc *guc);
+
 static struct xe_guc *
 exec_queue_to_guc(struct xe_exec_queue *q)
 {
···
 					 EXEC_QUEUE_STATE_BANNED));
 }
 
-static void guc_submit_fini(struct drm_device *drm, void *arg)
+static void guc_submit_sw_fini(struct drm_device *drm, void *arg)
 {
 	struct xe_guc *guc = arg;
 	struct xe_device *xe = guc_to_xe(guc);
···
 	xe_gt_assert(gt, ret);
 
 	xa_destroy(&guc->submission_state.exec_queue_lookup);
+}
+
+static void guc_submit_fini(void *arg)
+{
+	struct xe_guc *guc = arg;
+
+	/* Forcefully kill any remaining exec queues */
+	xe_guc_ct_stop(&guc->ct);
+	guc_submit_reset_prepare(guc);
+	xe_guc_softreset(guc);
+	xe_guc_submit_stop(guc);
+	xe_uc_fw_sanitize(&guc->fw);
+	xe_guc_submit_pause_abort(guc);
 }
 
 static void guc_submit_wedged_fini(void *arg)
···
 
 	guc->submission_state.initialized = true;
 
-	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+	err = drmm_add_action_or_reset(&xe->drm, guc_submit_sw_fini, guc);
+	if (err)
+		return err;
+
+	return devm_add_action_or_reset(xe->drm.dev, guc_submit_fini, guc);
 }
 
 /*
···
  */
 void xe_guc_submit_wedge(struct xe_guc *guc)
 {
+	struct xe_device *xe = guc_to_xe(guc);
 	struct xe_gt *gt = guc_to_gt(guc);
 	struct xe_exec_queue *q;
 	unsigned long index;
···
 	if (!guc->submission_state.initialized)
 		return;
 
-	err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
-				       guc_submit_wedged_fini, guc);
-	if (err) {
-		xe_gt_err(gt, "Failed to register clean-up in wedged.mode=%s; "
-			  "Although device is wedged.\n",
-			  xe_wedged_mode_to_string(XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET));
-		return;
-	}
+	if (xe->wedged.mode == 2) {
+		err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
+					       guc_submit_wedged_fini, guc);
+		if (err) {
+			xe_gt_err(gt, "Failed to register clean-up on wedged.mode=2; "
+				  "Although device is wedged.\n");
+			return;
+		}
 
-	mutex_lock(&guc->submission_state.lock);
-	xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
-		if (xe_exec_queue_get_unless_zero(q))
-			set_exec_queue_wedged(q);
-	mutex_unlock(&guc->submission_state.lock);
+		mutex_lock(&guc->submission_state.lock);
+		xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
+			if (xe_exec_queue_get_unless_zero(q))
+				set_exec_queue_wedged(q);
+		mutex_unlock(&guc->submission_state.lock);
+	} else {
+		/* Forcefully kill any remaining exec queues, signal fences */
+		guc_submit_reset_prepare(guc);
+		xe_guc_submit_stop(guc);
+		xe_guc_softreset(guc);
+		xe_uc_fw_sanitize(&guc->fw);
+		xe_guc_submit_pause_abort(guc);
+	}
 }
 
 static bool guc_submit_hint_wedged(struct xe_guc *guc)
···
 static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
 {
 	struct xe_gpu_scheduler *sched = &q->guc->sched;
+	bool do_destroy = false;
 
 	/* Stop scheduling + flush any DRM scheduler operations */
 	xe_sched_submission_stop(sched);
···
 	/* Clean up lost G2H + reset engine state */
 	if (exec_queue_registered(q)) {
 		if (exec_queue_destroyed(q))
-			__guc_exec_queue_destroy(guc, q);
+			do_destroy = true;
 	}
 	if (q->guc->suspend_pending) {
 		set_exec_queue_suspended(q);
···
 			xe_guc_exec_queue_trigger_cleanup(q);
 		}
 	}
+
+	if (do_destroy)
+		__guc_exec_queue_destroy(guc, q);
 }
 
-int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+static int guc_submit_reset_prepare(struct xe_guc *guc)
 {
 	int ret;
-
-	if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
-		return 0;
-
-	if (!guc->submission_state.initialized)
-		return 0;
 
 	/*
 	 * Using an atomic here rather than submission_state.lock as this
···
 	wake_up_all(&guc->ct.wq);
 
 	return ret;
+}
+
+int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+{
+	if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
+		return 0;
+
+	if (!guc->submission_state.initialized)
+		return 0;
+
+	return guc_submit_reset_prepare(guc);
 }
 
 void xe_guc_submit_reset_wait(struct xe_guc *guc)
···
 			continue;
 
 		xe_sched_submission_start(sched);
-		if (exec_queue_killed_or_banned_or_wedged(q))
-			xe_guc_exec_queue_trigger_cleanup(q);
+		guc_exec_queue_kill(q);
 	}
 	mutex_unlock(&guc->submission_state.lock);
 }
+2-2
drivers/gpu/drm/xe/xe_lrc.c
···
  * @lrc: Pointer to the lrc.
  *
  * Return latest ctx timestamp. With support for active contexts, the
- * calculation may bb slightly racy, so follow a read-again logic to ensure that
+ * calculation may be slightly racy, so follow a read-again logic to ensure that
  * the context is still active before returning the right timestamp.
  *
  * Returns: New ctx timestamp value
  */
 u64 xe_lrc_timestamp(struct xe_lrc *lrc)
 {
-	u64 lrc_ts, reg_ts, new_ts;
+	u64 lrc_ts, reg_ts, new_ts = lrc->ctx_timestamp;
 	u32 engine_id;
 
 	lrc_ts = xe_lrc_ctx_timestamp(lrc);
+5-2
drivers/gpu/drm/xe/xe_oa.c
···
 	size_t offset = 0;
 	int ret;
 
-	/* Can't read from disabled streams */
-	if (!stream->enabled || !stream->sample)
+	if (!stream->sample)
 		return -EINVAL;
 
 	if (!(file->f_flags & O_NONBLOCK)) {
···
 
 	if (stream->sample)
 		hrtimer_cancel(&stream->poll_check_timer);
+
+	/* Update stream->oa_buffer.tail to allow any final reports to be read */
+	if (xe_oa_buffer_check_unlocked(stream))
+		wake_up(&stream->poll_wq);
 }
 
 static int xe_oa_enable_preempt_timeslice(struct xe_oa_stream *stream)
+29-9
drivers/gpu/drm/xe/xe_pt.c
···
 	XE_WARN_ON(!level);
 	/* Check for leaf node */
 	if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) &&
-	    (!xe_child->base.children || !xe_child->base.children[first])) {
+	    xe_child->level <= MAX_HUGEPTE_LEVEL) {
 		struct iosys_map *leaf_map = &xe_child->bo->vmap;
 		pgoff_t count = xe_pt_num_entries(addr, next, xe_child->level, walk);
 
 		for (pgoff_t i = 0; i < count; i++) {
-			u64 pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64);
+			u64 pte;
 			int ret;
+
+			/*
+			 * If not a leaf pt, skip unless non-leaf pt is interleaved between
+			 * leaf ptes which causes the page walk to skip over the child leaves
+			 */
+			if (xe_child->base.children && xe_child->base.children[first + i]) {
+				u64 pt_size = 1ULL << walk->shifts[xe_child->level];
+				bool edge_pt = (i == 0 && !IS_ALIGNED(addr, pt_size)) ||
+					       (i == count - 1 && !IS_ALIGNED(next, pt_size));
+
+				if (!edge_pt) {
+					xe_page_reclaim_list_abort(xe_walk->tile->primary_gt,
+								   xe_walk->prl,
+								   "PT is skipped by walk at level=%u offset=%lu",
+								   xe_child->level, first + i);
+					break;
+				}
+				continue;
+			}
+
+			pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64);
 
 			/*
 			 * In rare scenarios, pte may not be written yet due to racy conditions.
···
 			}
 
 			/* Ensure it is a defined page */
-			xe_tile_assert(xe_walk->tile,
-				       xe_child->level == 0 ||
-				       (pte & (XE_PTE_PS64 | XE_PDE_PS_2M | XE_PDPE_PS_1G)));
+			xe_tile_assert(xe_walk->tile, xe_child->level == 0 ||
+				       (pte & (XE_PDE_PS_2M | XE_PDPE_PS_1G)));
 
 			/* An entry should be added for 64KB but contigious 4K have XE_PTE_PS64 */
 			if (pte & XE_PTE_PS64)
···
 		killed = xe_pt_check_kill(addr, next, level - 1, xe_child, action, walk);
 
 		/*
-		 * Verify PRL is active and if entry is not a leaf pte (base.children conditions),
-		 * there is a potential need to invalidate the PRL if any PTE (num_live) are dropped.
+		 * Verify if any PTE are potentially dropped at non-leaf levels, either from being
+		 * killed or the page walk covers the region.
 		 */
-		if (xe_walk->prl && level > 1 && xe_child->num_live &&
-		    xe_child->base.children && xe_child->base.children[first]) {
+		if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) &&
+		    xe_child->level > MAX_HUGEPTE_LEVEL && xe_child->num_live) {
 			bool covered = xe_pt_covers(addr, next, xe_child->level, &xe_walk->base);
 
 			/*
+24-22
drivers/gpu/nova-core/gsp.rs
···
 unsafe impl<const NUM_ENTRIES: usize> AsBytes for PteArray<NUM_ENTRIES> {}
 
 impl<const NUM_PAGES: usize> PteArray<NUM_PAGES> {
-    /// Creates a new page table array mapping `NUM_PAGES` GSP pages starting at address `start`.
-    fn new(start: DmaAddress) -> Result<Self> {
-        let mut ptes = [0u64; NUM_PAGES];
-        for (i, pte) in ptes.iter_mut().enumerate() {
-            *pte = start
-                .checked_add(num::usize_as_u64(i) << GSP_PAGE_SHIFT)
-                .ok_or(EOVERFLOW)?;
-        }
-
-        Ok(Self(ptes))
+    /// Returns the page table entry for `index`, for a mapping starting at `start`.
+    // TODO: Replace with `IoView` projection once available.
+    fn entry(start: DmaAddress, index: usize) -> Result<u64> {
+        start
+            .checked_add(num::usize_as_u64(index) << GSP_PAGE_SHIFT)
+            .ok_or(EOVERFLOW)
     }
 }
 
···
             NUM_PAGES * GSP_PAGE_SIZE,
             GFP_KERNEL | __GFP_ZERO,
         )?);
-        let ptes = PteArray::<NUM_PAGES>::new(obj.0.dma_handle())?;
+
+        let start_addr = obj.0.dma_handle();
 
         // SAFETY: `obj` has just been created and we are its sole user.
-        unsafe {
-            // Copy the self-mapping PTE at the expected location.
+        let pte_region = unsafe {
             obj.0
-                .as_slice_mut(size_of::<u64>(), size_of_val(&ptes))?
-                .copy_from_slice(ptes.as_bytes())
+                .as_slice_mut(size_of::<u64>(), NUM_PAGES * size_of::<u64>())?
         };
+
+        // Write values one by one to avoid an on-stack instance of `PteArray`.
+        for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() {
+            let pte_value = PteArray::<0>::entry(start_addr, i)?;
+
+            chunk.copy_from_slice(&pte_value.to_ne_bytes());
+        }
 
         Ok(obj)
     }
···
             // _kgspInitLibosLoggingStructures (allocates memory for buffers)
             // kgspSetupLibosInitArgs_IMPL (creates pLibosInitArgs[] array)
             dma_write!(
-                libos[0] = LibosMemoryRegionInitArgument::new("LOGINIT", &loginit.0)
-            )?;
+                libos, [0]?, LibosMemoryRegionInitArgument::new("LOGINIT", &loginit.0)
+            );
             dma_write!(
-                libos[1] = LibosMemoryRegionInitArgument::new("LOGINTR", &logintr.0)
-            )?;
-            dma_write!(libos[2] = LibosMemoryRegionInitArgument::new("LOGRM", &logrm.0))?;
-            dma_write!(rmargs[0].inner = fw::GspArgumentsCached::new(cmdq))?;
-            dma_write!(libos[3] = LibosMemoryRegionInitArgument::new("RMARGS", rmargs))?;
+                libos, [1]?, LibosMemoryRegionInitArgument::new("LOGINTR", &logintr.0)
+            );
+            dma_write!(libos, [2]?, LibosMemoryRegionInitArgument::new("LOGRM", &logrm.0));
+            dma_write!(rmargs, [0]?.inner, fw::GspArgumentsCached::new(cmdq));
+            dma_write!(libos, [3]?, LibosMemoryRegionInitArgument::new("RMARGS", rmargs));
         },
     }))
 })
···
 
 use core::{
     cmp,
-    mem,
-    sync::atomic::{
-        fence,
-        Ordering, //
-    }, //
+    mem, //
 };
 
 use kernel::{
···
 #[repr(C)]
 // There is no struct defined for this in the open-gpu-kernel-source headers.
 // Instead it is defined by code in `GspMsgQueuesInit()`.
-struct Msgq {
+// TODO: Revert to private once `IoView` projections replace the `gsp_mem` module.
+pub(super) struct Msgq {
     /// Header for sending messages, including the write pointer.
-    tx: MsgqTxHeader,
+    pub(super) tx: MsgqTxHeader,
     /// Header for receiving messages, including the read pointer.
-    rx: MsgqRxHeader,
+    pub(super) rx: MsgqRxHeader,
     /// The message queue proper.
     msgq: MsgqData,
 }
 
 /// Structure shared between the driver and the GSP and containing the command and message queues.
 #[repr(C)]
-struct GspMem {
+// TODO: Revert to private once `IoView` projections replace the `gsp_mem` module.
+pub(super) struct GspMem {
     /// Self-mapping page table entries.
-    ptes: PteArray<{ GSP_PAGE_SIZE / size_of::<u64>() }>,
+    ptes: PteArray<{ Self::PTE_ARRAY_SIZE }>,
     /// CPU queue: the driver writes commands here, and the GSP reads them. It also contains the
     /// write and read pointers that the CPU updates.
     ///
     /// This member is read-only for the GSP.
-    cpuq: Msgq,
+    pub(super) cpuq: Msgq,
     /// GSP queue: the GSP writes messages here, and the driver reads them. It also contains the
     /// write and read pointers that the GSP updates.
     ///
     /// This member is read-only for the driver.
-    gspq: Msgq,
+    pub(super) gspq: Msgq,
+}
+
+impl GspMem {
+    const PTE_ARRAY_SIZE: usize = GSP_PAGE_SIZE / size_of::<u64>();
 }
 
 // SAFETY: These structs don't meet the no-padding requirements of AsBytes but
···
 
         let gsp_mem =
             CoherentAllocation::<GspMem>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?;
-        dma_write!(gsp_mem[0].ptes = PteArray::new(gsp_mem.dma_handle())?)?;
-        dma_write!(gsp_mem[0].cpuq.tx = MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES))?;
-        dma_write!(gsp_mem[0].cpuq.rx = MsgqRxHeader::new())?;
+
+        let start = gsp_mem.dma_handle();
+        // Write values one by one to avoid an on-stack instance of `PteArray`.
+        for i in 0..GspMem::PTE_ARRAY_SIZE {
+            dma_write!(gsp_mem, [0]?.ptes.0[i], PteArray::<0>::entry(start, i)?);
+        }
+
+        dma_write!(
+            gsp_mem,
+            [0]?.cpuq.tx,
+            MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES)
+        );
+        dma_write!(gsp_mem, [0]?.cpuq.rx, MsgqRxHeader::new());
 
         Ok(Self(gsp_mem))
     }
···
     //
     // - The returned value is between `0` and `MSGQ_NUM_PAGES`.
     fn gsp_write_ptr(&self) -> u32 {
-        let gsp_mem = self.0.start_ptr();
-
-        // SAFETY:
-        // - The 'CoherentAllocation' contains at least one object.
-        // - By the invariants of `CoherentAllocation` the pointer is valid.
-        (unsafe { (*gsp_mem).gspq.tx.write_ptr() } % MSGQ_NUM_PAGES)
+        super::fw::gsp_mem::gsp_write_ptr(&self.0)
    }
 
     // Returns the index of the memory page the GSP will read the next command from.
···
     //
     // - The returned value is between `0` and `MSGQ_NUM_PAGES`.
     fn gsp_read_ptr(&self) -> u32 {
-        let gsp_mem = self.0.start_ptr();
-
-        // SAFETY:
-        // - The 'CoherentAllocation' contains at least one object.
-        // - By the invariants of `CoherentAllocation` the pointer is valid.
-        (unsafe { (*gsp_mem).gspq.rx.read_ptr() } % MSGQ_NUM_PAGES)
+        super::fw::gsp_mem::gsp_read_ptr(&self.0)
     }
 
     // Returns the index of the memory page the CPU can read the next message from.
···
     //
     // - The returned value is between `0` and `MSGQ_NUM_PAGES`.
     fn cpu_read_ptr(&self) -> u32 {
-        let gsp_mem = self.0.start_ptr();
-
-        // SAFETY:
-        // - The ['CoherentAllocation'] contains at least one object.
-        // - By the invariants of CoherentAllocation the pointer is valid.
-        (unsafe { (*gsp_mem).cpuq.rx.read_ptr() } % MSGQ_NUM_PAGES)
+        super::fw::gsp_mem::cpu_read_ptr(&self.0)
     }
 
     // Informs the GSP that it can send `elem_count` new pages into the message queue.
     fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
-        let rptr = self.cpu_read_ptr().wrapping_add(elem_count) % MSGQ_NUM_PAGES;
-
-        // Ensure read pointer is properly ordered.
-        fence(Ordering::SeqCst);
-
-        let gsp_mem = self.0.start_ptr_mut();
-
-        // SAFETY:
-        // - The 'CoherentAllocation' contains at least one object.
-        // - By the invariants of `CoherentAllocation` the pointer is valid.
-        unsafe { (*gsp_mem).cpuq.rx.set_read_ptr(rptr) };
+        super::fw::gsp_mem::advance_cpu_read_ptr(&self.0, elem_count)
     }
 
     // Returns the index of the memory page the CPU can write the next command to.
···
     //
     // - The returned value is between `0` and `MSGQ_NUM_PAGES`.
     fn cpu_write_ptr(&self) -> u32 {
-        let gsp_mem = self.0.start_ptr();
-
-        // SAFETY:
-        // - The 'CoherentAllocation' contains at least one object.
-        // - By the invariants of `CoherentAllocation` the pointer is valid.
-        (unsafe { (*gsp_mem).cpuq.tx.write_ptr() } % MSGQ_NUM_PAGES)
+        super::fw::gsp_mem::cpu_write_ptr(&self.0)
     }
 
     // Informs the GSP that it can process `elem_count` new pages from the command queue.
     fn advance_cpu_write_ptr(&mut self, elem_count: u32) {
-        let wptr = self.cpu_write_ptr().wrapping_add(elem_count) & MSGQ_NUM_PAGES;
-        let gsp_mem = self.0.start_ptr_mut();
-
-        // SAFETY:
-        // - The 'CoherentAllocation' contains at least one object.
-        // - By the invariants of `CoherentAllocation` the pointer is valid.
-        unsafe { (*gsp_mem).cpuq.tx.set_write_ptr(wptr) };
-
-        // Ensure all command data is visible before triggering the GSP read.
-        fence(Ordering::SeqCst);
+        super::fw::gsp_mem::advance_cpu_write_ptr(&self.0, elem_count)
     }
 }
 
+69-32
drivers/gpu/nova-core/gsp/fw.rs
···
 	},
 };
 
+// TODO: Replace with `IoView` projections once available; the `unwrap()` calls go away once we
+// switch to the new `dma::Coherent` API.
+pub(super) mod gsp_mem {
+    use core::sync::atomic::{
+        fence,
+        Ordering, //
+    };
+
+    use kernel::{
+        dma::CoherentAllocation,
+        dma_read,
+        dma_write,
+        prelude::*, //
+    };
+
+    use crate::gsp::cmdq::{
+        GspMem,
+        MSGQ_NUM_PAGES, //
+    };
+
+    pub(in crate::gsp) fn gsp_write_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {
+        // PANIC: A `dma::CoherentAllocation` always contains at least one element.
+        || -> Result<u32> { Ok(dma_read!(qs, [0]?.gspq.tx.0.writePtr) % MSGQ_NUM_PAGES) }().unwrap()
+    }
+
+    pub(in crate::gsp) fn gsp_read_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {
+        // PANIC: A `dma::CoherentAllocation` always contains at least one element.
+        || -> Result<u32> { Ok(dma_read!(qs, [0]?.gspq.rx.0.readPtr) % MSGQ_NUM_PAGES) }().unwrap()
+    }
+
+    pub(in crate::gsp) fn cpu_read_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {
+        // PANIC: A `dma::CoherentAllocation` always contains at least one element.
+        || -> Result<u32> { Ok(dma_read!(qs, [0]?.cpuq.rx.0.readPtr) % MSGQ_NUM_PAGES) }().unwrap()
+    }
+
+    pub(in crate::gsp) fn advance_cpu_read_ptr(qs: &CoherentAllocation<GspMem>, count: u32) {
+        let rptr = cpu_read_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
+
+        // Ensure read pointer is properly ordered.
+        fence(Ordering::SeqCst);
+
+        // PANIC: A `dma::CoherentAllocation` always contains at least one element.
+        || -> Result {
+            dma_write!(qs, [0]?.cpuq.rx.0.readPtr, rptr);
+            Ok(())
+        }()
+        .unwrap()
+    }
+
+    pub(in crate::gsp) fn cpu_write_ptr(qs: &CoherentAllocation<GspMem>) -> u32 {
+        // PANIC: A `dma::CoherentAllocation` always contains at least one element.
+        || -> Result<u32> { Ok(dma_read!(qs, [0]?.cpuq.tx.0.writePtr) % MSGQ_NUM_PAGES) }().unwrap()
+    }
+
+    pub(in crate::gsp) fn advance_cpu_write_ptr(qs: &CoherentAllocation<GspMem>, count: u32) {
+        let wptr = cpu_write_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
+
+        // PANIC: A `dma::CoherentAllocation` always contains at least one element.
+        || -> Result {
+            dma_write!(qs, [0]?.cpuq.tx.0.writePtr, wptr);
+            Ok(())
+        }()
+        .unwrap();
+
+        // Ensure all command data is visible before triggering the GSP read.
+        fence(Ordering::SeqCst);
+    }
+}
+
 /// Empty type to group methods related to heap parameters for running the GSP firmware.
 enum GspFwHeapParams {}
···
             entryOff: num::usize_into_u32::<GSP_PAGE_SIZE>(),
         })
     }
-
-    /// Returns the value of the write pointer for this queue.
-    pub(crate) fn write_ptr(&self) -> u32 {
-        let ptr = core::ptr::from_ref(&self.0.writePtr);
-
-        // SAFETY: `ptr` is a valid pointer to a `u32`.
-        unsafe { ptr.read_volatile() }
-    }
-
-    /// Sets the value of the write pointer for this queue.
-    pub(crate) fn set_write_ptr(&mut self, val: u32) {
-        let ptr = core::ptr::from_mut(&mut self.0.writePtr);
-
-        // SAFETY: `ptr` is a valid pointer to a `u32`.
-        unsafe { ptr.write_volatile(val) }
-    }
 }
 
 // SAFETY: Padding is explicit and does not contain uninitialized data.
···
     /// Creates a new RX queue header.
     pub(crate) fn new() -> Self {
         Self(Default::default())
-    }
-
-    /// Returns the value of the read pointer for this queue.
-    pub(crate) fn read_ptr(&self) -> u32 {
-        let ptr = core::ptr::from_ref(&self.0.readPtr);
-
-        // SAFETY: `ptr` is a valid pointer to a `u32`.
-        unsafe { ptr.read_volatile() }
-    }
-
-    /// Sets the value of the read pointer for this queue.
-    pub(crate) fn set_read_ptr(&mut self, val: u32) {
-        let ptr = core::ptr::from_mut(&mut self.0.readPtr);
-
-        // SAFETY: `ptr` is a valid pointer to a `u32`.
-        unsafe { ptr.write_volatile(val) }
     }
 }
+2
drivers/hid/bpf/hid_bpf_dispatch.c
···
 			     (u64)(long)ctx,
 			     true); /* prevent infinite recursions */
 
+	if (ret > size)
+		ret = size;
 	if (ret > 0)
 		memcpy(buf, dma_data, ret);
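The hunk above caps a BPF-program-reported length at the caller's buffer size before the `memcpy()`. A minimal user-space sketch of the same clamp-then-copy pattern (the helper name is made up, not part of the driver):

```c
#include <stddef.h>
#include <string.h>

/* An untrusted callback reports how many bytes it wrote; clamp that length
 * to the destination buffer size before using it as a memcpy() length.
 * Negative values (errors) pass through unchanged. */
static long clamp_len(long ret, size_t dst_size)
{
	if (ret > (long)dst_size)
		ret = dst_size;
	return ret;
}

/* Typical use: ret = clamp_len(cb_ret, sizeof(dst));
 *              if (ret > 0) memcpy(dst, src, ret); */
```

The key point is that the clamp happens before the sign check, so an oversized claim can never reach the copy.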
···
 	rsize = max_buffer_size;
 
 	if (csize < rsize) {
-		dbg_hid("report %d is too short, (%d < %d)\n", report->id,
-			csize, rsize);
-		memset(cdata + csize, 0, rsize - csize);
+		hid_warn_ratelimited(hid, "Event data for report %d was too short (%d vs %d)\n",
+				     report->id, rsize, csize);
+		ret = -EINVAL;
+		goto out;
 	}
 
 	if ((hid->claimed & HID_CLAIMED_HIDDEV) && hid->hiddev_report_event)
···
 #define HID_BATTERY_QUIRK_FEATURE	(1 << 1) /* ask for feature report */
 #define HID_BATTERY_QUIRK_IGNORE	(1 << 2) /* completely ignore the battery */
 #define HID_BATTERY_QUIRK_AVOID_QUERY	(1 << 3) /* do not query the battery */
+#define HID_BATTERY_QUIRK_DYNAMIC	(1 << 4) /* report present only after life signs */
 
 static const struct hid_device_id hid_battery_quirks[] = {
 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
···
 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
 		USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD),
 	  HID_BATTERY_QUIRK_IGNORE },
-	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN),
-	  HID_BATTERY_QUIRK_IGNORE },
-	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN),
-	  HID_BATTERY_QUIRK_IGNORE },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L),
 	  HID_BATTERY_QUIRK_AVOID_QUERY },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_MW),
···
 	 * Elan HID touchscreens seem to all report a non present battery,
 	 * set HID_BATTERY_QUIRK_IGNORE for all Elan I2C and USB HID devices.
 	 */
-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE },
-	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE },
+	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_DYNAMIC },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_DYNAMIC },
 	{}
 };
···
 	int ret = 0;
 
 	switch (prop) {
-	case POWER_SUPPLY_PROP_PRESENT:
 	case POWER_SUPPLY_PROP_ONLINE:
 		val->intval = 1;
+		break;
+
+	case POWER_SUPPLY_PROP_PRESENT:
+		val->intval = dev->battery_present;
 		break;
 
 	case POWER_SUPPLY_PROP_CAPACITY:
···
 	if (quirks & HID_BATTERY_QUIRK_AVOID_QUERY)
 		dev->battery_avoid_query = true;
 
+	dev->battery_present = (quirks & HID_BATTERY_QUIRK_DYNAMIC) ? false : true;
+
 	dev->battery = power_supply_register(&dev->dev, psy_desc, &psy_cfg);
 	if (IS_ERR(dev->battery)) {
 		error = PTR_ERR(dev->battery);
···
 		return;
 
 	if (hidinput_update_battery_charge_status(dev, usage, value)) {
+		dev->battery_present = true;
 		power_supply_changed(dev->battery);
 		return;
 	}
···
 	if (dev->battery_status != HID_BATTERY_REPORTED ||
 	    capacity != dev->battery_capacity ||
 	    ktime_after(ktime_get_coarse(), dev->battery_ratelimit_time)) {
+		dev->battery_present = true;
 		dev->battery_capacity = capacity;
 		dev->battery_status = HID_BATTERY_REPORTED;
 		dev->battery_ratelimit_time =
+5-1
drivers/hid/hid-logitech-hidpp.c
···
 		if (!ret)
 			ret = hidpp_ff_init(hidpp, &data);
 
-		if (ret)
+		if (ret) {
 			hid_warn(hidpp->hid_dev,
 				 "Unable to initialize force feedback support, errno %d\n",
 				 ret);
+			ret = 0;
+		}
 	}
 
 	/*
···
 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb038) },
 	{ /* Slim Solar+ K980 Keyboard over Bluetooth */
 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb391) },
+	{ /* MX Master 4 mouse over Bluetooth */
+	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb042) },
 	{}
 };
+7
drivers/hid/hid-multitouch.c
···
 			dev_warn(&hdev->dev, "failed to fetch feature %d\n",
 				 report->id);
 		} else {
+			/* The report ID in the request and the response should match */
+			if (report->id != buf[0]) {
+				hid_err(hdev, "Returned feature report did not match the request\n");
+				goto free;
+			}
+
 			ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, buf,
 						   size, 0);
 			if (ret)
 				dev_warn(&hdev->dev, "failed to report feature\n");
 		}
+free:
 		kfree(buf);
 	}
···
 	hid->product = le16_to_cpu(qsdev->dev_desc.product_id);
 	snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "quickspi-hid",
 		 hid->vendor, hid->product);
+	strscpy(hid->phys, dev_name(qsdev->dev), sizeof(hid->phys));
 
 	ret = hid_add_device(hid);
 	if (ret) {
+10
drivers/hid/wacom_wac.c
···
 
 	switch (data[0]) {
 	case 0x04:
+		if (len < 32) {
+			dev_warn(wacom->pen_input->dev.parent,
+				 "Report 0x04 too short: %zu bytes\n", len);
+			break;
+		}
 		wacom_intuos_bt_process_data(wacom, data + i);
 		i += 10;
 		fallthrough;
 	case 0x03:
+		if (i == 1 && len < 22) {
+			dev_warn(wacom->pen_input->dev.parent,
+				 "Report 0x03 too short: %zu bytes\n", len);
+			break;
+		}
 		wacom_intuos_bt_process_data(wacom, data + i);
 		i += 10;
 		wacom_intuos_bt_process_data(wacom, data + i);
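The added checks validate the report length before the fixed-offset parsing walks through the buffer. A standalone sketch of the same validate-before-parse idea, using the minimum lengths (32 and 22 bytes) taken from the hunk above (the helper name is hypothetical):

```c
#include <stddef.h>

/* Return nonzero when a Bluetooth pen report is long enough for the
 * fixed-offset parser that follows; report IDs other than 0x03/0x04 are
 * assumed to be validated elsewhere. */
static int bt_report_len_ok(unsigned char id, size_t len)
{
	switch (id) {
	case 0x04:
		return len >= 32;
	case 0x03:
		return len >= 22;
	default:
		return 1;
	}
}
```

Rejecting short reports up front means the per-frame processing never has to re-check bounds at each 10-byte step.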
+4-2
drivers/hv/mshv_regions.c
···
 		ret = pin_user_pages_fast(userspace_addr, nr_pages,
 					  FOLL_WRITE | FOLL_LONGTERM,
 					  pages);
-		if (ret < 0)
+		if (ret != nr_pages)
 			goto release_pages;
 	}
 
 	return 0;
 
release_pages:
+	if (ret > 0)
+		done_count += ret;
 	mshv_region_invalidate_pages(region, 0, done_count);
-	return ret;
+	return ret < 0 ? ret : -ENOMEM;
}
 
 static int mshv_region_chunk_unmap(struct mshv_mem_region *region,
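The hunk above handles the fact that `pin_user_pages_fast()` can return a *short* positive count as well as a negative errno, and a partial pin must both be released and reported as a failure. A minimal sketch of that result mapping (function name invented for illustration):

```c
#include <errno.h>

/* Map a pin_user_pages_fast()-style result to a single errno:
 * negative means nothing was pinned (pass the errno through), a short
 * positive count means a partial pin (the caller must release 'ret'
 * pages and gets -ENOMEM), and a full count is success. */
static long pin_result_to_errno(long ret, long nr_pages)
{
	if (ret < 0)
		return ret;
	if (ret != nr_pages)
		return -ENOMEM;
	return 0;
}
```

Folding the partially pinned pages into the release count, as the patch does with `done_count`, is what keeps the error path from leaking pins.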
···
 	HVCALL_SET_VP_REGISTERS,
 	HVCALL_TRANSLATE_VIRTUAL_ADDRESS,
 	HVCALL_CLEAR_VIRTUAL_INTERRUPT,
-	HVCALL_SCRUB_PARTITION,
 	HVCALL_REGISTER_INTERCEPT_RESULT,
 	HVCALL_ASSERT_VIRTUAL_INTERRUPT,
 	HVCALL_GET_GPA_PAGES_ACCESS_STATES,
···
  */
 static long
 mshv_map_user_memory(struct mshv_partition *partition,
-		     struct mshv_user_mem_region mem)
+		     struct mshv_user_mem_region *mem)
 {
 	struct mshv_mem_region *region;
 	struct vm_area_struct *vma;
···
 	ulong mmio_pfn;
 	long ret;
 
-	if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP) ||
-	    !access_ok((const void __user *)mem.userspace_addr, mem.size))
+	if (mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP) ||
+	    !access_ok((const void __user *)mem->userspace_addr, mem->size))
 		return -EINVAL;
 
 	mmap_read_lock(current->mm);
-	vma = vma_lookup(current->mm, mem.userspace_addr);
+	vma = vma_lookup(current->mm, mem->userspace_addr);
 	is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0;
 	mmio_pfn = is_mmio ? vma->vm_pgoff : 0;
 	mmap_read_unlock(current->mm);
···
 	if (!vma)
 		return -EINVAL;
 
-	ret = mshv_partition_create_region(partition, &mem, &region,
+	ret = mshv_partition_create_region(partition, mem, &region,
 					   is_mmio);
 	if (ret)
 		return ret;
···
 	return 0;
 
errout:
-	vfree(region);
+	mshv_region_put(region);
 	return ret;
 }
 
 /* Called for unmapping both the guest ram and the mmio space */
 static long
 mshv_unmap_user_memory(struct mshv_partition *partition,
-		       struct mshv_user_mem_region mem)
+		       struct mshv_user_mem_region *mem)
 {
 	struct mshv_mem_region *region;
 
-	if (!(mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP)))
+	if (!(mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP)))
 		return -EINVAL;
 
 	spin_lock(&partition->pt_mem_regions_lock);
 
-	region = mshv_partition_region_by_gfn(partition, mem.guest_pfn);
+	region = mshv_partition_region_by_gfn(partition, mem->guest_pfn);
 	if (!region) {
 		spin_unlock(&partition->pt_mem_regions_lock);
 		return -ENOENT;
 	}
 
 	/* Paranoia check */
-	if (region->start_uaddr != mem.userspace_addr ||
-	    region->start_gfn != mem.guest_pfn ||
-	    region->nr_pages != HVPFN_DOWN(mem.size)) {
+	if (region->start_uaddr != mem->userspace_addr ||
+	    region->start_gfn != mem->guest_pfn ||
+	    region->nr_pages != HVPFN_DOWN(mem->size)) {
 		spin_unlock(&partition->pt_mem_regions_lock);
 		return -EINVAL;
 	}
···
 		return -EINVAL;
 
 	if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP))
-		return mshv_unmap_user_memory(partition, mem);
+		return mshv_unmap_user_memory(partition, &mem);
 
-	return mshv_map_user_memory(partition, mem);
+	return mshv_map_user_memory(partition, &mem);
 }
 
 static long
···
 	return 0;
 }
 
-static int mshv_cpuhp_online;
 static int mshv_root_sched_online;
 
 static const char *scheduler_type_to_string(enum hv_scheduler_type type)
···
 	free_percpu(root_scheduler_output);
 }
 
-static int mshv_reboot_notify(struct notifier_block *nb,
-			      unsigned long code, void *unused)
-{
-	cpuhp_remove_state(mshv_cpuhp_online);
-	return 0;
-}
-
-struct notifier_block mshv_reboot_nb = {
-	.notifier_call = mshv_reboot_notify,
-};
-
-static void mshv_root_partition_exit(void)
-{
-	unregister_reboot_notifier(&mshv_reboot_nb);
-}
-
-static int __init mshv_root_partition_init(struct device *dev)
-{
-	return register_reboot_notifier(&mshv_reboot_nb);
-}
-
 static int __init mshv_init_vmm_caps(struct device *dev)
 {
 	int ret;
···
 			 MSHV_HV_MAX_VERSION);
 	}
 
-	mshv_root.synic_pages = alloc_percpu(struct hv_synic_pages);
-	if (!mshv_root.synic_pages) {
-		dev_err(dev, "Failed to allocate percpu synic page\n");
-		ret = -ENOMEM;
+	ret = mshv_synic_init(dev);
+	if (ret)
 		goto device_deregister;
-	}
-
-	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic",
-				mshv_synic_init,
-				mshv_synic_cleanup);
-	if (ret < 0) {
-		dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret);
-		goto free_synic_pages;
-	}
-
-	mshv_cpuhp_online = ret;
 
 	ret = mshv_init_vmm_caps(dev);
 	if (ret)
-		goto remove_cpu_state;
+		goto synic_cleanup;
 
 	ret = mshv_retrieve_scheduler_type(dev);
 	if (ret)
-		goto remove_cpu_state;
-
-	if (hv_root_partition())
-		ret = mshv_root_partition_init(dev);
-	if (ret)
-		goto remove_cpu_state;
+		goto synic_cleanup;
 
 	ret = root_scheduler_init(dev);
 	if (ret)
-		goto exit_partition;
+		goto synic_cleanup;
 
 	ret = mshv_debugfs_init();
 	if (ret)
···
 	mshv_debugfs_exit();
deinit_root_scheduler:
 	root_scheduler_deinit();
-exit_partition:
-	if (hv_root_partition())
-		mshv_root_partition_exit();
-remove_cpu_state:
-	cpuhp_remove_state(mshv_cpuhp_online);
-free_synic_pages:
-	free_percpu(mshv_root.synic_pages);
+synic_cleanup:
+	mshv_synic_exit();
device_deregister:
 	misc_deregister(&mshv_dev);
 	return ret;
···
 	misc_deregister(&mshv_dev);
 	mshv_irqfd_wq_cleanup();
 	root_scheduler_deinit();
-	if (hv_root_partition())
-		mshv_root_partition_exit();
-	cpuhp_remove_state(mshv_cpuhp_online);
-	free_percpu(mshv_root.synic_pages);
+	mshv_synic_exit();
 }
 
 module_init(mshv_parent_partition_init);
+173-15
drivers/hv/mshv_synic.c
···
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/mm.h>
+#include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/random.h>
+#include <linux/cpuhotplug.h>
+#include <linux/reboot.h>
 #include <asm/mshyperv.h>
+#include <linux/acpi.h>
 
 #include "mshv_eventfd.h"
 #include "mshv.h"
+
+static int synic_cpuhp_online;
+static struct hv_synic_pages __percpu *synic_pages;
+static int mshv_sint_vector = -1;	/* hwirq for the SynIC SINTs */
+static int mshv_sint_irq = -1;		/* Linux IRQ for mshv_sint_vector */
 
 static u32 synic_event_ring_get_queued_port(u32 sint_index)
 {
···
 	u32 message;
 	u8 tail;
 
-	spages = this_cpu_ptr(mshv_root.synic_pages);
+	spages = this_cpu_ptr(synic_pages);
 	event_ring_page = &spages->synic_event_ring_page;
 	synic_eventring_tail = (u8 **)this_cpu_ptr(hv_synic_eventring_tail);
···
 
 void mshv_isr(void)
 {
-	struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages);
+	struct hv_synic_pages *spages = this_cpu_ptr(synic_pages);
 	struct hv_message_page **msg_page = &spages->hyp_synic_message_page;
 	struct hv_message *msg;
 	bool handled;
···
 		if (msg->header.message_flags.msg_pending)
 			hv_set_non_nested_msr(HV_MSR_EOM, 0);
 
-#ifdef HYPERVISOR_CALLBACK_VECTOR
-		add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR);
-#endif
+		add_interrupt_randomness(mshv_sint_vector);
 	} else {
 		pr_warn_once("%s: unknown message type 0x%x\n", __func__,
 			     msg->header.message_type);
 	}
 }
 
-int mshv_synic_init(unsigned int cpu)
+static int mshv_synic_cpu_init(unsigned int cpu)
 {
 	union hv_synic_simp simp;
 	union hv_synic_siefp siefp;
 	union hv_synic_sirbp sirbp;
-#ifdef HYPERVISOR_CALLBACK_VECTOR
 	union hv_synic_sint sint;
-#endif
 	union hv_synic_scontrol sctrl;
-	struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages);
+	struct hv_synic_pages *spages = this_cpu_ptr(synic_pages);
 	struct hv_message_page **msg_page = &spages->hyp_synic_message_page;
 	struct hv_synic_event_flags_page **event_flags_page =
 		&spages->synic_event_flags_page;
···
 
 	hv_set_non_nested_msr(HV_MSR_SIRBP, sirbp.as_uint64);
 
-#ifdef HYPERVISOR_CALLBACK_VECTOR
+	if (mshv_sint_irq != -1)
+		enable_percpu_irq(mshv_sint_irq, 0);
+
 	/* Enable intercepts */
 	sint.as_uint64 = 0;
-	sint.vector = HYPERVISOR_CALLBACK_VECTOR;
+	sint.vector = mshv_sint_vector;
 	sint.masked = false;
 	sint.auto_eoi = hv_recommend_using_aeoi();
 	hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_INTERCEPTION_SINT_INDEX,
···
 
 	/* Doorbell SINT */
 	sint.as_uint64 = 0;
-	sint.vector = HYPERVISOR_CALLBACK_VECTOR;
+	sint.vector = mshv_sint_vector;
 	sint.masked = false;
 	sint.as_intercept = 1;
 	sint.auto_eoi = hv_recommend_using_aeoi();
 	hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX,
 			      sint.as_uint64);
-#endif
 
 	/* Enable global synic bit */
 	sctrl.as_uint64 = hv_get_non_nested_msr(HV_MSR_SCONTROL);
···
 	return -EFAULT;
 }
 
-int mshv_synic_cleanup(unsigned int cpu)
+static int mshv_synic_cpu_exit(unsigned int cpu)
 {
 	union hv_synic_sint sint;
 	union hv_synic_simp simp;
 	union hv_synic_siefp siefp;
 	union hv_synic_sirbp sirbp;
 	union hv_synic_scontrol sctrl;
-	struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages);
+	struct hv_synic_pages *spages = this_cpu_ptr(synic_pages);
 	struct hv_message_page **msg_page = &spages->hyp_synic_message_page;
 	struct hv_synic_event_flags_page **event_flags_page =
 		&spages->synic_event_flags_page;
···
 	sint.masked = true;
 	hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX,
 			      sint.as_uint64);
+
+	if (mshv_sint_irq != -1)
+		disable_percpu_irq(mshv_sint_irq);
 
 	/* Disable Synic's event ring page */
 	sirbp.as_uint64 = hv_get_non_nested_msr(HV_MSR_SIRBP);
···
 		hv_call_delete_port(hv_current_partition_id, port_id);
 
 	mshv_portid_free(doorbell_portid);
+}
+
+static int mshv_synic_reboot_notify(struct notifier_block *nb,
+				    unsigned long code, void *unused)
+{
+	if (!hv_root_partition())
+		return 0;
+
+	cpuhp_remove_state(synic_cpuhp_online);
+	return 0;
+}
+
+static struct notifier_block mshv_synic_reboot_nb = {
+	.notifier_call = mshv_synic_reboot_notify,
+};
+
+#ifndef HYPERVISOR_CALLBACK_VECTOR
+static DEFINE_PER_CPU(long, mshv_evt);
+
+static irqreturn_t mshv_percpu_isr(int irq, void *dev_id)
+{
+	mshv_isr();
+	return IRQ_HANDLED;
+}
+
+#ifdef CONFIG_ACPI
+static int __init mshv_acpi_setup_sint_irq(void)
+{
+	return acpi_register_gsi(NULL, mshv_sint_vector, ACPI_EDGE_SENSITIVE,
+				 ACPI_ACTIVE_HIGH);
+}
+
+static void mshv_acpi_cleanup_sint_irq(void)
+{
+	acpi_unregister_gsi(mshv_sint_vector);
+}
+#else
+static int __init mshv_acpi_setup_sint_irq(void)
+{
+	return -ENODEV;
+}
+
+static void mshv_acpi_cleanup_sint_irq(void)
+{
+}
+#endif
+
+static int __init mshv_sint_vector_setup(void)
+{
+	int ret;
+	struct hv_register_assoc reg = {
+		.name = HV_ARM64_REGISTER_SINT_RESERVED_INTERRUPT_ID,
+	};
+	union hv_input_vtl input_vtl = { 0 };
+
+	if (acpi_disabled)
+		return -ENODEV;
+
+	ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
+				       1, input_vtl, &reg);
+	if (ret || !reg.value.reg64)
+		return -ENODEV;
+
+	mshv_sint_vector = reg.value.reg64;
+	ret = mshv_acpi_setup_sint_irq();
+	if (ret < 0) {
+		pr_err("Failed to setup IRQ for MSHV SINT vector %d: %d\n",
+		       mshv_sint_vector, ret);
+		goto out_fail;
+	}
+
+	mshv_sint_irq = ret;
+
+	ret = request_percpu_irq(mshv_sint_irq, mshv_percpu_isr, "MSHV",
+				 &mshv_evt);
+	if (ret)
+		goto out_unregister;
+
+	return 0;
+
+out_unregister:
+	mshv_acpi_cleanup_sint_irq();
+out_fail:
+	return ret;
+}
+
+static void mshv_sint_vector_cleanup(void)
+{
+	free_percpu_irq(mshv_sint_irq, &mshv_evt);
+	mshv_acpi_cleanup_sint_irq();
+}
+#else /* !HYPERVISOR_CALLBACK_VECTOR */
+static int __init mshv_sint_vector_setup(void)
+{
+	mshv_sint_vector = HYPERVISOR_CALLBACK_VECTOR;
+	return 0;
+}
+
+static void mshv_sint_vector_cleanup(void)
+{
+}
+#endif /* HYPERVISOR_CALLBACK_VECTOR */
+
+int __init mshv_synic_init(struct device *dev)
+{
+	int ret = 0;
+
+	ret = mshv_sint_vector_setup();
+	if (ret)
+		return ret;
+
+	synic_pages = alloc_percpu(struct hv_synic_pages);
+	if (!synic_pages) {
+		dev_err(dev, "Failed to allocate percpu synic page\n");
+		ret = -ENOMEM;
+		goto sint_vector_cleanup;
+	}
+
+	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic",
+				mshv_synic_cpu_init,
+				mshv_synic_cpu_exit);
+	if (ret < 0) {
+		dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret);
+		goto free_synic_pages;
+	}
+
+	synic_cpuhp_online = ret;
+
+	ret = register_reboot_notifier(&mshv_synic_reboot_nb);
+	if (ret)
+		goto remove_cpuhp_state;
+
+	return 0;
+
+remove_cpuhp_state:
+	cpuhp_remove_state(synic_cpuhp_online);
+free_synic_pages:
+	free_percpu(synic_pages);
+sint_vector_cleanup:
+	mshv_sint_vector_cleanup();
+	return ret;
+}
+
+void mshv_synic_exit(void)
+{
+	unregister_reboot_notifier(&mshv_synic_reboot_nb);
+	cpuhp_remove_state(synic_cpuhp_online);
+	free_percpu(synic_pages);
+	mshv_sint_vector_cleanup();
 }
+2-4
drivers/hwmon/Kconfig
···
 
 config SENSORS_LM75
 	tristate "National Semiconductor LM75 and compatibles"
-	depends on I2C
-	depends on I3C || !I3C
+	depends on I3C_OR_I2C
 	select REGMAP_I2C
 	select REGMAP_I3C if I3C
 	help
···
 
 config SENSORS_TMP108
 	tristate "Texas Instruments TMP108"
-	depends on I2C
-	depends on I3C || !I3C
+	depends on I3C_OR_I2C
 	select REGMAP_I2C
 	select REGMAP_I3C if I3C
 	help
+1-1
drivers/hwmon/axi-fan-control.c
···
 	ret = devm_request_threaded_irq(&pdev->dev, ctl->irq, NULL,
 					axi_fan_control_irq_handler,
 					IRQF_ONESHOT | IRQF_TRIGGER_HIGH,
-					pdev->driver_override, ctl);
+					NULL, ctl);
 	if (ret)
 		return dev_err_probe(&pdev->dev, ret,
 				     "failed to request an irq\n");
+5-5
drivers/hwmon/max6639.c
···
 static int max6639_set_ppr(struct max6639_data *data, int channel, u8 ppr)
 {
 	/* Decrement the PPR value and shift left by 6 to match the register format */
-	return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), ppr-- << 6);
+	return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), --ppr << 6);
 }
 
 static int max6639_write_fan(struct device *dev, u32 attr, int channel,
···
 
 {
 	struct device *dev = &client->dev;
-	u32 i;
-	int err, val;
+	u32 i, val;
+	int err;
 
 	err = of_property_read_u32(child, "reg", &i);
 	if (err) {
···
 
 	err = of_property_read_u32(child, "pulses-per-revolution", &val);
 	if (!err) {
-		if (val < 1 || val > 5) {
-			dev_err(dev, "invalid pulses-per-revolution %d of %pOFn\n", val, child);
+		if (val < 1 || val > 4) {
+			dev_err(dev, "invalid pulses-per-revolution %u of %pOFn\n", val, child);
 			return -EINVAL;
 		}
 		data->ppr[i] = val;
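The first hunk is a classic post- versus pre-decrement bug: `ppr--` yields the value *before* the decrement, so the old code wrote the undecremented PPR into the register and threw the decrement away. A standalone demonstration (helper names invented):

```c
/* ppr-- evaluates to the old value, so the decrement never reaches the
 * shifted result; --ppr evaluates to the decremented value, which is
 * what the register format expects. */
static unsigned int ppr_reg_postdec(unsigned char ppr)
{
	return ppr-- << 6; /* bug: encodes ppr unchanged */
}

static unsigned int ppr_reg_predec(unsigned char ppr)
{
	return --ppr << 6; /* fixed: encodes ppr - 1 */
}
```

Since `ppr` is a by-value parameter here, the discarded side effect of `ppr--` has no observable use at all, which is why compilers often warn about this pattern.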
+2
drivers/hwmon/pmbus/hac300s.c
···
 	case PMBUS_MFR_VOUT_MIN:
 	case PMBUS_READ_VOUT:
 		rv = pmbus_read_word_data(client, page, phase, reg);
+		if (rv < 0)
+			return rv;
 		return FIELD_GET(LINEAR11_MANTISSA_MASK, rv);
 	default:
 		return -ENODATA;
+2
drivers/hwmon/pmbus/ina233.c
···
 	switch (reg) {
 	case PMBUS_VIRT_READ_VMON:
 		ret = pmbus_read_word_data(client, 0, 0xff, MFR_READ_VSHUNT);
+		if (ret < 0)
+			return ret;
 
 		/* Adjust returned value to match VIN coefficients */
 		/* VIN: 1.25 mV VSHUNT: 2.5 uV LSB */
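Several of the pmbus hunks in this series add the same guard: the read helpers return a negative errno in-band, and masking or scaling that value before checking the sign silently turns an error into a bogus reading. A minimal sketch of the check-then-convert pattern (the mask and scale factors below are made up for illustration, not the INA233's coefficients):

```c
/* Reject a negative in-band errno before applying any field mask or
 * LSB scaling; otherwise e.g. -EIO would be masked into a plausible-
 * looking positive measurement. */
static int scale_checked(int raw)
{
	if (raw < 0)
		return raw;                 /* propagate the errno untouched */
	return (raw & 0x7ff) * 5 / 2;       /* hypothetical field mask + LSB conversion */
}
```

The same reasoning applies to the `FIELD_GET()` and XOR fixes in the neighboring pmbus drivers.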
···
 {
 	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
 	struct mp2869_data *data = to_mp2869_data(info);
-	int ret;
+	int ret, mfr;
 
 	switch (reg) {
 	case PMBUS_VOUT_MODE:
···
 		if (ret < 0)
 			return ret;
 
+		mfr = pmbus_read_byte_data(client, page,
+					   PMBUS_STATUS_MFR_SPECIFIC);
+		if (mfr < 0)
+			return mfr;
+
 		ret = (ret & ~GENMASK(2, 2)) |
 		      FIELD_PREP(GENMASK(2, 2),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC)));
+				 FIELD_GET(GENMASK(1, 1), mfr));
 		break;
 	case PMBUS_STATUS_TEMPERATURE:
 		/*
···
 		if (ret < 0)
 			return ret;
 
+		mfr = pmbus_read_byte_data(client, page,
+					   PMBUS_STATUS_MFR_SPECIFIC);
+		if (mfr < 0)
+			return mfr;
+
 		ret = (ret & ~GENMASK(7, 6)) |
 		      FIELD_PREP(GENMASK(6, 6),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC))) |
+				 FIELD_GET(GENMASK(1, 1), mfr)) |
 		      FIELD_PREP(GENMASK(7, 7),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC)));
+				 FIELD_GET(GENMASK(1, 1), mfr));
 		break;
 	default:
 		ret = -ENODATA;
···
 {
 	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
 	struct mp2869_data *data = to_mp2869_data(info);
-	int ret;
+	int ret, mfr;
 
 	switch (reg) {
 	case PMBUS_STATUS_WORD:
···
 		if (ret < 0)
 			return ret;
 
+		mfr = pmbus_read_byte_data(client, page,
+					   PMBUS_STATUS_MFR_SPECIFIC);
+		if (mfr < 0)
+			return mfr;
+
 		ret = (ret & ~GENMASK(2, 2)) |
 		      FIELD_PREP(GENMASK(2, 2),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC)));
+				 FIELD_GET(GENMASK(1, 1), mfr));
 		break;
 	case PMBUS_READ_VIN:
 		/*
+2
drivers/hwmon/pmbus/mp2975.c
···
 	case PMBUS_STATUS_WORD:
 		/* MP2973 & MP2971 return PGOOD instead of PB_STATUS_POWER_GOOD_N. */
 		ret = pmbus_read_word_data(client, page, phase, reg);
+		if (ret < 0)
+			return ret;
 		ret ^= PB_STATUS_POWER_GOOD_N;
 		break;
 	case PMBUS_OT_FAULT_LIMIT:
+2
drivers/i2c/busses/Kconfig
···
 	tristate "NVIDIA Tegra internal I2C controller"
 	depends on ARCH_TEGRA || (COMPILE_TEST && (ARC || ARM || ARM64 || M68K || RISCV || SUPERH || SPARC))
 	# COMPILE_TEST needs architectures with readsX()/writesX() primitives
+	depends on PINCTRL
+	# ARCH_TEGRA implies PINCTRL, but the COMPILE_TEST side doesn't.
 	help
 	  If you say yes to this option, support will be included for the
 	  I2C controller embedded in NVIDIA Tegra SOCs
+3
drivers/i2c/busses/i2c-cp2615.c
···
 	if (!adap)
 		return -ENOMEM;
 
+	if (!usbdev->serial)
+		return -EINVAL;
+
 	strscpy(adap->name, usbdev->serial, sizeof(adap->name));
 	adap->owner = THIS_MODULE;
 	adap->dev.parent = &usbif->dev;
+1
drivers/i2c/busses/i2c-fsi.c
···
 	rc = i2c_add_adapter(&port->adapter);
 	if (rc < 0) {
 		dev_err(dev, "Failed to register adapter: %d\n", rc);
+		of_node_put(np);
 		kfree(port);
 		continue;
 	}
+16-1
drivers/i2c/busses/i2c-pxa.c
···
 	struct pinctrl *pinctrl;
 	struct pinctrl_state *pinctrl_default;
 	struct pinctrl_state *pinctrl_recovery;
+	bool reset_before_xfer;
 };
 
 #define _IBMR(i2c)	((i2c)->reg_ibmr)
···
 {
 	struct pxa_i2c *i2c = adap->algo_data;
 
+	if (i2c->reset_before_xfer) {
+		i2c_pxa_reset(i2c);
+		i2c->reset_before_xfer = false;
+	}
+
 	return i2c_pxa_internal_xfer(i2c, msgs, num, i2c_pxa_do_xfer);
 }
···
 		}
 	}
 
-	i2c_pxa_reset(i2c);
+	/*
+	 * Skip reset on Armada 3700 when recovery is used to avoid
+	 * controller hang due to the pinctrl state changes done by
+	 * the generic recovery initialization code. The reset will
+	 * be performed later, prior to the first transfer.
+	 */
+	if (i2c_type == REGS_A3700 && i2c->adap.bus_recovery_info)
+		i2c->reset_before_xfer = true;
+	else
+		i2c_pxa_reset(i2c);
 
 	ret = i2c_add_numbered_adapter(&i2c->adap);
 	if (ret < 0)
+4-1
drivers/i2c/busses/i2c-tegra.c
···
 	 *
 	 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't
 	 * be used for atomic transfers. ACPI device is not IRQ safe also.
+	 *
+	 * Devices with pinctrl states cannot be marked IRQ-safe as the pinctrl
+	 * state transitions during runtime PM require mutexes.
 	 */
-	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev))
+	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev) && !i2c_dev->dev->pins)
 		pm_runtime_irq_safe(i2c_dev->dev);
 
 	pm_runtime_enable(i2c_dev->dev);
+12
drivers/i3c/Kconfig
···
 if I3C
 source "drivers/i3c/master/Kconfig"
 endif # I3C
+
+config I3C_OR_I2C
+	tristate
+	default m if I3C=m
+	default I2C
+	help
+	  Device drivers using module_i3c_i2c_driver() can use either
+	  i2c or i3c hosts, but cannot be built-in for the kernel when
+	  CONFIG_I3C=m.
+
+	  Add 'depends on I3C_OR_I2C' in Kconfig for those drivers to
+	  get the correct dependencies.
···
 	if (msleep_interruptible(1000))
 		return -EINTR;
 
-	ret = sps30_serial_command(state, SPS30_SERIAL_READ_MEAS, NULL, 0, meas, num * sizeof(num));
+	ret = sps30_serial_command(state, SPS30_SERIAL_READ_MEAS, NULL, 0, meas, num * sizeof(*meas));
 	if (ret < 0)
 		return ret;
 	/* if measurements aren't ready sensor returns empty frame */
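The bug fixed above is `sizeof(num)` (the size of the count variable) standing in for `sizeof(*meas)` (the size of one destination element); the two only coincide when the element type happens to match the count's type. A small sketch showing the difference with 16-bit elements (helper names invented):

```c
#include <stddef.h>
#include <stdint.h>

/* Wrong: measures the count variable itself, i.e. sizeof(size_t) here. */
static size_t bytes_wrong(size_t num)
{
	return num * sizeof(num);
}

/* Right: multiply the element count by the element size. */
static size_t bytes_right(size_t num, size_t elem_size)
{
	return num * elem_size;
}
```

Writing `num * sizeof(*meas)` keeps the expression correct even if the element type of `meas` changes later.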
+1-1
drivers/iio/dac/ds4424.c
···
 
 	switch (mask) {
 	case IIO_CHAN_INFO_RAW:
-		if (val < S8_MIN || val > S8_MAX)
+		if (val <= S8_MIN || val > S8_MAX)
 			return -EINVAL;
 
 		if (val > 0) {
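The changed comparison excludes `S8_MIN` itself, presumably because a negative code is later negated to an 8-bit magnitude and `-(-128)` = +128 does not fit. A sketch of the tightened range check (helper name invented, rationale as assumed above):

```c
#include <stdint.h>

/* Accept codes in (INT8_MIN, INT8_MAX]: -128 is excluded because its
 * negated magnitude (+128) overflows an 8-bit field. */
static int dac_raw_in_range(int val)
{
	return val > INT8_MIN && val <= INT8_MAX;
}
```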
···
 	}
 	case IIO_CHAN_INFO_RAW:
 		/* Resume device */
-		pm_runtime_get_sync(mpu3050->dev);
+		ret = pm_runtime_resume_and_get(mpu3050->dev);
+		if (ret)
+			return ret;
 		mutex_lock(&mpu3050->lock);
 
 		ret = mpu3050_set_8khz_samplerate(mpu3050);
···
 static int mpu3050_buffer_preenable(struct iio_dev *indio_dev)
 {
 	struct mpu3050 *mpu3050 = iio_priv(indio_dev);
+	int ret;
 
-	pm_runtime_get_sync(mpu3050->dev);
+	ret = pm_runtime_resume_and_get(mpu3050->dev);
+	if (ret)
+		return ret;
 
 	/* Unless we have OUR trigger active, run at full speed */
-	if (!mpu3050->hw_irq_trigger)
-		return mpu3050_set_8khz_samplerate(mpu3050);
+	if (!mpu3050->hw_irq_trigger) {
+		ret = mpu3050_set_8khz_samplerate(mpu3050);
+		if (ret)
+			pm_runtime_put_autosuspend(mpu3050->dev);
+	}
 
-	return 0;
+	return ret;
 }
 
 static int mpu3050_buffer_postdisable(struct iio_dev *indio_dev)
+1-2
drivers/iio/gyro/mpu3050-i2c.c
···1919 struct mpu3050 *mpu3050 = i2c_mux_priv(mux);20202121 /* Just power up the device, that is all that is needed */2222- pm_runtime_get_sync(mpu3050->dev);2323- return 0;2222+ return pm_runtime_resume_and_get(mpu3050->dev);2423}25242625static int mpu3050_i2c_bypass_deselect(struct i2c_mux_core *mux, u32 chan_id)
+1-1
drivers/iio/imu/adis.c
···526526527527 adis->spi = spi;528528 adis->data = data;529529- if (!adis->ops->write && !adis->ops->read && !adis->ops->reset)529529+ if (!adis->ops)530530 adis->ops = &adis_default_ops;531531 else if (!adis->ops->write || !adis->ops->read || !adis->ops->reset)532532 return -EINVAL;
···637637 break;638638 }639639640640- if (!open_drain)641641- val |= INV_ICM45600_INT1_CONFIG2_PUSH_PULL;640640+ if (open_drain)641641+ val |= INV_ICM45600_INT1_CONFIG2_OPEN_DRAIN;642642643643 ret = regmap_write(st->map, INV_ICM45600_REG_INT1_CONFIG2, val);644644 if (ret)···744744 */745745 fsleep(5 * USEC_PER_MSEC);746746747747+ /* set pm_runtime active early for disable vddio resource cleanup */748748+ ret = pm_runtime_set_active(dev);749749+ if (ret)750750+ return ret;751751+747752 ret = inv_icm45600_enable_regulator_vddio(st);748753 if (ret)749754 return ret;···781776 if (ret)782777 return ret;783778784784- ret = devm_pm_runtime_set_active_enabled(dev);779779+ ret = devm_pm_runtime_enable(dev);785780 if (ret)786781 return ret;787782
+8
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
···19431943 irq_type);19441944 return -EINVAL;19451945 }19461946+19471947+ /*19481948+ * Acking interrupts by status register does not work reliably19491949+ * but seems to work when this bit is set.19501950+ */19511951+ if (st->chip_type == INV_MPU9150)19521952+ st->irq_mask |= INV_MPU6050_INT_RD_CLEAR;19531953+19461954 device_set_wakeup_capable(dev, true);1947195519481956 st->vdd_supply = devm_regulator_get(dev, "vdd");
+2
drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
···390390/* enable level triggering */391391#define INV_MPU6050_LATCH_INT_EN 0x20392392#define INV_MPU6050_BIT_BYPASS_EN 0x2393393+/* allow acking interrupts by any register read */394394+#define INV_MPU6050_INT_RD_CLEAR 0x10393395394396/* Allowed timestamp period jitter in percent */395397#define INV_MPU6050_TS_PERIOD_JITTER 4
+4-1
drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
···248248 switch (st->chip_type) {249249 case INV_MPU6000:250250 case INV_MPU6050:251251- case INV_MPU9150:252251 /*253252 * WoM is not supported and interrupt status read seems to be broken for254253 * some chips. Since data ready is the only interrupt, bypass interrupt···256257 wom_bits = 0;257258 int_status = INV_MPU6050_BIT_RAW_DATA_RDY_INT;258259 goto data_ready_interrupt;260260+ case INV_MPU9150:261261+ /* IRQ needs to be acked */262262+ wom_bits = 0;263263+ break;259264 case INV_MPU6500:260265 case INV_MPU6515:261266 case INV_MPU6880:
+4-2
drivers/iio/industrialio-buffer.c
···228228 written = 0;229229 add_wait_queue(&rb->pollq, &wait);230230 do {231231- if (!indio_dev->info)232232- return -ENODEV;231231+ if (!indio_dev->info) {232232+ ret = -ENODEV;233233+ break;234234+ }233235234236 if (!iio_buffer_space_available(rb)) {235237 if (signal_pending(current)) {
+1-1
drivers/iio/light/bh1780.c
···109109 case IIO_LIGHT:110110 pm_runtime_get_sync(&bh1780->client->dev);111111 value = bh1780_read_word(bh1780, BH1780_REG_DLOW);112112+ pm_runtime_put_autosuspend(&bh1780->client->dev);112113 if (value < 0)113114 return value;114114- pm_runtime_put_autosuspend(&bh1780->client->dev);115115 *val = value;116116117117 return IIO_VAL_INT;
+1-2
drivers/iio/magnetometer/Kconfig
···143143 tristate "MEMSIC MMC5633 3-axis magnetic sensor"144144 select REGMAP_I2C145145 select REGMAP_I3C if I3C146146- depends on I2C147147- depends on I3C || !I3C146146+ depends on I3C_OR_I2C148147 help149148 Say yes here to build support for the MEMSIC MMC5633 3-axis150149 magnetic sensor.
+1-1
drivers/iio/magnetometer/tlv493d.c
···171171 switch (ch) {172172 case TLV493D_AXIS_X:173173 val = FIELD_GET(TLV493D_BX_MAG_X_AXIS_MSB, b[TLV493D_RD_REG_BX]) << 4 |174174- FIELD_GET(TLV493D_BX2_MAG_X_AXIS_LSB, b[TLV493D_RD_REG_BX2]) >> 4;174174+ FIELD_GET(TLV493D_BX2_MAG_X_AXIS_LSB, b[TLV493D_RD_REG_BX2]);175175 break;176176 case TLV493D_AXIS_Y:177177 val = FIELD_GET(TLV493D_BY_MAG_Y_AXIS_MSB, b[TLV493D_RD_REG_BY]) << 4 |
···11# SPDX-License-Identifier: GPL-2.0-only22config AMD_SBRMI_I2C33 tristate "AMD side band RMI support"44- depends on I2C44+ depends on I3C_OR_I2C55 depends on ARM || ARM64 || COMPILE_TEST66 select REGMAP_I2C77- depends on I3C || !I3C87 select REGMAP_I3C if I3C98 help109 Side band RMI over I2C/I3C support for AMD out of band management.
···45324532 * their platform code before calling sdhci_add_host(), and we45334533 * won't assume 8-bit width for hosts without that CAP.45344534 */45354535- if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA))45354535+ if (host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA) {45364536+ host->caps1 &= ~(SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50);45374537+ if (host->quirks2 & SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400)45384538+ host->caps1 &= ~SDHCI_SUPPORT_HS400;45394539+ mmc->caps2 &= ~(MMC_CAP2_HS200 | MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);45404540+ mmc->caps &= ~(MMC_CAP_DDR | MMC_CAP_UHS);45414541+ } else {45364542 mmc->caps |= MMC_CAP_4_BIT_DATA;45434543+ }4537454445384545 if (host->quirks2 & SDHCI_QUIRK2_HOST_NO_CMD23)45394546 mmc->caps &= ~MMC_CAP_CMD23;
+2-4
drivers/mtd/nand/raw/brcmnand/brcmnand.c
···23502350 for (i = 0; i < ctrl->max_oob; i += 4)23512351 oob_reg_write(ctrl, i, 0xffffffff);2352235223532353- if (mtd->oops_panic_write)23532353+ if (mtd->oops_panic_write) {23542354 /* switch to interrupt polling and PIO mode */23552355 disable_ctrl_irqs(ctrl);23562356-23572357- if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) {23562356+ } else if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) {23582357 if (ctrl->dma_trans(host, addr, (u32 *)buf, oob, mtd->writesize,23592358 CMD_PROGRAM_PAGE))23602360-23612359 ret = -EIO;2362236023632361 goto out;
+1-1
drivers/mtd/nand/raw/cadence-nand-controller.c
···31333133 sizeof(*cdns_ctrl->cdma_desc),31343134 &cdns_ctrl->dma_cdma_desc,31353135 GFP_KERNEL);31363136- if (!cdns_ctrl->dma_cdma_desc)31363136+ if (!cdns_ctrl->cdma_desc)31373137 return -ENOMEM;3138313831393139 cdns_ctrl->buf_size = SZ_16K;
+12-2
drivers/mtd/nand/raw/nand_base.c
···47374737static int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)47384738{47394739 struct nand_chip *chip = mtd_to_nand(mtd);47404740+ int ret;4740474147414742 if (!chip->ops.lock_area)47424743 return -ENOTSUPP;4743474447444744- return chip->ops.lock_area(chip, ofs, len);47454745+ nand_get_device(chip);47464746+ ret = chip->ops.lock_area(chip, ofs, len);47474747+ nand_release_device(chip);47484748+47494749+ return ret;47454750}4746475147474752/**···47584753static int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)47594754{47604755 struct nand_chip *chip = mtd_to_nand(mtd);47564756+ int ret;4761475747624758 if (!chip->ops.unlock_area)47634759 return -ENOTSUPP;4764476047654765- return chip->ops.unlock_area(chip, ofs, len);47614761+ nand_get_device(chip);47624762+ ret = chip->ops.unlock_area(chip, ofs, len);47634763+ nand_release_device(chip);47644764+47654765+ return ret;47664766}4767476747684768/* Set default functions */
···23452345}2346234623472347/**23482348- * spi_nor_spimem_check_op - check if the operation is supported23492349- * by controller23482348+ * spi_nor_spimem_check_read_pp_op - check if a read or a page program operation is23492349+ * supported by controller23502350 *@nor: pointer to a 'struct spi_nor'23512351 *@op: pointer to op template to be checked23522352 *23532353 * Returns 0 if operation is supported, -EOPNOTSUPP otherwise.23542354 */23552355-static int spi_nor_spimem_check_op(struct spi_nor *nor,23562356- struct spi_mem_op *op)23552355+static int spi_nor_spimem_check_read_pp_op(struct spi_nor *nor,23562356+ struct spi_mem_op *op)23572357{23582358 /*23592359 * First test with 4 address bytes. The opcode itself might···23962396 if (spi_nor_protocol_is_dtr(nor->read_proto))23972397 op.dummy.nbytes *= 2;2398239823992399- return spi_nor_spimem_check_op(nor, &op);23992399+ return spi_nor_spimem_check_read_pp_op(nor, &op);24002400}2401240124022402/**···2414241424152415 spi_nor_spimem_setup_op(nor, &op, pp->proto);2416241624172417- return spi_nor_spimem_check_op(nor, &op);24172417+ return spi_nor_spimem_check_read_pp_op(nor, &op);24182418}2419241924202420/**···2466246624672467 spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);2468246824692469- if (spi_nor_spimem_check_op(nor, &op))24692469+ if (!spi_mem_supports_op(nor->spimem, &op))24702470 nor->flags |= SNOR_F_NO_READ_CR;24712471 }24722472}
···88#include <linux/units.h>99#include <linux/can/dev.h>10101111-#define CAN_CALC_MAX_ERROR 50 /* in one-tenth of a percent */1111+#define CAN_CALC_MAX_ERROR 500 /* max error 5% */12121313/* CiA recommended sample points for Non Return to Zero encoding. */1414static int can_calc_sample_point_nrz(const struct can_bittiming *bt)
···980980 ret = bcm_sf2_sw_rst(priv);981981 if (ret) {982982 pr_err("%s: failed to software reset switch\n", __func__);983983+ if (!priv->wol_ports_mask)984984+ clk_disable_unprepare(priv->clk);983985 return ret;984986 }985987986988 bcm_sf2_crossbar_setup(priv);987989988990 ret = bcm_sf2_cfp_resume(ds);989989- if (ret)991991+ if (ret) {992992+ if (!priv->wol_ports_mask)993993+ clk_disable_unprepare(priv->clk);990994 return ret;991991-995995+ }992996 if (priv->hw_params.num_gphy == 1)993997 bcm_sf2_gphy_enable_set(ds, true);994998
···12711271 if (ret)12721272 goto err_napi;1273127312741274+ /* Reset the phy settings */12751275+ ret = xgbe_phy_reset(pdata);12761276+ if (ret)12771277+ goto err_irqs;12781278+12791279+ /* Start the phy */12741280 ret = phy_if->phy_start(pdata);12751281 if (ret)12761282 goto err_irqs;1277128312781284 hw_if->enable_tx(pdata);12791285 hw_if->enable_rx(pdata);12861286+ /* Synchronize flag with hardware state after enabling TX/RX.12871287+ * This prevents stale state after device restart cycles.12881288+ */12891289+ pdata->data_path_stopped = false;1280129012811291 udp_tunnel_nic_reset_ntf(netdev);12821282-12831283- /* Reset the phy settings */12841284- ret = xgbe_phy_reset(pdata);12851285- if (ret)12861286- goto err_txrx;1287129212881293 netif_tx_start_all_queues(netdev);12891294···12981293 clear_bit(XGBE_STOPPED, &pdata->dev_state);1299129413001295 return 0;13011301-13021302-err_txrx:13031303- hw_if->disable_rx(pdata);13041304- hw_if->disable_tx(pdata);1305129613061297err_irqs:13071298 xgbe_free_irqs(pdata);
+75-7
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
···19421942static void xgbe_rx_adaptation(struct xgbe_prv_data *pdata)19431943{19441944 struct xgbe_phy_data *phy_data = pdata->phy_data;19451945- unsigned int reg;19451945+ int reg;1946194619471947 /* step 2: force PCS to send RX_ADAPT Req to PHY */19481948 XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_EQ_CTRL4,···1964196419651965 /* Step 4: Check for Block lock */1966196619671967- /* Link status is latched low, so read once to clear19681968- * and then read again to get current state19671967+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19681968+ if (reg < 0)19691969+ goto set_mode;19701970+19711971+ /* Link status is latched low so that momentary link drops19721972+ * can be detected. If link was already down read again19731973+ * to get the latest state.19691974 */19701970- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19711971- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19751975+ if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) {19761976+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);19771977+ if (reg < 0)19781978+ goto set_mode;19791979+ }19801980+19721981 if (reg & MDIO_STAT1_LSTATUS) {19731982 /* If the block lock is found, update the helpers19741983 * and declare the link up···2015200620162007 /* perform rx adaptation */20172008 xgbe_rx_adaptation(pdata);20092009+}20102010+20112011+/*20122012+ * xgbe_phy_stop_data_path - Stop TX/RX to prevent packet corruption20132013+ * @pdata: driver private data20142014+ *20152015+ * This function stops the data path (TX and RX) to prevent packet20162016+ * corruption during critical PHY operations like RX adaptation.20172017+ * Must be called before initiating RX adaptation when link goes down.20182018+ */20192019+static void xgbe_phy_stop_data_path(struct xgbe_prv_data *pdata)20202020+{20212021+ if (pdata->data_path_stopped)20222022+ return;20232023+20242024+ /* Stop TX/RX to prevent packet corruption during RX adaptation */20252025+ pdata->hw_if.disable_tx(pdata);20262026+ pdata->hw_if.disable_rx(pdata);20272027+ pdata->data_path_stopped = true;20282028+20292029+ netif_dbg(pdata, link, pdata->netdev, "stopping data path for RX adaptation\n");20312031+}20322032+20332033+/*20342034+ * xgbe_phy_start_data_path - Re-enable TX/RX after RX adaptation20352035+ * @pdata: driver private data20362036+ *20372037+ * This function re-enables the data path (TX and RX) after RX adaptation20382038+ * has completed successfully. Only called when link is confirmed up.20392039+ */20402040+static void xgbe_phy_start_data_path(struct xgbe_prv_data *pdata)20412041+{20422042+ if (!pdata->data_path_stopped)20432043+ return;20442044+20452045+ pdata->hw_if.enable_rx(pdata);20462046+ pdata->hw_if.enable_tx(pdata);20472047+ pdata->data_path_stopped = false;20482048+20492049+ netif_dbg(pdata, link, pdata->netdev, "restarting data path after RX adaptation\n");20182051}2019205220202053static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)···28522801 if (pdata->en_rx_adap) {28532802 /* if the link is available and adaptation is done,28542803 * declare link up28042804+ *28052805+ * Note: When link is up and adaptation is done, we can28062806+ * safely re-enable the data path if it was stopped28072807+ * for adaptation.28552808 */28562856- if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)28092809+ if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) {28102810+ xgbe_phy_start_data_path(pdata);28572811 return 1;28122812+ }28582813 /* If either link is not available or adaptation is not done,28592814 * retrigger the adaptation logic. (if the mode is not set,28602815 * then issue mailbox command first)28612816 */28172817+28182818+ /* CRITICAL: Stop data path BEFORE triggering RX adaptation28192819+ * to prevent CRC errors from packets corrupted during28202820+ * the adaptation process. This is especially important28212821+ * when AN is OFF in 10G KR mode.28222822+ */28232823+ xgbe_phy_stop_data_path(pdata);28242824+28622825 if (pdata->mode_set) {28632826 xgbe_phy_rx_adaptation(pdata);28642827 } else {···28802815 xgbe_phy_set_mode(pdata, phy_data->cur_mode);28812816 }2882281728832883- if (pdata->rx_adapt_done)28182818+ if (pdata->rx_adapt_done) {28192819+ /* Adaptation complete, safe to re-enable data path */28202820+ xgbe_phy_start_data_path(pdata);28842821 return 1;28222822+ }28852823 } else if (reg & MDIO_STAT1_LSTATUS)28862824 return 1;28872825
+4
drivers/net/ethernet/amd/xgbe/xgbe.h
···12431243 bool en_rx_adap;12441244 int rx_adapt_retries;12451245 bool rx_adapt_done;12461246+ /* Flag to track if data path (TX/RX) was stopped for RX adaptation.12471247+ * This prevents packet corruption during the adaptation window.12481248+ */12491249+ bool data_path_stopped;12461250 bool mode_set;12471251 bool sph;12481252};
+11
drivers/net/ethernet/arc/emac_main.c
···934934 /* Set poll rate so that it polls every 1 ms */935935 arc_reg_set(priv, R_POLLRATE, clock_frequency / 1000000);936936937937+ /*938938+ * Put the device into a known quiescent state before requesting939939+ * the IRQ. Clear only EMAC interrupt status bits here; leave the940940+ * MDIO completion bit alone and avoid writing TXPL_MASK, which is941941+ * used to force TX polling rather than acknowledge interrupts.942942+ */943943+ arc_reg_set(priv, R_ENABLE, 0);944944+ arc_reg_set(priv, R_STATUS, RXINT_MASK | TXINT_MASK | ERR_MASK |945945+ TXCH_MASK | MSER_MASK | RXCR_MASK |946946+ RXFR_MASK | RXFL_MASK);947947+937948 ndev->irq = irq;938949 dev_info(dev, "IRQ is %d\n", ndev->irq);939950
+2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···29292929 u16 type = (u16)BNXT_EVENT_BUF_PRODUCER_TYPE(data1);29302930 u32 offset = BNXT_EVENT_BUF_PRODUCER_OFFSET(data2);2931293129322932+ if (type >= ARRAY_SIZE(bp->bs_trace))29332933+ goto async_event_process_exit;29322934 bnxt_bs_trace_check_wrap(&bp->bs_trace[type], offset);29332935 goto async_event_process_exit;29342936 }
···333333334334 mdio_node = of_get_child_by_name(np, "mdio");335335 if (!mdio_node)336336- return 0;336336+ return -ENODEV;337337338338 phy_node = of_get_next_child(mdio_node, NULL);339339- if (!phy_node)339339+ if (!phy_node) {340340+ err = -ENODEV;340341 goto of_put_mdio_node;342342+ }341343342344 err = of_property_read_u32(phy_node, "reg", &addr);343345 if (err)···425423426424 addr = netc_get_phy_addr(gchild);427425 if (addr < 0) {426426+ if (addr == -ENODEV)427427+ continue;428428+428429 dev_err(dev, "Failed to get PHY address\n");429430 return addr;430431 }···437432 "Find same PHY address in EMDIO and ENETC node\n");438433 return -EINVAL;439434 }440440-441441- /* The default value of LaBCR[MDIO_PHYAD_PRTAD ] is442442- * 0, so no need to set the register.443443- */444444- if (!addr)445445- continue;446435447436 switch (bus_devfn) {448437 case IMX95_ENETC0_BUS_DEVFN:···577578578579 addr = netc_get_phy_addr(np);579580 if (addr < 0) {581581+ if (addr == -ENODEV)582582+ return 0;583583+580584 dev_err(dev, "Failed to get PHY address\n");581585 return addr;582586 }583583-584584- /* The default value of LaBCR[MDIO_PHYAD_PRTAD] is 0,585585- * so no need to set the register.586586- */587587- if (!addr)588588- return 0;589587590588 if (phy_mask & BIT(addr)) {591589 dev_err(dev,
···492492{493493 struct iavf_adapter *adapter = netdev_priv(netdev);494494 u32 new_rx_count, new_tx_count;495495- int ret = 0;496495497496 if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))498497 return -EINVAL;···536537 }537538538539 if (netif_running(netdev)) {539539- iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);540540- ret = iavf_wait_for_reset(adapter);541541- if (ret)542542- netdev_warn(netdev, "Changing ring parameters timeout or interrupted waiting for reset");540540+ adapter->flags |= IAVF_FLAG_RESET_NEEDED;541541+ iavf_reset_step(adapter);543542 }544543545545- return ret;544544+ return 0;546545}547546548547/**···17201723{17211724 struct iavf_adapter *adapter = netdev_priv(netdev);17221725 u32 num_req = ch->combined_count;17231723- int ret = 0;1724172617251727 if ((adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) &&17261728 adapter->num_tc) {···1741174517421746 adapter->num_req_queues = num_req;17431747 adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED;17441744- iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);17481748+ adapter->flags |= IAVF_FLAG_RESET_NEEDED;17491749+ iavf_reset_step(adapter);1745175017461746- ret = iavf_wait_for_reset(adapter);17471747- if (ret)17481748- netdev_warn(netdev, "Changing channel count timeout or interrupted waiting for reset");17491749-17501750- return ret;17511751+ return 0;17511752}1752175317531754/**
+34-56
drivers/net/ethernet/intel/iavf/iavf_main.c
···186186}187187188188/**189189- * iavf_wait_for_reset - Wait for reset to finish.190190- * @adapter: board private structure191191- *192192- * Returns 0 if reset finished successfully, negative on timeout or interrupt.193193- */194194-int iavf_wait_for_reset(struct iavf_adapter *adapter)195195-{196196- int ret = wait_event_interruptible_timeout(adapter->reset_waitqueue,197197- !iavf_is_reset_in_progress(adapter),198198- msecs_to_jiffies(5000));199199-200200- /* If ret < 0 then it means wait was interrupted.201201- * If ret == 0 then it means we got a timeout while waiting202202- * for reset to finish.203203- * If ret > 0 it means reset has finished.204204- */205205- if (ret > 0)206206- return 0;207207- else if (ret < 0)208208- return -EINTR;209209- else210210- return -EBUSY;211211-}212212-213213-/**214189 * iavf_allocate_dma_mem_d - OS specific memory alloc for shared code215190 * @hw: pointer to the HW structure216191 * @mem: ptr to mem struct to fill out···757782 adapter->num_vlan_filters++;758783 iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);759784 } else if (f->state == IAVF_VLAN_REMOVE) {760760- /* IAVF_VLAN_REMOVE means that VLAN wasn't yet removed.761761- * We can safely only change the state here.785785+ /* Re-add the filter since we cannot tell whether the786786+ * pending delete has already been processed by the PF.787787+ * A duplicate add is harmless.762788 */763763- f->state = IAVF_VLAN_ACTIVE;789789+ f->state = IAVF_VLAN_ADD;790790+ iavf_schedule_aq_request(adapter,791791+ IAVF_FLAG_AQ_ADD_VLAN_FILTER);764792 }765793766794clearout:···3014303630153037 adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;3016303830393039+ iavf_ptp_release(adapter);30403040+30173041 /* We don't use netif_running() because it may be true prior to30183042 * ndo_open() returning, so we can't assume it means all our open30193043 * tasks have finished, since we're not holding the rtnl_lock here.···30913111}3092311230933113/**30943094- * iavf_reset_task - Call-back task to handle hardware reset30953095- * @work: pointer to work_struct31143114+ * iavf_reset_step - Perform the VF reset sequence31153115+ * @adapter: board private structure30963116 *30973097- * During reset we need to shut down and reinitialize the admin queue30983098- * before we can use it to communicate with the PF again. We also clear30993099- * and reinit the rings because that context is lost as well.31003100- **/31013101-static void iavf_reset_task(struct work_struct *work)31173117+ * Requests a reset from PF, polls for completion, and reconfigures31183118+ * the driver. Caller must hold the netdev instance lock.31193119+ *31203120+ * This can sleep for several seconds while polling HW registers.31213121+ */31223122+void iavf_reset_step(struct iavf_adapter *adapter)31023123{31033103- struct iavf_adapter *adapter = container_of(work,31043104- struct iavf_adapter,31053105- reset_task);31063124 struct virtchnl_vf_resource *vfres = adapter->vf_res;31073125 struct net_device *netdev = adapter->netdev;31083126 struct iavf_hw *hw = &adapter->hw;···31113133 int i = 0, err;31123134 bool running;3113313531143114- netdev_lock(netdev);31363136+ netdev_assert_locked(netdev);3115313731163138 iavf_misc_irq_disable(adapter);31173139 if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {···31563178 dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",31573179 reg_val);31583180 iavf_disable_vf(adapter);31593159- netdev_unlock(netdev);31603181 return; /* Do not attempt to reinit. It's dead, Jim. */31613182 }31623183···31673190 iavf_startup(adapter);31683191 queue_delayed_work(adapter->wq, &adapter->watchdog_task,31693192 msecs_to_jiffies(30));31703170- netdev_unlock(netdev);31713193 return;31723194 }31733195···3186321031873211 iavf_change_state(adapter, __IAVF_RESETTING);31883212 adapter->flags &= ~IAVF_FLAG_RESET_PENDING;32133213+32143214+ iavf_ptp_release(adapter);3189321531903216 /* free the Tx/Rx rings and descriptors, might be better to just31913217 * re-use them sometime in the future···3309333133103332 adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;3311333333123312- wake_up(&adapter->reset_waitqueue);33133313- netdev_unlock(netdev);33143314-33153334 return;33163335reset_err:33173336 if (running) {···33173342 }33183343 iavf_disable_vf(adapter);3319334433203320- netdev_unlock(netdev);33213345 dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");33463346+}33473347+33483348+static void iavf_reset_task(struct work_struct *work)33493349+{33503350+ struct iavf_adapter *adapter = container_of(work,33513351+ struct iavf_adapter,33523352+ reset_task);33533353+ struct net_device *netdev = adapter->netdev;33543354+33553355+ netdev_lock(netdev);33563356+ iavf_reset_step(adapter);33573357+ netdev_unlock(netdev);33223358}3323335933243360/**···45974611static int iavf_change_mtu(struct net_device *netdev, int new_mtu)45984612{45994613 struct iavf_adapter *adapter = netdev_priv(netdev);46004600- int ret = 0;4601461446024615 netdev_dbg(netdev, "changing MTU from %d to %d\n",46034616 netdev->mtu, new_mtu);46044617 WRITE_ONCE(netdev->mtu, new_mtu);4605461846064619 if (netif_running(netdev)) {46074607- iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);46084608- ret = iavf_wait_for_reset(adapter);46094609- if (ret < 0)46104610- netdev_warn(netdev, "MTU change interrupted waiting for reset");46114611- else if (ret)46124612- netdev_warn(netdev, "MTU change timed out waiting for reset");46204620+ adapter->flags |= IAVF_FLAG_RESET_NEEDED;46214621+ iavf_reset_step(adapter);46134622 }4614462346154615- return ret;46244624+ return 0;46164625}4617462646184627/**···5411543054125431 /* Setup the wait queue for indicating transition to down status */54135432 init_waitqueue_head(&adapter->down_waitqueue);54145414-54155415- /* Setup the wait queue for indicating transition to running state */54165416- init_waitqueue_head(&adapter->reset_waitqueue);5417543354185434 /* Setup the wait queue for indicating virtchannel events */54195435 init_waitqueue_head(&adapter->vc_waitqueue);
-1
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
···27362736 case VIRTCHNL_OP_ENABLE_QUEUES:27372737 /* enable transmits */27382738 iavf_irq_enable(adapter, true);27392739- wake_up(&adapter->reset_waitqueue);27402739 adapter->flags &= ~IAVF_FLAG_QUEUES_DISABLED;27412740 break;27422741 case VIRTCHNL_OP_DISABLE_QUEUES:
···264264 /* reset next_to_use and next_to_clean */265265 tx_ring->next_to_use = 0;266266 tx_ring->next_to_clean = 0;267267+268268+ /* Clear any lingering XSK TX timestamp requests */269269+ if (test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags)) {270270+ struct igc_adapter *adapter = netdev_priv(tx_ring->netdev);271271+272272+ igc_ptp_clear_xsk_tx_tstamp_queue(adapter, tx_ring->queue_index);273273+ }267274}268275269276/**···17371730 /* The minimum packet size with TCTL.PSP set is 17 so pad the skb17381731 * in order to meet this minimum size requirement.17391732 */17401740- if (skb->len < 17) {17411741- if (skb_padto(skb, 17))17421742- return NETDEV_TX_OK;17431743- skb->len = 17;17441744- }17331733+ if (skb_put_padto(skb, 17))17341734+ return NETDEV_TX_OK;1745173517461736 return igc_xmit_frame_ring(skb, igc_tx_queue_mapping(adapter, skb));17471737}
+33
drivers/net/ethernet/intel/igc/igc_ptp.c
···577577 spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);578578}579579580580+/**581581+ * igc_ptp_clear_xsk_tx_tstamp_queue - Clear pending XSK TX timestamps for a queue582582+ * @adapter: Board private structure583583+ * @queue_id: TX queue index to clear timestamps for584584+ *585585+ * Iterates over all TX timestamp registers and releases any pending586586+ * timestamp requests associated with the given TX queue. This is587587+ * called when an XDP pool is being disabled to ensure no stale588588+ * timestamp references remain.589589+ */590590+void igc_ptp_clear_xsk_tx_tstamp_queue(struct igc_adapter *adapter, u16 queue_id)591591+{592592+ unsigned long flags;593593+ int i;594594+595595+ spin_lock_irqsave(&adapter->ptp_tx_lock, flags);596596+597597+ for (i = 0; i < IGC_MAX_TX_TSTAMP_REGS; i++) {598598+ struct igc_tx_timestamp_request *tstamp = &adapter->tx_tstamp[i];599599+600600+ if (tstamp->buffer_type != IGC_TX_BUFFER_TYPE_XSK)601601+ continue;602602+ if (tstamp->xsk_queue_index != queue_id)603603+ continue;604604+ if (!tstamp->xsk_tx_buffer)605605+ continue;606606+607607+ igc_ptp_free_tx_buffer(adapter, tstamp);608608+ }609609+610610+ spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);611611+}612612+580613static void igc_ptp_disable_tx_timestamp(struct igc_adapter *adapter)581614{582615 struct igc_hw *hw = &adapter->hw;
+36-13
drivers/net/ethernet/intel/libie/fwlog.c
···433433 module = libie_find_module_by_dentry(fwlog->debugfs_modules, dentry);434434 if (module < 0) {435435 dev_info(dev, "unknown module\n");436436- return -EINVAL;436436+ count = -EINVAL;437437+ goto free_cmd_buf;437438 }438439439440 cnt = sscanf(cmd_buf, "%s", user_val);440440- if (cnt != 1)441441- return -EINVAL;441441+ if (cnt != 1) {442442+ count = -EINVAL;443443+ goto free_cmd_buf;444444+ }442445443446 log_level = sysfs_match_string(libie_fwlog_level_string, user_val);444447 if (log_level < 0) {445448 dev_info(dev, "unknown log level '%s'\n", user_val);446446- return -EINVAL;449449+ count = -EINVAL;450450+ goto free_cmd_buf;447451 }448452449453 if (module != LIBIE_AQC_FW_LOG_ID_MAX) {···461457 for (i = 0; i < LIBIE_AQC_FW_LOG_ID_MAX; i++)462458 fwlog->cfg.module_entries[i].log_level = log_level;463459 }460460+461461+free_cmd_buf:462462+ kfree(cmd_buf);464463465464 return count;466465}···522515 return PTR_ERR(cmd_buf);523516524517 ret = sscanf(cmd_buf, "%s", user_val);525525- if (ret != 1)526526- return -EINVAL;518518+ if (ret != 1) {519519+ count = -EINVAL;520520+ goto free_cmd_buf;521521+ }527522528523 ret = kstrtos16(user_val, 0, &nr_messages);529529- if (ret)530530- return ret;524524+ if (ret) {525525+ count = ret;526526+ goto free_cmd_buf;527527+ }531528532529 if (nr_messages < LIBIE_AQC_FW_LOG_MIN_RESOLUTION ||533530 nr_messages > LIBIE_AQC_FW_LOG_MAX_RESOLUTION) {534531 dev_err(dev, "Invalid FW log number of messages %d, value must be between %d - %d\n",535532 nr_messages, LIBIE_AQC_FW_LOG_MIN_RESOLUTION,536533 LIBIE_AQC_FW_LOG_MAX_RESOLUTION);537537- return -EINVAL;534534+ count = -EINVAL;535535+ goto free_cmd_buf;538536 }539537540538 fwlog->cfg.log_resolution = nr_messages;539539+540540+free_cmd_buf:541541+ kfree(cmd_buf);541542542543 return count;543544}···603588 return PTR_ERR(cmd_buf);604589605590 ret = sscanf(cmd_buf, "%s", user_val);606606- if (ret != 1)607607- return -EINVAL;591591+ if (ret != 1) {592592+ ret = -EINVAL;593593+ goto free_cmd_buf;594594+ }608595609596 ret = kstrtobool(user_val, &enable);610597 if (ret)···641624 */642625 if (WARN_ON(ret != (ssize_t)count && ret >= 0))643626 ret = -EIO;627627+free_cmd_buf:628628+ kfree(cmd_buf);644629645630 return ret;646631}···701682 return PTR_ERR(cmd_buf);702683703684 ret = sscanf(cmd_buf, "%s", user_val);704704- if (ret != 1)705705- return -EINVAL;685685+ if (ret != 1) {686686+ ret = -EINVAL;687687+ goto free_cmd_buf;688688+ }706689707690 index = sysfs_match_string(libie_fwlog_log_size, user_val);708691 if (index < 0) {···733712 */734713 if (WARN_ON(ret != (ssize_t)count && ret >= 0))735714 ret = -EIO;715715+free_cmd_buf:716716+ kfree(cmd_buf);736717737718 return ret;738719}
+2-2
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
···50165016 if (priv->percpu_pools)50175017 numbufs = port->nrxqs * 2;5018501850195019- if (change_percpu)50195019+ if (change_percpu && priv->global_tx_fc)50205020 mvpp2_bm_pool_update_priv_fc(priv, false);5021502150225022 for (i = 0; i < numbufs; i++)···50415041 mvpp2_open(port->dev);50425042 }5043504350445044- if (change_percpu)50445044+ if (change_percpu && priv->global_tx_fc)50455045 mvpp2_bm_pool_update_priv_fc(priv, true);5046504650475047 return 0;
···109109 int ret;110110111111 ret = __dev_forward_skb(rx_dev, skb);112112- if (ret)112112+ if (ret) {113113+ if (psp_ext)114114+ __skb_ext_put(psp_ext);113115 return ret;116116+ }114117115118 nsim_psp_handle_ext(skb, psp_ext);116119
+7-1
drivers/net/phy/sfp.c
···367367 sfp->state_ignore_mask |= SFP_F_TX_FAULT;368368}369369370370+static void sfp_fixup_ignore_tx_fault_and_los(struct sfp *sfp)371371+{372372+ sfp_fixup_ignore_tx_fault(sfp);373373+ sfp_fixup_ignore_los(sfp);374374+}375375+370376static void sfp_fixup_ignore_hw(struct sfp *sfp, unsigned int mask)371377{372378 sfp->state_hw_mask &= ~mask;···536530 // Huawei MA5671A can operate at 2500base-X, but report 1.2GBd NRZ in537531 // their EEPROM538532 SFP_QUIRK("HUAWEI", "MA5671A", sfp_quirk_2500basex,539539- sfp_fixup_ignore_tx_fault),533533+ sfp_fixup_ignore_tx_fault_and_los),540534541535 // Lantech 8330-262D-E and 8330-265D can operate at 2500base-X, but542536 // incorrectly report 2500MBd NRZ in their EEPROM.
···894894 dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);895895 bar->submap = old_submap;896896 bar->num_submap = old_nsub;897897+ ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);898898+ if (ret)899899+ dev_warn(&epf->dev, "Failed to restore the original BAR mapping: %d\n",900900+ ret);901901+897902 kfree(submap);898903 goto err;899904 }
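The pci-epf hunk above adds a best-effort rollback: when programming the new BAR submap fails, the old software state is restored and `pci_epc_set_bar()` is retried with it, with only a warning if the restore itself fails. A hedged userspace sketch of that rollback shape (the `bar_cfg`/`set_bar`/`update_submap` names are made up; this is not the PCI endpoint API):

```c
#include <assert.h>
#include <stdio.h>

struct bar_cfg { int submap; int num_submap; };

static int apply_calls, fail_on_call;

/* Pretend hardware programming: fails on the chosen call number. */
static int set_bar(struct bar_cfg *bar)
{
	apply_calls++;
	return (apply_calls == fail_on_call) ? -5 /* -EIO */ : 0;
}

/* Try the new mapping; on failure, restore the old software state and
 * reprogram the hardware with it, reporting (not failing on) a restore
 * error, then return the original error to the caller. */
static int update_submap(struct bar_cfg *bar, int new_submap, int new_nsub)
{
	struct bar_cfg old = *bar;
	int ret;

	bar->submap = new_submap;
	bar->num_submap = new_nsub;
	ret = set_bar(bar);
	if (ret) {
		*bar = old;
		if (set_bar(bar))
			fprintf(stderr, "failed to restore original mapping\n");
		return ret;
	}
	return 0;
}
```

The design choice mirrors the hunk: the caller always sees the original failure code, and the device is left in a known state whenever the restore succeeds.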
+41-13
drivers/pci/pwrctrl/core.c
···268268}269269EXPORT_SYMBOL_GPL(pci_pwrctrl_power_on_devices);270270271271+/*272272+ * Check whether the pwrctrl device really needs to be created or not. The273273+ * pwrctrl device will only be created if the node satisfies below requirements:274274+ *275275+ * 1. Presence of compatible property with "pci" prefix to match against the276276+ * pwrctrl driver (AND)277277+ * 2. At least one of the power supplies defined in the devicetree node of the278278+ * device (OR) in the remote endpoint parent node to indicate pwrctrl279279+ * requirement.280280+ */281281+static bool pci_pwrctrl_is_required(struct device_node *np)282282+{283283+ struct device_node *endpoint;284284+ const char *compat;285285+ int ret;286286+287287+ ret = of_property_read_string(np, "compatible", &compat);288288+ if (ret < 0)289289+ return false;290290+291291+ if (!strstarts(compat, "pci"))292292+ return false;293293+294294+ if (of_pci_supply_present(np))295295+ return true;296296+297297+ if (of_graph_is_present(np)) {298298+ for_each_endpoint_of_node(np, endpoint) {299299+ struct device_node *remote __free(device_node) =300300+ of_graph_get_remote_port_parent(endpoint);301301+ if (remote) {302302+ if (of_pci_supply_present(remote))303303+ return true;304304+ }305305+ }306306+ }307307+308308+ return false;309309+}310310+271311static int pci_pwrctrl_create_device(struct device_node *np,272312 struct device *parent)273313{···327287 return 0;328288 }329289330330- /*331331- * Sanity check to make sure that the node has the compatible property332332- * to allow driver binding.333333- */334334- if (!of_property_present(np, "compatible"))335335- return 0;336336-337337- /*338338- * Check whether the pwrctrl device really needs to be created or not.339339- * This is decided based on at least one of the power supplies defined340340- * in the devicetree node of the device or the graph property.341341- */342342- if (!of_pci_supply_present(np) && !of_graph_is_present(np)) {290290+ if 
(!pci_pwrctrl_is_required(np)) {343291 dev_dbg(parent, "Skipping OF node: %s\n", np->name);344292 return 0;345293 }
···617617618618 err = of_reserved_mem_region_to_resource(np, i++, &res);619619 if (err)620620- return 0;620620+ break;621621622622 /*623623 * Ignore the first memory region which will be used for vdev buffer.
+39
drivers/remoteproc/mtk_scp.c
···15921592};15931593MODULE_DEVICE_TABLE(of, mtk_scp_of_match);1594159415951595+static int __maybe_unused scp_suspend(struct device *dev)15961596+{15971597+ struct mtk_scp *scp = dev_get_drvdata(dev);15981598+ struct rproc *rproc = scp->rproc;15991599+16001600+ /*16011601+ * Only unprepare if the SCP is running and holding the clock.16021602+ *16031603+ * Note: `scp_ops` doesn't implement .attach() callback, hence16041604+ * `rproc->state` can never be RPROC_ATTACHED. Otherwise, it16051605+ * should also be checked here.16061606+ */16071607+ if (rproc->state == RPROC_RUNNING)16081608+ clk_unprepare(scp->clk);16091609+ return 0;16101610+}16111611+16121612+static int __maybe_unused scp_resume(struct device *dev)16131613+{16141614+ struct mtk_scp *scp = dev_get_drvdata(dev);16151615+ struct rproc *rproc = scp->rproc;16161616+16171617+ /*16181618+ * Only prepare if the SCP was running and holding the clock.16191619+ *16201620+ * Note: `scp_ops` doesn't implement .attach() callback, hence16211621+ * `rproc->state` can never be RPROC_ATTACHED. Otherwise, it16221622+ * should also be checked here.16231623+ */16241624+ if (rproc->state == RPROC_RUNNING)16251625+ return clk_prepare(scp->clk);16261626+ return 0;16271627+}16281628+16291629+static const struct dev_pm_ops scp_pm_ops = {16301630+ SET_SYSTEM_SLEEP_PM_OPS(scp_suspend, scp_resume)16311631+};16321632+15951633static struct platform_driver mtk_scp_driver = {15961634 .probe = scp_probe,15971635 .remove = scp_remove,15981636 .driver = {15991637 .name = "mtk-scp",16001638 .of_match_table = mtk_scp_of_match,16391639+ .pm = &scp_pm_ops,16011640 },16021641};16031642
···136136{137137 u32 val = power_on ? 0 : 1;138138139139+ if (!pwrrdy)140140+ return 0;141141+139142 /* The initialization path guarantees that the mask is 1 bit long. */140143 return regmap_field_update_bits(pwrrdy, 1, val);141144}
+16
drivers/s390/block/dasd_eckd.c
···61356135static int dasd_eckd_copy_pair_swap(struct dasd_device *device, char *prim_busid,61366136 char *sec_busid)61376137{61386138+ struct dasd_eckd_private *prim_priv, *sec_priv;61386139 struct dasd_device *primary, *secondary;61396140 struct dasd_copy_relation *copy;61406141 struct dasd_block *block;···61556154 secondary = copy_relation_find_device(copy, sec_busid);61566155 if (!secondary)61576156 return DASD_COPYPAIRSWAP_SECONDARY;61576157+61586158+ prim_priv = primary->private;61596159+ sec_priv = secondary->private;6158616061596161 /*61606162 * usually the device should be quiesced for swap···61856181 dev_name(&primary->cdev->dev),61866182 dev_name(&secondary->cdev->dev), rc);61876183 }61846184+61856185+ if (primary->stopped & DASD_STOPPED_QUIESCE) {61866186+ dasd_device_set_stop_bits(secondary, DASD_STOPPED_QUIESCE);61876187+ dasd_device_remove_stop_bits(primary, DASD_STOPPED_QUIESCE);61886188+ }61896189+61906190+ /*61916191+ * The secondary device never got through format detection, but since it61926192+ * is a copy of the primary device, the format is exactly the same;61936193+ * therefore, the detected layout can simply be copied.61946194+ */61956195+ sec_priv->uses_cdl = prim_priv->uses_cdl;6188619661896197 /* re-enable device */61906198 dasd_device_remove_stop_bits(primary, DASD_STOPPED_PPRC);
+7-5
drivers/s390/crypto/zcrypt_ccamisc.c
···1639163916401640 memset(ci, 0, sizeof(*ci));1641164116421642- /* get first info from zcrypt device driver about this apqn */16431643- rc = zcrypt_device_status_ext(cardnr, domain, &devstat);16441644- if (rc)16451645- return rc;16461646- ci->hwtype = devstat.hwtype;16421642+ /* if specific domain given, fetch status and hw info for this apqn */16431643+ if (domain != AUTOSEL_DOM) {16441644+ rc = zcrypt_device_status_ext(cardnr, domain, &devstat);16451645+ if (rc)16461646+ return rc;16471647+ ci->hwtype = devstat.hwtype;16481648+ }1647164916481650 /*16491651 * Prep memory for rule array and var array use.
···360360 * default device queue depth to figure out sbitmap shift361361 * since we use this queue depth most of times.362362 */363363- if (scsi_realloc_sdev_budget_map(sdev, depth)) {364364- kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags);365365- put_device(&starget->dev);366366- kfree(sdev);367367- goto out;368368- }363363+ if (scsi_realloc_sdev_budget_map(sdev, depth))364364+ goto out_device_destroy;369365370366 scsi_change_queue_depth(sdev, depth);371367
+2-4
drivers/slimbus/qcom-ngd-ctrl.c
···15351535 ngd->id = id;15361536 ngd->pdev->dev.parent = parent;1537153715381538- ret = driver_set_override(&ngd->pdev->dev,15391539- &ngd->pdev->driver_override,15401540- QCOM_SLIM_NGD_DRV_NAME,15411541- strlen(QCOM_SLIM_NGD_DRV_NAME));15381538+ ret = device_set_driver_override(&ngd->pdev->dev,15391539+ QCOM_SLIM_NGD_DRV_NAME);15421540 if (ret) {15431541 platform_device_put(ngd->pdev);15441542 kfree(ngd);
+22-2
drivers/soc/fsl/qbman/qman.c
···1827182718281828void qman_destroy_fq(struct qman_fq *fq)18291829{18301830+ int leaked;18311831+18301832 /*18311833 * We don't need to lock the FQ as it is a pre-condition that the FQ be18321834 * quiesced. Instead, run some checks.···18361834 switch (fq->state) {18371835 case qman_fq_state_parked:18381836 case qman_fq_state_oos:18391839- if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))18401840- qman_release_fqid(fq->fqid);18371837+ /*18381838+ * There's a race condition here on releasing the fqid,18391839+ * setting the fq_table to NULL, and freeing the fqid.18401840+ * To prevent it, this order should be respected:18411841+ */18421842+ if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID)) {18431843+ leaked = qman_shutdown_fq(fq->fqid);18441844+ if (leaked)18451845+ pr_debug("FQID %d leaked\n", fq->fqid);18461846+ }1841184718421848 DPAA_ASSERT(fq_table[fq->idx]);18431849 fq_table[fq->idx] = NULL;18501850+18511851+ if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID) && !leaked) {18521852+ /*18531853+ * fq_table[fq->idx] should be set to NULL before18541854+ * freeing fq->fqid, otherwise it could be allocated by18551855+ * qman_alloc_fqid() while still being !NULL18561856+ */18571857+ smp_wmb();18581858+ gen_pool_free(qm_fqalloc, fq->fqid | DPAA_GENALLOC_OFF, 1);18591859+ }18441860 return;18451861 default:18461862 break;
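The qman hunk above clears `fq_table[fq->idx]`, issues `smp_wmb()`, and only then returns the FQID to the pool, so a concurrent `qman_alloc_fqid()` can never hand out an ID whose table slot still looks occupied. A loose single-threaded userspace analogue of that publish-before-free ordering, using a C11 release fence in place of `smp_wmb()` (the `fq_table`/`alloc_id` names here are stand-ins, not the driver's genalloc pool):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define NIDS 4

static void *fq_table[NIDS];   /* id -> object, NULL when free       */
static _Bool id_in_use[NIDS];  /* trivial stand-in for the gen_pool  */

static int alloc_id(void)
{
	for (int i = 0; i < NIDS; i++)
		if (!id_in_use[i]) {
			id_in_use[i] = 1;
			return i;
		}
	return -1;
}

/* Publish "slot empty" before the id becomes allocatable again.
 * If the two stores were reordered, another CPU could alloc_id()
 * and observe a stale non-NULL fq_table entry for its fresh id. */
static void destroy_fq(int id)
{
	fq_table[id] = NULL;
	atomic_thread_fence(memory_order_release);  /* smp_wmb() analogue */
	id_in_use[id] = 0;
}
```

The sketch only demonstrates the ordering discipline; the real fix additionally shuts the FQ down first and skips the free entirely when the FQID leaked.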
···142142143143 sys_controller->flash = of_get_mtd_device_by_node(np);144144 of_node_put(np);145145- if (IS_ERR(sys_controller->flash))146146- return dev_err_probe(dev, PTR_ERR(sys_controller->flash), "Failed to get flash\n");145145+ if (IS_ERR(sys_controller->flash)) {146146+ ret = dev_err_probe(dev, PTR_ERR(sys_controller->flash), "Failed to get flash\n");147147+ goto out_free;148148+ }147149148150no_flash:149151 sys_controller->client.dev = dev;···157155 if (IS_ERR(sys_controller->chan)) {158156 ret = dev_err_probe(dev, PTR_ERR(sys_controller->chan),159157 "Failed to get mbox channel\n");160160- kfree(sys_controller);161161- return ret;158158+ goto out_free;162159 }163160164161 init_completion(&sys_controller->c);···175174 dev_info(&pdev->dev, "Registered MPFS system controller\n");176175177176 return 0;177177+178178+out_free:179179+ kfree(sys_controller);180180+ return ret;178181}179182180183static void mpfs_sys_controller_remove(struct platform_device *pdev)
+1
drivers/soc/rockchip/grf.c
···231231 grf = syscon_node_to_regmap(np);232232 if (IS_ERR(grf)) {233233 pr_err("%s: could not get grf syscon\n", __func__);234234+ of_node_put(np);234235 return PTR_ERR(grf);235236 }236237
+9-42
drivers/spi/spi-amlogic-spifc-a4.c
···411411 ret = dma_mapping_error(sfc->dev, sfc->daddr);412412 if (ret) {413413 dev_err(sfc->dev, "DMA mapping error\n");414414- goto out_map_data;414414+ return ret;415415 }416416417417 cmd = CMD_DATA_ADDRL(sfc->daddr);···429429 ret = dma_mapping_error(sfc->dev, sfc->iaddr);430430 if (ret) {431431 dev_err(sfc->dev, "DMA mapping error\n");432432- dma_unmap_single(sfc->dev, sfc->daddr, datalen, dir);433432 goto out_map_data;434433 }435434···447448 return 0;448449449450out_map_info:450450- dma_unmap_single(sfc->dev, sfc->iaddr, datalen, dir);451451+ dma_unmap_single(sfc->dev, sfc->iaddr, infolen, dir);451452out_map_data:452453 dma_unmap_single(sfc->dev, sfc->daddr, datalen, dir);453454···10831084 return clk_set_rate(sfc->core_clk, SFC_BUS_DEFAULT_CLK);10841085}1085108610861086-static int aml_sfc_disable_clk(struct aml_sfc *sfc)10871087-{10881088- clk_disable_unprepare(sfc->core_clk);10891089- clk_disable_unprepare(sfc->gate_clk);10901090-10911091- return 0;10921092-}10931093-10941087static int aml_sfc_probe(struct platform_device *pdev)10951088{10961089 struct device_node *np = pdev->dev.of_node;···1133114211341143 /* Enable Amlogic flash controller spi mode */11351144 ret = regmap_write(sfc->regmap_base, SFC_SPI_CFG, SPI_MODE_EN);11361136- if (ret) {11371137- dev_err(dev, "failed to enable SPI mode\n");11381138- goto err_out;11391139- }11451145+ if (ret)11461146+ return dev_err_probe(dev, ret, "failed to enable SPI mode\n");1140114711411148 ret = dma_set_mask(sfc->dev, DMA_BIT_MASK(32));11421142- if (ret) {11431143- dev_err(sfc->dev, "failed to set dma mask\n");11441144- goto err_out;11451145- }11491149+ if (ret)11501150+ return dev_err_probe(sfc->dev, ret, "failed to set dma mask\n");1146115111471152 sfc->ecc_eng.dev = &pdev->dev;11481153 sfc->ecc_eng.integration = NAND_ECC_ENGINE_INTEGRATION_PIPELINED;···11461159 sfc->ecc_eng.priv = sfc;1147116011481161 ret = nand_ecc_register_on_host_hw_engine(&sfc->ecc_eng);11491149- if (ret) {11501150- dev_err(&pdev->dev, 
"failed to register Aml host ecc engine.\n");11511151- goto err_out;11521152- }11621162+ if (ret)11631163+ return dev_err_probe(&pdev->dev, ret, "failed to register Aml host ecc engine.\n");1153116411541165 ret = of_property_read_u32(np, "amlogic,rx-adj", &val);11551166 if (!ret)···11631178 ctrl->min_speed_hz = SFC_MIN_FREQUENCY;11641179 ctrl->num_chipselect = SFC_MAX_CS_NUM;1165118011661166- ret = devm_spi_register_controller(dev, ctrl);11671167- if (ret)11681168- goto err_out;11691169-11701170- return 0;11711171-11721172-err_out:11731173- aml_sfc_disable_clk(sfc);11741174-11751175- return ret;11761176-}11771177-11781178-static void aml_sfc_remove(struct platform_device *pdev)11791179-{11801180- struct spi_controller *ctlr = platform_get_drvdata(pdev);11811181- struct aml_sfc *sfc = spi_controller_get_devdata(ctlr);11821182-11831183- aml_sfc_disable_clk(sfc);11811181+ return devm_spi_register_controller(dev, ctrl);11841182}1185118311861184static const struct of_device_id aml_sfc_of_match[] = {···11811213 .of_match_table = aml_sfc_of_match,11821214 },11831215 .probe = aml_sfc_probe,11841184- .remove = aml_sfc_remove,11851216};11861217module_platform_driver(aml_sfc_driver);11871218
+4-8
drivers/spi/spi-amlogic-spisg.c
···729729 };730730731731 if (of_property_read_bool(dev->of_node, "spi-slave"))732732- ctlr = spi_alloc_target(dev, sizeof(*spisg));732732+ ctlr = devm_spi_alloc_target(dev, sizeof(*spisg));733733 else734734- ctlr = spi_alloc_host(dev, sizeof(*spisg));734734+ ctlr = devm_spi_alloc_host(dev, sizeof(*spisg));735735 if (!ctlr)736736 return -ENOMEM;737737···750750 return dev_err_probe(dev, PTR_ERR(spisg->map), "regmap init failed\n");751751752752 irq = platform_get_irq(pdev, 0);753753- if (irq < 0) {754754- ret = irq;755755- goto out_controller;756756- }753753+ if (irq < 0)754754+ return irq;757755758756 ret = device_reset_optional(dev);759757 if (ret)···815817 if (spisg->core)816818 clk_disable_unprepare(spisg->core);817819 clk_disable_unprepare(spisg->pclk);818818-out_controller:819819- spi_controller_put(ctlr);820820821821 return ret;822822}
+16-22
drivers/spi/spi-atcspi200.c
···195195 if (op->addr.buswidth > 1)196196 tc |= TRANS_ADDR_FMT;197197 if (op->data.nbytes) {198198- tc |= TRANS_DUAL_QUAD(ffs(op->data.buswidth) - 1);198198+ unsigned int width_code;199199+200200+ width_code = ffs(op->data.buswidth) - 1;201201+ if (unlikely(width_code > 3)) {202202+ WARN_ON_ONCE(1);203203+ width_code = 0;204204+ }205205+ tc |= TRANS_DUAL_QUAD(width_code);206206+199207 if (op->data.dir == SPI_MEM_DATA_IN) {200208 if (op->dummy.nbytes)201209 tc |= TRANS_MODE_DMY_READ |···505497506498static int atcspi_configure_dma(struct atcspi_dev *spi)507499{508508- struct dma_chan *dma_chan;509509- int ret = 0;500500+ spi->host->dma_rx = devm_dma_request_chan(spi->dev, "rx");501501+ if (IS_ERR(spi->host->dma_rx))502502+ return PTR_ERR(spi->host->dma_rx);510503511511- dma_chan = devm_dma_request_chan(spi->dev, "rx");512512- if (IS_ERR(dma_chan)) {513513- ret = PTR_ERR(dma_chan);514514- goto err_exit;515515- }516516- spi->host->dma_rx = dma_chan;504504+ spi->host->dma_tx = devm_dma_request_chan(spi->dev, "tx");505505+ if (IS_ERR(spi->host->dma_tx))506506+ return PTR_ERR(spi->host->dma_tx);517507518518- dma_chan = devm_dma_request_chan(spi->dev, "tx");519519- if (IS_ERR(dma_chan)) {520520- ret = PTR_ERR(dma_chan);521521- goto free_rx;522522- }523523- spi->host->dma_tx = dma_chan;524508 init_completion(&spi->dma_completion);525509526526- return ret;527527-528528-free_rx:529529- dma_release_channel(spi->host->dma_rx);530530- spi->host->dma_rx = NULL;531531-err_exit:532532- return ret;510510+ return 0;533511}534512535513static int atcspi_enable_clk(struct atcspi_dev *spi)
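The atcspi200 change above guards the `ffs(op->data.buswidth) - 1` encoding so that a width whose code would overflow the 2-bit TRANS_DUAL_QUAD field falls back to 0 with a one-time warning. A standalone sketch of that encoding and clamp (`encode_width` is an illustrative name, assuming as the driver does that only codes 0 through 3 are valid):

```c
#include <assert.h>
#include <strings.h>   /* ffs() */

/* buswidth 1 -> 0 (single), 2 -> 1 (dual), 4 -> 2 (quad), 8 -> 3 (octal);
 * anything else overflows the 2-bit field and falls back to single-wire.
 * Note ffs(0) == 0, so code underflows to UINT_MAX and is clamped too. */
static unsigned int encode_width(unsigned int buswidth)
{
	unsigned int code = ffs(buswidth) - 1;

	return (code > 3) ? 0 : code;
}
```

The driver version additionally fires `WARN_ON_ONCE()` on the clamped path so a bogus buswidth is visible in the log instead of silently programming a wrong transfer format.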
+11-20
drivers/spi/spi-axiado.c
···765765 platform_set_drvdata(pdev, ctlr);766766767767 xspi->regs = devm_platform_ioremap_resource(pdev, 0);768768- if (IS_ERR(xspi->regs)) {769769- ret = PTR_ERR(xspi->regs);770770- goto remove_ctlr;771771- }768768+ if (IS_ERR(xspi->regs))769769+ return PTR_ERR(xspi->regs);772770773771 xspi->pclk = devm_clk_get(&pdev->dev, "pclk");774774- if (IS_ERR(xspi->pclk)) {775775- dev_err(&pdev->dev, "pclk clock not found.\n");776776- ret = PTR_ERR(xspi->pclk);777777- goto remove_ctlr;778778- }772772+ if (IS_ERR(xspi->pclk))773773+ return dev_err_probe(&pdev->dev, PTR_ERR(xspi->pclk),774774+ "pclk clock not found.\n");779775780776 xspi->ref_clk = devm_clk_get(&pdev->dev, "ref");781781- if (IS_ERR(xspi->ref_clk)) {782782- dev_err(&pdev->dev, "ref clock not found.\n");783783- ret = PTR_ERR(xspi->ref_clk);784784- goto remove_ctlr;785785- }777777+ if (IS_ERR(xspi->ref_clk))778778+ return dev_err_probe(&pdev->dev, PTR_ERR(xspi->ref_clk),779779+ "ref clock not found.\n");786780787781 ret = clk_prepare_enable(xspi->pclk);788788- if (ret) {789789- dev_err(&pdev->dev, "Unable to enable APB clock.\n");790790- goto remove_ctlr;791791- }782782+ if (ret)783783+ return dev_err_probe(&pdev->dev, ret, "Unable to enable APB clock.\n");792784793785 ret = clk_prepare_enable(xspi->ref_clk);794786 if (ret) {···861869 clk_disable_unprepare(xspi->ref_clk);862870clk_dis_apb:863871 clk_disable_unprepare(xspi->pclk);864864-remove_ctlr:865865- spi_controller_put(ctlr);872872+866873 return ret;867874}868875
···711711 }712712 }713713714714- ret = devm_spi_register_controller(dev, host);714714+ ret = spi_register_controller(host);715715 if (ret)716716 goto err_register;717717
+12-13
drivers/spi/spi.c
···30493049 struct spi_controller *ctlr;3050305030513051 ctlr = container_of(dev, struct spi_controller, dev);30523052+30533053+ free_percpu(ctlr->pcpu_statistics);30523054 kfree(ctlr);30533055}30543056···31933191 ctlr = kzalloc(size + ctlr_size, GFP_KERNEL);31943192 if (!ctlr)31953193 return NULL;31943194+31953195+ ctlr->pcpu_statistics = spi_alloc_pcpu_stats(NULL);31963196+ if (!ctlr->pcpu_statistics) {31973197+ kfree(ctlr);31983198+ return NULL;31993199+ }3196320031973201 device_initialize(&ctlr->dev);31983202 INIT_LIST_HEAD(&ctlr->queue);···34883480 dev_info(dev, "controller is unqueued, this is deprecated\n");34893481 } else if (ctlr->transfer_one || ctlr->transfer_one_message) {34903482 status = spi_controller_initialize_queue(ctlr);34913491- if (status) {34923492- device_del(&ctlr->dev);34933493- goto free_bus_id;34943494- }34953495- }34963496- /* Add statistics */34973497- ctlr->pcpu_statistics = spi_alloc_pcpu_stats(dev);34983498- if (!ctlr->pcpu_statistics) {34993499- dev_err(dev, "Error allocating per-cpu statistics\n");35003500- status = -ENOMEM;35013501- goto destroy_queue;34833483+ if (status)34843484+ goto del_ctrl;35023485 }3503348635043487 mutex_lock(&board_lock);···35033504 acpi_register_spi_devices(ctlr);35043505 return status;3505350635063506-destroy_queue:35073507- spi_destroy_queue(ctlr);35073507+del_ctrl:35083508+ device_del(&ctlr->dev);35083509free_bus_id:35093510 mutex_lock(&board_lock);35103511 idr_remove(&spi_controller_idr, ctlr->bus_num);
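The spi.c hunk above moves the per-cpu statistics allocation into `__spi_alloc_controller()` and the matching `free_percpu()` into the device release callback, pairing allocation and teardown at the same ownership level instead of allocating late in `spi_register_controller()`. A minimal userspace sketch of that constructor/destructor pairing (the `ctlr`/`stats` names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct ctlr {
	int *stats;    /* stands in for the per-cpu statistics */
};

/* Allocate the object and everything it owns in one place; a partial
 * failure unwinds fully, so callers see all-or-nothing. */
static struct ctlr *ctlr_alloc(void)
{
	struct ctlr *c = calloc(1, sizeof(*c));

	if (!c)
		return NULL;
	c->stats = calloc(1, sizeof(*c->stats));
	if (!c->stats) {
		free(c);
		return NULL;
	}
	return c;
}

/* The release side mirrors the constructor, so no caller has to
 * remember which members were allocated where. */
static void ctlr_release(struct ctlr *c)
{
	free(c->stats);
	free(c);
}
```

This is why the register-time `destroy_queue`/stats error path could be deleted: once the constructor owns the allocation, registration only has the queue and `device_del()` left to unwind.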
···36363737 pr_info("mmio phyAddr = %lx\n", sm750_dev->vidreg_start);38383939- /*4040- * reserve the vidreg space of smi adaptor4141- * if you do this, you need to add release region code4242- * in lynxfb_remove, or memory will not be mapped again4343- * successfully4444- */3939+ /* reserve the vidreg space of smi adaptor */4540 ret = pci_request_region(pdev, 1, "sm750fb");4641 if (ret) {4742 pr_err("Can not request PCI regions.\n");4848- goto exit;4343+ return ret;4944 }50455146 /* now map mmio and vidmem */···4954 if (!sm750_dev->pvReg) {5055 pr_err("mmio failed\n");5156 ret = -EFAULT;5252- goto exit;5757+ goto err_release_region;5358 }5459 pr_info("mmio virtual addr = %p\n", sm750_dev->pvReg);5560···7479 sm750_dev->pvMem =7580 ioremap_wc(sm750_dev->vidmem_start, sm750_dev->vidmem_size);7681 if (!sm750_dev->pvMem) {7777- iounmap(sm750_dev->pvReg);7882 pr_err("Map video memory failed\n");7983 ret = -EFAULT;8080- goto exit;8484+ goto err_unmap_reg;8185 }8286 pr_info("video memory vaddr = %p\n", sm750_dev->pvMem);8383-exit:8787+8888+ return 0;8989+9090+err_unmap_reg:9191+ iounmap(sm750_dev->pvReg);9292+err_release_region:9393+ pci_release_region(pdev, 1);8494 return ret;8595}8696
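The sm750fb hunk above replaces a catch-all `exit` label with ordered `err_unmap_reg` / `err_release_region` labels, so each failure releases exactly the resources acquired before it, in reverse order. A self-contained sketch of that layered-unwind shape (resource names and the `map_hw` function are made up for illustration):

```c
#include <assert.h>

static int region_held, reg_mapped, mem_mapped;
static int fail_step;   /* 0 = succeed; 1/2/3 = fail at that step */

static int acquire(int step, int *flag)
{
	if (step == fail_step)
		return -1;
	*flag = 1;
	return 0;
}

/* Acquire three resources; each error label undoes exactly the
 * resources taken before the failing step, in reverse order. */
static int map_hw(void)
{
	if (acquire(1, &region_held))          /* pci_request_region()-like */
		return -1;
	if (acquire(2, &reg_mapped))           /* ioremap() of registers    */
		goto err_release_region;
	if (acquire(3, &mem_mapped))           /* ioremap_wc() of vidmem    */
		goto err_unmap_reg;
	return 0;

err_unmap_reg:
	reg_mapped = 0;
err_release_region:
	region_held = 0;
	return -1;
}
```

Falling through from `err_unmap_reg` into `err_release_region` is the point of the ordering: later labels release earlier resources, and a new step only needs one new label.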
-27
drivers/tee/tee_shm.c
···2323 struct page *page;2424};25252626-static void shm_put_kernel_pages(struct page **pages, size_t page_count)2727-{2828- size_t n;2929-3030- for (n = 0; n < page_count; n++)3131- put_page(pages[n]);3232-}3333-3434-static void shm_get_kernel_pages(struct page **pages, size_t page_count)3535-{3636- size_t n;3737-3838- for (n = 0; n < page_count; n++)3939- get_page(pages[n]);4040-}4141-4226static void release_registered_pages(struct tee_shm *shm)4327{4428 if (shm->pages) {4529 if (shm->flags & TEE_SHM_USER_MAPPED)4630 unpin_user_pages(shm->pages, shm->num_pages);4747- else4848- shm_put_kernel_pages(shm->pages, shm->num_pages);49315032 kfree(shm->pages);5133 }···459477 goto err_put_shm_pages;460478 }461479462462- /*463463- * iov_iter_extract_kvec_pages does not get reference on the pages,464464- * get a reference on them.465465- */466466- if (iov_iter_is_kvec(iter))467467- shm_get_kernel_pages(shm->pages, num_pages);468468-469480 shm->offset = off;470481 shm->size = len;471482 shm->num_pages = num_pages;···474499err_put_shm_pages:475500 if (!iov_iter_is_kvec(iter))476501 unpin_user_pages(shm->pages, shm->num_pages);477477- else478478- shm_put_kernel_pages(shm->pages, shm->num_pages);479502err_free_shm_pages:480503 kfree(shm->pages);481504err_free_shm:
···162162 */163163 dma->tx_size = 0;164164165165+ /*166166+ * We can't use `dmaengine_terminate_sync` because `uart_flush_buffer` is167167+ * holding the uart port spinlock.168168+ */165169 dmaengine_terminate_async(dma->txchan);170170+171171+ /*172172+ * The callback might or might not run. If it doesn't run, we need to ensure173173+ * that `tx_running` is cleared so that we can schedule new transactions.174174+ * If it does run, then the zombie callback will clear `tx_running` again175175+ * and perform a no-op since `tx_size` was cleared above.176176+ *177177+ * In either case, we ASSUME the DMA transaction will terminate before we178178+ * issue a new `serial8250_tx_dma`.179179+ */180180+ dma->tx_running = 0;166181}167182168183int serial8250_rx_dma(struct uart_8250_port *p)
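The 8250 DMA hunk above cannot wait for the terminate to finish under the port spinlock, so it zeroes `tx_size` first (turning any late "zombie" callback into a no-op) and then clears `tx_running` itself in case the callback never fires. A very loose userspace model of that bookkeeping, not the dmaengine API (all names here mirror the driver's fields but the functions are stand-ins):

```c
#include <assert.h>

static unsigned int tx_size;
static int tx_running;

static void start_tx(unsigned int len)
{
	tx_size = len;
	tx_running = 1;
}

/* Completion callback: may fire after the flush, or not at all.
 * With tx_size already zeroed it has nothing left to account for. */
static void tx_complete(void)
{
	tx_running = 0;
}

/* Flush while a lock is held: neutralize a late callback via
 * tx_size = 0, request the (asynchronous) cancel, and clear
 * tx_running ourselves so new transmissions are not blocked. */
static void flush_tx(void)
{
	tx_size = 0;
	/* dmaengine_terminate_async() would be issued here */
	tx_running = 0;
}
```

The model captures only the flag discipline; the real code additionally assumes, as its comment says, that the cancelled descriptor terminates before the next `serial8250_tx_dma()` is issued.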
+239-65
drivers/tty/serial/8250/8250_dw.c
···99 * LCR is written whilst busy. If it is, then a busy detect interrupt is1010 * raised, the LCR needs to be rewritten and the uart status register read.1111 */1212+#include <linux/bitfield.h>1313+#include <linux/bits.h>1414+#include <linux/cleanup.h>1215#include <linux/clk.h>1316#include <linux/delay.h>1417#include <linux/device.h>1518#include <linux/io.h>1919+#include <linux/lockdep.h>1620#include <linux/mod_devicetable.h>1721#include <linux/module.h>1822#include <linux/notifier.h>···4440#define RZN1_UART_RDMACR 0x110 /* DMA Control Register Receive Mode */45414642/* DesignWare specific register fields */4343+#define DW_UART_IIR_IID GENMASK(3, 0)4444+4745#define DW_UART_MCR_SIRE BIT(6)4646+4747+#define DW_UART_USR_BUSY BIT(0)48484949/* Renesas specific register fields */5050#define RZN1_UART_xDMACR_DMA_EN BIT(0)···6456#define DW_UART_QUIRK_IS_DMA_FC BIT(3)6557#define DW_UART_QUIRK_APMC0D08 BIT(4)6658#define DW_UART_QUIRK_CPR_VALUE BIT(5)5959+#define DW_UART_QUIRK_IER_KICK BIT(6)6060+6161+/*6262+ * Number of consecutive IIR_NO_INT interrupts required to trigger interrupt6363+ * storm prevention code.6464+ */6565+#define DW_UART_QUIRK_IER_KICK_THRES 467666867struct dw8250_platform_data {6968 u8 usr_reg;···92779378 unsigned int skip_autocfg:1;9479 unsigned int uart_16550_compatible:1;8080+ unsigned int in_idle:1;8181+8282+ u8 no_int_count;9583};96849785static inline struct dw8250_data *to_dw8250_data(struct dw8250_port_data *data)···125107 return value;126108}127109128128-/*129129- * This function is being called as part of the uart_port::serial_out()130130- * routine. Hence, it must not call serial_port_out() or serial_out()131131- * against the modified registers here, i.e. 
LCR.132132- */133133-static void dw8250_force_idle(struct uart_port *p)110110+static void dw8250_idle_exit(struct uart_port *p)134111{112112+ struct dw8250_data *d = to_dw8250_data(p->private_data);135113 struct uart_8250_port *up = up_to_u8250p(p);136136- unsigned int lsr;137114138138- /*139139- * The following call currently performs serial_out()140140- * against the FCR register. Because it differs to LCR141141- * there will be no infinite loop, but if it ever gets142142- * modified, we might need a new custom version of it143143- * that avoids infinite recursion.144144- */145145- serial8250_clear_and_reinit_fifos(up);115115+ if (d->uart_16550_compatible)116116+ return;146117147147- /*148148- * With PSLVERR_RESP_EN parameter set to 1, the device generates an149149- * error response when an attempt to read an empty RBR with FIFO150150- * enabled.151151- */152152- if (up->fcr & UART_FCR_ENABLE_FIFO) {153153- lsr = serial_port_in(p, UART_LSR);154154- if (!(lsr & UART_LSR_DR))155155- return;118118+ if (up->capabilities & UART_CAP_FIFO)119119+ serial_port_out(p, UART_FCR, up->fcr);120120+ serial_port_out(p, UART_MCR, up->mcr);121121+ serial_port_out(p, UART_IER, up->ier);122122+123123+ /* DMA Rx is restarted by IRQ handler as needed. */124124+ if (up->dma)125125+ serial8250_tx_dma_resume(up);126126+127127+ d->in_idle = 0;128128+}129129+130130+/*131131+ * Ensure BUSY is not asserted. If DW UART is configured with132132+ * !uart_16550_compatible, the writes to LCR, DLL, and DLH fail while133133+ * BUSY is asserted.134134+ *135135+ * Context: port's lock must be held136136+ */137137+static int dw8250_idle_enter(struct uart_port *p)138138+{139139+ struct dw8250_data *d = to_dw8250_data(p->private_data);140140+ unsigned int usr_reg = d->pdata ? 
d->pdata->usr_reg : DW_UART_USR;141141+ struct uart_8250_port *up = up_to_u8250p(p);142142+ int retries;143143+ u32 lsr;144144+145145+ lockdep_assert_held_once(&p->lock);146146+147147+ if (d->uart_16550_compatible)148148+ return 0;149149+150150+ d->in_idle = 1;151151+152152+ /* Prevent triggering interrupt from RBR filling */153153+ serial_port_out(p, UART_IER, 0);154154+155155+ if (up->dma) {156156+ serial8250_rx_dma_flush(up);157157+ if (serial8250_tx_dma_running(up))158158+ serial8250_tx_dma_pause(up);156159 }157160158158- serial_port_in(p, UART_RX);161161+ /*162162+ * Wait until Tx becomes empty + one extra frame time to ensure all bits163163+ * have been sent on the wire.164164+ *165165+ * FIXME: frame_time delay is too long with very low baudrates.166166+ */167167+ serial8250_fifo_wait_for_lsr_thre(up, p->fifosize);168168+ ndelay(p->frame_time);169169+170170+ serial_port_out(p, UART_MCR, up->mcr | UART_MCR_LOOP);171171+172172+ retries = 4; /* Arbitrary limit, 2 was always enough in tests */173173+ do {174174+ serial8250_clear_fifos(up);175175+ if (!(serial_port_in(p, usr_reg) & DW_UART_USR_BUSY))176176+ break;177177+ /* FIXME: frame_time delay is too long with very low baudrates. */178178+ ndelay(p->frame_time);179179+ } while (--retries);180180+181181+ lsr = serial_lsr_in(up);182182+ if (lsr & UART_LSR_DR) {183183+ serial_port_in(p, UART_RX);184184+ up->lsr_saved_flags = 0;185185+ }186186+187187+ /* Now guaranteed to have BUSY deasserted? 
Just sanity check */188188+ if (serial_port_in(p, usr_reg) & DW_UART_USR_BUSY) {189189+ dw8250_idle_exit(p);190190+ return -EBUSY;191191+ }192192+193193+ return 0;194194+}195195+196196+static void dw8250_set_divisor(struct uart_port *p, unsigned int baud,197197+ unsigned int quot, unsigned int quot_frac)198198+{199199+ struct uart_8250_port *up = up_to_u8250p(p);200200+ int ret;201201+202202+ ret = dw8250_idle_enter(p);203203+ if (ret < 0)204204+ return;205205+206206+ serial_port_out(p, UART_LCR, up->lcr | UART_LCR_DLAB);207207+ if (!(serial_port_in(p, UART_LCR) & UART_LCR_DLAB))208208+ goto idle_failed;209209+210210+ serial_dl_write(up, quot);211211+ serial_port_out(p, UART_LCR, up->lcr);212212+213213+idle_failed:214214+ dw8250_idle_exit(p);159215}160216161217/*162218 * This function is being called as part of the uart_port::serial_out()163163- * routine. Hence, it must not call serial_port_out() or serial_out()164164- * against the modified registers here, i.e. LCR.219219+ * routine. Hence, special care must be taken when serial_port_out() or220220+ * serial_out() against the modified registers here, i.e. 
LCR (d->in_idle is221221+ * used to break recursion loop).165222 */166223static void dw8250_check_lcr(struct uart_port *p, unsigned int offset, u32 value)167224{168225 struct dw8250_data *d = to_dw8250_data(p->private_data);169169- void __iomem *addr = p->membase + (offset << p->regshift);170170- int tries = 1000;226226+ u32 lcr;227227+ int ret;171228172229 if (offset != UART_LCR || d->uart_16550_compatible)173230 return;174231232232+ lcr = serial_port_in(p, UART_LCR);233233+175234 /* Make sure LCR write wasn't ignored */176176- while (tries--) {177177- u32 lcr = serial_port_in(p, offset);235235+ if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))236236+ return;178237179179- if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))180180- return;238238+ if (d->in_idle)239239+ goto write_err;181240182182- dw8250_force_idle(p);241241+ ret = dw8250_idle_enter(p);242242+ if (ret < 0)243243+ goto write_err;183244184184-#ifdef CONFIG_64BIT185185- if (p->type == PORT_OCTEON)186186- __raw_writeq(value & 0xff, addr);187187- else188188-#endif189189- if (p->iotype == UPIO_MEM32)190190- writel(value, addr);191191- else if (p->iotype == UPIO_MEM32BE)192192- iowrite32be(value, addr);193193- else194194- writeb(value, addr);195195- }245245+ serial_port_out(p, UART_LCR, value);246246+ dw8250_idle_exit(p);247247+ return;248248+249249+write_err:196250 /*197251 * FIXME: this deadlocks if port->lock is already held198252 * dev_err(p->dev, "Couldn't set LCR to %d\n", value);199253 */254254+ return; /* Silences "label at the end of compound statement" */255255+}256256+257257+/*258258+ * With BUSY, LCR writes can be very expensive (IRQ + complex retry logic).259259+ * If the write does not change the value of the LCR register, skip it entirely.260260+ */261261+static bool dw8250_can_skip_reg_write(struct uart_port *p, unsigned int offset, u32 value)262262+{263263+ struct dw8250_data *d = to_dw8250_data(p->private_data);264264+ u32 lcr;265265+266266+ if (offset != UART_LCR || 
d->uart_16550_compatible)267267+ return false;268268+269269+ lcr = serial_port_in(p, offset);270270+ return lcr == value;200271}201272202273/* Returns once the transmitter is empty or we run out of retries */···314207315208static void dw8250_serial_out(struct uart_port *p, unsigned int offset, u32 value)316209{210210+ if (dw8250_can_skip_reg_write(p, offset, value))211211+ return;212212+317213 writeb(value, p->membase + (offset << p->regshift));318214 dw8250_check_lcr(p, offset, value);319215}320216321217static void dw8250_serial_out38x(struct uart_port *p, unsigned int offset, u32 value)322218{219219+ if (dw8250_can_skip_reg_write(p, offset, value))220220+ return;221221+323222 /* Allow the TX to drain before we reconfigure */324223 if (offset == UART_LCR)325224 dw8250_tx_wait_empty(p);···350237351238static void dw8250_serial_outq(struct uart_port *p, unsigned int offset, u32 value)352239{240240+ if (dw8250_can_skip_reg_write(p, offset, value))241241+ return;242242+353243 value &= 0xff;354244 __raw_writeq(value, p->membase + (offset << p->regshift));355245 /* Read back to ensure register write ordering. */···364248365249static void dw8250_serial_out32(struct uart_port *p, unsigned int offset, u32 value)366250{251251+ if (dw8250_can_skip_reg_write(p, offset, value))252252+ return;253253+367254 writel(value, p->membase + (offset << p->regshift));368255 dw8250_check_lcr(p, offset, value);369256}···380261381262static void dw8250_serial_out32be(struct uart_port *p, unsigned int offset, u32 value)382263{264264+ if (dw8250_can_skip_reg_write(p, offset, value))265265+ return;266266+383267 iowrite32be(value, p->membase + (offset << p->regshift));384268 dw8250_check_lcr(p, offset, value);385269}···394272 return dw8250_modify_msr(p, offset, value);395273}396274275275+/*276276+ * INTC10EE UART can IRQ storm while reporting IIR_NO_INT. 
Inducing IIR value277277+ * change has been observed to break the storm.278278+ *279279+ * If Tx is empty (THRE asserted), we use IER_THRI here to force an IIR_NO_INT ->280280+ * IIR_THRI transition.281281+ */282282+static void dw8250_quirk_ier_kick(struct uart_port *p)283283+{284284+ struct uart_8250_port *up = up_to_u8250p(p);285285+ u32 lsr;286286+287287+ if (up->ier & UART_IER_THRI)288288+ return;289289+290290+ lsr = serial_lsr_in(up);291291+ if (!(lsr & UART_LSR_THRE))292292+ return;293293+294294+ serial_port_out(p, UART_IER, up->ier | UART_IER_THRI);295295+ serial_port_in(p, UART_LCR); /* safe, no side-effects */296296+ serial_port_out(p, UART_IER, up->ier);297297+}397298398299static int dw8250_handle_irq(struct uart_port *p)399300{···426281 bool rx_timeout = (iir & 0x3f) == UART_IIR_RX_TIMEOUT;427282 unsigned int quirks = d->pdata->quirks;428283 unsigned int status;429429- unsigned long flags;284284+285285+ guard(uart_port_lock_irqsave)(p);286286+287287+ switch (FIELD_GET(DW_UART_IIR_IID, iir)) {288288+ case UART_IIR_NO_INT:289289+ if (d->uart_16550_compatible || up->dma)290290+ return 0;291291+292292+ if (quirks & DW_UART_QUIRK_IER_KICK &&293293+ d->no_int_count == (DW_UART_QUIRK_IER_KICK_THRES - 1))294294+ dw8250_quirk_ier_kick(p);295295+ d->no_int_count = (d->no_int_count + 1) % DW_UART_QUIRK_IER_KICK_THRES;296296+297297+ return 0;298298+299299+ case UART_IIR_BUSY:300300+ /* Clear the USR */301301+ serial_port_in(p, d->pdata->usr_reg);302302+303303+ d->no_int_count = 0;304304+305305+ return 1;306306+ }307307+308308+ d->no_int_count = 0;430309431310 /*432311 * There are ways to get Designware-based UARTs into a state where···463294 * so we limit the workaround only to non-DMA mode.464295 */465296 if (!up->dma && rx_timeout) {466466- uart_port_lock_irqsave(p, &flags);467297 status = serial_lsr_in(up);468298469299 if (!(status & (UART_LSR_DR | UART_LSR_BI)))470300 serial_port_in(p, UART_RX);471471-472472- uart_port_unlock_irqrestore(p, flags);473301 
}474302475303 /* Manually stop the Rx DMA transfer when acting as flow controller */476304 if (quirks & DW_UART_QUIRK_IS_DMA_FC && up->dma && up->dma->rx_running && rx_timeout) {477477- uart_port_lock_irqsave(p, &flags);478305 status = serial_lsr_in(up);479479- uart_port_unlock_irqrestore(p, flags);480306481307 if (status & (UART_LSR_DR | UART_LSR_BI)) {482308 dw8250_writel_ext(p, RZN1_UART_RDMACR, 0);···479315 }480316 }481317482482- if (serial8250_handle_irq(p, iir))483483- return 1;318318+ serial8250_handle_irq_locked(p, iir);484319485485- if ((iir & UART_IIR_BUSY) == UART_IIR_BUSY) {486486- /* Clear the USR */487487- serial_port_in(p, d->pdata->usr_reg);488488-489489- return 1;490490- }491491-492492- return 0;320320+ return 1;493321}494322495323static void dw8250_clk_work_cb(struct work_struct *work)···683527 reset_control_assert(data);684528}685529530530+static void dw8250_shutdown(struct uart_port *port)531531+{532532+ struct dw8250_data *d = to_dw8250_data(port->private_data);533533+534534+ serial8250_do_shutdown(port);535535+ d->no_int_count = 0;536536+}537537+686538static int dw8250_probe(struct platform_device *pdev)687539{688540 struct uart_8250_port uart = {}, *up = &uart;···709545 p->type = PORT_8250;710546 p->flags = UPF_FIXED_PORT;711547 p->dev = dev;548548+712549 p->set_ldisc = dw8250_set_ldisc;713550 p->set_termios = dw8250_set_termios;551551+ p->set_divisor = dw8250_set_divisor;714552715553 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);716554 if (!data)···820654 dw8250_quirks(p, data);821655822656 /* If the Busy Functionality is not implemented, don't handle it */823823- if (data->uart_16550_compatible)657657+ if (data->uart_16550_compatible) {824658 p->handle_irq = NULL;825825- else if (data->pdata)659659+ } else if (data->pdata) {826660 p->handle_irq = dw8250_handle_irq;661661+ p->shutdown = dw8250_shutdown;662662+ }827663828664 dw8250_setup_dma_filter(p, data);829665···957789 .quirks = 
DW_UART_QUIRK_SKIP_SET_RATE,958790};959791792792+static const struct dw8250_platform_data dw8250_intc10ee = {793793+ .usr_reg = DW_UART_USR,794794+ .quirks = DW_UART_QUIRK_IER_KICK,795795+};796796+960797static const struct of_device_id dw8250_of_match[] = {961798 { .compatible = "snps,dw-apb-uart", .data = &dw8250_dw_apb },962799 { .compatible = "cavium,octeon-3860-uart", .data = &dw8250_octeon_3860_data },···991818 { "INT33C5", (kernel_ulong_t)&dw8250_dw_apb },992819 { "INT3434", (kernel_ulong_t)&dw8250_dw_apb },993820 { "INT3435", (kernel_ulong_t)&dw8250_dw_apb },994994- { "INTC10EE", (kernel_ulong_t)&dw8250_dw_apb },821821+ { "INTC10EE", (kernel_ulong_t)&dw8250_intc10ee },995822 { },996823};997824MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);···10098361010837module_platform_driver(dw8250_platform_driver);1011838839839+MODULE_IMPORT_NS("SERIAL_8250");1012840MODULE_AUTHOR("Jamie Iles");1013841MODULE_LICENSE("GPL");1014842MODULE_DESCRIPTION("Synopsys DesignWare 8250 serial port driver");
···1818#include <linux/irq.h>1919#include <linux/console.h>2020#include <linux/gpio/consumer.h>2121+#include <linux/lockdep.h>2122#include <linux/sysrq.h>2223#include <linux/delay.h>2324#include <linux/platform_device.h>···489488/*490489 * FIFO support.491490 */492492-static void serial8250_clear_fifos(struct uart_8250_port *p)491491+void serial8250_clear_fifos(struct uart_8250_port *p)493492{494493 if (p->capabilities & UART_CAP_FIFO) {495494 serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);···498497 serial_out(p, UART_FCR, 0);499498 }500499}500500+EXPORT_SYMBOL_NS_GPL(serial8250_clear_fifos, "SERIAL_8250");501501502502static enum hrtimer_restart serial8250_em485_handle_start_tx(struct hrtimer *t);503503static enum hrtimer_restart serial8250_em485_handle_stop_tx(struct hrtimer *t);···17841782}1785178317861784/*17871787- * This handles the interrupt from one port.17851785+ * Context: port's lock must be held by the caller.17881786 */17891789-int serial8250_handle_irq(struct uart_port *port, unsigned int iir)17871787+void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir)17901788{17911789 struct uart_8250_port *up = up_to_u8250p(port);17921790 struct tty_port *tport = &port->state->port;17931791 bool skip_rx = false;17941794- unsigned long flags;17951792 u16 status;1796179317971797- if (iir & UART_IIR_NO_INT)17981798- return 0;17991799-18001800- uart_port_lock_irqsave(port, &flags);17941794+ lockdep_assert_held_once(&port->lock);1801179518021796 status = serial_lsr_in(up);18031797···18261828 else if (!up->dma->tx_running)18271829 __stop_tx(up);18281830 }18311831+}18321832+EXPORT_SYMBOL_NS_GPL(serial8250_handle_irq_locked, "SERIAL_8250");1829183318301830- uart_unlock_and_check_sysrq_irqrestore(port, flags);18341834+/*18351835+ * This handles the interrupt from one port.18361836+ */18371837+int serial8250_handle_irq(struct uart_port *port, unsigned int iir)18381838+{18391839+ if (iir & UART_IIR_NO_INT)18401840+ return 0;18411841+18421842+ 
guard(uart_port_lock_irqsave)(port);18431843+ serial8250_handle_irq_locked(port, iir);1831184418321845 return 1;18331846}···21562147 if (up->port.flags & UPF_NO_THRE_TEST)21572148 return;2158214921592159- if (port->irqflags & IRQF_SHARED)21602160- disable_irq_nosync(port->irq);21502150+ disable_irq(port->irq);2161215121622152 /*21632153 * Test for UARTs that do not reassert THRE when the transmitter is idle and the interrupt···21782170 serial_port_out(port, UART_IER, 0);21792171 }2180217221812181- if (port->irqflags & IRQF_SHARED)21822182- enable_irq(port->irq);21732173+ enable_irq(port->irq);2183217421842175 /*21852176 * If the interrupt is not reasserted, or we otherwise don't trust the iir, setup a timer to···23572350void serial8250_do_shutdown(struct uart_port *port)23582351{23592352 struct uart_8250_port *up = up_to_u8250p(port);23532353+ u32 lcr;2360235423612355 serial8250_rpm_get(up);23622356 /*···23842376 port->mctrl &= ~TIOCM_OUT2;2385237723862378 serial8250_set_mctrl(port, port->mctrl);23792379+23802380+ /* Disable break condition */23812381+ lcr = serial_port_in(port, UART_LCR);23822382+ lcr &= ~UART_LCR_SBC;23832383+ serial_port_out(port, UART_LCR, lcr);23872384 }2388238523892389- /*23902390- * Disable break condition and FIFOs23912391- */23922392- serial_port_out(port, UART_LCR,23932393- serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);23942386 serial8250_clear_fifos(up);2395238723962388 rsa_disable(up);···24002392 * the IRQ chain.24012393 */24022394 serial_port_in(port, UART_RX);23952395+ /*23962396+ * LCR writes on DW UART can trigger late (unmaskable) IRQs.23972397+ * Handle them before releasing the handler.23982398+ */23992399+ synchronize_irq(port->irq);24002400+24032401 serial8250_rpm_put(up);2404240224052403 up->ops->release_irq(up);···31993185}32003186EXPORT_SYMBOL_GPL(serial8250_set_defaults);3201318731883188+void serial8250_fifo_wait_for_lsr_thre(struct uart_8250_port *up, unsigned int count)31893189+{31903190+ unsigned int 
i;31913191+31923192+ for (i = 0; i < count; i++) {31933193+ if (wait_for_lsr(up, UART_LSR_THRE))31943194+ return;31953195+ }31963196+}31973197+EXPORT_SYMBOL_NS_GPL(serial8250_fifo_wait_for_lsr_thre, "SERIAL_8250");31983198+32023199#ifdef CONFIG_SERIAL_8250_CONSOLE3203320032043201static void serial8250_console_putchar(struct uart_port *port, unsigned char ch)···32513226 serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS);32523227}3253322832543254-static void fifo_wait_for_lsr(struct uart_8250_port *up, unsigned int count)32553255-{32563256- unsigned int i;32573257-32583258- for (i = 0; i < count; i++) {32593259- if (wait_for_lsr(up, UART_LSR_THRE))32603260- return;32613261- }32623262-}32633263-32643229/*32653230 * Print a string to the serial port using the device FIFO32663231 *···3269325432703255 while (s != end) {32713256 /* Allow timeout for each byte of a possibly full FIFO */32723272- fifo_wait_for_lsr(up, fifosize);32573257+ serial8250_fifo_wait_for_lsr_thre(up, fifosize);3273325832743259 for (i = 0; i < fifosize && s != end; ++i) {32753260 if (*s == '\n' && !cr_sent) {···32873272 * Allow timeout for each byte written since the caller will only wait32883273 * for UART_LSR_BOTH_EMPTY using the timeout of a single character32893274 */32903290- fifo_wait_for_lsr(up, tx_count);32753275+ serial8250_fifo_wait_for_lsr_thre(up, tx_count);32913276}3292327732933278/*
+4-1
drivers/tty/serial/serial_core.c
···643643 unsigned int ret;644644645645 port = uart_port_ref_lock(state, &flags);646646- ret = kfifo_avail(&state->port.xmit_fifo);646646+ if (!state->port.xmit_buf)647647+ ret = 0;648648+ else649649+ ret = kfifo_avail(&state->port.xmit_fifo);647650 uart_port_unlock_deref(port, flags);648651 return ret;649652}
···927927 dev->descriptor.bNumConfigurations = ncfg = USB_MAXCONFIG;928928 }929929930930- if (ncfg < 1) {930930+ if (ncfg < 1 && dev->quirks & USB_QUIRK_FORCE_ONE_CONFIG) {931931+ dev_info(ddev, "Device claims zero configurations, forcing to 1\n");932932+ dev->descriptor.bNumConfigurations = 1;933933+ ncfg = 1;934934+ } else if (ncfg < 1) {931935 dev_err(ddev, "no configurations\n");932936 return -EINVAL;933937 }
+79-21
drivers/usb/core/message.c
···424243434444/*4545- * Starts urb and waits for completion or timeout. Note that this call4646- * is NOT interruptible. Many device driver i/o requests should be4747- * interruptible and therefore these drivers should implement their4848- * own interruptible routines.4545+ * Starts urb and waits for completion or timeout.4646+ * Whether or not the wait is killable depends on the flag passed in.4747+ * For example, compare usb_bulk_msg() and usb_bulk_msg_killable().4848+ *4949+ * For non-killable waits, we enforce a maximum limit on the timeout value.4950 */5050-static int usb_start_wait_urb(struct urb *urb, int timeout, int *actual_length)5151+static int usb_start_wait_urb(struct urb *urb, int timeout, int *actual_length,5252+ bool killable)5153{5254 struct api_context ctx;5355 unsigned long expire;5456 int retval;5757+ long rc;55585659 init_completion(&ctx.done);5760 urb->context = &ctx;···6360 if (unlikely(retval))6461 goto out;65626666- expire = timeout ? msecs_to_jiffies(timeout) : MAX_SCHEDULE_TIMEOUT;6767- if (!wait_for_completion_timeout(&ctx.done, expire)) {6363+ if (!killable && (timeout <= 0 || timeout > USB_MAX_SYNCHRONOUS_TIMEOUT))6464+ timeout = USB_MAX_SYNCHRONOUS_TIMEOUT;6565+ expire = (timeout > 0) ? msecs_to_jiffies(timeout) : MAX_SCHEDULE_TIMEOUT;6666+ if (killable)6767+ rc = wait_for_completion_killable_timeout(&ctx.done, expire);6868+ else6969+ rc = wait_for_completion_timeout(&ctx.done, expire);7070+ if (rc <= 0) {6871 usb_kill_urb(urb);6969- retval = (ctx.status == -ENOENT ? -ETIMEDOUT : ctx.status);7272+ if (ctx.status != -ENOENT)7373+ retval = ctx.status;7474+ else if (rc == 0)7575+ retval = -ETIMEDOUT;7676+ else7777+ retval = rc;70787179 dev_dbg(&urb->dev->dev,7272- "%s timed out on ep%d%s len=%u/%u\n",8080+ "%s timed out or killed on ep%d%s len=%u/%u\n",7381 current->comm,7482 usb_endpoint_num(&urb->ep->desc),7583 usb_urb_dir_in(urb) ? 
"in" : "out",···114100 usb_fill_control_urb(urb, usb_dev, pipe, (unsigned char *)cmd, data,115101 len, usb_api_blocking_completion, NULL);116102117117- retv = usb_start_wait_urb(urb, timeout, &length);103103+ retv = usb_start_wait_urb(urb, timeout, &length, false);118104 if (retv < 0)119105 return retv;120106 else···131117 * @index: USB message index value132118 * @data: pointer to the data to send133119 * @size: length in bytes of the data to send134134- * @timeout: time in msecs to wait for the message to complete before timing135135- * out (if 0 the wait is forever)120120+ * @timeout: time in msecs to wait for the message to complete before timing out136121 *137122 * Context: task context, might sleep.138123 *···186173 * @index: USB message index value187174 * @driver_data: pointer to the data to send188175 * @size: length in bytes of the data to send189189- * @timeout: time in msecs to wait for the message to complete before timing190190- * out (if 0 the wait is forever)176176+ * @timeout: time in msecs to wait for the message to complete before timing out191177 * @memflags: the flags for memory allocation for buffers192178 *193179 * Context: !in_interrupt ()···244232 * @index: USB message index value245233 * @driver_data: pointer to the data to be filled in by the message246234 * @size: length in bytes of the data to be received247247- * @timeout: time in msecs to wait for the message to complete before timing248248- * out (if 0 the wait is forever)235235+ * @timeout: time in msecs to wait for the message to complete before timing out249236 * @memflags: the flags for memory allocation for buffers250237 *251238 * Context: !in_interrupt ()···315304 * @len: length in bytes of the data to send316305 * @actual_length: pointer to a location to put the actual length transferred317306 * in bytes318318- * @timeout: time in msecs to wait for the message to complete before319319- * timing out (if 0 the wait is forever)307307+ * @timeout: time in msecs to wait for the 
message to complete before timing out320308 *321309 * Context: task context, might sleep.322310 *···347337 * @len: length in bytes of the data to send348338 * @actual_length: pointer to a location to put the actual length transferred349339 * in bytes350350- * @timeout: time in msecs to wait for the message to complete before351351- * timing out (if 0 the wait is forever)340340+ * @timeout: time in msecs to wait for the message to complete before timing out352341 *353342 * Context: task context, might sleep.354343 *···394385 usb_fill_bulk_urb(urb, usb_dev, pipe, data, len,395386 usb_api_blocking_completion, NULL);396387397397- return usb_start_wait_urb(urb, timeout, actual_length);388388+ return usb_start_wait_urb(urb, timeout, actual_length, false);398389}399390EXPORT_SYMBOL_GPL(usb_bulk_msg);391391+392392+/**393393+ * usb_bulk_msg_killable - Builds a bulk urb, sends it off and waits for completion in a killable state394394+ * @usb_dev: pointer to the usb device to send the message to395395+ * @pipe: endpoint "pipe" to send the message to396396+ * @data: pointer to the data to send397397+ * @len: length in bytes of the data to send398398+ * @actual_length: pointer to a location to put the actual length transferred399399+ * in bytes400400+ * @timeout: time in msecs to wait for the message to complete before401401+ * timing out (if <= 0, the wait is as long as possible)402402+ *403403+ * Context: task context, might sleep.404404+ *405405+ * This function is just like usb_bulk_msg(), except that it waits in a406406+ * killable state and there is no limit on the timeout length.407407+ *408408+ * Return:409409+ * If successful, 0. Otherwise a negative error number. 
The number of actual410410+ * bytes transferred will be stored in the @actual_length parameter.411411+ *412412+ */413413+int usb_bulk_msg_killable(struct usb_device *usb_dev, unsigned int pipe,414414+ void *data, int len, int *actual_length, int timeout)415415+{416416+ struct urb *urb;417417+ struct usb_host_endpoint *ep;418418+419419+ ep = usb_pipe_endpoint(usb_dev, pipe);420420+ if (!ep || len < 0)421421+ return -EINVAL;422422+423423+ urb = usb_alloc_urb(0, GFP_KERNEL);424424+ if (!urb)425425+ return -ENOMEM;426426+427427+ if ((ep->desc.bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==428428+ USB_ENDPOINT_XFER_INT) {429429+ pipe = (pipe & ~(3 << 30)) | (PIPE_INTERRUPT << 30);430430+ usb_fill_int_urb(urb, usb_dev, pipe, data, len,431431+ usb_api_blocking_completion, NULL,432432+ ep->desc.bInterval);433433+ } else434434+ usb_fill_bulk_urb(urb, usb_dev, pipe, data, len,435435+ usb_api_blocking_completion, NULL);436436+437437+ return usb_start_wait_urb(urb, timeout, actual_length, true);438438+}439439+EXPORT_SYMBOL_GPL(usb_bulk_msg_killable);400440401441/*-------------------------------------------------------------------*/402442
···38383939struct eth_dev;40404141-/**4242- * struct gether_opts - Options for Ethernet gadget function instances4343- * @name: Pattern for the network interface name (e.g., "usb%d").4444- * Used to generate the net device name.4545- * @qmult: Queue length multiplier for high/super speed.4646- * @host_mac: The MAC address to be used by the host side.4747- * @dev_mac: The MAC address to be used by the device side.4848- * @ifname_set: True if the interface name pattern has been set by userspace.4949- * @addr_assign_type: The method used for assigning the device MAC address5050- * (e.g., NET_ADDR_RANDOM, NET_ADDR_SET).5151- *5252- * This structure caches network-related settings provided through configfs5353- * before the net_device is fully instantiated. This allows for early5454- * configuration while deferring net_device allocation until the function5555- * is bound.5656- */5757-struct gether_opts {5858- char name[IFNAMSIZ];5959- unsigned int qmult;6060- u8 host_mac[ETH_ALEN];6161- u8 dev_mac[ETH_ALEN];6262- bool ifname_set;6363- unsigned char addr_assign_type;6464-};6565-6641/*6742 * This represents the USB side of an "ethernet" link, managed by a USB6843 * function which provides control and (maybe) framing. Two functions···151176void gether_set_gadget(struct net_device *net, struct usb_gadget *g);152177153178/**179179+ * gether_attach_gadget - Reparent net_device to the gadget device.180180+ * @net: The network device to reparent.181181+ * @g: The target USB gadget device to parent to.182182+ *183183+ * This function moves the network device to be a child of the USB gadget184184+ * device in the device hierarchy. 
This is typically done when the function185185+ is bound to a configuration.186186+ *187187+ * Returns 0 on success, or a negative error code on failure.188188+ */189189+int gether_attach_gadget(struct net_device *net, struct usb_gadget *g);190190+191191+/**192192+ * gether_detach_gadget - Detach net_device from its gadget parent.193193+ * @net: The network device to detach.194194+ *195195+ * This function moves the network device to be a child of the virtual196196+ * device's parent, effectively detaching it from the USB gadget device197197+ * hierarchy. This is typically done when the function is unbound198198+ * from a configuration but the instance is not yet freed.199199+ */200200+void gether_detach_gadget(struct net_device *net);201201+202202+DEFINE_FREE(detach_gadget, struct net_device *, if (_T) gether_detach_gadget(_T))203203+204204+/**154205 * gether_set_dev_addr - initialize an ethernet-over-usb link with eth address155206 * @net: device representing this link156207 * @dev_addr: eth address of this device···284283int gether_set_ifname(struct net_device *net, const char *name, int len);285284286285void gether_cleanup(struct eth_dev *dev);287287-void gether_unregister_free_netdev(struct net_device *net);288288-DEFINE_FREE(free_gether_netdev, struct net_device *, gether_unregister_free_netdev(_T));289289-290290-void gether_setup_opts_default(struct gether_opts *opts, const char *name);291291-void gether_apply_opts(struct net_device *net, struct gether_opts *opts);292286293287void gether_suspend(struct gether *link);294288void gether_resume(struct gether *link);
···386386static int xhci_portli_show(struct seq_file *s, void *unused)387387{388388 struct xhci_port *port = s->private;389389- struct xhci_hcd *xhci = hcd_to_xhci(port->rhub->hcd);389389+ struct xhci_hcd *xhci;390390 u32 portli;391391392392 portli = readl(&port->port_reg->portli);393393+394394+ /* port without protocol capability isn't added to a roothub */395395+ if (!port->rhub) {396396+ seq_printf(s, "0x%08x\n", portli);397397+ return 0;398398+ }399399+400400+ xhci = hcd_to_xhci(port->rhub->hcd);393401394402 /* PORTLI fields are valid if port is a USB3 or eUSB2V2 port */395403 if (port->rhub == &xhci->usb3_rhub)
···707707 if (signal_pending (current)) 708708 {709709 mutex_unlock(&mdc800->io_lock);710710- return -EINTR;710710+ return len == left ? -EINTR : len-left;711711 }712712713713 sts=left > (mdc800->out_count-mdc800->out_ptr)?mdc800->out_count-mdc800->out_ptr:left;···730730 mutex_unlock(&mdc800->io_lock);731731 return len-left;732732 }733733- wait_event_timeout(mdc800->download_wait,733733+ retval = wait_event_timeout(mdc800->download_wait,734734 mdc800->downloaded,735735 msecs_to_jiffies(TO_DOWNLOAD_GET_READY));736736+ if (!retval)737737+ usb_kill_urb(mdc800->download_urb);736738 mdc800->downloaded = 0;737739 if (mdc800->download_urb->status != 0)738740 {
+1-1
drivers/usb/misc/uss720.c
···736736 ret = get_1284_register(pp, 0, ®, GFP_KERNEL);737737 dev_dbg(&intf->dev, "reg: %7ph\n", priv->reg);738738 if (ret < 0)739739- return ret;739739+ goto probe_abort;740740741741 ret = usb_find_last_int_in_endpoint(interface, &epd);742742 if (!ret) {
+1-1
drivers/usb/misc/yurex.c
···272272 dev->int_buffer, YUREX_BUF_SIZE, yurex_interrupt,273273 dev, 1);274274 dev->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;275275+ dev->bbu = -1;275276 if (usb_submit_urb(dev->urb, GFP_KERNEL)) {276277 retval = -EIO;277278 dev_err(&interface->dev, "Could not submitting URB\n");···281280282281 /* save our data pointer in this interface device */283282 usb_set_intfdata(interface, dev);284284- dev->bbu = -1;285283286284 /* we can register the device now, as it is ready */287285 retval = usb_register_dev(interface, &yurex_class);
+9
drivers/usb/renesas_usbhs/common.c
···815815816816 usbhs_platform_call(priv, hardware_exit, pdev);817817 reset_control_assert(priv->rsts);818818+819819+ /*820820+ * Explicitly free the IRQ to ensure the interrupt handler is821821+ * disabled and synchronized before freeing resources.822822+ * devm_free_irq() calls free_irq() which waits for any running823823+ * ISR to complete, preventing UAF.824824+ */825825+ devm_free_irq(&pdev->dev, priv->irq, priv);826826+818827 usbhs_mod_remove(priv);819828 usbhs_fifo_remove(priv);820829 usbhs_pipe_remove(priv);
···13931393 .indirect_missing_keys = PREFTREE_INIT13941394 };1395139513961396+ if (unlikely(!root)) {13971397+ btrfs_err(ctx->fs_info,13981398+ "missing extent root for extent at bytenr %llu",13991399+ ctx->bytenr);14001400+ return -EUCLEAN;14011401+ }14021402+13961403 /* Roots ulist is not needed when using a sharedness check context. */13971404 if (sc)13981405 ASSERT(ctx->roots == NULL);···22112204 struct btrfs_extent_item *ei;22122205 struct btrfs_key key;2213220622072207+ if (unlikely(!extent_root)) {22082208+ btrfs_err(fs_info,22092209+ "missing extent root for extent at bytenr %llu",22102210+ logical);22112211+ return -EUCLEAN;22122212+ }22132213+22142214 key.objectid = logical;22152215 if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))22162216 key.type = BTRFS_METADATA_ITEM_KEY;···28652851 struct btrfs_key key;28662852 int ret;2867285328542854+ if (unlikely(!extent_root)) {28552855+ btrfs_err(fs_info,28562856+ "missing extent root for extent at bytenr %llu",28572857+ bytenr);28582858+ return -EUCLEAN;28592859+ }28602860+28682861 key.objectid = bytenr;28692862 key.type = BTRFS_METADATA_ITEM_KEY;28702863 key.offset = (u64)-1;···3008298730092988 /* We're at keyed items, there is no inline item, go to the next one */30102989 extent_root = btrfs_extent_root(iter->fs_info, iter->bytenr);29902990+ if (unlikely(!extent_root)) {29912991+ btrfs_err(iter->fs_info,29922992+ "missing extent root for extent at bytenr %llu",29932993+ iter->bytenr);29942994+ return -EUCLEAN;29952995+ }29962996+30112997 ret = btrfs_next_item(extent_root, iter->path);30122998 if (ret)30132999 return ret;
+36
fs/btrfs/block-group.c
···739739740740 last = max_t(u64, block_group->start, BTRFS_SUPER_INFO_OFFSET);741741 extent_root = btrfs_extent_root(fs_info, last);742742+ if (unlikely(!extent_root)) {743743+ btrfs_err(fs_info,744744+ "missing extent root for block group at offset %llu",745745+ block_group->start);746746+ return -EUCLEAN;747747+ }742748743749#ifdef CONFIG_BTRFS_DEBUG744750 /*···10671061 int ret;1068106210691063 root = btrfs_block_group_root(fs_info);10641064+ if (unlikely(!root)) {10651065+ btrfs_err(fs_info, "missing block group root");10661066+ return -EUCLEAN;10671067+ }10681068+10701069 key.objectid = block_group->start;10711070 key.type = BTRFS_BLOCK_GROUP_ITEM_KEY;10721071 key.offset = block_group->length;···13591348 struct btrfs_root *root = btrfs_block_group_root(fs_info);13601349 struct btrfs_chunk_map *map;13611350 unsigned int num_items;13511351+13521352+ if (unlikely(!root)) {13531353+ btrfs_err(fs_info, "missing block group root");13541354+ return ERR_PTR(-EUCLEAN);13551355+ }1362135613631357 map = btrfs_find_chunk_map(fs_info, chunk_offset, 1);13641358 ASSERT(map != NULL);···21562140 int ret;21572141 struct btrfs_key found_key;2158214221432143+ if (unlikely(!root)) {21442144+ btrfs_err(fs_info, "missing block group root");21452145+ return -EUCLEAN;21462146+ }21472147+21592148 btrfs_for_each_slot(root, key, &found_key, path, ret) {21602149 if (found_key.objectid >= key->objectid &&21612150 found_key.type == BTRFS_BLOCK_GROUP_ITEM_KEY) {···27342713 size_t size;27352714 int ret;2736271527162716+ if (unlikely(!root)) {27172717+ btrfs_err(fs_info, "missing block group root");27182718+ return -EUCLEAN;27192719+ }27202720+27372721 spin_lock(&block_group->lock);27382722 btrfs_set_stack_block_group_v2_used(&bgi, block_group->used);27392723 btrfs_set_stack_block_group_v2_chunk_objectid(&bgi, block_group->global_root_id);···30743048 int ret;30753049 bool dirty_bg_running;3076305030513051+ if (unlikely(!root)) {30523052+ btrfs_err(fs_info, "missing block group root");30533053+ 
return -EUCLEAN;30543054+ }30553055+30773056 /*30783057 * This can only happen when we are doing read-only scrub on read-only30793058 * mount.···32223191 u32 old_last_identity_remap_count;32233192 u64 used, remap_bytes;32243193 u32 identity_remap_count;31943194+31953195+ if (unlikely(!root)) {31963196+ btrfs_err(fs_info, "missing block group root");31973197+ return -EUCLEAN;31983198+ }3225319932263200 /*32273201 * Block group items update can be triggered out of commit transaction
+8-3
fs/btrfs/compression.c
···320320321321 ASSERT(IS_ALIGNED(ordered->file_offset, fs_info->sectorsize));322322 ASSERT(IS_ALIGNED(ordered->num_bytes, fs_info->sectorsize));323323- ASSERT(cb->writeback);323323+ /*324324+ * This flag determines if we should clear the writeback flag from the325325+ * page cache. But this function is only utilized by encoded writes, which326326+ * never go through the page cache.327327+ */328328+ ASSERT(!cb->writeback);324329325330 cb->start = ordered->file_offset;326331 cb->len = ordered->num_bytes;332332+ ASSERT(cb->bbio.bio.bi_iter.bi_size == ordered->disk_num_bytes);327333 cb->compressed_len = ordered->disk_num_bytes;328334 cb->bbio.bio.bi_iter.bi_sector = ordered->disk_bytenr >> SECTOR_SHIFT;329335 cb->bbio.ordered = ordered;···351345 cb = alloc_compressed_bio(inode, start, REQ_OP_WRITE, end_bbio_compressed_write);352346 cb->start = start;353347 cb->len = len;354354- cb->writeback = true;355355-348348+ cb->writeback = false;356349 return cb;357350}358351
+23-4
fs/btrfs/disk-io.c
···15911591 * this will bump the backup pointer by one when it is15921592 * done15931593 */15941594-static void backup_super_roots(struct btrfs_fs_info *info)15941594+static int backup_super_roots(struct btrfs_fs_info *info)15951595{15961596 const int next_backup = info->backup_root_index;15971597 struct btrfs_root_backup *root_backup;···16221622 if (!btrfs_fs_incompat(info, EXTENT_TREE_V2)) {16231623 struct btrfs_root *extent_root = btrfs_extent_root(info, 0);16241624 struct btrfs_root *csum_root = btrfs_csum_root(info, 0);16251625+16261626+ if (unlikely(!extent_root)) {16271627+ btrfs_err(info, "missing extent root for extent at bytenr 0");16281628+ return -EUCLEAN;16291629+ }16301630+ if (unlikely(!csum_root)) {16311631+ btrfs_err(info, "missing csum root for extent at bytenr 0");16321632+ return -EUCLEAN;16331633+ }1625163416261635 btrfs_set_backup_extent_root(root_backup,16271636 extent_root->node->start);···16791670 memcpy(&info->super_copy->super_roots,16801671 &info->super_for_commit->super_roots,16811672 sizeof(*root_backup) * BTRFS_NUM_BACKUP_ROOTS);16731673+16741674+ return 0;16821675}1683167616841677/*···36053594 }36063595 }3607359636083608- btrfs_zoned_reserve_data_reloc_bg(fs_info);36093597 btrfs_free_zone_cache(fs_info);3610359836113599 btrfs_check_active_zone_reservation(fs_info);···36313621 ret = PTR_ERR(fs_info->transaction_kthread);36323622 goto fail_cleaner;36333623 }36243624+36253625+ /*36263626+ * Starts a transaction, must be called after the transaction kthread36273627+ * is initialized.36283628+ */36293629+ btrfs_zoned_reserve_data_reloc_bg(fs_info);3634363036353631 ret = btrfs_read_qgroup_config(fs_info);36363632 if (ret)···40624046 * not from fsync where the tree roots in fs_info have not40634047 * been consistent on disk.40644048 */40654065- if (max_mirrors == 0)40664066- backup_super_roots(fs_info);40494049+ if (max_mirrors == 0) {40504050+ ret = backup_super_roots(fs_info);40514051+ if (ret < 0)40524052+ return ret;40534053+ 
}4067405440684055 sb = fs_info->super_for_commit;40694056 dev_item = &sb->dev_item;
+93-5
fs/btrfs/extent-tree.c
···7575 struct btrfs_key key;7676 BTRFS_PATH_AUTO_FREE(path);77777878+ if (unlikely(!root)) {7979+ btrfs_err(fs_info,8080+ "missing extent root for extent at bytenr %llu", start);8181+ return -EUCLEAN;8282+ }8383+7884 path = btrfs_alloc_path();7985 if (!path)8086 return -ENOMEM;···137131 key.offset = offset;138132139133 extent_root = btrfs_extent_root(fs_info, bytenr);134134+ if (unlikely(!extent_root)) {135135+ btrfs_err(fs_info,136136+ "missing extent root for extent at bytenr %llu", bytenr);137137+ return -EUCLEAN;138138+ }139139+140140 ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0);141141 if (ret < 0)142142 return ret;···448436 int recow;449437 int ret;450438439439+ if (unlikely(!root)) {440440+ btrfs_err(trans->fs_info,441441+ "missing extent root for extent at bytenr %llu", bytenr);442442+ return -EUCLEAN;443443+ }444444+451445 key.objectid = bytenr;452446 if (parent) {453447 key.type = BTRFS_SHARED_DATA_REF_KEY;···527509 u32 size;528510 u32 num_refs;529511 int ret;512512+513513+ if (unlikely(!root)) {514514+ btrfs_err(trans->fs_info,515515+ "missing extent root for extent at bytenr %llu", bytenr);516516+ return -EUCLEAN;517517+ }530518531519 key.objectid = bytenr;532520 if (node->parent) {···692668 struct btrfs_key key;693669 int ret;694670671671+ if (unlikely(!root)) {672672+ btrfs_err(trans->fs_info,673673+ "missing extent root for extent at bytenr %llu", bytenr);674674+ return -EUCLEAN;675675+ }676676+695677 key.objectid = bytenr;696678 if (parent) {697679 key.type = BTRFS_SHARED_BLOCK_REF_KEY;···721691 struct btrfs_root *root = btrfs_extent_root(trans->fs_info, bytenr);722692 struct btrfs_key key;723693 int ret;694694+695695+ if (unlikely(!root)) {696696+ btrfs_err(trans->fs_info,697697+ "missing extent root for extent at bytenr %llu", bytenr);698698+ return -EUCLEAN;699699+ }724700725701 key.objectid = bytenr;726702 if (node->parent) {···817781 int ret;818782 bool skinny_metadata = btrfs_fs_incompat(fs_info, SKINNY_METADATA);819783 int 
needed;784784+785785+ if (unlikely(!root)) {786786+ btrfs_err(fs_info,787787+ "missing extent root for extent at bytenr %llu", bytenr);788788+ return -EUCLEAN;789789+ }820790821791 key.objectid = bytenr;822792 key.type = BTRFS_EXTENT_ITEM_KEY;···17221680 }1723168117241682 root = btrfs_extent_root(fs_info, key.objectid);16831683+ if (unlikely(!root)) {16841684+ btrfs_err(fs_info,16851685+ "missing extent root for extent at bytenr %llu",16861686+ key.objectid);16871687+ return -EUCLEAN;16881688+ }17251689again:17261690 ret = btrfs_search_slot(trans, root, &key, path, 0, 1);17271691 if (ret < 0) {···19741926 struct btrfs_root *csum_root;1975192719761928 csum_root = btrfs_csum_root(fs_info, head->bytenr);19771977- ret = btrfs_del_csums(trans, csum_root, head->bytenr,19781978- head->num_bytes);19291929+ if (unlikely(!csum_root)) {19301930+ btrfs_err(fs_info,19311931+ "missing csum root for extent at bytenr %llu",19321932+ head->bytenr);19331933+ ret = -EUCLEAN;19341934+ } else {19351935+ ret = btrfs_del_csums(trans, csum_root, head->bytenr,19361936+ head->num_bytes);19371937+ }19791938 }19801939 }19811940···24332378 u32 expected_size;24342379 int type;24352380 int ret;23812381+23822382+ if (unlikely(!extent_root)) {23832383+ btrfs_err(fs_info,23842384+ "missing extent root for extent at bytenr %llu", bytenr);23852385+ return -EUCLEAN;23862386+ }2436238724372388 key.objectid = bytenr;24382389 key.type = BTRFS_EXTENT_ITEM_KEY;···31543093 struct btrfs_root *csum_root;3155309431563095 csum_root = btrfs_csum_root(trans->fs_info, bytenr);30963096+ if (unlikely(!csum_root)) {30973097+ ret = -EUCLEAN;30983098+ btrfs_abort_transaction(trans, ret);30993099+ btrfs_err(trans->fs_info,31003100+ "missing csum root for extent at bytenr %llu",31013101+ bytenr);31023102+ return ret;31033103+ }31043104+31573105 ret = btrfs_del_csums(trans, csum_root, bytenr, num_bytes);31583106 if (unlikely(ret)) {31593107 btrfs_abort_transaction(trans, ret);···32923222 u64 delayed_ref_root = 
href->owning_root;3293322332943224 extent_root = btrfs_extent_root(info, bytenr);32953295- ASSERT(extent_root);32253225+ if (unlikely(!extent_root)) {32263226+ btrfs_err(info,32273227+ "missing extent root for extent at bytenr %llu", bytenr);32283228+ return -EUCLEAN;32293229+ }3296323032973231 path = btrfs_alloc_path();32983232 if (!path)···50134939 size += btrfs_extent_inline_ref_size(BTRFS_EXTENT_OWNER_REF_KEY);50144940 size += btrfs_extent_inline_ref_size(type);5015494149424942+ extent_root = btrfs_extent_root(fs_info, ins->objectid);49434943+ if (unlikely(!extent_root)) {49444944+ btrfs_err(fs_info,49454945+ "missing extent root for extent at bytenr %llu",49464946+ ins->objectid);49474947+ return -EUCLEAN;49484948+ }49494949+50164950 path = btrfs_alloc_path();50174951 if (!path)50184952 return -ENOMEM;5019495350205020- extent_root = btrfs_extent_root(fs_info, ins->objectid);50214954 ret = btrfs_insert_empty_item(trans, extent_root, path, ins, size);50224955 if (ret) {50234956 btrfs_free_path(path);···51005019 size += sizeof(*block_info);51015020 }5102502150225022+ extent_root = btrfs_extent_root(fs_info, extent_key.objectid);50235023+ if (unlikely(!extent_root)) {50245024+ btrfs_err(fs_info,50255025+ "missing extent root for extent at bytenr %llu",50265026+ extent_key.objectid);50275027+ return -EUCLEAN;50285028+ }50295029+51035030 path = btrfs_alloc_path();51045031 if (!path)51055032 return -ENOMEM;5106503351075107- extent_root = btrfs_extent_root(fs_info, extent_key.objectid);51085034 ret = btrfs_insert_empty_item(trans, extent_root, path, &extent_key,51095035 size);51105036 if (ret) {
···308308 /* Current item doesn't contain the desired range, search again */309309 btrfs_release_path(path);310310 csum_root = btrfs_csum_root(fs_info, disk_bytenr);311311+ if (unlikely(!csum_root)) {312312+ btrfs_err(fs_info,313313+ "missing csum root for extent at bytenr %llu",314314+ disk_bytenr);315315+ return -EUCLEAN;316316+ }317317+311318 item = btrfs_lookup_csum(NULL, csum_root, path, disk_bytenr, 0);312319 if (IS_ERR(item)) {313320 ret = PTR_ERR(item);
+8-1
fs/btrfs/free-space-tree.c
···10731073 if (ret)10741074 return ret;1075107510761076+ extent_root = btrfs_extent_root(trans->fs_info, block_group->start);10771077+ if (unlikely(!extent_root)) {10781078+ btrfs_err(trans->fs_info,10791079+ "missing extent root for block group at offset %llu",10801080+ block_group->start);10811081+ return -EUCLEAN;10821082+ }10831083+10761084 mutex_lock(&block_group->free_space_lock);1077108510781086 /*···10941086 key.type = BTRFS_EXTENT_ITEM_KEY;10951087 key.offset = 0;1096108810971097- extent_root = btrfs_extent_root(trans->fs_info, key.objectid);10981089 ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0);10991090 if (ret < 0)11001091 goto out_locked;
+39-5
fs/btrfs/inode.c
···20122012 */2013201320142014 csum_root = btrfs_csum_root(root->fs_info, io_start);20152015+ if (unlikely(!csum_root)) {20162016+ btrfs_err(root->fs_info,20172017+ "missing csum root for extent at bytenr %llu", io_start);20182018+ ret = -EUCLEAN;20192019+ goto out;20202020+ }20212021+20152022 ret = btrfs_lookup_csums_list(csum_root, io_start,20162023 io_start + args->file_extent.num_bytes - 1,20172024 NULL, nowait);···27562749 int ret;2757275027582751 list_for_each_entry(sum, list, list) {27592759- trans->adding_csums = true;27602760- if (!csum_root)27522752+ if (!csum_root) {27612753 csum_root = btrfs_csum_root(trans->fs_info,27622754 sum->logical);27552755+ if (unlikely(!csum_root)) {27562756+ btrfs_err(trans->fs_info,27572757+ "missing csum root for extent at bytenr %llu",27582758+ sum->logical);27592759+ return -EUCLEAN;27602760+ }27612761+ }27622762+ trans->adding_csums = true;27632763 ret = btrfs_csum_file_blocks(trans, csum_root, sum);27642764 trans->adding_csums = false;27652765 if (ret)···66266612 int ret;66276613 bool xa_reserved = false;6628661466156615+ if (!args->orphan && !args->subvol) {66166616+ /*66176617+ * Before anything else, check if we can add the name to the66186618+ * parent directory. We want to avoid a dir item overflow in66196619+ * case we have an existing dir item due to existing name66206620+ * hash collisions. 
We do this check here before we call66216621+ * btrfs_add_link() down below so that we can avoid a66226622+ * transaction abort (which could be exploited by malicious66236623+ * users).66246624+ *66256625+ * For subvolumes we already do this in btrfs_mksubvol().66266626+ */66276627+ ret = btrfs_check_dir_item_collision(BTRFS_I(dir)->root,66286628+ btrfs_ino(BTRFS_I(dir)),66296629+ name);66306630+ if (ret < 0)66316631+ return ret;66326632+ }66336633+66296634 path = btrfs_alloc_path();66306635 if (!path)66316636 return -ENOMEM;···98889855 int compression;98899856 size_t orig_count;98909857 const u32 min_folio_size = btrfs_min_folio_size(fs_info);98589858+ const u32 blocksize = fs_info->sectorsize;98919859 u64 start, end;98929860 u64 num_bytes, ram_bytes, disk_num_bytes;98939861 struct btrfs_key ins;···99999965 ret = -EFAULT;100009966 goto out_cb;100019967 }1000210002- if (bytes < min_folio_size)1000310003- folio_zero_range(folio, bytes, min_folio_size - bytes);1000410004- ret = bio_add_folio(&cb->bbio.bio, folio, folio_size(folio), 0);99689968+ if (!IS_ALIGNED(bytes, blocksize))99699969+ folio_zero_range(folio, bytes, round_up(bytes, blocksize) - bytes);99709970+ ret = bio_add_folio(&cb->bbio.bio, folio, round_up(bytes, blocksize), 0);100059971 if (unlikely(!ret)) {100069972 folio_put(folio);100079973 ret = -EINVAL;
+37-7
fs/btrfs/ioctl.c
···672672 goto out;673673 }674674675675+ /*676676+ * Subvolumes have orphans cleaned on first dentry lookup. A new677677+ * subvolume cannot have any orphans, so we should set the bit before we678678+ * add the subvolume dentry to the dentry cache, so that it is in the679679+ * same state as a subvolume after first lookup.680680+ */681681+ set_bit(BTRFS_ROOT_ORPHAN_CLEANUP, &new_root->state);675682 d_instantiate_new(dentry, new_inode_args.inode);676683 new_inode_args.inode = NULL;677684···36173610 }36183611 }3619361236203620- trans = btrfs_join_transaction(root);36133613+ /* 2 BTRFS_QGROUP_RELATION_KEY items. */36143614+ trans = btrfs_start_transaction(root, 2);36213615 if (IS_ERR(trans)) {36223616 ret = PTR_ERR(trans);36233617 goto out;···36903682 goto out;36913683 }3692368436933693- trans = btrfs_join_transaction(root);36853685+ /*36863686+ * 1 BTRFS_QGROUP_INFO_KEY item.36873687+ * 1 BTRFS_QGROUP_LIMIT_KEY item.36883688+ */36893689+ trans = btrfs_start_transaction(root, 2);36943690 if (IS_ERR(trans)) {36953691 ret = PTR_ERR(trans);36963692 goto out;···37433731 goto drop_write;37443732 }3745373337463746- trans = btrfs_join_transaction(root);37343734+ /* 1 BTRFS_QGROUP_LIMIT_KEY item. */37353735+ trans = btrfs_start_transaction(root, 1);37473736 if (IS_ERR(trans)) {37483737 ret = PTR_ERR(trans);37493738 goto out;···38653852 goto out;38663853 }3867385438553855+ received_uuid_changed = memcmp(root_item->received_uuid, sa->uuid,38563856+ BTRFS_UUID_SIZE);38573857+38583858+ /*38593859+ * Before we attempt to add the new received uuid, check if we have room38603860+ * for it in case there's already an item. If the size of the existing38613861+ * item plus this root's ID (u64) exceeds the maximum item size, we can38623862+ * return here without the need to abort a transaction. If we don't do38633863+ * this check, the btrfs_uuid_tree_add() call below would fail with38643864+ * -EOVERFLOW and result in a transaction abort. 
Malicious users could38653865+ * exploit this to turn the fs into RO mode.38663866+ */38673867+ if (received_uuid_changed && !btrfs_is_empty_uuid(sa->uuid)) {38683868+ ret = btrfs_uuid_tree_check_overflow(fs_info, sa->uuid,38693869+ BTRFS_UUID_KEY_RECEIVED_SUBVOL);38703870+ if (ret < 0)38713871+ goto out;38723872+ }38733873+38683874 /*38693875 * 1 - root item38703876 * 2 - uuid items (received uuid + subvol uuid)···38993867 sa->rtime.sec = ct.tv_sec;39003868 sa->rtime.nsec = ct.tv_nsec;3901386939023902- received_uuid_changed = memcmp(root_item->received_uuid, sa->uuid,39033903- BTRFS_UUID_SIZE);39043870 if (received_uuid_changed &&39053871 !btrfs_is_empty_uuid(root_item->received_uuid)) {39063872 ret = btrfs_uuid_tree_remove(trans, root_item->received_uuid,39073873 BTRFS_UUID_KEY_RECEIVED_SUBVOL,39083874 btrfs_root_id(root));39093875 if (unlikely(ret && ret != -ENOENT)) {39103910- btrfs_abort_transaction(trans, ret);39113876 btrfs_end_transaction(trans);39123877 goto out;39133878 }···3919389039203891 ret = btrfs_update_root(trans, fs_info->tree_root,39213892 &root->root_key, &root->root_item);39223922- if (ret < 0) {38933893+ if (unlikely(ret < 0)) {38943894+ btrfs_abort_transaction(trans, ret);39233895 btrfs_end_transaction(trans);39243896 goto out;39253897 }
+2-2
fs/btrfs/lzo.c
···429429int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)430430{431431 struct workspace *workspace = list_entry(ws, struct workspace, list);432432- const struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;432432+ struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;433433 const u32 sectorsize = fs_info->sectorsize;434434 struct folio_iter fi;435435 char *kaddr;···447447 /* There must be a compressed folio and matches the sectorsize. */448448 if (unlikely(!fi.folio))449449 return -EINVAL;450450- ASSERT(folio_size(fi.folio) == sectorsize);450450+ ASSERT(folio_size(fi.folio) == btrfs_min_folio_size(fs_info));451451 kaddr = kmap_local_folio(fi.folio, 0);452452 len_in = read_compress_length(kaddr);453453 kunmap_local(kaddr);
···37393739 mutex_lock(&fs_info->qgroup_rescan_lock);37403740 extent_root = btrfs_extent_root(fs_info,37413741 fs_info->qgroup_rescan_progress.objectid);37423742+ if (unlikely(!extent_root)) {37433743+ btrfs_err(fs_info,37443744+ "missing extent root for extent at bytenr %llu",37453745+ fs_info->qgroup_rescan_progress.objectid);37463746+ mutex_unlock(&fs_info->qgroup_rescan_lock);37473747+ return -EUCLEAN;37483748+ }37493749+37423750 ret = btrfs_search_slot_for_read(extent_root,37433751 &fs_info->qgroup_rescan_progress,37443752 path, 1, 0);
+10-2
fs/btrfs/raid56.c
···22972297static void fill_data_csums(struct btrfs_raid_bio *rbio)22982298{22992299 struct btrfs_fs_info *fs_info = rbio->bioc->fs_info;23002300- struct btrfs_root *csum_root = btrfs_csum_root(fs_info,23012301- rbio->bioc->full_stripe_logical);23002300+ struct btrfs_root *csum_root;23022301 const u64 start = rbio->bioc->full_stripe_logical;23032302 const u32 len = (rbio->nr_data * rbio->stripe_nsectors) <<23042303 fs_info->sectorsize_bits;···23272328 GFP_NOFS);23282329 if (!rbio->csum_buf || !rbio->csum_bitmap) {23292330 ret = -ENOMEM;23312331+ goto error;23322332+ }23332333+23342334+ csum_root = btrfs_csum_root(fs_info, rbio->bioc->full_stripe_logical);23352335+ if (unlikely(!csum_root)) {23362336+ btrfs_err(fs_info,23372337+ "missing csum root for extent at bytenr %llu",23382338+ rbio->bioc->full_stripe_logical);23392339+ ret = -EUCLEAN;23302340 goto error;23312341 }23322342
+34-7
fs/btrfs/relocation.c
···41854185 dest_addr = ins.objectid;41864186 dest_length = ins.offset;4187418741884188+ dest_bg = btrfs_lookup_block_group(fs_info, dest_addr);41894189+41884190 if (!is_data && !IS_ALIGNED(dest_length, fs_info->nodesize)) {41894191 u64 new_length = ALIGN_DOWN(dest_length, fs_info->nodesize);41904192···42974295 if (unlikely(ret))42984296 goto end;4299429743004300- dest_bg = btrfs_lookup_block_group(fs_info, dest_addr);43014301-43024298 adjust_block_group_remap_bytes(trans, dest_bg, dest_length);4303429943044300 mutex_lock(&dest_bg->free_space_lock);43054301 bg_needs_free_space = test_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE,43064302 &dest_bg->runtime_flags);43074303 mutex_unlock(&dest_bg->free_space_lock);43084308- btrfs_put_block_group(dest_bg);4309430443104305 if (bg_needs_free_space) {43114306 ret = btrfs_add_block_group_free_space(trans, dest_bg);···43324333 btrfs_end_transaction(trans);43334334 }43344335 } else {43354335- dest_bg = btrfs_lookup_block_group(fs_info, dest_addr);43364336 btrfs_free_reserved_bytes(dest_bg, dest_length, 0);43374337- btrfs_put_block_group(dest_bg);4338433743394338 ret = btrfs_commit_transaction(trans);43404339 }43404340+43414341+ btrfs_put_block_group(dest_bg);4341434243424343 return ret;43434344}···4398439943994400 leaf = path->nodes[0];44004401 }44024402+44034403+ btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);44014404 }4402440544034406 remap = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_remap_item);···49534952 struct btrfs_space_info *sinfo = src_bg->space_info;4954495349554954 extent_root = btrfs_extent_root(fs_info, src_bg->start);49554955+ if (unlikely(!extent_root)) {49564956+ btrfs_err(fs_info,49574957+ "missing extent root for block group at offset %llu",49584958+ src_bg->start);49594959+ return -EUCLEAN;49604960+ }4956496149574962 trans = btrfs_start_transaction(extent_root, 0);49584963 if (IS_ERR(trans))···53115304 int ret;53125305 bool bg_is_ro = false;5313530653075307+ if (unlikely(!extent_root)) {53085308+ 
btrfs_err(fs_info,53095309+ "missing extent root for block group at offset %llu",53105310+ group_start);53115311+ return -EUCLEAN;53125312+ }53135313+53145314 /*53155315 * This only gets set if we had a half-deleted snapshot on mount. We53165316 * cannot allow relocation to start while we're still trying to clean up···55485534 goto out;55495535 }5550553655375537+ rc->extent_root = btrfs_extent_root(fs_info, 0);55385538+ if (unlikely(!rc->extent_root)) {55395539+ btrfs_err(fs_info, "missing extent root for extent at bytenr 0");55405540+ ret = -EUCLEAN;55415541+ goto out;55425542+ }55435543+55515544 ret = reloc_chunk_start(fs_info);55525545 if (ret < 0)55535546 goto out_end;55545554-55555555- rc->extent_root = btrfs_extent_root(fs_info, 0);5556554755575548 set_reloc_control(rc);55585549···56525633 struct btrfs_root *csum_root = btrfs_csum_root(fs_info, disk_bytenr);56535634 LIST_HEAD(list);56545635 int ret;56365636+56375637+ if (unlikely(!csum_root)) {56385638+ btrfs_mark_ordered_extent_error(ordered);56395639+ btrfs_err(fs_info,56405640+ "missing csum root for extent at bytenr %llu",56415641+ disk_bytenr);56425642+ return -EUCLEAN;56435643+ }5655564456565645 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr,56575646 disk_bytenr + ordered->num_bytes - 1,
+4-1
fs/btrfs/space-info.c
···21942194 if (!btrfs_should_periodic_reclaim(space_info))21952195 continue;21962196 for (raid = 0; raid < BTRFS_NR_RAID_TYPES; raid++) {21972197- if (do_reclaim_sweep(space_info, raid))21972197+ if (do_reclaim_sweep(space_info, raid)) {21982198+ spin_lock(&space_info->lock);21982199 btrfs_set_periodic_reclaim_ready(space_info, false);22002200+ spin_unlock(&space_info->lock);22012201+ }21992202 }22002203 }22012204}
+16
fs/btrfs/transaction.c
···19051905 ret = btrfs_uuid_tree_add(trans, new_root_item->received_uuid,19061906 BTRFS_UUID_KEY_RECEIVED_SUBVOL,19071907 objectid);19081908+ /*19091909+ * We are creating a lot of snapshots of the same root that was19101910+ * received (has a received UUID) and reached a leaf's limit for19111911+ * an item. We can safely ignore this and avoid a transaction19121912+ * abort. A deletion of this snapshot will still work since we19131913+ * ignore if an item with a BTRFS_UUID_KEY_RECEIVED_SUBVOL key19141914+ * is missing (see btrfs_delete_subvolume()). Send/receive will19151915+ * work too since it peeks the first root id from the existing19161916+ * item (it could peek any), and in case it's missing it19171917+ * falls back to search by BTRFS_UUID_KEY_SUBVOL keys.19181918+ * Creation of a snapshot does not require CAP_SYS_ADMIN, so19191919+ * we don't want users triggering transaction aborts, either19201920+ * intentionally or not.19211921+ */19221922+ if (ret == -EOVERFLOW)19231923+ ret = 0;19081924 if (unlikely(ret && ret != -EEXIST)) {19091925 btrfs_abort_transaction(trans, ret);19101926 goto fail;
+18-1
fs/btrfs/tree-checker.c
···12841284 }12851285 if (unlikely(btrfs_root_drop_level(&ri) >= BTRFS_MAX_LEVEL)) {12861286 generic_err(leaf, slot,12871287- "invalid root level, have %u expect [0, %u]",12871287+ "invalid root drop_level, have %u expect [0, %u]",12881288 btrfs_root_drop_level(&ri), BTRFS_MAX_LEVEL - 1);12891289+ return -EUCLEAN;12901290+ }12911291+ /*12921292+ * If drop_progress.objectid is non-zero, a btrfs_drop_snapshot() was12931293+ * interrupted and the resume point was recorded in drop_progress and12941294+ * drop_level. In that case drop_level must be >= 1: level 0 is the12951295+ * leaf level and drop_snapshot never saves a checkpoint there (it12961296+ * only records checkpoints at internal node levels in DROP_REFERENCE12971297+ * stage). A zero drop_level combined with a non-zero drop_progress12981298+ * objectid indicates on-disk corruption and would cause a BUG_ON in12991299+ * merge_reloc_root() and btrfs_drop_snapshot() at mount time.13001300+ */13011301+ if (unlikely(btrfs_disk_key_objectid(&ri.drop_progress) != 0 &&13021302+ btrfs_root_drop_level(&ri) == 0)) {13031303+ generic_err(leaf, slot,13041304+ "invalid root drop_level 0 with non-zero drop_progress objectid %llu",13051305+ btrfs_disk_key_objectid(&ri.drop_progress));12891306 return -EUCLEAN;12901307 }12911308
+27
fs/btrfs/tree-log.c
···984984985985 sums = list_first_entry(&ordered_sums, struct btrfs_ordered_sum, list);986986 csum_root = btrfs_csum_root(fs_info, sums->logical);987987+ if (unlikely(!csum_root)) {988988+ btrfs_err(fs_info,989989+ "missing csum root for extent at bytenr %llu",990990+ sums->logical);991991+ ret = -EUCLEAN;992992+ }993993+987994 if (!ret) {988995 ret = btrfs_del_csums(trans, csum_root, sums->logical,989996 sums->len);···48974890 }4898489148994892 csum_root = btrfs_csum_root(trans->fs_info, disk_bytenr);48934893+ if (unlikely(!csum_root)) {48944894+ btrfs_err(trans->fs_info,48954895+ "missing csum root for extent at bytenr %llu",48964896+ disk_bytenr);48974897+ return -EUCLEAN;48984898+ }48994899+49004900 disk_bytenr += extent_offset;49014901 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr,49024902 disk_bytenr + extent_num_bytes - 1,···51005086 /* block start is already adjusted for the file extent offset. */51015087 block_start = btrfs_extent_map_block_start(em);51025088 csum_root = btrfs_csum_root(trans->fs_info, block_start);50895089+ if (unlikely(!csum_root)) {50905090+ btrfs_err(trans->fs_info,50915091+ "missing csum root for extent at bytenr %llu",50925092+ block_start);50935093+ return -EUCLEAN;50945094+ }50955095+51035096 ret = btrfs_lookup_csums_list(csum_root, block_start + csum_offset,51045097 block_start + csum_offset + csum_len - 1,51055098 &ordered_sums, false);···62166195 struct btrfs_root *root,62176196 struct btrfs_log_ctx *ctx)62186197{61986198+ const bool orig_log_new_dentries = ctx->log_new_dentries;62196199 int ret = 0;6220620062216201 /*···62786256 * dir index key range logged for the directory. 
So we62796257 * must make sure the deletion is recorded.62806258 */62596259+ ctx->log_new_dentries = false;62816260 ret = btrfs_log_inode(trans, inode, LOG_INODE_ALL, ctx);62616261+ if (!ret && ctx->log_new_dentries)62626262+ ret = log_new_dir_dentries(trans, inode, ctx);62636263+62826264 btrfs_add_delayed_iput(inode);62836265 if (ret)62846266 break;···63176291 break;63186292 }6319629362946294+ ctx->log_new_dentries = orig_log_new_dentries;63206295 ctx->logging_conflict_inodes = false;63216296 if (ret)63226297 free_conflicting_inodes(ctx);
+38
fs/btrfs/uuid-tree.c
···199199 return 0;200200}201201202202+/*203203+ * Check if we can add one root ID to a UUID key.204204+ * If the key does not yet exist, we can; otherwise only if the extended item205205+ * does not exceed the maximum item size permitted by the leaf size.206206+ *207207+ * Returns 0 on success, negative value on error.208208+ */209209+int btrfs_uuid_tree_check_overflow(struct btrfs_fs_info *fs_info,210210+ const u8 *uuid, u8 type)211211+{212212+ BTRFS_PATH_AUTO_FREE(path);213213+ int ret;214214+ u32 item_size;215215+ struct btrfs_key key;216216+217217+ if (WARN_ON_ONCE(!fs_info->uuid_root))218218+ return -EINVAL;219219+220220+ path = btrfs_alloc_path();221221+ if (!path)222222+ return -ENOMEM;223223+224224+ btrfs_uuid_to_key(uuid, type, &key);225225+ ret = btrfs_search_slot(NULL, fs_info->uuid_root, &key, path, 0, 0);226226+ if (ret < 0)227227+ return ret;228228+ if (ret > 0)229229+ return 0;230230+231231+ item_size = btrfs_item_size(path->nodes[0], path->slots[0]);232232+233233+ if (sizeof(struct btrfs_item) + item_size + sizeof(u64) >234234+ BTRFS_LEAF_DATA_SIZE(fs_info))235235+ return -EOVERFLOW;236236+237237+ return 0;238238+}239239+202240static int btrfs_uuid_iter_rem(struct btrfs_root *uuid_root, u8 *uuid, u8 type,203241 u64 subid)204242{
···3587358735883588 /* step one, relocate all the extents inside this chunk */35893589 btrfs_scrub_pause(fs_info);35903590- ret = btrfs_relocate_block_group(fs_info, chunk_offset, true);35903590+ ret = btrfs_relocate_block_group(fs_info, chunk_offset, verbose);35913591 btrfs_scrub_continue(fs_info);35923592 if (ret) {35933593 /*···42774277end:42784278 while (!list_empty(chunks)) {42794279 bool is_unused;42804280+ struct btrfs_block_group *bg;4280428142814282 rci = list_first_entry(chunks, struct remap_chunk_info, list);4282428342834283- spin_lock(&rci->bg->lock);42844284- is_unused = !btrfs_is_block_group_used(rci->bg);42854285- spin_unlock(&rci->bg->lock);42844284+ bg = rci->bg;42854285+ if (bg) {42864286+ /*42874287+ * This is a bit racy and the 'used' status can change42884288+ * but this is not a problem as later functions will42894289+ * verify it again.42904290+ */42914291+ spin_lock(&bg->lock);42924292+ is_unused = !btrfs_is_block_group_used(bg);42934293+ spin_unlock(&bg->lock);4286429442874287- if (is_unused)42884288- btrfs_mark_bg_unused(rci->bg);42954295+ if (is_unused)42964296+ btrfs_mark_bg_unused(bg);4289429742904290- if (rci->made_ro)42914291- btrfs_dec_block_group_ro(rci->bg);42984298+ if (rci->made_ro)42994299+ btrfs_dec_block_group_ro(bg);4292430042934293- btrfs_put_block_group(rci->bg);43014301+ btrfs_put_block_group(bg);43024302+ }4294430342954304 list_del(&rci->list);42964305 kfree(rci);
+11-2
fs/btrfs/zoned.c
···337337 if (!btrfs_fs_incompat(fs_info, ZONED))338338 return 0;339339340340- mutex_lock(&fs_devices->device_list_mutex);340340+ /*341341+ * No need to take the device_list mutex here, we're still in the mount342342+ * path and devices cannot be added to or removed from the list yet.343343+ */341344 list_for_each_entry(device, &fs_devices->devices, dev_list) {342345 /* We can skip reading of zone info for missing devices */343346 if (!device->bdev)···350347 if (ret)351348 break;352349 }353353- mutex_unlock(&fs_devices->device_list_mutex);354350355351 return ret;356352}···12611259 key.offset = 0;1262126012631261 root = btrfs_extent_root(fs_info, key.objectid);12621262+ if (unlikely(!root)) {12631263+ btrfs_err(fs_info,12641264+ "missing extent root for extent at bytenr %llu",12651265+ key.objectid);12661266+ return -EUCLEAN;12671267+ }12681268+12641269 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);12651270 /* We should not find the exact match */12661271 if (unlikely(!ret))
···13391339 struct ceph_client *cl = fsc->client;13401340 struct ceph_mds_client *mdsc = fsc->mdsc;13411341 struct inode *inode = d_inode(dentry);13421342+ struct ceph_inode_info *ci = ceph_inode(inode);13421343 struct ceph_mds_request *req;13431344 bool try_async = ceph_test_mount_opt(fsc, ASYNC_DIROPS);13441345 struct dentry *dn;···13641363 if (!dn) {13651364 try_async = false;13661365 } else {13671367- struct ceph_path_info path_info;13661366+ struct ceph_path_info path_info = {0};13681367 path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);13691368 if (IS_ERR(path)) {13701369 try_async = false;···14251424 * We have enough caps, so we assume that the unlink14261425 * will succeed. Fix up the target inode and dcache.14271426 */14281428- drop_nlink(inode);14271427+14281428+ /*14291429+ * Protect the i_nlink update with i_ceph_lock14301430+ * to prevent racing against ceph_fill_inode()14311431+ * handling our completion on a worker thread14321432+ * and don't decrement if i_nlink has already14331433+ * been updated to zero by this completion.14341434+ */14351435+ spin_lock(&ci->i_ceph_lock);14361436+ if (inode->i_nlink > 0)14371437+ drop_nlink(inode);14381438+ spin_unlock(&ci->i_ceph_lock);14391439+14291440 d_delete(dentry);14301441 } else {14311442 spin_lock(&fsc->async_unlink_conflict_lock);
···8787 space programs which can be found in the Linux nfs-utils package,8888 available from http://linux-nfs.org/.89899090- If unsure, say Y.9090+ If unsure, say N.91919292config NFS_SWAP9393 bool "Provide swap over NFS support"···100100config NFS_V4_0101101 bool "NFS client support for NFSv4.0"102102 depends on NFS_V4103103+ default y103104 help104105 This option enables support for minor version 0 of the NFSv4 protocol105106 (RFC 3530) in the kernel's NFS client.
+6-1
fs/nfs/nfs3proc.c
···392392 if (status != 0)393393 goto out_release_acls;394394395395- if (d_alias)395395+ if (d_alias) {396396+ if (d_is_dir(d_alias)) {397397+ status = -EISDIR;398398+ goto out_dput;399399+ }396400 dentry = d_alias;401401+ }397402398403 /* When we created the file with exclusive semantics, make399404 * sure we set the attributes afterwards. */
+54-9
fs/nfsd/export.c
···3636 * second map contains a reference to the entry in the first map.3737 */38383939+static struct workqueue_struct *nfsd_export_wq;4040+3941#define EXPKEY_HASHBITS 84042#define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)4143#define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)42444343-static void expkey_put(struct kref *ref)4545+static void expkey_release(struct work_struct *work)4446{4545- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);4747+ struct svc_expkey *key = container_of(to_rcu_work(work),4848+ struct svc_expkey, ek_rwork);46494750 if (test_bit(CACHE_VALID, &key->h.flags) &&4851 !test_bit(CACHE_NEGATIVE, &key->h.flags))4952 path_put(&key->ek_path);5053 auth_domain_put(key->ek_client);5151- kfree_rcu(key, ek_rcu);5454+ kfree(key);5555+}5656+5757+static void expkey_put(struct kref *ref)5858+{5959+ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);6060+6161+ INIT_RCU_WORK(&key->ek_rwork, expkey_release);6262+ queue_rcu_work(nfsd_export_wq, &key->ek_rwork);5263}53645465static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)···364353 EXP_STATS_COUNTERS_NUM);365354}366355367367-static void svc_export_release(struct rcu_head *rcu_head)356356+static void svc_export_release(struct work_struct *work)368357{369369- struct svc_export *exp = container_of(rcu_head, struct svc_export,370370- ex_rcu);358358+ struct svc_export *exp = container_of(to_rcu_work(work),359359+ struct svc_export, ex_rwork);371360361361+ path_put(&exp->ex_path);362362+ auth_domain_put(exp->ex_client);372363 nfsd4_fslocs_free(&exp->ex_fslocs);373364 export_stats_destroy(exp->ex_stats);374365 kfree(exp->ex_stats);···382369{383370 struct svc_export *exp = container_of(ref, struct svc_export, h.ref);384371385385- path_put(&exp->ex_path);386386- auth_domain_put(exp->ex_client);387387- call_rcu(&exp->ex_rcu, svc_export_release);372372+ INIT_RCU_WORK(&exp->ex_rwork, svc_export_release);373373+ queue_rcu_work(nfsd_export_wq, 
&exp->ex_rwork);388374}389375390376static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)···14911479 .show = e_show,14921480};1493148114821482+/**14831483+ * nfsd_export_wq_init - allocate the export release workqueue14841484+ *14851485+ * Called once at module load. The workqueue runs deferred svc_export and14861486+ * svc_expkey release work scheduled by queue_rcu_work() in the cache put14871487+ * callbacks.14881488+ *14891489+ * Return values:14901490+ * %0: workqueue allocated14911491+ * %-ENOMEM: allocation failed14921492+ */14931493+int nfsd_export_wq_init(void)14941494+{14951495+ nfsd_export_wq = alloc_workqueue("nfsd_export", WQ_UNBOUND, 0);14961496+ if (!nfsd_export_wq)14971497+ return -ENOMEM;14981498+ return 0;14991499+}15001500+15011501+/**15021502+ * nfsd_export_wq_shutdown - drain and free the export release workqueue15031503+ *15041504+ * Called once at module unload. Per-namespace teardown in15051505+ * nfsd_export_shutdown() has already drained all deferred work.15061506+ */15071507+void nfsd_export_wq_shutdown(void)15081508+{15091509+ destroy_workqueue(nfsd_export_wq);15101510+}15111511+14941512/*14951513 * Initialize the exports module.14961514 */···1582154015831541 cache_unregister_net(nn->svc_expkey_cache, net);15841542 cache_unregister_net(nn->svc_export_cache, net);15431543+ /* Drain deferred export and expkey release work. */15441544+ rcu_barrier();15451545+ flush_workqueue(nfsd_export_wq);15851546 cache_destroy_net(nn->svc_expkey_cache, net);15861547 cache_destroy_net(nn->svc_export_cache, net);15871548 svcauth_unix_purge(net);
···541541 struct xdr_netobj cr_princhash;542542};543543544544-/* A reasonable value for REPLAY_ISIZE was estimated as follows: 545545- * The OPEN response, typically the largest, requires 546546- * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) + 8(verifier) + 547547- * 4(deleg. type) + 8(deleg. stateid) + 4(deleg. recall flag) + 548548- * 20(deleg. space limit) + ~32(deleg. ace) = 112 bytes 544544+/*545545+ * REPLAY_ISIZE is sized for an OPEN response with delegation:546546+ * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) +547547+ * 8(verifier) + 4(deleg. type) + 8(deleg. stateid) +548548+ * 4(deleg. recall flag) + 20(deleg. space limit) +549549+ * ~32(deleg. ace) = 112 bytes550550+ *551551+ * Some responses can exceed this. A LOCK denial includes the conflicting552552+ * lock owner, which can be up to 1024 bytes (NFS4_OPAQUE_LIMIT). Responses553553+ * larger than REPLAY_ISIZE are not cached in rp_ibuf; only rp_status is554554+ * saved. Enlarging this constant increases the size of every555555+ * nfs4_stateowner.549556 */550557551558#define NFSD4_REPLAY_ISIZE 112
···
 	case Kerberos:
 		if (!uid_eq(ctx->cred_uid, ses->cred_uid))
 			return 0;
+		if (strncmp(ses->user_name ?: "",
+			    ctx->username ?: "",
+			    CIFS_MAX_USERNAME_LEN))
+			return 0;
 		break;
 	case NTLMv2:
 	case RawNTLMSSP:
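The new Kerberos session-matching check compares user names with a bounded strncmp, treating a missing name as the empty string (the GNU `?:` fallback). A userspace sketch of the same predicate, with a stand-in value for CIFS_MAX_USERNAME_LEN:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_USERNAME_LEN 256	/* stand-in for CIFS_MAX_USERNAME_LEN */

/*
 * Mirror of the check above: a NULL name is treated as the empty
 * string, so a session with no user name only matches a mount context
 * with no user name (and vice versa).
 */
static bool usernames_match(const char *a, const char *b)
{
	return strncmp(a ? a : "", b ? b : "", MAX_USERNAME_LEN) == 0;
}
```

Without the NULL fallback, passing a NULL `user_name` straight to strncmp would be undefined behavior.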
+2-2
fs/smb/client/dir.c
···
 	const char *full_path;
 	void *page = alloc_dentry_path();
 	struct inode *newinode = NULL;
-	unsigned int sbflags;
+	unsigned int sbflags = cifs_sb_flags(cifs_sb);
 	int disposition;
 	struct TCP_Server_Info *server = tcon->ses->server;
 	struct cifs_open_parms oparms;
···
 		goto out;
 	}
 
+	create_options |= cifs_open_create_options(oflags, create_options);
 	/*
 	 * if we're not using unix extensions, see if we need to set
 	 * ATTR_READONLY on the create call
···
 	 * If Open reported that we actually created a file then we now have to
 	 * set the mode if possible.
 	 */
-	sbflags = cifs_sb_flags(cifs_sb);
 	if ((tcon->unix_ext) && (*oplock & CIFS_CREATE_ACTION)) {
 		struct cifs_unix_set_info_args args = {
 			.mode	= mode,
+69-61
fs/smb/client/file.c
···
 	struct cifs_io_request *req = container_of(wreq, struct cifs_io_request, rreq);
 	int ret;
 
-	ret = cifs_get_writable_file(CIFS_I(wreq->inode), FIND_WR_ANY, &req->cfile);
+	ret = cifs_get_writable_file(CIFS_I(wreq->inode), FIND_ANY, &req->cfile);
 	if (ret) {
 		cifs_dbg(VFS, "No writable handle in writepages ret=%d\n", ret);
 		return;
···
  *********************************************************************/
 
 	disposition = cifs_get_disposition(f_flags);
-
 	/* BB pass O_SYNC flag through on file attributes .. BB */
-
-	/* O_SYNC also has bit for O_DSYNC so following check picks up either */
-	if (f_flags & O_SYNC)
-		create_options |= CREATE_WRITE_THROUGH;
-
-	if (f_flags & O_DIRECT)
-		create_options |= CREATE_NO_BUFFER;
+	create_options |= cifs_open_create_options(f_flags, create_options);
 
 retry_open:
 	oparms = (struct cifs_open_parms) {
···
 		return tcon->ses->server->ops->flush(xid, tcon,
 						     &cfile->fid);
 	}
-	rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, &cfile);
+	rc = cifs_get_writable_file(CIFS_I(inode), FIND_ANY, &cfile);
 	if (!rc) {
 		tcon = tlink_tcon(cfile->tlink);
 		rc = tcon->ses->server->ops->flush(xid, tcon, &cfile->fid);
···
 		return -ERESTARTSYS;
 	mapping_set_error(inode->i_mapping, rc);
 
-	cfile = find_writable_file(cinode, FIND_WR_FSUID_ONLY);
+	cfile = find_writable_file(cinode, FIND_FSUID_ONLY);
 	rc = cifs_file_flush(xid, inode, cfile);
 	if (!rc) {
 		if (cfile) {
···
 		if (!rc) {
 			netfs_resize_file(&cinode->netfs, 0, true);
 			cifs_setsize(inode, 0);
-			inode->i_blocks = 0;
 		}
 	}
 	if (cfile)
···
 
 	/* Get the cached handle as SMB2 close is deferred */
 	if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) {
-		rc = cifs_get_writable_path(tcon, full_path,
-					    FIND_WR_FSUID_ONLY |
-					    FIND_WR_NO_PENDING_DELETE,
-					    &cfile);
+		rc = __cifs_get_writable_file(CIFS_I(inode),
+					      FIND_FSUID_ONLY |
+					      FIND_NO_PENDING_DELETE |
+					      FIND_OPEN_FLAGS,
+					      file->f_flags, &cfile);
 	} else {
-		rc = cifs_get_readable_path(tcon, full_path, &cfile);
+		cfile = __find_readable_file(CIFS_I(inode),
+					     FIND_NO_PENDING_DELETE |
+					     FIND_OPEN_FLAGS,
+					     file->f_flags);
+		rc = cfile ? 0 : -ENOENT;
 	}
 	if (rc == 0) {
-		unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
-		unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
-
-		if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) &&
-		    (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) {
-			file->private_data = cfile;
-			spin_lock(&CIFS_I(inode)->deferred_lock);
-			cifs_del_deferred_close(cfile);
-			spin_unlock(&CIFS_I(inode)->deferred_lock);
-			goto use_cache;
-		}
-		_cifsFileInfo_put(cfile, true, false);
-	} else {
-		/* hard link on the defeered close file */
-		rc = cifs_get_hardlink_path(tcon, inode, file);
-		if (rc)
-			cifs_close_deferred_file(CIFS_I(inode));
+		file->private_data = cfile;
+		spin_lock(&CIFS_I(inode)->deferred_lock);
+		cifs_del_deferred_close(cfile);
+		spin_unlock(&CIFS_I(inode)->deferred_lock);
+		goto use_cache;
 	}
+	/* hard link on the deferred close file */
+	rc = cifs_get_hardlink_path(tcon, inode, file);
+	if (rc)
+		cifs_close_deferred_file(CIFS_I(inode));
 
 	if (server->oplocks)
 		oplock = REQ_OPLOCK;
···
 		rdwr_for_fscache = 1;
 
 	desired_access = cifs_convert_flags(cfile->f_flags, rdwr_for_fscache);
-
-	/* O_SYNC also has bit for O_DSYNC so following check picks up either */
-	if (cfile->f_flags & O_SYNC)
-		create_options |= CREATE_WRITE_THROUGH;
-
-	if (cfile->f_flags & O_DIRECT)
-		create_options |= CREATE_NO_BUFFER;
+	create_options |= cifs_open_create_options(cfile->f_flags,
+						   create_options);
 
 	if (server->ops->get_lease_key)
 		server->ops->get_lease_key(inode, &cfile->fid);
···
 	netfs_write_subrequest_terminated(&wdata->subreq, result);
 }
 
-struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
-					bool fsuid_only)
+static bool open_flags_match(struct cifsInodeInfo *cinode,
+			     unsigned int oflags, unsigned int cflags)
+{
+	struct inode *inode = &cinode->netfs.inode;
+	int crw = 0, orw = 0;
+
+	oflags &= ~(O_CREAT | O_EXCL | O_TRUNC);
+	cflags &= ~(O_CREAT | O_EXCL | O_TRUNC);
+
+	if (cifs_fscache_enabled(inode)) {
+		if (OPEN_FMODE(cflags) & FMODE_WRITE)
+			crw = 1;
+		if (OPEN_FMODE(oflags) & FMODE_WRITE)
+			orw = 1;
+	}
+	if (cifs_convert_flags(oflags, orw) != cifs_convert_flags(cflags, crw))
+		return false;
+
+	return (oflags & (O_SYNC | O_DIRECT)) == (cflags & (O_SYNC | O_DIRECT));
+}
+
+struct cifsFileInfo *__find_readable_file(struct cifsInodeInfo *cifs_inode,
+					  unsigned int find_flags,
+					  unsigned int open_flags)
 {
 	struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode);
+	bool fsuid_only = find_flags & FIND_FSUID_ONLY;
 	struct cifsFileInfo *open_file = NULL;
 
 	/* only filter by fsuid on multiuser mounts */
···
 	   have a close pending, we go through the whole list */
 	list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {
 		if (fsuid_only && !uid_eq(open_file->uid, current_fsuid()))
+			continue;
+		if ((find_flags & FIND_NO_PENDING_DELETE) &&
+		    open_file->status_file_deleted)
+			continue;
+		if ((find_flags & FIND_OPEN_FLAGS) &&
+		    !open_flags_match(cifs_inode, open_flags,
+				      open_file->f_flags))
 			continue;
 		if (OPEN_FMODE(open_file->f_flags) & FMODE_READ) {
 			if ((!open_file->invalidHandle)) {
···
 }
 
 /* Return -EBADF if no handle is found and general rc otherwise */
-int
-cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags,
-		       struct cifsFileInfo **ret_file)
+int __cifs_get_writable_file(struct cifsInodeInfo *cifs_inode,
+			     unsigned int find_flags, unsigned int open_flags,
+			     struct cifsFileInfo **ret_file)
 {
 	struct cifsFileInfo *open_file, *inv_file = NULL;
 	struct cifs_sb_info *cifs_sb;
 	bool any_available = false;
 	int rc = -EBADF;
 	unsigned int refind = 0;
-	bool fsuid_only = flags & FIND_WR_FSUID_ONLY;
-	bool with_delete = flags & FIND_WR_WITH_DELETE;
+	bool fsuid_only = find_flags & FIND_FSUID_ONLY;
+	bool with_delete = find_flags & FIND_WITH_DELETE;
 	*ret_file = NULL;
 
 	/*
···
 			continue;
 		if (with_delete && !(open_file->fid.access & DELETE))
 			continue;
-		if ((flags & FIND_WR_NO_PENDING_DELETE) &&
+		if ((find_flags & FIND_NO_PENDING_DELETE) &&
 		    open_file->status_file_deleted)
+			continue;
+		if ((find_flags & FIND_OPEN_FLAGS) &&
+		    !open_flags_match(cifs_inode, open_flags,
+				      open_file->f_flags))
 			continue;
 		if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
 			if (!open_file->invalidHandle) {
···
 	cinode = CIFS_I(d_inode(cfile->dentry));
 	spin_unlock(&tcon->open_file_lock);
 	free_dentry_path(page);
-	*ret_file = find_readable_file(cinode, 0);
-	if (*ret_file) {
-		spin_lock(&cinode->open_file_lock);
-		if ((*ret_file)->status_file_deleted) {
-			spin_unlock(&cinode->open_file_lock);
-			cifsFileInfo_put(*ret_file);
-			*ret_file = NULL;
-		} else {
-			spin_unlock(&cinode->open_file_lock);
-		}
-	}
+	*ret_file = find_readable_file(cinode, FIND_ANY);
 	return *ret_file ? 0 : -ENOENT;
 }
 
···
 	}
 
 	if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
-		smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
+		smbfile = find_writable_file(CIFS_I(inode), FIND_ANY);
 		if (smbfile) {
 			rc = server->ops->flush(xid, tcon, &smbfile->fid);
 			cifsFileInfo_put(smbfile);
+1-1
fs/smb/client/fs_context.c
···
 	ctx->backupuid_specified = false; /* no backup intent for a user */
 	ctx->backupgid_specified = false; /* no backup intent for a group */
 
-	ctx->retrans = 1;
+	ctx->retrans = 0;
 	ctx->reparse_type = CIFS_REPARSE_TYPE_DEFAULT;
 	ctx->symlink_type = CIFS_SYMLINK_TYPE_DEFAULT;
 	ctx->nonativesocket = 0;
+9-18
fs/smb/client/inode.c
···
 	 */
 	if (is_size_safe_to_change(cifs_i, fattr->cf_eof, from_readdir)) {
 		i_size_write(inode, fattr->cf_eof);
-
-		/*
-		 * i_blocks is not related to (i_size / i_blksize),
-		 * but instead 512 byte (2**9) size is required for
-		 * calculating num blocks.
-		 */
-		inode->i_blocks = (512 - 1 + fattr->cf_bytes) >> 9;
+		inode->i_blocks = CIFS_INO_BLOCKS(fattr->cf_bytes);
 	}
 
 	if (S_ISLNK(fattr->cf_mode) && fattr->cf_symlink_target) {
···
 		}
 	}
 
-	cfile = find_readable_file(cifs_i, false);
+	cfile = find_readable_file(cifs_i, FIND_ANY);
 	if (cfile == NULL)
 		return -EINVAL;
 
···
 {
 	spin_lock(&inode->i_lock);
 	i_size_write(inode, offset);
+	/*
+	 * Until we can query the server for actual allocation size,
+	 * this is best estimate we have for blocks allocated for a file.
+	 */
+	inode->i_blocks = CIFS_INO_BLOCKS(offset);
 	spin_unlock(&inode->i_lock);
 	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 	truncate_pagecache(inode, offset);
···
 				   size, false);
 		cifs_dbg(FYI, "%s: set_file_size: rc = %d\n", __func__, rc);
 	} else {
-		open_file = find_writable_file(cifsInode, FIND_WR_FSUID_ONLY);
+		open_file = find_writable_file(cifsInode, FIND_FSUID_ONLY);
 		if (open_file) {
 			tcon = tlink_tcon(open_file->tlink);
 			server = tcon->ses->server;
···
 	if (rc == 0) {
 		netfs_resize_file(&cifsInode->netfs, size, true);
 		cifs_setsize(inode, size);
-		/*
-		 * i_blocks is not related to (i_size / i_blksize), but instead
-		 * 512 byte (2**9) size is required for calculating num blocks.
-		 * Until we can query the server for actual allocation size,
-		 * this is best estimate we have for blocks allocated for a file
-		 * Number of blocks must be rounded up so size 1 is not 0 blocks
-		 */
-		inode->i_blocks = (512 - 1 + size) >> 9;
 	}
 
 	return rc;
···
 					   open_file->fid.netfid,
 					   open_file->pid);
 	} else {
-		open_file = find_writable_file(cifsInode, FIND_WR_FSUID_ONLY);
+		open_file = find_writable_file(cifsInode, FIND_FSUID_ONLY);
 		if (open_file) {
 			pTcon = tlink_tcon(open_file->tlink);
 			rc = CIFSSMBUnixSetFileInfo(xid, pTcon, args,
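The open-coded `(512 - 1 + size) >> 9` rounding that this patch consolidates (presumably behind the new `CIFS_INO_BLOCKS()` helper, whose definition is not shown here) can be sketched in userspace. As the removed comment notes, i_blocks counts 512-byte units regardless of i_blksize, and rounding up keeps a 1-byte file from reporting 0 blocks:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Round a byte count up to 512-byte (2**9) sectors. This is a
 * userspace sketch of the rounding the removed open-coded expression
 * performed; the actual CIFS_INO_BLOCKS() macro is assumed, not shown.
 */
static inline uint64_t ino_blocks(uint64_t bytes)
{
	return (512 - 1 + bytes) >> 9;
}
```

Adding 511 before the shift is the usual idiom for round-up division by a power of two.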
+1-1
fs/smb/client/smb1ops.c
···
 	struct cifs_tcon *tcon;
 
 	/* if the file is already open for write, just use that fileid */
-	open_file = find_writable_file(cinode, FIND_WR_FSUID_ONLY);
+	open_file = find_writable_file(cinode, FIND_FSUID_ONLY);
 
 	if (open_file) {
 		fid.netfid = open_file->fid.netfid;
+1-1
fs/smb/client/smb1transport.c
···
 		return 0;
 
 	/*
-	 * Windows NT server returns error resposne (e.g. STATUS_DELETE_PENDING
+	 * Windows NT server returns error response (e.g. STATUS_DELETE_PENDING
 	 * or STATUS_OBJECT_NAME_NOT_FOUND or ERRDOS/ERRbadfile or any other)
 	 * for some TRANS2 requests without the RESPONSE flag set in header.
 	 */
···
 }
 
 #if IS_ENABLED(CONFIG_SMB_KUNIT_TESTS)
+#define EXPORT_SYMBOL_FOR_SMB_TEST(sym) \
+	EXPORT_SYMBOL_FOR_MODULES(sym, "smb2maperror_test")
+
 /* Previous prototype for eliminating the build warning. */
 const struct status_to_posix_error *smb2_get_err_map_test(__u32 smb2_status);
···
 {
 	return smb2_get_err_map(smb2_status);
 }
-EXPORT_SYMBOL_GPL(smb2_get_err_map_test);
+EXPORT_SYMBOL_FOR_SMB_TEST(smb2_get_err_map_test);
 
 const struct status_to_posix_error *smb2_error_map_table_test = smb2_error_map_table;
-EXPORT_SYMBOL_GPL(smb2_error_map_table_test);
+EXPORT_SYMBOL_FOR_SMB_TEST(smb2_error_map_table_test);
 
 unsigned int smb2_error_map_num = ARRAY_SIZE(smb2_error_map_table);
-EXPORT_SYMBOL_GPL(smb2_error_map_num);
+EXPORT_SYMBOL_FOR_SMB_TEST(smb2_error_map_num);
 #endif
+18-20
fs/smb/client/smb2ops.c
···
 	struct smb_sockaddr_in6 *p6;
 	struct cifs_server_iface *info = NULL, *iface = NULL, *niface = NULL;
 	struct cifs_server_iface tmp_iface;
+	__be16 port;
 	ssize_t bytes_left;
 	size_t next = 0;
 	int nb_iface = 0;
···
 		goto out;
 	}
 
+	spin_lock(&ses->server->srv_lock);
+	if (ses->server->dstaddr.ss_family == AF_INET)
+		port = ((struct sockaddr_in *)&ses->server->dstaddr)->sin_port;
+	else if (ses->server->dstaddr.ss_family == AF_INET6)
+		port = ((struct sockaddr_in6 *)&ses->server->dstaddr)->sin6_port;
+	else
+		port = cpu_to_be16(CIFS_PORT);
+	spin_unlock(&ses->server->srv_lock);
+
 	while (bytes_left >= (ssize_t)sizeof(*p)) {
 		memset(&tmp_iface, 0, sizeof(tmp_iface));
 		/* default to 1Gbps when link speed is unset */
···
 			memcpy(&addr4->sin_addr, &p4->IPv4Address, 4);
 
 			/* [MS-SMB2] 2.2.32.5.1.1 Clients MUST ignore these */
-			addr4->sin_port = cpu_to_be16(CIFS_PORT);
+			addr4->sin_port = port;
 
 			cifs_dbg(FYI, "%s: ipv4 %pI4\n", __func__,
 				 &addr4->sin_addr);
···
 			/* [MS-SMB2] 2.2.32.5.1.2 Clients MUST ignore these */
 			addr6->sin6_flowinfo = 0;
 			addr6->sin6_scope_id = 0;
-			addr6->sin6_port = cpu_to_be16(CIFS_PORT);
+			addr6->sin6_port = port;
 
 			cifs_dbg(FYI, "%s: ipv6 %pI6\n", __func__,
 				 &addr6->sin6_addr);
···
 {
 	struct smb2_file_network_open_info file_inf;
 	struct inode *inode;
+	u64 asize;
 	int rc;
 
 	rc = __SMB2_close(xid, tcon, cfile->fid.persistent_fid,
···
 	inode_set_atime_to_ts(inode,
 			      cifs_NTtimeToUnix(file_inf.LastAccessTime));
 
-	/*
-	 * i_blocks is not related to (i_size / i_blksize),
-	 * but instead 512 byte (2**9) size is required for
-	 * calculating num blocks.
-	 */
-	if (le64_to_cpu(file_inf.AllocationSize) > 4096)
-		inode->i_blocks =
-			(512 - 1 + le64_to_cpu(file_inf.AllocationSize)) >> 9;
+	asize = le64_to_cpu(file_inf.AllocationSize);
+	if (asize > 4096)
+		inode->i_blocks = CIFS_INO_BLOCKS(asize);
 
 	/* End of file and Attributes should not have to be updated on close */
 	spin_unlock(&inode->i_lock);
···
 	rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false);
 	if (rc)
 		goto duplicate_extents_out;
-
-	/*
-	 * Although also could set plausible allocation size (i_blocks)
-	 * here in addition to setting the file size, in reflink
-	 * it is likely that the target file is sparse. Its allocation
-	 * size will be queried on next revalidate, but it is important
-	 * to make sure that file's cached size is updated immediately
-	 */
 	netfs_resize_file(netfs_inode(inode), dest_off + len, true);
 	cifs_setsize(inode, dest_off + len);
 }
···
 	struct cifsFileInfo *open_file = NULL;
 
 	if (inode && !(info & SACL_SECINFO))
-		open_file = find_readable_file(CIFS_I(inode), true);
+		open_file = find_readable_file(CIFS_I(inode), FIND_FSUID_ONLY);
 	if (!open_file || (info & SACL_SECINFO))
 		return get_smb2_acl_by_path(cifs_sb, path, pacllen, info);
 
···
 	 * some servers (Windows2016) will not reflect recent writes in
 	 * QUERY_ALLOCATED_RANGES until SMB2_flush is called.
 	 */
-	wrcfile = find_writable_file(cifsi, FIND_WR_ANY);
+	wrcfile = find_writable_file(cifsi, FIND_ANY);
 	if (wrcfile) {
 		filemap_write_and_wait(inode->i_mapping);
 		smb2_flush_file(xid, tcon, &wrcfile->fid);
+4-1
fs/smb/client/smb2pdu.c
···
 
 	memset(&rqst, 0, sizeof(struct smb_rqst));
 	rqst.rq_iov = iov;
-	rqst.rq_nvec = n_vec + 1;
+	/* iov[0] is the SMB header; move payload to rq_iter for encryption safety */
+	rqst.rq_nvec = 1;
+	iov_iter_kvec(&rqst.rq_iter, ITER_SOURCE, &iov[1], n_vec,
+		      io_parms->length);
 
 	if (retries) {
 		/* Back-off before retry */
···
 	return 0;
 
 out_abort:
+	/*
+	 * Shut down the log before removing the dquot item from the AIL.
+	 * Otherwise, the log tail may advance past this item's LSN while
+	 * log writes are still in progress, making these unflushed changes
+	 * unrecoverable on the next mount.
+	 */
+	xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
 	dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
 	xfs_trans_ail_delete(lip, 0);
-	xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
 	xfs_dqfunlock(dqp);
 	return error;
 }
+10-7
fs/xfs/xfs_healthmon.c
···
 	hm->mount_cookie = DETACHED_MOUNT_COOKIE;
 	spin_unlock(&xfs_healthmon_lock);
 
+	/*
+	 * Wake up any readers that might remain. This can happen if unmount
+	 * races with the healthmon fd owner entering ->read_iter, having
+	 * already emptied the event queue.
+	 *
+	 * In the ->release case there shouldn't be any readers because the
+	 * only users of the waiter are read and poll.
+	 */
+	wake_up_all(&hm->wait);
+
 	trace_xfs_healthmon_detach(hm);
 	xfs_healthmon_put(hm);
 }
···
 	 * process can create another health monitor file.
 	 */
 	xfs_healthmon_detach(hm);
-
-	/*
-	 * Wake up any readers that might be left. There shouldn't be any
-	 * because the only users of the waiter are read and poll.
-	 */
-	wake_up_all(&hm->wait);
-
 	xfs_healthmon_put(hm);
 	return 0;
 }
···
 
 #include <uapi/linux/auxvec.h>
 
-#define AT_VECTOR_SIZE_BASE 22 /* NEW_AUX_ENT entries in auxiliary table */
+#define AT_VECTOR_SIZE_BASE 24 /* NEW_AUX_ENT entries in auxiliary table */
   /* number of "#define AT_.*" above, minus {AT_NULL, AT_IGNORE, AT_NOTELF} */
 #endif /* _LINUX_AUXVEC_H */
+3-1
include/linux/build_bug.h
···
 /**
  * BUILD_BUG_ON_MSG - break compile if a condition is true & emit supplied
  *		      error message.
- * @condition: the condition which the compiler should know is false.
+ * @cond: the condition which the compiler should know is false.
+ * @msg: build-time error message
  *
  * See BUILD_BUG_ON for description.
  */
···
 
 /**
  * static_assert - check integer constant expression at build time
+ * @expr: expression to be checked
  *
  * static_assert() is a wrapper for the C11 _Static_assert, with a
  * little macro magic to make the message optional (defaulting to the
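As the kernel-doc above says, static_assert() checks an integer constant expression at build time. A small standalone C11 illustration (the struct below is a made-up example, not a kernel type); if the condition fails, compilation stops with the supplied message:

```c
#include <assert.h>	/* exposes static_assert in C11 */

/* Hypothetical fixed-size cache entry, for illustration only. */
struct reply_cache_entry {
	unsigned int status;
	char ibuf[112];
};

/*
 * Compile-time check: a false condition here aborts the build with
 * the quoted message, with zero runtime cost.
 */
static_assert(sizeof(struct reply_cache_entry) <= 128,
	      "reply_cache_entry must stay within 128 bytes");
```

The same pattern is what BUILD_BUG_ON_MSG provides for expressions the plain C11 form cannot express.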
+1
include/linux/console_struct.h
···
 	struct uni_pagedict **uni_pagedict_loc; /* [!] Location of uni_pagedict variable for this console */
 	u32 **vc_uni_lines;			/* unicode screen content */
 	u16 *vc_saved_screen;
+	u32 **vc_saved_uni_lines;
 	unsigned int vc_saved_cols;
 	unsigned int vc_saved_rows;
 	/* additional information is in vt_kern.h */
+54
include/linux/device.h
···
  *		on. This shrinks the "Board Support Packages" (BSPs) and
  *		minimizes board-specific #ifdefs in drivers.
  * @driver_data: Private pointer for driver specific info.
+ * @driver_override: Driver name to force a match. Do not touch directly; use
+ *		device_set_driver_override() instead.
  * @links:	Links to suppliers and consumers of this device.
  * @power:	For device power management.
  *		See Documentation/driver-api/pm/devices.rst for details.
···
 					   core doesn't touch it */
 	void		*driver_data;	/* Driver data, set and get with
 					   dev_set_drvdata/dev_get_drvdata */
+	struct {
+		const char *name;
+		spinlock_t lock;
+	} driver_override;
 	struct mutex		mutex;	/* mutex to synchronize calls to
 					 * its driver.
 					 */
···
 };
 
 #define kobj_to_dev(__kobj) container_of_const(__kobj, struct device, kobj)
+
+int __device_set_driver_override(struct device *dev, const char *s, size_t len);
+
+/**
+ * device_set_driver_override() - Helper to set or clear driver override.
+ * @dev: Device to change
+ * @s: NUL-terminated string, new driver name to force a match, pass empty
+ *     string to clear it ("" or "\n", where the latter is only for sysfs
+ *     interface).
+ *
+ * Helper to set or clear driver override of a device.
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+static inline int device_set_driver_override(struct device *dev, const char *s)
+{
+	return __device_set_driver_override(dev, s, s ? strlen(s) : 0);
+}
+
+/**
+ * device_has_driver_override() - Check if a driver override has been set.
+ * @dev: device to check
+ *
+ * Returns true if a driver override has been set for this device.
+ */
+static inline bool device_has_driver_override(struct device *dev)
+{
+	guard(spinlock)(&dev->driver_override.lock);
+	return !!dev->driver_override.name;
+}
+
+/**
+ * device_match_driver_override() - Match a driver against the device's driver_override.
+ * @dev: device to check
+ * @drv: driver to match against
+ *
+ * Returns > 0 if a driver override is set and matches the given driver, 0 if a
+ * driver override is set but does not match, or < 0 if a driver override is not
+ * set at all.
+ */
+static inline int device_match_driver_override(struct device *dev,
+					       const struct device_driver *drv)
+{
+	guard(spinlock)(&dev->driver_override.lock);
+	if (dev->driver_override.name)
+		return !strcmp(dev->driver_override.name, drv->name);
+	return -1;
+}
 
 /**
  * device_iommu_mapped - Returns true when the device DMA is translated
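The tri-state return convention of device_match_driver_override() (> 0 set and matching, 0 set but different, < 0 not set) can be sketched in plain userspace C. `match_override` below is a hypothetical stand-in without the device locking:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Userspace sketch of the tri-state match convention:
 *   > 0  an override is set and names this driver
 *     0  an override is set but names a different driver
 *   < 0  no override is set at all
 * A NULL override models a cleared dev->driver_override.name.
 */
static int match_override(const char *override, const char *drv_name)
{
	if (override)
		return strcmp(override, drv_name) == 0;
	return -1;
}
```

Bus match code can thus treat "< 0" as "fall back to the normal ID tables" while "0" hard-blocks binding to anything but the named driver.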
+4
include/linux/device/bus.h
···
  *		this bus.
  * @pm:		Power management operations of this bus, callback the specific
  *		device driver's pm-ops.
+ * @driver_override: Set to true if this bus supports the driver_override
+ *		mechanism, which allows userspace to force a specific
+ *		driver to bind to a device via a sysfs attribute.
  * @need_parent_lock:	When probing or removing a device on this bus, the
  *			device core should lock the device's parent.
  *
···
 
 	const struct dev_pm_ops *pm;
 
+	bool driver_override;
 	bool need_parent_lock;
 };
···
  *                 tables.
  * @ias:           Input address (iova) size, in bits.
  * @oas:           Output address (paddr) size, in bits.
- * @coherent_walk  A flag to indicate whether or not page table walks made
+ * @coherent_walk: A flag to indicate whether or not page table walks made
  *                 by the IOMMU are coherent with the CPU caches.
  * @tlb:           TLB management callbacks for this set of tables.
  * @iommu_dev:     The device representing the DMA configuration for the
···
 	void (*free)(void *cookie, void *pages, size_t size);
 
 	/* Low-level data specific to the table format */
+	/* private: */
 	union {
 		struct {
 			u64	ttbr;
···
  * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
  * @pgtable_walk: (optional) Perform a page table walk for a given iova.
+ * @read_and_clear_dirty: Record dirty info per IOVA. If an IOVA is dirty,
+ *                        clear its dirty state from the PTE unless the
+ *                        IOMMU_DIRTY_NO_CLEAR flag is passed in.
  *
  * These functions map directly onto the iommu_ops member functions with
  * the same names.
···
  *        the configuration actually provided by the allocator (e.g. the
  *        pgsize_bitmap may be restricted).
  * @cookie: An opaque token provided by the IOMMU driver and passed back to
- *          the callback routines in cfg->tlb.
+ *          the callback routines.
+ *
+ * Returns: Pointer to the &struct io_pgtable_ops for this set of page tables.
  */
 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
 					    struct io_pgtable_cfg *cfg,
···
 
 #endif /* CONFIG_PREEMPT_RT */
 
-#if defined(WARN_CONTEXT_ANALYSIS)
+#if defined(WARN_CONTEXT_ANALYSIS) && !defined(__CHECKER__)
 /*
  * Because the compiler only knows about the base per-CPU variable, use this
  * helper function to make the compiler think we lock/unlock the @base variable,
···
 };
 
 /**
- * struct mmu_interval_notifier_ops
+ * struct mmu_interval_notifier_ops - callback for range notification
  * @invalidate: Upon return the caller must stop using any SPTEs within this
  *              range. This function can sleep. Return false only if sleeping
  *              was required but mmu_notifier_range_blockable(range) is false.
···
 
 /**
  * mmu_interval_set_seq - Save the invalidation sequence
- * @interval_sub - The subscription passed to invalidate
- * @cur_seq - The cur_seq passed to the invalidate() callback
+ * @interval_sub: The subscription passed to invalidate
+ * @cur_seq: The cur_seq passed to the invalidate() callback
  *
  * This must be called unconditionally from the invalidate callback of a
  * struct mmu_interval_notifier_ops under the same lock that is used to call
···
 
 /**
  * mmu_interval_read_retry - End a read side critical section against a VA range
- * interval_sub: The subscription
- * seq: The return of the paired mmu_interval_read_begin()
+ * @interval_sub: The subscription
+ * @seq: The return of the paired mmu_interval_read_begin()
  *
  * This MUST be called under a user provided lock that is also held
  * unconditionally by op->invalidate() when it calls mmu_interval_set_seq().
···
  * Each call should be paired with a single mmu_interval_read_begin() and
  * should be used to conclude the read side.
  *
- * Returns true if an invalidation collided with this critical section, and
+ * Returns: true if an invalidation collided with this critical section, and
  * the caller should retry.
  */
 static inline bool
···
 
 /**
  * mmu_interval_check_retry - Test if a collision has occurred
- * interval_sub: The subscription
- * seq: The return of the matching mmu_interval_read_begin()
+ * @interval_sub: The subscription
+ * @seq: The return of the matching mmu_interval_read_begin()
  *
  * This can be used in the critical section between mmu_interval_read_begin()
- * and mmu_interval_read_retry(). A return of true indicates an invalidation
- * has collided with this critical region and a future
- * mmu_interval_read_retry() will return true.
- *
- * False is not reliable and only suggests a collision may not have
- * occurred. It can be called many times and does not have to hold the user
- * provided lock.
+ * and mmu_interval_read_retry().
  *
  * This call can be used as part of loops and other expensive operations to
  * expedite a retry.
+ * It can be called many times and does not have to hold the user
+ * provided lock.
+ *
+ * Returns: true indicates an invalidation has collided with this critical
+ * region and a future mmu_interval_read_retry() will return true.
+ * False is not reliable and only suggests a collision may not have
+ * occurred.
  */
 static inline bool
 mmu_interval_check_retry(struct mmu_interval_notifier *interval_sub,
···
 	struct resource	*resource;
 
 	const struct platform_device_id	*id_entry;
-	/*
-	 * Driver name to force a match. Do not set directly, because core
-	 * frees it. Use driver_set_override() to set or clear it.
-	 */
-	const char *driver_override;
 
 	/* MFD cell pointer */
 	struct mfd_cell *mfd_cell;
+5-1
include/linux/rseq_types.h
···
  * @active:	MM CID is active for the task
  * @cid:	The CID associated to the task either permanently or
  *		borrowed from the CPU
+ * @node:	Queued in the per MM MMCID list
  */
 struct sched_mm_cid {
 	unsigned int		active;
 	unsigned int		cid;
+	struct hlist_node	node;
 };
 
 /**
···
  * @work:	Regular work to handle the affinity mode change case
  * @lock:	Spinlock to protect against affinity setting which can't take @mutex
  * @mutex:	Mutex to serialize forks and exits related to this mm
+ * @user_list:	List of the MM CID users of a MM
  * @nr_cpus_allowed: The number of CPUs in the per MM allowed CPUs map. The map
  *		is growth only.
  * @users:	The number of tasks sharing this MM. Separate from mm::mm_users
···
 
 	raw_spinlock_t		lock;
 	struct mutex		mutex;
+	struct hlist_head	user_list;
 
 	/* Low frequency modified */
 	unsigned int		nr_cpus_allowed;
 	unsigned int		users;
 	unsigned int		pcpu_thrs;
 	unsigned int		update_deferred;
-}____cacheline_aligned_in_smp;
+} ____cacheline_aligned;
 #else /* CONFIG_SCHED_MM_CID */
 struct mm_mm_cid { };
 struct sched_mm_cid { };
···
 void serial8250_do_set_divisor(struct uart_port *port, unsigned int baud,
 			       unsigned int quot);
 int fsl8250_handle_irq(struct uart_port *port);
+void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir);
 int serial8250_handle_irq(struct uart_port *port, unsigned int iir);
 u16 serial8250_rx_chars(struct uart_8250_port *up, u16 lsr);
 void serial8250_read_char(struct uart_8250_port *up, u16 lsr);
+2-2
include/linux/uaccess.h
···
 
 /**
  * scoped_user_rw_access_size - Start a scoped user read/write access with given size
- * @uptr	Pointer to the user space address to read from and write to
+ * @uptr:	Pointer to the user space address to read from and write to
  * @size:	Size of the access starting from @uptr
  * @elbl:	Error label to goto when the access region is rejected
  *
···
 
 /**
  * scoped_user_rw_access - Start a scoped user read/write access
- * @uptr	Pointer to the user space address to read from and write to
+ * @uptr:	Pointer to the user space address to read from and write to
  * @elbl:	Error label to goto when the access region is rejected
  *
  * The size of the access starting from @uptr is determined via sizeof(*@uptr)).
+6-2
include/linux/usb.h
···18621862 * SYNCHRONOUS CALL SUPPORT *18631863 *-------------------------------------------------------------------*/1864186418651865+/* Maximum value allowed for timeout in synchronous routines below */18661866+#define USB_MAX_SYNCHRONOUS_TIMEOUT 60000 /* ms */18671867+18651868extern int usb_control_msg(struct usb_device *dev, unsigned int pipe,18661869 __u8 request, __u8 requesttype, __u16 value, __u16 index,18671870 void *data, __u16 size, int timeout);18681871extern int usb_interrupt_msg(struct usb_device *usb_dev, unsigned int pipe,18691872 void *data, int len, int *actual_length, int timeout);18701873extern int usb_bulk_msg(struct usb_device *usb_dev, unsigned int pipe,18711871- void *data, int len, int *actual_length,18721872- int timeout);18741874+ void *data, int len, int *actual_length, int timeout);18751875+extern int usb_bulk_msg_killable(struct usb_device *usb_dev, unsigned int pipe,18761876+ void *data, int len, int *actual_length, int timeout);1873187718741878/* wrappers around usb_control_msg() for the most common standard requests */18751879int usb_control_msg_send(struct usb_device *dev, __u8 endpoint, __u8 request,
···132132#define FLAG_MULTI_PACKET 0x2000133133#define FLAG_RX_ASSEMBLE 0x4000 /* rx packets may span >1 frames */134134#define FLAG_NOARP 0x8000 /* device can't do ARP */135135+#define FLAG_NOMAXMTU 0x10000 /* allow max_mtu above hard_mtu */135136136137 /* init device ... can sleep, or cause probe() failure */137138 int (*bind)(struct usbnet *, struct usb_interface *);
+14
include/net/ip6_tunnel.h
···156156{157157 int pkt_len, err;158158159159+ if (unlikely(dev_recursion_level() > IP_TUNNEL_RECURSION_LIMIT)) {160160+ if (dev) {161161+ net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",162162+ dev->name);163163+ DEV_STATS_INC(dev, tx_errors);164164+ }165165+ kfree_skb(skb);166166+ return;167167+ }168168+169169+ dev_xmit_recursion_inc();170170+159171 memset(skb->cb, 0, sizeof(struct inet6_skb_parm));160172 IP6CB(skb)->flags = ip6cb_flags;161173 pkt_len = skb->len - skb_inner_network_offset(skb);···178166 pkt_len = -1;179167 iptunnel_xmit_stats(dev, pkt_len);180168 }169169+170170+ dev_xmit_recursion_dec();181171}182172#endif183173#endif
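The hunk above bounds nested tunnel transmits with a per-CPU recursion counter so a misconfigured device loop drops the packet instead of overflowing the stack. A simplified single-threaded analog, using a thread-local counter and a `peer` pointer to model one tunnel feeding into another (the structure and names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

#define TUNNEL_RECURSION_LIMIT 4	/* mirrors IP_TUNNEL_RECURSION_LIMIT */

static _Thread_local int recursion_level;

struct dev {
	struct dev *peer;	/* non-NULL models a tunnel routed into another device */
	int tx_errors;
	int tx_packets;
};

/* Returns false when the packet was dropped because a loop was detected. */
static bool tunnel_xmit(struct dev *d)
{
	bool ok;

	if (recursion_level > TUNNEL_RECURSION_LIMIT) {
		d->tx_errors++;		/* drop instead of recursing forever */
		return false;
	}

	recursion_level++;
	if (d->peer) {
		/* nested transmit, as ip6tunnel_xmit() re-enters the stack */
		ok = tunnel_xmit(d->peer);
	} else {
		d->tx_packets++;
		ok = true;
	}
	recursion_level--;
	return ok;
}
```

The counter is incremented only after the check, matching the hunk's order: the packet that trips the limit is freed before the depth would grow further.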
+29-6
include/net/ip_tunnels.h
···2727#include <net/ip6_route.h>2828#endif29293030+/* Recursion limit for tunnel xmit to detect routing loops.3131+ * Unlike XMIT_RECURSION_LIMIT (8) used in the no-qdisc path, tunnel3232+ * recursion involves route lookups and full IP output, consuming much3333+ * more stack per level, so a lower limit is needed.3434+ */3535+#define IP_TUNNEL_RECURSION_LIMIT 43636+3037/* Keep error state on tunnel for 30 sec */3138#define IPTUNNEL_ERR_TIMEO (30*HZ)3239···665658static inline void iptunnel_xmit_stats(struct net_device *dev, int pkt_len)666659{667660 if (pkt_len > 0) {668668- struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);661661+ if (dev->pcpu_stat_type == NETDEV_PCPU_STAT_DSTATS) {662662+ struct pcpu_dstats *dstats = get_cpu_ptr(dev->dstats);669663670670- u64_stats_update_begin(&tstats->syncp);671671- u64_stats_add(&tstats->tx_bytes, pkt_len);672672- u64_stats_inc(&tstats->tx_packets);673673- u64_stats_update_end(&tstats->syncp);674674- put_cpu_ptr(tstats);664664+ u64_stats_update_begin(&dstats->syncp);665665+ u64_stats_add(&dstats->tx_bytes, pkt_len);666666+ u64_stats_inc(&dstats->tx_packets);667667+ u64_stats_update_end(&dstats->syncp);668668+ put_cpu_ptr(dstats);669669+ return;670670+ }671671+ if (dev->pcpu_stat_type == NETDEV_PCPU_STAT_TSTATS) {672672+ struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);673673+674674+ u64_stats_update_begin(&tstats->syncp);675675+ u64_stats_add(&tstats->tx_bytes, pkt_len);676676+ u64_stats_inc(&tstats->tx_packets);677677+ u64_stats_update_end(&tstats->syncp);678678+ put_cpu_ptr(tstats);679679+ return;680680+ }681681+ pr_err_once("iptunnel_xmit_stats pcpu_stat_type=%d\n",682682+ dev->pcpu_stat_type);683683+ WARN_ON_ONCE(1);675684 return;676685 }677686
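The reworked `iptunnel_xmit_stats()` dispatches on `pcpu_stat_type` so DSTATS devices no longer have their counters written through the TSTATS pointer. A rough model of that dispatch, collapsed to single-threaded counters (the kernel's variants are per-CPU and seqcount-protected, which this sketch deliberately omits; all names here are stand-ins):

```c
#include <assert.h>
#include <stdint.h>

enum stat_type { STAT_TSTATS, STAT_DSTATS };

struct tstats { uint64_t tx_packets, tx_bytes; };
struct dstats { uint64_t tx_packets, tx_bytes; };

struct dev {
	enum stat_type type;
	struct tstats t;
	struct dstats d;
};

static void xmit_stats(struct dev *dev, int pkt_len)
{
	if (pkt_len <= 0)
		return;		/* errors are accounted elsewhere */

	if (dev->type == STAT_DSTATS) {
		dev->d.tx_bytes += (uint64_t)pkt_len;
		dev->d.tx_packets++;
		return;
	}
	if (dev->type == STAT_TSTATS) {
		dev->t.tx_bytes += (uint64_t)pkt_len;
		dev->t.tx_packets++;
		return;
	}
	/* unknown type: the hunk warns once here; nothing to count */
}
```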
+3-1
include/net/mac80211.h
···74077407 * @band: the band to transmit on74087408 * @sta: optional pointer to get the station to send the frame to74097409 *74107410- * Return: %true if the skb was prepared, %false otherwise74107410+ * Return: %true if the skb was prepared, %false otherwise.74117411+ * On failure, the skb is freed by this function; callers must not74127412+ * free it again.74117413 *74127414 * Note: must be called under RCU lock74137415 */
···8585 do {8686 if (filter == &dummy_filter)8787 return -EACCES;8888- ret = bpf_prog_run(filter->prog, &bpf_ctx);8888+ ret = bpf_prog_run_pin_on_cpu(filter->prog, &bpf_ctx);8989 if (!ret)9090 return -EACCES;9191 filter = filter->next;
+7-3
io_uring/eventfd.c
···7676{7777 bool skip = false;7878 struct io_ev_fd *ev_fd;7979-8080- if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)8181- return;7979+ struct io_rings *rings;82808381 guard(rcu)();8282+8383+ rings = rcu_dereference(ctx->rings_rcu);8484+ if (!rings)8585+ return;8686+ if (READ_ONCE(rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)8787+ return;8488 ev_fd = rcu_dereference(ctx->io_ev_fd);8589 /*8690 * Check again if ev_fd exists in case an io_eventfd_unregister call
+3-1
io_uring/io_uring.c
···17451745 * well as 2 contiguous entries.17461746 */17471747 if (!(ctx->flags & IORING_SETUP_SQE_MIXED) || *left < 2 ||17481748- !(ctx->cached_sq_head & (ctx->sq_entries - 1)))17481748+ (unsigned)(sqe - ctx->sq_sqes) >= ctx->sq_entries - 1)17491749 return io_init_fail_req(req, -EINVAL);17501750 /*17511751 * A 128b operation on a mixed SQ uses two entries, so we have···20662066 io_free_region(ctx->user, &ctx->sq_region);20672067 io_free_region(ctx->user, &ctx->ring_region);20682068 ctx->rings = NULL;20692069+ RCU_INIT_POINTER(ctx->rings_rcu, NULL);20692070 ctx->sq_sqes = NULL;20702071}20712072···27042703 if (ret)27052704 return ret;27062705 ctx->rings = rings = io_region_get_ptr(&ctx->ring_region);27062706+ rcu_assign_pointer(ctx->rings_rcu, rings);27072707 if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))27082708 ctx->sq_array = (u32 *)((char *)rings + rl->sq_array_offset);27092709
+22-5
io_uring/kbuf.c
···34343535static bool io_kbuf_inc_commit(struct io_buffer_list *bl, int len)3636{3737+ /* No data consumed, return false early to avoid consuming the buffer */3838+ if (!len)3939+ return false;4040+3741 while (len) {3842 struct io_uring_buf *buf;3943 u32 buf_len, this_len;···115111116112 buf = req->kbuf;117113 bl = io_buffer_get_list(ctx, buf->bgid);118118- list_add(&buf->list, &bl->buf_list);119119- bl->nbufs++;114114+ /*115115+ * If the buffer list was upgraded to a ring-based one, or removed,116116+ * while the request was in-flight in io-wq, drop it.117117+ */118118+ if (bl && !(bl->flags & IOBL_BUF_RING)) {119119+ list_add(&buf->list, &bl->buf_list);120120+ bl->nbufs++;121121+ } else {122122+ kfree(buf);123123+ }120124 req->flags &= ~REQ_F_BUFFER_SELECTED;125125+ req->kbuf = NULL;121126122127 io_ring_submit_unlock(ctx, issue_flags);123128 return true;···216203 sel.addr = u64_to_user_ptr(READ_ONCE(buf->addr));217204218205 if (io_should_commit(req, issue_flags)) {219219- io_kbuf_commit(req, sel.buf_list, *len, 1);206206+ if (!io_kbuf_commit(req, sel.buf_list, *len, 1))207207+ req->flags |= REQ_F_BUF_MORE;220208 sel.buf_list = NULL;221209 }222210 return sel;···350336 */351337 if (ret > 0) {352338 req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE;353353- io_kbuf_commit(req, sel->buf_list, arg->out_len, ret);339339+ if (!io_kbuf_commit(req, sel->buf_list, arg->out_len, ret))340340+ req->flags |= REQ_F_BUF_MORE;354341 }355342 } else {356343 ret = io_provided_buffers_select(req, &arg->out_len, sel->buf_list, arg->iovs);···397382398383 if (bl)399384 ret = io_kbuf_commit(req, bl, len, nr);385385+ if (ret && (req->flags & REQ_F_BUF_MORE))386386+ ret = false;400387401401- req->flags &= ~REQ_F_BUFFER_RING;388388+ req->flags &= ~(REQ_F_BUFFER_RING | REQ_F_BUF_MORE);402389 return ret;403390}404391
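The `io_kbuf_inc_commit()` change above returns false for a zero-length commit so an empty transfer does not release the head buffer. A simplified model of the incremental-commit walk over a fixed-size ring (array indices and field names are illustrative, not io_uring's):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified incremental-commit model: lens[head..] hold remaining lengths. */
struct ring {
	unsigned int lens[4];
	unsigned int head;
	unsigned int nbufs;
};

/*
 * Consume 'len' bytes from the ring. Returns true when the head buffer was
 * fully consumed (and released), false when it must be kept - including
 * the len == 0 case the patch adds an early return for.
 */
static bool ring_commit(struct ring *r, unsigned int len)
{
	if (!len)		/* nothing consumed: keep the buffer */
		return false;

	while (len) {
		unsigned int this_len = r->lens[r->head];

		if (len < this_len) {
			/* partial consumption: shrink the buffer in place */
			r->lens[r->head] -= len;
			return false;
		}
		len -= this_len;
		r->head++;	/* buffer exhausted, release it */
		r->nbufs--;
	}
	return true;
}
```

The false return is what `REQ_F_BUF_MORE` then records in the callers, so completion knows the buffer is still live.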
+7-2
io_uring/poll.c
···272272 atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);273273 v &= ~IO_POLL_RETRY_FLAG;274274 }275275+ v &= IO_POLL_REF_MASK;275276 }276277277278 /* the mask was stashed in __io_poll_execute */···305304 return IOU_POLL_REMOVE_POLL_USE_RES;306305 }307306 } else {308308- int ret = io_poll_issue(req, tw);307307+ int ret;309308309309+ /* multiple refs and HUP, ensure we loop once more */310310+ if ((req->cqe.res & (POLLHUP | POLLRDHUP)) && v != 1)311311+ v--;312312+313313+ ret = io_poll_issue(req, tw);310314 if (ret == IOU_COMPLETE)311315 return IOU_POLL_REMOVE_POLL_USE_RES;312316 else if (ret == IOU_REQUEUE)···327321 * Release all references, retry if someone tried to restart328322 * task_work while we were executing it.329323 */330330- v &= IO_POLL_REF_MASK;331324 } while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);332325333326 io_napi_add(req);
+13-2
io_uring/register.c
···202202 return -EPERM;203203 /*204204 * Similar to seccomp, disallow setting a filter if task_no_new_privs205205- * is true and we're not CAP_SYS_ADMIN.205205+ * is false and we're not CAP_SYS_ADMIN.206206 */207207 if (!task_no_new_privs(current) &&208208 !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))···238238239239 /*240240 * Similar to seccomp, disallow setting a filter if task_no_new_privs241241- * is true and we're not CAP_SYS_ADMIN.241241+ * is false and we're not CAP_SYS_ADMIN.242242 */243243 if (!task_no_new_privs(current) &&244244 !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))···633633 ctx->sq_entries = p->sq_entries;634634 ctx->cq_entries = p->cq_entries;635635636636+ /*637637+ * Just mark any flag we may have missed and that the application638638+ * should act on unconditionally. Worst case it'll be an extra639639+ * syscall.640640+ */641641+ atomic_or(IORING_SQ_TASKRUN | IORING_SQ_NEED_WAKEUP, &n.rings->sq_flags);636642 ctx->rings = n.rings;643643+ rcu_assign_pointer(ctx->rings_rcu, n.rings);644644+637645 ctx->sq_sqes = n.sq_sqes;638646 swap_old(ctx, o, n, ring_region);639647 swap_old(ctx, o, n, sq_region);···650642out:651643 spin_unlock(&ctx->completion_lock);652644 mutex_unlock(&ctx->mmap_lock);645645+ /* Wait for concurrent io_ctx_mark_taskrun() */646646+ if (to_free == &o)647647+ synchronize_rcu_expedited();653648 io_register_free_rings(ctx, to_free);654649655650 if (ctx->sq_data)
+20-2
io_uring/tw.c
···152152 WARN_ON_ONCE(ret);153153}154154155155+/*156156+ * Sets IORING_SQ_TASKRUN in the sq_flags shared with userspace, using the157157+ * RCU protected rings pointer to be safe against concurrent ring resizing.158158+ */159159+static void io_ctx_mark_taskrun(struct io_ring_ctx *ctx)160160+{161161+ lockdep_assert_in_rcu_read_lock();162162+163163+ if (ctx->flags & IORING_SETUP_TASKRUN_FLAG) {164164+ struct io_rings *rings = rcu_dereference(ctx->rings_rcu);165165+166166+ atomic_or(IORING_SQ_TASKRUN, &rings->sq_flags);167167+ }168168+}169169+155170void io_req_local_work_add(struct io_kiocb *req, unsigned flags)156171{157172 struct io_ring_ctx *ctx = req->ctx;···221206 */222207223208 if (!head) {224224- if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)225225- atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);209209+ io_ctx_mark_taskrun(ctx);226210 if (ctx->has_evfd)227211 io_eventfd_signal(ctx, false);228212 }···245231 if (!llist_add(&req->io_task_work.node, &tctx->task_list))246232 return;247233234234+ /*235235+ * Doesn't need to use ->rings_rcu, as resizing isn't supported for236236+ * !DEFER_TASKRUN.237237+ */248238 if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)249239 atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);250240
+20-4
kernel/bpf/btf.c
···17871787 * of the _bh() version.17881788 */17891789 spin_lock_irqsave(&btf_idr_lock, flags);17901790- idr_remove(&btf_idr, btf->id);17901790+ if (btf->id) {17911791+ idr_remove(&btf_idr, btf->id);17921792+ /*17931793+ * Clear the id here to make this function idempotent, since it will get17941794+ * called a couple of times for module BTFs: on module unload, and then17951795+ * the final btf_put(). btf_alloc_id() starts IDs with 1, so we can use17961796+ * 0 as sentinel value.17971797+ */17981798+ WRITE_ONCE(btf->id, 0);17991799+ }17911800 spin_unlock_irqrestore(&btf_idr_lock, flags);17921801}17931802···81248115{81258116 const struct btf *btf = filp->private_data;8126811781278127- seq_printf(m, "btf_id:\t%u\n", btf->id);81188118+ seq_printf(m, "btf_id:\t%u\n", READ_ONCE(btf->id));81288119}81298120#endif81308121···82068197 if (copy_from_user(&info, uinfo, info_copy))82078198 return -EFAULT;8208819982098209- info.id = btf->id;82008200+ info.id = READ_ONCE(btf->id);82108201 ubtf = u64_to_user_ptr(info.btf);82118202 btf_copy = min_t(u32, btf->data_size, info.btf_size);82128203 if (copy_to_user(ubtf, btf->data, btf_copy))···8269826082708261u32 btf_obj_id(const struct btf *btf)82718262{82728272- return btf->id;82638263+ return READ_ONCE(btf->id);82738264}8274826582758266bool btf_is_kernel(const struct btf *btf)···83918382 if (btf_mod->module != module)83928383 continue;8393838483858385+ /*83868386+ * For modules, we do the freeing of BTF IDR as soon as83878387+ * module goes away to disable BTF discovery, since the83888388+ * btf_try_get_module() on such BTFs will fail. This may83898389+ * be called again on btf_put(), but it's ok to do so.83908390+ */83918391+ btf_free_id(btf_mod->btf);83948392 list_del(&btf_mod->list);83958393 if (btf_mod->sysfs_attr)83968394 sysfs_remove_bin_file(btf_kobj, btf_mod->sysfs_attr);
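The `btf_free_id()` change relies on IDs starting at 1, using 0 as a sentinel so the function is idempotent across the module-unload call and the final `btf_put()`. A tiny userspace sketch of that pattern with a boolean table standing in for the IDR (all names here are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_IDS 8

static bool id_table[MAX_IDS];	/* slot 0 is never handed out, as with idr */

struct obj { unsigned int id; };

static void obj_alloc_id(struct obj *o)
{
	for (unsigned int i = 1; i < MAX_IDS; i++) {	/* IDs start at 1 */
		if (!id_table[i]) {
			id_table[i] = true;
			o->id = i;
			return;
		}
	}
}

/*
 * Idempotent release: the first call removes the id and clears it to the
 * 0 sentinel; any later call (e.g. module unload followed by the final
 * put) sees 0 and does nothing.
 */
static void obj_free_id(struct obj *o)
{
	if (o->id) {
		id_table[o->id] = false;
		o->id = 0;
	}
}
```

The paired `READ_ONCE()`/`WRITE_ONCE()` accesses in the hunk exist because readers of `btf->id` can now race with this clearing; the sketch above is single-threaded and skips that.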
+35-8
kernel/bpf/core.c
···14221422 *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);14231423 *to++ = BPF_STX_MEM(from->code, from->dst_reg, BPF_REG_AX, from->off);14241424 break;14251425+14261426+ case BPF_ST | BPF_PROBE_MEM32 | BPF_DW:14271427+ case BPF_ST | BPF_PROBE_MEM32 | BPF_W:14281428+ case BPF_ST | BPF_PROBE_MEM32 | BPF_H:14291429+ case BPF_ST | BPF_PROBE_MEM32 | BPF_B:14301430+ *to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^14311431+ from->imm);14321432+ *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);14331433+ /*14341434+ * Cannot use BPF_STX_MEM() macro here as it14351435+ * hardcodes BPF_MEM mode, losing PROBE_MEM3214361436+ * and breaking arena addressing in the JIT.14371437+ */14381438+ *to++ = (struct bpf_insn) {14391439+ .code = BPF_STX | BPF_PROBE_MEM32 |14401440+ BPF_SIZE(from->code),14411441+ .dst_reg = from->dst_reg,14421442+ .src_reg = BPF_REG_AX,14431443+ .off = from->off,14441444+ };14451445+ break;14251446 }14261447out:14271448 return to - to_buff;···17571736}1758173717591738#ifndef CONFIG_BPF_JIT_ALWAYS_ON17391739+/* Absolute value of s32 without undefined behavior for S32_MIN */17401740+static u32 abs_s32(s32 x)17411741+{17421742+ return x >= 0 ? 
(u32)x : -(u32)x;17431743+}17441744+17601745/**17611746 * ___bpf_prog_run - run eBPF program on a given context17621747 * @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers···19271900 DST = do_div(AX, (u32) SRC);19281901 break;19291902 case 1:19301930- AX = abs((s32)DST);19311931- AX = do_div(AX, abs((s32)SRC));19031903+ AX = abs_s32((s32)DST);19041904+ AX = do_div(AX, abs_s32((s32)SRC));19321905 if ((s32)DST < 0)19331906 DST = (u32)-AX;19341907 else···19551928 DST = do_div(AX, (u32) IMM);19561929 break;19571930 case 1:19581958- AX = abs((s32)DST);19591959- AX = do_div(AX, abs((s32)IMM));19311931+ AX = abs_s32((s32)DST);19321932+ AX = do_div(AX, abs_s32((s32)IMM));19601933 if ((s32)DST < 0)19611934 DST = (u32)-AX;19621935 else···19821955 DST = (u32) AX;19831956 break;19841957 case 1:19851985- AX = abs((s32)DST);19861986- do_div(AX, abs((s32)SRC));19581958+ AX = abs_s32((s32)DST);19591959+ do_div(AX, abs_s32((s32)SRC));19871960 if (((s32)DST < 0) == ((s32)SRC < 0))19881961 DST = (u32)AX;19891962 else···20091982 DST = (u32) AX;20101983 break;20111984 case 1:20122012- AX = abs((s32)DST);20132013- do_div(AX, abs((s32)IMM));19851985+ AX = abs_s32((s32)DST);19861986+ do_div(AX, abs_s32((s32)IMM));20141987 if (((s32)DST < 0) == ((s32)IMM < 0))20151988 DST = (u32)AX;20161989 else
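The `abs_s32()` helper introduced above sidesteps the fact that `abs(S32_MIN)` is undefined behavior in C: the negation is performed after conversion to `u32`, where it wraps modulo 2^32 and yields the correct magnitude 2147483648. Reproduced standalone with fixed-width typedefs so it can be exercised directly:

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t s32;
typedef uint32_t u32;

/*
 * Magnitude of an s32 without the abs(S32_MIN) undefined behavior:
 * negating in unsigned arithmetic is well-defined (mod 2^32).
 */
static u32 abs_s32(s32 x)
{
	return x >= 0 ? (u32)x : -(u32)x;
}
```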
+25-8
kernel/bpf/verifier.c
···1591015910 /* Apply bswap if alu64 or switch between big-endian and little-endian machines */1591115911 bool need_bswap = alu64 || (to_le == is_big_endian);15912159121591315913+ /*1591415914+ * If the register is mutated, manually reset its scalar ID to break1591515915+ * any existing ties and avoid incorrect bounds propagation.1591615916+ */1591715917+ if (need_bswap || insn->imm == 16 || insn->imm == 32)1591815918+ dst_reg->id = 0;1591915919+1591315920 if (need_bswap) {1591415921 if (insn->imm == 16)1591515922 dst_reg->var_off = tnum_bswap16(dst_reg->var_off);···1599915992 else1600015993 return 0;16001159941600216002- branch = push_stack(env, env->insn_idx + 1, env->insn_idx, false);1599515995+ branch = push_stack(env, env->insn_idx, env->insn_idx, false);1600315996 if (IS_ERR(branch))1600415997 return PTR_ERR(branch);1600515998···1741517408 continue;1741617409 if ((reg->id & ~BPF_ADD_CONST) != (known_reg->id & ~BPF_ADD_CONST))1741717410 continue;1741117411+ /*1741217412+ * Skip mixed 32/64-bit links: the delta relationship doesn't1741317413+ * hold across different ALU widths.1741417414+ */1741517415+ if (((reg->id ^ known_reg->id) & BPF_ADD_CONST) == BPF_ADD_CONST)1741617416+ continue;1741817417 if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) ||1741917418 reg->off == known_reg->off) {1742017419 s32 saved_subreg_def = reg->subreg_def;···1744817435 scalar32_min_max_add(reg, &fake_reg);1744917436 scalar_min_max_add(reg, &fake_reg);1745017437 reg->var_off = tnum_add(reg->var_off, fake_reg.var_off);1745117451- if (known_reg->id & BPF_ADD_CONST32)1743817438+ if ((reg->id | known_reg->id) & BPF_ADD_CONST32)1745217439 zext_32_to_64(reg);1745317440 reg_bounds_sync(reg);1745417441 }···1987619863 * Also verify that new value satisfies old value range knowledge.1987719864 */19878198651987919879- /* ADD_CONST mismatch: different linking semantics */1988019880- if ((rold->id & BPF_ADD_CONST) && !(rcur->id & BPF_ADD_CONST))1988119881- return 
false;1988219882-1988319883- if (rold->id && !(rold->id & BPF_ADD_CONST) && (rcur->id & BPF_ADD_CONST))1986619866+ /*1986719867+ * ADD_CONST flags must match exactly: BPF_ADD_CONST32 and1986819868+ * BPF_ADD_CONST64 have different linking semantics in1986919869+ * sync_linked_regs() (alu32 zero-extends, alu64 does not),1987019870+ * so pruning across different flag types is unsafe.1987119871+ */1987219872+ if (rold->id &&1987319873+ (rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST))1988419874 return false;19885198751988619876 /* Both have offset linkage: offsets must match */···2092020904 * state when it exits.2092120905 */2092220906 int err = check_resource_leak(env, exception_exit,2092320923- !env->cur_state->curframe,2090720907+ exception_exit || !env->cur_state->curframe,2090820908+ exception_exit ? "bpf_throw" :2092420909 "BPF_EXIT instruction in main prog");2092520910 if (err)2092620911 return err;
+6
kernel/cgroup/cgroup.c
···51095109 return;5110511051115111 task = list_entry(it->task_pos, struct task_struct, cg_list);51125112+ /*51135113+ * Hide tasks that are exiting but not yet removed. Keep zombie51145114+ * leaders with live threads visible.51155115+ */51165116+ if ((task->flags & PF_EXITING) && !atomic_read(&task->signal->live))51175117+ goto repeat;5112511851135119 if (it->flags & CSS_TASK_ITER_PROCS) {51145120 /* if PROCS, skip over tasks which aren't group leaders */
+31-28
kernel/cgroup/cpuset.c
···879879 /*880880 * Cgroup v2 doesn't support domain attributes, just set all of them881881 * to SD_ATTR_INIT. Also non-isolating partition root CPUs are a882882- * subset of HK_TYPE_DOMAIN housekeeping CPUs.882882+ * subset of HK_TYPE_DOMAIN_BOOT housekeeping CPUs.883883 */884884 for (i = 0; i < ndoms; i++) {885885 /*···888888 */889889 if (!csa || csa[i] == &top_cpuset)890890 cpumask_and(doms[i], top_cpuset.effective_cpus,891891- housekeeping_cpumask(HK_TYPE_DOMAIN));891891+ housekeeping_cpumask(HK_TYPE_DOMAIN_BOOT));892892 else893893 cpumask_copy(doms[i], csa[i]->effective_cpus);894894 if (dattr)···13291329}1330133013311331/*13321332- * update_hk_sched_domains - Update HK cpumasks & rebuild sched domains13321332+ * cpuset_update_sd_hk_unlock - Rebuild sched domains, update HK & unlock13331333 *13341334- * Update housekeeping cpumasks and rebuild sched domains if necessary.13351335- * This should be called at the end of cpuset or hotplug actions.13341334+ * Update housekeeping cpumasks and rebuild sched domains if necessary and13351335+ * then do a cpuset_full_unlock().13361336+ * This should be called at the end of cpuset operation.13361337 */13371337-static void update_hk_sched_domains(void)13381338+static void cpuset_update_sd_hk_unlock(void)13391339+ __releases(&cpuset_mutex)13401340+ __releases(&cpuset_top_mutex)13381341{13421342+ /* force_sd_rebuild will be cleared in rebuild_sched_domains_locked() */13431343+ if (force_sd_rebuild)13441344+ rebuild_sched_domains_locked();13451345+13391346 if (update_housekeeping) {13401340- /* Updating HK cpumasks implies rebuild sched domains */13411347 update_housekeeping = false;13421342- force_sd_rebuild = true;13431348 cpumask_copy(isolated_hk_cpus, isolated_cpus);1344134913451350 /*···13551350 mutex_unlock(&cpuset_mutex);13561351 cpus_read_unlock();13571352 WARN_ON_ONCE(housekeeping_update(isolated_hk_cpus));13581358- cpus_read_lock();13591359- mutex_lock(&cpuset_mutex);13531353+ 
mutex_unlock(&cpuset_top_mutex);13541354+ } else {13551355+ cpuset_full_unlock();13601356 }13611361- /* force_sd_rebuild will be cleared in rebuild_sched_domains_locked() */13621362- if (force_sd_rebuild)13631363- rebuild_sched_domains_locked();13641357}1365135813661359/*13671367- * Work function to invoke update_hk_sched_domains()13601360+ * Work function to invoke cpuset_update_sd_hk_unlock()13681361 */13691362static void hk_sd_workfn(struct work_struct *work)13701363{13711364 cpuset_full_lock();13721372- update_hk_sched_domains();13731373- cpuset_full_unlock();13651365+ cpuset_update_sd_hk_unlock();13741366}1375136713761368/**···3232323032333231 free_cpuset(trialcs);32343232out_unlock:32353235- update_hk_sched_domains();32363236- cpuset_full_unlock();32333233+ cpuset_update_sd_hk_unlock();32373234 if (of_cft(of)->private == FILE_MEMLIST)32383235 schedule_flush_migrate_mm();32393236 return retval ?: nbytes;···33393338 cpuset_full_lock();33403339 if (is_cpuset_online(cs))33413340 retval = update_prstate(cs, val);33423342- update_hk_sched_domains();33433343- cpuset_full_unlock();33413341+ cpuset_update_sd_hk_unlock();33443342 return retval ?: nbytes;33453343}33463344···35133513 /* Reset valid partition back to member */35143514 if (is_partition_valid(cs))35153515 update_prstate(cs, PRS_MEMBER);35163516- update_hk_sched_domains();35173517- cpuset_full_unlock();35163516+ cpuset_update_sd_hk_unlock();35183517}3519351835203519static void cpuset_css_free(struct cgroup_subsys_state *css)···39223923 rcu_read_unlock();39233924 }3924392539253925-39263926 /*39273927- * Queue a work to call housekeeping_update() & rebuild_sched_domains()39283928- * There will be a slight delay before the HK_TYPE_DOMAIN housekeeping39293929- * cpumask can correctly reflect what is in isolated_cpus.39273927+ * rebuild_sched_domains() will always be called directly if needed39283928+ * to make sure that newly added or removed CPU will be reflected in39293929+ * the sched domains. 
However, if isolated partition invalidation39303930+ * or recreation is being done (update_housekeeping set), a work item39313931+ * will be queued to call housekeeping_update() to update the39323932+ * corresponding housekeeping cpumasks after some slight delay.39303933 *39313934 * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work item that39323935 * is still pending. Before the pending bit is cleared, the work data···39373936 * previously queued work. Since hk_sd_workfn() doesn't use the work39383937 * item at all, this is not a problem.39393938 */39403940- if (update_housekeeping || force_sd_rebuild)39413941- queue_work(system_unbound_wq, &hk_sd_work);39393939+ if (force_sd_rebuild)39403940+ rebuild_sched_domains_cpuslocked();39413941+ if (update_housekeeping)39423942+ queue_work(system_dfl_wq, &hk_sd_work);3942394339433944 free_tmpmasks(ptmp);39443945}
···11441144 lockdep_assert_held(&kprobe_mutex);1145114511461146 ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 0, 0);11471147- if (WARN_ONCE(ret < 0, "Failed to arm kprobe-ftrace at %pS (error %d)\n", p->addr, ret))11471147+ if (ret < 0)11481148 return ret;1149114911501150 if (*cnt == 0) {11511151 ret = register_ftrace_function(ops);11521152- if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) {11521152+ if (ret < 0) {11531153 /*11541154 * At this point, sinec ops is not registered, we should be sefe from11551155 * registering empty filter.···11781178 int ret;1179117911801180 lockdep_assert_held(&kprobe_mutex);11811181+ if (unlikely(kprobe_ftrace_disabled)) {11821182+ /* Now ftrace is disabled forever, disarm is already done. */11831183+ return 0;11841184+ }1181118511821186 if (*cnt == 1) {11831187 ret = unregister_ftrace_function(ops);
+29-52
kernel/sched/core.c
···47294729 scx_cancel_fork(p);47304730}4731473147324732+static void sched_mm_cid_fork(struct task_struct *t);47334733+47324734void sched_post_fork(struct task_struct *p)47334735{47364736+ sched_mm_cid_fork(p);47344737 uclamp_post_fork(p);47354738 scx_post_fork(p);47364739}···1062010617 }1062110618}10622106191062310623-static bool mm_cid_fixup_task_to_cpu(struct task_struct *t, struct mm_struct *mm)1062010620+static void mm_cid_fixup_task_to_cpu(struct task_struct *t, struct mm_struct *mm)1062410621{1062510622 /* Remote access to mm::mm_cid::pcpu requires rq_lock */1062610623 guard(task_rq_lock)(t);1062710627- /* If the task is not active it is not in the users count */1062810628- if (!t->mm_cid.active)1062910629- return false;1063010624 if (cid_on_task(t->mm_cid.cid)) {1063110625 /* If running on the CPU, put the CID in transit mode, otherwise drop it */1063210626 if (task_rq(t)->curr == t)···1063110631 else1063210632 mm_unset_cid_on_task(t);1063310633 }1063410634- return true;1063510635-}1063610636-1063710637-static void mm_cid_do_fixup_tasks_to_cpus(struct mm_struct *mm)1063810638-{1063910639- struct task_struct *p, *t;1064010640- unsigned int users;1064110641-1064210642- /*1064310643- * This can obviously race with a concurrent affinity change, which1064410644- * increases the number of allowed CPUs for this mm, but that does1064510645- * not affect the mode and only changes the CID constraints. A1064610646- * possible switch back to per task mode happens either in the1064710647- * deferred handler function or in the next fork()/exit().1064810648- *1064910649- * The caller has already transferred. 
The newly incoming task is1065010650- * already accounted for, but not yet visible.1065110651- */1065210652- users = mm->mm_cid.users - 2;1065310653- if (!users)1065410654- return;1065510655-1065610656- guard(rcu)();1065710657- for_other_threads(current, t) {1065810658- if (mm_cid_fixup_task_to_cpu(t, mm))1065910659- users--;1066010660- }1066110661-1066210662- if (!users)1066310663- return;1066410664-1066510665- /* Happens only for VM_CLONE processes. */1066610666- for_each_process_thread(p, t) {1066710667- if (t == current || t->mm != mm)1066810668- continue;1066910669- if (mm_cid_fixup_task_to_cpu(t, mm)) {1067010670- if (--users == 0)1067110671- return;1067210672- }1067310673- }1067410634}10675106351067610636static void mm_cid_fixup_tasks_to_cpus(void)1067710637{1067810638 struct mm_struct *mm = current->mm;1063910639+ struct task_struct *t;10679106401068010680- mm_cid_do_fixup_tasks_to_cpus(mm);1064110641+ lockdep_assert_held(&mm->mm_cid.mutex);1064210642+1064310643+ hlist_for_each_entry(t, &mm->mm_cid.user_list, mm_cid.node) {1064410644+ /* Current has already transferred before invoking the fixup. 
*/1064510645+ if (t != current)1064610646+ mm_cid_fixup_task_to_cpu(t, mm);1064710647+ }1064810648+1068110649 mm_cid_complete_transit(mm, MM_CID_ONCPU);1068210650}10683106511068410652static bool sched_mm_cid_add_user(struct task_struct *t, struct mm_struct *mm)1068510653{1065410654+ lockdep_assert_held(&mm->mm_cid.lock);1065510655+1068610656 t->mm_cid.active = 1;1065710657+ hlist_add_head(&t->mm_cid.node, &mm->mm_cid.user_list);1068710658 mm->mm_cid.users++;1068810659 return mm_update_max_cids(mm);1068910660}10690106611069110691-void sched_mm_cid_fork(struct task_struct *t)1066210662+static void sched_mm_cid_fork(struct task_struct *t)1069210663{1069310664 struct mm_struct *mm = t->mm;1069410665 bool percpu;10695106661069610696- WARN_ON_ONCE(!mm || t->mm_cid.cid != MM_CID_UNSET);1066710667+ if (!mm)1066810668+ return;1066910669+1067010670+ WARN_ON_ONCE(t->mm_cid.cid != MM_CID_UNSET);10697106711069810672 guard(mutex)(&mm->mm_cid.mutex);1069910673 scoped_guard(raw_spinlock_irq, &mm->mm_cid.lock) {···10706107321070710733static bool sched_mm_cid_remove_user(struct task_struct *t)1070810734{1073510735+ lockdep_assert_held(&t->mm->mm_cid.lock);1073610736+1070910737 t->mm_cid.active = 0;1071010710- scoped_guard(preempt) {1071110711- /* Clear the transition bit */1071210712- t->mm_cid.cid = cid_from_transit_cid(t->mm_cid.cid);1071310713- mm_unset_cid_on_task(t);1071410714- }1073810738+ /* Clear the transition bit */1073910739+ t->mm_cid.cid = cid_from_transit_cid(t->mm_cid.cid);1074010740+ mm_unset_cid_on_task(t);1074110741+ hlist_del_init(&t->mm_cid.node);1071510742 t->mm->mm_cid.users--;1071610743 return mm_update_max_cids(t->mm);1071710744}···1085510880 mutex_init(&mm->mm_cid.mutex);1085610881 mm->mm_cid.irq_work = IRQ_WORK_INIT_HARD(mm_cid_irq_work);1085710882 INIT_WORK(&mm->mm_cid.work, mm_cid_work_fn);1088310883+ INIT_HLIST_HEAD(&mm->mm_cid.user_list);1085810884 cpumask_copy(mm_cpus_allowed(mm), &p->cpus_mask);1085910885 bitmap_zero(mm_cidmask(mm), 
num_possible_cpus());1086010886}1086110887#else /* CONFIG_SCHED_MM_CID */1086210888static inline void mm_update_cpus_allowed(struct mm_struct *mm, const struct cpumask *affmsk) { }1088910889+static inline void sched_mm_cid_fork(struct task_struct *t) { }1086310890#endif /* !CONFIG_SCHED_MM_CID */10864108911086510892static DEFINE_PER_CPU(struct sched_change_ctx, sched_change_ctx);
+11-11
kernel/sched/ext.c
···11031103	}
11041104
11051105	/* seq records the order tasks are queued, used by BPF DSQ iterator */
11061106-	dsq->seq++;
11061106+	WRITE_ONCE(dsq->seq, dsq->seq + 1);
11071107	p->scx.dsq_seq = dsq->seq;
11081108
11091109	dsq_mod_nr(dsq, 1);
···14701470	p->scx.flags |= SCX_TASK_RESET_RUNNABLE_AT;
14711471	}
14721472
14731473-static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int enq_flags)
14731473+static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int core_enq_flags)
14741474	{
14751475		struct scx_sched *sch = scx_root;
14761476		int sticky_cpu = p->scx.sticky_cpu;
14771477+		u64 enq_flags = core_enq_flags | rq->scx.extra_enq_flags;
14771478
14781479		if (enq_flags & ENQUEUE_WAKEUP)
14791480			rq->scx.flags |= SCX_RQ_IN_WAKEUP;
14801480-
14811481-		enq_flags |= rq->scx.extra_enq_flags;
14821481
14831482		if (sticky_cpu >= 0)
14841483			p->scx.sticky_cpu = -1;
···39073908	 * consider offloading iff the total queued duration is over the
39083909	 * threshold.
39093910	 */
39103910-	min_delta_us = scx_bypass_lb_intv_us / SCX_BYPASS_LB_MIN_DELTA_DIV;
39113911-	if (delta < DIV_ROUND_UP(min_delta_us, scx_slice_bypass_us))
39113911+	min_delta_us = READ_ONCE(scx_bypass_lb_intv_us) / SCX_BYPASS_LB_MIN_DELTA_DIV;
39123912+	if (delta < DIV_ROUND_UP(min_delta_us, READ_ONCE(scx_slice_bypass_us)))
39123913		return 0;
39133914
39143915	raw_spin_rq_lock_irq(rq);
···41364137	WARN_ON_ONCE(scx_bypass_depth <= 0);
41374138	if (scx_bypass_depth != 1)
41384139		goto unlock;
41394139-	WRITE_ONCE(scx_slice_dfl, scx_slice_bypass_us * NSEC_PER_USEC);
41404140+	WRITE_ONCE(scx_slice_dfl, READ_ONCE(scx_slice_bypass_us) * NSEC_PER_USEC);
41404141	bypass_timestamp = ktime_get_ns();
41414142	if (sch)
41424143		scx_add_event(sch, SCX_EV_BYPASS_ACTIVATE, 1);
···52585259	if (!READ_ONCE(helper)) {
52595260		mutex_lock(&helper_mutex);
52605261		if (!helper) {
52615261-			helper = kthread_run_worker(0, "scx_enable_helper");
52625262-			if (IS_ERR_OR_NULL(helper)) {
52635263-				helper = NULL;
52625262+			struct kthread_worker *w =
52635263+				kthread_run_worker(0, "scx_enable_helper");
52645264+			if (IS_ERR_OR_NULL(w)) {
52645265				mutex_unlock(&helper_mutex);
52655266				return -ENOMEM;
52665267			}
52675267-			sched_set_fifo(helper->task);
52685268+			sched_set_fifo(w->task);
52695269+			WRITE_ONCE(helper, w);
52685270		}
52695271		mutex_unlock(&helper_mutex);
52705272	}
+98-16
kernel/sched/ext_internal.h
···10351035	};
10361036
10371037	/*
10381038- * sched_ext_entity->ops_state
10381038+ * Task Ownership State Machine (sched_ext_entity->ops_state)
10391039	 *
10401040- * Used to track the task ownership between the SCX core and the BPF scheduler.
10411041- * State transitions look as follows:
10401040+ * The sched_ext core uses this state machine to track task ownership
10411041+ * between the SCX core and the BPF scheduler. This allows the BPF
10421042+ * scheduler to dispatch tasks without strict ordering requirements, while
10431043+ * the SCX core safely rejects invalid dispatches.
10421044	 *
10431043- * NONE -> QUEUEING -> QUEUED -> DISPATCHING
10441044- *   ^             |                |
10451045- *   |             v                v
10461046- *   \-------------------------------/
10451045+ * State Transitions
10471046	 *
10481048- * QUEUEING and DISPATCHING states can be waited upon. See wait_ops_state() call
10491049- * sites for explanations on the conditions being waited upon and why they are
10501050- * safe. Transitions out of them into NONE or QUEUED must store_release and the
10511051- * waiters should load_acquire.
10471047+ *   .------------> NONE (owned by SCX core)
10481048+ *   |               |  ^
10491049+ *   |       enqueue |  | direct dispatch
10501050+ *   |               v  |
10511051+ *   |           QUEUEING -------'
10521052+ *   |               |
10531053+ *   |       enqueue |
10541054+ *   |     completes |
10551055+ *   |               v
10561056+ *   |             QUEUED (owned by BPF scheduler)
10571057+ *   |               |
10581058+ *   |      dispatch |
10591059+ *   |               |
10601060+ *   |               v
10611061+ *   |          DISPATCHING
10621062+ *   |               |
10631063+ *   |      dispatch |
10641064+ *   |     completes |
10651065+ *   `---------------'
10521066	 *
10531053- * Tracking scx_ops_state enables sched_ext core to reliably determine whether
10541054- * any given task can be dispatched by the BPF scheduler at all times and thus
10551055- * relaxes the requirements on the BPF scheduler. This allows the BPF scheduler
10561056- * to try to dispatch any task anytime regardless of its state as the SCX core
10571057- * can safely reject invalid dispatches.
10671067+ * State Descriptions
10681068+ *
10691069+ * - %SCX_OPSS_NONE:
10701070+ *   Task is owned by the SCX core. It's either on a run queue, running,
10711071+ *   or being manipulated by the core scheduler. The BPF scheduler has no
10721072+ *   claim on this task.
10731073+ *
10741074+ * - %SCX_OPSS_QUEUEING:
10751075+ *   Transitional state while transferring a task from the SCX core to
10761076+ *   the BPF scheduler. The task's rq lock is held during this state.
10771077+ *   Since QUEUEING is both entered and exited under the rq lock, dequeue
10781078+ *   can never observe this state (it would be a BUG). When finishing a
10791079+ *   dispatch, if the task is still in %SCX_OPSS_QUEUEING the completion
10801080+ *   path busy-waits for it to leave this state (via wait_ops_state())
10811081+ *   before retrying.
10821082+ *
10831083+ * - %SCX_OPSS_QUEUED:
10841084+ *   Task is owned by the BPF scheduler. It's on a DSQ (dispatch queue)
10851085+ *   and the BPF scheduler is responsible for dispatching it. A QSEQ
10861086+ *   (queue sequence number) is embedded in this state to detect
10871087+ *   dispatch/dequeue races: if a task is dequeued and re-enqueued, the
10881088+ *   QSEQ changes and any in-flight dispatch operations targeting the old
10891089+ *   QSEQ are safely ignored.
10901090+ *
10911091+ * - %SCX_OPSS_DISPATCHING:
10921092+ *   Transitional state while transferring a task from the BPF scheduler
10931093+ *   back to the SCX core. This state indicates the BPF scheduler has
10941094+ *   selected the task for execution. When dequeue needs to take the task
10951095+ *   off a DSQ and it is still in %SCX_OPSS_DISPATCHING, the dequeue path
10961096+ *   busy-waits for it to leave this state (via wait_ops_state()) before
10971097+ *   proceeding. Exits to %SCX_OPSS_NONE when dispatch completes.
10981098+ *
10991099+ * Memory Ordering
11001100+ *
11011101+ * Transitions out of %SCX_OPSS_QUEUEING and %SCX_OPSS_DISPATCHING into
11021102+ * %SCX_OPSS_NONE or %SCX_OPSS_QUEUED must use atomic_long_set_release()
11031103+ * and waiters must use atomic_long_read_acquire(). This ensures proper
11041104+ * synchronization between concurrent operations.
11051105+ *
11061106+ * Cross-CPU Task Migration
11071107+ *
11081108+ * When moving a task in the %SCX_OPSS_DISPATCHING state, we can't simply
11091109+ * grab the target CPU's rq lock because a concurrent dequeue might be
11101110+ * waiting on %SCX_OPSS_DISPATCHING while holding the source rq lock
11111111+ * (deadlock).
11121112+ *
11131113+ * The sched_ext core uses a "lock dancing" protocol coordinated by
11141114+ * p->scx.holding_cpu. When moving a task to a different rq:
11151115+ *
11161116+ * 1. Verify task can be moved (CPU affinity, migration_disabled, etc.)
11171117+ * 2. Set p->scx.holding_cpu to the current CPU
11181118+ * 3. Set task state to %SCX_OPSS_NONE; dequeue waits while DISPATCHING
11191119+ *    is set, so clearing DISPATCHING first prevents the circular wait
11201120+ *    (safe to lock the rq we need)
11211121+ * 4. Unlock the current CPU's rq
11221122+ * 5. Lock src_rq (where the task currently lives)
11231123+ * 6. Verify p->scx.holding_cpu == current CPU, if not, dequeue won the
11241124+ *    race (dequeue clears holding_cpu to -1 when it takes the task), in
11251125+ *    this case migration is aborted
11261126+ * 7. If src_rq == dst_rq: clear holding_cpu and enqueue directly
11271127+ *    into dst_rq's local DSQ (no lock swap needed)
11281128+ * 8. Otherwise: call move_remote_task_to_local_dsq(), which releases
11291129+ *    src_rq, locks dst_rq, and performs the deactivate/activate
11301130+ *    migration cycle (dst_rq is held on return)
11311131+ * 9. Unlock dst_rq and re-lock the current CPU's rq to restore
11321132+ *    the lock state expected by the caller
11331133+ *
11341134+ * If any verification fails, abort the migration.
11351135+ *
11361136+ * This state tracking allows the BPF scheduler to try to dispatch any task
11371137+ * at any time regardless of its state. The SCX core can safely
11381138+ * reject/ignore invalid dispatches, simplifying the BPF scheduler
11391139+ * implementation.
10581140	 */
10591141	enum scx_ops_state {
10601142		SCX_OPSS_NONE,		/* owned by the SCX core */
+30-9
kernel/sched/idle.c
···161161		return cpuidle_enter(drv, dev, next_state);
162162	}
163163
164164+static void idle_call_stop_or_retain_tick(bool stop_tick)
165165+{
166166+	if (stop_tick || tick_nohz_tick_stopped())
167167+		tick_nohz_idle_stop_tick();
168168+	else
169169+		tick_nohz_idle_retain_tick();
170170+}
171171+
164172	/**
165173	 * cpuidle_idle_call - the main idle function
166174	 *
···178170	 * set, and it returns with polling set.  If it ever stops polling, it
179171	 * must clear the polling bit.
180172	 */
181181-static void cpuidle_idle_call(void)
173173+static void cpuidle_idle_call(bool stop_tick)
182174	{
183175		struct cpuidle_device *dev = cpuidle_get_device();
184176		struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
···194186	}
195187
196188	if (cpuidle_not_available(drv, dev)) {
197197-		tick_nohz_idle_stop_tick();
189189+		idle_call_stop_or_retain_tick(stop_tick);
198190
199191		default_idle_call();
200192		goto exit_idle;
···229221
230222		next_state = cpuidle_find_deepest_state(drv, dev, max_latency_ns);
231223		call_cpuidle(drv, dev, next_state);
232232-	} else {
233233-		bool stop_tick = true;
224224+	} else if (drv->state_count > 1) {
225225+		/*
226226+		 * stop_tick is expected to be true by default by cpuidle
227227+		 * governors, which allows them to select idle states with
228228+		 * target residency above the tick period length.
229229+		 */
230230+		stop_tick = true;
234231
235232		/*
236233		 * Ask the cpuidle framework to choose a convenient idle state.
237234		 */
238235		next_state = cpuidle_select(drv, dev, &stop_tick);
239236
240240-		if (stop_tick || tick_nohz_tick_stopped())
241241-			tick_nohz_idle_stop_tick();
242242-		else
243243-			tick_nohz_idle_retain_tick();
237237+		idle_call_stop_or_retain_tick(stop_tick);
244238
245239		entered_state = call_cpuidle(drv, dev, next_state);
246240		/*
247241		 * Give the governor an opportunity to reflect on the outcome
248242		 */
249243		cpuidle_reflect(dev, entered_state);
244244+	} else {
245245+		idle_call_stop_or_retain_tick(stop_tick);
246246+
247247+		/*
248248+		 * If there is only a single idle state (or none), there is
249249+		 * nothing meaningful for the governor to choose. Skip the
250250+		 * governor and always use state 0.
251251+		 */
252252+		call_cpuidle(drv, dev, 0);
250253	}
251254
252255	exit_idle:
···278259	static void do_idle(void)
279260	{
280261		int cpu = smp_processor_id();
262262+		bool got_tick = false;
281263
282264		/*
283265		 * Check if we need to update blocked load
···349329			tick_nohz_idle_restart_tick();
350330			cpu_idle_poll();
351331		} else {
352352-			cpuidle_idle_call();
332332+			cpuidle_idle_call(got_tick);
353333		}
334334+		got_tick = tick_nohz_idle_got_tick();
354335		arch_cpu_idle_exit();
355336	}
+1-1
kernel/time/time.c
···697697	 *
698698	 * Return: jiffies_64 value converted to 64-bit "clock_t" (CLOCKS_PER_SEC)
699699	 */
700700-u64 jiffies_64_to_clock_t(u64 x)
700700+notrace u64 jiffies_64_to_clock_t(u64 x)
701701	{
702702	#if (TICK_NSEC % (NSEC_PER_SEC / USER_HZ)) == 0
703703	# if HZ < USER_HZ
+2-2
kernel/trace/ftrace.c
···66066606	if (!orig_hash)
66076607		goto unlock;
66086608
66096609-	/* Enable the tmp_ops to have the same functions as the direct ops */
66096609+	/* Enable the tmp_ops to have the same functions as the hash object. */
66106610	ftrace_ops_init(&tmp_ops);
66116611-	tmp_ops.func_hash = ops->func_hash;
66116611+	tmp_ops.func_hash->filter_hash = hash;
66126612
66136613	err = register_ftrace_function_nolock(&tmp_ops);
66146614	if (err)
kernel/trace/trace.c
···555555	lockdep_assert_held(&event_mutex);
556556
557557	if (enabled) {
558558-		if (!list_empty(&tr->marker_list))
558558+		if (tr->trace_flags & TRACE_ITER(COPY_MARKER))
559559			return false;
560560
561561		list_add_rcu(&tr->marker_list, &marker_copies);
···563563		return true;
564564	}
565565
566566-	if (list_empty(&tr->marker_list))
566566+	if (!(tr->trace_flags & TRACE_ITER(COPY_MARKER)))
567567		return false;
568568
569569-	list_del_init(&tr->marker_list);
569569+	list_del_rcu(&tr->marker_list);
570570	tr->trace_flags &= ~TRACE_ITER(COPY_MARKER);
571571	return true;
572572	}
···67846784
67856785	do {
67866786		/*
67876787+		 * It is possible that something is trying to migrate this
67886788+		 * task. What happens then, is when preemption is enabled,
67896789+		 * the migration thread will preempt this task, try to
67906790+		 * migrate it, fail, then let it run again. That will
67916791+		 * cause this to loop again and never succeed.
67926792+		 * On failures, enable and disable preemption with
67936793+		 * migration enabled, to allow the migration thread to
67946794+		 * migrate this task.
67956795+		 */
67966796+		if (trys) {
67976797+			preempt_enable_notrace();
67986798+			preempt_disable_notrace();
67996799+			cpu = smp_processor_id();
68006800+			buffer = per_cpu_ptr(tinfo->tbuf, cpu)->buf;
68016801+		}
68026802+
68036803+		/*
67876804		 * If for some reason, copy_from_user() always causes a context
67886805		 * switch, this would then cause an infinite loop.
67896806		 * If this task is preempted by another user space task, it
···97619744
97629745		list_del(&tr->list);
97639746
97479747+	if (printk_trace == tr)
97489748+		update_printk_trace(&global_trace);
97499749+
97509750+	/* Must be done before disabling all the flags */
97519751+	if (update_marker_trace(tr, 0))
97529752+		synchronize_rcu();
97539753+
97649754	/* Disable all the flags that were enabled coming in */
97659755	for (i = 0; i < TRACE_FLAGS_MAX_SIZE; i++) {
97669756		if ((1ULL << i) & ZEROED_TRACE_FLAGS)
97679757			set_tracer_flag(tr, 1ULL << i, 0);
97689758	}
97699769-
97709770-	if (printk_trace == tr)
97719771-		update_printk_trace(&global_trace);
97729772-
97739773-	if (update_marker_trace(tr, 0))
97749774-		synchronize_rcu();
97759759
97769760	tracing_set_nop(tr);
97779761	clear_ftrace_function_probes(tr);
+28-27
kernel/workqueue.c
···190190		int			id;		/* I: pool ID */
191191		unsigned int		flags;		/* L: flags */
192192
193193-		unsigned long		watchdog_ts;	/* L: watchdog timestamp */
193193+		unsigned long		last_progress_ts; /* L: last forward progress timestamp */
194194		bool			cpu_stall;	/* WD: stalled cpu bound pool */
195195
196196		/*
···16971697	WARN_ON_ONCE(!(*wdb & WORK_STRUCT_INACTIVE));
16981698	trace_workqueue_activate_work(work);
16991699	if (list_empty(&pwq->pool->worklist))
17001700-		pwq->pool->watchdog_ts = jiffies;
17001700+		pwq->pool->last_progress_ts = jiffies;
17011701	move_linked_works(work, &pwq->pool->worklist, NULL);
17021702	__clear_bit(WORK_STRUCT_INACTIVE_BIT, wdb);
17031703	}
···23482348	 */
23492349	if (list_empty(&pwq->inactive_works) && pwq_tryinc_nr_active(pwq, false)) {
23502350		if (list_empty(&pool->worklist))
23512351-			pool->watchdog_ts = jiffies;
23512351+			pool->last_progress_ts = jiffies;
23522352
23532353		trace_workqueue_activate_work(work);
23542354		insert_work(pwq, work, &pool->worklist, work_flags);
···32043204	worker->current_pwq = pwq;
32053205	if (worker->task)
32063206		worker->current_at = worker->task->se.sum_exec_runtime;
32073207+	worker->current_start = jiffies;
32073208	work_data = *work_data_bits(work);
32083209	worker->current_color = get_work_color(work_data);
···33533352	while ((work = list_first_entry_or_null(&worker->scheduled,
33543353						struct work_struct, entry))) {
33553354		if (first) {
33563356-			worker->pool->watchdog_ts = jiffies;
33553355+			worker->pool->last_progress_ts = jiffies;
33573356			first = false;
33583357		}
33593358		process_one_work(worker, work);
···48514850	pool->cpu = -1;
48524851	pool->node = NUMA_NO_NODE;
48534852	pool->flags |= POOL_DISASSOCIATED;
48544854-	pool->watchdog_ts = jiffies;
48534853+	pool->last_progress_ts = jiffies;
48554854	INIT_LIST_HEAD(&pool->worklist);
48564855	INIT_LIST_HEAD(&pool->idle_list);
48574856	hash_init(pool->busy_hash);
···62756274	{
62766275	struct worker_pool *pool = worker->pool;
62776276
62786278-	if (pool->flags & WQ_BH)
62776277+	if (pool->flags & POOL_BH)
62796278		pr_cont("bh%s",
62806279			pool->attrs->nice == HIGHPRI_NICE_LEVEL ? "-hi" : "");
62816280	else
···63606359		pr_cont(" %s", comma ? "," : "");
63616360		pr_cont_worker_id(worker);
63626361		pr_cont(":%ps", worker->current_func);
63626362+		pr_cont(" for %us",
63636363+			jiffies_to_msecs(jiffies - worker->current_start) / 1000);
63636364		list_for_each_entry(work, &worker->scheduled, entry)
63646365			pr_cont_work(false, work, &pcws);
63656366		pr_cont_work_flush(comma, (work_func_t)-1L, &pcws);
···64656462
64666463	/* How long the first pending work is waiting for a worker. */
64676464	if (!list_empty(&pool->worklist))
64686468-		hung = jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000;
64656465+		hung = jiffies_to_msecs(jiffies - pool->last_progress_ts) / 1000;
64696466
64706467	/*
64716468	 * Defer printing to avoid deadlocks in console drivers that
···75837580
75847581	/*
75857582	 * Show workers that might prevent the processing of pending work items.
75867586-	 * The only candidates are CPU-bound workers in the running state.
75877587-	 * Pending work items should be handled by another idle worker
75887588-	 * in all other situations.
75837583+	 * A busy worker that is not running on the CPU (e.g. sleeping in
75847584+	 * wait_event_idle() with PF_WQ_WORKER cleared) can stall the pool just as
75857585+	 * effectively as a CPU-bound one, so dump every in-flight worker.
75897586	 */
75907590-static void show_cpu_pool_hog(struct worker_pool *pool)
75877587+static void show_cpu_pool_busy_workers(struct worker_pool *pool)
75917588	{
75927589	struct worker *worker;
75937590	unsigned long irq_flags;
···75967593	raw_spin_lock_irqsave(&pool->lock, irq_flags);
75977594
75987595	hash_for_each(pool->busy_hash, bkt, worker, hentry) {
75997599-		if (task_is_running(worker->task)) {
76007600-			/*
76017601-			 * Defer printing to avoid deadlocks in console
76027602-			 * drivers that queue work while holding locks
76037603-			 * also taken in their write paths.
76047604-			 */
76057605-			printk_deferred_enter();
75967596+		/*
75977597+		 * Defer printing to avoid deadlocks in console
75987598+		 * drivers that queue work while holding locks
75997599+		 * also taken in their write paths.
76007600+		 */
76017601+		printk_deferred_enter();
76067602
76077607-			pr_info("pool %d:\n", pool->id);
76087608-			sched_show_task(worker->task);
76037603+		pr_info("pool %d:\n", pool->id);
76047604+		sched_show_task(worker->task);
76097605
76107610-			printk_deferred_exit();
76117611-		}
76057605+		printk_deferred_exit();
76127607	}
76137608
76147609	raw_spin_unlock_irqrestore(&pool->lock, irq_flags);
76157610	}
76167611
76177617-static void show_cpu_pools_hogs(void)
76127612+static void show_cpu_pools_busy_workers(void)
76187613	{
76197614	struct worker_pool *pool;
76207615	int pi;
76217616
76227622-	pr_info("Showing backtraces of running workers in stalled CPU-bound worker pools:\n");
76177617+	pr_info("Showing backtraces of busy workers in stalled worker pools:\n");
76237618
76247619	rcu_read_lock();
76257620
76267621	for_each_pool(pool, pi) {
76277622		if (pool->cpu_stall)
76287628-			show_cpu_pool_hog(pool);
76237623+			show_cpu_pool_busy_workers(pool);
76297624
76307625	}
76317626
···76927691		touched = READ_ONCE(per_cpu(wq_watchdog_touched_cpu, pool->cpu));
76937692	else
76947693		touched = READ_ONCE(wq_watchdog_touched);
76957695-	pool_ts = READ_ONCE(pool->watchdog_ts);
76947694+	pool_ts = READ_ONCE(pool->last_progress_ts);
76967695
76977696	if (time_after(pool_ts, touched))
76987697		ts = pool_ts;
···77207719		show_all_workqueues();
77217720
77227721	if (cpu_pool_stall)
77237723-		show_cpu_pools_hogs();
77227722+		show_cpu_pools_busy_workers();
77247723
77257724	if (lockup_detected)
77267725		panic_on_wq_watchdog(max_stall_time);
+1
kernel/workqueue_internal.h
···3232		work_func_t		current_func;	/* K: function */
3333		struct pool_workqueue	*current_pwq;	/* K: pwq */
3434		u64			current_at;	/* K: runtime at start or last wakeup */
3535+		unsigned long		current_start;	/* K: start time of current work item */
3536		unsigned int		current_color;	/* K: color */
3637
3738		int			sleeping;	/* S: is worker sleeping? */
+5-4
lib/bootconfig.c
···316316				  depth ? "." : "");
317317		if (ret < 0)
318318			return ret;
319319-		if (ret > size) {
319319+		if (ret >= size) {
320320			size = 0;
321321		} else {
322322			size -= ret;
···532532	static int __init __xbc_open_brace(char *p)
533533	{
534534		/* Push the last key as open brace */
535535-	open_brace[brace_index++] = xbc_node_index(last_parent);
536535		if (brace_index >= XBC_DEPTH_MAX)
537536			return xbc_parse_error("Exceed max depth of braces", p);
537537+	open_brace[brace_index++] = xbc_node_index(last_parent);
538538
539539		return 0;
540540	}
···723723		if (op == ':') {
724724			unsigned short nidx = child->next;
725725
726726-			xbc_init_node(child, v, XBC_VALUE);
726726+			if (xbc_init_node(child, v, XBC_VALUE) < 0)
727727+				return xbc_parse_error("Failed to override value", v);
727728			child->next = nidx;	/* keep subkeys */
728729			goto array;
729730		}
···803802
804803	/* Brace closing */
805804	if (brace_index) {
806806-		n = &xbc_nodes[open_brace[brace_index]];
805805+		n = &xbc_nodes[open_brace[brace_index - 1]];
807806		return xbc_parse_error("Brace is not closed",
808807				       xbc_node_get_data(n));
809808	}
+3
lib/crypto/Makefile
···5555	libaes-$(CONFIG_X86) += x86/aes-aesni.o
5656	endif # CONFIG_CRYPTO_LIB_AES_ARCH
5757
5858+	# clean-files must be defined unconditionally
5959+	clean-files += powerpc/aesp8-ppc.S
6060+
5861	################################################################################
5962
6063	obj-$(CONFIG_CRYPTO_LIB_AESCFB) += libaescfb.o
+4-1
mm/cma.c
···10131013				   unsigned long count)
10141014	{
10151015		struct cma_memrange *cmr;
10161016+	unsigned long ret = 0;
10161017		unsigned long i, pfn;
10171018
10181019		cmr = find_cma_memrange(cma, pages, count);
···10221021
10231022		pfn = page_to_pfn(pages);
10241023		for (i = 0; i < count; i++, pfn++)
10251025-			VM_WARN_ON(!put_page_testzero(pfn_to_page(pfn)));
10241024+			ret += !put_page_testzero(pfn_to_page(pfn));
10251025+
10261026+	WARN(ret, "%lu pages are still in use!\n", ret);
10261027
10271028		__cma_release_frozen(cma, cmr, pages, count);
+6-1
mm/damon/core.c
···15621562	}
15631563	ctx->walk_control = control;
15641564	mutex_unlock(&ctx->walk_control_lock);
15651565-	if (!damon_is_running(ctx))
15651565+	if (!damon_is_running(ctx)) {
15661566+		mutex_lock(&ctx->walk_control_lock);
15671567+		if (ctx->walk_control == control)
15681568+			ctx->walk_control = NULL;
15691569+		mutex_unlock(&ctx->walk_control_lock);
15661570		return -EINVAL;
15711571+	}
15671572	wait_for_completion(&control->completion);
15681573	if (control->canceled)
15691574		return -ECANCELED;
+11-5
mm/huge_memory.c
···27972797		_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
27982798	} else {
27992799		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
28002800-		_dst_pmd = folio_mk_pmd(src_folio, dst_vma->vm_page_prot);
28002800+		_dst_pmd = move_soft_dirty_pmd(src_pmdval);
28012801+		_dst_pmd = clear_uffd_wp_pmd(_dst_pmd);
28012802	}
28022803	set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd);
28032804
···36323631	const bool is_anon = folio_test_anon(folio);
36333632	int old_order = folio_order(folio);
36343633	int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
36343634+	struct folio *old_folio = folio;
36353635	int split_order;
36363636
36373637	/*
···36533651		 * uniform split has xas_split_alloc() called before
36543652		 * irq is disabled to allocate enough memory, whereas
36553653		 * non-uniform split can handle ENOMEM.
36543654+		 * Use the to-be-split folio, so that a parallel
36553655+		 * folio_try_get() waits on it until xarray is updated
36563656+		 * with after-split folios and the original one is
36573657+		 * unfrozen.
36563658		 */
36573657-		if (split_type == SPLIT_TYPE_UNIFORM)
36583658-			xas_split(xas, folio, old_order);
36593659-		else {
36593659+		if (split_type == SPLIT_TYPE_UNIFORM) {
36603660+			xas_split(xas, old_folio, old_order);
36613661+		} else {
36603662			xas_set_order(xas, folio->index, split_order);
36613661-			xas_try_split(xas, folio, old_order);
36633663+			xas_try_split(xas, old_folio, old_order);
36623664			if (xas_error(xas))
36633665				return xas_error(xas);
36643666		}
+2-2
mm/hugetlb.c
···31013101	 * extract the actual node first.
31023102	 */
31033103	if (m)
31043104-		listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
31043104+		listnode = early_pfn_to_nid(PHYS_PFN(__pa(m)));
31053105
31063106	if (m) {
···31603160	 * The head struct page is used to get folio information by the HugeTLB
31613161	 * subsystem like zone id and node id.
31623162	 */
31633163-	memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
31633163+	memblock_reserved_mark_noinit(__pa((void *)m + PAGE_SIZE),
31643164				      huge_page_size(h) - PAGE_SIZE);
31653165
31663166	return 1;
mm/memfd_luo.c
···146146	for (i = 0; i < nr_folios; i++) {
147147		struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
148148		struct folio *folio = folios[i];
149149-		unsigned int flags = 0;
150149
151150		err = kho_preserve_folio(folio);
152151		if (err)
153152			goto err_unpreserve;
154153
155155-		if (folio_test_dirty(folio))
156156-			flags |= MEMFD_LUO_FOLIO_DIRTY;
157157-		if (folio_test_uptodate(folio))
158158-			flags |= MEMFD_LUO_FOLIO_UPTODATE;
154154+		folio_lock(folio);
155155+
156156+		/*
157157+		 * A dirty folio is one which has been written to. A clean folio
158158+		 * is its opposite. Since a clean folio does not carry user
159159+		 * data, it can be freed by page reclaim under memory pressure.
160160+		 *
161161+		 * Saving the dirty flag at prepare() time doesn't work since it
162162+		 * can change later. Saving it at freeze() also won't work
163163+		 * because the dirty bit is normally synced at unmap and there
164164+		 * might still be a mapping of the file at freeze().
165165+		 *
166166+		 * To see why this is a problem, say a folio is clean at
167167+		 * preserve, but gets dirtied later. The pfolio flags will mark
168168+		 * it as clean. After retrieve, the next kernel might try to
169169+		 * reclaim this folio under memory pressure, losing user data.
170170+		 *
171171+		 * Unconditionally mark it dirty to avoid this problem. This
172172+		 * comes at the cost of making clean folios un-reclaimable after
173173+		 * live update.
174174+		 */
175175+		folio_mark_dirty(folio);
176176+
177177+		/*
178178+		 * If the folio is not uptodate, it was fallocated but never
179179+		 * used. Saving this flag at prepare() doesn't work since it
180180+		 * might change later when someone uses the folio.
181181+		 *
182182+		 * Since we have taken the performance penalty of allocating,
183183+		 * zeroing, and pinning all the folios in the holes, take a bit
184184+		 * more and zero all non-uptodate folios too.
185185+		 *
186186+		 * NOTE: For someone looking to improve preserve performance,
187187+		 * this is a good place to look.
188188+		 */
189189+		if (!folio_test_uptodate(folio)) {
190190+			folio_zero_range(folio, 0, folio_size(folio));
191191+			flush_dcache_folio(folio);
192192+			folio_mark_uptodate(folio);
193193+		}
194194+
195195+		folio_unlock(folio);
159196
160197		pfolio->pfn = folio_pfn(folio);
161161-		pfolio->flags = flags;
198198+		pfolio->flags = MEMFD_LUO_FOLIO_DIRTY | MEMFD_LUO_FOLIO_UPTODATE;
162199		pfolio->index = folio->index;
163200	}
+17-4
mm/rmap.c
···19551955	if (userfaultfd_wp(vma))
19561956		return 1;
19571957
19581958-	return folio_pte_batch(folio, pvmw->pte, pte, max_nr);
19581958+	/*
19591959+	 * If unmap fails, we need to restore the ptes. To avoid accidentally
19601960+	 * upgrading write permissions for ptes that were not originally
19611961+	 * writable, and to avoid losing the soft-dirty bit, use the
19621962+	 * appropriate FPB flags.
19631963+	 */
19641964+	return folio_pte_batch_flags(folio, vma, pvmw->pte, &pte, max_nr,
19651965+				     FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
19591966	}
19601967
19611968	/*
···24502443	__maybe_unused pmd_t pmdval;
24512444
24522445	if (flags & TTU_SPLIT_HUGE_PMD) {
24462446+		/*
24472447+		 * split_huge_pmd_locked() might leave the
24482448+		 * folio mapped through PTEs. Retry the walk
24492449+		 * so we can detect this scenario and properly
24502450+		 * abort the walk.
24512451+		 */
24532452		split_huge_pmd_locked(vma, pvmw.address,
24542453				      pvmw.pmd, true);
24552455-		ret = false;
24562456-		page_vma_mapped_walk_done(&pvmw);
24572457-		break;
24542454+		flags &= ~TTU_SPLIT_HUGE_PMD;
24552455+		page_vma_mapped_walk_restart(&pvmw);
24562456+		continue;
24582457	}
24592458	#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
24602459	pmdval = pmdp_get(pvmw.pmd);
+4-7
mm/slub.c
···21192119	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
21202120	struct kmem_cache *obj_exts_cache;
21212121
21222122-	/*
21232123-	 * slabobj_ext array for KMALLOC_CGROUP allocations
21242124-	 * are served from KMALLOC_NORMAL caches.
21252125-	 */
21262126-	if (!mem_alloc_profiling_enabled())
21272127-		return sz;
21282128-
21292122	if (sz > KMALLOC_MAX_CACHE_SIZE)
21302123		return sz;
21312124
···27902797	if (s->flags & SLAB_KMALLOC)
27912798		mark_obj_codetag_empty(sheaf);
27922799
28002800+	VM_WARN_ON_ONCE(sheaf->size > 0);
27932801	kfree(sheaf);
27942802
27952803	stat(s, SHEAF_FREE);
···28222828		return 0;
28232829	}
28242830
28312831+static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf);
28252832
28262833	static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp)
28272834	{
···28322837		return NULL;
28332838
28342839	if (refill_sheaf(s, sheaf, gfp | __GFP_NOMEMALLOC | __GFP_NOWARN)) {
28402840+		sheaf_flush_unused(s, sheaf);
28352841		free_empty_sheaf(s, sheaf);
28362842		return NULL;
28372843	}
···46194623		 * we must be very low on memory so don't bother
46204624		 * with the barn
46214625		 */
46264626+		sheaf_flush_unused(s, empty);
46224627		free_empty_sheaf(s, empty);
46234628	}
46244629	} else {
net/bluetooth/hci_sync.c
···66276627	 * state.
66286628	 */
66296629	if (hci_dev_test_flag(hdev, HCI_LE_SCAN)) {
66306630-		hci_scan_disable_sync(hdev);
66316630		hci_dev_set_flag(hdev, HCI_LE_SCAN_INTERRUPTED);
66316631+		hci_scan_disable_sync(hdev);
66326632	}
66336633
66346634	/* Update random address, but set require_privacy to false so
+14-2
net/bluetooth/hidp/core.c
···986986	skb_queue_purge(&session->intr_transmit);
987987	fput(session->intr_sock->file);
988988	fput(session->ctrl_sock->file);
989989-	l2cap_conn_put(session->conn);
989989+	if (session->conn)
990990+		l2cap_conn_put(session->conn);
990991	kfree(session);
991992	}
992993
···11651164
11661165	down_write(&hidp_session_sem);
11671166
11671167+	/* Drop L2CAP reference immediately to indicate that
11681168+	 * l2cap_unregister_user() shall not be called as it is already
11691169+	 * considered removed.
11701170+	 */
11711171+	if (session->conn) {
11721172+		l2cap_conn_put(session->conn);
11731173+		session->conn = NULL;
11741174+	}
11751175+
11681176	hidp_session_terminate(session);
11691177
11701178	cancel_work_sync(&session->dev_init);
···13111301	 * Instead, this call has the same semantics as if user-space tried to
13121302	 * delete the session.
13131303	 */
13141314-	l2cap_unregister_user(session->conn, &session->user);
13041304+	if (session->conn)
13051305+		l2cap_unregister_user(session->conn, &session->user);
13061306+
13151307	hidp_session_put(session);
13161308
13171309	module_put_and_kthread_exit(0);
+31-20
net/bluetooth/l2cap_core.c
···16781678
16791679	int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user)
16801680	{
16811681-	struct hci_dev *hdev = conn->hcon->hdev;
16821681	int ret;
16831682
16841683	/* We need to check whether l2cap_conn is registered. If it is not, we
16851685-	 * must not register the l2cap_user. l2cap_conn_del() is unregisters
16861686-	 * l2cap_conn objects, but doesn't provide its own locking. Instead, it
16871687-	 * relies on the parent hci_conn object to be locked. This itself relies
16881688-	 * on the hci_dev object to be locked. So we must lock the hci device
16891689-	 * here, too. */
16841684+	 * must not register the l2cap_user. l2cap_conn_del() unregisters
16851685+	 * l2cap_conn objects under conn->lock, and we use the same lock here
16861686+	 * to protect access to conn->users and conn->hchan.
16871687+	 */
16901688
16911691-	hci_dev_lock(hdev);
16891689+	mutex_lock(&conn->lock);
16921690
16931691	if (!list_empty(&user->list)) {
16941692		ret = -EINVAL;
···17071709	ret = 0;
17081710
17091711	out_unlock:
17101710-	hci_dev_unlock(hdev);
17121712+	mutex_unlock(&conn->lock);
17111713	return ret;
17121714	}
17131715	EXPORT_SYMBOL(l2cap_register_user);
17141716
17151717	void l2cap_unregister_user(struct l2cap_conn *conn, struct l2cap_user *user)
17161718	{
17171717-	struct hci_dev *hdev = conn->hcon->hdev;
17181718-
17191719-	hci_dev_lock(hdev);
17191719+	mutex_lock(&conn->lock);
17201720
17211721	if (list_empty(&user->list))
17221722		goto out_unlock;
···17231727	user->remove(conn, user);
17241728
17251729	out_unlock:
17261726-	hci_dev_unlock(hdev);
17301730+	mutex_unlock(&conn->lock);
17271731	}
17281732	EXPORT_SYMBOL(l2cap_unregister_user);
17291733
···46124616
46134617	switch (type) {
46144618	case L2CAP_IT_FEAT_MASK:
46154615-		conn->feat_mask = get_unaligned_le32(rsp->data);
46194619+		if (cmd_len >= sizeof(*rsp) + sizeof(u32))
46204620+			conn->feat_mask = get_unaligned_le32(rsp->data);
46164621
46174622		if (conn->feat_mask & L2CAP_FEAT_FIXED_CHAN) {
46184623			struct l2cap_info_req req;
···46324635		break;
46334636
46344637	case L2CAP_IT_FIXED_CHAN:
46354635-		conn->remote_fixed_chan = rsp->data[0];
46384638+		if (cmd_len >= sizeof(*rsp) + sizeof(rsp->data[0]))
46394639+			conn->remote_fixed_chan = rsp->data[0];
46364640		conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE;
46374641		conn->info_ident = 0;
46384642
···50575059	u16 mtu, mps;
50585060	__le16 psm;
50595061	u8 result, rsp_len = 0;
50605060-	int i, num_scid;
50625062+	int i, num_scid = 0;
50615063	bool defer = false;
50625064
50635065	if (!enable_ecred)
···50665068	memset(pdu, 0, sizeof(*pdu));
50675069
50685070	if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {
50715071+		result = L2CAP_CR_LE_INVALID_PARAMS;
50725072+		goto response;
50735073+	}
50745074+
50755075+	/* Check if there are no pending channels with the same ident */
50765076+	__l2cap_chan_list_id(conn, cmd->ident, l2cap_ecred_list_defer,
50775077+			     &num_scid);
50785078+	if (num_scid) {
50695079		result = L2CAP_CR_LE_INVALID_PARAMS;
50705080		goto response;
50715081	}
···54305424				       u8 *data)
54315425	{
54325426	struct l2cap_chan *chan, *tmp;
54335433-	struct l2cap_ecred_conn_rsp *rsp = (void *) data;
54275427+	struct l2cap_ecred_reconf_rsp *rsp = (void *)data;
54345428	u16 result;
54355429
54365430	if (cmd_len < sizeof(*rsp))
···54385432
54395433	result = __le16_to_cpu(rsp->result);
54405434
54415441-	BT_DBG("result 0x%4.4x", rsp->result);
54355435+	BT_DBG("result 0x%4.4x", result);
54425436
54435437	if (!result)
54445438		return 0;
···66686662		return -ENOBUFS;
66696663	}
66706664
66716671-	if (chan->imtu < skb->len) {
66726672-		BT_ERR("Too big LE L2CAP PDU");
66656665+	if (skb->len > chan->imtu) {
66666666+		BT_ERR("Too big LE L2CAP PDU: len %u > %u", skb->len,
66676667+		       chan->imtu);
66686668+		l2cap_send_disconn_req(chan, ECONNRESET);
66736669		return -ENOBUFS;
66746670	}
66756671
···66976689		sdu_len, skb->len, chan->imtu);
66986690
66996691	if (sdu_len > chan->imtu) {
67006700-		BT_ERR("Too big LE L2CAP SDU length received");
66926692+		BT_ERR("Too big LE L2CAP SDU length: len %u > %u",
66936693+		       skb->len, sdu_len);
66946694+		l2cap_send_disconn_req(chan, ECONNRESET);
67016695		err = -EMSGSIZE;
67026696		goto failed;
67036697	}
···67356725
67366726	if (chan->sdu->len + skb->len > chan->sdu_len) {
67376727		BT_ERR("Too much LE L2CAP data received");
67286728+		l2cap_send_disconn_req(chan, ECONNRESET);
67386729		err = -EINVAL;
67396730		goto failed;
67406731	}
···
 	if (!test_bit(SMP_FLAG_DEBUG_KEY, &smp->flags) &&
 	    !crypto_memneq(key, smp->local_pk, 64)) {
 		bt_dev_err(hdev, "Remote and local public keys are identical");
-		return SMP_UNSPECIFIED;
+		return SMP_DHKEY_CHECK_FAILED;
 	}

 	memcpy(smp->remote_pk, key, 64);
···

 #include <trace/events/sock.h>

+/* Keep the definition of IPv6 disable here for now, to avoid annoying linker
+ * issues in case IPv6=m
+ */
+int disable_ipv6_mod;
+EXPORT_SYMBOL(disable_ipv6_mod);
+
 /* The inetsw table contains everything that inet_create needs to
  * build a new socket.
  */
···
 		break;
 	case NETDEV_REGISTER:
 		/* NOP if not matching or already registered */
-		if (!match || (changename && ops))
+		if (!match || ops)
 			continue;

 		ops = kmemdup(&basechain->ops,
···
 	int i;

 	nft_pipapo_for_each_field(f, i, m) {
+		bool last = i == m->field_count - 1;
 		int g;

 		for (g = 0; g < f->groups; g++) {
···
 		}

 		pipapo_unmap(f->mt, f->rules, rulemap[i].to, rulemap[i].n,
-			     rulemap[i + 1].n, i == m->field_count - 1);
+			     last ? 0 : rulemap[i + 1].n, last);
 		if (pipapo_resize(f, f->rules, f->rules - rulemap[i].n)) {
 			/* We can ignore this, a failure to shrink tables down
 			 * doesn't make tables invalid.
+10-61
net/netfilter/nft_set_rbtree.c
···
 	priv->start_rbe_cookie = (unsigned long)rbe;
 }

-static void nft_rbtree_set_start_cookie_open(struct nft_rbtree *priv,
-					     const struct nft_rbtree_elem *rbe,
-					     unsigned long open_interval)
-{
-	priv->start_rbe_cookie = (unsigned long)rbe | open_interval;
-}
-
-#define NFT_RBTREE_OPEN_INTERVAL	1UL
-
 static bool nft_rbtree_cmp_start_cookie(struct nft_rbtree *priv,
 					const struct nft_rbtree_elem *rbe)
 {
-	return (priv->start_rbe_cookie & ~NFT_RBTREE_OPEN_INTERVAL) == (unsigned long)rbe;
+	return priv->start_rbe_cookie == (unsigned long)rbe;
 }

 static bool nft_rbtree_insert_same_interval(const struct net *net,
···
 static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
 			       struct nft_rbtree_elem *new,
-			       struct nft_elem_priv **elem_priv, u64 tstamp, bool last)
+			       struct nft_elem_priv **elem_priv, u64 tstamp)
 {
 	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL, *rbe_prev;
 	struct rb_node *node, *next, *parent, **p, *first = NULL;
 	struct nft_rbtree *priv = nft_set_priv(set);
 	u8 cur_genmask = nft_genmask_cur(net);
 	u8 genmask = nft_genmask_next(net);
-	unsigned long open_interval = 0;
 	int d;

 	/* Descend the tree to search for an existing element greater than the
···
 		}
 	}

-	if (nft_rbtree_interval_null(set, new)) {
+	if (nft_rbtree_interval_null(set, new))
 		priv->start_rbe_cookie = 0;
-	} else if (nft_rbtree_interval_start(new) && priv->start_rbe_cookie) {
-		if (nft_set_is_anonymous(set)) {
-			priv->start_rbe_cookie = 0;
-		} else if (priv->start_rbe_cookie & NFT_RBTREE_OPEN_INTERVAL) {
-			/* Previous element is an open interval that partially
-			 * overlaps with an existing non-open interval.
-			 */
-			return -ENOTEMPTY;
-		}
-	}
+	else if (nft_rbtree_interval_start(new) && priv->start_rbe_cookie)
+		priv->start_rbe_cookie = 0;

 	/* - new start element matching existing start element: full overlap
 	 * reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
···
 	if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
 	    nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
 		*elem_priv = &rbe_ge->priv;
-
-		/* - Corner case: new start element of open interval (which
-		 * comes as last element in the batch) overlaps the start of
-		 * an existing interval with an end element: partial overlap.
-		 */
-		node = rb_first(&priv->root);
-		rbe = __nft_rbtree_next_active(node, genmask);
-		if (rbe && nft_rbtree_interval_end(rbe)) {
-			rbe = nft_rbtree_next_active(rbe, genmask);
-			if (rbe &&
-			    nft_rbtree_interval_start(rbe) &&
-			    !nft_rbtree_cmp(set, new, rbe)) {
-				if (last)
-					return -ENOTEMPTY;
-
-				/* Maybe open interval? */
-				open_interval = NFT_RBTREE_OPEN_INTERVAL;
-			}
-		}
-		nft_rbtree_set_start_cookie_open(priv, rbe_ge, open_interval);
-
+		nft_rbtree_set_start_cookie(priv, rbe_ge);
 		return -EEXIST;
 	}
···
 	 */
 	if (rbe_ge &&
 	    nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
-		return -ENOTEMPTY;
-
-	/* - start element overlaps an open interval but end element is new:
-	 * partial overlap, reported as -ENOEMPTY.
-	 */
-	if (!rbe_ge && priv->start_rbe_cookie && nft_rbtree_interval_end(new))
 		return -ENOTEMPTY;

 	/* Accepted element: pick insertion point depending on key value */
···
 			  struct nft_elem_priv **elem_priv)
 {
 	struct nft_rbtree_elem *rbe = nft_elem_priv_cast(elem->priv);
-	bool last = !!(elem->flags & NFT_SET_ELEM_INTERNAL_LAST);
 	struct nft_rbtree *priv = nft_set_priv(set);
 	u64 tstamp = nft_net_tstamp(net);
 	int err;
···
 		cond_resched();

 		write_lock_bh(&priv->lock);
-		err = __nft_rbtree_insert(net, set, rbe, elem_priv, tstamp, last);
+		err = __nft_rbtree_insert(net, set, rbe, elem_priv, tstamp);
 		write_unlock_bh(&priv->lock);
-
-		if (nft_rbtree_interval_end(rbe))
-			priv->start_rbe_cookie = 0;
-
 	} while (err == -EAGAIN);

 	return err;
···
 				const struct nft_set_elem *elem)
 {
 	struct nft_rbtree_elem *rbe, *this = nft_elem_priv_cast(elem->priv);
-	bool last = !!(elem->flags & NFT_SET_ELEM_INTERNAL_LAST);
 	struct nft_rbtree *priv = nft_set_priv(set);
 	const struct rb_node *parent = priv->root.rb_node;
 	u8 genmask = nft_genmask_next(net);
···
 			continue;
 		}

-		if (nft_rbtree_interval_start(rbe)) {
-			if (!last)
-				nft_rbtree_set_start_cookie(priv, rbe);
-		} else if (!nft_rbtree_deactivate_same_interval(net, priv, rbe))
+		if (nft_rbtree_interval_start(rbe))
+			nft_rbtree_set_start_cookie(priv, rbe);
+		else if (!nft_rbtree_deactivate_same_interval(net, priv, rbe))
 			return NULL;

 		nft_rbtree_flush(net, set, &rbe->priv);
+4
net/netfilter/xt_CT.c
···
 #include <net/netfilter/nf_conntrack_ecache.h>
 #include <net/netfilter/nf_conntrack_timeout.h>
 #include <net/netfilter/nf_conntrack_zones.h>
+#include "nf_internals.h"

 static inline int xt_ct_target(struct sk_buff *skb, struct nf_conn *ct)
 {
···
 	struct nf_conn_help *help;

 	if (ct) {
+		if (info->helper[0] || info->timeout[0])
+			nf_queue_nf_hook_drop(par->net);
+
 		help = nfct_help(ct);
 		xt_ct_put_helper(help);
+6
net/netfilter/xt_IDLETIMER.c
···

 	info->timer = __idletimer_tg_find_by_label(info->label);
 	if (info->timer) {
+		if (info->timer->timer_type & XT_IDLETIMER_ALARM) {
+			pr_debug("Adding/Replacing rule with same label and different timer type is not allowed\n");
+			mutex_unlock(&list_mutex);
+			return -EINVAL;
+		}
+
 		info->timer->refcnt++;
 		mod_timer(&info->timer->timer,
 			  secs_to_jiffies(info->timeout) + jiffies);
+2-2
net/netfilter/xt_dccp.c
···
 			return true;
 		}

-		if (op[i] < 2)
+		if (op[i] < 2 || i == optlen - 1)
 			i++;
 		else
-			i += op[i+1]?:1;
+			i += op[i + 1] ? : 1;
 	}

 	spin_unlock_bh(&dccp_buflock);
+4-2
net/netfilter/xt_tcpudp.c
···

 	for (i = 0; i < optlen; ) {
 		if (op[i] == option) return !invert;
-		if (op[i] < 2) i++;
-		else i += op[i+1]?:1;
+		if (op[i] < 2 || i == optlen - 1)
+			i++;
+		else
+			i += op[i + 1] ? : 1;
 	}

 	return invert;
+2-2
net/netfilter/xt_time.c
···

 	localtime_2(&current_time, stamp);

-	if (!(info->weekdays_match & (1 << current_time.weekday)))
+	if (!(info->weekdays_match & (1U << current_time.weekday)))
 		return false;

 	/* Do not spend time computing monthday if all days match anyway */
 	if (info->monthdays_match != XT_TIME_ALL_MONTHDAYS) {
 		localtime_3(&current_time, stamp);
-		if (!(info->monthdays_match & (1 << current_time.monthday)))
+		if (!(info->monthdays_match & (1U << current_time.monthday)))
 			return false;
 	}
···
  * Lookup or create a remote transport endpoint record for the specified
  * address.
  *
- * Return: The peer record found with a reference, %NULL if no record is found
- * or a negative error code if the address is invalid or unsupported.
+ * Return: The peer record found with a reference or a negative error code if
+ * the address is invalid or unsupported.
  */
 struct rxrpc_peer *rxrpc_kernel_lookup_peer(struct socket *sock,
 					    struct sockaddr_rxrpc *srx, gfp_t gfp)
 {
+	struct rxrpc_peer *peer;
 	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
 	int ret;
···
 	if (ret < 0)
 		return ERR_PTR(ret);

-	return rxrpc_lookup_peer(rx->local, srx, gfp);
+	peer = rxrpc_lookup_peer(rx->local, srx, gfp);
+	return peer ?: ERR_PTR(-ENOMEM);
 }
 EXPORT_SYMBOL(rxrpc_kernel_lookup_peer);
···
 /// ```
 /// use kernel::cpufreq::{DEFAULT_TRANSITION_LATENCY_NS, Policy};
 ///
+/// #[allow(clippy::double_parens, reason = "False positive before 1.92.0")]
 /// fn update_policy(policy: &mut Policy) {
 ///     policy
 ///         .set_dvfs_possible_from_any_cpu(true)
+50-64
rust/kernel/dma.rs
···
         self.count * core::mem::size_of::<T>()
     }

+    /// Returns the raw pointer to the allocated region in the CPU's virtual address space.
+    #[inline]
+    pub fn as_ptr(&self) -> *const [T] {
+        core::ptr::slice_from_raw_parts(self.cpu_addr.as_ptr(), self.count)
+    }
+
+    /// Returns the raw pointer to the allocated region in the CPU's virtual address space as
+    /// a mutable pointer.
+    #[inline]
+    pub fn as_mut_ptr(&self) -> *mut [T] {
+        core::ptr::slice_from_raw_parts_mut(self.cpu_addr.as_ptr(), self.count)
+    }
+
     /// Returns the base address to the allocated region in the CPU's virtual address space.
     pub fn start_ptr(&self) -> *const T {
         self.cpu_addr.as_ptr()
···
         Ok(())
     }

-    /// Returns a pointer to an element from the region with bounds checking. `offset` is in
-    /// units of `T`, not the number of bytes.
-    ///
-    /// Public but hidden since it should only be used from [`dma_read`] and [`dma_write`] macros.
-    #[doc(hidden)]
-    pub fn item_from_index(&self, offset: usize) -> Result<*mut T> {
-        if offset >= self.count {
-            return Err(EINVAL);
-        }
-        // SAFETY:
-        // - The pointer is valid due to type invariant on `CoherentAllocation`
-        //   and we've just checked that the range and index is within bounds.
-        // - `offset` can't overflow since it is smaller than `self.count` and we've checked
-        //   that `self.count` won't overflow early in the constructor.
-        Ok(unsafe { self.cpu_addr.as_ptr().add(offset) })
-    }
-
     /// Reads the value of `field` and ensures that its type is [`FromBytes`].
     ///
     /// # Safety
···
 /// Reads a field of an item from an allocated region of structs.
 ///
+/// The syntax is of the form `kernel::dma_read!(dma, proj)` where `dma` is an expression evaluating
+/// to a [`CoherentAllocation`] and `proj` is a [projection specification](kernel::ptr::project!).
+///
 /// # Examples
 ///
 /// ```
···
 /// unsafe impl kernel::transmute::AsBytes for MyStruct{};
 ///
 /// # fn test(alloc: &kernel::dma::CoherentAllocation<MyStruct>) -> Result {
-/// let whole = kernel::dma_read!(alloc[2]);
-/// let field = kernel::dma_read!(alloc[1].field);
+/// let whole = kernel::dma_read!(alloc, [2]?);
+/// let field = kernel::dma_read!(alloc, [1]?.field);
 /// # Ok::<(), Error>(()) }
 /// ```
 #[macro_export]
 macro_rules! dma_read {
-    ($dma:expr, $idx: expr, $($field:tt)*) => {{
-        (|| -> ::core::result::Result<_, $crate::error::Error> {
-            let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?;
-            // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be
-            // dereferenced. The compiler also further validates the expression on whether `field`
-            // is a member of `item` when expanded by the macro.
-            unsafe {
-                let ptr_field = ::core::ptr::addr_of!((*item) $($field)*);
-                ::core::result::Result::Ok(
-                    $crate::dma::CoherentAllocation::field_read(&$dma, ptr_field)
-                )
-            }
-        })()
+    ($dma:expr, $($proj:tt)*) => {{
+        let dma = &$dma;
+        let ptr = $crate::ptr::project!(
+            $crate::dma::CoherentAllocation::as_ptr(dma), $($proj)*
+        );
+        // SAFETY: The pointer created by the projection is within the DMA region.
+        unsafe { $crate::dma::CoherentAllocation::field_read(dma, ptr) }
     }};
-    ($dma:ident [ $idx:expr ] $($field:tt)* ) => {
-        $crate::dma_read!($dma, $idx, $($field)*)
-    };
-    ($($dma:ident).* [ $idx:expr ] $($field:tt)* ) => {
-        $crate::dma_read!($($dma).*, $idx, $($field)*)
-    };
 }

 /// Writes to a field of an item from an allocated region of structs.
+///
+/// The syntax is of the form `kernel::dma_write!(dma, proj, val)` where `dma` is an expression
+/// evaluating to a [`CoherentAllocation`], `proj` is a
+/// [projection specification](kernel::ptr::project!), and `val` is the value to be written to the
+/// projected location.
 ///
 /// # Examples
 ///
···
 /// unsafe impl kernel::transmute::AsBytes for MyStruct{};
 ///
 /// # fn test(alloc: &kernel::dma::CoherentAllocation<MyStruct>) -> Result {
-/// kernel::dma_write!(alloc[2].member = 0xf);
-/// kernel::dma_write!(alloc[1] = MyStruct { member: 0xf });
+/// kernel::dma_write!(alloc, [2]?.member, 0xf);
+/// kernel::dma_write!(alloc, [1]?, MyStruct { member: 0xf });
 /// # Ok::<(), Error>(()) }
 /// ```
 #[macro_export]
 macro_rules! dma_write {
-    ($dma:ident [ $idx:expr ] $($field:tt)*) => {{
-        $crate::dma_write!($dma, $idx, $($field)*)
+    (@parse [$dma:expr] [$($proj:tt)*] [, $val:expr]) => {{
+        let dma = &$dma;
+        let ptr = $crate::ptr::project!(
+            mut $crate::dma::CoherentAllocation::as_mut_ptr(dma), $($proj)*
+        );
+        let val = $val;
+        // SAFETY: The pointer created by the projection is within the DMA region.
+        unsafe { $crate::dma::CoherentAllocation::field_write(dma, ptr, val) }
     }};
-    ($($dma:ident).* [ $idx:expr ] $($field:tt)* ) => {{
-        $crate::dma_write!($($dma).*, $idx, $($field)*)
-    }};
-    ($dma:expr, $idx: expr, = $val:expr) => {
-        (|| -> ::core::result::Result<_, $crate::error::Error> {
-            let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?;
-            // SAFETY: `item_from_index` ensures that `item` is always a valid item.
-            unsafe { $crate::dma::CoherentAllocation::field_write(&$dma, item, $val) }
-            ::core::result::Result::Ok(())
-        })()
+    (@parse [$dma:expr] [$($proj:tt)*] [.$field:tt $($rest:tt)*]) => {
+        $crate::dma_write!(@parse [$dma] [$($proj)* .$field] [$($rest)*])
     };
-    ($dma:expr, $idx: expr, $(.$field:ident)* = $val:expr) => {
-        (|| -> ::core::result::Result<_, $crate::error::Error> {
-            let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?;
-            // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be
-            // dereferenced. The compiler also further validates the expression on whether `field`
-            // is a member of `item` when expanded by the macro.
-            unsafe {
-                let ptr_field = ::core::ptr::addr_of_mut!((*item) $(.$field)*);
-                $crate::dma::CoherentAllocation::field_write(&$dma, ptr_field, $val)
-            }
-            ::core::result::Result::Ok(())
-        })()
+    (@parse [$dma:expr] [$($proj:tt)*] [[$index:expr]? $($rest:tt)*]) => {
+        $crate::dma_write!(@parse [$dma] [$($proj)* [$index]?] [$($rest)*])
+    };
+    (@parse [$dma:expr] [$($proj:tt)*] [[$index:expr] $($rest:tt)*]) => {
+        $crate::dma_write!(@parse [$dma] [$($proj)* [$index]] [$($rest)*])
+    };
+    ($dma:expr, $($rest:tt)*) => {
+        $crate::dma_write!(@parse [$dma] [] [$($rest)*])
     };
 }
+4
rust/kernel/lib.rs
···
 #![feature(generic_nonzero)]
 #![feature(inline_const)]
 #![feature(pointer_is_aligned)]
+#![feature(slice_ptr_len)]
 //
 // Stable since Rust 1.80.0.
 #![feature(slice_flatten)]
···
 #![feature(const_option)]
 #![feature(const_ptr_write)]
 #![feature(const_refs_to_cell)]
+//
+// Stable since Rust 1.84.0.
+#![feature(strict_provenance)]
 //
 // Expected to become stable.
 #![feature(arbitrary_self_types)]
+29-1
rust/kernel/ptr.rs
···

 //! Types and functions to work with pointers and addresses.

-use core::mem::align_of;
+pub mod projection;
+pub use crate::project_pointer as project;
+
+use core::mem::{
+    align_of,
+    size_of, //
+};
 use core::num::NonZero;

 /// Type representing an alignment, which is always a power of two.
···
 }

 impl_alignable_uint!(u8, u16, u32, u64, usize);
+
+/// Trait to represent compile-time known size information.
+///
+/// This is a generalization of [`size_of`] that works for dynamically sized types.
+pub trait KnownSize {
+    /// Get the size of an object of this type in bytes, with the metadata of the given pointer.
+    fn size(p: *const Self) -> usize;
+}
+
+impl<T> KnownSize for T {
+    #[inline(always)]
+    fn size(_: *const Self) -> usize {
+        size_of::<T>()
+    }
+}
+
+impl<T> KnownSize for [T] {
+    #[inline(always)]
+    fn size(p: *const Self) -> usize {
+        p.len() * size_of::<T>()
+    }
+}
+305
rust/kernel/ptr/projection.rs
+// SPDX-License-Identifier: GPL-2.0
+
+//! Infrastructure for handling projections.
+
+use core::{
+    mem::MaybeUninit,
+    ops::Deref, //
+};
+
+use crate::prelude::*;
+
+/// Error raised when a projection is attempted on an array or slice out of bounds.
+pub struct OutOfBound;
+
+impl From<OutOfBound> for Error {
+    #[inline(always)]
+    fn from(_: OutOfBound) -> Self {
+        ERANGE
+    }
+}
+
+/// A helper trait to perform index projection.
+///
+/// This is similar to [`core::slice::SliceIndex`], but operates on raw pointers safely and
+/// fallibly.
+///
+/// # Safety
+///
+/// The implementation of `index` and `get` (if [`Some`] is returned) must ensure that, if provided
+/// input pointer `slice` and returned pointer `output`, then:
+/// - `output` has the same provenance as `slice`;
+/// - `output.byte_offset_from(slice)` is between 0 to
+///   `KnownSize::size(slice) - KnownSize::size(output)`.
+///
+/// This means that if the input pointer is valid, then pointer returned by `get` or `index` is
+/// also valid.
+#[diagnostic::on_unimplemented(message = "`{Self}` cannot be used to index `{T}`")]
+#[doc(hidden)]
+pub unsafe trait ProjectIndex<T: ?Sized>: Sized {
+    type Output: ?Sized;
+
+    /// Returns an index-projected pointer, if in bounds.
+    fn get(self, slice: *mut T) -> Option<*mut Self::Output>;
+
+    /// Returns an index-projected pointer; fail the build if it cannot be proved to be in bounds.
+    #[inline(always)]
+    fn index(self, slice: *mut T) -> *mut Self::Output {
+        Self::get(self, slice).unwrap_or_else(|| build_error!())
+    }
+}
+
+// Forward array impl to slice impl.
+//
+// SAFETY: Safety requirement guaranteed by the forwarded impl.
+unsafe impl<T, I, const N: usize> ProjectIndex<[T; N]> for I
+where
+    I: ProjectIndex<[T]>,
+{
+    type Output = <I as ProjectIndex<[T]>>::Output;
+
+    #[inline(always)]
+    fn get(self, slice: *mut [T; N]) -> Option<*mut Self::Output> {
+        <I as ProjectIndex<[T]>>::get(self, slice)
+    }
+
+    #[inline(always)]
+    fn index(self, slice: *mut [T; N]) -> *mut Self::Output {
+        <I as ProjectIndex<[T]>>::index(self, slice)
+    }
+}
+
+// SAFETY: `get`-returned pointer has the same provenance as `slice` and the offset is checked to
+// not exceed the required bound.
+unsafe impl<T> ProjectIndex<[T]> for usize {
+    type Output = T;
+
+    #[inline(always)]
+    fn get(self, slice: *mut [T]) -> Option<*mut T> {
+        if self >= slice.len() {
+            None
+        } else {
+            Some(slice.cast::<T>().wrapping_add(self))
+        }
+    }
+}
+
+// SAFETY: `get`-returned pointer has the same provenance as `slice` and the offset is checked to
+// not exceed the required bound.
+unsafe impl<T> ProjectIndex<[T]> for core::ops::Range<usize> {
+    type Output = [T];
+
+    #[inline(always)]
+    fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
+        let new_len = self.end.checked_sub(self.start)?;
+        if self.end > slice.len() {
+            return None;
+        }
+        Some(core::ptr::slice_from_raw_parts_mut(
+            slice.cast::<T>().wrapping_add(self.start),
+            new_len,
+        ))
+    }
+}
+
+// SAFETY: Safety requirement guaranteed by the forwarded impl.
+unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeTo<usize> {
+    type Output = [T];
+
+    #[inline(always)]
+    fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
+        (0..self.end).get(slice)
+    }
+}
+
+// SAFETY: Safety requirement guaranteed by the forwarded impl.
+unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeFrom<usize> {
+    type Output = [T];
+
+    #[inline(always)]
+    fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
+        (self.start..slice.len()).get(slice)
+    }
+}
+
+// SAFETY: `get` returned the pointer as is, so it always has the same provenance and offset of 0.
+unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeFull {
+    type Output = [T];
+
+    #[inline(always)]
+    fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
+        Some(slice)
+    }
+}
+
+/// A helper trait to perform field projection.
+///
+/// This trait has a `DEREF` generic parameter so it can be implemented twice for types that
+/// implement [`Deref`]. This will cause an ambiguity error and thus block [`Deref`] types being
+/// used as base of projection, as they can inject unsoundness. Users therefore must not specify
+/// `DEREF` and should always leave it to be inferred.
+///
+/// # Safety
+///
+/// `proj` may only invoke `f` with a valid allocation, as the documentation of [`Self::proj`]
+/// describes.
+#[doc(hidden)]
+pub unsafe trait ProjectField<const DEREF: bool> {
+    /// Project a pointer to a type to a pointer of a field.
+    ///
+    /// `f` may only be invoked with a valid allocation so it can safely obtain raw pointers to
+    /// fields using `&raw mut`.
+    ///
+    /// This is needed because `base` might not point to a valid allocation, while `&raw mut`
+    /// requires pointers to be in bounds of a valid allocation.
+    ///
+    /// # Safety
+    ///
+    /// `f` must return a pointer in bounds of the provided pointer.
+    unsafe fn proj<F>(base: *mut Self, f: impl FnOnce(*mut Self) -> *mut F) -> *mut F;
+}
+
+// NOTE: in theory, this API should work for `T: ?Sized` and `F: ?Sized`, too. However, we cannot
+// currently support that as we need to obtain a valid allocation that `&raw const` can operate on.
+//
+// SAFETY: `proj` invokes `f` with valid allocation.
+unsafe impl<T> ProjectField<false> for T {
+    #[inline(always)]
+    unsafe fn proj<F>(base: *mut Self, f: impl FnOnce(*mut Self) -> *mut F) -> *mut F {
+        // Create a valid allocation to start projection, as `base` is not necessarily so. The
+        // memory is never actually used so it will be optimized out, so it should work even for
+        // very large `T` (`memoffset` crate also relies on this). To be extra certain, we also
+        // annotate `f` closure with `#[inline(always)]` in the macro.
+        let mut place = MaybeUninit::uninit();
+        let place_base = place.as_mut_ptr();
+        let field = f(place_base);
+        // SAFETY: `field` is in bounds from `base` per safety requirement.
+        let offset = unsafe { field.byte_offset_from(place_base) };
+        // Use `wrapping_byte_offset` as `base` does not need to be of valid allocation.
+        base.wrapping_byte_offset(offset).cast()
+    }
+}
+
+// SAFETY: Vacuously satisfied.
+unsafe impl<T: Deref> ProjectField<true> for T {
+    #[inline(always)]
+    unsafe fn proj<F>(_: *mut Self, _: impl FnOnce(*mut Self) -> *mut F) -> *mut F {
+        build_error!("this function is a guard against `Deref` impl and is never invoked");
+    }
+}
+
+/// Create a projection from a raw pointer.
+///
+/// The projected pointer is within the memory region marked by the input pointer. There is no
+/// requirement that the input raw pointer needs to be valid, so this macro may be used for
+/// projecting pointers outside normal address space, e.g. I/O pointers. However, if the input
+/// pointer is valid, the projected pointer is also valid.
+///
+/// Supported projections include field projections and index projections.
+/// It is not allowed to project into types that implement custom [`Deref`] or
+/// [`Index`](core::ops::Index).
+///
+/// The macro has basic syntax of `kernel::ptr::project!(ptr, projection)`, where `ptr` is an
+/// expression that evaluates to a raw pointer which serves as the base of projection. `projection`
+/// can be a projection expression of form `.field` (normally identifier, or numeral in case of
+/// tuple structs) or of form `[index]`.
+///
+/// If a mutable pointer is needed, the macro input can be prefixed with the `mut` keyword, i.e.
+/// `kernel::ptr::project!(mut ptr, projection)`. By default, a const pointer is created.
+///
+/// `ptr::project!` macro can perform both fallible indexing and build-time checked indexing.
+/// `[index]` form performs build-time bounds checking; if compiler fails to prove `[index]` is in
+/// bounds, compilation will fail. `[index]?` can be used to perform runtime bounds checking;
+/// `OutOfBound` error is raised via `?` if the index is out of bounds.
+///
+/// # Examples
+///
+/// Field projections are performed with `.field_name`:
+///
+/// ```
+/// struct MyStruct { field: u32, }
+/// let ptr: *const MyStruct = core::ptr::dangling();
+/// let field_ptr: *const u32 = kernel::ptr::project!(ptr, .field);
+///
+/// struct MyTupleStruct(u32, u32);
+///
+/// fn proj(ptr: *const MyTupleStruct) {
+///     let field_ptr: *const u32 = kernel::ptr::project!(ptr, .1);
+/// }
+/// ```
+///
+/// Index projections are performed with `[index]`:
+///
+/// ```
+/// fn proj(ptr: *const [u8; 32]) -> Result {
+///     let field_ptr: *const u8 = kernel::ptr::project!(ptr, [1]);
+///     // The following invocation, if uncommented, would fail the build.
+///     //
+///     // kernel::ptr::project!(ptr, [128]);
+///
+///     // This will raise an `OutOfBound` error (which is convertible to `ERANGE`).
+///     kernel::ptr::project!(ptr, [128]?);
+///     Ok(())
+/// }
+/// ```
+///
+/// If you need to match on the error instead of propagate, put the invocation inside a closure:
+///
+/// ```
+/// let ptr: *const [u8; 32] = core::ptr::dangling();
+/// let field_ptr: Result<*const u8> = (|| -> Result<_> {
+///     Ok(kernel::ptr::project!(ptr, [128]?))
+/// })();
+/// assert!(field_ptr.is_err());
+/// ```
+///
+/// For mutable pointers, put `mut` as the first token in macro invocation.
+///
+/// ```
+/// let ptr: *mut [(u8, u16); 32] = core::ptr::dangling_mut();
+/// let field_ptr: *mut u16 = kernel::ptr::project!(mut ptr, [1].1);
+/// ```
+#[macro_export]
+macro_rules! project_pointer {
+    (@gen $ptr:ident, ) => {};
+    // Field projection. `$field` needs to be `tt` to support tuple index like `.0`.
+    (@gen $ptr:ident, .$field:tt $($rest:tt)*) => {
+        // SAFETY: The provided closure always returns an in-bounds pointer.
+        let $ptr = unsafe {
+            $crate::ptr::projection::ProjectField::proj($ptr, #[inline(always)] |ptr| {
+                // Check unaligned field. Not all users (e.g. DMA) can handle unaligned
+                // projections.
+                if false {
+                    let _ = &(*ptr).$field;
+                }
+                // SAFETY: `$field` is in bounds, and no implicit `Deref` is possible (if the
+                // type implements `Deref`, Rust cannot infer the generic parameter `DEREF`).
+                &raw mut (*ptr).$field
+            })
+        };
+        $crate::ptr::project!(@gen $ptr, $($rest)*)
+    };
+    // Fallible index projection.
+    (@gen $ptr:ident, [$index:expr]? $($rest:tt)*) => {
+        let $ptr = $crate::ptr::projection::ProjectIndex::get($index, $ptr)
+            .ok_or($crate::ptr::projection::OutOfBound)?;
+        $crate::ptr::project!(@gen $ptr, $($rest)*)
+    };
+    // Build-time checked index projection.
+    (@gen $ptr:ident, [$index:expr] $($rest:tt)*) => {
+        let $ptr = $crate::ptr::projection::ProjectIndex::index($index, $ptr);
+        $crate::ptr::project!(@gen $ptr, $($rest)*)
+    };
+    (mut $ptr:expr, $($proj:tt)*) => {{
+        let ptr: *mut _ = $ptr;
+        $crate::ptr::project!(@gen ptr, $($proj)*);
+        ptr
+    }};
+    ($ptr:expr, $($proj:tt)*) => {{
+        let ptr = <*const _>::cast_mut($ptr);
+        // We currently always project using mutable pointer, as it is not decided whether `&raw
+        // const` allows the resulting pointer to be mutated (see documentation of `addr_of!`).
+        $crate::ptr::project!(@gen ptr, $($proj)*);
+        ptr.cast_const()
+    }};
+}
+2-2
rust/kernel/str.rs
···
 ///
 /// * The first byte of `buffer` is always zero.
 /// * The length of `buffer` is at least 1.
-pub(crate) struct NullTerminatedFormatter<'a> {
+pub struct NullTerminatedFormatter<'a> {
     buffer: &'a mut [u8],
 }

 impl<'a> NullTerminatedFormatter<'a> {
     /// Create a new [`Self`] instance.
-    pub(crate) fn new(buffer: &'a mut [u8]) -> Option<NullTerminatedFormatter<'a>> {
+    pub fn new(buffer: &'a mut [u8]) -> Option<NullTerminatedFormatter<'a>> {
         *(buffer.first_mut()?) = 0;

         // INVARIANT:
+23-46
rust/pin-init/internal/src/init.rs
···

 enum InitializerAttribute {
     DefaultError(DefaultErrorAttribute),
-    DisableInitializedFieldAccess,
 }

 struct DefaultErrorAttribute {
···
     let error = error.map_or_else(
         || {
             if let Some(default_error) = attrs.iter().fold(None, |acc, attr| {
+                #[expect(irrefutable_let_patterns)]
                 if let InitializerAttribute::DefaultError(DefaultErrorAttribute { ty }) = attr {
                     Some(ty.clone())
                 } else {
···
     };
     // `mixed_site` ensures that the data is not accessible to the user-controlled code.
     let data = Ident::new("__data", Span::mixed_site());
-    let init_fields = init_fields(
-        &fields,
-        pinned,
-        !attrs
-            .iter()
-            .any(|attr| matches!(attr, InitializerAttribute::DisableInitializedFieldAccess)),
-        &data,
-        &slot,
-    );
+    let init_fields = init_fields(&fields, pinned, &data, &slot);
     let field_check = make_field_check(&fields, init_kind, &path);
     Ok(quote! {{
-        // We do not want to allow arbitrary returns, so we declare this type as the `Ok` return
-        // type and shadow it later when we insert the arbitrary user code. That way there will be
-        // no possibility of returning without `unsafe`.
-        struct __InitOk;
-
         // Get the data about fields from the supplied type.
         // SAFETY: TODO
         let #data = unsafe {
···
             #path::#get_data()
         };
         // Ensure that `#data` really is of type `#data` and help with type inference:
-        let init = ::pin_init::__internal::#data_trait::make_closure::<_, __InitOk, #error>(
+        let init = ::pin_init::__internal::#data_trait::make_closure::<_, #error>(
             #data,
             move |slot| {
-                {
-                    // Shadow the structure so it cannot be used to return early.
-                    struct __InitOk;
-                    #zeroable_check
-                    #this
-                    #init_fields
-                    #field_check
-                }
-                Ok(__InitOk)
+                #zeroable_check
+                #this
+                #init_fields
+                #field_check
+                // SAFETY: we are the `init!` macro that is allowed to call this.
+                Ok(unsafe { ::pin_init::__internal::InitOk::new() })
             }
         );
         let init = move |slot| -> ::core::result::Result<(), #error> {
···
 fn init_fields(
     fields: &Punctuated<InitializerField, Token![,]>,
     pinned: bool,
-    generate_initialized_accessors: bool,
     data: &Ident,
     slot: &Ident,
 ) -> TokenStream {
···
                 });
                 // Again span for better diagnostics
                 let write = quote_spanned!(ident.span()=> ::core::ptr::write);
+                // NOTE: the field accessor ensures that the initialized field is properly aligned.
+                // Unaligned fields will cause the compiler to emit E0793. We do not support
+                // unaligned fields since `Init::__init` requires an aligned pointer; the call to
+                // `ptr::write` below has the same requirement.
                 let accessor = if pinned {
                     let project_ident = format_ident!("__project_{ident}");
                     quote! {
···
                         unsafe { &mut (*#slot).#ident }
                     }
                 };
-                let accessor = generate_initialized_accessors.then(|| {
-                    quote! {
-                        #(#cfgs)*
-                        #[allow(unused_variables)]
-                        let #ident = #accessor;
-                    }
-                });
                 quote! {
                     #(#attrs)*
                     {
···
                         // SAFETY: TODO
                         unsafe { #write(::core::ptr::addr_of_mut!((*#slot).#ident), #value_ident) };
                     }
-                    #accessor
+                    #(#cfgs)*
+                    #[allow(unused_variables)]
+                    let #ident = #accessor;
                 }
             }
             InitializerKind::Init { ident, value, .. } => {
                 // Again span for better diagnostics
                 let init = format_ident!("init", span = value.span());
+                // NOTE: the field accessor ensures that the initialized field is properly aligned.
+                // Unaligned fields will cause the compiler to emit E0793. We do not support
+                // unaligned fields since `Init::__init` requires an aligned pointer; the call to
+                // `ptr::write` below has the same requirement.
                 let (value_init, accessor) = if pinned {
                     let project_ident = format_ident!("__project_{ident}");
                     (
···
                         },
                     )
                 };
-                let accessor = generate_initialized_accessors.then(|| {
-                    quote! {
-                        #(#cfgs)*
-                        #[allow(unused_variables)]
-                        let #ident = #accessor;
-                    }
-                });
                 quote! {
                     #(#attrs)*
                     {
                         let #init = #value;
                         #value_init
                     }
-                    #accessor
+                    #(#cfgs)*
+                    #[allow(unused_variables)]
+                    let #ident = #accessor;
                 }
             }
             InitializerKind::Code { block: value, .. } => quote! {
···
         if a.path().is_ident("default_error") {
             a.parse_args::<DefaultErrorAttribute>()
                 .map(InitializerAttribute::DefaultError)
-        } else if a.path().is_ident("disable_initialized_field_access") {
-            a.meta
-                .require_path_only()
-                .map(|_| InitializerAttribute::DisableInitializedFieldAccess)
         } else {
             Err(syn::Error::new_spanned(a, "unknown initializer attribute"))
         }
+24-4
rust/pin-init/src/__internal.rs
···4646 }4747}48484949+/// Token type to signify successful initialization.5050+///5151+/// Can only be constructed via the unsafe [`Self::new`] function. The initializer macros use this5252+/// token type to prevent returning `Ok` from an initializer without initializing all fields.5353+pub struct InitOk(());5454+5555+impl InitOk {5656+ /// Creates a new token.5757+ ///5858+ /// # Safety5959+ ///6060+ /// This function may only be called from the `init!` macro in `../internal/src/init.rs`.6161+ #[inline(always)]6262+ pub unsafe fn new() -> Self {6363+ Self(())6464+ }6565+}6666+4967/// This trait is only implemented via the `#[pin_data]` proc-macro. It is used to facilitate5068/// the pin projections within the initializers.5169///···8668 type Datee: ?Sized + HasPinData;87698870 /// Type inference helper function.8989- fn make_closure<F, O, E>(self, f: F) -> F7171+ #[inline(always)]7272+ fn make_closure<F, E>(self, f: F) -> F9073 where9191- F: FnOnce(*mut Self::Datee) -> Result<O, E>,7474+ F: FnOnce(*mut Self::Datee) -> Result<InitOk, E>,9275 {9376 f9477 }···11798 type Datee: ?Sized + HasInitData;11899119100 /// Type inference helper function.120120- fn make_closure<F, O, E>(self, f: F) -> F101101+ #[inline(always)]102102+ fn make_closure<F, E>(self, f: F) -> F121103 where122122- F: FnOnce(*mut Self::Datee) -> Result<O, E>,104104+ F: FnOnce(*mut Self::Datee) -> Result<InitOk, E>,123105 {124106 f125107 }
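`InitOk` is an instance of the token-type pattern: the only way to obtain the value is through a controlled path, so returning `Ok` proves that path ran. A standalone sketch of the pattern (illustrative names; this safe-code variant uses module privacy where the hunk above uses an `unsafe` constructor):

```rust
mod init {
    // Private field: callers outside this module cannot construct the token.
    pub struct InitOk(());

    // The only way to mint a token: by actually running the required
    // finalization step (imagine field-initialization bookkeeping here).
    pub fn finish() -> InitOk {
        InitOk(())
    }

    // The driver only accepts closures that hand back a token, so user code
    // cannot fabricate an early `Ok` return without calling `finish()`.
    pub fn run<E>(f: impl FnOnce() -> Result<InitOk, E>) -> Result<(), E> {
        f().map(|_| ())
    }
}

pub fn demo() -> bool {
    init::run::<()>(|| Ok(init::finish())).is_ok()
}
```

This is why the hunk can drop the old `__InitOk` shadowing trick: unconstructibility of the token replaces scope-based shadowing as the guard against early returns.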
+16-14
samples/rust/rust_dma.rs
···6868 CoherentAllocation::alloc_coherent(pdev.as_ref(), TEST_VALUES.len(), GFP_KERNEL)?;69697070 for (i, value) in TEST_VALUES.into_iter().enumerate() {7171- kernel::dma_write!(ca[i] = MyStruct::new(value.0, value.1))?;7171+ kernel::dma_write!(ca, [i]?, MyStruct::new(value.0, value.1));7272 }73737474 let size = 4 * page::PAGE_SIZE;···8585 }8686}87878888+impl DmaSampleDriver {8989+ fn check_dma(&self) -> Result {9090+ for (i, value) in TEST_VALUES.into_iter().enumerate() {9191+ let val0 = kernel::dma_read!(self.ca, [i]?.h);9292+ let val1 = kernel::dma_read!(self.ca, [i]?.b);9393+9494+ assert_eq!(val0, value.0);9595+ assert_eq!(val1, value.1);9696+ }9797+9898+ Ok(())9999+ }100100+}101101+88102#[pinned_drop]89103impl PinnedDrop for DmaSampleDriver {90104 fn drop(self: Pin<&mut Self>) {91105 dev_info!(self.pdev, "Unload DMA test driver.\n");921069393- for (i, value) in TEST_VALUES.into_iter().enumerate() {9494- let val0 = kernel::dma_read!(self.ca[i].h);9595- let val1 = kernel::dma_read!(self.ca[i].b);9696- assert!(val0.is_ok());9797- assert!(val1.is_ok());9898-9999- if let Ok(val0) = val0 {100100- assert_eq!(val0, value.0);101101- }102102- if let Ok(val1) = val1 {103103- assert_eq!(val1, value.1);104104- }105105- }107107+ assert!(self.check_dma().is_ok());106108107109 for (i, entry) in self.sgt.iter().enumerate() {108110 dev_info!(
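The `[i]?` form used in the sample above is a fallible index projection: the index is bounds-checked at runtime and an error surfaces through `?` instead of failing the build. A standalone sketch of the idea, mirroring the `OutOfBound` error from the projection macro (`proj_index` is a made-up name, not the kernel API):

```rust
#[derive(Debug, PartialEq)]
pub struct OutOfBound;

// Bounds-check `index` against the raw slice pointer's length, returning a
// pointer to the element on success. `len()` on a raw slice pointer reads
// only the metadata and never dereferences (stable since Rust 1.79).
fn proj_index<T>(slice: *mut [T], index: usize) -> Result<*mut T, OutOfBound> {
    if index < slice.len() {
        // SAFETY: `index` is in bounds, so the offset pointer stays within
        // the same allocation as `slice`.
        Ok(unsafe { (slice as *mut T).add(index) })
    } else {
        Err(OutOfBound)
    }
}

pub fn demo() -> (i32, bool) {
    let mut data = [10, 20, 30];
    let slice: *mut [i32] = &mut data[..];
    let p = proj_index(slice, 1).expect("index 1 is in bounds");
    // SAFETY: `p` points to `data[1]`, which is valid for reads.
    let v = unsafe { *p };
    let oob = proj_index(slice, 9).is_err();
    (v, oob)
}
```

Note how this removes the old sample's awkward `Result`-unwrapping: the bounds failure is reported once at the projection step rather than on every read.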
···11+// SPDX-License-Identifier: GPL-2.022+/*33+ * wq_stall - Test module for the workqueue stall detector.44+ *55+ * Deliberately creates a workqueue stall so the watchdog fires and66+ * prints diagnostic output. Useful for verifying that the stall77+ * detector correctly identifies stuck workers and produces useful88+ * backtraces.99+ *1010+ * The stall is triggered by clearing PF_WQ_WORKER before sleeping,1111+ * which hides the worker from the concurrency manager. A second1212+ * work item queued on the same pool then sits in the worklist with1313+ * no worker available to process it.1414+ *1515+ * After ~30s the workqueue watchdog fires:1616+ * BUG: workqueue lockup - pool cpus=N ...1717+ *1818+ * Build:1919+ * make -C <kernel tree> M=samples/workqueue/stall_detector modules2020+ *2121+ * Copyright (c) 2026 Meta Platforms, Inc. and affiliates.2222+ * Copyright (c) 2026 Breno Leitao <leitao@debian.org>2323+ */2424+2525+#include <linux/module.h>2626+#include <linux/workqueue.h>2727+#include <linux/wait.h>2828+#include <linux/atomic.h>2929+#include <linux/sched.h>3030+3131+static DECLARE_WAIT_QUEUE_HEAD(stall_wq_head);3232+static atomic_t wake_condition = ATOMIC_INIT(0);3333+static struct work_struct stall_work1;3434+static struct work_struct stall_work2;3535+3636+static void stall_work2_fn(struct work_struct *work)3737+{3838+ pr_info("wq_stall: second work item finally ran\n");3939+}4040+4141+static void stall_work1_fn(struct work_struct *work)4242+{4343+ pr_info("wq_stall: first work item running on cpu %d\n",4444+ raw_smp_processor_id());4545+4646+ /*4747+ * Queue second item while we're still counted as running4848+ * (pool->nr_running > 0). Since schedule_work() on a per-CPU4949+ * workqueue targets raw_smp_processor_id(), item 2 lands on the5050+ * same pool. 
__queue_work -> kick_pool -> need_more_worker()5151+ * sees nr_running > 0 and does NOT wake a new worker.5252+ */5353+ schedule_work(&stall_work2);5454+5555+ /*5656+ * Hide from the workqueue concurrency manager. Without5757+ * PF_WQ_WORKER, schedule() won't call wq_worker_sleeping(),5858+ * so nr_running is never decremented and no replacement5959+ * worker is created. Item 2 stays stuck in pool->worklist.6060+ */6161+ current->flags &= ~PF_WQ_WORKER;6262+6363+ pr_info("wq_stall: entering wait_event_idle (PF_WQ_WORKER cleared)\n");6464+ pr_info("wq_stall: expect 'BUG: workqueue lockup' in ~30-60s\n");6565+ wait_event_idle(stall_wq_head, atomic_read(&wake_condition) != 0);6666+6767+ /* Restore so process_one_work() cleanup works correctly */6868+ current->flags |= PF_WQ_WORKER;6969+ pr_info("wq_stall: woke up, PF_WQ_WORKER restored\n");7070+}7171+7272+static int __init wq_stall_init(void)7373+{7474+ pr_info("wq_stall: loading\n");7575+7676+ INIT_WORK(&stall_work1, stall_work1_fn);7777+ INIT_WORK(&stall_work2, stall_work2_fn);7878+ schedule_work(&stall_work1);7979+8080+ return 0;8181+}8282+8383+static void __exit wq_stall_exit(void)8484+{8585+ pr_info("wq_stall: unloading\n");8686+ atomic_set(&wake_condition, 1);8787+ wake_up(&stall_wq_head);8888+ flush_work(&stall_work1);8989+ flush_work(&stall_work2);9090+ pr_info("wq_stall: all work flushed, module unloaded\n");9191+}9292+9393+module_init(wq_stall_init);9494+module_exit(wq_stall_exit);9595+9696+MODULE_LICENSE("GPL");9797+MODULE_DESCRIPTION("Reproduce workqueue stall caused by PF_WQ_WORKER misuse");9898+MODULE_AUTHOR("Breno Leitao <leitao@debian.org>");
+3-1
scripts/Makefile.build
···310310311311# The features in this list are the ones allowed for non-`rust/` code.312312#313313+# - Stable since Rust 1.79.0: `feature(slice_ptr_len)`.313314# - Stable since Rust 1.81.0: `feature(lint_reasons)`.314315# - Stable since Rust 1.82.0: `feature(asm_const)`,315316# `feature(offset_of_nested)`, `feature(raw_ref_op)`.317317+# - Stable since Rust 1.84.0: `feature(strict_provenance)`.316318# - Stable since Rust 1.87.0: `feature(asm_goto)`.317319# - Expected to become stable: `feature(arbitrary_self_types)`.318320# - To be determined: `feature(used_with_arg)`.319321#320322# Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on321323# the unstable features in use.322322-rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,used_with_arg324324+rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,slice_ptr_len,strict_provenance,used_with_arg323325324326# `--out-dir` is required to avoid temporaries being created by `rustc` in the325327# current working directory, which may be not accessible in the out-of-tree
+4-5
scripts/livepatch/klp-build
···285285# application from appending it with '+' due to a dirty git working tree.286286set_kernelversion() {287287 local file="$SRC/scripts/setlocalversion"288288- local localversion288288+ local kernelrelease289289290290 stash_file "$file"291291292292- localversion="$(cd "$SRC" && make --no-print-directory kernelversion)"293293- localversion="$(cd "$SRC" && KERNELVERSION="$localversion" ./scripts/setlocalversion)"294294- [[ -z "$localversion" ]] && die "setlocalversion failed"292292+ kernelrelease="$(cd "$SRC" && make syncconfig &>/dev/null && make -s kernelrelease)"293293+ [[ -z "$kernelrelease" ]] && die "failed to get kernel version"295294296296- sed -i "2i echo $localversion; exit 0" scripts/setlocalversion295295+ sed -i "2i echo $kernelrelease; exit 0" scripts/setlocalversion297296}298297299298get_patch_files() {
+134-91
security/apparmor/apparmorfs.c
···3232#include "include/crypto.h"3333#include "include/ipc.h"3434#include "include/label.h"3535+#include "include/lib.h"3536#include "include/policy.h"3637#include "include/policy_ns.h"3738#include "include/resource.h"···6362 * securityfs and apparmorfs filesystems.6463 */65646565+#define IREF_POISON 10166666767/*6868 * support fns···8179 if (!private)8280 return;83818484- aa_put_loaddata(private->loaddata);8282+ aa_put_i_loaddata(private->loaddata);8583 kvfree(private);8684}8785···155153 return 0;156154}157155156156+static struct aa_ns *get_ns_common_ref(struct aa_common_ref *ref)157157+{158158+ if (ref) {159159+ struct aa_label *reflabel = container_of(ref, struct aa_label,160160+ count);161161+ return aa_get_ns(labels_ns(reflabel));162162+ }163163+164164+ return NULL;165165+}166166+167167+static struct aa_proxy *get_proxy_common_ref(struct aa_common_ref *ref)168168+{169169+ if (ref)170170+ return aa_get_proxy(container_of(ref, struct aa_proxy, count));171171+172172+ return NULL;173173+}174174+175175+static struct aa_loaddata *get_loaddata_common_ref(struct aa_common_ref *ref)176176+{177177+ if (ref)178178+ return aa_get_i_loaddata(container_of(ref, struct aa_loaddata,179179+ count));180180+ return NULL;181181+}182182+183183+static void aa_put_common_ref(struct aa_common_ref *ref)184184+{185185+ if (!ref)186186+ return;187187+188188+ switch (ref->reftype) {189189+ case REF_RAWDATA:190190+ aa_put_i_loaddata(container_of(ref, struct aa_loaddata,191191+ count));192192+ break;193193+ case REF_PROXY:194194+ aa_put_proxy(container_of(ref, struct aa_proxy,195195+ count));196196+ break;197197+ case REF_NS:198198+ /* ns count is held on its unconfined label */199199+ aa_put_ns(labels_ns(container_of(ref, struct aa_label, count)));200200+ break;201201+ default:202202+ AA_BUG(true, "unknown refcount type");203203+ break;204204+ }205205+}206206+207207+static void aa_get_common_ref(struct aa_common_ref *ref)208208+{209209+ kref_get(&ref->count);210210+}211211+212212+static 
void aafs_evict(struct inode *inode)213213+{214214+ struct aa_common_ref *ref = inode->i_private;215215+216216+ clear_inode(inode);217217+ aa_put_common_ref(ref);218218+ inode->i_private = (void *) IREF_POISON;219219+}220220+158221static void aafs_free_inode(struct inode *inode)159222{160223 if (S_ISLNK(inode->i_mode))···229162230163static const struct super_operations aafs_super_ops = {231164 .statfs = simple_statfs,165165+ .evict_inode = aafs_evict,232166 .free_inode = aafs_free_inode,233167 .show_path = aafs_show_path,234168};···330262 * aafs_remove(). Will return ERR_PTR on failure.331263 */332264static struct dentry *aafs_create(const char *name, umode_t mode,333333- struct dentry *parent, void *data, void *link,265265+ struct dentry *parent,266266+ struct aa_common_ref *data, void *link,334267 const struct file_operations *fops,335268 const struct inode_operations *iops)336269{···368299 goto fail_dentry;369300 inode_unlock(dir);370301302302+ if (data)303303+ aa_get_common_ref(data);304304+371305 return dentry;372306373307fail_dentry:···395323 * see aafs_create396324 */397325static struct dentry *aafs_create_file(const char *name, umode_t mode,398398- struct dentry *parent, void *data,326326+ struct dentry *parent,327327+ struct aa_common_ref *data,399328 const struct file_operations *fops)400329{401330 return aafs_create(name, mode, parent, data, NULL, fops, NULL);···482409483410 data->size = copy_size;484411 if (copy_from_user(data->data, userbuf, copy_size)) {485485- aa_put_loaddata(data);412412+ /* trigger free - don't need to put pcount */413413+ aa_put_i_loaddata(data);486414 return ERR_PTR(-EFAULT);487415 }488416···491417}492418493419static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,494494- loff_t *pos, struct aa_ns *ns)420420+ loff_t *pos, struct aa_ns *ns,421421+ const struct cred *ocred)495422{496423 struct aa_loaddata *data;497424 struct aa_label *label;···503428 /* high level check about policy management - fine grained 
in504429 * below after unpack505430 */506506- error = aa_may_manage_policy(current_cred(), label, ns, mask);431431+ error = aa_may_manage_policy(current_cred(), label, ns, ocred, mask);507432 if (error)508433 goto end_section;509434···511436 error = PTR_ERR(data);512437 if (!IS_ERR(data)) {513438 error = aa_replace_profiles(ns, label, mask, data);514514- aa_put_loaddata(data);439439+ /* put pcount, which will put count and free if no440440+ * profiles referencing it.441441+ */442442+ aa_put_profile_loaddata(data);515443 }516444end_section:517445 end_current_label_crit_section(label);···526448static ssize_t profile_load(struct file *f, const char __user *buf, size_t size,527449 loff_t *pos)528450{529529- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);530530- int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns);451451+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);452452+ int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns,453453+ f->f_cred);531454532455 aa_put_ns(ns);533456···544465static ssize_t profile_replace(struct file *f, const char __user *buf,545466 size_t size, loff_t *pos)546467{547547- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);468468+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);548469 int error = policy_update(AA_MAY_LOAD_POLICY | AA_MAY_REPLACE_POLICY,549549- buf, size, pos, ns);470470+ buf, size, pos, ns, f->f_cred);550471 aa_put_ns(ns);551472552473 return error;···564485 struct aa_loaddata *data;565486 struct aa_label *label;566487 ssize_t error;567567- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);488488+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);568489569490 label = begin_current_label_crit_section();570491 /* high level check about policy management - fine grained in571492 * below after unpack572493 */573494 error = aa_may_manage_policy(current_cred(), label, ns,574574- AA_MAY_REMOVE_POLICY);495495+ f->f_cred, AA_MAY_REMOVE_POLICY);575496 if (error)576497 
goto out;577498···585506 if (!IS_ERR(data)) {586507 data->data[size] = 0;587508 error = aa_remove_profiles(ns, label, data->data, size);588588- aa_put_loaddata(data);509509+ aa_put_profile_loaddata(data);589510 }590511 out:591512 end_current_label_crit_section(label);···654575 if (!rev)655576 return -ENOMEM;656577657657- rev->ns = aa_get_ns(inode->i_private);578578+ rev->ns = get_ns_common_ref(inode->i_private);658579 if (!rev->ns)659580 rev->ns = aa_get_current_ns();660581 file->private_data = rev;···11401061static int seq_profile_open(struct inode *inode, struct file *file,11411062 int (*show)(struct seq_file *, void *))11421063{11431143- struct aa_proxy *proxy = aa_get_proxy(inode->i_private);10641064+ struct aa_proxy *proxy = get_proxy_common_ref(inode->i_private);11441065 int error = single_open(file, show, proxy);1145106611461067 if (error) {···13321253static int seq_rawdata_open(struct inode *inode, struct file *file,13331254 int (*show)(struct seq_file *, void *))13341255{13351335- struct aa_loaddata *data = __aa_get_loaddata(inode->i_private);12561256+ struct aa_loaddata *data = get_loaddata_common_ref(inode->i_private);13361257 int error;1337125813381259 if (!data)13391339- /* lost race this ent is being reaped */13401260 return -ENOENT;1341126113421262 error = single_open(file, show, data);13431263 if (error) {13441264 AA_BUG(file->private_data &&13451265 ((struct seq_file *)file->private_data)->private);13461346- aa_put_loaddata(data);12661266+ aa_put_i_loaddata(data);13471267 }1348126813491269 return error;···13531275 struct seq_file *seq = (struct seq_file *) file->private_data;1354127613551277 if (seq)13561356- aa_put_loaddata(seq->private);12781278+ aa_put_i_loaddata(seq->private);1357127913581280 return single_release(inode, file);13591281}···14651387 if (!aa_current_policy_view_capable(NULL))14661388 return -EACCES;1467138914681468- loaddata = __aa_get_loaddata(inode->i_private);13901390+ loaddata = 
get_loaddata_common_ref(inode->i_private);14691391 if (!loaddata)14701470- /* lost race: this entry is being reaped */14711392 return -ENOENT;1472139314731394 private = rawdata_f_data_alloc(loaddata->size);···14911414 return error;1492141514931416fail_private_alloc:14941494- aa_put_loaddata(loaddata);14171417+ aa_put_i_loaddata(loaddata);14951418 return error;14961419}14971420···1508143115091432 for (i = 0; i < AAFS_LOADDATA_NDENTS; i++) {15101433 if (!IS_ERR_OR_NULL(rawdata->dents[i])) {15111511- /* no refcounts on i_private */15121434 aafs_remove(rawdata->dents[i]);15131435 rawdata->dents[i] = NULL;15141436 }···15501474 return PTR_ERR(dir);15511475 rawdata->dents[AAFS_LOADDATA_DIR] = dir;1552147615531553- dent = aafs_create_file("abi", S_IFREG | 0444, dir, rawdata,14771477+ dent = aafs_create_file("abi", S_IFREG | 0444, dir, &rawdata->count,15541478 &seq_rawdata_abi_fops);15551479 if (IS_ERR(dent))15561480 goto fail;15571481 rawdata->dents[AAFS_LOADDATA_ABI] = dent;1558148215591559- dent = aafs_create_file("revision", S_IFREG | 0444, dir, rawdata,15601560- &seq_rawdata_revision_fops);14831483+ dent = aafs_create_file("revision", S_IFREG | 0444, dir,14841484+ &rawdata->count,14851485+ &seq_rawdata_revision_fops);15611486 if (IS_ERR(dent))15621487 goto fail;15631488 rawdata->dents[AAFS_LOADDATA_REVISION] = dent;1564148915651490 if (aa_g_hash_policy) {15661491 dent = aafs_create_file("sha256", S_IFREG | 0444, dir,15671567- rawdata, &seq_rawdata_hash_fops);14921492+ &rawdata->count,14931493+ &seq_rawdata_hash_fops);15681494 if (IS_ERR(dent))15691495 goto fail;15701496 rawdata->dents[AAFS_LOADDATA_HASH] = dent;15711497 }1572149815731499 dent = aafs_create_file("compressed_size", S_IFREG | 0444, dir,15741574- rawdata,15001500+ &rawdata->count,15751501 &seq_rawdata_compressed_size_fops);15761502 if (IS_ERR(dent))15771503 goto fail;15781504 rawdata->dents[AAFS_LOADDATA_COMPRESSED_SIZE] = dent;1579150515801580- dent = aafs_create_file("raw_data", S_IFREG | 0444,15811581- 
dir, rawdata, &rawdata_fops);15061506+ dent = aafs_create_file("raw_data", S_IFREG | 0444, dir,15071507+ &rawdata->count, &rawdata_fops);15821508 if (IS_ERR(dent))15831509 goto fail;15841510 rawdata->dents[AAFS_LOADDATA_DATA] = dent;···1588151015891511 rawdata->ns = aa_get_ns(ns);15901512 list_add(&rawdata->list, &ns->rawdata_list);15911591- /* no refcount on inode rawdata */1592151315931514 return 0;1594151515951516fail:15961517 remove_rawdata_dents(rawdata);15971597-15981518 return PTR_ERR(dent);15991519}16001520#endif /* CONFIG_SECURITY_APPARMOR_EXPORT_BINARY */···16161540 __aafs_profile_rmdir(child);1617154116181542 for (i = AAFS_PROF_SIZEOF - 1; i >= 0; --i) {16191619- struct aa_proxy *proxy;16201543 if (!profile->dents[i])16211544 continue;1622154516231623- proxy = d_inode(profile->dents[i])->i_private;16241546 aafs_remove(profile->dents[i]);16251625- aa_put_proxy(proxy);16261547 profile->dents[i] = NULL;16271548 }16281549}···16531580 struct aa_profile *profile,16541581 const struct file_operations *fops)16551582{16561656- struct aa_proxy *proxy = aa_get_proxy(profile->label.proxy);16571657- struct dentry *dent;16581658-16591659- dent = aafs_create_file(name, S_IFREG | 0444, dir, proxy, fops);16601660- if (IS_ERR(dent))16611661- aa_put_proxy(proxy);16621662-16631663- return dent;15831583+ return aafs_create_file(name, S_IFREG | 0444, dir, &profile->label.proxy->count, fops);16641584}1665158516661586#ifdef CONFIG_SECURITY_APPARMOR_EXPORT_BINARY···17031637 struct delayed_call *done,17041638 const char *name)17051639{17061706- struct aa_proxy *proxy = inode->i_private;16401640+ struct aa_common_ref *ref = inode->i_private;16411641+ struct aa_proxy *proxy = container_of(ref, struct aa_proxy, count);17071642 struct aa_label *label;17081643 struct aa_profile *profile;17091644 char *target;···18461779 if (profile->rawdata) {18471780 if (aa_g_hash_policy) {18481781 dent = aafs_create("raw_sha256", S_IFLNK | 0444, dir,18491849- profile->label.proxy, NULL, 
NULL,18501850- &rawdata_link_sha256_iops);17821782+ &profile->label.proxy->count, NULL,17831783+ NULL, &rawdata_link_sha256_iops);18511784 if (IS_ERR(dent))18521785 goto fail;18531853- aa_get_proxy(profile->label.proxy);18541786 profile->dents[AAFS_PROF_RAW_HASH] = dent;18551787 }18561788 dent = aafs_create("raw_abi", S_IFLNK | 0444, dir,18571857- profile->label.proxy, NULL, NULL,17891789+ &profile->label.proxy->count, NULL, NULL,18581790 &rawdata_link_abi_iops);18591791 if (IS_ERR(dent))18601792 goto fail;18611861- aa_get_proxy(profile->label.proxy);18621793 profile->dents[AAFS_PROF_RAW_ABI] = dent;1863179418641795 dent = aafs_create("raw_data", S_IFLNK | 0444, dir,18651865- profile->label.proxy, NULL, NULL,17961796+ &profile->label.proxy->count, NULL, NULL,18661797 &rawdata_link_data_iops);18671798 if (IS_ERR(dent))18681799 goto fail;18691869- aa_get_proxy(profile->label.proxy);18701800 profile->dents[AAFS_PROF_RAW_DATA] = dent;18711801 }18721802#endif /*CONFIG_SECURITY_APPARMOR_EXPORT_BINARY */···18941830 int error;1895183118961832 label = begin_current_label_crit_section();18971897- error = aa_may_manage_policy(current_cred(), label, NULL,18331833+ error = aa_may_manage_policy(current_cred(), label, NULL, NULL,18981834 AA_MAY_LOAD_POLICY);18991835 end_current_label_crit_section(label);19001836 if (error)19011837 return ERR_PTR(error);1902183819031903- parent = aa_get_ns(dir->i_private);18391839+ parent = get_ns_common_ref(dir->i_private);19041840 AA_BUG(d_inode(ns_subns_dir(parent)) != dir);1905184119061842 /* we have to unlock and then relock to get locking order right···19441880 int error;1945188119461882 label = begin_current_label_crit_section();19471947- error = aa_may_manage_policy(current_cred(), label, NULL,18831883+ error = aa_may_manage_policy(current_cred(), label, NULL, NULL,19481884 AA_MAY_LOAD_POLICY);19491885 end_current_label_crit_section(label);19501886 if (error)19511887 return error;1952188819531953- parent = 
aa_get_ns(dir->i_private);18891889+ parent = get_ns_common_ref(dir->i_private);19541890 /* rmdir calls the generic securityfs functions to remove files19551891 * from the apparmor dir. It is up to the apparmor ns locking19561892 * to avoid races.···2020195620211957 __aa_fs_list_remove_rawdata(ns);2022195820232023- if (ns_subns_dir(ns)) {20242024- sub = d_inode(ns_subns_dir(ns))->i_private;20252025- aa_put_ns(sub);20262026- }20272027- if (ns_subload(ns)) {20282028- sub = d_inode(ns_subload(ns))->i_private;20292029- aa_put_ns(sub);20302030- }20312031- if (ns_subreplace(ns)) {20322032- sub = d_inode(ns_subreplace(ns))->i_private;20332033- aa_put_ns(sub);20342034- }20352035- if (ns_subremove(ns)) {20362036- sub = d_inode(ns_subremove(ns))->i_private;20372037- aa_put_ns(sub);20382038- }20392039- if (ns_subrevision(ns)) {20402040- sub = d_inode(ns_subrevision(ns))->i_private;20412041- aa_put_ns(sub);20422042- }20432043-20441959 for (i = AAFS_NS_SIZEOF - 1; i >= 0; --i) {20451960 aafs_remove(ns->dents[i]);20461961 ns->dents[i] = NULL;···20442001 return PTR_ERR(dent);20452002 ns_subdata_dir(ns) = dent;2046200320472047- dent = aafs_create_file("revision", 0444, dir, ns,20042004+ dent = aafs_create_file("revision", 0444, dir,20052005+ &ns->unconfined->label.count,20482006 &aa_fs_ns_revision_fops);20492007 if (IS_ERR(dent))20502008 return PTR_ERR(dent);20512051- aa_get_ns(ns);20522009 ns_subrevision(ns) = dent;2053201020542054- dent = aafs_create_file(".load", 0640, dir, ns,20552055- &aa_fs_profile_load);20112011+ dent = aafs_create_file(".load", 0640, dir,20122012+ &ns->unconfined->label.count,20132013+ &aa_fs_profile_load);20562014 if (IS_ERR(dent))20572015 return PTR_ERR(dent);20582058- aa_get_ns(ns);20592016 ns_subload(ns) = dent;2060201720612061- dent = aafs_create_file(".replace", 0640, dir, ns,20622062- &aa_fs_profile_replace);20182018+ dent = aafs_create_file(".replace", 0640, dir,20192019+ &ns->unconfined->label.count,20202020+ &aa_fs_profile_replace);20632021 if 
(IS_ERR(dent))20642022 return PTR_ERR(dent);20652065- aa_get_ns(ns);20662023 ns_subreplace(ns) = dent;2067202420682068- dent = aafs_create_file(".remove", 0640, dir, ns,20692069- &aa_fs_profile_remove);20252025+ dent = aafs_create_file(".remove", 0640, dir,20262026+ &ns->unconfined->label.count,20272027+ &aa_fs_profile_remove);20702028 if (IS_ERR(dent))20712029 return PTR_ERR(dent);20722072- aa_get_ns(ns);20732030 ns_subremove(ns) = dent;2074203120752032 /* use create_dentry so we can supply private data */20762076- dent = aafs_create("namespaces", S_IFDIR | 0755, dir, ns, NULL, NULL,20772077- &ns_dir_inode_operations);20332033+ dent = aafs_create("namespaces", S_IFDIR | 0755, dir,20342034+ &ns->unconfined->label.count,20352035+ NULL, NULL, &ns_dir_inode_operations);20782036 if (IS_ERR(dent))20792037 return PTR_ERR(dent);20802080- aa_get_ns(ns);20812038 ns_subns_dir(ns) = dent;2082203920832040 return 0;
+8-8
security/apparmor/include/label.h
···102102103103struct aa_label;104104struct aa_proxy {105105- struct kref count;105105+ struct aa_common_ref count;106106 struct aa_label __rcu *label;107107};108108···125125 * vec: vector of profiles comprising the compound label126126 */127127struct aa_label {128128- struct kref count;128128+ struct aa_common_ref count;129129 struct rb_node node;130130 struct rcu_head rcu;131131 struct aa_proxy *proxy;···357357 */358358static inline struct aa_label *__aa_get_label(struct aa_label *l)359359{360360- if (l && kref_get_unless_zero(&l->count))360360+ if (l && kref_get_unless_zero(&l->count.count))361361 return l;362362363363 return NULL;···366366static inline struct aa_label *aa_get_label(struct aa_label *l)367367{368368 if (l)369369- kref_get(&(l->count));369369+ kref_get(&(l->count.count));370370371371 return l;372372}···386386 rcu_read_lock();387387 do {388388 c = rcu_dereference(*l);389389- } while (c && !kref_get_unless_zero(&c->count));389389+ } while (c && !kref_get_unless_zero(&c->count.count));390390 rcu_read_unlock();391391392392 return c;···426426static inline void aa_put_label(struct aa_label *l)427427{428428 if (l)429429- kref_put(&l->count, aa_label_kref);429429+ kref_put(&l->count.count, aa_label_kref);430430}431431432432/* wrapper fn to indicate semantics of the check */···443443static inline struct aa_proxy *aa_get_proxy(struct aa_proxy *proxy)444444{445445 if (proxy)446446- kref_get(&(proxy->count));446446+ kref_get(&(proxy->count.count));447447448448 return proxy;449449}···451451static inline void aa_put_proxy(struct aa_proxy *proxy)452452{453453 if (proxy)454454- kref_put(&proxy->count, aa_proxy_kref);454454+ kref_put(&proxy->count.count, aa_proxy_kref);455455}456456457457void __aa_proxy_redirect(struct aa_label *orig, struct aa_label *new);
+12
security/apparmor/include/lib.h
···102102/* Security blob offsets */103103extern struct lsm_blob_sizes apparmor_blob_sizes;104104105105+enum reftype {106106+ REF_NS,107107+ REF_PROXY,108108+ REF_RAWDATA,109109+};110110+111111+/* common reference count used by data that shows up in aafs */112112+struct aa_common_ref {113113+ struct kref count;114114+ enum reftype reftype;115115+};116116+105117/**106118 * aa_strneq - compare null terminated @str to a non null terminated substring107119 * @str: a null terminated string
+2
security/apparmor/include/policy_ns.h
···1818#include "label.h"1919#include "policy.h"20202121+/* Match max depth of user namespaces */2222+#define MAX_NS_DEPTH 3221232224/* struct aa_ns_acct - accounting of profiles in namespace2325 * @max_size: maximum space allowed for all profiles in namespace
+49-34
security/apparmor/include/policy_unpack.h
···8787 u32 version;8888};89899090-/*9191- * struct aa_loaddata - buffer of policy raw_data set9090+/* struct aa_loaddata - buffer of policy raw_data set9191+ * @count: inode/filesystem refcount - use aa_get_i_loaddata()9292+ * @pcount: profile refcount - use aa_get_profile_loaddata()9393+ * @list: list the loaddata is on9494+ * @work: used to do a delayed cleanup9595+ * @dents: refs to dents created in aafs9696+ * @ns: the namespace this loaddata was loaded into9797+ * @name:9898+ * @size: the size of the data that was loaded9999+ * @compressed_size: the size of the data when it is compressed100100+ * @revision: unique revision count that this data was loaded as101101+ * @abi: the abi number the loaddata uses102102+ * @hash: a hash of the loaddata, used to help dedup data92103 *9393- * there is no loaddata ref for being on ns list, nor a ref from9494- * d_inode(@dentry) when grab a ref from these, @ns->lock must be held9595- * && __aa_get_loaddata() needs to be used, and the return value9696- * checked, if NULL the loaddata is already being reaped and should be9797- * considered dead.104104+ * There is no loaddata ref for being on ns->rawdata_list, so105105+ * @ns->lock must be held when walking the list. 
Dentries and106106+ * inode opens hold refs on @count; profiles hold refs on @pcount.107107+ * When the last @pcount drops, do_ploaddata_rmfs() removes the108108+ * fs entries and drops the associated @count ref.98109 */99110struct aa_loaddata {100100- struct kref count;111111+ struct aa_common_ref count;112112+ struct kref pcount;101113 struct list_head list;102114 struct work_struct work;103115 struct dentry *dents[AAFS_LOADDATA_NDENTS];···131119int aa_unpack(struct aa_loaddata *udata, struct list_head *lh, const char **ns);132120133121/**134134- * __aa_get_loaddata - get a reference count to uncounted data reference135135- * @data: reference to get a count on136136- *137137- * Returns: pointer to reference OR NULL if race is lost and reference is138138- * being repeated.139139- * Requires: @data->ns->lock held, and the return code MUST be checked140140- *141141- * Use only from inode->i_private and @data->list found references142142- */143143-static inline struct aa_loaddata *144144-__aa_get_loaddata(struct aa_loaddata *data)145145-{146146- if (data && kref_get_unless_zero(&(data->count)))147147- return data;148148-149149- return NULL;150150-}151151-152152-/**153122 * aa_get_loaddata - get a reference count from a counted data reference154123 * @data: reference to get a count on155124 *156156- * Returns: point to reference125125+ * Returns: pointer to reference157126 * Requires: @data to have a valid reference count on it. 
It is a bug158127 * if the race to reap can be encountered when it is used.159128 */160129static inline struct aa_loaddata *161161-aa_get_loaddata(struct aa_loaddata *data)130130+aa_get_i_loaddata(struct aa_loaddata *data)162131{163163- struct aa_loaddata *tmp = __aa_get_loaddata(data);164132165165- AA_BUG(data && !tmp);133133+ if (data)134134+ kref_get(&(data->count.count));135135+ return data;136136+}166137167167- return tmp;138138+139139+/**140140+ * aa_get_profile_loaddata - get a profile reference count on loaddata141141+ * @data: reference to get a count on142142+ *143143+ * Returns: pointer to reference144144+ * Requires: @data to have a valid reference count on it.145145+ */146146+static inline struct aa_loaddata *147147+aa_get_profile_loaddata(struct aa_loaddata *data)148148+{149149+ if (data)150150+ kref_get(&(data->pcount));151151+ return data;168152}169153170154void __aa_loaddata_update(struct aa_loaddata *data, long revision);171155bool aa_rawdata_eq(struct aa_loaddata *l, struct aa_loaddata *r);172156void aa_loaddata_kref(struct kref *kref);157157+void aa_ploaddata_kref(struct kref *kref);173158struct aa_loaddata *aa_loaddata_alloc(size_t size);174174-static inline void aa_put_loaddata(struct aa_loaddata *data)159159+static inline void aa_put_i_loaddata(struct aa_loaddata *data)175160{176161 if (data)177177- kref_put(&data->count, aa_loaddata_kref);162162+ kref_put(&data->count.count, aa_loaddata_kref);163163+}164164+165165+static inline void aa_put_profile_loaddata(struct aa_loaddata *data)166166+{167167+ if (data)168168+ kref_put(&data->pcount, aa_ploaddata_kref);178169}179170180171#if IS_ENABLED(CONFIG_KUNIT)
···160160 if (state_count == 0)161161 goto out;162162 for (i = 0; i < state_count; i++) {163163- if (!(BASE_TABLE(dfa)[i] & MATCH_FLAG_DIFF_ENCODE) &&164164- (DEFAULT_TABLE(dfa)[i] >= state_count))163163+ if (DEFAULT_TABLE(dfa)[i] >= state_count) {164164+ pr_err("AppArmor DFA default state out of bounds");165165 goto out;166166+ }166167 if (BASE_TABLE(dfa)[i] & MATCH_FLAGS_INVALID) {167168 pr_err("AppArmor DFA state with invalid match flags");168169 goto out;···202201 size_t j, k;203202204203 for (j = i;205205- (BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE) &&206206- !(BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE);204204+ ((BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE) &&205205+ !(BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE_VERIFIED));207206 j = k) {207207+ if (BASE_TABLE(dfa)[j] & MARK_DIFF_ENCODE)208208+ /* loop in current chain */209209+ goto out;208210 k = DEFAULT_TABLE(dfa)[j];209211 if (j == k)212212+ /* self loop */210213 goto out;211211- if (k < j)212212- break; /* already verified */213214 BASE_TABLE(dfa)[j] |= MARK_DIFF_ENCODE;215215+ }216216+ /* move mark to verified */217217+ for (j = i;218218+ (BASE_TABLE(dfa)[j] & MATCH_FLAG_DIFF_ENCODE);219219+ j = k) {220220+ k = DEFAULT_TABLE(dfa)[j];221221+ if (j < i)222222+ /* jumps to state/chain that has been223223+ * verified224224+ */225225+ break;226226+ BASE_TABLE(dfa)[j] &= ~MARK_DIFF_ENCODE;227227+ BASE_TABLE(dfa)[j] |= MARK_DIFF_ENCODE_VERIFIED;214228 }215229 }216230 error = 0;···479463 if (dfa->tables[YYTD_ID_EC]) {480464 /* Equivalence class table defined */481465 u8 *equiv = EQUIV_TABLE(dfa);482482- for (; len; len--)483483- match_char(state, def, base, next, check,484484- equiv[(u8) *str++]);466466+ for (; len; len--) {467467+ u8 c = equiv[(u8) *str];468468+469469+ match_char(state, def, base, next, check, c);470470+ str++;471471+ }485472 } else {486473 /* default is direct to next state */487487- for (; len; len--)488488- match_char(state, def, base, next, check, (u8) *str++);474474+ for (; len; len--) {475475+ 
match_char(state, def, base, next, check, (u8) *str);476476+ str++;477477+ }489478 }490479491480 return state;···524503 /* Equivalence class table defined */525504 u8 *equiv = EQUIV_TABLE(dfa);526505 /* default is direct to next state */527527- while (*str)528528- match_char(state, def, base, next, check,529529- equiv[(u8) *str++]);506506+ while (*str) {507507+ u8 c = equiv[(u8) *str];508508+509509+ match_char(state, def, base, next, check, c);510510+ str++;511511+ }530512 } else {531513 /* default is direct to next state */532532- while (*str)533533- match_char(state, def, base, next, check, (u8) *str++);514514+ while (*str) {515515+ match_char(state, def, base, next, check, (u8) *str);516516+ str++;517517+ }534518 }535519536520 return state;
+67-10
security/apparmor/policy.c
···191191}192192193193/**194194- * __remove_profile - remove old profile, and children195195- * @profile: profile to be replaced (NOT NULL)194194+ * __remove_profile - remove profile, and children195195+ * @profile: profile to be removed (NOT NULL)196196 *197197 * Requires: namespace list lock be held, or list not be shared198198 */199199static void __remove_profile(struct aa_profile *profile)200200{201201+ struct aa_profile *curr, *to_remove;202202+201203 AA_BUG(!profile);202204 AA_BUG(!profile->ns);203205 AA_BUG(!mutex_is_locked(&profile->ns->lock));204206205207 /* release any children lists first */206206- __aa_profile_list_release(&profile->base.profiles);208208+ if (!list_empty(&profile->base.profiles)) {209209+ curr = list_first_entry(&profile->base.profiles, struct aa_profile, base.list);210210+211211+ while (curr != profile) {212212+213213+ while (!list_empty(&curr->base.profiles))214214+ curr = list_first_entry(&curr->base.profiles,215215+ struct aa_profile, base.list);216216+217217+ to_remove = curr;218218+ if (!list_is_last(&to_remove->base.list,219219+ &aa_deref_parent(curr)->base.profiles))220220+ curr = list_next_entry(to_remove, base.list);221221+ else222222+ curr = aa_deref_parent(curr);223223+224224+ /* released by free_profile */225225+ aa_label_remove(&to_remove->label);226226+ __aafs_profile_rmdir(to_remove);227227+ __list_remove_profile(to_remove);228228+ }229229+ }230230+207231 /* released by free_profile */208232 aa_label_remove(&profile->label);209233 __aafs_profile_rmdir(profile);···350326 }351327352328 kfree_sensitive(profile->hash);353353- aa_put_loaddata(profile->rawdata);329329+ aa_put_profile_loaddata(profile->rawdata);354330 aa_label_destroy(&profile->label);355331356332 kfree_sensitive(profile);···942918 return res;943919}944920921921+static bool is_subset_of_obj_privilege(const struct cred *cred,922922+ struct aa_label *label,923923+ const struct cred *ocred)924924+{925925+ if (cred == ocred)926926+ return true;927927+928928+ if 
(!aa_label_is_subset(label, cred_label(ocred)))929929+ return false;930930+ /* don't allow crossing userns for now */931931+ if (cred->user_ns != ocred->user_ns)932932+ return false;933933+ if (!cap_issubset(cred->cap_inheritable, ocred->cap_inheritable))934934+ return false;935935+ if (!cap_issubset(cred->cap_permitted, ocred->cap_permitted))936936+ return false;937937+ if (!cap_issubset(cred->cap_effective, ocred->cap_effective))938938+ return false;939939+ if (!cap_issubset(cred->cap_bset, ocred->cap_bset))940940+ return false;941941+ if (!cap_issubset(cred->cap_ambient, ocred->cap_ambient))942942+ return false;943943+ return true;944944+}945945+946946+945947/**946948 * aa_may_manage_policy - can the current task manage policy947949 * @subj_cred: subjects cred948950 * @label: label to check if it can manage policy949951 * @ns: namespace being managed by @label (may be NULL if @label's ns)952952+ * @ocred: object cred if request is coming from an open object950953 * @mask: contains the policy manipulation operation being done951954 *952955 * Returns: 0 if the task is allowed to manipulate policy else error953956 */954957int aa_may_manage_policy(const struct cred *subj_cred, struct aa_label *label,955955- struct aa_ns *ns, u32 mask)958958+ struct aa_ns *ns, const struct cred *ocred, u32 mask)956959{957960 const char *op;958961···993942 /* check if loading policy is locked out */994943 if (aa_g_lock_policy)995944 return audit_policy(label, op, NULL, NULL, "policy_locked",945945+ -EACCES);946946+947947+ if (ocred && !is_subset_of_obj_privilege(subj_cred, label, ocred))948948+ return audit_policy(label, op, NULL, NULL,949949+ "not privileged for target profile",996950 -EACCES);997951998952 if (!aa_policy_admin_capable(subj_cred, label, ns))···11711115 LIST_HEAD(lh);1172111611731117 op = mask & AA_MAY_REPLACE_POLICY ? 
OP_PROF_REPL : OP_PROF_LOAD;11741174- aa_get_loaddata(udata);11181118+ aa_get_profile_loaddata(udata);11751119 /* released below */11761120 error = aa_unpack(udata, &lh, &ns_name);11771121 if (error)···11981142 goto fail;11991143 }12001144 ns_name = ent->ns_name;11451145+ ent->ns_name = NULL;12011146 } else12021147 count++;12031148 }···12231166 if (aa_rawdata_eq(rawdata_ent, udata)) {12241167 struct aa_loaddata *tmp;1225116812261226- tmp = __aa_get_loaddata(rawdata_ent);11691169+ tmp = aa_get_profile_loaddata(rawdata_ent);12271170 /* check we didn't fail the race */12281171 if (tmp) {12291229- aa_put_loaddata(udata);11721172+ aa_put_profile_loaddata(udata);12301173 udata = tmp;12311174 break;12321175 }···12391182 struct aa_profile *p;1240118312411184 if (aa_g_export_binary)12421242- ent->new->rawdata = aa_get_loaddata(udata);11851185+ ent->new->rawdata = aa_get_profile_loaddata(udata);12431186 error = __lookup_replace(ns, ent->new->base.hname,12441187 !(mask & AA_MAY_REPLACE_POLICY),12451188 &ent->old, &info);···1372131513731316out:13741317 aa_put_ns(ns);13751375- aa_put_loaddata(udata);13181318+ aa_put_profile_loaddata(udata);13761319 kfree(ns_name);1377132013781321 if (error)
+2
security/apparmor/policy_ns.c
···223223 AA_BUG(!name);224224 AA_BUG(!mutex_is_locked(&parent->lock));225225226226+ if (parent->level > MAX_NS_DEPTH)227227+ return ERR_PTR(-ENOSPC);226228 ns = alloc_ns(parent->base.hname, name);227229 if (!ns)228230 return ERR_PTR(-ENOMEM);
+45-20
security/apparmor/policy_unpack.c
···109109 return memcmp(l->data, r->data, r->compressed_size ?: r->size) == 0;110110}111111112112-/*113113- * need to take the ns mutex lock which is NOT safe most places that114114- * put_loaddata is called, so we have to delay freeing it115115- */116116-static void do_loaddata_free(struct work_struct *work)112112+static void do_loaddata_free(struct aa_loaddata *d)117113{118118- struct aa_loaddata *d = container_of(work, struct aa_loaddata, work);119119- struct aa_ns *ns = aa_get_ns(d->ns);120120-121121- if (ns) {122122- mutex_lock_nested(&ns->lock, ns->level);123123- __aa_fs_remove_rawdata(d);124124- mutex_unlock(&ns->lock);125125- aa_put_ns(ns);126126- }127127-128114 kfree_sensitive(d->hash);129115 kfree_sensitive(d->name);130116 kvfree(d->data);···119133120134void aa_loaddata_kref(struct kref *kref)121135{122122- struct aa_loaddata *d = container_of(kref, struct aa_loaddata, count);136136+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata,137137+ count.count);138138+139139+ do_loaddata_free(d);140140+}141141+142142+/*143143+ * need to take the ns mutex lock which is NOT safe most places that144144+ * put_loaddata is called, so we have to delay freeing it145145+ */146146+static void do_ploaddata_rmfs(struct work_struct *work)147147+{148148+ struct aa_loaddata *d = container_of(work, struct aa_loaddata, work);149149+ struct aa_ns *ns = aa_get_ns(d->ns);150150+151151+ if (ns) {152152+ mutex_lock_nested(&ns->lock, ns->level);153153+ /* remove fs ref to loaddata */154154+ __aa_fs_remove_rawdata(d);155155+ mutex_unlock(&ns->lock);156156+ aa_put_ns(ns);157157+ }158158+ /* called by dropping last pcount, so drop its associated icount */159159+ aa_put_i_loaddata(d);160160+}161161+162162+void aa_ploaddata_kref(struct kref *kref)163163+{164164+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata, pcount);123165124166 if (d) {125125- INIT_WORK(&d->work, do_loaddata_free);167167+ INIT_WORK(&d->work, do_ploaddata_rmfs);126168 
schedule_work(&d->work);127169 }128170}···167153 kfree(d);168154 return ERR_PTR(-ENOMEM);169155 }170170- kref_init(&d->count);156156+ kref_init(&d->count.count);157157+ d->count.reftype = REF_RAWDATA;158158+ kref_init(&d->pcount);171159 INIT_LIST_HEAD(&d->list);172160173161 return d;···10261010 if (!aa_unpack_u32(e, &pdb->start[AA_CLASS_FILE], "dfa_start")) {10271011 /* default start state for xmatch and file dfa */10281012 pdb->start[AA_CLASS_FILE] = DFA_START;10291029- } /* setup class index */10131013+ }10141014+10151015+ size_t state_count = pdb->dfa->tables[YYTD_ID_BASE]->td_lolen;10161016+10171017+ if (pdb->start[0] >= state_count ||10181018+ pdb->start[AA_CLASS_FILE] >= state_count) {10191019+ *info = "invalid dfa start state";10201020+ goto fail;10211021+ }10221022+10231023+ /* setup class index */10301024 for (i = AA_CLASS_FILE + 1; i <= AA_CLASS_LAST; i++) {10311025 pdb->start[i] = aa_dfa_next(pdb->dfa, pdb->start[0],10321026 i);···14351409{14361410 int error = -EPROTONOSUPPORT;14371411 const char *name = NULL;14381438- *ns = NULL;1439141214401413 /* get the interface version */14411414 if (!aa_unpack_u32(e, &e->version, "version")) {
+16-3
sound/core/pcm_native.c
···21442144 for (;;) {21452145 long tout;21462146 struct snd_pcm_runtime *to_check;21472147+ unsigned int drain_rate;21482148+ snd_pcm_uframes_t drain_bufsz;21492149+ bool drain_no_period_wakeup;21502150+21472151 if (signal_pending(current)) {21482152 result = -ERESTARTSYS;21492153 break;···21672163 snd_pcm_group_unref(group, substream);21682164 if (!to_check)21692165 break; /* all drained */21662166+ /*21672167+ * Cache the runtime fields needed after unlock.21682168+ * A concurrent close() on the linked stream may free21692169+ * its runtime via snd_pcm_detach_substream() once we21702170+ * release the stream lock below.21712171+ */21722172+ drain_no_period_wakeup = to_check->no_period_wakeup;21732173+ drain_rate = to_check->rate;21742174+ drain_bufsz = to_check->buffer_size;21702175 init_waitqueue_entry(&wait, current);21712176 set_current_state(TASK_INTERRUPTIBLE);21722177 add_wait_queue(&to_check->sleep, &wait);21732178 snd_pcm_stream_unlock_irq(substream);21742174- if (runtime->no_period_wakeup)21792179+ if (drain_no_period_wakeup)21752180 tout = MAX_SCHEDULE_TIMEOUT;21762181 else {21772182 tout = 100;21782178- if (runtime->rate) {21792179- long t = runtime->buffer_size * 1100 / runtime->rate;21832183+ if (drain_rate) {21842184+ long t = drain_bufsz * 1100 / drain_rate;21802185 tout = max(t, tout);21812186 }21822187 tout = msecs_to_jiffies(tout);
···13601360 if (!pdev_sec)13611361 return -ENOMEM;1362136213631363- pdev_sec->driver_override = kstrdup("samsung-i2s", GFP_KERNEL);13641364- if (!pdev_sec->driver_override) {13631363+ ret = device_set_driver_override(&pdev_sec->dev, "samsung-i2s");13641364+ if (ret) {13651365 platform_device_put(pdev_sec);13661366- return -ENOMEM;13661366+ return ret;13671367 }1368136813691369 ret = platform_device_add(pdev_sec);
+8-3
sound/soc/soc-core.c
···462462463463 list_del(&rtd->list);464464465465- if (delayed_work_pending(&rtd->delayed_work))466466- flush_delayed_work(&rtd->delayed_work);465465+ flush_delayed_work(&rtd->delayed_work);467466 snd_soc_pcm_component_free(rtd);468467469468 /*···1863186418641865/*18651866 * Check if a DMI field is valid, i.e. not containing any string18661866- * in the black list.18671867+ * in the black list and not the empty string.18671868 */18681869static int is_dmi_valid(const char *field)18691870{18701871 int i = 0;18721872+18731873+ if (!field[0])18741874+ return 0;1871187518721876 while (dmi_blacklist[i]) {18731877 if (strstr(field, dmi_blacklist[i]))···21242122 for_each_card_rtds(card, rtd)21252123 if (rtd->initialized)21262124 snd_soc_link_exit(rtd);21252125+ /* flush delayed work before removing DAIs and DAPM widgets */21262126+ snd_soc_flush_all_delayed_work(card);21272127+21272128 /* remove and free each DAI */21282129 soc_remove_link_dais(card);21292130 soc_remove_link_components(card);
···110110 __u64 ld_op:1, /* 0: load op */111111 st_op:1, /* 1: store op */112112 dc_l1tlb_miss:1, /* 2: data cache L1TLB miss */113113- dc_l2tlb_miss:1, /* 3: data cache L2TLB hit in 2M page */113113+ dc_l2tlb_miss:1, /* 3: data cache L2TLB miss in 2M page */114114 dc_l1tlb_hit_2m:1, /* 4: data cache L1TLB hit in 2M page */115115 dc_l1tlb_hit_1g:1, /* 5: data cache L1TLB hit in 1G page */116116 dc_l2tlb_hit_2m:1, /* 6: data cache L2TLB hit in 2M page */
+3-1
tools/arch/x86/include/asm/cpufeatures.h
···8484#define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */8585#define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */8686#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */8787-#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */8787+#define X86_FEATURE_SYSFAST32 ( 3*32+15) /* sysenter/syscall in IA32 userspace */8888#define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */8989#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */9090#define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear CPU buffers using VERW */···326326#define X86_FEATURE_AMX_FP16 (12*32+21) /* AMX fp16 Support */327327#define X86_FEATURE_AVX_IFMA (12*32+23) /* Support for VPMADD52[H,L]UQ */328328#define X86_FEATURE_LAM (12*32+26) /* "lam" Linear Address Masking */329329+#define X86_FEATURE_MOVRS (12*32+31) /* MOVRS instructions */329330330331/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */331332#define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */···473472#define X86_FEATURE_GP_ON_USER_CPUID (20*32+17) /* User CPUID faulting */474473475474#define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */475475+#define X86_FEATURE_ERAPS (20*32+24) /* Enhanced Return Address Predictor Security */476476#define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */477477#define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */478478#define X86_FEATURE_SRSO_NO (20*32+29) /* CPU is not affected by SRSO */
···2222#define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2))23232424/*2525- * Below are the definition of bit offsets for perf option, and works as2626- * arbitrary values for all ETM versions.2727- *2828- * Most of them are orignally from ETMv3.5/PTM's ETMCR config, therefore,2929- * ETMv3.5/PTM doesn't define ETMCR config bits with prefix "ETM3_" and3030- * directly use below macros as config bits.3131- */3232-#define ETM_OPT_BRANCH_BROADCAST 83333-#define ETM_OPT_CYCACC 123434-#define ETM_OPT_CTXTID 143535-#define ETM_OPT_CTXTID2 153636-#define ETM_OPT_TS 283737-#define ETM_OPT_RETSTK 293838-3939-/* ETMv4 CONFIGR programming bits for the ETM OPTs */4040-#define ETM4_CFG_BIT_BB 34141-#define ETM4_CFG_BIT_CYCACC 44242-#define ETM4_CFG_BIT_CTXTID 64343-#define ETM4_CFG_BIT_VMID 74444-#define ETM4_CFG_BIT_TS 114545-#define ETM4_CFG_BIT_RETSTK 124646-#define ETM4_CFG_BIT_VMID_OPT 154747-4848-/*4925 * Interpretation of the PERF_RECORD_AUX_OUTPUT_HW_ID payload.5026 * Used to associate a CPU with the CoreSight Trace ID.5127 * [07:00] - Trace ID - uses 8 bits to make value easy to read in file.
+4
tools/include/linux/gfp.h
···55#include <linux/types.h>66#include <linux/gfp_types.h>7788+/* Helper macro to default the gfp flags argument to GFP_KERNEL when omitted */99+#define __default_gfp(a, ...) a1010+#define default_gfp(...) __default_gfp(__VA_ARGS__ __VA_OPT__(,) GFP_KERNEL)1111+812static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)913{1014 return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
+7-2
tools/include/linux/gfp_types.h
···139139 * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.140140 *141141 * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.142142+ * mark_obj_codetag_empty() should be called upon freeing for objects allocated143143+ * with this flag to indicate that their NULL tags are expected and normal.142144 */143145#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)144146#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE)···311309 *312310 * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower313311 * watermark is applied to allow access to "atomic reserves".314314- * The current implementation doesn't support NMI and few other strict315315- * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT.312312+ * The current implementation doesn't support NMI, nor contexts that disable313313+ * preemption under PREEMPT_RT. This includes raw_spin_lock() and plain314314+ * preempt_disable() - see "Memory allocation" in315315+ * Documentation/core-api/real-time/differences.rst for more info.316316 *317317 * %GFP_KERNEL is typical for kernel-internal allocations. The caller requires318318 * %ZONE_NORMAL or a lower zone for direct access but can direct reclaim.···325321 * %GFP_NOWAIT is for kernel allocations that should not stall for direct326322 * reclaim, start physical IO or use any filesystem callback. It is very327323 * likely to fail to allocate memory, even for very small allocations.324324+ * The same restrictions on calling contexts apply as for %GFP_ATOMIC.328325 *329326 * %GFP_NOIO will use direct reclaim to discard clean pages or slab pages330327 * that do not require the starting of any physical IO.
+19
tools/include/linux/overflow.h
···6969})70707171/**7272+ * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX7373+ * @factor1: first factor7474+ * @factor2: second factor7575+ *7676+ * Returns: calculate @factor1 * @factor2, both promoted to size_t,7777+ * with any overflow causing the return value to be SIZE_MAX. The7878+ * lvalue must be size_t to avoid implicit type conversion.7979+ */8080+static inline size_t __must_check size_mul(size_t factor1, size_t factor2)8181+{8282+ size_t bytes;8383+8484+ if (check_mul_overflow(factor1, factor2, &bytes))8585+ return SIZE_MAX;8686+8787+ return bytes;8888+}8989+9090+/**7291 * array_size() - Calculate size of 2-dimensional array.7392 *7493 * @a: dimension one
···860860#define __NR_listns 470861861__SYSCALL(__NR_listns, sys_listns)862862863863+#define __NR_rseq_slice_yield 471864864+__SYSCALL(__NR_rseq_slice_yield, sys_rseq_slice_yield)865865+863866#undef __NR_syscalls864864-#define __NR_syscalls 471867867+#define __NR_syscalls 472865868866869/*867870 * 32 bit systems traditionally used different
···107107#define ERROR_ELF(format, ...) __WARN_ELF(ERROR_STR, format, ##__VA_ARGS__)108108#define ERROR_GLIBC(format, ...) __WARN_GLIBC(ERROR_STR, format, ##__VA_ARGS__)109109#define ERROR_FUNC(sec, offset, format, ...) __WARN_FUNC(ERROR_STR, sec, offset, format, ##__VA_ARGS__)110110-#define ERROR_INSN(insn, format, ...) WARN_FUNC(insn->sec, insn->offset, format, ##__VA_ARGS__)110110+#define ERROR_INSN(insn, format, ...) ERROR_FUNC(insn->sec, insn->offset, format, ##__VA_ARGS__)111111112112extern bool debug;113113extern int indent;
+28-14
tools/objtool/klp-diff.c
···1414#include <objtool/util.h>1515#include <arch/special.h>16161717+#include <linux/align.h>1718#include <linux/objtool_types.h>1819#include <linux/livepatch_external.h>1920#include <linux/stringify.h>···561560 }562561563562 if (!is_sec_sym(patched_sym))564564- offset = sec_size(out_sec);563563+ offset = ALIGN(sec_size(out_sec), out_sec->sh.sh_addralign);565564566565 if (patched_sym->len || is_sec_sym(patched_sym)) {567566 void *data = NULL;···13351334 * be applied after static branch/call init, resulting in code corruption.13361335 *13371336 * Validate a special section entry to avoid that. Note that an inert13381338- * tracepoint is harmless enough, in that case just skip the entry and print a13391339- * warning. Otherwise, return an error.13371337+ * tracepoint or pr_debug() is harmless enough, in that case just skip the13381338+ * entry and print a warning. Otherwise, return an error.13401339 *13411341- * This is only a temporary limitation which will be fixed when livepatch adds13421342- * support for submodules: fully self-contained modules which are embedded in13431343- * the top-level livepatch module's data and which can be loaded on demand when13441344- * their corresponding to-be-patched module gets loaded. Then klp relocs can13451345- * be retired.13401340+ * TODO: This is only a temporary limitation which will be fixed when livepatch13411341+ * adds support for submodules: fully self-contained modules which are embedded13421342+ * in the top-level livepatch module's data and which can be loaded on demand13431343+ * when their corresponding to-be-patched module gets loaded. 
Then klp relocs13441344+ * can be retired.13461345 *13471346 * Return:13481347 * -1: error: validation failed13491349- * 1: warning: tracepoint skipped13481348+ * 1: warning: disabled tracepoint or pr_debug()13501349 * 0: success13511350 */13521351static int validate_special_section_klp_reloc(struct elfs *e, struct symbol *sym)13531352{13541353 bool static_branch = !strcmp(sym->sec->name, "__jump_table");13551354 bool static_call = !strcmp(sym->sec->name, ".static_call_sites");13561356- struct symbol *code_sym = NULL;13551355+ const char *code_sym = NULL;13571356 unsigned long code_offset = 0;13581357 struct reloc *reloc;13591358 int ret = 0;···13651364 const char *sym_modname;13661365 struct export *export;1367136613671367+ if (convert_reloc_sym(e->patched, reloc))13681368+ continue;13691369+13681370 /* Static branch/call keys are always STT_OBJECT */13691371 if (reloc->sym->type != STT_OBJECT) {1370137213711373 /* Save code location which can be printed below */13721374 if (reloc->sym->type == STT_FUNC && !code_sym) {13731373- code_sym = reloc->sym;13751375+ code_sym = reloc->sym->name;13741376 code_offset = reloc_addend(reloc);13751377 }13761378···13961392 if (!strcmp(sym_modname, "vmlinux"))13971393 continue;1398139413951395+ if (!code_sym)13961396+ code_sym = "<unknown>";13971397+13991398 if (static_branch) {14001399 if (strstarts(reloc->sym->name, "__tracepoint_")) {14011400 WARN("%s: disabling unsupported tracepoint %s",14021402- code_sym->name, reloc->sym->name + 13);14011401+ code_sym, reloc->sym->name + 13);14021402+ ret = 1;14031403+ continue;14041404+ }14051405+14061406+ if (strstr(reloc->sym->name, "__UNIQUE_ID_ddebug_")) {14071407+ WARN("%s: disabling unsupported pr_debug()",14081408+ code_sym);14031409 ret = 1;14041410 continue;14051411 }1406141214071413 ERROR("%s+0x%lx: unsupported static branch key %s. 
Use static_key_enabled() instead",14081408- code_sym->name, code_offset, reloc->sym->name);14141414+ code_sym, code_offset, reloc->sym->name);14091415 return -1;14101416 }14111417···14261412 }1427141314281414 ERROR("%s()+0x%lx: unsupported static call key %s. Use KLP_STATIC_CALL() instead",14291429- code_sym->name, code_offset, reloc->sym->name);14151415+ code_sym, code_offset, reloc->sym->name);14301416 return -1;14311417 }14321418
···274274 PYLINT := $(shell which pylint 2> /dev/null)275275endif276276277277-export srctree OUTPUT RM CC CXX RUSTC LD AR CFLAGS CXXFLAGS V BISON FLEX AWK277277+export srctree OUTPUT RM CC CXX RUSTC LD AR CFLAGS CXXFLAGS RUST_FLAGS V BISON FLEX AWK278278export HOSTCC HOSTLD HOSTAR HOSTCFLAGS SHELLCHECK MYPY PYLINT279279280280include $(srctree)/tools/build/Makefile.include
+1
tools/perf/arch/arm/entry/syscalls/syscall.tbl
···485485468 common file_getattr sys_file_getattr486486469 common file_setattr sys_file_setattr487487470 common listns sys_listns488488+471 common rseq_slice_yield sys_rseq_slice_yield
···561561468 common file_getattr sys_file_getattr562562469 common file_setattr sys_file_setattr563563470 common listns sys_listns564564+471 nospu rseq_slice_yield sys_rseq_slice_yield
+392-467
tools/perf/arch/s390/entry/syscalls/syscall.tbl
···
 # System call table for s390
 #
 # Format:
+# <nr>	<abi>	<syscall>	<entry>
 #
-# <nr>	<abi>	<syscall>	<entry-64bit>	<compat-entry>
-#
-# where <abi> can be common, 64, or 32
+# <abi> is always common.

-1	common	exit	sys_exit	sys_exit
-2	common	fork	sys_fork	sys_fork
-3	common	read	sys_read	compat_sys_s390_read
-4	common	write	sys_write	compat_sys_s390_write
-5	common	open	sys_open	compat_sys_open
-6	common	close	sys_close	sys_close
-7	common	restart_syscall	sys_restart_syscall	sys_restart_syscall
-8	common	creat	sys_creat	sys_creat
-9	common	link	sys_link	sys_link
-10	common	unlink	sys_unlink	sys_unlink
-11	common	execve	sys_execve	compat_sys_execve
-12	common	chdir	sys_chdir	sys_chdir
-13	32	time	-	sys_time32
-14	common	mknod	sys_mknod	sys_mknod
-15	common	chmod	sys_chmod	sys_chmod
-16	32	lchown	-	sys_lchown16
-19	common	lseek	sys_lseek	compat_sys_lseek
-20	common	getpid	sys_getpid	sys_getpid
-21	common	mount	sys_mount	sys_mount
-22	common	umount	sys_oldumount	sys_oldumount
-23	32	setuid	-	sys_setuid16
-24	32	getuid	-	sys_getuid16
-25	32	stime	-	sys_stime32
-26	common	ptrace	sys_ptrace	compat_sys_ptrace
-27	common	alarm	sys_alarm	sys_alarm
-29	common	pause	sys_pause	sys_pause
-30	common	utime	sys_utime	sys_utime32
-33	common	access	sys_access	sys_access
-34	common	nice	sys_nice	sys_nice
-36	common	sync	sys_sync	sys_sync
-37	common	kill	sys_kill	sys_kill
-38	common	rename	sys_rename	sys_rename
-39	common	mkdir	sys_mkdir	sys_mkdir
-40	common	rmdir	sys_rmdir	sys_rmdir
-41	common	dup	sys_dup	sys_dup
-42	common	pipe	sys_pipe	sys_pipe
-43	common	times	sys_times	compat_sys_times
-45	common	brk	sys_brk	sys_brk
-46	32	setgid	-	sys_setgid16
-47	32	getgid	-	sys_getgid16
-48	common	signal	sys_signal	sys_signal
-49	32	geteuid	-	sys_geteuid16
-50	32	getegid	-	sys_getegid16
-51	common	acct	sys_acct	sys_acct
-52	common	umount2	sys_umount	sys_umount
-54	common	ioctl	sys_ioctl	compat_sys_ioctl
-55	common	fcntl	sys_fcntl	compat_sys_fcntl
-57	common	setpgid	sys_setpgid	sys_setpgid
-60	common	umask	sys_umask	sys_umask
-61	common	chroot	sys_chroot	sys_chroot
-62	common	ustat	sys_ustat	compat_sys_ustat
-63	common	dup2	sys_dup2	sys_dup2
-64	common	getppid	sys_getppid	sys_getppid
-65	common	getpgrp	sys_getpgrp	sys_getpgrp
-66	common	setsid	sys_setsid	sys_setsid
-67	common	sigaction	sys_sigaction	compat_sys_sigaction
-70	32	setreuid	-	sys_setreuid16
-71	32	setregid	-	sys_setregid16
-72	common	sigsuspend	sys_sigsuspend	sys_sigsuspend
-73	common	sigpending	sys_sigpending	compat_sys_sigpending
-74	common	sethostname	sys_sethostname	sys_sethostname
-75	common	setrlimit	sys_setrlimit	compat_sys_setrlimit
-76	32	getrlimit	-	compat_sys_old_getrlimit
-77	common	getrusage	sys_getrusage	compat_sys_getrusage
-78	common	gettimeofday	sys_gettimeofday	compat_sys_gettimeofday
-79	common	settimeofday	sys_settimeofday	compat_sys_settimeofday
-80	32	getgroups	-	sys_getgroups16
-81	32	setgroups	-	sys_setgroups16
-83	common	symlink	sys_symlink	sys_symlink
-85	common	readlink	sys_readlink	sys_readlink
-86	common	uselib	sys_uselib	sys_uselib
-87	common	swapon	sys_swapon	sys_swapon
-88	common	reboot	sys_reboot	sys_reboot
-89	common	readdir	-	compat_sys_old_readdir
-90	common	mmap	sys_old_mmap	compat_sys_s390_old_mmap
-91	common	munmap	sys_munmap	sys_munmap
-92	common	truncate	sys_truncate	compat_sys_truncate
-93	common	ftruncate	sys_ftruncate	compat_sys_ftruncate
-94	common	fchmod	sys_fchmod	sys_fchmod
-95	32	fchown	-	sys_fchown16
-96	common	getpriority	sys_getpriority	sys_getpriority
-97	common	setpriority	sys_setpriority	sys_setpriority
-99	common	statfs	sys_statfs	compat_sys_statfs
-100	common	fstatfs	sys_fstatfs	compat_sys_fstatfs
-101	32	ioperm	-	-
-102	common	socketcall	sys_socketcall	compat_sys_socketcall
-103	common	syslog	sys_syslog	sys_syslog
-104	common	setitimer	sys_setitimer	compat_sys_setitimer
-105	common	getitimer	sys_getitimer	compat_sys_getitimer
-106	common	stat	sys_newstat	compat_sys_newstat
-107	common	lstat	sys_newlstat	compat_sys_newlstat
-108	common	fstat	sys_newfstat	compat_sys_newfstat
-110	common	lookup_dcookie	-	-
-111	common	vhangup	sys_vhangup	sys_vhangup
-112	common	idle	-	-
-114	common	wait4	sys_wait4	compat_sys_wait4
-115	common	swapoff	sys_swapoff	sys_swapoff
-116	common	sysinfo	sys_sysinfo	compat_sys_sysinfo
-117	common	ipc	sys_s390_ipc	compat_sys_s390_ipc
-118	common	fsync	sys_fsync	sys_fsync
-119	common	sigreturn	sys_sigreturn	compat_sys_sigreturn
-120	common	clone	sys_clone	sys_clone
-121	common	setdomainname	sys_setdomainname	sys_setdomainname
-122	common	uname	sys_newuname	sys_newuname
-124	common	adjtimex	sys_adjtimex	sys_adjtimex_time32
-125	common	mprotect	sys_mprotect	sys_mprotect
-126	common	sigprocmask	sys_sigprocmask	compat_sys_sigprocmask
-127	common	create_module	-	-
-128	common	init_module	sys_init_module	sys_init_module
-129	common	delete_module	sys_delete_module	sys_delete_module
-130	common	get_kernel_syms	-	-
-131	common	quotactl	sys_quotactl	sys_quotactl
-132	common	getpgid	sys_getpgid	sys_getpgid
-133	common	fchdir	sys_fchdir	sys_fchdir
-134	common	bdflush	sys_ni_syscall	sys_ni_syscall
-135	common	sysfs	sys_sysfs	sys_sysfs
-136	common	personality	sys_s390_personality	sys_s390_personality
-137	common	afs_syscall	-	-
-138	32	setfsuid	-	sys_setfsuid16
-139	32	setfsgid	-	sys_setfsgid16
-140	32	_llseek	-	sys_llseek
-141	common	getdents	sys_getdents	compat_sys_getdents
-142	32	_newselect	-	compat_sys_select
-142	64	select	sys_select	-
-143	common	flock	sys_flock	sys_flock
-144	common	msync	sys_msync	sys_msync
-145	common	readv	sys_readv	sys_readv
-146	common	writev	sys_writev	sys_writev
-147	common	getsid	sys_getsid	sys_getsid
-148	common	fdatasync	sys_fdatasync	sys_fdatasync
-149	common	_sysctl	-	-
-150	common	mlock	sys_mlock	sys_mlock
-151	common	munlock	sys_munlock	sys_munlock
-152	common	mlockall	sys_mlockall	sys_mlockall
-153	common	munlockall	sys_munlockall	sys_munlockall
-154	common	sched_setparam	sys_sched_setparam	sys_sched_setparam
-155	common	sched_getparam	sys_sched_getparam	sys_sched_getparam
-156	common	sched_setscheduler	sys_sched_setscheduler	sys_sched_setscheduler
-157	common	sched_getscheduler	sys_sched_getscheduler	sys_sched_getscheduler
-158	common	sched_yield	sys_sched_yield	sys_sched_yield
-159	common	sched_get_priority_max	sys_sched_get_priority_max	sys_sched_get_priority_max
-160	common	sched_get_priority_min	sys_sched_get_priority_min	sys_sched_get_priority_min
-161	common	sched_rr_get_interval	sys_sched_rr_get_interval	sys_sched_rr_get_interval_time32
-162	common	nanosleep	sys_nanosleep	sys_nanosleep_time32
-163	common	mremap	sys_mremap	sys_mremap
-164	32	setresuid	-	sys_setresuid16
-165	32	getresuid	-	sys_getresuid16
-167	common	query_module	-	-
-168	common	poll	sys_poll	sys_poll
-169	common	nfsservctl	-	-
-170	32	setresgid	-	sys_setresgid16
-171	32	getresgid	-	sys_getresgid16
-172	common	prctl	sys_prctl	sys_prctl
-173	common	rt_sigreturn	sys_rt_sigreturn	compat_sys_rt_sigreturn
-174	common	rt_sigaction	sys_rt_sigaction	compat_sys_rt_sigaction
-175	common	rt_sigprocmask	sys_rt_sigprocmask	compat_sys_rt_sigprocmask
-176	common	rt_sigpending	sys_rt_sigpending	compat_sys_rt_sigpending
-177	common	rt_sigtimedwait	sys_rt_sigtimedwait	compat_sys_rt_sigtimedwait_time32
-178	common	rt_sigqueueinfo	sys_rt_sigqueueinfo	compat_sys_rt_sigqueueinfo
-179	common	rt_sigsuspend	sys_rt_sigsuspend	compat_sys_rt_sigsuspend
-180	common	pread64	sys_pread64	compat_sys_s390_pread64
-181	common	pwrite64	sys_pwrite64	compat_sys_s390_pwrite64
-182	32	chown	-	sys_chown16
-183	common	getcwd	sys_getcwd	sys_getcwd
-184	common	capget	sys_capget	sys_capget
-185	common	capset	sys_capset	sys_capset
-186	common	sigaltstack	sys_sigaltstack	compat_sys_sigaltstack
-187	common	sendfile	sys_sendfile64	compat_sys_sendfile
-188	common	getpmsg	-	-
-189	common	putpmsg	-	-
-190	common	vfork	sys_vfork	sys_vfork
-191	32	ugetrlimit	-	compat_sys_getrlimit
-191	64	getrlimit	sys_getrlimit	-
-192	32	mmap2	-	compat_sys_s390_mmap2
-193	32	truncate64	-	compat_sys_s390_truncate64
-194	32	ftruncate64	-	compat_sys_s390_ftruncate64
-195	32	stat64	-	compat_sys_s390_stat64
-196	32	lstat64	-	compat_sys_s390_lstat64
-197	32	fstat64	-	compat_sys_s390_fstat64
-198	32	lchown32	-	sys_lchown
-198	64	lchown	sys_lchown	-
-199	32	getuid32	-	sys_getuid
-199	64	getuid	sys_getuid	-
-200	32	getgid32	-	sys_getgid
-200	64	getgid	sys_getgid	-
-201	32	geteuid32	-	sys_geteuid
-201	64	geteuid	sys_geteuid	-
-202	32	getegid32	-	sys_getegid
-202	64	getegid	sys_getegid	-
-203	32	setreuid32	-	sys_setreuid
-203	64	setreuid	sys_setreuid	-
-204	32	setregid32	-	sys_setregid
-204	64	setregid	sys_setregid	-
-205	32	getgroups32	-	sys_getgroups
-205	64	getgroups	sys_getgroups	-
-206	32	setgroups32	-	sys_setgroups
-206	64	setgroups	sys_setgroups	-
-207	32	fchown32	-	sys_fchown
-207	64	fchown	sys_fchown	-
-208	32	setresuid32	-	sys_setresuid
-208	64	setresuid	sys_setresuid	-
-209	32	getresuid32	-	sys_getresuid
-209	64	getresuid	sys_getresuid	-
-210	32	setresgid32	-	sys_setresgid
-210	64	setresgid	sys_setresgid	-
-211	32	getresgid32	-	sys_getresgid
-211	64	getresgid	sys_getresgid	-
-212	32	chown32	-	sys_chown
-212	64	chown	sys_chown	-
-213	32	setuid32	-	sys_setuid
-213	64	setuid	sys_setuid	-
-214	32	setgid32	-	sys_setgid
-214	64	setgid	sys_setgid	-
-215	32	setfsuid32	-	sys_setfsuid
-215	64	setfsuid	sys_setfsuid	-
-216	32	setfsgid32	-	sys_setfsgid
-216	64	setfsgid	sys_setfsgid	-
-217	common	pivot_root	sys_pivot_root	sys_pivot_root
-218	common	mincore	sys_mincore	sys_mincore
-219	common	madvise	sys_madvise	sys_madvise
-220	common	getdents64	sys_getdents64	sys_getdents64
-221	32	fcntl64	-	compat_sys_fcntl64
-222	common	readahead	sys_readahead	compat_sys_s390_readahead
-223	32	sendfile64	-	compat_sys_sendfile64
-224	common	setxattr	sys_setxattr	sys_setxattr
-225	common	lsetxattr	sys_lsetxattr	sys_lsetxattr
-226	common	fsetxattr	sys_fsetxattr	sys_fsetxattr
-227	common	getxattr	sys_getxattr	sys_getxattr
-228	common	lgetxattr	sys_lgetxattr	sys_lgetxattr
-229	common	fgetxattr	sys_fgetxattr	sys_fgetxattr
-230	common	listxattr	sys_listxattr	sys_listxattr
-231	common	llistxattr	sys_llistxattr	sys_llistxattr
-232	common	flistxattr	sys_flistxattr	sys_flistxattr
-233	common	removexattr	sys_removexattr	sys_removexattr
-234	common	lremovexattr	sys_lremovexattr	sys_lremovexattr
-235	common	fremovexattr	sys_fremovexattr	sys_fremovexattr
-236	common	gettid	sys_gettid	sys_gettid
-237	common	tkill	sys_tkill	sys_tkill
-238	common	futex	sys_futex	sys_futex_time32
-239	common	sched_setaffinity	sys_sched_setaffinity	compat_sys_sched_setaffinity
-240	common	sched_getaffinity	sys_sched_getaffinity	compat_sys_sched_getaffinity
-241	common	tgkill	sys_tgkill	sys_tgkill
-243	common	io_setup	sys_io_setup	compat_sys_io_setup
-244	common	io_destroy	sys_io_destroy	sys_io_destroy
-245	common	io_getevents	sys_io_getevents	sys_io_getevents_time32
-246	common	io_submit	sys_io_submit	compat_sys_io_submit
-247	common	io_cancel	sys_io_cancel	sys_io_cancel
-248	common	exit_group	sys_exit_group	sys_exit_group
-249	common	epoll_create	sys_epoll_create	sys_epoll_create
-250	common	epoll_ctl	sys_epoll_ctl	sys_epoll_ctl
-251	common	epoll_wait	sys_epoll_wait	sys_epoll_wait
-252	common	set_tid_address	sys_set_tid_address	sys_set_tid_address
-253	common	fadvise64	sys_fadvise64_64	compat_sys_s390_fadvise64
-254	common	timer_create	sys_timer_create	compat_sys_timer_create
-255	common	timer_settime	sys_timer_settime	sys_timer_settime32
-256	common	timer_gettime	sys_timer_gettime	sys_timer_gettime32
-257	common	timer_getoverrun	sys_timer_getoverrun	sys_timer_getoverrun
-258	common	timer_delete	sys_timer_delete	sys_timer_delete
-259	common	clock_settime	sys_clock_settime	sys_clock_settime32
-260	common	clock_gettime	sys_clock_gettime	sys_clock_gettime32
-261	common	clock_getres	sys_clock_getres	sys_clock_getres_time32
-262	common	clock_nanosleep	sys_clock_nanosleep	sys_clock_nanosleep_time32
-264	32	fadvise64_64	-	compat_sys_s390_fadvise64_64
-265	common	statfs64	sys_statfs64	compat_sys_statfs64
-266	common	fstatfs64	sys_fstatfs64	compat_sys_fstatfs64
-267	common	remap_file_pages	sys_remap_file_pages	sys_remap_file_pages
-268	common	mbind	sys_mbind	sys_mbind
-269	common	get_mempolicy	sys_get_mempolicy	sys_get_mempolicy
-270	common	set_mempolicy	sys_set_mempolicy	sys_set_mempolicy
-271	common	mq_open	sys_mq_open	compat_sys_mq_open
-272	common	mq_unlink	sys_mq_unlink	sys_mq_unlink
-273	common	mq_timedsend	sys_mq_timedsend	sys_mq_timedsend_time32
-274	common	mq_timedreceive	sys_mq_timedreceive	sys_mq_timedreceive_time32
-275	common	mq_notify	sys_mq_notify	compat_sys_mq_notify
-276	common	mq_getsetattr	sys_mq_getsetattr	compat_sys_mq_getsetattr
-277	common	kexec_load	sys_kexec_load	compat_sys_kexec_load
-278	common	add_key	sys_add_key	sys_add_key
-279	common	request_key	sys_request_key	sys_request_key
-280	common	keyctl	sys_keyctl	compat_sys_keyctl
-281	common	waitid	sys_waitid	compat_sys_waitid
-282	common	ioprio_set	sys_ioprio_set	sys_ioprio_set
-283	common	ioprio_get	sys_ioprio_get	sys_ioprio_get
-284	common	inotify_init	sys_inotify_init	sys_inotify_init
-285	common	inotify_add_watch	sys_inotify_add_watch	sys_inotify_add_watch
-286	common	inotify_rm_watch	sys_inotify_rm_watch	sys_inotify_rm_watch
-287	common	migrate_pages	sys_migrate_pages	sys_migrate_pages
-288	common	openat	sys_openat	compat_sys_openat
-289	common	mkdirat	sys_mkdirat	sys_mkdirat
-290	common	mknodat	sys_mknodat	sys_mknodat
-291	common	fchownat	sys_fchownat	sys_fchownat
-292	common	futimesat	sys_futimesat	sys_futimesat_time32
-293	32	fstatat64	-	compat_sys_s390_fstatat64
-293	64	newfstatat	sys_newfstatat	-
-294	common	unlinkat	sys_unlinkat	sys_unlinkat
-295	common	renameat	sys_renameat	sys_renameat
-296	common	linkat	sys_linkat	sys_linkat
-297	common	symlinkat	sys_symlinkat	sys_symlinkat
-298	common	readlinkat	sys_readlinkat	sys_readlinkat
-299	common	fchmodat	sys_fchmodat	sys_fchmodat
-300	common	faccessat	sys_faccessat	sys_faccessat
-301	common	pselect6	sys_pselect6	compat_sys_pselect6_time32
-302	common	ppoll	sys_ppoll	compat_sys_ppoll_time32
-303	common	unshare	sys_unshare	sys_unshare
-304	common	set_robust_list	sys_set_robust_list	compat_sys_set_robust_list
-305	common	get_robust_list	sys_get_robust_list	compat_sys_get_robust_list
-306	common	splice	sys_splice	sys_splice
-307	common	sync_file_range	sys_sync_file_range	compat_sys_s390_sync_file_range
-308	common	tee	sys_tee	sys_tee
-309	common	vmsplice	sys_vmsplice	sys_vmsplice
-310	common	move_pages	sys_move_pages	sys_move_pages
-311	common	getcpu	sys_getcpu	sys_getcpu
-312	common	epoll_pwait	sys_epoll_pwait	compat_sys_epoll_pwait
-313	common	utimes	sys_utimes	sys_utimes_time32
-314	common	fallocate	sys_fallocate	compat_sys_s390_fallocate
-315	common	utimensat	sys_utimensat	sys_utimensat_time32
-316	common	signalfd	sys_signalfd	compat_sys_signalfd
-317	common	timerfd	-	-
-318	common	eventfd	sys_eventfd	sys_eventfd
-319	common	timerfd_create	sys_timerfd_create	sys_timerfd_create
-320	common	timerfd_settime	sys_timerfd_settime	sys_timerfd_settime32
-321	common	timerfd_gettime	sys_timerfd_gettime	sys_timerfd_gettime32
-322	common	signalfd4	sys_signalfd4	compat_sys_signalfd4
-323	common	eventfd2	sys_eventfd2	sys_eventfd2
-324	common	inotify_init1	sys_inotify_init1	sys_inotify_init1
-325	common	pipe2	sys_pipe2	sys_pipe2
-326	common	dup3	sys_dup3	sys_dup3
-327	common	epoll_create1	sys_epoll_create1	sys_epoll_create1
-328	common	preadv	sys_preadv	compat_sys_preadv
-329	common	pwritev	sys_pwritev	compat_sys_pwritev
-330	common	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo	compat_sys_rt_tgsigqueueinfo
-331	common	perf_event_open	sys_perf_event_open	sys_perf_event_open
-332	common	fanotify_init	sys_fanotify_init	sys_fanotify_init
-333	common	fanotify_mark	sys_fanotify_mark	compat_sys_fanotify_mark
-334	common	prlimit64	sys_prlimit64	sys_prlimit64
-335	common	name_to_handle_at	sys_name_to_handle_at	sys_name_to_handle_at
-336	common	open_by_handle_at	sys_open_by_handle_at	compat_sys_open_by_handle_at
-337	common	clock_adjtime	sys_clock_adjtime	sys_clock_adjtime32
-338	common	syncfs	sys_syncfs	sys_syncfs
-339	common	setns	sys_setns	sys_setns
-340	common	process_vm_readv	sys_process_vm_readv	sys_process_vm_readv
-341	common	process_vm_writev	sys_process_vm_writev	sys_process_vm_writev
-342	common	s390_runtime_instr	sys_s390_runtime_instr	sys_s390_runtime_instr
-343	common	kcmp	sys_kcmp	sys_kcmp
-344	common	finit_module	sys_finit_module	sys_finit_module
-345	common	sched_setattr	sys_sched_setattr	sys_sched_setattr
-346	common	sched_getattr	sys_sched_getattr	sys_sched_getattr
-347	common	renameat2	sys_renameat2	sys_renameat2
-348	common	seccomp	sys_seccomp	sys_seccomp
-349	common	getrandom	sys_getrandom	sys_getrandom
-350	common	memfd_create	sys_memfd_create	sys_memfd_create
-351	common	bpf	sys_bpf	sys_bpf
-352	common	s390_pci_mmio_write	sys_s390_pci_mmio_write	sys_s390_pci_mmio_write
-353	common	s390_pci_mmio_read	sys_s390_pci_mmio_read	sys_s390_pci_mmio_read
-354	common	execveat	sys_execveat	compat_sys_execveat
-355	common	userfaultfd	sys_userfaultfd	sys_userfaultfd
-356	common	membarrier	sys_membarrier	sys_membarrier
-357	common	recvmmsg	sys_recvmmsg	compat_sys_recvmmsg_time32
-358	common	sendmmsg	sys_sendmmsg	compat_sys_sendmmsg
-359	common	socket	sys_socket	sys_socket
-360	common	socketpair	sys_socketpair	sys_socketpair
-361	common	bind	sys_bind	sys_bind
-362	common	connect	sys_connect	sys_connect
-363	common	listen	sys_listen	sys_listen
-364	common	accept4	sys_accept4	sys_accept4
-365	common	getsockopt	sys_getsockopt	sys_getsockopt
-366	common	setsockopt	sys_setsockopt	sys_setsockopt
-367	common	getsockname	sys_getsockname	sys_getsockname
-368	common	getpeername	sys_getpeername	sys_getpeername
-369	common	sendto	sys_sendto	sys_sendto
-370	common	sendmsg	sys_sendmsg	compat_sys_sendmsg
-371	common	recvfrom	sys_recvfrom	compat_sys_recvfrom
-372	common	recvmsg	sys_recvmsg	compat_sys_recvmsg
-373	common	shutdown	sys_shutdown	sys_shutdown
-374	common	mlock2	sys_mlock2	sys_mlock2
-375	common	copy_file_range	sys_copy_file_range	sys_copy_file_range
-376	common	preadv2	sys_preadv2	compat_sys_preadv2
-377	common	pwritev2	sys_pwritev2	compat_sys_pwritev2
-378	common	s390_guarded_storage	sys_s390_guarded_storage	sys_s390_guarded_storage
-379	common	statx	sys_statx	sys_statx
-380	common	s390_sthyi	sys_s390_sthyi	sys_s390_sthyi
-381	common	kexec_file_load	sys_kexec_file_load	sys_kexec_file_load
-382	common	io_pgetevents	sys_io_pgetevents	compat_sys_io_pgetevents
-383	common	rseq	sys_rseq	sys_rseq
-384	common	pkey_mprotect	sys_pkey_mprotect	sys_pkey_mprotect
-385	common	pkey_alloc	sys_pkey_alloc	sys_pkey_alloc
-386	common	pkey_free	sys_pkey_free	sys_pkey_free
+1	common	exit	sys_exit
+2	common	fork	sys_fork
+3	common	read	sys_read
+4	common	write	sys_write
+5	common	open	sys_open
+6	common	close	sys_close
+7	common	restart_syscall	sys_restart_syscall
+8	common	creat	sys_creat
+9	common	link	sys_link
+10	common	unlink	sys_unlink
+11	common	execve	sys_execve
+12	common	chdir	sys_chdir
+14	common	mknod	sys_mknod
+15	common	chmod	sys_chmod
+19	common	lseek	sys_lseek
+20	common	getpid	sys_getpid
+21	common	mount	sys_mount
+22	common	umount	sys_oldumount
+26	common	ptrace	sys_ptrace
+27	common	alarm	sys_alarm
+29	common	pause	sys_pause
+30	common	utime	sys_utime
+33	common	access	sys_access
+34	common	nice	sys_nice
+36	common	sync	sys_sync
+37	common	kill	sys_kill
+38	common	rename	sys_rename
+39	common	mkdir	sys_mkdir
+40	common	rmdir	sys_rmdir
+41	common	dup	sys_dup
+42	common	pipe	sys_pipe
+43	common	times	sys_times
+45	common	brk	sys_brk
+48	common	signal	sys_signal
+51	common	acct	sys_acct
+52	common	umount2	sys_umount
+54	common	ioctl	sys_ioctl
+55	common	fcntl	sys_fcntl
+57	common	setpgid	sys_setpgid
+60	common	umask	sys_umask
+61	common	chroot	sys_chroot
+62	common	ustat	sys_ustat
+63	common	dup2	sys_dup2
+64	common	getppid	sys_getppid
+65	common	getpgrp	sys_getpgrp
+66	common	setsid	sys_setsid
+67	common	sigaction	sys_sigaction
+72	common	sigsuspend	sys_sigsuspend
+73	common	sigpending	sys_sigpending
+74	common	sethostname	sys_sethostname
+75	common	setrlimit	sys_setrlimit
+77	common	getrusage	sys_getrusage
+78	common	gettimeofday	sys_gettimeofday
+79	common	settimeofday	sys_settimeofday
+83	common	symlink	sys_symlink
+85	common	readlink	sys_readlink
+86	common	uselib	sys_uselib
+87	common	swapon	sys_swapon
+88	common	reboot	sys_reboot
+89	common	readdir	sys_ni_syscall
+90	common	mmap	sys_old_mmap
+91	common	munmap	sys_munmap
+92	common	truncate	sys_truncate
+93	common	ftruncate	sys_ftruncate
+94	common	fchmod	sys_fchmod
+96	common	getpriority	sys_getpriority
+97	common	setpriority	sys_setpriority
+99	common	statfs	sys_statfs
+100	common	fstatfs	sys_fstatfs
+102	common	socketcall	sys_socketcall
+103	common	syslog	sys_syslog
+104	common	setitimer	sys_setitimer
+105	common	getitimer	sys_getitimer
+106	common	stat	sys_newstat
+107	common	lstat	sys_newlstat
+108	common	fstat	sys_newfstat
+110	common	lookup_dcookie	sys_ni_syscall
+111	common	vhangup	sys_vhangup
+112	common	idle	sys_ni_syscall
+114	common	wait4	sys_wait4
+115	common	swapoff	sys_swapoff
+116	common	sysinfo	sys_sysinfo
+117	common	ipc	sys_s390_ipc
+118	common	fsync	sys_fsync
+119	common	sigreturn	sys_sigreturn
+120	common	clone	sys_clone
+121	common	setdomainname	sys_setdomainname
+122	common	uname	sys_newuname
+124	common	adjtimex	sys_adjtimex
+125	common	mprotect	sys_mprotect
+126	common	sigprocmask	sys_sigprocmask
+127	common	create_module	sys_ni_syscall
+128	common	init_module	sys_init_module
+129	common	delete_module	sys_delete_module
+130	common	get_kernel_syms	sys_ni_syscall
+131	common	quotactl	sys_quotactl
+132	common	getpgid	sys_getpgid
+133	common	fchdir	sys_fchdir
+134	common	bdflush	sys_ni_syscall
+135	common	sysfs	sys_sysfs
+136	common	personality	sys_s390_personality
+137	common	afs_syscall	sys_ni_syscall
+141	common	getdents	sys_getdents
+142	common	select	sys_select
+143	common	flock	sys_flock
+144	common	msync	sys_msync
+145	common	readv	sys_readv
+146	common	writev	sys_writev
+147	common	getsid	sys_getsid
+148	common	fdatasync	sys_fdatasync
+149	common	_sysctl	sys_ni_syscall
+150	common	mlock	sys_mlock
+151	common	munlock	sys_munlock
+152	common	mlockall	sys_mlockall
+153	common	munlockall	sys_munlockall
+154	common	sched_setparam	sys_sched_setparam
+155	common	sched_getparam	sys_sched_getparam
+156	common	sched_setscheduler	sys_sched_setscheduler
+157	common	sched_getscheduler	sys_sched_getscheduler
+158	common	sched_yield	sys_sched_yield
+159	common	sched_get_priority_max	sys_sched_get_priority_max
+160	common	sched_get_priority_min	sys_sched_get_priority_min
+161	common	sched_rr_get_interval	sys_sched_rr_get_interval
+162	common	nanosleep	sys_nanosleep
+163	common	mremap	sys_mremap
+167	common	query_module	sys_ni_syscall
+168	common	poll	sys_poll
+169	common	nfsservctl	sys_ni_syscall
+172	common	prctl	sys_prctl
+173	common	rt_sigreturn	sys_rt_sigreturn
+174	common	rt_sigaction	sys_rt_sigaction
+175	common	rt_sigprocmask	sys_rt_sigprocmask
+176	common	rt_sigpending	sys_rt_sigpending
+177	common	rt_sigtimedwait	sys_rt_sigtimedwait
+178	common	rt_sigqueueinfo	sys_rt_sigqueueinfo
+179	common	rt_sigsuspend	sys_rt_sigsuspend
+180	common	pread64	sys_pread64
+181	common	pwrite64	sys_pwrite64
+183	common	getcwd	sys_getcwd
+184	common	capget	sys_capget
+185	common	capset	sys_capset
+186	common	sigaltstack	sys_sigaltstack
+187	common	sendfile	sys_sendfile64
+188	common	getpmsg	sys_ni_syscall
+189	common	putpmsg	sys_ni_syscall
+190	common	vfork	sys_vfork
+191	common	getrlimit	sys_getrlimit
+198	common	lchown	sys_lchown
+199	common	getuid	sys_getuid
+200	common	getgid	sys_getgid
+201	common	geteuid	sys_geteuid
+202	common	getegid	sys_getegid
+203	common	setreuid	sys_setreuid
+204	common	setregid	sys_setregid
+205	common	getgroups	sys_getgroups
+206	common	setgroups	sys_setgroups
+207	common	fchown	sys_fchown
+208	common	setresuid	sys_setresuid
+209	common	getresuid	sys_getresuid
+210	common	setresgid	sys_setresgid
+211	common	getresgid	sys_getresgid
+212	common	chown	sys_chown
+213	common	setuid	sys_setuid
+214	common	setgid	sys_setgid
+215	common	setfsuid	sys_setfsuid
+216	common	setfsgid	sys_setfsgid
+217	common	pivot_root	sys_pivot_root
+218	common	mincore	sys_mincore
+219	common	madvise	sys_madvise
+220	common	getdents64	sys_getdents64
+222	common	readahead	sys_readahead
+224	common	setxattr	sys_setxattr
+225	common	lsetxattr	sys_lsetxattr
+226	common	fsetxattr	sys_fsetxattr
+227	common	getxattr	sys_getxattr
+228	common	lgetxattr	sys_lgetxattr
+229	common	fgetxattr	sys_fgetxattr
+230	common	listxattr	sys_listxattr
+231	common	llistxattr	sys_llistxattr
+232	common	flistxattr	sys_flistxattr
+233	common	removexattr	sys_removexattr
+234	common	lremovexattr	sys_lremovexattr
+235	common	fremovexattr	sys_fremovexattr
+236	common	gettid	sys_gettid
+237	common	tkill	sys_tkill
+238	common	futex	sys_futex
+239	common	sched_setaffinity	sys_sched_setaffinity
+240	common	sched_getaffinity	sys_sched_getaffinity
+241	common	tgkill	sys_tgkill
+243	common	io_setup	sys_io_setup
+244	common	io_destroy	sys_io_destroy
+245	common	io_getevents	sys_io_getevents
+246	common	io_submit	sys_io_submit
+247	common	io_cancel	sys_io_cancel
+248	common	exit_group	sys_exit_group
+249	common	epoll_create	sys_epoll_create
+250	common	epoll_ctl	sys_epoll_ctl
+251	common	epoll_wait	sys_epoll_wait
+252	common	set_tid_address	sys_set_tid_address
+253	common	fadvise64	sys_fadvise64_64
+254	common	timer_create	sys_timer_create
+255	common	timer_settime	sys_timer_settime
+256	common	timer_gettime	sys_timer_gettime
+257	common	timer_getoverrun	sys_timer_getoverrun
+258	common	timer_delete	sys_timer_delete
+259	common	clock_settime	sys_clock_settime
+260	common	clock_gettime	sys_clock_gettime
+261	common	clock_getres	sys_clock_getres
+262	common	clock_nanosleep	sys_clock_nanosleep
+265	common	statfs64	sys_statfs64
+266	common	fstatfs64	sys_fstatfs64
+267	common	remap_file_pages	sys_remap_file_pages
+268	common	mbind	sys_mbind
+269	common	get_mempolicy	sys_get_mempolicy
+270	common	set_mempolicy	sys_set_mempolicy
+271	common	mq_open	sys_mq_open
+272	common	mq_unlink	sys_mq_unlink
+273	common	mq_timedsend	sys_mq_timedsend
+274	common	mq_timedreceive	sys_mq_timedreceive
+275	common	mq_notify	sys_mq_notify
+276	common	mq_getsetattr	sys_mq_getsetattr
+277	common	kexec_load	sys_kexec_load
+278	common	add_key	sys_add_key
+279	common	request_key	sys_request_key
+280	common	keyctl	sys_keyctl
+281	common	waitid	sys_waitid
+282	common	ioprio_set	sys_ioprio_set
+283	common	ioprio_get	sys_ioprio_get
+284	common	inotify_init	sys_inotify_init
+285	common	inotify_add_watch	sys_inotify_add_watch
+286	common	inotify_rm_watch	sys_inotify_rm_watch
+287	common	migrate_pages	sys_migrate_pages
+288	common	openat	sys_openat
+289	common	mkdirat	sys_mkdirat
+290	common	mknodat	sys_mknodat
+291	common	fchownat	sys_fchownat
+292	common	futimesat	sys_futimesat
+293	common	newfstatat	sys_newfstatat
+294	common	unlinkat	sys_unlinkat
+295	common	renameat	sys_renameat
+296	common	linkat	sys_linkat
+297	common	symlinkat	sys_symlinkat
+298	common	readlinkat	sys_readlinkat
+299	common	fchmodat	sys_fchmodat
+300	common	faccessat	sys_faccessat
+301	common	pselect6	sys_pselect6
+302	common	ppoll	sys_ppoll
+303	common	unshare	sys_unshare
+304	common	set_robust_list	sys_set_robust_list
+305	common	get_robust_list	sys_get_robust_list
+306	common	splice	sys_splice
+307	common	sync_file_range	sys_sync_file_range
+308	common	tee	sys_tee
+309	common	vmsplice	sys_vmsplice
+310	common	move_pages	sys_move_pages
+311	common	getcpu	sys_getcpu
+312	common	epoll_pwait	sys_epoll_pwait
+313	common	utimes	sys_utimes
+314	common	fallocate	sys_fallocate
+315	common	utimensat	sys_utimensat
+316	common	signalfd	sys_signalfd
+317	common	timerfd	sys_ni_syscall
+318	common	eventfd	sys_eventfd
+319	common	timerfd_create	sys_timerfd_create
+320	common	timerfd_settime	sys_timerfd_settime
+321	common	timerfd_gettime	sys_timerfd_gettime
+322	common	signalfd4	sys_signalfd4
+323	common	eventfd2	sys_eventfd2
+324	common	inotify_init1	sys_inotify_init1
+325	common	pipe2	sys_pipe2
+326	common	dup3	sys_dup3
+327	common	epoll_create1	sys_epoll_create1
+328	common	preadv	sys_preadv
+329	common	pwritev	sys_pwritev
+330	common	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo
+331	common	perf_event_open	sys_perf_event_open
+332	common	fanotify_init	sys_fanotify_init
+333	common	fanotify_mark	sys_fanotify_mark
+334	common	prlimit64	sys_prlimit64
+335	common	name_to_handle_at	sys_name_to_handle_at
+336	common	open_by_handle_at	sys_open_by_handle_at
+337	common	clock_adjtime	sys_clock_adjtime
+338	common	syncfs	sys_syncfs
+339	common	setns	sys_setns
+340	common	process_vm_readv	sys_process_vm_readv
+341	common	process_vm_writev	sys_process_vm_writev
+342	common	s390_runtime_instr	sys_s390_runtime_instr
+343	common	kcmp	sys_kcmp
+344	common	finit_module	sys_finit_module
+345	common	sched_setattr	sys_sched_setattr
+346	common	sched_getattr	sys_sched_getattr
+347	common	renameat2	sys_renameat2
+348	common	seccomp	sys_seccomp
+349	common	getrandom	sys_getrandom
+350	common	memfd_create	sys_memfd_create
+351	common	bpf	sys_bpf
+352	common	s390_pci_mmio_write	sys_s390_pci_mmio_write
+353	common	s390_pci_mmio_read	sys_s390_pci_mmio_read
+354	common	execveat	sys_execveat
+355	common	userfaultfd	sys_userfaultfd
+356	common	membarrier	sys_membarrier
+357	common	recvmmsg	sys_recvmmsg
+358	common	sendmmsg	sys_sendmmsg
+359	common	socket	sys_socket
+360	common	socketpair	sys_socketpair
+361	common	bind	sys_bind
+362	common	connect	sys_connect
+363	common	listen	sys_listen
+364	common	accept4	sys_accept4
+365	common	getsockopt	sys_getsockopt
+366	common	setsockopt	sys_setsockopt
+367	common	getsockname	sys_getsockname
+368	common	getpeername	sys_getpeername
+369	common	sendto	sys_sendto
+370	common	sendmsg	sys_sendmsg
+371	common	recvfrom	sys_recvfrom
+372	common	recvmsg	sys_recvmsg
+373	common	shutdown	sys_shutdown
+374	common	mlock2	sys_mlock2
+375	common	copy_file_range	sys_copy_file_range
+376	common	preadv2	sys_preadv2
+377	common	pwritev2	sys_pwritev2
+378	common	s390_guarded_storage	sys_s390_guarded_storage
+379	common	statx	sys_statx
+380	common	s390_sthyi	sys_s390_sthyi
+381	common	kexec_file_load	sys_kexec_file_load
+382	common	io_pgetevents	sys_io_pgetevents
+383	common	rseq	sys_rseq
+384	common	pkey_mprotect	sys_pkey_mprotect
+385	common	pkey_alloc	sys_pkey_alloc
+386	common	pkey_free	sys_pkey_free
 # room for arch specific syscalls
-392	64	semtimedop	sys_semtimedop	-
-393	common	semget	sys_semget	sys_semget
-394	common	semctl	sys_semctl	compat_sys_semctl
-395	common	shmget	sys_shmget	sys_shmget
-396	common	shmctl	sys_shmctl	compat_sys_shmctl
-397	common	shmat	sys_shmat	compat_sys_shmat
-398	common	shmdt	sys_shmdt	sys_shmdt
-399	common	msgget	sys_msgget	sys_msgget
-400	common	msgsnd	sys_msgsnd	compat_sys_msgsnd
-401	common	msgrcv	sys_msgrcv	compat_sys_msgrcv
-402	common	msgctl	sys_msgctl	compat_sys_msgctl
-403	32	clock_gettime64	-	sys_clock_gettime
-404	32	clock_settime64	-	sys_clock_settime
-405	32	clock_adjtime64	-	sys_clock_adjtime
-406	32	clock_getres_time64	-	sys_clock_getres
-407	32	clock_nanosleep_time64	-	sys_clock_nanosleep
-408	32	timer_gettime64	-	sys_timer_gettime
-409	32	timer_settime64	-	sys_timer_settime
-410	32	timerfd_gettime64	-	sys_timerfd_gettime
-411	32	timerfd_settime64	-	sys_timerfd_settime
-412	32	utimensat_time64	-	sys_utimensat
-413	32	pselect6_time64	-	compat_sys_pselect6_time64
-414	32	ppoll_time64	-	compat_sys_ppoll_time64
-416	32	io_pgetevents_time64	-	compat_sys_io_pgetevents_time64
-417	32	recvmmsg_time64	-	compat_sys_recvmmsg_time64
-418	32	mq_timedsend_time64	-	sys_mq_timedsend
-419	32	mq_timedreceive_time64	-	sys_mq_timedreceive
-420	32	semtimedop_time64	-	sys_semtimedop
-421	32	rt_sigtimedwait_time64	-	compat_sys_rt_sigtimedwait_time64
-422	32	futex_time64	-	sys_futex
-423	32	sched_rr_get_interval_time64	-	sys_sched_rr_get_interval
-424	common	pidfd_send_signal	sys_pidfd_send_signal	sys_pidfd_send_signal
-425	common	io_uring_setup	sys_io_uring_setup	sys_io_uring_setup
-426	common	io_uring_enter	sys_io_uring_enter	sys_io_uring_enter
-427	common	io_uring_register	sys_io_uring_register	sys_io_uring_register
-428	common	open_tree	sys_open_tree	sys_open_tree
-429	common	move_mount	sys_move_mount	sys_move_mount
-430	common	fsopen	sys_fsopen	sys_fsopen
-431	common	fsconfig	sys_fsconfig	sys_fsconfig
-432	common	fsmount	sys_fsmount	sys_fsmount
-433	common	fspick	sys_fspick	sys_fspick
-434	common	pidfd_open	sys_pidfd_open	sys_pidfd_open
-435	common	clone3	sys_clone3	sys_clone3
-436	common	close_range	sys_close_range	sys_close_range
-437	common	openat2	sys_openat2	sys_openat2
-438	common	pidfd_getfd	sys_pidfd_getfd	sys_pidfd_getfd
-439	common	faccessat2	sys_faccessat2	sys_faccessat2
-440	common	process_madvise	sys_process_madvise	sys_process_madvise
-441	common	epoll_pwait2	sys_epoll_pwait2	compat_sys_epoll_pwait2
-442	common	mount_setattr	sys_mount_setattr	sys_mount_setattr
-443	common	quotactl_fd	sys_quotactl_fd	sys_quotactl_fd
-444	common	landlock_create_ruleset	sys_landlock_create_ruleset	sys_landlock_create_ruleset
-445	common	landlock_add_rule	sys_landlock_add_rule	sys_landlock_add_rule
-446	common	landlock_restrict_self	sys_landlock_restrict_self	sys_landlock_restrict_self
-447	common	memfd_secret	sys_memfd_secret	sys_memfd_secret
-448	common	process_mrelease	sys_process_mrelease	sys_process_mrelease
-449	common	futex_waitv	sys_futex_waitv	sys_futex_waitv
-450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
-451	common	cachestat	sys_cachestat	sys_cachestat
-452	common	fchmodat2	sys_fchmodat2	sys_fchmodat2
-453	common	map_shadow_stack	sys_map_shadow_stack	sys_map_shadow_stack
-454	common	futex_wake	sys_futex_wake	sys_futex_wake
-455	common	futex_wait	sys_futex_wait	sys_futex_wait
-456	common	futex_requeue	sys_futex_requeue	sys_futex_requeue
-457	common	statmount	sys_statmount	sys_statmount
-458	common	listmount	sys_listmount	sys_listmount
-459	common	lsm_get_self_attr	sys_lsm_get_self_attr	sys_lsm_get_self_attr
-460	common	lsm_set_self_attr	sys_lsm_set_self_attr	sys_lsm_set_self_attr
-461	common	lsm_list_modules	sys_lsm_list_modules	sys_lsm_list_modules
-462	common	mseal	sys_mseal	sys_mseal
-463	common	setxattrat	sys_setxattrat	sys_setxattrat
-464	common	getxattrat	sys_getxattrat	sys_getxattrat
-465	common	listxattrat	sys_listxattrat	sys_listxattrat
-466	common	removexattrat	sys_removexattrat	sys_removexattrat
-467	common	open_tree_attr	sys_open_tree_attr	sys_open_tree_attr
-468	common	file_getattr	sys_file_getattr	sys_file_getattr
-469	common	file_setattr	sys_file_setattr	sys_file_setattr
-470	common	listns	sys_listns	sys_listns
+392	common	semtimedop	sys_semtimedop
+393	common	semget	sys_semget
+394	common	semctl	sys_semctl
+395	common	shmget	sys_shmget
+396	common	shmctl	sys_shmctl
+397	common	shmat	sys_shmat
+398	common	shmdt	sys_shmdt
+399	common	msgget	sys_msgget
+400	common	msgsnd	sys_msgsnd
+401	common	msgrcv	sys_msgrcv
+402	common	msgctl	sys_msgctl
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+425	common	io_uring_setup	sys_io_uring_setup
+426	common	io_uring_enter	sys_io_uring_enter
+427	common	io_uring_register	sys_io_uring_register
+428	common	open_tree	sys_open_tree
+429	common	move_mount	sys_move_mount
+430	common	fsopen	sys_fsopen
+431	common	fsconfig	sys_fsconfig
+432	common	fsmount	sys_fsmount
+433	common	fspick	sys_fspick
+434	common	pidfd_open	sys_pidfd_open
+435	common	clone3	sys_clone3
+436	common	close_range	sys_close_range
+437	common	openat2	sys_openat2
+438	common	pidfd_getfd	sys_pidfd_getfd
+439	common	faccessat2	sys_faccessat2
+440	common	process_madvise	sys_process_madvise
+441	common	epoll_pwait2	sys_epoll_pwait2
+442	common	mount_setattr	sys_mount_setattr
+443	common	quotactl_fd	sys_quotactl_fd
+444	common	landlock_create_ruleset	sys_landlock_create_ruleset
+445	common	landlock_add_rule	sys_landlock_add_rule
+446	common	landlock_restrict_self	sys_landlock_restrict_self
+447	common	memfd_secret	sys_memfd_secret
+448	common	process_mrelease	sys_process_mrelease
+449	common	futex_waitv	sys_futex_waitv
+450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	cachestat	sys_cachestat
+452	common	fchmodat2	sys_fchmodat2
+453	common	map_shadow_stack	sys_map_shadow_stack
+454	common	futex_wake	sys_futex_wake
+455	common	futex_wait	sys_futex_wait
+456	common	futex_requeue	sys_futex_requeue
+457	common	statmount	sys_statmount
+458	common	listmount	sys_listmount
+459	common	lsm_get_self_attr	sys_lsm_get_self_attr
+460	common	lsm_set_self_attr	sys_lsm_set_self_attr
+461	common	lsm_list_modules	sys_lsm_list_modules
+462	common	mseal	sys_mseal
+463	common	setxattrat	sys_setxattrat
+464	common	getxattrat	sys_getxattrat
+465	common	listxattrat	sys_listxattrat
+466	common	removexattrat	sys_removexattrat
+467	common	open_tree_attr	sys_open_tree_attr
+468	common	file_getattr	sys_file_getattr
+469	common	file_setattr	sys_file_setattr
+470	common	listns	sys_listns
+471	common	rseq_slice_yield	sys_rseq_slice_yield
+1
tools/perf/arch/sh/entry/syscalls/syscall.tbl
···474474468 common file_getattr sys_file_getattr
475475469 common file_setattr sys_file_setattr
476476470 common listns sys_listns
477477+471 common rseq_slice_yield sys_rseq_slice_yield
+2-1
tools/perf/arch/sparc/entry/syscalls/syscall.tbl
···480480432 common fsmount sys_fsmount
481481433 common fspick sys_fspick
482482434 common pidfd_open sys_pidfd_open
483483-# 435 reserved for clone3
483483+435 common clone3 __sys_clone3
484484436 common close_range sys_close_range
485485437 common openat2 sys_openat2
486486438 common pidfd_getfd sys_pidfd_getfd
···516516468 common file_getattr sys_file_getattr
517517469 common file_setattr sys_file_setattr
518518470 common listns sys_listns
519519+471 common rseq_slice_yield sys_rseq_slice_yield
···395395468 common file_getattr sys_file_getattr
396396469 common file_setattr sys_file_setattr
397397470 common listns sys_listns
398398+471 common rseq_slice_yield sys_rseq_slice_yield
398399
399400#
400401# Due to a historical design error, certain syscalls are numbered differently
+1
tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
···441441468 common file_getattr sys_file_getattr
442442469 common file_setattr sys_file_setattr
443443470 common listns sys_listns
444444+471 common rseq_slice_yield sys_rseq_slice_yield
···214214quiet_cmd_rm = RM $^
215215
216216prune_orphans: $(ORPHAN_FILES)
217217- $(Q)$(call echo-cmd,rm)rm -f $^
217217+ # The list of files can be long. Use xargs to prevent issues.
218218+ $(Q)$(call echo-cmd,rm)echo "$^" | xargs rm -f
218219
219220JEVENTS_DEPS += prune_orphans
220221endif
···7777 */
7878#define IRQ_WORK_VECTOR 0xf6
7979
8080+/* IRQ vector for PMIs when running a guest with a mediated PMU. */
8081#define PERF_GUEST_MEDIATED_PMI_VECTOR 0xf5
8182
8283#define DEFERRED_ERROR_VECTOR 0xf4
+1
tools/perf/trace/beauty/include/uapi/linux/fs.h
···253253#define FS_XFLAG_FILESTREAM 0x00004000 /* use filestream allocator */
254254#define FS_XFLAG_DAX 0x00008000 /* use DAX for IO */
255255#define FS_XFLAG_COWEXTSIZE 0x00010000 /* CoW extent size allocator hint */
256256+#define FS_XFLAG_VERITY 0x00020000 /* fs-verity enabled */
256257#define FS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */
257258
258259/* the read-only stuff doesn't really belong here, but any other place is
···6161/*
6262 * open_tree() flags.
6363 */
6464-#define OPEN_TREE_CLONE 1 /* Clone the target tree and attach the clone */
6464+#define OPEN_TREE_CLONE (1 << 0) /* Clone the target tree and attach the clone */
6565+#define OPEN_TREE_NAMESPACE (1 << 1) /* Clone the target tree into a new mount namespace */
6566#define OPEN_TREE_CLOEXEC O_CLOEXEC /* Close the file on execve() */
6667
6768/*
···198197 */
199198struct mnt_id_req {
200199 __u32 size;
201201- __u32 mnt_ns_fd;
200200+ union {
201201+ __u32 mnt_ns_fd;
202202+ __u32 mnt_fd;
203203+ };
202204 __u64 mnt_id;
203205 __u64 param;
204206 __u64 mnt_ns_id;
···235231 */
236232#define LSMT_ROOT 0xffffffffffffffff /* root mount */
237233#define LISTMOUNT_REVERSE (1 << 0) /* List later mounts first */
234234+
235235+/*
236236+ * @flag bits for statmount(2)
237237+ */
238238+#define STATMOUNT_BY_FD 0x00000001U /* want mountinfo for given fd */
238239
239240#endif /* _UAPI_LINUX_MOUNT_H */
···386386# define PR_FUTEX_HASH_SET_SLOTS 1
387387# define PR_FUTEX_HASH_GET_SLOTS 2
388388
389389+/* RSEQ time slice extensions */
390390+#define PR_RSEQ_SLICE_EXTENSION 79
391391+# define PR_RSEQ_SLICE_EXTENSION_GET 1
392392+# define PR_RSEQ_SLICE_EXTENSION_SET 2
393393+/*
394394+ * Bits for RSEQ_SLICE_EXTENSION_GET/SET
395395+ * PR_RSEQ_SLICE_EXT_ENABLE: Enable
396396+ */
397397+# define PR_RSEQ_SLICE_EXT_ENABLE 0x01
398398+
399399+/*
400400+ * Get the current indirect branch tracking configuration for the current
401401+ * thread; this will be the value configured via PR_SET_INDIR_BR_LP_STATUS.
402402+ */
403403+#define PR_GET_INDIR_BR_LP_STATUS 80
404404+
405405+/*
406406+ * Set the indirect branch tracking configuration. PR_INDIR_BR_LP_ENABLE will
407407+ * enable cpu feature for user thread, to track all indirect branches and ensure
408408+ * they land on arch defined landing pad instruction.
409409+ * x86 - If enabled, an indirect branch must land on an ENDBRANCH instruction.
410410+ * aarch64 - If enabled, an indirect branch must land on a BTI instruction.
411411+ * riscv - If enabled, an indirect branch must land on an lpad instruction.
412412+ * PR_INDIR_BR_LP_DISABLE will disable feature for user thread and indirect
413413+ * branches will no longer be tracked by cpu to land on arch defined landing pad
414414+ * instruction.
415415+ */
416416+#define PR_SET_INDIR_BR_LP_STATUS 81
417417+# define PR_INDIR_BR_LP_ENABLE (1UL << 0)
418418+
419419+/*
420420+ * Prevent further changes to the specified indirect branch tracking
421421+ * configuration. All bits may be locked via this call, including
422422+ * undefined bits.
423423+ */
424424+#define PR_LOCK_INDIR_BR_LP_STATUS 82
425425+
389426#endif /* _LINUX_PRCTL_H */
···549549 /*
550550 * Process the PE_CONTEXT packets if we have a valid contextID or VMID.
551551 * If the kernel is running at EL2, the PID is traced in CONTEXTIDR_EL2
552552- * as VMID, Bit ETM_OPT_CTXTID2 is set in this case.
552552+ * as VMID, Format attribute 'contextid2' is set in this case.
553553 */
554554 switch (cs_etm__get_pid_fmt(etmq)) {
555555 case CS_ETM_PIDFMT_CTXTID:
+13-23
tools/perf/util/cs-etm.c
···194194 * CS_ETM_PIDFMT_CTXTID2: CONTEXTIDR_EL2 is traced.
195195 * CS_ETM_PIDFMT_NONE: No context IDs
196196 *
197197- * It's possible that the two bits ETM_OPT_CTXTID and ETM_OPT_CTXTID2
197197+ * It's possible that the two format attributes 'contextid1' and 'contextid2'
198198 * are enabled at the same time when the session runs on an EL2 kernel.
199199 * This means the CONTEXTIDR_EL1 and CONTEXTIDR_EL2 both will be
200200 * recorded in the trace data, the tool will selectively use
···210210 if (metadata[CS_ETM_MAGIC] == __perf_cs_etmv3_magic) {
211211 val = metadata[CS_ETM_ETMCR];
212212 /* CONTEXTIDR is traced */
213213- if (val & BIT(ETM_OPT_CTXTID))
213213+ if (val & ETMCR_CTXTID)
214214 return CS_ETM_PIDFMT_CTXTID;
215215 } else {
216216 val = metadata[CS_ETMV4_TRCCONFIGR];
217217 /* CONTEXTIDR_EL2 is traced */
218218- if (val & (BIT(ETM4_CFG_BIT_VMID) | BIT(ETM4_CFG_BIT_VMID_OPT)))
218218+ if (val & (TRCCONFIGR_VMID | TRCCONFIGR_VMIDOPT))
219219 return CS_ETM_PIDFMT_CTXTID2;
220220 /* CONTEXTIDR_EL1 is traced */
221221- else if (val & BIT(ETM4_CFG_BIT_CTXTID))
221221+ else if (val & TRCCONFIGR_CID)
222222 return CS_ETM_PIDFMT_CTXTID;
223223 }
224224
···29142914 return 0;
29152915}
29162916
29172917-static int cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm)
29172917+static void cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm)
29182918{
29192919- struct evsel *evsel;
29202920- struct evlist *evlist = etm->session->evlist;
29192919+ /* Take first ETM as all options will be the same for all ETMs */
29202920+ u64 *metadata = etm->metadata[0];
29212921
29222922 /* Override timeless mode with user input from --itrace=Z */
29232923 if (etm->synth_opts.timeless_decoding) {
29242924 etm->timeless_decoding = true;
29252925- return 0;
29252925+ return;
29262926 }
29272927
29282928- /*
29292929- * Find the cs_etm evsel and look at what its timestamp setting was
29302930- */
29312931- evlist__for_each_entry(evlist, evsel)
29322932- if (cs_etm__evsel_is_auxtrace(etm->session, evsel)) {
29332933- etm->timeless_decoding =
29342934- !(evsel->core.attr.config & BIT(ETM_OPT_TS));
29352935- return 0;
29362936- }
29372937-
29382938- pr_err("CS ETM: Couldn't find ETM evsel\n");
29392939- return -EINVAL;
29282928+ if (metadata[CS_ETM_MAGIC] == __perf_cs_etmv3_magic)
29292929+ etm->timeless_decoding = !(metadata[CS_ETM_ETMCR] & ETMCR_TIMESTAMP_EN);
29302930+ else
29312931+ etm->timeless_decoding = !(metadata[CS_ETMV4_TRCCONFIGR] & TRCCONFIGR_TS);
29402932}
29412933
29422934/*
···34913499 etm->auxtrace.evsel_is_auxtrace = cs_etm__evsel_is_auxtrace;
34923500 session->auxtrace = &etm->auxtrace;
34933501
34943494- err = cs_etm__setup_timeless_decoding(etm);
34953495- if (err)
34963496- return err;
35023502+ cs_etm__setup_timeless_decoding(etm);
34973503
34983504 etm->tc.time_shift = tc->time_shift;
34993505 etm->tc.time_mult = tc->time_mult;
+15
tools/perf/util/cs-etm.h
···230230/* CoreSight trace ID is currently the bottom 7 bits of the value */
231231#define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0)
232232
233233+/* ETMv4 CONFIGR register bits */
234234+#define TRCCONFIGR_BB BIT(3)
235235+#define TRCCONFIGR_CCI BIT(4)
236236+#define TRCCONFIGR_CID BIT(6)
237237+#define TRCCONFIGR_VMID BIT(7)
238238+#define TRCCONFIGR_TS BIT(11)
239239+#define TRCCONFIGR_RS BIT(12)
240240+#define TRCCONFIGR_VMIDOPT BIT(15)
241241+
242242+/* ETMv3 ETMCR register bits */
243243+#define ETMCR_CYC_ACC BIT(12)
244244+#define ETMCR_CTXTID BIT(14)
245245+#define ETMCR_TIMESTAMP_EN BIT(28)
246246+#define ETMCR_RETURN_STACK BIT(29)
247247+
233248int cs_etm__process_auxtrace_info(union perf_event *event,
234249 struct perf_session *session);
235250void cs_etm_get_default_config(const struct perf_pmu *pmu, struct perf_event_attr *attr);
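The TRCCONFIGR masks above feed the PID-format decision in cs-etm.c: a set VMID (or VMIDOPT) bit means CONTEXTIDR_EL2 is traced and takes precedence over CONTEXTIDR_EL1. A minimal standalone sketch of that precedence (the enum and function names here are illustrative, not perf's actual API):

```c
#include <stdint.h>

#define BIT(n) (1ULL << (n))

/* Mirrors the ETMv4 TRCCONFIGR bits defined in the header above */
#define TRCCONFIGR_CID     BIT(6)
#define TRCCONFIGR_VMID    BIT(7)
#define TRCCONFIGR_VMIDOPT BIT(15)

enum pid_fmt { PIDFMT_NONE, PIDFMT_CTXTID, PIDFMT_CTXTID2 };

/* CONTEXTIDR_EL2 (VMID) wins when both context IDs are traced */
enum pid_fmt pid_fmt_from_trcconfigr(uint64_t val)
{
	if (val & (TRCCONFIGR_VMID | TRCCONFIGR_VMIDOPT))
		return PIDFMT_CTXTID2;
	if (val & TRCCONFIGR_CID)
		return PIDFMT_CTXTID;
	return PIDFMT_NONE;
}
```

When a session runs on an EL2 kernel both attributes may be enabled at once; checking the VMID bits first is what lets the tool selectively use CONTEXTIDR_EL2.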
+1-1
tools/perf/util/disasm.c
···384384 start = map__unmap_ip(map, sym->start);
385385 end = map__unmap_ip(map, sym->end);
386386
387387- ops->target.outside = target.addr < start || target.addr > end;
387387+ ops->target.outside = target.addr < start || target.addr >= end;
388388
389389 /*
390390 * FIXME: things like this in _cpp_lex_token (gcc's cc1 program):
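The disasm.c change above turns the range check half-open: a symbol occupies [start, end), so an address equal to `end` is outside it. A standalone sketch of the corrected predicate (the function name is illustrative, not perf's API):

```c
#include <stdbool.h>
#include <stdint.h>

/* A symbol spans the half-open interval [start, end);
 * the end address belongs to the next symbol. */
bool target_outside(uint64_t addr, uint64_t start, uint64_t end)
{
	return addr < start || addr >= end;
}
```

With the old `> end` comparison, a branch targeting exactly `end` was wrongly treated as inside the symbol.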
···3030# its policy for the relative importance of performance versus energy savings to
3131# the processor. See man CPUPOWER-SET(1) for additional details
3232#PERF_BIAS=
3333+
3434+# Set the Energy Performance Preference
3535+# Available options can be read from
3636+# /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences
3737+#EPP=
+6
tools/power/cpupower/cpupower.sh
···2323 cpupower set -b "$PERF_BIAS" > /dev/null || ESTATUS=1
2424fi
2525
2626+# apply Energy Performance Preference
2727+if test -n "$EPP"
2828+then
2929+ cpupower set -e "$EPP" > /dev/null || ESTATUS=1
3030+fi
3131+
2632exit $ESTATUS
+5-1
tools/power/cpupower/utils/cpupower-set.c
···124124 }
125125
126126 if (params.turbo_boost) {
127127- ret = cpupower_set_turbo_boost(turbo_boost);
127127+ if (cpupower_cpu_info.vendor == X86_VENDOR_INTEL)
128128+ ret = cpupower_set_intel_turbo_boost(turbo_boost);
129129+ else
130130+ ret = cpupower_set_generic_turbo_boost(turbo_boost);
131131+
128132 if (ret)
129133 fprintf(stderr, "Error setting turbo-boost\n");
130134 }
+4-1
tools/power/cpupower/utils/helpers/helpers.h
···104104/* cpuid and cpuinfo helpers **************************/
105105
106106int cpufreq_has_generic_boost_support(bool *active);
107107-int cpupower_set_turbo_boost(int turbo_boost);
107107+int cpupower_set_generic_turbo_boost(int turbo_boost);
108108
109109/* X86 ONLY ****************************************/
110110#if defined(__i386__) || defined(__x86_64__)
···143143
144144int cpufreq_has_x86_boost_support(unsigned int cpu, int *support,
145145 int *active, int *states);
146146+int cpupower_set_intel_turbo_boost(int turbo_boost);
146147
147148/* AMD P-State stuff **************************/
148149bool cpupower_amd_pstate_enabled(void);
···189188
190189static inline int cpufreq_has_x86_boost_support(unsigned int cpu, int *support,
191190 int *active, int *states)
191191+{ return -1; }
192192+static inline int cpupower_set_intel_turbo_boost(int turbo_boost)
192193{ return -1; }
193194
194195static inline bool cpupower_amd_pstate_enabled(void)
+39-2
tools/power/cpupower/utils/helpers/misc.c
···1919{
2020 int ret;
2121 unsigned long long val;
2222+ char linebuf[MAX_LINE_LEN];
2323+ char path[SYSFS_PATH_MAX];
2424+ char *endp;
2225
2326 *support = *active = *states = 0;
2427
···4542 }
4643 } else if (cpupower_cpu_info.caps & CPUPOWER_CAP_AMD_PSTATE) {
4744 amd_pstate_boost_init(cpu, support, active);
4848- } else if (cpupower_cpu_info.caps & CPUPOWER_CAP_INTEL_IDA)
4545+ } else if (cpupower_cpu_info.caps & CPUPOWER_CAP_INTEL_IDA) {
4946 *support = *active = 1;
4747+
4848+ snprintf(path, sizeof(path), PATH_TO_CPU "intel_pstate/no_turbo");
4949+
5050+ if (!is_valid_path(path))
5151+ return 0;
5252+
5353+ if (cpupower_read_sysfs(path, linebuf, MAX_LINE_LEN) == 0)
5454+ return -1;
5555+
5656+ val = strtol(linebuf, &endp, 0);
5757+ if (endp == linebuf || errno == ERANGE)
5858+ return -1;
5959+
6060+ *active = !val;
6161+ }
6262+ return 0;
6363+}
6464+
6565+int cpupower_set_intel_turbo_boost(int turbo_boost)
6666+{
6767+ char path[SYSFS_PATH_MAX];
6868+ char linebuf[2] = {};
6969+
7070+ snprintf(path, sizeof(path), PATH_TO_CPU "intel_pstate/no_turbo");
7171+
7272+ /* Fallback to generic solution when intel_pstate driver not running */
7373+ if (!is_valid_path(path))
7474+ return cpupower_set_generic_turbo_boost(turbo_boost);
7575+
7676+ snprintf(linebuf, sizeof(linebuf), "%d", !turbo_boost);
7777+
7878+ if (cpupower_write_sysfs(path, linebuf, 2) <= 0)
7979+ return -1;
8080+
5081 return 0;
5182}
5283
···311274 }
312275}
313276
314314-int cpupower_set_turbo_boost(int turbo_boost)
277277+int cpupower_set_generic_turbo_boost(int turbo_boost)
315278{
316279 char path[SYSFS_PATH_MAX];
317280 char linebuf[2] = {};
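Note the polarity in the hunk above: intel_pstate's `no_turbo` knob is inverted relative to the user request, so enabling turbo means writing "0" and disabling it means writing "1". A small sketch of just that inversion (the helper name is illustrative; the real code writes the string via `cpupower_write_sysfs`):

```c
#include <stdio.h>

/* intel_pstate exposes "no_turbo": 1 disables turbo, 0 enables it,
 * so the sysfs value is the logical inverse of the requested state. */
void format_no_turbo(char *buf, size_t len, int turbo_boost)
{
	snprintf(buf, len, "%d", !turbo_boost);
}
```

This is why the patch reads `*active = !val` when probing and writes `!turbo_boost` when setting.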
+2-2
tools/power/cpupower/utils/powercap-info.c
···3838 printf(" (%s)\n", mode ? "enabled" : "disabled");
3939
4040 if (zone->has_power_uw)
4141- printf(_("%sPower can be monitored in micro Jules\n"),
4141+ printf(_("%sPower can be monitored in micro Watts\n"),
4242 pr_prefix);
4343
4444 if (zone->has_energy_uj)
4545- printf(_("%sPower can be monitored in micro Watts\n"),
4545+ printf(_("%sPower can be monitored in micro Joules\n"),
4646 pr_prefix);
4747
4848 printf("\n");
+1
tools/scripts/syscall.tbl
···411411468 common file_getattr sys_file_getattr
412412469 common file_setattr sys_file_setattr
413413470 common listns sys_listns
414414+471 common rseq_slice_yield sys_rseq_slice_yield
+1-1
tools/testing/selftests/bpf/Makefile
···409409 CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \
410410 LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \
411411 EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \
412412- HOSTPKG_CONFIG=$(PKG_CONFIG) \
412412+ HOSTPKG_CONFIG='$(PKG_CONFIG)' \
413413 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ)
414414
415415# Get Clang's default includes on this system, as opposed to those seen by
···2828 kci_test_fdb_get
2929 kci_test_fdb_del
3030 kci_test_neigh_get
3131+ kci_test_neigh_update
3132 kci_test_bridge_parent_id
3233 kci_test_address_proto
3334 kci_test_enslave_bonding
···11591158 fi
11601159
11611160 end_test "PASS: neigh get"
11611161+}
11621162+
11631163+kci_test_neigh_update()
11641164+{
11651165+ dstip=10.0.2.4
11661166+ dstmac=de:ad:be:ef:13:37
11671167+ local ret=0
11681168+
11691169+ for proxy in "" "proxy" ; do
11701170+ # add a neighbour entry without any flags
11711171+ run_cmd ip neigh add $proxy $dstip dev "$devdummy" lladdr $dstmac nud permanent
11721172+ run_cmd_grep $dstip ip neigh show $proxy
11731173+ run_cmd_grep_fail "$dstip dev $devdummy .*\(managed\|use\|router\|extern\)" ip neigh show $proxy
11741174+
11751175+ # set the extern_learn flag, but no other
11761176+ run_cmd ip neigh change $proxy $dstip dev "$devdummy" extern_learn
11771177+ run_cmd_grep "$dstip dev $devdummy .* extern_learn" ip neigh show $proxy
11781178+ run_cmd_grep_fail "$dstip dev $devdummy .* \(managed\|use\|router\)" ip neigh show $proxy
11791179+
11801180+ # flags are reset when not provided
11811181+ run_cmd ip neigh change $proxy $dstip dev "$devdummy"
11821182+ run_cmd_grep $dstip ip neigh show $proxy
11831183+ run_cmd_grep_fail "$dstip dev $devdummy .* extern_learn" ip neigh show $proxy
11841184+
11851185+ # add a protocol
11861186+ run_cmd ip neigh change $proxy $dstip dev "$devdummy" protocol boot
11871187+ run_cmd_grep "$dstip dev $devdummy .* proto boot" ip neigh show $proxy
11881188+
11891189+ # protocol is retained when not provided
11901190+ run_cmd ip neigh change $proxy $dstip dev "$devdummy"
11911191+ run_cmd_grep "$dstip dev $devdummy .* proto boot" ip neigh show $proxy
11921192+
11931193+ # change protocol
11941194+ run_cmd ip neigh change $proxy $dstip dev "$devdummy" protocol static
11951195+ run_cmd_grep "$dstip dev $devdummy .* proto static" ip neigh show $proxy
11961196+
11971197+ # also check an extended flag for non-proxy neighs
11981198+ if [ "$proxy" = "" ]; then
11991199+ run_cmd ip neigh change $proxy $dstip dev "$devdummy" managed
12001200+ run_cmd_grep "$dstip dev $devdummy managed" ip neigh show $proxy
12011201+
12021202+ run_cmd ip neigh change $proxy $dstip dev "$devdummy" lladdr $dstmac
12031203+ run_cmd_grep_fail "$dstip dev $devdummy managed" ip neigh show $proxy
12041204+ fi
12051205+
12061206+ run_cmd ip neigh del $proxy $dstip dev "$devdummy"
12071207+ done
12081208+
12091209+ if [ $ret -ne 0 ];then
12101210+ end_test "FAIL: neigh update"
12111211+ return 1
12121212+ fi
12131213+
12141214+ end_test "PASS: neigh update"
11621215}
11631216
11641217kci_test_bridge_parent_id()
···11#include <asm/ppc_asm.h>
22
33-FUNC_START(enter_vmx_usercopy)
44- li r3,1
55- blr
66-
77-FUNC_START(exit_vmx_usercopy)
88- li r3,0
99- blr
1010-
113FUNC_START(enter_vmx_ops)
124 li r3,1
135 blr