···
	To turn off having tracepoints sent to printk,
	 echo 0 > /proc/sys/kernel/tracepoint_printk
	Note, echoing 1 into this file without the
-	tracepoint_printk kernel cmdline option has no effect.
+	tp_printk kernel cmdline option has no effect.

	The tp_printk_stop_on_boot (see below) can also be used
	to stop the printing of events to console at
Documentation/admin-guide/mm/zswap.rst (+2 -2)
···

 Some users cannot tolerate the swapping that comes with zswap store failures
 and zswap writebacks. Swapping can be disabled entirely (without disabling
-zswap itself) on a cgroup-basis as follows:
+zswap itself) on a cgroup-basis as follows::

	echo 0 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.writeback

···
 When there is a sizable amount of cold memory residing in the zswap pool, it
 can be advantageous to proactively write these cold pages to swap and reclaim
 the memory for other use cases. By default, the zswap shrinker is disabled.
-User can enable it as follows:
+User can enable it as follows::

	echo Y > /sys/module/zswap/parameters/shrinker_enabled
Documentation/dev-tools/testing-overview.rst (+2)
···
   KASAN and can be used in production. See Documentation/dev-tools/kfence.rst
 * lockdep is a locking correctness validator. See
   Documentation/locking/lockdep-design.rst
+* Runtime Verification (RV) supports checking specific behaviours for a given
+  subsystem. See Documentation/trace/rv/runtime-verification.rst
 * There are several other pieces of debug instrumentation in the kernel, many
   of which can be found in lib/Kconfig.debug
···
-Status: Unstable - ABI compatibility may be broken in the future
-
 Binding for Keystone gate control driver which uses PSC controller IP.

 This binding uses the common clock binding[1].
···
-Status: Unstable - ABI compatibility may be broken in the future
-
 Binding for keystone PLLs. The main PLL IP typically has a multiplier,
 a divider and a post divider. The additional PLL IPs like ARMPLL, DDRPLL
 and PAPLL are controlled by the memory mapped register where as the Main
···
 Binding for Texas Instruments ADPLL clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped ADPLL with two to three selectable input clocks
 and three to four children.
···
 Binding for Texas Instruments APLL clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped APLL with usually two selectable input clocks
 (reference clock and bypass clock), with analog phase locked
···
 Binding for Texas Instruments autoidle clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a register mapped
 clock which can be put to idle automatically by hardware based on the usage
 and a configuration bit setting. Autoidle clock is never an individual
···
 Binding for Texas Instruments clockdomain.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1] in consumer role.
 Every clock on TI SoC belongs to one clockdomain, but software
 only needs this information for specific clocks which require
···
 Binding for TI composite clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped composite clock with multiple different sub-types;
···
 Binding for TI divider clock

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped adjustable clock rate divider that does not gate and has
 only one input clock or parent. By default the value programmed into
···
 Binding for Texas Instruments DPLL clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped DPLL with usually two selectable input clocks
 (reference clock and bypass clock), with digital phase locked
···
 Binding for Texas Instruments FAPLL clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped FAPLL with usually two selectable input clocks
 (reference clock and bypass clock), and one or more child
···
 Binding for TI fixed factor rate clock sources.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1], and also uses the autoidle
 support from TI autoidle clock [2].
···
 Binding for Texas Instruments gate clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. This clock is
 quite much similar to the basic gate-clock [2], however,
 it supports a number of additional features. If no register
···
 Binding for Texas Instruments interface clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. This clock is
 quite much similar to the basic gate-clock [2], however,
 it supports a number of additional features, including
···
 Binding for TI mux clock.

-Binding status: Unstable - ABI compatibility may be broken in the future
-
 This binding uses the common clock binding[1]. It assumes a
 register-mapped multiplexer with multiple input clock signals or
 parents, one of which can be selected as output. This clock does not
···
 TI Davinci DSP devices
 =======================

-Binding status: Unstable - Subject to changes for DT representation of clocks
- and resets
-
 The TI Davinci family of SoCs usually contains a TI DSP Core sub-system that
 is used to offload some of the processor-intensive tasks or algorithms, for
 achieving various system level goals.
···
   a GPIO spec for the external headphone detect pin. If jd-mode = 0,
   we will get the JD status by getting the value of hp-detect-gpios.

+- cbj-sleeve-gpios:
+  a GPIO spec to control the external combo jack circuit to tie the sleeve/ring
+  contacts to the ground or floating. It could avoid some electric noise from the
+  active speaker jacks.
+
 - realtek,in2-differential
   Boolean. Indicate MIC2 input are differential, rather than single-ended.
···
	compatible = "realtek,rt5650";
	reg = <0x1a>;
	hp-detect-gpios = <&gpio 19 0>;
+	cbj-sleeve-gpios = <&gpio 20 0>;
	interrupt-parent = <&gpio>;
	interrupts = <7 IRQ_TYPE_EDGE_FALLING>;
	realtek,dmic-en = "true";
···
    be implemented in an always-on power domain."

 patternProperties:
-  '^frame@[0-9a-z]*$':
+  '^frame@[0-9a-f]+$':
    type: object
    additionalProperties: false
    description: A timer node has up to 8 frame sub-nodes, each with the following properties.
···
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Devlink E-Switch Attribute
+==========================
+
+Devlink E-Switch supports two modes of operation: legacy and switchdev.
+Legacy mode operates based on traditional MAC/VLAN steering rules. Switching
+decisions are made based on MAC addresses, VLANs, etc. There is limited ability
+to offload switching rules to hardware.
+
+On the other hand, switchdev mode allows for more advanced offloading
+capabilities of the E-Switch to hardware. In switchdev mode, more switching
+rules and logic can be offloaded to the hardware switch ASIC. It enables
+representor netdevices that represent the slow path of virtual functions (VFs)
+or scalable-functions (SFs) of the device. See more information about
+:ref:`Documentation/networking/switchdev.rst <switchdev>` and
+:ref:`Documentation/networking/representors.rst <representors>`.
+
+In addition, the devlink E-Switch also comes with other attributes listed
+in the following section.
+
+Attributes Description
+======================
+
+The following is a list of E-Switch attributes.
+
+.. list-table:: E-Switch attributes
+   :widths: 8 5 45
+
+   * - Name
+     - Type
+     - Description
+   * - ``mode``
+     - enum
+     - The mode of the device. The mode can be one of the following:
+
+       * ``legacy`` operates based on traditional MAC/VLAN steering
+         rules.
+       * ``switchdev`` allows for more advanced offloading capabilities of
+         the E-Switch to hardware.
+   * - ``inline-mode``
+     - enum
+     - Some HWs need the VF driver to put part of the packet
+       headers on the TX descriptor so the e-switch can do proper
+       matching and steering. Support for both switchdev mode and legacy mode.
+
+       * ``none`` none.
+       * ``link`` L2 mode.
+       * ``network`` L3 mode.
+       * ``transport`` L4 mode.
+   * - ``encap-mode``
+     - enum
+     - The encapsulation mode of the device. Support for both switchdev mode
+       and legacy mode. The mode can be one of the following:
+
+       * ``none`` Disable encapsulation support.
+       * ``basic`` Enable encapsulation support.
+
+Example Usage
+=============
+
+.. code:: shell
+
+    # enable switchdev mode
+    $ devlink dev eswitch set pci/0000:08:00.0 mode switchdev
+
+    # set inline-mode and encap-mode
+    $ devlink dev eswitch set pci/0000:08:00.0 inline-mode none encap-mode basic
+
+    # display devlink device eswitch attributes
+    $ devlink dev eswitch show pci/0000:08:00.0
+    pci/0000:08:00.0: mode switchdev inline-mode none encap-mode basic
+
+    # enable encap-mode with legacy mode
+    $ devlink dev eswitch set pci/0000:08:00.0 mode legacy inline-mode none encap-mode basic
···
 Hence, the ASID for the SEV-enabled guests must be from 1 to a maximum value
 defined in the CPUID 0x8000001f[ecx] field.

-SEV Key Management
-==================
+The KVM_MEMORY_ENCRYPT_OP ioctl
+===============================

-The SEV guest key management is handled by a separate processor called the AMD
-Secure Processor (AMD-SP). Firmware running inside the AMD-SP provides a secure
-key management interface to perform common hypervisor activities such as
-encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
-information, see the SEV Key Management spec [api-spec]_
-
-The main ioctl to access SEV is KVM_MEMORY_ENCRYPT_OP. If the argument
-to KVM_MEMORY_ENCRYPT_OP is NULL, the ioctl returns 0 if SEV is enabled
-and ``ENOTTY`` if it is disabled (on some older versions of Linux,
-the ioctl runs normally even with a NULL argument, and therefore will
-likely return ``EFAULT``). If non-NULL, the argument to KVM_MEMORY_ENCRYPT_OP
-must be a struct kvm_sev_cmd::
+The main ioctl to access SEV is KVM_MEMORY_ENCRYPT_OP, which operates on
+the VM file descriptor. If the argument to KVM_MEMORY_ENCRYPT_OP is NULL,
+the ioctl returns 0 if SEV is enabled and ``ENOTTY`` if it is disabled
+(on some older versions of Linux, the ioctl tries to run normally even
+with a NULL argument, and therefore will likely return ``EFAULT`` instead
+of zero if SEV is enabled). If non-NULL, the argument to
+KVM_MEMORY_ENCRYPT_OP must be a struct kvm_sev_cmd::

         struct kvm_sev_cmd {
                 __u32 id;
···
 The KVM_SEV_INIT command is used by the hypervisor to initialize the SEV platform
 context. In a typical workflow, this command should be the first command issued.

-The firmware can be initialized either by using its own non-volatile storage or
-the OS can manage the NV storage for the firmware using the module parameter
-``init_ex_path``. If the file specified by ``init_ex_path`` does not exist or
-is invalid, the OS will create or override the file with output from PSP.

 Returns: 0 on success, -negative on error

···
 issued by the hypervisor to make the guest ready for execution.

 Returns: 0 on success, -negative on error
+
+Firmware Management
+===================
+
+The SEV guest key management is handled by a separate processor called the AMD
+Secure Processor (AMD-SP). Firmware running inside the AMD-SP provides a secure
+key management interface to perform common hypervisor activities such as
+encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
+information, see the SEV Key Management spec [api-spec]_
+
+The AMD-SP firmware can be initialized either by using its own non-volatile
+storage or the OS can manage the NV storage for the firmware using
+parameter ``init_ex_path`` of the ``ccp`` module. If the file specified
+by ``init_ex_path`` does not exist or is invalid, the OS will create or
+override the file with PSP non-volatile storage.

 References
 ==========
Documentation/virt/kvm/x86/msr.rst (+9 -10)
···
	Asynchronous page fault (APF) control MSR.

	Bits 63-6 hold 64-byte aligned physical address of a 64 byte memory area
-	which must be in guest RAM and must be zeroed. This memory is expected
-	to hold a copy of the following structure::
+	which must be in guest RAM. This memory is expected to hold the
+	following structure::

	  struct kvm_vcpu_pv_apf_data {
		/* Used for 'page not present' events delivered via #PF */
···
		__u32 token;

		__u8 pad[56];
-		__u32 enabled;
	  };

	Bits 5-4 of the MSR are reserved and should be zero. Bit 0 is set to 1
···
	as regular page fault, guest must reset 'flags' to '0' before it does
	something that can generate normal page fault.

-	Bytes 5-7 of 64 byte memory location ('token') will be written to by the
+	Bytes 4-7 of 64 byte memory location ('token') will be written to by the
	hypervisor at the time of APF 'page ready' event injection. The content
-	of these bytes is a token which was previously delivered as 'page not
-	present' event. The event indicates the page in now available. Guest is
-	supposed to write '0' to 'token' when it is done handling 'page ready'
-	event and to write 1' to MSR_KVM_ASYNC_PF_ACK after clearing the location;
-	writing to the MSR forces KVM to re-scan its queue and deliver the next
-	pending notification.
+	of these bytes is a token which was previously delivered in CR2 as
+	'page not present' event. The event indicates the page is now available.
+	Guest is supposed to write '0' to 'token' when it is done handling
+	'page ready' event and to write '1' to MSR_KVM_ASYNC_PF_ACK after
+	clearing the location; writing to the MSR forces KVM to re-scan its
+	queue and deliver the next pending notification.

	Note, MSR_KVM_ASYNC_PF_INT MSR specifying the interrupt vector for 'page
	ready' APF delivery needs to be written to before enabling APF mechanism
···
	blr	x2
0:
	mov_q	x0, HCR_HOST_NVHE_FLAGS
+
+	/*
+	 * Compliant CPUs advertise their VHE-onlyness with
+	 * ID_AA64MMFR4_EL1.E2H0 < 0. HCR_EL2.E2H can be
+	 * RES1 in that case. Publish the E2H bit early so that
+	 * it can be picked up by the init_el2_state macro.
+	 *
+	 * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
+	 * don't advertise it (they predate this relaxation).
+	 */
+	mrs_s	x1, SYS_ID_AA64MMFR4_EL1
+	tbz	x1, #(ID_AA64MMFR4_EL1_E2H0_SHIFT + ID_AA64MMFR4_EL1_E2H0_WIDTH - 1), 1f
+
+	orr	x0, x0, #HCR_E2H
+1:
	msr	hcr_el2, x0
	isb
···

	mov_q	x1, INIT_SCTLR_EL1_MMU_OFF

-	/*
-	 * Compliant CPUs advertise their VHE-onlyness with
-	 * ID_AA64MMFR4_EL1.E2H0 < 0. HCR_EL2.E2H can be
-	 * RES1 in that case.
-	 *
-	 * Fruity CPUs seem to have HCR_EL2.E2H set to RES1, but
-	 * don't advertise it (they predate this relaxation).
-	 */
-	mrs_s	x0, SYS_ID_AA64MMFR4_EL1
-	ubfx	x0, x0, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH
-	tbnz	x0, #(ID_AA64MMFR4_EL1_E2H0_SHIFT + ID_AA64MMFR4_EL1_E2H0_WIDTH - 1), 1f
-
	mrs	x0, hcr_el2
	and	x0, x0, #HCR_E2H
	cbz	x0, 2f
-1:
+
	/* Set a sane SCTLR_EL1, the VHE way */
	pre_disable_mmu_workaround
	msr_s	SYS_SCTLR_EL12, x1
arch/arm64/kernel/ptrace.c (+1 -4)
···
{
	unsigned int vq;
	bool active;
-	bool fpsimd_only;
	enum vec_type task_type;

	memset(header, 0, sizeof(*header));
···
	case ARM64_VEC_SVE:
		if (test_tsk_thread_flag(target, TIF_SVE_VL_INHERIT))
			header->flags |= SVE_PT_VL_INHERIT;
-		fpsimd_only = !test_tsk_thread_flag(target, TIF_SVE);
		break;
	case ARM64_VEC_SME:
		if (test_tsk_thread_flag(target, TIF_SME_VL_INHERIT))
			header->flags |= SVE_PT_VL_INHERIT;
-		fpsimd_only = false;
		break;
	default:
		WARN_ON_ONCE(1);
···
	}

	if (active) {
-		if (fpsimd_only) {
+		if (target->thread.fp_type == FP_STATE_FPSIMD) {
			header->flags |= SVE_PT_REGS_FPSIMD;
		} else {
			header->flags |= SVE_PT_REGS_SVE;
···
	if (!IS_ENABLED(CONFIG_PGSTE))
		return KERNEL_FAULT;
	gmap = (struct gmap *)S390_lowcore.gmap;
-	if (regs->cr1 == gmap->asce)
+	if (gmap && gmap->asce == regs->cr1)
		return GMAP_FAULT;
	return KERNEL_FAULT;
}
arch/x86/coco/core.c (+93)
···
 * Confidential Computing Platform Capability checks
 *
 * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ * Copyright (C) 2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 *
 * Author: Tom Lendacky <thomas.lendacky@amd.com>
 */

#include <linux/export.h>
#include <linux/cc_platform.h>
+#include <linux/string.h>
+#include <linux/random.h>

+#include <asm/archrandom.h>
#include <asm/coco.h>
#include <asm/processor.h>

enum cc_vendor cc_vendor __ro_after_init = CC_VENDOR_NONE;
u64 cc_mask __ro_after_init;
+
+static struct cc_attr_flags {
+	__u64 host_sev_snp	: 1,
+	      __resv		: 63;
+} cc_flags;

static bool noinstr intel_cc_platform_has(enum cc_attr attr)
{
···
	case CC_ATTR_GUEST_SEV_SNP:
		return sev_status & MSR_AMD64_SEV_SNP_ENABLED;

+	case CC_ATTR_HOST_SEV_SNP:
+		return cc_flags.host_sev_snp;
+
	default:
		return false;
	}
···
	}
}
EXPORT_SYMBOL_GPL(cc_mkdec);
+
+static void amd_cc_platform_clear(enum cc_attr attr)
+{
+	switch (attr) {
+	case CC_ATTR_HOST_SEV_SNP:
+		cc_flags.host_sev_snp = 0;
+		break;
+	default:
+		break;
+	}
+}
+
+void cc_platform_clear(enum cc_attr attr)
+{
+	switch (cc_vendor) {
+	case CC_VENDOR_AMD:
+		amd_cc_platform_clear(attr);
+		break;
+	default:
+		break;
+	}
+}
+
+static void amd_cc_platform_set(enum cc_attr attr)
+{
+	switch (attr) {
+	case CC_ATTR_HOST_SEV_SNP:
+		cc_flags.host_sev_snp = 1;
+		break;
+	default:
+		break;
+	}
+}
+
+void cc_platform_set(enum cc_attr attr)
+{
+	switch (cc_vendor) {
+	case CC_VENDOR_AMD:
+		amd_cc_platform_set(attr);
+		break;
+	default:
+		break;
+	}
+}
+
+__init void cc_random_init(void)
+{
+	/*
+	 * The seed is 32 bytes (in units of longs), which is 256 bits, which
+	 * is the security level that the RNG is targeting.
+	 */
+	unsigned long rng_seed[32 / sizeof(long)];
+	size_t i, longs;
+
+	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+		return;
+
+	/*
+	 * Since the CoCo threat model includes the host, the only reliable
+	 * source of entropy that can be neither observed nor manipulated is
+	 * RDRAND. Usually, RDRAND failure is considered tolerable, but since
+	 * CoCo guests have no other unobservable source of entropy, it's
+	 * important to at least ensure the RNG gets some initial random seeds.
+	 */
+	for (i = 0; i < ARRAY_SIZE(rng_seed); i += longs) {
+		longs = arch_get_random_longs(&rng_seed[i], ARRAY_SIZE(rng_seed) - i);
+
+		/*
+		 * A zero return value means that the guest doesn't have RDRAND
+		 * or the CPU is physically broken, and in both cases that
+		 * means most crypto inside of the CoCo instance will be
+		 * broken, defeating the purpose of CoCo in the first place. So
+		 * just panic here because it's absolutely unsafe to continue
+		 * executing.
+		 */
+		if (longs == 0)
+			panic("RDRAND is defective.");
+	}
+	add_device_randomness(rng_seed, sizeof(rng_seed));
+	memzero_explicit(rng_seed, sizeof(rng_seed));
+}
arch/x86/events/intel/ds.c (+4 -4)
···
	struct pmu *pmu = event->pmu;

	/*
-	 * Make sure we get updated with the first PEBS
-	 * event. It will trigger also during removal, but
-	 * that does not hurt:
+	 * Make sure we get updated with the first PEBS event.
+	 * During removal, ->pebs_data_cfg is still valid for
+	 * the last PEBS event. Don't clear it.
	 */
-	if (cpuc->n_pebs == 1)
+	if ((cpuc->n_pebs == 1) && add)
		cpuc->pebs_data_cfg = PEBS_UPDATE_DS_SW;

	if (needed_cb != pebs_needs_sched_cb(cpuc)) {
···
#endif
}

+static void bsp_determine_snp(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
+	cc_vendor = CC_VENDOR_AMD;
+
+	if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
+		/*
+		 * RMP table entry format is not architectural and is defined by the
+		 * per-processor PPR. Restrict SNP support on the known CPU models
+		 * for which the RMP table entry format is currently defined for.
+		 */
+		if (!cpu_has(c, X86_FEATURE_HYPERVISOR) &&
+		    c->x86 >= 0x19 && snp_probe_rmptable_info()) {
+			cc_platform_set(CC_ATTR_HOST_SEV_SNP);
+		} else {
+			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
+			cc_platform_clear(CC_ATTR_HOST_SEV_SNP);
+		}
+	}
+#endif
+}
+
static void bsp_init_amd(struct cpuinfo_x86 *c)
{
	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
···
		break;
	}

-	if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
-		/*
-		 * RMP table entry format is not architectural and it can vary by processor
-		 * and is defined by the per-processor PPR. Restrict SNP support on the
-		 * known CPU model and family for which the RMP table entry format is
-		 * currently defined for.
-		 */
-		if (!boot_cpu_has(X86_FEATURE_ZEN3) &&
-		    !boot_cpu_has(X86_FEATURE_ZEN4) &&
-		    !boot_cpu_has(X86_FEATURE_ZEN5))
-			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
-		else if (!snp_probe_rmptable_info())
-			setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
-	}
-
+	bsp_determine_snp(c);
	return;

warn:
···
	    (boot_cpu_data.x86 >= 0x0f)))
		return;

-	if (cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+	if (cc_platform_has(CC_ATTR_HOST_SEV_SNP))
		return;

	rdmsr(MSR_AMD64_SYSCFG, lo, hi);
arch/x86/kernel/cpu/resctrl/internal.h (+2 -1)
···
	else
		cpu = cpumask_any_but(mask, exclude_cpu);

-	if (!IS_ENABLED(CONFIG_NO_HZ_FULL))
+	/* Only continue if tick_nohz_full_mask has been initialized. */
+	if (!tick_nohz_full_enabled())
		return cpu;

	/* If the CPU picked isn't marked nohz_full nothing more needs doing. */
arch/x86/kernel/kvm.c (+6 -5)
···

early_param("no-steal-acc", parse_no_stealacc);

+static DEFINE_PER_CPU_READ_MOSTLY(bool, async_pf_enabled);
static DEFINE_PER_CPU_DECRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time) __aligned(64) __visible;
static int has_steal_clock = 0;
···
{
	u32 flags = 0;

-	if (__this_cpu_read(apf_reason.enabled)) {
+	if (__this_cpu_read(async_pf_enabled)) {
		flags = __this_cpu_read(apf_reason.flags);
		__this_cpu_write(apf_reason.flags, 0);
	}
···

	inc_irq_stat(irq_hv_callback_count);

-	if (__this_cpu_read(apf_reason.enabled)) {
+	if (__this_cpu_read(async_pf_enabled)) {
		token = __this_cpu_read(apf_reason.token);
		kvm_async_pf_task_wake(token);
		__this_cpu_write(apf_reason.token, 0);
···
		wrmsrl(MSR_KVM_ASYNC_PF_INT, HYPERVISOR_CALLBACK_VECTOR);

		wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
-		__this_cpu_write(apf_reason.enabled, 1);
+		__this_cpu_write(async_pf_enabled, true);
		pr_debug("setup async PF for cpu %d\n", smp_processor_id());
	}
···

static void kvm_pv_disable_apf(void)
{
-	if (!__this_cpu_read(apf_reason.enabled))
+	if (!__this_cpu_read(async_pf_enabled))
		return;

	wrmsrl(MSR_KVM_ASYNC_PF_EN, 0);
-	__this_cpu_write(apf_reason.enabled, 0);
+	__this_cpu_write(async_pf_enabled, false);

	pr_debug("disable async PF for cpu %d\n", smp_processor_id());
}
···
}
device_initcall(snp_init_platform_device);

-void kdump_sev_callback(void)
-{
-	/*
-	 * Do wbinvd() on remote CPUs when SNP is enabled in order to
-	 * safely do SNP_SHUTDOWN on the local CPU.
-	 */
-	if (cpu_feature_enabled(X86_FEATURE_SEV_SNP))
-		wbinvd();
-}
-
void sev_show_status(void)
{
	int i;
arch/x86/kvm/Kconfig (+1)
···
	default y
	depends on KVM_AMD && X86_64
	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
+	select ARCH_HAS_CC_PLATFORM
	help
	  Provides support for launching Encrypted VMs (SEV) and Encrypted VMs
	  with Encrypted State (SEV-ES) on AMD processors.
···
};

/* Called with the sev_bitmap_lock held, or on shutdown */
-static int sev_flush_asids(int min_asid, int max_asid)
+static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
{
-	int ret, asid, error = 0;
+	int ret, error = 0;
+	unsigned int asid;

	/* Check if there are any ASIDs to reclaim before performing a flush */
	asid = find_next_bit(sev_reclaim_asid_bitmap, nr_asids, min_asid);
···
}

/* Must be called with the sev_bitmap_lock held */
-static bool __sev_recycle_asids(int min_asid, int max_asid)
+static bool __sev_recycle_asids(unsigned int min_asid, unsigned int max_asid)
{
	if (sev_flush_asids(min_asid, max_asid))
		return false;
···

static int sev_asid_new(struct kvm_sev_info *sev)
{
-	int asid, min_asid, max_asid, ret;
+	/*
+	 * SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
+	 * SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
+	 * Note: min ASID can end up larger than the max if basic SEV support is
+	 * effectively disabled by disallowing use of ASIDs for SEV guests.
+	 */
+	unsigned int min_asid = sev->es_active ? 1 : min_sev_asid;
+	unsigned int max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
+	unsigned int asid;
	bool retry = true;
+	int ret;
+
+	if (min_asid > max_asid)
+		return -ENOTTY;

	WARN_ON(sev->misc_cg);
	sev->misc_cg = get_current_misc_cg();
···

	mutex_lock(&sev_bitmap_lock);

-	/*
-	 * SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
-	 * SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
-	 */
-	min_asid = sev->es_active ? 1 : min_sev_asid;
-	max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
again:
	asid = find_next_zero_bit(sev_asid_bitmap, max_asid + 1, min_asid);
	if (asid > max_asid) {
···

	mutex_unlock(&sev_bitmap_lock);

-	return asid;
+	sev->asid = asid;
+	return 0;
e_uncharge:
	sev_misc_cg_uncharge(sev);
	put_misc_cg(sev->misc_cg);
···
	return ret;
}

-static int sev_get_asid(struct kvm *kvm)
+static unsigned int sev_get_asid(struct kvm *kvm)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
···
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
	struct sev_platform_init_args init_args = {0};
-	int asid, ret;
+	int ret;

	if (kvm->created_vcpus)
		return -EINVAL;

-	ret = -EBUSY;
	if (unlikely(sev->active))
-		return ret;
+		return -EINVAL;

	sev->active = true;
	sev->es_active = argp->id == KVM_SEV_ES_INIT;
-	asid = sev_asid_new(sev);
-	if (asid < 0)
+	ret = sev_asid_new(sev);
+	if (ret)
		goto e_no_asid;
-	sev->asid = asid;

	init_args.probe = false;
	ret = sev_platform_init(&init_args);
···

static int sev_bind_asid(struct kvm *kvm, unsigned int handle, int *error)
{
+	unsigned int asid = sev_get_asid(kvm);
	struct sev_data_activate activate;
-	int asid = sev_get_asid(kvm);
	int ret;

	/* activate ASID on the given handle */
···
		goto out;
	}

-	sev_asid_count = max_sev_asid - min_sev_asid + 1;
-	WARN_ON_ONCE(misc_cg_set_capacity(MISC_CG_RES_SEV, sev_asid_count));
+	if (min_sev_asid <= max_sev_asid) {
+		sev_asid_count = max_sev_asid - min_sev_asid + 1;
+		WARN_ON_ONCE(misc_cg_set_capacity(MISC_CG_RES_SEV, sev_asid_count));
+	}
	sev_supported = true;

	/* SEV-ES support requested? */
···
out:
	if (boot_cpu_has(X86_FEATURE_SEV))
		pr_info("SEV %s (ASIDs %u - %u)\n",
-			sev_supported ? "enabled" : "disabled",
+			sev_supported ? min_sev_asid <= max_sev_asid ? "enabled" :
+								       "unusable" :
+								       "disabled",
			min_sev_asid, max_sev_asid);
	if (boot_cpu_has(X86_FEATURE_SEV_ES))
		pr_info("SEV-ES %s (ASIDs %u - %u)\n",
···
 */
static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
{
-	int asid = to_kvm_svm(vcpu->kvm)->sev_info.asid;
+	unsigned int asid = sev_get_asid(vcpu->kvm);

	/*
	 * Note! The address must be a kernel address, as regular page walk
···
void pre_sev_run(struct vcpu_svm *svm, int cpu)
{
	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
-	int asid = sev_get_asid(svm->vcpu.kvm);
+	unsigned int asid = sev_get_asid(svm->vcpu.kvm);

	/* Assign the asid allocated with this SEV guest */
	svm->asid = asid;
···
	unsigned long pfn;
	struct page *p;

-	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+	if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
		return alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);

	/*
···
	memtype_free(paddr, paddr + size);
}

+static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
+		pgprot_t *pgprot)
+{
+	unsigned long prot;
+
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_PAT));
+
+	/*
+	 * We need the starting PFN and cachemode used for track_pfn_remap()
+	 * that covered the whole VMA. For most mappings, we can obtain that
+	 * information from the page tables. For COW mappings, we might now
+	 * suddenly have anon folios mapped and follow_phys() will fail.
+	 *
+	 * Fallback to using vma->vm_pgoff, see remap_pfn_range_notrack(), to
+	 * detect the PFN. If we need the cachemode as well, we're out of luck
+	 * for now and have to fail fork().
+	 */
+	if (!follow_phys(vma, vma->vm_start, 0, &prot, paddr)) {
+		if (pgprot)
+			*pgprot = __pgprot(prot);
+		return 0;
+	}
+	if (is_cow_mapping(vma->vm_flags)) {
+		if (pgprot)
+			return -EINVAL;
+		*paddr = (resource_size_t)vma->vm_pgoff << PAGE_SHIFT;
+		return 0;
+	}
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+
/*
 * track_pfn_copy is called when vma that is covering the pfnmap gets
 * copied through copy_page_range().
···
int track_pfn_copy(struct vm_area_struct *vma)
{
	resource_size_t paddr;
-	unsigned long prot;
	unsigned long vma_size = vma->vm_end - vma->vm_start;
	pgprot_t pgprot;

	if (vma->vm_flags & VM_PAT) {
-		/*
-		 * reserve the whole chunk covered by vma. We need the
-		 * starting address and protection from pte.
-		 */
-		if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) {
-			WARN_ON_ONCE(1);
+		if (get_pat_info(vma, &paddr, &pgprot))
			return -EINVAL;
-		}
-		pgprot = __pgprot(prot);
+		/* reserve the whole chunk covered by vma. */
		return reserve_pfn_range(paddr, vma_size, &pgprot, 1);
	}
···
		unsigned long size, bool mm_wr_locked)
{
	resource_size_t paddr;
-	unsigned long prot;

	if (vma && !(vma->vm_flags & VM_PAT))
		return;
···
	/* free the chunk starting from pfn or the whole chunk */
	paddr = (resource_size_t)pfn << PAGE_SHIFT;
	if (!paddr && !size) {
-		if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) {
-			WARN_ON_ONCE(1);
+		if (get_pat_info(vma, &paddr, NULL))
			return;
-		}
-
		size = vma->vm_end - vma->vm_start;
	}
	free_pfn_range(paddr, size);
+8-11
arch/x86/net/bpf_jit_comp.c
···480480static int emit_rsb_call(u8 **pprog, void *func, void *ip)481481{482482 OPTIMIZER_HIDE_VAR(func);483483- x86_call_depth_emit_accounting(pprog, func);483483+ ip += x86_call_depth_emit_accounting(pprog, func, ip);484484 return emit_patch(pprog, func, ip, 0xE8);485485}486486···1972197219731973 /* call */19741974 case BPF_JMP | BPF_CALL: {19751975- int offs;19751975+ u8 *ip = image + addrs[i - 1];1976197619771977 func = (u8 *) __bpf_call_base + imm32;19781978 if (tail_call_reachable) {19791979 RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);19801980- if (!imm32)19811981- return -EINVAL;19821982- offs = 7 + x86_call_depth_emit_accounting(&prog, func);19831983- } else {19841984- if (!imm32)19851985- return -EINVAL;19861986- offs = x86_call_depth_emit_accounting(&prog, func);19801980+ ip += 7;19871981 }19881988- if (emit_call(&prog, func, image + addrs[i - 1] + offs))19821982+ if (!imm32)19831983+ return -EINVAL;19841984+ ip += x86_call_depth_emit_accounting(&prog, func, ip);19851985+ if (emit_call(&prog, func, ip))19891986 return -EINVAL;19901987 break;19911988 }···28322835 * Direct-call fentry stub, as such it needs accounting for the28332836 * __fentry__ call.28342837 */28352835- x86_call_depth_emit_accounting(&prog, NULL);28382838+ x86_call_depth_emit_accounting(&prog, NULL, image);28362839 }28372840 EMIT1(0x55); /* push rbp */28382841 EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+18-8
arch/x86/virt/svm/sev.c
···7777{7878 u64 val;79798080- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))8080+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))8181 return 0;82828383 rdmsrl(MSR_AMD64_SYSCFG, val);···9898{9999 u64 val;100100101101- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))101101+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))102102 return 0;103103104104 rdmsrl(MSR_AMD64_SYSCFG, val);···174174 u64 rmptable_size;175175 u64 val;176176177177- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))177177+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))178178 return 0;179179180180 if (!amd_iommu_snp_en)181181- return 0;181181+ goto nosnp;182182183183 if (!probed_rmp_size)184184 goto nosnp;···225225 return 0;226226227227nosnp:228228- setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);228228+ cc_platform_clear(CC_ATTR_HOST_SEV_SNP);229229 return -ENOSYS;230230}231231···246246{247247 struct rmpentry *large_entry, *entry;248248249249- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))249249+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))250250 return ERR_PTR(-ENODEV);251251252252 entry = get_rmpentry(pfn);···363363 unsigned long paddr = pfn << PAGE_SHIFT;364364 int ret;365365366366- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))366366+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))367367 return -ENODEV;368368369369 if (!pfn_valid(pfn))···472472 unsigned long paddr = pfn << PAGE_SHIFT;473473 int ret, level;474474475475- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))475475+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))476476 return -ENODEV;477477478478 level = RMP_TO_PG_LEVEL(state->pagesize);···558558 spin_unlock(&snp_leaked_pages_list_lock);559559}560560EXPORT_SYMBOL_GPL(snp_leak_pages);561561+562562+void kdump_sev_callback(void)563563+{564564+ /*565565+ * Do wbinvd() on remote CPUs when SNP is enabled in order to566566+ * safely do SNP_SHUTDOWN on the local CPU.567567+ */568568+ if (cc_platform_has(CC_ATTR_HOST_SEV_SNP))569569+ wbinvd();570570+}
+66-18
block/bdev.c
···583583 mutex_unlock(&bdev->bd_holder_lock);584584 bd_clear_claiming(whole, holder);585585 mutex_unlock(&bdev_lock);586586-587587- if (hops && hops->get_holder)588588- hops->get_holder(holder);589586}590587591588/**···605608static void bd_end_claim(struct block_device *bdev, void *holder)606609{607610 struct block_device *whole = bdev_whole(bdev);608608- const struct blk_holder_ops *hops = bdev->bd_holder_ops;609611 bool unblock = false;610612611613 /*···626630 if (!whole->bd_holders)627631 whole->bd_holder = NULL;628632 mutex_unlock(&bdev_lock);629629-630630- if (hops && hops->put_holder)631631- hops->put_holder(holder);632633633634 /*634635 * If this was the last claim, remove holder link and unblock evpoll if···769776770777static bool bdev_writes_blocked(struct block_device *bdev)771778{772772- return bdev->bd_writers == -1;779779+ return bdev->bd_writers < 0;773780}774781775782static void bdev_block_writes(struct block_device *bdev)776783{777777- bdev->bd_writers = -1;784784+ bdev->bd_writers--;778785}779786780787static void bdev_unblock_writes(struct block_device *bdev)781788{782782- bdev->bd_writers = 0;789789+ bdev->bd_writers++;783790}784791785792static bool bdev_may_open(struct block_device *bdev, blk_mode_t mode)···806813 bdev->bd_writers++;807814}808815816816+static inline bool bdev_unclaimed(const struct file *bdev_file)817817+{818818+ return bdev_file->private_data == BDEV_I(bdev_file->f_mapping->host);819819+}820820+809821static void bdev_yield_write_access(struct file *bdev_file)810822{811823 struct block_device *bdev;···818820 if (bdev_allow_write_mounted)819821 return;820822823823+ if (bdev_unclaimed(bdev_file))824824+ return;825825+821826 bdev = file_bdev(bdev_file);822822- /* Yield exclusive or shared write access. 
*/823823- if (bdev_file->f_mode & FMODE_WRITE) {824824- if (bdev_writes_blocked(bdev))825825- bdev_unblock_writes(bdev);826826- else827827- bdev->bd_writers--;828828- }827827+828828+ if (bdev_file->f_mode & FMODE_WRITE_RESTRICTED)829829+ bdev_unblock_writes(bdev);830830+ else if (bdev_file->f_mode & FMODE_WRITE)831831+ bdev->bd_writers--;829832}830833831834/**···906907 bdev_file->f_mode |= FMODE_BUF_RASYNC | FMODE_CAN_ODIRECT;907908 if (bdev_nowait(bdev))908909 bdev_file->f_mode |= FMODE_NOWAIT;910910+ if (mode & BLK_OPEN_RESTRICT_WRITES)911911+ bdev_file->f_mode |= FMODE_WRITE_RESTRICTED;909912 bdev_file->f_mapping = bdev->bd_inode->i_mapping;910913 bdev_file->f_wb_err = filemap_sample_wb_err(bdev_file->f_mapping);911914 bdev_file->private_data = holder;···10131012}10141013EXPORT_SYMBOL(bdev_file_open_by_path);1015101410151015+static inline void bd_yield_claim(struct file *bdev_file)10161016+{10171017+ struct block_device *bdev = file_bdev(bdev_file);10181018+ void *holder = bdev_file->private_data;10191019+10201020+ lockdep_assert_held(&bdev->bd_disk->open_mutex);10211021+10221022+ if (WARN_ON_ONCE(IS_ERR_OR_NULL(holder)))10231023+ return;10241024+10251025+ if (!bdev_unclaimed(bdev_file))10261026+ bd_end_claim(bdev, holder);10271027+}10281028+10161029void bdev_release(struct file *bdev_file)10171030{10181031 struct block_device *bdev = file_bdev(bdev_file);···10511036 bdev_yield_write_access(bdev_file);1052103710531038 if (holder)10541054- bd_end_claim(bdev, holder);10391039+ bd_yield_claim(bdev_file);1055104010561041 /*10571042 * Trigger event checking and tell drivers to flush MEDIA_CHANGE···10701055put_no_open:10711056 blkdev_put_no_open(bdev);10721057}10581058+10591059+/**10601060+ * bdev_fput - yield claim to the block device and put the file10611061+ * @bdev_file: open block device10621062+ *10631063+ * Yield claim on the block device and put the file. 
Ensure that the10641064+ * block device can be reclaimed before the file is closed which is a10651065+ * deferred operation.10661066+ */10671067+void bdev_fput(struct file *bdev_file)10681068+{10691069+ if (WARN_ON_ONCE(bdev_file->f_op != &def_blk_fops))10701070+ return;10711071+10721072+ if (bdev_file->private_data) {10731073+ struct block_device *bdev = file_bdev(bdev_file);10741074+ struct gendisk *disk = bdev->bd_disk;10751075+10761076+ mutex_lock(&disk->open_mutex);10771077+ bdev_yield_write_access(bdev_file);10781078+ bd_yield_claim(bdev_file);10791079+ /*10801080+ * Tell release we already gave up our hold on the10811081+ * device and if write restrictions are available that10821082+ * we already gave up write access to the device.10831083+ */10841084+ bdev_file->private_data = BDEV_I(bdev_file->f_mapping->host);10851085+ mutex_unlock(&disk->open_mutex);10861086+ }10871087+10881088+ fput(bdev_file);10891089+}10901090+EXPORT_SYMBOL(bdev_fput);1073109110741092/**10751093 * lookup_bdev() - Look up a struct block_device by name.
+3-2
block/ioctl.c
···9696 unsigned long arg)9797{9898 uint64_t range[2];9999- uint64_t start, len;9999+ uint64_t start, len, end;100100 struct inode *inode = bdev->bd_inode;101101 int err;102102···117117 if (len & 511)118118 return -EINVAL;119119120120- if (start + len > bdev_nr_bytes(bdev))120120+ if (check_add_overflow(start, len, &end) ||121121+ end > bdev_nr_bytes(bdev))121122 return -EINVAL;122123123124 filemap_invalidate_lock(inode->i_mapping);
+10-12
drivers/acpi/thermal.c
···662662{663663 int result;664664665665- tz->thermal_zone = thermal_zone_device_register_with_trips("acpitz",666666- trip_table,667667- trip_count,668668- tz,669669- &acpi_thermal_zone_ops,670670- NULL,671671- passive_delay,672672- tz->polling_frequency * 100);665665+ if (trip_count)666666+ tz->thermal_zone = thermal_zone_device_register_with_trips(667667+ "acpitz", trip_table, trip_count, tz,668668+ &acpi_thermal_zone_ops, NULL, passive_delay,669669+ tz->polling_frequency * 100);670670+ else671671+ tz->thermal_zone = thermal_tripless_zone_device_register(672672+ "acpitz", tz, &acpi_thermal_zone_ops, NULL);673673+673674 if (IS_ERR(tz->thermal_zone))674675 return PTR_ERR(tz->thermal_zone);675676···902901 trip++;903902 }904903905905- if (trip == trip_table) {904904+ if (trip == trip_table)906905 pr_warn(FW_BUG "No valid trip points!\n");907907- result = -ENODEV;908908- goto free_memory;909909- }910906911907 result = acpi_thermal_register_thermal_zone(tz, trip_table,912908 trip - trip_table,
···200200 pclk = sg->sata0_pclk;201201 else202202 pclk = sg->sata1_pclk;203203- clk_enable(pclk);203203+ ret = clk_enable(pclk);204204+ if (ret)205205+ return ret;206206+204207 msleep(10);205208206209 /* Do not keep clocking a bridge that is not online */
···4444static void __fw_devlink_link_to_consumers(struct device *dev);4545static bool fw_devlink_drv_reg_done;4646static bool fw_devlink_best_effort;4747+static struct workqueue_struct *device_link_wq;47484849/**4950 * __fwnode_link_add - Create a link between two fwnode_handles.···534533 /*535534 * It may take a while to complete this work because of the SRCU536535 * synchronization in device_link_release_fn() and if the consumer or537537- * supplier devices get deleted when it runs, so put it into the "long"538538- * workqueue.536536+ * supplier devices get deleted when it runs, so put it into the537537+ * dedicated workqueue.539538 */540540- queue_work(system_long_wq, &link->rm_work);539539+ queue_work(device_link_wq, &link->rm_work);541540}541541+542542+/**543543+ * device_link_wait_removal - Wait for ongoing devlink removal jobs to terminate544544+ */545545+void device_link_wait_removal(void)546546+{547547+ /*548548+ * devlink removal jobs are queued in the dedicated work queue.549549+ * To be sure that all removal jobs are terminated, ensure that any550550+ * scheduled work has run to completion.551551+ */552552+ flush_workqueue(device_link_wq);553553+}554554+EXPORT_SYMBOL_GPL(device_link_wait_removal);542555543556static struct class devlink_class = {544557 .name = "devlink",···41794164 sysfs_dev_char_kobj = kobject_create_and_add("char", dev_kobj);41804165 if (!sysfs_dev_char_kobj)41814166 goto char_kobj_err;41674167+ device_link_wq = alloc_workqueue("device_link_wq", 0, 0);41684168+ if (!device_link_wq)41694169+ goto wq_err;4182417041834171 return 0;4184417241734173+ wq_err:41744174+ kobject_put(sysfs_dev_char_kobj);41854175 char_kobj_err:41864176 kobject_put(sysfs_dev_block_kobj);41874177 block_kobj_err:
+3-3
drivers/base/regmap/regcache-maple.c
···112112 unsigned long *entry, *lower, *upper;113113 unsigned long lower_index, lower_last;114114 unsigned long upper_index, upper_last;115115- int ret;115115+ int ret = 0;116116117117 lower = NULL;118118 upper = NULL;···145145 upper_index = max + 1;146146 upper_last = mas.last;147147148148- upper = kmemdup(&entry[max + 1],148148+ upper = kmemdup(&entry[max - mas.index + 1],149149 ((mas.last - max) *150150 sizeof(unsigned long)),151151 map->alloc_flags);···244244 unsigned long lmin = min;245245 unsigned long lmax = max;246246 unsigned int r, v, sync_start;247247- int ret;247247+ int ret = 0;248248 bool sync_needed = false;249249250250 map->cache_bypass = true;
+37
drivers/base/regmap/regmap.c
···28392839EXPORT_SYMBOL_GPL(regmap_read);2840284028412841/**28422842+ * regmap_read_bypassed() - Read a value from a single register direct28432843+ * from the device, bypassing the cache28442844+ *28452845+ * @map: Register map to read from28462846+ * @reg: Register to be read from28472847+ * @val: Pointer to store read value28482848+ *28492849+ * A value of zero will be returned on success, a negative errno will28502850+ * be returned in error cases.28512851+ */28522852+int regmap_read_bypassed(struct regmap *map, unsigned int reg, unsigned int *val)28532853+{28542854+ int ret;28552855+ bool bypass, cache_only;28562856+28572857+ if (!IS_ALIGNED(reg, map->reg_stride))28582858+ return -EINVAL;28592859+28602860+ map->lock(map->lock_arg);28612861+28622862+ bypass = map->cache_bypass;28632863+ cache_only = map->cache_only;28642864+ map->cache_bypass = true;28652865+ map->cache_only = false;28662866+28672867+ ret = _regmap_read(map, reg, val);28682868+28692869+ map->cache_bypass = bypass;28702870+ map->cache_only = cache_only;28712871+28722872+ map->unlock(map->lock_arg);28732873+28742874+ return ret;28752875+}28762876+EXPORT_SYMBOL_GPL(regmap_read_bypassed);28772877+28782878+/**28422879 * regmap_raw_read() - Read raw data from the device28432880 *28442881 * @map: Register map to read from
···77 *88 * Copyright (C) 2007 Texas Instruments, Inc.99 * Copyright (c) 2010, 2012, 2018 The Linux Foundation. All rights reserved.1010- * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.1110 *1211 * Acknowledgements:1312 * This file is based on hci_ll.c, which was...···225226 struct qca_power *bt_power;226227 u32 init_speed;227228 u32 oper_speed;229229+ bool bdaddr_property_broken;228230 const char *firmware_name;229231};230232···18431843 const char *firmware_name = qca_get_firmware_name(hu);18441844 int ret;18451845 struct qca_btsoc_version ver;18461846+ struct qca_serdev *qcadev;18461847 const char *soc_name;1847184818481849 ret = qca_check_speeds(hu);···19051904 case QCA_WCN6750:19061905 case QCA_WCN6855:19071906 case QCA_WCN7850:19071907+ set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);1908190819091909- /* Set BDA quirk bit for reading BDA value from fwnode property19101910- * only if that property exist in DT.19111911- */19121912- if (fwnode_property_present(dev_fwnode(hdev->dev.parent), "local-bd-address")) {19131913- set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);19141914- bt_dev_info(hdev, "setting quirk bit to read BDA from fwnode later");19151915- } else {19161916- bt_dev_dbg(hdev, "local-bd-address` is not present in the devicetree so not setting quirk bit for BDA");19171917- }19091909+ qcadev = serdev_device_get_drvdata(hu->serdev);19101910+ if (qcadev->bdaddr_property_broken)19111911+ set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks);1918191219191913 hci_set_aosp_capable(hdev);19201914···22902294 &qcadev->oper_speed);22912295 if (!qcadev->oper_speed)22922296 BT_DBG("UART will pick default operating speed");22972297+22982298+ qcadev->bdaddr_property_broken = device_property_read_bool(&serdev->dev,22992299+ "qcom,local-bd-address-broken");2293230022942301 if (data)22952302 qcadev->btsoc_type = data->soc_type;
+1-1
drivers/crypto/ccp/sev-dev.c
···10901090 void *arg = &data;10911091 int cmd, rc = 0;1092109210931093- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))10931093+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))10941094 return -ENODEV;1095109510961096 sev = psp->sev_data;
+5-1
drivers/firewire/ohci.c
···2060206020612061 ohci->generation = generation;20622062 reg_write(ohci, OHCI1394_IntEventClear, OHCI1394_busReset);20632063+ if (param_debug & OHCI_PARAM_DEBUG_BUSRESETS)20642064+ reg_write(ohci, OHCI1394_IntMaskSet, OHCI1394_busReset);2063206520642066 if (ohci->quirks & QUIRK_RESET_PACKET)20652067 ohci->request_generation = generation;···21272125 return IRQ_NONE;2128212621292127 /*21302130- * busReset and postedWriteErr must not be cleared yet21282128+ * busReset and postedWriteErr events must not be cleared yet21312129 * (OHCI 1.1 clauses 7.2.3.2 and 13.2.8.1)21322130 */21332131 reg_write(ohci, OHCI1394_IntEventClear,21342132 event & ~(OHCI1394_busReset | OHCI1394_postedWriteErr));21352133 log_irqs(ohci, event);21342134+ if (event & OHCI1394_busReset)21352135+ reg_write(ohci, OHCI1394_IntMaskClear, OHCI1394_busReset);2136213621372137 if (event & OHCI1394_selfIDComplete)21382138 queue_work(selfid_workqueue, &ohci->bus_reset_work);
+32-16
drivers/gpio/gpiolib-cdev.c
···728728 GPIO_V2_LINE_EVENT_FALLING_EDGE;729729}730730731731+static inline char *make_irq_label(const char *orig)732732+{733733+ char *new;734734+735735+ if (!orig)736736+ return NULL;737737+738738+ new = kstrdup_and_replace(orig, '/', ':', GFP_KERNEL);739739+ if (!new)740740+ return ERR_PTR(-ENOMEM);741741+742742+ return new;743743+}744744+745745+static inline void free_irq_label(const char *label)746746+{747747+ kfree(label);748748+}749749+731750#ifdef CONFIG_HTE732751733752static enum hte_return process_hw_ts_thread(void *p)···10341015{10351016 unsigned long irqflags;10361017 int ret, level, irq;10181018+ char *label;1037101910381020 /* try hardware */10391021 ret = gpiod_set_debounce(line->desc, debounce_period_us);···10571037 if (irq < 0)10581038 return -ENXIO;1059103910401040+ label = make_irq_label(line->req->label);10411041+ if (IS_ERR(label))10421042+ return -ENOMEM;10431043+10601044 irqflags = IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING;10611045 ret = request_irq(irq, debounce_irq_handler, irqflags,10621062- line->req->label, line);10631063- if (ret)10461046+ label, line);10471047+ if (ret) {10481048+ free_irq_label(label);10641049 return ret;10501050+ }10651051 line->irq = irq;10661052 } else {10671053 ret = hte_edge_setup(line, GPIO_V2_LINE_FLAG_EDGE_BOTH);···11071081 return lc->attrs[i].attr.debounce_period_us;11081082 }11091083 return 0;11101110-}11111111-11121112-static inline char *make_irq_label(const char *orig)11131113-{11141114- return kstrdup_and_replace(orig, '/', ':', GFP_KERNEL);11151115-}11161116-11171117-static inline void free_irq_label(const char *label)11181118-{11191119- kfree(label);11201084}1121108511221086static void edge_detector_stop(struct line *line)···11741158 irqflags |= IRQF_ONESHOT;1175115911761160 label = make_irq_label(line->req->label);11771177- if (!label)11781178- return -ENOMEM;11611161+ if (IS_ERR(label))11621162+ return PTR_ERR(label);1179116311801164 /* Request a thread to read the events */11811165 ret = 
request_threaded_irq(irq, edge_irq_handler, edge_irq_thread,···22332217 goto out_free_le;2234221822352219 label = make_irq_label(le->label);22362236- if (!label) {22372237- ret = -ENOMEM;22202220+ if (IS_ERR(label)) {22212221+ ret = PTR_ERR(label);22382222 goto out_free_le;22392223 }22402224
···5252 * @adapter: I2C adapter for the DDC bus5353 * @offset: register offset5454 * @buffer: buffer for return data5555- * @size: sizo of the buffer5555+ * @size: size of the buffer5656 *5757 * Reads @size bytes from the DP dual mode adaptor registers5858 * starting at @offset.···116116 * @adapter: I2C adapter for the DDC bus117117 * @offset: register offset118118 * @buffer: buffer for write data119119- * @size: sizo of the buffer119119+ * @size: size of the buffer120120 *121121 * Writes @size bytes to the DP dual mode adaptor registers122122 * starting at @offset.
+6-1
drivers/gpu/drm/drm_prime.c
···582582{583583 struct drm_gem_object *obj = dma_buf->priv;584584585585- if (!obj->funcs->get_sg_table)585585+ /*586586+ * drm_gem_map_dma_buf() requires obj->get_sg_table(), but drivers587587+ * that implement their own ->map_dma_buf() do not.588588+ */589589+ if (dma_buf->ops->map_dma_buf == drm_gem_map_dma_buf &&590590+ !obj->funcs->get_sg_table)586591 return -ENOSYS;587592588593 return drm_gem_pin(obj);
···499499 /* The values must be in increasing order */500500 static const int mtl_rates[] = {501501 162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000,502502- 810000, 1000000, 1350000, 2000000,502502+ 810000, 1000000, 2000000,503503 };504504 static const int icl_rates[] = {505505 162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000,···14221422 if (DISPLAY_VER(dev_priv) >= 12)14231423 return true;1424142414251425- if (DISPLAY_VER(dev_priv) == 11 && encoder->port != PORT_A)14251425+ if (DISPLAY_VER(dev_priv) == 11 && encoder->port != PORT_A &&14261426+ !intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST))14261427 return true;1427142814281429 return false;···19181917 dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);1919191819201919 for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {19211921- if (valid_dsc_bpp[i] < dsc_min_bpp ||19221922- valid_dsc_bpp[i] > dsc_max_bpp)19201920+ if (valid_dsc_bpp[i] < dsc_min_bpp)19211921+ continue;19221922+ if (valid_dsc_bpp[i] > dsc_max_bpp)19231923 break;1924192419251925 ret = dsc_compute_link_config(intel_dp,···65596557 intel_connector->get_hw_state = intel_ddi_connector_get_hw_state;65606558 else65616559 intel_connector->get_hw_state = intel_connector_get_hw_state;65606560+ intel_connector->sync_state = intel_dp_connector_sync_state;6562656165636562 if (!intel_edp_init_connector(intel_dp, intel_connector)) {65646563 intel_dp_aux_fini(intel_dp);
+1-1
drivers/gpu/drm/i915/display/intel_dp_mst.c
···13551355 return 0;13561356 }1357135713581358- if (DISPLAY_VER(dev_priv) >= 10 &&13581358+ if (HAS_DSC_MST(dev_priv) &&13591359 drm_dp_sink_supports_dsc(intel_connector->dp.dsc_dpcd)) {13601360 /*13611361 * TBD pass the connector BPC,
+56-22
drivers/gpu/drm/i915/display/intel_psr.c
···1994199419951995void intel_psr2_program_trans_man_trk_ctl(const struct intel_crtc_state *crtc_state)19961996{19971997+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);19971998 struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);19981999 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;19992000 struct intel_encoder *encoder;···2014201320152014 intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(cpu_transcoder),20162015 crtc_state->psr2_man_track_ctl);20162016+20172017+ if (!crtc_state->enable_psr2_su_region_et)20182018+ return;20192019+20202020+ intel_de_write(dev_priv, PIPE_SRCSZ_ERLY_TPT(crtc->pipe),20212021+ crtc_state->pipe_srcsz_early_tpt);20172022}2018202320192024static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,···20562049 }20572050exit:20582051 crtc_state->psr2_man_track_ctl = val;20522052+}20532053+20542054+static u32 psr2_pipe_srcsz_early_tpt_calc(struct intel_crtc_state *crtc_state,20552055+ bool full_update)20562056+{20572057+ int width, height;20582058+20592059+ if (!crtc_state->enable_psr2_su_region_et || full_update)20602060+ return 0;20612061+20622062+ width = drm_rect_width(&crtc_state->psr2_su_area);20632063+ height = drm_rect_height(&crtc_state->psr2_su_area);20642064+20652065+ return PIPESRC_WIDTH(width - 1) | PIPESRC_HEIGHT(height - 1);20592066}2060206720612068static void clip_area_update(struct drm_rect *overlap_damage_area,···21162095 * cursor fully when cursor is in SU area.21172096 */21182097static void21192119-intel_psr2_sel_fetch_et_alignment(struct intel_crtc_state *crtc_state,21202120- struct intel_plane_state *cursor_state)20982098+intel_psr2_sel_fetch_et_alignment(struct intel_atomic_state *state,20992099+ struct intel_crtc *crtc)21212100{21222122- struct drm_rect inter;21012101+ struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc);21022102+ struct intel_plane_state *new_plane_state;21032103+ struct intel_plane *plane;21042104+ int 
i;2123210521242124- if (!crtc_state->enable_psr2_su_region_et ||21252125- !cursor_state->uapi.visible)21062106+ if (!crtc_state->enable_psr2_su_region_et)21262107 return;2127210821282128- inter = crtc_state->psr2_su_area;21292129- if (!drm_rect_intersect(&inter, &cursor_state->uapi.dst))21302130- return;21092109+ for_each_new_intel_plane_in_state(state, plane, new_plane_state, i) {21102110+ struct drm_rect inter;2131211121322132- clip_area_update(&crtc_state->psr2_su_area, &cursor_state->uapi.dst,21332133- &crtc_state->pipe_src);21122112+ if (new_plane_state->uapi.crtc != crtc_state->uapi.crtc)21132113+ continue;21142114+21152115+ if (plane->id != PLANE_CURSOR)21162116+ continue;21172117+21182118+ if (!new_plane_state->uapi.visible)21192119+ continue;21202120+21212121+ inter = crtc_state->psr2_su_area;21222122+ if (!drm_rect_intersect(&inter, &new_plane_state->uapi.dst))21232123+ continue;21242124+21252125+ clip_area_update(&crtc_state->psr2_su_area, &new_plane_state->uapi.dst,21262126+ &crtc_state->pipe_src);21272127+ }21342128}2135212921362130/*···21882152{21892153 struct drm_i915_private *dev_priv = to_i915(state->base.dev);21902154 struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc);21912191- struct intel_plane_state *new_plane_state, *old_plane_state,21922192- *cursor_plane_state = NULL;21552155+ struct intel_plane_state *new_plane_state, *old_plane_state;21932156 struct intel_plane *plane;21942157 bool full_update = false;21952158 int i, ret;···22732238 damaged_area.x2 += new_plane_state->uapi.dst.x1 - src.x1;2274223922752240 clip_area_update(&crtc_state->psr2_su_area, &damaged_area, &crtc_state->pipe_src);22762276-22772277- /*22782278- * Cursor plane new state is stored to adjust su area to cover22792279- * cursor are fully.22802280- */22812281- if (plane->id == PLANE_CURSOR)22822282- cursor_plane_state = new_plane_state;22832241 }2284224222852243 /*···23012273 if (ret)23022274 return ret;2303227523042304- /* Adjust su area to 
cover cursor fully as necessary */23052305- if (cursor_plane_state)23062306- intel_psr2_sel_fetch_et_alignment(crtc_state, cursor_plane_state);22762276+ /*22772277+ * Adjust su area to cover cursor fully as necessary (early22782278+ * transport). This needs to be done after22792279+ * drm_atomic_add_affected_planes to ensure visible cursor is added into22802280+ * affected planes even when cursor is not updated by itself.22812281+ */22822282+ intel_psr2_sel_fetch_et_alignment(state, crtc);2307228323082284 intel_psr2_sel_fetch_pipe_alignment(crtc_state);23092285···2370233823712339skip_sel_fetch_set_loop:23722340 psr2_man_trk_ctl_calc(crtc_state, full_update);23412341+ crtc_state->pipe_srcsz_early_tpt =23422342+ psr2_pipe_srcsz_early_tpt_calc(crtc_state, full_update);23732343 return 0;23742344}23752345
+3
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
···961961 struct i915_vma *vma;962962 int ret;963963964964+ if (!intel_gt_needs_wa_16018031267(vm->gt))965965+ return 0;966966+964967 /* The memory will be used only by GPU. */965968 obj = i915_gem_object_create_lmem(i915, PAGE_SIZE,966969 I915_BO_ALLOC_VOLATILE |
+17
drivers/gpu/drm/i915/gt/intel_engine_cs.c
···908908 info->engine_mask &= ~BIT(GSC0);909909 }910910911911+ /*912912+ * Do not create the command streamer for CCS slices beyond the first.913913+ * All the workload submitted to the first engine will be shared among914914+ * all the slices.915915+ *916916+ * Once the user will be allowed to customize the CCS mode, then this917917+ * check needs to be removed.918918+ */919919+ if (IS_DG2(gt->i915)) {920920+ u8 first_ccs = __ffs(CCS_MASK(gt));921921+922922+ /* Mask off all the CCS engine */923923+ info->engine_mask &= ~GENMASK(CCS3, CCS0);924924+ /* Put back in the first CCS engine */925925+ info->engine_mask |= BIT(_CCS(first_ccs));926926+ }927927+911928 return info->engine_mask;912929}913930
···1010#include "intel_engine_regs.h"1111#include "intel_gpu_commands.h"1212#include "intel_gt.h"1313+#include "intel_gt_ccs_mode.h"1314#include "intel_gt_mcr.h"1415#include "intel_gt_print.h"1516#include "intel_gt_regs.h"···5251 * registers belonging to BCS, VCS or VECS should be implemented in5352 * xcs_engine_wa_init(). Workarounds for registers not belonging to a specific5453 * engine's MMIO range but that are part of of the common RCS/CCS reset domain5555- * should be implemented in general_render_compute_wa_init().5454+ * should be implemented in general_render_compute_wa_init(). The settings5555+ * about the CCS load balancing should be added in ccs_engine_wa_mode().5656 *5757 * - GT workarounds: the list of these WAs is applied whenever these registers5858 * revert to their default values: on GPU reset, suspend/resume [1]_, etc.···28562854 wa_write_clr(wal, GEN8_GARBCNTL, GEN12_BUS_HASH_CTL_BIT_EXC);28572855}2858285628572857+static void ccs_engine_wa_mode(struct intel_engine_cs *engine, struct i915_wa_list *wal)28582858+{28592859+ struct intel_gt *gt = engine->gt;28602860+28612861+ if (!IS_DG2(gt->i915))28622862+ return;28632863+28642864+ /*28652865+ * Wa_14019159160: This workaround, along with others, leads to28662866+ * significant challenges in utilizing load balancing among the28672867+ * CCS slices. Consequently, an architectural decision has been28682868+ * made to completely disable automatic CCS load balancing.28692869+ */28702870+ wa_masked_en(wal, GEN12_RCU_MODE, XEHP_RCU_MODE_FIXED_SLICE_CCS_MODE);28712871+28722872+ /*28732873+ * After having disabled automatic load balancing we need to28742874+ * assign all slices to a single CCS. 
We will call it CCS mode 128752875+ */28762876+ intel_gt_apply_ccs_mode(gt);28772877+}28782878+28592879/*28602880 * The workarounds in this function apply to shared registers in28612881 * the general render reset domain that aren't tied to a···30283004 * to a single RCS/CCS engine's workaround list since30293005 * they're reset as part of the general render domain reset.30303006 */30313031- if (engine->flags & I915_ENGINE_FIRST_RENDER_COMPUTE)30073007+ if (engine->flags & I915_ENGINE_FIRST_RENDER_COMPUTE) {30323008 general_render_compute_wa_init(engine, wal);30093009+ ccs_engine_wa_mode(engine, wal);30103010+ }3033301130343012 if (engine->class == COMPUTE_CLASS)30353013 ccs_engine_wa_init(engine, wal);
+3-3
drivers/gpu/drm/nouveau/nouveau_uvmm.c
···812812 struct drm_gpuva_op_unmap *u = r->unmap;813813 struct nouveau_uvma *uvma = uvma_from_va(u->va);814814 u64 addr = uvma->va.va.addr;815815- u64 range = uvma->va.va.range;815815+ u64 end = uvma->va.va.addr + uvma->va.va.range;816816817817 if (r->prev)818818 addr = r->prev->va.addr + r->prev->va.range;819819820820 if (r->next)821821- range = r->next->va.addr - addr;821821+ end = r->next->va.addr;822822823823- op_unmap_range(u, addr, range);823823+ op_unmap_range(u, addr, end - addr);824824}825825826826static int
···363363 /** @ufence_wq: user fence wait queue */364364 wait_queue_head_t ufence_wq;365365366366+ /** @preempt_fence_wq: used to serialize preempt fences */367367+ struct workqueue_struct *preempt_fence_wq;368368+366369 /** @ordered_wq: used to serialize compute mode resume */367370 struct workqueue_struct *ordered_wq;368371
+7-72
drivers/gpu/drm/xe/xe_exec.c
···9494 * Unlock all9595 */96969797+/*9898+ * Add validation and rebinding to the drm_exec locking loop, since both can9999+ * trigger eviction which may require sleeping dma_resv locks.100100+ */97101static int xe_exec_fn(struct drm_gpuvm_exec *vm_exec)98102{99103 struct xe_vm *vm = container_of(vm_exec->vm, struct xe_vm, gpuvm);100100- struct drm_gem_object *obj;101101- unsigned long index;102102- int num_fences;103103- int ret;104104105105- ret = drm_gpuvm_validate(vm_exec->vm, &vm_exec->exec);106106- if (ret)107107- return ret;108108-109109- /*110110- * 1 fence slot for the final submit, and 1 more for every per-tile for111111- * GPU bind and 1 extra for CPU bind. Note that there are potentially112112- * many vma per object/dma-resv, however the fence slot will just be113113- * re-used, since they are largely the same timeline and the seqno114114- * should be in order. In the case of CPU bind there is dummy fence used115115- * for all CPU binds, so no need to have a per-tile slot for that.116116- */117117- num_fences = 1 + 1 + vm->xe->info.tile_count;118118-119119- /*120120- * We don't know upfront exactly how many fence slots we will need at121121- * the start of the exec, since the TTM bo_validate above can consume122122- * numerous fence slots. Also due to how the dma_resv_reserve_fences()123123- * works it only ensures that at least that many fence slots are124124- * available i.e if there are already 10 slots available and we reserve125125- * two more, it can just noop without reserving anything. With this it126126- * is quite possible that TTM steals some of the fence slots and then127127- * when it comes time to do the vma binding and final exec stage we are128128- * lacking enough fence slots, leading to some nasty BUG_ON() when129129- * adding the fences. 
Hence just add our own fences here, after the130130- * validate stage.131131- */132132- drm_exec_for_each_locked_object(&vm_exec->exec, index, obj) {133133- ret = dma_resv_reserve_fences(obj->resv, num_fences);134134- if (ret)135135- return ret;136136- }137137-138138- return 0;105105+ /* The fence slot added here is intended for the exec sched job. */106106+ return xe_vm_validate_rebind(vm, &vm_exec->exec, 1);139107}140108141109int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)···120152 struct drm_exec *exec = &vm_exec.exec;121153 u32 i, num_syncs = 0, num_ufence = 0;122154 struct xe_sched_job *job;123123- struct dma_fence *rebind_fence;124155 struct xe_vm *vm;125156 bool write_locked, skip_retry = false;126157 ktime_t end = 0;···257290 goto err_exec;258291 }259292260260- /*261261- * Rebind any invalidated userptr or evicted BOs in the VM, non-compute262262- * VM mode only.263263- */264264- rebind_fence = xe_vm_rebind(vm, false);265265- if (IS_ERR(rebind_fence)) {266266- err = PTR_ERR(rebind_fence);267267- goto err_put_job;268268- }269269-270270- /*271271- * We store the rebind_fence in the VM so subsequent execs don't get272272- * scheduled before the rebinds of userptrs / evicted BOs is complete.273273- */274274- if (rebind_fence) {275275- dma_fence_put(vm->rebind_fence);276276- vm->rebind_fence = rebind_fence;277277- }278278- if (vm->rebind_fence) {279279- if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,280280- &vm->rebind_fence->flags)) {281281- dma_fence_put(vm->rebind_fence);282282- vm->rebind_fence = NULL;283283- } else {284284- dma_fence_get(vm->rebind_fence);285285- err = drm_sched_job_add_dependency(&job->drm,286286- vm->rebind_fence);287287- if (err)288288- goto err_put_job;289289- }290290- }291291-292292- /* Wait behind munmap style rebinds */293293+ /* Wait behind rebinds */293294 if (!xe_vm_in_lr_mode(vm)) {294295 err = drm_sched_job_add_resv_dependencies(&job->drm,295296 xe_vm_resv(vm),
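The removed comment above describes a subtle property of fence slot reservation: reserving N slots only guarantees that *at least* N are available, so two reservations do not add up, and slots reserved before validation can be silently consumed by it. A toy userspace model of that semantics, with a hypothetical `toy_resv` struct standing in for the real dma_resv object:

```c
#include <assert.h>

/*
 * Toy model of the reservation semantics described in the removed comment:
 * reserving N fence slots only ensures that at least N are available, so
 * two reserve calls do not add up. Hypothetical struct and helpers, not
 * the real dma_resv API.
 */
struct toy_resv {
	unsigned int free_slots;	/* currently available fence slots */
};

/* Ensure at least @num slots are free; may be a no-op. */
static void toy_reserve_fences(struct toy_resv *resv, unsigned int num)
{
	if (resv->free_slots < num)
		resv->free_slots = num;
}

/* Consuming a slot, e.g. TTM adding a move fence during validation. */
static void toy_add_fence(struct toy_resv *resv)
{
	resv->free_slots--;
}
```

This is why the reservation must happen after the validate stage: a reservation made before it may be eaten by the fences validation adds.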
+5
drivers/gpu/drm/xe/xe_exec_queue_types.h
···148148 const struct xe_ring_ops *ring_ops;149149 /** @entity: DRM sched entity for this exec queue (1 to 1 relationship) */150150 struct drm_sched_entity *entity;151151+ /**152152+ * @tlb_flush_seqno: The seqno of the last rebind tlb flush performed153153+ * Protected by @vm's resv. Unused if @vm == NULL.154154+ */155155+ u64 tlb_flush_seqno;151156 /** @lrc: logical ring context for this exec queue */152157 struct xe_lrc lrc[];153158};
+1-2
drivers/gpu/drm/xe/xe_gt_pagefault.c
···100100{101101 struct xe_bo *bo = xe_vma_bo(vma);102102 struct xe_vm *vm = xe_vma_vm(vma);103103- unsigned int num_shared = 2; /* slots for bind + move */104103 int err;105104106106- err = xe_vm_prepare_vma(exec, vma, num_shared);105105+ err = xe_vm_lock_vma(exec, vma);107106 if (err)108107 return err;109108
···3939 } user_fence;4040 /** @migrate_flush_flags: Additional flush flags for migration jobs */4141 u32 migrate_flush_flags;4242+ /** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */4343+ bool ring_ops_flush_tlb;4244 /** @batch_addr: batch buffer address of job */4345 u64 batch_addr[];4446};
+67-43
drivers/gpu/drm/xe/xe_vm.c
···482482 return 0;483483}484484485485+/**486486+ * xe_vm_validate_rebind() - Validate buffer objects and rebind vmas487487+ * @vm: The vm for which we are rebinding.488488+ * @exec: The struct drm_exec with the locked GEM objects.489489+ * @num_fences: The number of fences to reserve for the operation, not490490+ * including rebinds and validations.491491+ *492492+ * Validates all evicted gem objects and rebinds their vmas. Note that493493+ * rebindings may cause evictions and hence the validation-rebind494494+ * sequence is rerun until there are no more objects to validate.495495+ *496496+ * Return: 0 on success, negative error code on error. In particular,497497+ * may return -EINTR or -ERESTARTSYS if interrupted, and -EDEADLK if498498+ * the drm_exec transaction needs to be restarted.499499+ */500500+int xe_vm_validate_rebind(struct xe_vm *vm, struct drm_exec *exec,501501+ unsigned int num_fences)502502+{503503+ struct drm_gem_object *obj;504504+ unsigned long index;505505+ int ret;506506+507507+ do {508508+ ret = drm_gpuvm_validate(&vm->gpuvm, exec);509509+ if (ret)510510+ return ret;511511+512512+ ret = xe_vm_rebind(vm, false);513513+ if (ret)514514+ return ret;515515+ } while (!list_empty(&vm->gpuvm.evict.list));516516+517517+ drm_exec_for_each_locked_object(exec, index, obj) {518518+ ret = dma_resv_reserve_fences(obj->resv, num_fences);519519+ if (ret)520520+ return ret;521521+ }522522+523523+ return 0;524524+}525525+485526static int xe_preempt_work_begin(struct drm_exec *exec, struct xe_vm *vm,486527 bool *done)487528{488529 int err;489530490490- /*491491- * 1 fence for each preempt fence plus a fence for each tile from a492492- * possible rebind493493- */494494- err = drm_gpuvm_prepare_vm(&vm->gpuvm, exec, vm->preempt.num_exec_queues +495495- vm->xe->info.tile_count);531531+ err = drm_gpuvm_prepare_vm(&vm->gpuvm, exec, 0);496532 if (err)497533 return err;498534···543507 return 0;544508 }545509546546- err = drm_gpuvm_prepare_objects(&vm->gpuvm, exec, 
vm->preempt.num_exec_queues);510510+ err = drm_gpuvm_prepare_objects(&vm->gpuvm, exec, 0);547511 if (err)548512 return err;549513···551515 if (err)552516 return err;553517554554- return drm_gpuvm_validate(&vm->gpuvm, exec);518518+ /*519519+ * Add validation and rebinding to the locking loop since both can520520+ * cause evictions which may require blocing dma_resv locks.521521+ * The fence reservation here is intended for the new preempt fences522522+ * we attach at the end of the rebind work.523523+ */524524+ return xe_vm_validate_rebind(vm, exec, vm->preempt.num_exec_queues);555525}556526557527static void preempt_rebind_work_func(struct work_struct *w)558528{559529 struct xe_vm *vm = container_of(w, struct xe_vm, preempt.rebind_work);560530 struct drm_exec exec;561561- struct dma_fence *rebind_fence;562531 unsigned int fence_count = 0;563532 LIST_HEAD(preempt_fences);564533 ktime_t end = 0;···609568 if (err)610569 goto out_unlock;611570612612- rebind_fence = xe_vm_rebind(vm, true);613613- if (IS_ERR(rebind_fence)) {614614- err = PTR_ERR(rebind_fence);571571+ err = xe_vm_rebind(vm, true);572572+ if (err)615573 goto out_unlock;616616- }617574618618- if (rebind_fence) {619619- dma_fence_wait(rebind_fence, false);620620- dma_fence_put(rebind_fence);621621- }622622-623623- /* Wait on munmap style VM unbinds */575575+ /* Wait on rebinds and munmap style VM unbinds */624576 wait = dma_resv_wait_timeout(xe_vm_resv(vm),625577 DMA_RESV_USAGE_KERNEL,626578 false, MAX_SCHEDULE_TIMEOUT);···807773 struct xe_sync_entry *syncs, u32 num_syncs,808774 bool first_op, bool last_op);809775810810-struct dma_fence *xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)776776+int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)811777{812812- struct dma_fence *fence = NULL;778778+ struct dma_fence *fence;813779 struct xe_vma *vma, *next;814780815781 lockdep_assert_held(&vm->lock);816782 if (xe_vm_in_lr_mode(vm) && !rebind_worker)817817- return NULL;783783+ return 0;818784819785 
xe_vm_assert_held(vm);820786 list_for_each_entry_safe(vma, next, &vm->rebind_list,···822788 xe_assert(vm->xe, vma->tile_present);823789824790 list_del_init(&vma->combined_links.rebind);825825- dma_fence_put(fence);826791 if (rebind_worker)827792 trace_xe_vma_rebind_worker(vma);828793 else829794 trace_xe_vma_rebind_exec(vma);830795 fence = xe_vm_bind_vma(vma, NULL, NULL, 0, false, false);831796 if (IS_ERR(fence))832832- return fence;797797+ return PTR_ERR(fence);798798+ dma_fence_put(fence);833799 }834800835835- return fence;801801+ return 0;836802}837803838804static void xe_vma_free(struct xe_vma *vma)···10381004}1039100510401006/**10411041- * xe_vm_prepare_vma() - drm_exec utility to lock a vma10071007+ * xe_vm_lock_vma() - drm_exec utility to lock a vma10421008 * @exec: The drm_exec object we're currently locking for.10431009 * @vma: The vma for witch we want to lock the vm resv and any attached10441010 * object's resv.10451045- * @num_shared: The number of dma-fence slots to pre-allocate in the10461046- * objects' reservation objects.10471011 *10481012 * Return: 0 on success, negative error code on error. 
In particular10491013 * may return -EDEADLK on WW transaction contention and -EINTR if10501014 * an interruptible wait is terminated by a signal.10511015 */10521052-int xe_vm_prepare_vma(struct drm_exec *exec, struct xe_vma *vma,10531053- unsigned int num_shared)10161016+int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma)10541017{10551018 struct xe_vm *vm = xe_vma_vm(vma);10561019 struct xe_bo *bo = xe_vma_bo(vma);10571020 int err;1058102110591022 XE_WARN_ON(!vm);10601060- if (num_shared)10611061- err = drm_exec_prepare_obj(exec, xe_vm_obj(vm), num_shared);10621062- else10631063- err = drm_exec_lock_obj(exec, xe_vm_obj(vm));10641064- if (!err && bo && !bo->vm) {10651065- if (num_shared)10661066- err = drm_exec_prepare_obj(exec, &bo->ttm.base, num_shared);10671067- else10681068- err = drm_exec_lock_obj(exec, &bo->ttm.base);10691069- }10231023+10241024+ err = drm_exec_lock_obj(exec, xe_vm_obj(vm));10251025+ if (!err && bo && !bo->vm)10261026+ err = drm_exec_lock_obj(exec, &bo->ttm.base);1070102710711028 return err;10721029}···1069104410701045 drm_exec_init(&exec, 0, 0);10711046 drm_exec_until_all_locked(&exec) {10721072- err = xe_vm_prepare_vma(&exec, vma, 0);10471047+ err = xe_vm_lock_vma(&exec, vma);10731048 drm_exec_retry_on_contention(&exec);10741049 if (XE_WARN_ON(err))10751050 break;···16141589 XE_WARN_ON(vm->pt_root[id]);1615159016161591 trace_xe_vm_free(vm);16171617- dma_fence_put(vm->rebind_fence);16181592 kfree(vm);16191593}16201594···2536251225372513 lockdep_assert_held_write(&vm->lock);2538251425392539- err = xe_vm_prepare_vma(exec, vma, 1);25152515+ err = xe_vm_lock_vma(exec, vma);25402516 if (err)25412517 return err;25422518
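The kernel-doc for xe_vm_validate_rebind() above notes that rebinding may itself cause evictions, so the validate/rebind sequence is rerun until the evict list drains. A minimal sketch of that fixed-point loop, with toy stand-ins for the drm_gpuvm operations (all names below are hypothetical):

```c
#include <assert.h>

/*
 * Shape of the validate/rebind fixed point in xe_vm_validate_rebind():
 * rebinding can evict objects again, so the sequence repeats until there
 * is nothing left to validate.
 */
struct toy_vm {
	int evicted;		/* objects waiting for validation */
	int rebind_evicts;	/* evictions the next rebind will cause */
};

static int toy_validate(struct toy_vm *vm)
{
	vm->evicted = 0;	/* all evicted objects revalidated */
	return 0;
}

static int toy_rebind(struct toy_vm *vm)
{
	/* rebinding may evict once more in this toy model */
	vm->evicted = vm->rebind_evicts;
	vm->rebind_evicts = 0;
	return 0;
}

static int toy_validate_rebind(struct toy_vm *vm, int *iterations)
{
	int ret;

	*iterations = 0;
	do {
		ret = toy_validate(vm);
		if (ret)
			return ret;
		ret = toy_rebind(vm);
		if (ret)
			return ret;
		(*iterations)++;
	} while (vm->evicted);

	return 0;
}
```

In the real function the loop condition is `!list_empty(&vm->gpuvm.evict.list)`, and fence slots are only reserved once the loop has converged.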
···177177 */178178 struct list_head rebind_list;179179180180- /** @rebind_fence: rebind fence from execbuf */181181- struct dma_fence *rebind_fence;182182-183180 /**184181 * @destroy_work: worker to destroy VM, needed as a dma_fence signaling185182 * from an irq context can be last put and the destroy needs to be able···261264 bool capture_once;262265 } error_capture;263266267267+ /**268268+ * @tlb_flush_seqno: Required TLB flush seqno for the next exec.269269+ * protected by the vm resv.270270+ */271271+ u64 tlb_flush_seqno;264272 /** @batch_invalidate_tlb: Always invalidate TLB before batch start */265273 bool batch_invalidate_tlb;266274 /** @xef: XE file handle for tracking this VM's drm client */
···32283228static void iommu_snp_enable(void)32293229{32303230#ifdef CONFIG_KVM_AMD_SEV32313231- if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))32313231+ if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))32323232 return;32333233 /*32343234 * The SNP support requires that IOMMU must be enabled, and is···32363236 */32373237 if (no_iommu || iommu_default_passthrough()) {32383238 pr_err("SNP: IOMMU disabled or configured in passthrough mode, SNP cannot be supported.\n");32393239+ cc_platform_clear(CC_ATTR_HOST_SEV_SNP);32393240 return;32403241 }3241324232423243 amd_iommu_snp_en = check_feature(FEATURE_SNP);32433244 if (!amd_iommu_snp_en) {32443245 pr_err("SNP: IOMMU SNP feature not enabled, SNP cannot be supported.\n");32463246+ cc_platform_clear(CC_ATTR_HOST_SEV_SNP);32453247 return;32463248 }32473249
···9494 return tmp & 0xffff;9595}96969797-int sja1110_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int reg, int mmd,9797+int sja1110_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int mmd, int reg,9898 u16 val)9999{100100 struct sja1105_mdio_private *mdio_priv = bus->priv;
+12-4
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···32803280}3281328132823282/* Returns a reusable dma control register value */32833283-static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)32833283+static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv, bool flush_rx)32843284{32853285 unsigned int i;32863286 u32 reg;···33043304 bcmgenet_umac_writel(priv, 1, UMAC_TX_FLUSH);33053305 udelay(10);33063306 bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);33073307+33083308+ if (flush_rx) {33093309+ reg = bcmgenet_rbuf_ctrl_get(priv);33103310+ bcmgenet_rbuf_ctrl_set(priv, reg | BIT(0));33113311+ udelay(10);33123312+ bcmgenet_rbuf_ctrl_set(priv, reg);33133313+ udelay(10);33143314+ }3307331533083316 return dma_ctrl;33093317}···3376336833773369 bcmgenet_set_hw_addr(priv, dev->dev_addr);3378337033793379- /* Disable RX/TX DMA and flush TX queues */33803380- dma_ctrl = bcmgenet_dma_disable(priv);33713371+ /* Disable RX/TX DMA and flush TX and RX queues */33723372+ dma_ctrl = bcmgenet_dma_disable(priv, true);3381337333823374 /* Reinitialize TDMA and RDMA and SW housekeeping */33833375 ret = bcmgenet_init_dma(priv);···42434235 bcmgenet_hfb_create_rxnfc_filter(priv, rule);4244423642454237 /* Disable RX/TX DMA and flush TX queues */42464246- dma_ctrl = bcmgenet_dma_disable(priv);42384238+ dma_ctrl = bcmgenet_dma_disable(priv, false);4247423942484240 /* Reinitialize TDMA and RDMA and SW housekeeping */42494241 ret = bcmgenet_init_dma(priv);
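The new `flush_rx` path above saves the RBUF control register, asserts the flush bit, delays, and restores the saved value. A userspace sketch of that set/delay/restore pattern over a fake register (the accessor names below are stand-ins; the driver uses `bcmgenet_rbuf_ctrl_get()`/`_set()` with `udelay(10)` between accesses):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t fake_rbuf_ctrl;	/* stand-in for the RBUF control register */
static int flush_bit_seen;	/* records that the flush bit was asserted */

static uint32_t rbuf_ctrl_get(void)
{
	return fake_rbuf_ctrl;
}

static void rbuf_ctrl_set(uint32_t v)
{
	if (v & 1u)
		flush_bit_seen = 1;
	fake_rbuf_ctrl = v;
}

static void flush_rx_fifo(void)
{
	uint32_t reg = rbuf_ctrl_get();	/* save the current value */

	rbuf_ctrl_set(reg | 1u);	/* assert the flush bit (BIT(0)) */
	/* udelay(10) in the real driver */
	rbuf_ctrl_set(reg);		/* restore the saved value */
	/* udelay(10) again before DMA is re-enabled */
}
```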
+9-2
drivers/net/ethernet/freescale/fec_main.c
···24542454 fep->link = 0;24552455 fep->full_duplex = 0;2456245624572457- phy_dev->mac_managed_pm = true;24582458-24592457 phy_attached_info(phy_dev);2460245824612459 return 0;···24652467 struct net_device *ndev = platform_get_drvdata(pdev);24662468 struct fec_enet_private *fep = netdev_priv(ndev);24672469 bool suppress_preamble = false;24702470+ struct phy_device *phydev;24682471 struct device_node *node;24692472 int err = -ENXIO;24702473 u32 mii_speed, holdtime;24712474 u32 bus_freq;24752475+ int addr;2472247624732477 /*24742478 * The i.MX28 dual fec interfaces are not equal.···25832583 if (err)25842584 goto err_out_free_mdiobus;25852585 of_node_put(node);25862586+25872587+ /* find all the PHY devices on the bus and set mac_managed_pm to true */25882588+ for (addr = 0; addr < PHY_MAX_ADDR; addr++) {25892589+ phydev = mdiobus_get_phy(fep->mii_bus, addr);25902590+ if (phydev)25912591+ phydev->mac_managed_pm = true;25922592+ }2586259325872594 mii_cnt++;25882595
···222222 if (hw->mac.type >= e1000_pch_lpt) {223223 /* Only unforce SMBus if ME is not active */224224 if (!(er32(FWSM) & E1000_ICH_FWSM_FW_VALID)) {225225+ /* Switching PHY interface always returns MDI error226226+ * so disable retry mechanism to avoid wasting time227227+ */228228+ e1000e_disable_phy_retry(hw);229229+225230 /* Unforce SMBus mode in PHY */226231 e1e_rphy_locked(hw, CV_SMB_CTRL, &phy_reg);227232 phy_reg &= ~CV_SMB_CTRL_FORCE_SMBUS;228233 e1e_wphy_locked(hw, CV_SMB_CTRL, phy_reg);234234+235235+ e1000e_enable_phy_retry(hw);229236230237 /* Unforce SMBus mode in MAC */231238 mac_reg = er32(CTRL_EXT);···317310 goto out;318311 }319312313313+ /* There is no guarantee that the PHY is accessible at this time314314+ * so disable retry mechanism to avoid wasting time315315+ */316316+ e1000e_disable_phy_retry(hw);317317+320318 /* The MAC-PHY interconnect may be in SMBus mode. If the PHY is321319 * inaccessible and resetting the PHY is not blocked, toggle the322320 * LANPHYPC Value bit to force the interconnect to PCIe mode.···392380 break;393381 }394382383383+ e1000e_enable_phy_retry(hw);384384+395385 hw->phy.ops.release(hw);396386 if (!ret_val) {397387···462448 phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;463449464450 phy->id = e1000_phy_unknown;451451+452452+ if (hw->mac.type == e1000_pch_mtp) {453453+ phy->retry_count = 2;454454+ e1000e_enable_phy_retry(hw);455455+ }465456466457 ret_val = e1000_init_phy_workarounds_pchlan(hw);467458 if (ret_val)···11651146 if (ret_val)11661147 goto out;1167114811681168- /* Force SMBus mode in PHY */11691169- ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);11701170- if (ret_val)11711171- goto release;11721172- phy_reg |= CV_SMB_CTRL_FORCE_SMBUS;11731173- e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);11741174-11751175- /* Force SMBus mode in MAC */11761176- mac_reg = er32(CTRL_EXT);11771177- mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;11781178- ew32(CTRL_EXT, mac_reg);11791179-11801149 /* Si 
workaround for ULP entry flow on i127/rev6 h/w. Enable11811150 * LPLU and disable Gig speed when entering ULP11821151 */···13201313 /* Toggle LANPHYPC Value bit */13211314 e1000_toggle_lanphypc_pch_lpt(hw);1322131513161316+ /* Switching PHY interface always returns MDI error13171317+ * so disable retry mechanism to avoid wasting time13181318+ */13191319+ e1000e_disable_phy_retry(hw);13201320+13231321 /* Unforce SMBus mode in PHY */13241322 ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);13251323 if (ret_val) {···13441332 }13451333 phy_reg &= ~CV_SMB_CTRL_FORCE_SMBUS;13461334 e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);13351335+13361336+ e1000e_enable_phy_retry(hw);1347133713481338 /* Unforce SMBus mode in MAC */13491339 mac_reg = er32(CTRL_EXT);
+18
drivers/net/ethernet/intel/e1000e/netdev.c
···66236623 struct e1000_hw *hw = &adapter->hw;66246624 u32 ctrl, ctrl_ext, rctl, status, wufc;66256625 int retval = 0;66266626+ u16 smb_ctrl;6626662766276628 /* Runtime suspend should only enable wakeup for link changes */66286629 if (runtime)···66976696 if (retval)66986697 return retval;66996698 }66996699+67006700+ /* Force SMBUS to allow WOL */67016701+ /* Switching PHY interface always returns MDI error67026702+ * so disable retry mechanism to avoid wasting time67036703+ */67046704+ e1000e_disable_phy_retry(hw);67056705+67066706+ e1e_rphy(hw, CV_SMB_CTRL, &smb_ctrl);67076707+ smb_ctrl |= CV_SMB_CTRL_FORCE_SMBUS;67086708+ e1e_wphy(hw, CV_SMB_CTRL, smb_ctrl);67096709+67106710+ e1000e_enable_phy_retry(hw);67116711+67126712+ /* Force SMBus mode in MAC */67136713+ ctrl_ext = er32(CTRL_EXT);67146714+ ctrl_ext |= E1000_CTRL_EXT_FORCE_SMBUS;67156715+ ew32(CTRL_EXT, ctrl_ext);67006716 }6701671767026718 /* Ensure that the appropriate bits are set in LPI_CTRL
+114-70
drivers/net/ethernet/intel/e1000e/phy.c
···107107 return e1e_wphy(hw, M88E1000_PHY_GEN_CONTROL, 0);108108}109109110110+void e1000e_disable_phy_retry(struct e1000_hw *hw)111111+{112112+ hw->phy.retry_enabled = false;113113+}114114+115115+void e1000e_enable_phy_retry(struct e1000_hw *hw)116116+{117117+ hw->phy.retry_enabled = true;118118+}119119+110120/**111121 * e1000e_read_phy_reg_mdic - Read MDI control register112122 * @hw: pointer to the HW structure···128118 **/129119s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data)130120{121121+ u32 i, mdic = 0, retry_counter, retry_max;131122 struct e1000_phy_info *phy = &hw->phy;132132- u32 i, mdic = 0;123123+ bool success;133124134125 if (offset > MAX_PHY_REG_ADDRESS) {135126 e_dbg("PHY Address %d is out of range\n", offset);136127 return -E1000_ERR_PARAM;137128 }138129130130+ retry_max = phy->retry_enabled ? phy->retry_count : 0;131131+139132 /* Set up Op-code, Phy Address, and register offset in the MDI140133 * Control register. The MAC will take care of interfacing with the141134 * PHY to retrieve the desired data.142135 */143143- mdic = ((offset << E1000_MDIC_REG_SHIFT) |144144- (phy->addr << E1000_MDIC_PHY_SHIFT) |145145- (E1000_MDIC_OP_READ));136136+ for (retry_counter = 0; retry_counter <= retry_max; retry_counter++) {137137+ success = true;146138147147- ew32(MDIC, mdic);139139+ mdic = ((offset << E1000_MDIC_REG_SHIFT) |140140+ (phy->addr << E1000_MDIC_PHY_SHIFT) |141141+ (E1000_MDIC_OP_READ));148142149149- /* Poll the ready bit to see if the MDI read completed150150- * Increasing the time out as testing showed failures with151151- * the lower time out152152- */153153- for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {154154- udelay(50);155155- mdic = er32(MDIC);156156- if (mdic & E1000_MDIC_READY)157157- break;158158- }159159- if (!(mdic & E1000_MDIC_READY)) {160160- e_dbg("MDI Read PHY Reg Address %d did not complete\n", offset);161161- return -E1000_ERR_PHY;162162- }163163- if (mdic & E1000_MDIC_ERROR) {164164- e_dbg("MDI Read 
PHY Reg Address %d Error\n", offset);165165- return -E1000_ERR_PHY;166166- }167167- if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {168168- e_dbg("MDI Read offset error - requested %d, returned %d\n",169169- offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));170170- return -E1000_ERR_PHY;171171- }172172- *data = (u16)mdic;143143+ ew32(MDIC, mdic);173144174174- /* Allow some time after each MDIC transaction to avoid175175- * reading duplicate data in the next MDIC transaction.176176- */177177- if (hw->mac.type == e1000_pch2lan)178178- udelay(100);179179- return 0;145145+ /* Poll the ready bit to see if the MDI read completed146146+ * Increasing the time out as testing showed failures with147147+ * the lower time out148148+ */149149+ for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {150150+ usleep_range(50, 60);151151+ mdic = er32(MDIC);152152+ if (mdic & E1000_MDIC_READY)153153+ break;154154+ }155155+ if (!(mdic & E1000_MDIC_READY)) {156156+ e_dbg("MDI Read PHY Reg Address %d did not complete\n",157157+ offset);158158+ success = false;159159+ }160160+ if (mdic & E1000_MDIC_ERROR) {161161+ e_dbg("MDI Read PHY Reg Address %d Error\n", offset);162162+ success = false;163163+ }164164+ if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {165165+ e_dbg("MDI Read offset error - requested %d, returned %d\n",166166+ offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));167167+ success = false;168168+ }169169+170170+ /* Allow some time after each MDIC transaction to avoid171171+ * reading duplicate data in the next MDIC transaction.172172+ */173173+ if (hw->mac.type == e1000_pch2lan)174174+ usleep_range(100, 150);175175+176176+ if (success) {177177+ *data = (u16)mdic;178178+ return 0;179179+ }180180+181181+ if (retry_counter != retry_max) {182182+ e_dbg("Perform retry on PHY transaction...\n");183183+ mdelay(10);184184+ }185185+ }186186+187187+ return -E1000_ERR_PHY;180188}181189182190/**···207179 **/208180s32 e1000e_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 
data)209181{182182+ u32 i, mdic = 0, retry_counter, retry_max;210183 struct e1000_phy_info *phy = &hw->phy;211211- u32 i, mdic = 0;184184+ bool success;212185213186 if (offset > MAX_PHY_REG_ADDRESS) {214187 e_dbg("PHY Address %d is out of range\n", offset);215188 return -E1000_ERR_PARAM;216189 }217190191191+ retry_max = phy->retry_enabled ? phy->retry_count : 0;192192+218193 /* Set up Op-code, Phy Address, and register offset in the MDI219194 * Control register. The MAC will take care of interfacing with the220195 * PHY to retrieve the desired data.221196 */222222- mdic = (((u32)data) |223223- (offset << E1000_MDIC_REG_SHIFT) |224224- (phy->addr << E1000_MDIC_PHY_SHIFT) |225225- (E1000_MDIC_OP_WRITE));197197+ for (retry_counter = 0; retry_counter <= retry_max; retry_counter++) {198198+ success = true;226199227227- ew32(MDIC, mdic);200200+ mdic = (((u32)data) |201201+ (offset << E1000_MDIC_REG_SHIFT) |202202+ (phy->addr << E1000_MDIC_PHY_SHIFT) |203203+ (E1000_MDIC_OP_WRITE));228204229229- /* Poll the ready bit to see if the MDI read completed230230- * Increasing the time out as testing showed failures with231231- * the lower time out232232- */233233- for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {234234- udelay(50);235235- mdic = er32(MDIC);236236- if (mdic & E1000_MDIC_READY)237237- break;238238- }239239- if (!(mdic & E1000_MDIC_READY)) {240240- e_dbg("MDI Write PHY Reg Address %d did not complete\n", offset);241241- return -E1000_ERR_PHY;242242- }243243- if (mdic & E1000_MDIC_ERROR) {244244- e_dbg("MDI Write PHY Red Address %d Error\n", offset);245245- return -E1000_ERR_PHY;246246- }247247- if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {248248- e_dbg("MDI Write offset error - requested %d, returned %d\n",249249- offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));250250- return -E1000_ERR_PHY;205205+ ew32(MDIC, mdic);206206+207207+ /* Poll the ready bit to see if the MDI read completed208208+ * Increasing the time out as testing showed failures with209209+ * 
the lower time out210210+ */211211+ for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {212212+ usleep_range(50, 60);213213+ mdic = er32(MDIC);214214+ if (mdic & E1000_MDIC_READY)215215+ break;216216+ }217217+ if (!(mdic & E1000_MDIC_READY)) {218218+ e_dbg("MDI Write PHY Reg Address %d did not complete\n",219219+ offset);220220+ success = false;221221+ }222222+ if (mdic & E1000_MDIC_ERROR) {223223+ e_dbg("MDI Write PHY Reg Address %d Error\n", offset);224224+ success = false;225225+ }226226+ if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {227227+ e_dbg("MDI Write offset error - requested %d, returned %d\n",228228+ offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));229229+ success = false;230230+ }231231+232232+ /* Allow some time after each MDIC transaction to avoid233233+ * reading duplicate data in the next MDIC transaction.234234+ */235235+ if (hw->mac.type == e1000_pch2lan)236236+ usleep_range(100, 150);237237+238238+ if (success)239239+ return 0;240240+241241+ if (retry_counter != retry_max) {242242+ e_dbg("Perform retry on PHY transaction...\n");243243+ mdelay(10);244244+ }251245 }252246253253- /* Allow some time after each MDIC transaction to avoid254254- * reading duplicate data in the next MDIC transaction.255255- */256256- if (hw->mac.type == e1000_pch2lan)257257- udelay(100);258258-259259- return 0;247247+ return -E1000_ERR_PHY;260248}261249262250/**
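The retry policy introduced above reduces to: attempt the MDIC transaction `retry_max + 1` times, where `retry_max` collapses to zero while retries are disabled. A self-contained sketch of that loop, with a hypothetical `toy_mdic_xfer()` standing in for one read/write attempt:

```c
#include <assert.h>
#include <stdbool.h>

struct toy_phy {
	bool retry_enabled;
	unsigned int retry_count;
	unsigned int fail_first;	/* attempts that fail, for the demo */
	unsigned int attempts;
};

/* One MDIC attempt; succeeds only after fail_first failures. */
static bool toy_mdic_xfer(struct toy_phy *phy)
{
	phy->attempts++;
	return phy->attempts > phy->fail_first;
}

static int toy_phy_transaction(struct toy_phy *phy)
{
	unsigned int retry_max = phy->retry_enabled ? phy->retry_count : 0;
	unsigned int retry;

	for (retry = 0; retry <= retry_max; retry++) {
		if (toy_mdic_xfer(phy))
			return 0;
		/* mdelay(10) between attempts in the real driver */
	}

	return -1;	/* -E1000_ERR_PHY in the driver */
}
```

This also shows why the ich8lan paths wrap known-to-fail PHY interface switches in `e1000e_disable_phy_retry()`/`e1000e_enable_phy_retry()`: without that, each expected MDI error would burn `retry_count` extra attempts plus the inter-attempt delays.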
···955955 struct rcu_head rcu; /* to avoid race with update stats on free */956956 char name[I40E_INT_NAME_STR_LEN];957957 bool arm_wb_state;958958+ bool in_busy_poll;958959 int irq_num; /* IRQ assigned to this q_vector */959960} ____cacheline_internodealigned_in_smp;960961
···26302630 return failure ? budget : (int)total_rx_packets;26312631}2632263226332633-static inline u32 i40e_buildreg_itr(const int type, u16 itr)26332633+/**26342634+ * i40e_buildreg_itr - build a value for writing to I40E_PFINT_DYN_CTLN register26352635+ * @itr_idx: interrupt throttling index26362636+ * @interval: interrupt throttling interval value in usecs26372637+ * @force_swint: force software interrupt26382638+ *26392639+ * The function builds a value for I40E_PFINT_DYN_CTLN register that26402640+ * is used to update interrupt throttling interval for specified ITR index26412641+ * and optionally enforces a software interrupt. If the @itr_idx is equal26422642+ * to I40E_ITR_NONE then no interval change is applied and only @force_swint26432643+ * parameter is taken into account. If the interval change and enforced26442644+ * software interrupt are not requested then the built value just enables26452645+ * appropriate vector interrupt.26462646+ **/26472647+static u32 i40e_buildreg_itr(enum i40e_dyn_idx itr_idx, u16 interval,26482648+ bool force_swint)26342649{26352650 u32 val;26362651···26592644 * an event in the PBA anyway so we need to rely on the automask26602645 * to hold pending events for us until the interrupt is re-enabled26612646 *26622662- * The itr value is reported in microseconds, and the register26632663- * value is recorded in 2 microsecond units. For this reason we26642664- * only need to shift by the interval shift - 1 instead of the26652665- * full value.26472647+ * We have to shift the given value as it is reported in microseconds26482648+ * and the register value is recorded in 2 microsecond units.26662649 */26672667- itr &= I40E_ITR_MASK;26502650+ interval >>= 1;2668265126522652+ /* 1. Enable vector interrupt26532653+ * 2. 
Update the interval for the specified ITR index26542654+ * (I40E_ITR_NONE in the register is used to indicate that26552655+ * no interval update is requested)26562656+ */26692657 val = I40E_PFINT_DYN_CTLN_INTENA_MASK |26702670- (type << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) |26712671- (itr << (I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT - 1));26582658+ FIELD_PREP(I40E_PFINT_DYN_CTLN_ITR_INDX_MASK, itr_idx) |26592659+ FIELD_PREP(I40E_PFINT_DYN_CTLN_INTERVAL_MASK, interval);26602660+26612661+ /* 3. Enforce software interrupt trigger if requested26622662+ * (These software interrupts rate is limited by ITR2 that is26632663+ * set to 20K interrupts per second)26642664+ */26652665+ if (force_swint)26662666+ val |= I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK |26672667+ I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK |26682668+ FIELD_PREP(I40E_PFINT_DYN_CTLN_SW_ITR_INDX_MASK,26692669+ I40E_SW_ITR);2672267026732671 return val;26742672}26752675-26762676-/* a small macro to shorten up some long lines */26772677-#define INTREG I40E_PFINT_DYN_CTLN2678267326792674/* The act of updating the ITR will cause it to immediately trigger. 
In order26802675 * to prevent this from throwing off adaptive update statistics we defer the···27042679static inline void i40e_update_enable_itr(struct i40e_vsi *vsi,27052680 struct i40e_q_vector *q_vector)27062681{26822682+ enum i40e_dyn_idx itr_idx = I40E_ITR_NONE;27072683 struct i40e_hw *hw = &vsi->back->hw;27082708- u32 intval;26842684+ u16 interval = 0;26852685+ u32 itr_val;2709268627102687 /* If we don't have MSIX, then we only need to re-enable icr0 */27112688 if (!test_bit(I40E_FLAG_MSIX_ENA, vsi->back->flags)) {···27292702 */27302703 if (q_vector->rx.target_itr < q_vector->rx.current_itr) {27312704 /* Rx ITR needs to be reduced, this is highest priority */27322732- intval = i40e_buildreg_itr(I40E_RX_ITR,27332733- q_vector->rx.target_itr);27052705+ itr_idx = I40E_RX_ITR;27062706+ interval = q_vector->rx.target_itr;27342707 q_vector->rx.current_itr = q_vector->rx.target_itr;27352708 q_vector->itr_countdown = ITR_COUNTDOWN_START;27362709 } else if ((q_vector->tx.target_itr < q_vector->tx.current_itr) ||···27392712 /* Tx ITR needs to be reduced, this is second priority27402713 * Tx ITR needs to be increased more than Rx, fourth priority27412714 */27422742- intval = i40e_buildreg_itr(I40E_TX_ITR,27432743- q_vector->tx.target_itr);27152715+ itr_idx = I40E_TX_ITR;27162716+ interval = q_vector->tx.target_itr;27442717 q_vector->tx.current_itr = q_vector->tx.target_itr;27452718 q_vector->itr_countdown = ITR_COUNTDOWN_START;27462719 } else if (q_vector->rx.current_itr != q_vector->rx.target_itr) {27472720 /* Rx ITR needs to be increased, third priority */27482748- intval = i40e_buildreg_itr(I40E_RX_ITR,27492749- q_vector->rx.target_itr);27212721+ itr_idx = I40E_RX_ITR;27222722+ interval = q_vector->rx.target_itr;27502723 q_vector->rx.current_itr = q_vector->rx.target_itr;27512724 q_vector->itr_countdown = ITR_COUNTDOWN_START;27522725 } else {27532726 /* No ITR update, lowest priority */27542754- intval = i40e_buildreg_itr(I40E_ITR_NONE, 0);27552727 if 
(q_vector->itr_countdown)27562728 q_vector->itr_countdown--;27572729 }2758273027592759- if (!test_bit(__I40E_VSI_DOWN, vsi->state))27602760- wr32(hw, INTREG(q_vector->reg_idx), intval);27312731+ /* Do not update interrupt control register if VSI is down */27322732+ if (test_bit(__I40E_VSI_DOWN, vsi->state))27332733+ return;27342734+27352735+ /* Update ITR interval if necessary and enforce software interrupt27362736+ * if we are exiting busy poll.27372737+ */27382738+ if (q_vector->in_busy_poll) {27392739+ itr_val = i40e_buildreg_itr(itr_idx, interval, true);27402740+ q_vector->in_busy_poll = false;27412741+ } else {27422742+ itr_val = i40e_buildreg_itr(itr_idx, interval, false);27432743+ }27442744+ wr32(hw, I40E_PFINT_DYN_CTLN(q_vector->reg_idx), itr_val);27612745}2762274627632747/**···28832845 */28842846 if (likely(napi_complete_done(napi, work_done)))28852847 i40e_update_enable_itr(vsi, q_vector);28482848+ else28492849+ q_vector->in_busy_poll = true;2886285028872851 return min(work_done, budget - 1);28882852}
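The new i40e_buildreg_itr() documents that the interval arrives in microseconds but is stored in 2-microsecond register units, then packed into fixed bit fields alongside the ITR index. A sketch of that assembly with illustrative shifts (the constants below are stand-ins, not the real I40E_PFINT_DYN_CTLN layout):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_INTENA		(1u << 0)	/* enable vector interrupt */
#define DEMO_ITR_INDX_SHIFT	3
#define DEMO_INTERVAL_SHIFT	5

static uint32_t demo_buildreg_itr(uint32_t itr_idx, uint16_t interval_usecs)
{
	/* register counts in 2-usec units, so halve the usec value */
	uint32_t interval = interval_usecs >> 1;

	return DEMO_INTENA |
	       (itr_idx << DEMO_ITR_INDX_SHIFT) |
	       (interval << DEMO_INTERVAL_SHIFT);
}
```

The real function uses `FIELD_PREP()` with the register's mask macros instead of raw shifts, and additionally ORs in the software-interrupt trigger bits when `force_swint` is set on busy-poll exit.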
+1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
···6868/* these are indexes into ITRN registers */6969#define I40E_RX_ITR I40E_IDX_ITR07070#define I40E_TX_ITR I40E_IDX_ITR17171+#define I40E_SW_ITR I40E_IDX_ITR271727273/* Supported RSS offloads */7374#define I40E_DEFAULT_RSS_HENA ( \
···16241624{16251625 struct i40e_hw *hw = &pf->hw;16261626 struct i40e_vf *vf;16271627- int i, v;16281627 u32 reg;16281628+ int i;1629162916301630 /* If we don't have any VFs, then there is nothing to reset */16311631 if (!pf->num_alloc_vfs)···16361636 return false;1637163716381638 /* Begin reset on all VFs at once */16391639- for (v = 0; v < pf->num_alloc_vfs; v++) {16401640- vf = &pf->vf[v];16391639+ for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {16411640 /* If VF is being reset no need to trigger reset again */16421641 if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))16431643- i40e_trigger_vf_reset(&pf->vf[v], flr);16421642+ i40e_trigger_vf_reset(vf, flr);16441643 }1645164416461645 /* HW requires some time to make sure it can flush the FIFO for a VF···16481649 * the VFs using a simple iterator that increments once that VF has16491650 * finished resetting.16501651 */16511651- for (i = 0, v = 0; i < 10 && v < pf->num_alloc_vfs; i++) {16521652+ for (i = 0, vf = &pf->vf[0]; i < 10 && vf < &pf->vf[pf->num_alloc_vfs]; ++i) {16521653 usleep_range(10000, 20000);1653165416541655 /* Check each VF in sequence, beginning with the VF to fail16551656 * the previous check.16561657 */16571657- while (v < pf->num_alloc_vfs) {16581658- vf = &pf->vf[v];16581658+ while (vf < &pf->vf[pf->num_alloc_vfs]) {16591659 if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {16601660 reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));16611661 if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))···16641666 /* If the current VF has finished resetting, move on16651667 * to the next VF in sequence.16661668 */16671667- v++;16691669+ ++vf;16681670 }16691671 }16701672···16741676 /* Display a warning if at least one VF didn't manage to reset in16751677 * time, but continue on with the operation.16761678 */16771677- if (v < pf->num_alloc_vfs)16791679+ if (vf < &pf->vf[pf->num_alloc_vfs])16781680 dev_err(&pf->pdev->dev, "VF reset check timeout on VF %d\n",16791679- pf->vf[v].vf_id);16811681+ 
vf->vf_id);16801682 usleep_range(10000, 20000);1681168316821684 /* Begin disabling all the rings associated with VFs, but do not wait16831685 * between each VF.16841686 */16851685- for (v = 0; v < pf->num_alloc_vfs; v++) {16871687+ for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {16861688 /* On initial reset, we don't have any queues to disable */16871687- if (pf->vf[v].lan_vsi_idx == 0)16891689+ if (vf->lan_vsi_idx == 0)16881690 continue;1689169116901692 /* If VF is reset in another thread just continue */16911693 if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))16921694 continue;1693169516941694- i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);16961696+ i40e_vsi_stop_rings_no_wait(pf->vsi[vf->lan_vsi_idx]);16951697 }1696169816971699 /* Now that we've notified HW to disable all of the VF rings, wait16981700 * until they finish.16991701 */17001700- for (v = 0; v < pf->num_alloc_vfs; v++) {17021702+ for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {17011703 /* On initial reset, we don't have any queues to disable */17021702- if (pf->vf[v].lan_vsi_idx == 0)17041704+ if (vf->lan_vsi_idx == 0)17031705 continue;1704170617051707 /* If VF is reset in another thread just continue */17061708 if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))17071709 continue;1708171017091709- i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);17111711+ i40e_vsi_wait_queues_disabled(pf->vsi[vf->lan_vsi_idx]);17101712 }1711171317121714 /* Hw may need up to 50ms to finish disabling the RX queues. 
We···17151717 mdelay(50);1716171817171719 /* Finish the reset on each VF */17181718- for (v = 0; v < pf->num_alloc_vfs; v++) {17201720+ for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {17191721 /* If VF is reset in another thread just continue */17201722 if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))17211723 continue;1722172417231723- i40e_cleanup_reset_vf(&pf->vf[v]);17251725+ i40e_cleanup_reset_vf(vf);17241726 }1725172717261728 i40e_flush(hw);···31373139 /* Allow to delete VF primary MAC only if it was not set31383140 * administratively by PF or if VF is trusted.31393141 */31403140- if (ether_addr_equal(addr, vf->default_lan_addr.addr) &&31413141- i40e_can_vf_change_mac(vf))31423142- was_unimac_deleted = true;31433143- else31443144- continue;31423142+ if (ether_addr_equal(addr, vf->default_lan_addr.addr)) {31433143+ if (i40e_can_vf_change_mac(vf))31443144+ was_unimac_deleted = true;31453145+ else31463146+ continue;31473147+ }3145314831463149 if (i40e_del_mac_filter(vsi, al->list[i].addr)) {31473150 ret = -EINVAL;
···29412941 rx_ptype = le16_get_bits(rx_desc->ptype_err_fflags0,29422942 VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M);2943294329442944+ skb->protocol = eth_type_trans(skb, rxq->vport->netdev);29452945+29442946 decoded = rxq->vport->rx_ptype_lkup[rx_ptype];29452947 /* If we don't know the ptype we can't do anything else with it. Just29462948 * pass it up the stack as-is.···2952295029532951 /* process RSS/hash */29542952 idpf_rx_hash(rxq, skb, rx_desc, &decoded);29552955-29562956- skb->protocol = eth_type_trans(skb, rxq->vport->netdev);2957295329582954 if (le16_get_bits(rx_desc->hdrlen_flags,29592955 VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
···24312431 struct lan8814_ptp_rx_ts *rx_ts, *tmp;24322432 int txcfg = 0, rxcfg = 0;24332433 int pkt_ts_enable;24342434+ int tx_mod;2434243524352436 ptp_priv->hwts_tx_type = config->tx_type;24362437 ptp_priv->rx_filter = config->rx_filter;···24782477 lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_RX_TIMESTAMP_EN, pkt_ts_enable);24792478 lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_TX_TIMESTAMP_EN, pkt_ts_enable);2480247924812481- if (ptp_priv->hwts_tx_type == HWTSTAMP_TX_ONESTEP_SYNC)24802480+ tx_mod = lanphy_read_page_reg(ptp_priv->phydev, 5, PTP_TX_MOD);24812481+ if (ptp_priv->hwts_tx_type == HWTSTAMP_TX_ONESTEP_SYNC) {24822482 lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_TX_MOD,24832483- PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_);24832483+ tx_mod | PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_);24842484+ } else if (ptp_priv->hwts_tx_type == HWTSTAMP_TX_ON) {24852485+ lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_TX_MOD,24862486+ tx_mod & ~PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_);24872487+ }2484248824852489 if (config->rx_filter != HWTSTAMP_FILTER_NONE)24862490 lan8814_config_ts_intr(ptp_priv->phydev, true);···25432537 }25442538}2545253925462546-static void lan8814_get_sig_rx(struct sk_buff *skb, u16 *sig)25402540+static bool lan8814_get_sig_rx(struct sk_buff *skb, u16 *sig)25472541{25482542 struct ptp_header *ptp_header;25492543 u32 type;···25532547 ptp_header = ptp_parse_header(skb, type);25542548 skb_pull_inline(skb, ETH_HLEN);2555254925502550+ if (!ptp_header)25512551+ return false;25522552+25562553 *sig = (__force u16)(ntohs(ptp_header->sequence_id));25542554+ return true;25572555}2558255625592557static bool lan8814_match_rx_skb(struct kszphy_ptp_priv *ptp_priv,···25692559 bool ret = false;25702560 u16 skb_sig;2571256125722572- lan8814_get_sig_rx(skb, &skb_sig);25622562+ if (!lan8814_get_sig_rx(skb, &skb_sig))25632563+ return ret;2573256425742565 /* Iterate over all RX timestamps and match it with the received skbs */25752566 spin_lock_irqsave(&ptp_priv->rx_ts_lock, 
flags);···28452834 return 0;28462835}2847283628482848-static void lan8814_get_sig_tx(struct sk_buff *skb, u16 *sig)28372837+static bool lan8814_get_sig_tx(struct sk_buff *skb, u16 *sig)28492838{28502839 struct ptp_header *ptp_header;28512840 u32 type;···28532842 type = ptp_classify_raw(skb);28542843 ptp_header = ptp_parse_header(skb, type);2855284428452845+ if (!ptp_header)28462846+ return false;28472847+28562848 *sig = (__force u16)(ntohs(ptp_header->sequence_id));28492849+ return true;28572850}2858285128592852static void lan8814_match_tx_skb(struct kszphy_ptp_priv *ptp_priv,···2871285628722857 spin_lock_irqsave(&ptp_priv->tx_queue.lock, flags);28732858 skb_queue_walk_safe(&ptp_priv->tx_queue, skb, skb_tmp) {28742874- lan8814_get_sig_tx(skb, &skb_sig);28592859+ if (!lan8814_get_sig_tx(skb, &skb_sig))28602860+ continue;2875286128762862 if (memcmp(&skb_sig, &seq_id, sizeof(seq_id)))28772863 continue;···2926291029272911 spin_lock_irqsave(&ptp_priv->rx_queue.lock, flags);29282912 skb_queue_walk_safe(&ptp_priv->rx_queue, skb, skb_tmp) {29292929- lan8814_get_sig_rx(skb, &skb_sig);29132913+ if (!lan8814_get_sig_rx(skb, &skb_sig))29142914+ continue;2930291529312916 if (memcmp(&skb_sig, &rx_ts->seq_id, sizeof(rx_ts->seq_id)))29322917 continue;
+2
drivers/net/usb/ax88179_178a.c
···1273127312741274 if (is_valid_ether_addr(mac)) {12751275 eth_hw_addr_set(dev->net, mac);12761276+ if (!is_local_ether_addr(mac))12771277+ dev->net->addr_assign_type = NET_ADDR_PERM;12761278 } else {12771279 netdev_info(dev->net, "invalid MAC address, using random\n");12781280 eth_hw_addr_random(dev->net);
+1
drivers/net/xen-netfront.c
···285285 return NULL;286286 }287287 skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);288288+ skb_mark_for_recycle(skb);288289289290 /* Align ip header to a 16 bytes boundary */290291 skb_reserve(skb, NET_IP_ALIGN);
+32-9
drivers/nvme/host/core.c
···20762076 bool vwc = ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT;20772077 struct queue_limits lim;20782078 struct nvme_id_ns_nvm *nvm = NULL;20792079+ struct nvme_zone_info zi = {};20792080 struct nvme_id_ns *id;20802081 sector_t capacity;20812082 unsigned lbaf;···20892088 if (id->ncap == 0) {20902089 /* namespace not allocated or attached */20912090 info->is_removed = true;20922092- ret = -ENODEV;20912091+ ret = -ENXIO;20932092 goto out;20942093 }20942094+ lbaf = nvme_lbaf_index(id->flbas);2095209520962096 if (ns->ctrl->ctratt & NVME_CTRL_ATTR_ELBAS) {20972097 ret = nvme_identify_ns_nvm(ns->ctrl, info->nsid, &nvm);···21002098 goto out;21012099 }2102210021012101+ if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&21022102+ ns->head->ids.csi == NVME_CSI_ZNS) {21032103+ ret = nvme_query_zone_info(ns, lbaf, &zi);21042104+ if (ret < 0)21052105+ goto out;21062106+ }21072107+21032108 blk_mq_freeze_queue(ns->disk->queue);21042104- lbaf = nvme_lbaf_index(id->flbas);21052109 ns->head->lba_shift = id->lbaf[lbaf].ds;21062110 ns->head->nuse = le64_to_cpu(id->nuse);21072111 capacity = nvme_lba_to_sect(ns->head, le64_to_cpu(id->nsze));···21202112 capacity = 0;21212113 nvme_config_discard(ns, &lim);21222114 if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&21232123- ns->head->ids.csi == NVME_CSI_ZNS) {21242124- ret = nvme_update_zone_info(ns, lbaf, &lim);21252125- if (ret) {21262126- blk_mq_unfreeze_queue(ns->disk->queue);21272127- goto out;21282128- }21292129- }21152115+ ns->head->ids.csi == NVME_CSI_ZNS)21162116+ nvme_update_zone_info(ns, &lim, &zi);21302117 ret = queue_limits_commit_update(ns->disk->queue, &lim);21312118 if (ret) {21322119 blk_mq_unfreeze_queue(ns->disk->queue);···22042201 }2205220222062203 if (!ret && nvme_ns_head_multipath(ns->head)) {22042204+ struct queue_limits *ns_lim = &ns->disk->queue->limits;22072205 struct queue_limits lim;2208220622092207 blk_mq_freeze_queue(ns->head->disk->queue);···22162212 set_disk_ro(ns->head->disk, nvme_ns_is_readonly(ns, info));22172213 
nvme_mpath_revalidate_paths(ns);2218221422152215+ /*22162216+ * queue_limits mixes values that are the hardware limitations22172217+ * for bio splitting with what is the device configuration.22182218+ *22192219+ * For NVMe the device configuration can change after e.g. a22202220+ * Format command, and we really want to pick up the new format22212221+ * value here. But we must still stack the queue limits to the22222222+ * least common denominator for multipathing to split the bios22232223+ * properly.22242224+ *22252225+ * To work around this, we explicitly set the device22262226+ * configuration to those that we just queried, but only stack22272227+ * the splitting limits in to make sure we still obey possibly22282228+ * lower limitations of other controllers.22292229+ */22192230 lim = queue_limits_start_update(ns->head->disk->queue);22312231+ lim.logical_block_size = ns_lim->logical_block_size;22322232+ lim.physical_block_size = ns_lim->physical_block_size;22332233+ lim.io_min = ns_lim->io_min;22342234+ lim.io_opt = ns_lim->io_opt;22202235 queue_limits_stack_bdev(&lim, ns->disk->part0, 0,22212236 ns->head->disk->disk_name);22222237 ret = queue_limits_commit_update(ns->head->disk->queue, &lim);
···15411541 }1542154215431543 down_read(&nvmet_config_sem);15441544+ if (!strncmp(nvmet_disc_subsys->subsysnqn, subsysnqn,15451545+ NVMF_NQN_SIZE)) {15461546+ if (kref_get_unless_zero(&nvmet_disc_subsys->ref)) {15471547+ up_read(&nvmet_config_sem);15481548+ return nvmet_disc_subsys;15491549+ }15501550+ }15441551 list_for_each_entry(p, &port->subsystems, entry) {15451552 if (!strncmp(p->subsys->subsysnqn, subsysnqn,15461553 NVMF_NQN_SIZE)) {
+10-7
drivers/nvme/target/fc.c
···11151115}1116111611171117static bool11181118-nvmet_fc_assoc_exits(struct nvmet_fc_tgtport *tgtport, u64 association_id)11181118+nvmet_fc_assoc_exists(struct nvmet_fc_tgtport *tgtport, u64 association_id)11191119{11201120 struct nvmet_fc_tgt_assoc *a;11211121+ bool found = false;1121112211231123+ rcu_read_lock();11221124 list_for_each_entry_rcu(a, &tgtport->assoc_list, a_list) {11231123- if (association_id == a->association_id)11241124- return true;11251125+ if (association_id == a->association_id) {11261126+ found = true;11271127+ break;11281128+ }11251129 }11301130+ rcu_read_unlock();1126113111271127- return false;11321132+ return found;11281133}1129113411301135static struct nvmet_fc_tgt_assoc *···11691164 ran = ran << BYTES_FOR_QID_SHIFT;1170116511711166 spin_lock_irqsave(&tgtport->lock, flags);11721172- rcu_read_lock();11731173- if (!nvmet_fc_assoc_exits(tgtport, ran)) {11671167+ if (!nvmet_fc_assoc_exists(tgtport, ran)) {11741168 assoc->association_id = ran;11751169 list_add_tail_rcu(&assoc->a_list, &tgtport->assoc_list);11761170 done = true;11771171 }11781178- rcu_read_unlock();11791172 spin_unlock_irqrestore(&tgtport->lock, flags);11801173 } while (!done);11811174
+12
drivers/of/dynamic.c
···991010#define pr_fmt(fmt) "OF: " fmt11111212+#include <linux/device.h>1213#include <linux/of.h>1314#include <linux/spinlock.h>1415#include <linux/slab.h>···667666void of_changeset_destroy(struct of_changeset *ocs)668667{669668 struct of_changeset_entry *ce, *cen;669669+670670+ /*671671+ * When a device is deleted, the device links to/from it are also queued672672+ * for deletion. Until these device links are freed, the devices673673+ * themselves aren't freed. If the device being deleted is due to an674674+ * overlay change, this device might be holding a reference to a device675675+ * node that will be freed. So, wait until all already pending device676676+ * links are deleted before freeing a device node. This ensures we don't677677+ * free any device node that has a non-zero reference count.678678+ */679679+ device_link_wait_removal();670680671681 list_for_each_entry_safe_reverse(ce, cen, &ocs->entries, node)672682 __of_changeset_entry_destroy(ce);
+8
drivers/of/module.c
···1616 ssize_t csize;1717 ssize_t tsize;18181919+ /*2020+ * Prevent a kernel oops in vsnprintf() -- it only allows passing a2121+ * NULL ptr when the length is also 0. Also filter out the negative2222+ * lengths...2323+ */2424+ if ((len > 0 && !str) || len < 0)2525+ return -EINVAL;2626+1927 /* Name & Type */2028 /* %p eats all alphanum characters, so %c must be used here */2129 csize = snprintf(str, len, "of:N%pOFn%c%s", np, 'T',
+4
drivers/perf/riscv_pmu.c
···313313 u64 event_config = 0;314314 uint64_t cmask;315315316316+ /* driver does not support branch stack sampling */317317+ if (has_branch_stack(event))318318+ return -EOPNOTSUPP;319319+316320 hwc->flags = 0;317321 mapped_event = rvpmu->event_map(event, &event_config);318322 if (mapped_event < 0) {
+1-1
drivers/pwm/core.c
···443443 if (IS_ERR(pwm))444444 return pwm;445445446446- if (args->args_count > 1)446446+ if (args->args_count > 0)447447 pwm->args.period = args->args[0];448448449449 pwm->args.polarity = PWM_POLARITY_NORMAL;
···606606607607 /* There might be no cooling devices yet. */608608 if (!num_actors) {609609- ret = -EINVAL;609609+ ret = 0;610610 goto clean_state;611611 }612612···679679 return -ENOMEM;680680681681 get_governor_trips(tz, params);682682- if (!params->trip_max) {683683- dev_warn(&tz->device, "power_allocator: missing trip_max\n");684684- kfree(params);685685- return -EINVAL;686686- }687682688683 ret = check_power_actors(tz, params);689684 if (ret < 0) {···709714 else710715 params->sustainable_power = tz->tzp->sustainable_power;711716712712- estimate_pid_constants(tz, tz->tzp->sustainable_power,713713- params->trip_switch_on,714714- params->trip_max->temperature);717717+ if (params->trip_max)718718+ estimate_pid_constants(tz, tz->tzp->sustainable_power,719719+ params->trip_switch_on,720720+ params->trip_max->temperature);715721716722 reset_pid_controller(params);717723
+7-2
drivers/ufs/core/ufshcd.c
···3217321732183218 /* MCQ mode */32193219 if (is_mcq_enabled(hba)) {32203220- err = ufshcd_clear_cmd(hba, lrbp->task_tag);32203220+ /* successfully cleared the command, retry if needed */32213221+ if (ufshcd_clear_cmd(hba, lrbp->task_tag) == 0)32223222+ err = -EAGAIN;32213223 hba->dev_cmd.complete = NULL;32223224 return err;32233225 }···9793979197949792 /* UFS device & link must be active before we enter in this function */97959793 if (!ufshcd_is_ufs_dev_active(hba) || !ufshcd_is_link_active(hba)) {97969796- ret = -EINVAL;97949794+ /* Wait err handler finish or trigger err recovery */97959795+ if (!ufshcd_eh_in_progress(hba))97969796+ ufshcd_force_error_recovery(hba);97979797+ ret = -EBUSY;97979798 goto enable_scaling;97989799 }97999800
···226226 fallthrough;227227 case BCH_WATERMARK_btree_copygc:228228 case BCH_WATERMARK_reclaim:229229+ case BCH_WATERMARK_interior_updates:229230 break;230231 }231232
···1414#include "move.h"1515#include "nocow_locking.h"1616#include "rebalance.h"1717+#include "snapshot.h"1718#include "subvolume.h"1819#include "trace.h"1920···510509 unsigned ptrs_locked = 0;511510 int ret = 0;512511512512+ /*513513+ * fs is corrupt we have a key for a snapshot node that doesn't exist,514514+ * and we have to check for this because we go rw before repairing the515515+ * snapshots table - just skip it, we can move it later.516516+ */517517+ if (unlikely(k.k->p.snapshot && !bch2_snapshot_equiv(c, k.k->p.snapshot)))518518+ return -BCH_ERR_data_update_done;519519+513520 bch2_bkey_buf_init(&m->k);514521 bch2_bkey_buf_reassemble(&m->k, c, k);515522 m->btree_id = btree_id;···580571 move_ctxt_wait_event(ctxt,581572 (locked = bch2_bucket_nocow_trylock(&c->nocow_locks,582573 PTR_BUCKET_POS(c, &p.ptr), 0)) ||583583- (!atomic_read(&ctxt->read_sectors) &&584584- !atomic_read(&ctxt->write_sectors)));574574+ list_empty(&ctxt->ios));585575586576 if (!locked)587577 bch2_bucket_nocow_lock(&c->nocow_locks,
···11+// SPDX-License-Identifier: GPL-2.022+33+#include "eytzinger.h"44+55+/**66+ * is_aligned - is this pointer & size okay for word-wide copying?77+ * @base: pointer to data88+ * @size: size of each element99+ * @align: required alignment (typically 4 or 8)1010+ *1111+ * Returns true if elements can be copied using word loads and stores.1212+ * The size must be a multiple of the alignment, and the base address must1313+ * be if we do not have CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS.1414+ *1515+ * For some reason, gcc doesn't know to optimize "if (a & mask || b & mask)"1616+ * to "if ((a | b) & mask)", so we do that by hand.1717+ */1818+__attribute_const__ __always_inline1919+static bool is_aligned(const void *base, size_t size, unsigned char align)2020+{2121+ unsigned char lsbits = (unsigned char)size;2222+2323+ (void)base;2424+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS2525+ lsbits |= (unsigned char)(uintptr_t)base;2626+#endif2727+ return (lsbits & (align - 1)) == 0;2828+}2929+3030+/**3131+ * swap_words_32 - swap two elements in 32-bit chunks3232+ * @a: pointer to the first element to swap3333+ * @b: pointer to the second element to swap3434+ * @n: element size (must be a multiple of 4)3535+ *3636+ * Exchange the two objects in memory. 
This exploits base+index addressing,3737+ * which basically all CPUs have, to minimize loop overhead computations.3838+ *3939+ * For some reason, on x86 gcc 7.3.0 adds a redundant test of n at the4040+ * bottom of the loop, even though the zero flag is still valid from the4141+ * subtract (since the intervening mov instructions don't alter the flags).4242+ * Gcc 8.1.0 doesn't have that problem.4343+ */4444+static void swap_words_32(void *a, void *b, size_t n)4545+{4646+ do {4747+ u32 t = *(u32 *)(a + (n -= 4));4848+ *(u32 *)(a + n) = *(u32 *)(b + n);4949+ *(u32 *)(b + n) = t;5050+ } while (n);5151+}5252+5353+/**5454+ * swap_words_64 - swap two elements in 64-bit chunks5555+ * @a: pointer to the first element to swap5656+ * @b: pointer to the second element to swap5757+ * @n: element size (must be a multiple of 8)5858+ *5959+ * Exchange the two objects in memory. This exploits base+index6060+ * addressing, which basically all CPUs have, to minimize loop overhead6161+ * computations.6262+ *6363+ * We'd like to use 64-bit loads if possible. If they're not, emulating6464+ * one requires base+index+4 addressing which x86 has but most other6565+ * processors do not. If CONFIG_64BIT, we definitely have 64-bit loads,6666+ * but it's possible to have 64-bit loads without 64-bit pointers (e.g.6767+ * x32 ABI). 
Are there any cases the kernel needs to worry about?6868+ */6969+static void swap_words_64(void *a, void *b, size_t n)7070+{7171+ do {7272+#ifdef CONFIG_64BIT7373+ u64 t = *(u64 *)(a + (n -= 8));7474+ *(u64 *)(a + n) = *(u64 *)(b + n);7575+ *(u64 *)(b + n) = t;7676+#else7777+ /* Use two 32-bit transfers to avoid base+index+4 addressing */7878+ u32 t = *(u32 *)(a + (n -= 4));7979+ *(u32 *)(a + n) = *(u32 *)(b + n);8080+ *(u32 *)(b + n) = t;8181+8282+ t = *(u32 *)(a + (n -= 4));8383+ *(u32 *)(a + n) = *(u32 *)(b + n);8484+ *(u32 *)(b + n) = t;8585+#endif8686+ } while (n);8787+}8888+8989+/**9090+ * swap_bytes - swap two elements a byte at a time9191+ * @a: pointer to the first element to swap9292+ * @b: pointer to the second element to swap9393+ * @n: element size9494+ *9595+ * This is the fallback if alignment doesn't allow using larger chunks.9696+ */9797+static void swap_bytes(void *a, void *b, size_t n)9898+{9999+ do {100100+ char t = ((char *)a)[--n];101101+ ((char *)a)[n] = ((char *)b)[n];102102+ ((char *)b)[n] = t;103103+ } while (n);104104+}105105+106106+/*107107+ * The values are arbitrary as long as they can't be confused with108108+ * a pointer, but small integers make for the smallest compare109109+ * instructions.110110+ */111111+#define SWAP_WORDS_64 (swap_r_func_t)0112112+#define SWAP_WORDS_32 (swap_r_func_t)1113113+#define SWAP_BYTES (swap_r_func_t)2114114+#define SWAP_WRAPPER (swap_r_func_t)3115115+116116+struct wrapper {117117+ cmp_func_t cmp;118118+ swap_func_t swap;119119+};120120+121121+/*122122+ * The function pointer is last to make tail calls most efficient if the123123+ * compiler decides not to inline this function.124124+ */125125+static void do_swap(void *a, void *b, size_t size, swap_r_func_t swap_func, const void *priv)126126+{127127+ if (swap_func == SWAP_WRAPPER) {128128+ ((const struct wrapper *)priv)->swap(a, b, (int)size);129129+ return;130130+ }131131+132132+ if (swap_func == SWAP_WORDS_64)133133+ swap_words_64(a, b, size);134134+ 
else if (swap_func == SWAP_WORDS_32)135135+ swap_words_32(a, b, size);136136+ else if (swap_func == SWAP_BYTES)137137+ swap_bytes(a, b, size);138138+ else139139+ swap_func(a, b, (int)size, priv);140140+}141141+142142+#define _CMP_WRAPPER ((cmp_r_func_t)0L)143143+144144+static int do_cmp(const void *a, const void *b, cmp_r_func_t cmp, const void *priv)145145+{146146+ if (cmp == _CMP_WRAPPER)147147+ return ((const struct wrapper *)priv)->cmp(a, b);148148+ return cmp(a, b, priv);149149+}150150+151151+static inline int eytzinger0_do_cmp(void *base, size_t n, size_t size,152152+ cmp_r_func_t cmp_func, const void *priv,153153+ size_t l, size_t r)154154+{155155+ return do_cmp(base + inorder_to_eytzinger0(l, n) * size,156156+ base + inorder_to_eytzinger0(r, n) * size,157157+ cmp_func, priv);158158+}159159+160160+static inline void eytzinger0_do_swap(void *base, size_t n, size_t size,161161+ swap_r_func_t swap_func, const void *priv,162162+ size_t l, size_t r)163163+{164164+ do_swap(base + inorder_to_eytzinger0(l, n) * size,165165+ base + inorder_to_eytzinger0(r, n) * size,166166+ size, swap_func, priv);167167+}168168+169169+void eytzinger0_sort_r(void *base, size_t n, size_t size,170170+ cmp_r_func_t cmp_func,171171+ swap_r_func_t swap_func,172172+ const void *priv)173173+{174174+ int i, c, r;175175+176176+ /* called from 'sort' without swap function, let's pick the default */177177+ if (swap_func == SWAP_WRAPPER && !((struct wrapper *)priv)->swap)178178+ swap_func = NULL;179179+180180+ if (!swap_func) {181181+ if (is_aligned(base, size, 8))182182+ swap_func = SWAP_WORDS_64;183183+ else if (is_aligned(base, size, 4))184184+ swap_func = SWAP_WORDS_32;185185+ else186186+ swap_func = SWAP_BYTES;187187+ }188188+189189+ /* heapify */190190+ for (i = n / 2 - 1; i >= 0; --i) {191191+ for (r = i; r * 2 + 1 < n; r = c) {192192+ c = r * 2 + 1;193193+194194+ if (c + 1 < n &&195195+ eytzinger0_do_cmp(base, n, size, cmp_func, priv, c, c + 1) < 0)196196+ c++;197197+198198+ if 
(eytzinger0_do_cmp(base, n, size, cmp_func, priv, r, c) >= 0)199199+ break;200200+201201+ eytzinger0_do_swap(base, n, size, swap_func, priv, r, c);202202+ }203203+ }204204+205205+ /* sort */206206+ for (i = n - 1; i > 0; --i) {207207+ eytzinger0_do_swap(base, n, size, swap_func, priv, 0, i);208208+209209+ for (r = 0; r * 2 + 1 < i; r = c) {210210+ c = r * 2 + 1;211211+212212+ if (c + 1 < i &&213213+ eytzinger0_do_cmp(base, n, size, cmp_func, priv, c, c + 1) < 0)214214+ c++;215215+216216+ if (eytzinger0_do_cmp(base, n, size, cmp_func, priv, r, c) >= 0)217217+ break;218218+219219+ eytzinger0_do_swap(base, n, size, swap_func, priv, r, c);220220+ }221221+ }222222+}223223+224224+void eytzinger0_sort(void *base, size_t n, size_t size,225225+ cmp_func_t cmp_func,226226+ swap_func_t swap_func)227227+{228228+ struct wrapper w = {229229+ .cmp = cmp_func,230230+ .swap = swap_func,231231+ };232232+233233+ return eytzinger0_sort_r(base, n, size, _CMP_WRAPPER, SWAP_WRAPPER, &w);234234+}
+37-26
fs/bcachefs/eytzinger.h
···55#include <linux/bitops.h>66#include <linux/log2.h>7788-#include "util.h"88+#ifdef EYTZINGER_DEBUG99+#define EYTZINGER_BUG_ON(cond) BUG_ON(cond)1010+#else1111+#define EYTZINGER_BUG_ON(cond)1212+#endif9131014/*1115 * Traversal for trees in eytzinger layout - a full binary tree layed out in an1212- * array1313- */1414-1515-/*1616- * One based indexing version:1616+ * array.1717 *1818- * With one based indexing each level of the tree starts at a power of two -1919- * good for cacheline alignment:1818+ * Consider using an eytzinger tree any time you would otherwise be doing binary1919+ * search over an array. Binary search is a worst case scenario for branch2020+ * prediction and prefetching, but in an eytzinger tree every node's children2121+ * are adjacent in memory, thus we can prefetch children before knowing the2222+ * result of the comparison, assuming multiple nodes fit on a cacheline.2323+ *2424+ * Two variants are provided, for one based indexing and zero based indexing.2525+ *2626+ * Zero based indexing is more convenient, but one based indexing has better2727+ * alignment and thus better performance because each new level of the tree2828+ * starts at a power of two, and thus if element 0 was cacheline aligned, each2929+ * new level will be as well.2030 */21312232static inline unsigned eytzinger1_child(unsigned i, unsigned child)2333{2424- EBUG_ON(child > 1);3434+ EYTZINGER_BUG_ON(child > 1);25352636 return (i << 1) + child;2737}···68586959static inline unsigned eytzinger1_next(unsigned i, unsigned size)7060{7171- EBUG_ON(i > size);6161+ EYTZINGER_BUG_ON(i > size);72627363 if (eytzinger1_right_child(i) <= size) {7464 i = eytzinger1_right_child(i);···84748575static inline unsigned eytzinger1_prev(unsigned i, unsigned size)8676{8787- EBUG_ON(i > size);7777+ EYTZINGER_BUG_ON(i > size);88788979 if (eytzinger1_left_child(i) <= size) {9080 i = eytzinger1_left_child(i) + 1;···111101 unsigned shift = __fls(size) - b;112102 int s;113103114114- EBUG_ON(!i || i > 
size);104104+ EYTZINGER_BUG_ON(!i || i > size);115105116106 i ^= 1U << b;117107 i <<= 1;···136126 unsigned shift;137127 int s;138128139139- EBUG_ON(!i || i > size);129129+ EYTZINGER_BUG_ON(!i || i > size);140130141131 /*142132 * sign bit trick:···174164175165static inline unsigned eytzinger0_child(unsigned i, unsigned child)176166{177177- EBUG_ON(child > 1);167167+ EYTZINGER_BUG_ON(child > 1);178168179169 return (i << 1) + 1 + child;180170}···241231 (_i) != -1; \242232 (_i) = eytzinger0_next((_i), (_size)))243233244244-typedef int (*eytzinger_cmp_fn)(const void *l, const void *r, size_t size);245245-246234/* return greatest node <= @search, or -1 if not found */247235static inline ssize_t eytzinger0_find_le(void *base, size_t nr, size_t size,248248- eytzinger_cmp_fn cmp, const void *search)236236+ cmp_func_t cmp, const void *search)249237{250238 unsigned i, n = 0;251239···252244253245 do {254246 i = n;255255- n = eytzinger0_child(i, cmp(search, base + i * size, size) >= 0);247247+ n = eytzinger0_child(i, cmp(base + i * size, search) <= 0);256248 } while (n < nr);257249258250 if (n & 1) {259251 /* @i was greater than @search, return previous node: */260260-261261- if (i == eytzinger0_first(nr))262262- return -1;263263-264252 return eytzinger0_prev(i, nr);265253 } else {266254 return i;267255 }256256+}257257+258258+static inline ssize_t eytzinger0_find_gt(void *base, size_t nr, size_t size,259259+ cmp_func_t cmp, const void *search)260260+{261261+ ssize_t idx = eytzinger0_find_le(base, nr, size, cmp, search);262262+ return eytzinger0_next(idx, size);268263}269264270265#define eytzinger0_find(base, nr, size, _cmp, search) \···280269 int _res; \281270 \282271 while (_i < _nr && \283283- (_res = _cmp(_search, _base + _i * _size, _size))) \272272+ (_res = _cmp(_search, _base + _i * _size))) \284273 _i = eytzinger0_child(_i, _res > 0); \285274 _i; \286275})287276288288-void eytzinger0_sort(void *, size_t, size_t,289289- int (*cmp_func)(const void *, const void *, 
size_t),290290- void (*swap_func)(void *, void *, size_t));277277+void eytzinger0_sort_r(void *, size_t, size_t,278278+ cmp_r_func_t, swap_r_func_t, const void *);279279+void eytzinger0_sort(void *, size_t, size_t, cmp_func_t, swap_func_t);291280292281#endif /* _EYTZINGER_H */
···1558155815591559 for (i = 0; i < sbi->s_ndevs; i++) {15601560 if (i > 0)15611561- fput(FDEV(i).bdev_file);15611561+ bdev_fput(FDEV(i).bdev_file);15621562#ifdef CONFIG_BLK_DEV_ZONED15631563 kvfree(FDEV(i).blkz_seq);15641564#endif
···442442 /* set fid protocol-specific info */443443 void (*set_fid)(struct cifsFileInfo *, struct cifs_fid *, __u32);444444 /* close a file */445445- void (*close)(const unsigned int, struct cifs_tcon *,445445+ int (*close)(const unsigned int, struct cifs_tcon *,446446 struct cifs_fid *);447447 /* close a file, returning file attributes and timestamps */448448- void (*close_getattr)(const unsigned int xid, struct cifs_tcon *tcon,448448+ int (*close_getattr)(const unsigned int xid, struct cifs_tcon *tcon,449449 struct cifsFileInfo *pfile_info);450450 /* send a flush request to the server */451451 int (*flush)(const unsigned int, struct cifs_tcon *, struct cifs_fid *);···12811281 struct cached_fids *cfids;12821282 /* BB add field for back pointer to sb struct(s)? */12831283#ifdef CONFIG_CIFS_DFS_UPCALL12841284- struct list_head dfs_ses_list;12851284 struct delayed_work dfs_cache_work;12861285#endif12871286 struct delayed_work query_interfaces; /* query interfaces workqueue job */···14391440 bool swapfile:1;14401441 bool oplock_break_cancelled:1;14411442 bool status_file_deleted:1; /* file has been deleted */14431443+ bool offload:1; /* offload final part of _put to a wq */14421444 unsigned int oplock_epoch; /* epoch from the lease break */14431445 __u32 oplock_level; /* oplock/lease level from the lease break */14441446 int count;···14481448 struct cifs_search_info srch_inf;14491449 struct work_struct oplock_break; /* work for oplock breaks */14501450 struct work_struct put; /* work for the final part of _put */14511451+ struct work_struct serverclose; /* work for serverclose */14511452 struct delayed_work deferred;14521453 bool deferred_close_scheduled; /* Flag to indicate close is scheduled */14531454 char *symlink_target;···18051804 struct TCP_Server_Info *server;18061805 struct cifs_ses *ses;18071806 struct cifs_tcon *tcon;18081808- struct list_head dfs_ses_list;18091807};1810180818111809static inline void __free_dfs_info_param(struct dfs_info3_param 
*param)···21052105extern struct workqueue_struct *fileinfo_put_wq;21062106extern struct workqueue_struct *cifsoplockd_wq;21072107extern struct workqueue_struct *deferredclose_wq;21082108+extern struct workqueue_struct *serverclose_wq;21082109extern __u32 cifs_lock_secret;2109211021102111extern mempool_t *cifs_sm_req_poolp;···23242323 struct smb2_file_link_info link_info;23252324 struct kvec ea_iov;23262325};23262326+23272327+static inline bool cifs_ses_exiting(struct cifs_ses *ses)23282328+{23292329+ bool ret;23302330+23312331+ spin_lock(&ses->ses_lock);23322332+ ret = ses->ses_status == SES_EXITING;23332333+ spin_unlock(&ses->ses_lock);23342334+ return ret;23352335+}2327233623282337#endif /* _CIFS_GLOB_H */
+10-10
fs/smb/client/cifsproto.h
···725725void cifs_put_tcon_super(struct super_block *sb);726726int cifs_wait_for_server_reconnect(struct TCP_Server_Info *server, bool retry);727727728728-/* Put references of @ses and @ses->dfs_root_ses */728728+/* Put references of @ses and its children */729729static inline void cifs_put_smb_ses(struct cifs_ses *ses)730730{731731- struct cifs_ses *rses = ses->dfs_root_ses;731731+ struct cifs_ses *next;732732733733- __cifs_put_smb_ses(ses);734734- if (rses)735735- __cifs_put_smb_ses(rses);733733+ do {734734+ next = ses->dfs_root_ses;735735+ __cifs_put_smb_ses(ses);736736+ } while ((ses = next));736737}737738738738-/* Get an active reference of @ses and @ses->dfs_root_ses.739739+/* Get an active reference of @ses and its children.739740 *740741 * NOTE: make sure to call this function when incrementing reference count of741742 * @ses to ensure that any DFS root session attached to it (@ses->dfs_root_ses)742743 * will also get its reference count incremented.743744 *744744- * cifs_put_smb_ses() will put both references, so call it when you're done.745745+ * cifs_put_smb_ses() will put all references, so call it when you're done.745746 */746747static inline void cifs_smb_ses_inc_refcount(struct cifs_ses *ses)747748{748749 lockdep_assert_held(&cifs_tcp_ses_lock);749750750750- ses->ses_count++;751751- if (ses->dfs_root_ses)752752- ses->dfs_root_ses->ses_count++;751751+ for (; ses; ses = ses->dfs_root_ses)752752+ ses->ses_count++;753753}754754755755static inline bool dfs_src_pathname_equal(const char *s1, const char *s2)
+2-4
fs/smb/client/cifssmb.c
···58545854 parm_data->list.EA_flags = 0;58555855 /* we checked above that name len is less than 255 */58565856 parm_data->list.name_len = (__u8)name_len;58575857- /* EA names are always ASCII */58585858- if (ea_name)58595859- strncpy(parm_data->list.name, ea_name, name_len);58605860- parm_data->list.name[name_len] = '\0';58575857+ /* EA names are always ASCII and NUL-terminated */58585858+ strscpy(parm_data->list.name, ea_name ?: "", name_len + 1);58615859 parm_data->list.value_len = cpu_to_le16(ea_value_len);58625860 /* caller ensures that ea_value_len is less than 64K but58635861 we need to ensure that it fits within the smb */
+100-57
fs/smb/client/connect.c
···175175176176 spin_lock(&cifs_tcp_ses_lock);177177 list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {178178+ if (cifs_ses_exiting(ses))179179+ continue;178180 spin_lock(&ses->chan_lock);179181 for (i = 0; i < ses->chan_count; i++) {180182 if (!ses->chans[i].server)···234232235233 spin_lock(&cifs_tcp_ses_lock);236234 list_for_each_entry_safe(ses, nses, &pserver->smb_ses_list, smb_ses_list) {237237- /* check if iface is still active */235235+ spin_lock(&ses->ses_lock);236236+ if (ses->ses_status == SES_EXITING) {237237+ spin_unlock(&ses->ses_lock);238238+ continue;239239+ }240240+ spin_unlock(&ses->ses_lock);241241+238242 spin_lock(&ses->chan_lock);239243 if (cifs_ses_get_chan_index(ses, server) ==240244 CIFS_INVAL_CHAN_INDEX) {···18681860 ctx->sectype != ses->sectype)18691861 return 0;1870186218631863+ if (ctx->dfs_root_ses != ses->dfs_root_ses)18641864+ return 0;18651865+18711866 /*18721867 * If an existing session is limited to less channels than18731868 * requested, it should not be reused···19741963 return rc;19751964}1976196519771977-/**19781978- * cifs_free_ipc - helper to release the session IPC tcon19791979- * @ses: smb session to unmount the IPC from19801980- *19811981- * Needs to be called everytime a session is destroyed.19821982- *19831983- * On session close, the IPC is closed and the server must release all tcons of the session.19841984- * No need to send a tree disconnect here.19851985- *19861986- * Besides, it will make the server to not close durable and resilient files on session close, as19871987- * specified in MS-SMB2 3.3.5.6 Receiving an SMB2 LOGOFF Request.19881988- */19891989-static int19901990-cifs_free_ipc(struct cifs_ses *ses)19911991-{19921992- struct cifs_tcon *tcon = ses->tcon_ipc;19931993-19941994- if (tcon == NULL)19951995- return 0;19961996-19971997- tconInfoFree(tcon);19981998- ses->tcon_ipc = NULL;19991999- return 0;20002000-}20012001-20021966static struct cifs_ses *20031967cifs_find_smb_ses(struct TCP_Server_Info 
*server, struct smb3_fs_context *ctx)20041968{···20052019void __cifs_put_smb_ses(struct cifs_ses *ses)20062020{20072021 struct TCP_Server_Info *server = ses->server;20222022+ struct cifs_tcon *tcon;20082023 unsigned int xid;20092024 size_t i;20252025+ bool do_logoff;20102026 int rc;2011202720122012- spin_lock(&ses->ses_lock);20132013- if (ses->ses_status == SES_EXITING) {20142014- spin_unlock(&ses->ses_lock);20152015- return;20162016- }20172017- spin_unlock(&ses->ses_lock);20182018-20192019- cifs_dbg(FYI, "%s: ses_count=%d\n", __func__, ses->ses_count);20202020- cifs_dbg(FYI,20212021- "%s: ses ipc: %s\n", __func__, ses->tcon_ipc ? ses->tcon_ipc->tree_name : "NONE");20222022-20232028 spin_lock(&cifs_tcp_ses_lock);20242024- if (--ses->ses_count > 0) {20292029+ spin_lock(&ses->ses_lock);20302030+ cifs_dbg(FYI, "%s: id=0x%llx ses_count=%d ses_status=%u ipc=%s\n",20312031+ __func__, ses->Suid, ses->ses_count, ses->ses_status,20322032+ ses->tcon_ipc ? ses->tcon_ipc->tree_name : "none");20332033+ if (ses->ses_status == SES_EXITING || --ses->ses_count > 0) {20342034+ spin_unlock(&ses->ses_lock);20252035 spin_unlock(&cifs_tcp_ses_lock);20262036 return;20272037 }20282028- spin_lock(&ses->ses_lock);20292029- if (ses->ses_status == SES_GOOD)20302030- ses->ses_status = SES_EXITING;20312031- spin_unlock(&ses->ses_lock);20322032- spin_unlock(&cifs_tcp_ses_lock);20332033-20342038 /* ses_count can never go negative */20352039 WARN_ON(ses->ses_count < 0);2036204020372037- spin_lock(&ses->ses_lock);20382038- if (ses->ses_status == SES_EXITING && server->ops->logoff) {20392039- spin_unlock(&ses->ses_lock);20402040- cifs_free_ipc(ses);20412041+ spin_lock(&ses->chan_lock);20422042+ cifs_chan_clear_need_reconnect(ses, server);20432043+ spin_unlock(&ses->chan_lock);20442044+20452045+ do_logoff = ses->ses_status == SES_GOOD && server->ops->logoff;20462046+ ses->ses_status = SES_EXITING;20472047+ tcon = ses->tcon_ipc;20482048+ ses->tcon_ipc = NULL;20492049+ 
spin_unlock(&ses->ses_lock);20502050+ spin_unlock(&cifs_tcp_ses_lock);20512051+20522052+ /*20532053+ * On session close, the IPC is closed and the server must release all20542054+ * tcons of the session. No need to send a tree disconnect here.20552055+ *20562056+ * Besides, it will make the server to not close durable and resilient20572057+ * files on session close, as specified in MS-SMB2 3.3.5.6 Receiving an20582058+ * SMB2 LOGOFF Request.20592059+ */20602060+ tconInfoFree(tcon);20612061+ if (do_logoff) {20412062 xid = get_xid();20422063 rc = server->ops->logoff(xid, ses);20432064 if (rc)20442065 cifs_server_dbg(VFS, "%s: Session Logoff failure rc=%d\n",20452066 __func__, rc);20462067 _free_xid(xid);20472047- } else {20482048- spin_unlock(&ses->ses_lock);20492049- cifs_free_ipc(ses);20502068 }2051206920522070 spin_lock(&cifs_tcp_ses_lock);···23632373 * need to lock before changing something in the session.23642374 */23652375 spin_lock(&cifs_tcp_ses_lock);23762376+ if (ctx->dfs_root_ses)23772377+ cifs_smb_ses_inc_refcount(ctx->dfs_root_ses);23662378 ses->dfs_root_ses = ctx->dfs_root_ses;23672367- if (ses->dfs_root_ses)23682368- ses->dfs_root_ses->ses_count++;23692379 list_add(&ses->smb_ses_list, &server->smb_ses_list);23702380 spin_unlock(&cifs_tcp_ses_lock);23712381···33163326 cifs_put_smb_ses(mnt_ctx->ses);33173327 else if (mnt_ctx->server)33183328 cifs_put_tcp_session(mnt_ctx->server, 0);33293329+ mnt_ctx->ses = NULL;33303330+ mnt_ctx->tcon = NULL;33313331+ mnt_ctx->server = NULL;33193332 mnt_ctx->cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_POSIX_PATHS;33203333 free_xid(mnt_ctx->xid);33213334}···35973604 bool isdfs;35983605 int rc;3599360636003600- INIT_LIST_HEAD(&mnt_ctx.dfs_ses_list);36013601-36023607 rc = dfs_mount_share(&mnt_ctx, &isdfs);36033608 if (rc)36043609 goto error;···36273636 return rc;3628363736293638error:36303630- dfs_put_root_smb_sessions(&mnt_ctx.dfs_ses_list);36313639 cifs_mount_put_conns(&mnt_ctx);36323640 return rc;36333641}···36413651 goto 
error;3642365236433653 rc = cifs_mount_get_tcon(&mnt_ctx);36543654+ if (!rc) {36553655+ /*36563656+ * Prevent superblock from being created with any missing36573657+ * connections.36583658+ */36593659+ if (WARN_ON(!mnt_ctx.server))36603660+ rc = -EHOSTDOWN;36613661+ else if (WARN_ON(!mnt_ctx.ses))36623662+ rc = -EACCES;36633663+ else if (WARN_ON(!mnt_ctx.tcon))36643664+ rc = -ENOENT;36653665+ }36443666 if (rc)36453667 goto error;36463668···39903988}3991398939923990static struct cifs_tcon *39933993-cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)39913991+__cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)39943992{39953993 int rc;39963994 struct cifs_tcon *master_tcon = cifs_sb_master_tcon(cifs_sb);39973995 struct cifs_ses *ses;39983996 struct cifs_tcon *tcon = NULL;39993997 struct smb3_fs_context *ctx;39983998+ char *origin_fullpath = NULL;4000399940014000 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);40024001 if (ctx == NULL)···40214018 ctx->sign = master_tcon->ses->sign;40224019 ctx->seal = master_tcon->seal;40234020 ctx->witness = master_tcon->use_witness;40214021+ ctx->dfs_root_ses = master_tcon->ses->dfs_root_ses;4024402240254023 rc = cifs_set_vol_auth(ctx, master_tcon->ses);40264024 if (rc) {···40414037 goto out;40424038 }4043403940404040+#ifdef CONFIG_CIFS_DFS_UPCALL40414041+ spin_lock(&master_tcon->tc_lock);40424042+ if (master_tcon->origin_fullpath) {40434043+ spin_unlock(&master_tcon->tc_lock);40444044+ origin_fullpath = dfs_get_path(cifs_sb, cifs_sb->ctx->source);40454045+ if (IS_ERR(origin_fullpath)) {40464046+ tcon = ERR_CAST(origin_fullpath);40474047+ origin_fullpath = NULL;40484048+ cifs_put_smb_ses(ses);40494049+ goto out;40504050+ }40514051+ } else {40524052+ spin_unlock(&master_tcon->tc_lock);40534053+ }40544054+#endif40554055+40444056 tcon = cifs_get_tcon(ses, ctx);40454057 if (IS_ERR(tcon)) {40464058 cifs_put_smb_ses(ses);40474059 goto out;40484060 }40614061+40624062+#ifdef CONFIG_CIFS_DFS_UPCALL40634063+ if 
(origin_fullpath) {40644064+ spin_lock(&tcon->tc_lock);40654065+ tcon->origin_fullpath = origin_fullpath;40664066+ spin_unlock(&tcon->tc_lock);40674067+ origin_fullpath = NULL;40684068+ queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work,40694069+ dfs_cache_get_ttl() * HZ);40704070+ }40714071+#endif4049407240504073#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY40514074 if (cap_unix(ses))···40824051out:40834052 kfree(ctx->username);40844053 kfree_sensitive(ctx->password);40544054+ kfree(origin_fullpath);40854055 kfree(ctx);4086405640874057 return tcon;40584058+}40594059+40604060+static struct cifs_tcon *40614061+cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)40624062+{40634063+ struct cifs_tcon *ret;40644064+40654065+ cifs_mount_lock();40664066+ ret = __cifs_construct_tcon(cifs_sb, fsuid);40674067+ cifs_mount_unlock();40684068+ return ret;40884069}4089407040904071struct cifs_tcon *
+24-27
fs/smb/client/dfs.c
···6666}67676868/*6969- * Track individual DFS referral servers used by new DFS mount.7070- *7171- * On success, their lifetime will be shared by final tcon (dfs_ses_list).7272- * Otherwise, they will be put by dfs_put_root_smb_sessions() in cifs_mount().6969+ * Get an active reference of @ses so that next call to cifs_put_tcon() won't7070+ * release it as any new DFS referrals must go through its IPC tcon.7371 */7474-static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)7272+static void add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)7573{7674 struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;7777- struct dfs_root_ses *root_ses;7875 struct cifs_ses *ses = mnt_ctx->ses;79768077 if (ses) {8181- root_ses = kmalloc(sizeof(*root_ses), GFP_KERNEL);8282- if (!root_ses)8383- return -ENOMEM;8484-8585- INIT_LIST_HEAD(&root_ses->list);8686-8778 spin_lock(&cifs_tcp_ses_lock);8879 cifs_smb_ses_inc_refcount(ses);8980 spin_unlock(&cifs_tcp_ses_lock);9090- root_ses->ses = ses;9191- list_add_tail(&root_ses->list, &mnt_ctx->dfs_ses_list);9281 }9393- /* Select new DFS referral server so that new referrals go through it */9482 ctx->dfs_root_ses = ses;9595- return 0;9683}97849885static inline int parse_dfs_target(struct smb3_fs_context *ctx,···172185 continue;173186 }174187175175- if (is_refsrv) {176176- rc = add_root_smb_session(mnt_ctx);177177- if (rc)178178- goto out;179179- }188188+ if (is_refsrv)189189+ add_root_smb_session(mnt_ctx);180190181191 rc = ref_walk_advance(rw);182192 if (!rc) {···216232 struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;217233 struct cifs_tcon *tcon;218234 char *origin_fullpath;235235+ bool new_tcon = true;219236 int rc;220237221238 origin_fullpath = dfs_get_path(cifs_sb, ctx->source);···224239 return PTR_ERR(origin_fullpath);225240226241 rc = dfs_referral_walk(mnt_ctx);242242+ if (!rc) {243243+ /*244244+ * Prevent superblock from being created with any missing245245+ * connections.246246+ */247247+ if (WARN_ON(!mnt_ctx->server))248248+ rc = 
-EHOSTDOWN;249249+ else if (WARN_ON(!mnt_ctx->ses))250250+ rc = -EACCES;251251+ else if (WARN_ON(!mnt_ctx->tcon))252252+ rc = -ENOENT;253253+ }227254 if (rc)228255 goto out;229256···244247 if (!tcon->origin_fullpath) {245248 tcon->origin_fullpath = origin_fullpath;246249 origin_fullpath = NULL;250250+ } else {251251+ new_tcon = false;247252 }248253 spin_unlock(&tcon->tc_lock);249254250250- if (list_empty(&tcon->dfs_ses_list)) {251251- list_replace_init(&mnt_ctx->dfs_ses_list, &tcon->dfs_ses_list);255255+ if (new_tcon) {252256 queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work,253257 dfs_cache_get_ttl() * HZ);254254- } else {255255- dfs_put_root_smb_sessions(&mnt_ctx->dfs_ses_list);256258 }257259258260out:···294298 if (rc)295299 return rc;296300297297- ctx->dfs_root_ses = mnt_ctx->ses;298301 /*299302 * If called with 'nodfs' mount option, then skip DFS resolving. Otherwise unconditionally300303 * try to get an DFS referral (even cached) to determine whether it is an DFS mount.···319324320325 *isdfs = true;321326 add_root_smb_session(mnt_ctx);322322- return __dfs_mount_share(mnt_ctx);327327+ rc = __dfs_mount_share(mnt_ctx);328328+ dfs_put_root_smb_sessions(mnt_ctx);329329+ return rc;323330}324331325332/* Update dfs referral path of superblock */
+21-12
fs/smb/client/dfs.h
···77#define _CIFS_DFS_H8899#include "cifsglob.h"1010+#include "cifsproto.h"1011#include "fs_context.h"1212+#include "dfs_cache.h"1113#include "cifs_unicode.h"1214#include <linux/namei.h>1315···116114 ref_walk_tit(rw));117115}118116119119-struct dfs_root_ses {120120- struct list_head list;121121- struct cifs_ses *ses;122122-};123123-124117int dfs_parse_target_referral(const char *full_path, const struct dfs_info3_param *ref,125118 struct smb3_fs_context *ctx);126119int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs);···130133{131134 struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;132135 struct cifs_sb_info *cifs_sb = mnt_ctx->cifs_sb;136136+ struct cifs_ses *rses = ctx->dfs_root_ses ?: mnt_ctx->ses;133137134134- return dfs_cache_find(mnt_ctx->xid, ctx->dfs_root_ses, cifs_sb->local_nls,138138+ return dfs_cache_find(mnt_ctx->xid, rses, cifs_sb->local_nls,135139 cifs_remap(cifs_sb), path, ref, tl);136140}137141138138-static inline void dfs_put_root_smb_sessions(struct list_head *head)142142+/*143143+ * cifs_get_smb_ses() already guarantees an active reference of144144+ * @ses->dfs_root_ses when a new session is created, so we need to put extra145145+ * references of all DFS root sessions that were used across the mount process146146+ * in dfs_mount_share().147147+ */148148+static inline void dfs_put_root_smb_sessions(struct cifs_mount_ctx *mnt_ctx)139149{140140- struct dfs_root_ses *root, *tmp;150150+ const struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;151151+ struct cifs_ses *ses = ctx->dfs_root_ses;152152+ struct cifs_ses *cur;141153142142- list_for_each_entry_safe(root, tmp, head, list) {143143- list_del_init(&root->list);144144- cifs_put_smb_ses(root->ses);145145- kfree(root);154154+ if (!ses)155155+ return;156156+157157+ for (cur = ses; cur; cur = cur->dfs_root_ses) {158158+ if (cur->dfs_root_ses)159159+ cifs_put_smb_ses(cur->dfs_root_ses);146160 }161161+ cifs_put_smb_ses(ses);147162}148163149164#endif /* _CIFS_DFS_H */
+24-29
fs/smb/client/dfs_cache.c
···11721172 return ret;11731173}1174117411751175-/* Refresh dfs referral of tcon and mark it for reconnect if needed */11761176-static int __refresh_tcon(const char *path, struct cifs_ses *ses, bool force_refresh)11751175+/* Refresh dfs referral of @ses and mark it for reconnect if needed */11761176+static void __refresh_ses_referral(struct cifs_ses *ses, bool force_refresh)11771177{11781178 struct TCP_Server_Info *server = ses->server;11791179 DFS_CACHE_TGT_LIST(old_tl);···11811181 bool needs_refresh = false;11821182 struct cache_entry *ce;11831183 unsigned int xid;11841184+ char *path = NULL;11841185 int rc = 0;1185118611861187 xid = get_xid();11881188+11891189+ mutex_lock(&server->refpath_lock);11901190+ if (server->leaf_fullpath) {11911191+ path = kstrdup(server->leaf_fullpath + 1, GFP_ATOMIC);11921192+ if (!path)11931193+ rc = -ENOMEM;11941194+ }11951195+ mutex_unlock(&server->refpath_lock);11961196+ if (!path)11971197+ goto out;1187119811881199 down_read(&htable_rw_lock);11891200 ce = lookup_cache_entry(path);···12291218 free_xid(xid);12301219 dfs_cache_free_tgts(&old_tl);12311220 dfs_cache_free_tgts(&new_tl);12321232- return rc;12211221+ kfree(path);12331222}1234122312351235-static int refresh_tcon(struct cifs_tcon *tcon, bool force_refresh)12241224+static inline void refresh_ses_referral(struct cifs_ses *ses)12361225{12371237- struct TCP_Server_Info *server = tcon->ses->server;12381238- struct cifs_ses *ses = tcon->ses;12261226+ __refresh_ses_referral(ses, false);12271227+}1239122812401240- mutex_lock(&server->refpath_lock);12411241- if (server->leaf_fullpath)12421242- __refresh_tcon(server->leaf_fullpath + 1, ses, force_refresh);12431243- mutex_unlock(&server->refpath_lock);12441244- return 0;12291229+static inline void force_refresh_ses_referral(struct cifs_ses *ses)12301230+{12311231+ __refresh_ses_referral(ses, true);12451232}1246123312471234/**···12801271 */12811272 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;1282127312831283- return 
refresh_tcon(tcon, true);12741274+ force_refresh_ses_referral(tcon->ses);12751275+ return 0;12841276}1285127712861278/* Refresh all DFS referrals related to DFS tcon */12871279void dfs_cache_refresh(struct work_struct *work)12881280{12891289- struct TCP_Server_Info *server;12901290- struct dfs_root_ses *rses;12911281 struct cifs_tcon *tcon;12921282 struct cifs_ses *ses;1293128312941284 tcon = container_of(work, struct cifs_tcon, dfs_cache_work.work);12951295- ses = tcon->ses;12961296- server = ses->server;1297128512981298- mutex_lock(&server->refpath_lock);12991299- if (server->leaf_fullpath)13001300- __refresh_tcon(server->leaf_fullpath + 1, ses, false);13011301- mutex_unlock(&server->refpath_lock);13021302-13031303- list_for_each_entry(rses, &tcon->dfs_ses_list, list) {13041304- ses = rses->ses;13051305- server = ses->server;13061306- mutex_lock(&server->refpath_lock);13071307- if (server->leaf_fullpath)13081308- __refresh_tcon(server->leaf_fullpath + 1, ses, false);13091309- mutex_unlock(&server->refpath_lock);13101310- }12861286+ for (ses = tcon->ses; ses; ses = ses->dfs_root_ses)12871287+ refresh_ses_referral(ses);1311128813121289 queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work,13131290 atomic_read(&dfs_cache_ttl) * HZ);
+15
fs/smb/client/dir.c
···189189 int disposition;190190 struct TCP_Server_Info *server = tcon->ses->server;191191 struct cifs_open_parms oparms;192192+ int rdwr_for_fscache = 0;192193193194 *oplock = 0;194195 if (tcon->ses->server->oplocks)···200199 free_dentry_path(page);201200 return PTR_ERR(full_path);202201 }202202+203203+ /* If we're caching, we need to be able to fill in around partial writes. */204204+ if (cifs_fscache_enabled(inode) && (oflags & O_ACCMODE) == O_WRONLY)205205+ rdwr_for_fscache = 1;203206204207#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY205208 if (tcon->unix_ext && cap_unix(tcon->ses) && !tcon->broken_posix_open &&···281276 desired_access |= GENERIC_READ; /* is this too little? */282277 if (OPEN_FMODE(oflags) & FMODE_WRITE)283278 desired_access |= GENERIC_WRITE;279279+ if (rdwr_for_fscache == 1)280280+ desired_access |= GENERIC_READ;284281285282 disposition = FILE_OVERWRITE_IF;286283 if ((oflags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL))···311304 if (!tcon->unix_ext && (mode & S_IWUGO) == 0)312305 create_options |= CREATE_OPTION_READONLY;313306307307+retry_open:314308 oparms = (struct cifs_open_parms) {315309 .tcon = tcon,316310 .cifs_sb = cifs_sb,···325317 rc = server->ops->open(xid, &oparms, oplock, buf);326318 if (rc) {327319 cifs_dbg(FYI, "cifs_create returned 0x%x\n", rc);320320+ if (rc == -EACCES && rdwr_for_fscache == 1) {321321+ desired_access &= ~GENERIC_READ;322322+ rdwr_for_fscache = 2;323323+ goto retry_open;324324+ }328325 goto out;329326 }327327+ if (rdwr_for_fscache == 2)328328+ cifs_invalidate_cache(inode, FSCACHE_INVAL_DIO_WRITE);330329331330#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY332331 /*
+95-16
fs/smb/client/file.c
···206206 */207207}208208209209-static inline int cifs_convert_flags(unsigned int flags)209209+static inline int cifs_convert_flags(unsigned int flags, int rdwr_for_fscache)210210{211211 if ((flags & O_ACCMODE) == O_RDONLY)212212 return GENERIC_READ;213213 else if ((flags & O_ACCMODE) == O_WRONLY)214214- return GENERIC_WRITE;214214+ return rdwr_for_fscache == 1 ? (GENERIC_READ | GENERIC_WRITE) : GENERIC_WRITE;215215 else if ((flags & O_ACCMODE) == O_RDWR) {216216 /* GENERIC_ALL is too much permission to request217217 can cause unnecessary access denied on create */···348348 int create_options = CREATE_NOT_DIR;349349 struct TCP_Server_Info *server = tcon->ses->server;350350 struct cifs_open_parms oparms;351351+ int rdwr_for_fscache = 0;351352352353 if (!server->ops->open)353354 return -ENOSYS;354355355355- desired_access = cifs_convert_flags(f_flags);356356+ /* If we're caching, we need to be able to fill in around partial writes. */357357+ if (cifs_fscache_enabled(inode) && (f_flags & O_ACCMODE) == O_WRONLY)358358+ rdwr_for_fscache = 1;359359+360360+ desired_access = cifs_convert_flags(f_flags, rdwr_for_fscache);356361357362/*********************************************************************358363 * open flag mapping table:···394389 if (f_flags & O_DIRECT)395390 create_options |= CREATE_NO_BUFFER;396391392392+retry_open:397393 oparms = (struct cifs_open_parms) {398394 .tcon = tcon,399395 .cifs_sb = cifs_sb,···406400 };407401408402 rc = server->ops->open(xid, &oparms, oplock, buf);409409- if (rc)403403+ if (rc) {404404+ if (rc == -EACCES && rdwr_for_fscache == 1) {405405+ desired_access = cifs_convert_flags(f_flags, 0);406406+ rdwr_for_fscache = 2;407407+ goto retry_open;408408+ }410409 return rc;410410+ }411411+ if (rdwr_for_fscache == 2)412412+ cifs_invalidate_cache(inode, FSCACHE_INVAL_DIO_WRITE);411413412414 /* TODO: Add support for calling posix query info but with passing in fid */413415 if (tcon->unix_ext)···459445}460446461447static void 
cifsFileInfo_put_work(struct work_struct *work);448448+void serverclose_work(struct work_struct *work);462449463450struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,464451 struct tcon_link *tlink, __u32 oplock,···506491 cfile->tlink = cifs_get_tlink(tlink);507492 INIT_WORK(&cfile->oplock_break, cifs_oplock_break);508493 INIT_WORK(&cfile->put, cifsFileInfo_put_work);494494+ INIT_WORK(&cfile->serverclose, serverclose_work);509495 INIT_DELAYED_WORK(&cfile->deferred, smb2_deferred_work_close);510496 mutex_init(&cfile->fh_mutex);511497 spin_lock_init(&cfile->file_info_lock);···598582 cifsFileInfo_put_final(cifs_file);599583}600584585585+void serverclose_work(struct work_struct *work)586586+{587587+ struct cifsFileInfo *cifs_file = container_of(work,588588+ struct cifsFileInfo, serverclose);589589+590590+ struct cifs_tcon *tcon = tlink_tcon(cifs_file->tlink);591591+592592+ struct TCP_Server_Info *server = tcon->ses->server;593593+ int rc = 0;594594+ int retries = 0;595595+ int MAX_RETRIES = 4;596596+597597+ do {598598+ if (server->ops->close_getattr)599599+ rc = server->ops->close_getattr(0, tcon, cifs_file);600600+ else if (server->ops->close)601601+ rc = server->ops->close(0, tcon, &cifs_file->fid);602602+603603+ if (rc == -EBUSY || rc == -EAGAIN) {604604+ retries++;605605+ msleep(250);606606+ }607607+ } while ((rc == -EBUSY || rc == -EAGAIN) && (retries < MAX_RETRIES)608608+ );609609+610610+ if (retries == MAX_RETRIES)611611+ pr_warn("Serverclose failed %d times, giving up\n", MAX_RETRIES);612612+613613+ if (cifs_file->offload)614614+ queue_work(fileinfo_put_wq, &cifs_file->put);615615+ else616616+ cifsFileInfo_put_final(cifs_file);617617+}618618+601619/**602620 * cifsFileInfo_put - release a reference of file priv data603621 *···672622 struct cifs_fid fid = {};673623 struct cifs_pending_open open;674624 bool oplock_break_cancelled;625625+ bool serverclose_offloaded = false;675626676627 spin_lock(&tcon->open_file_lock);677628 
spin_lock(&cifsi->open_file_lock);678629 spin_lock(&cifs_file->file_info_lock);630630+631631+ cifs_file->offload = offload;679632 if (--cifs_file->count > 0) {680633 spin_unlock(&cifs_file->file_info_lock);681634 spin_unlock(&cifsi->open_file_lock);···720667 if (!tcon->need_reconnect && !cifs_file->invalidHandle) {721668 struct TCP_Server_Info *server = tcon->ses->server;722669 unsigned int xid;670670+ int rc = 0;723671724672 xid = get_xid();725673 if (server->ops->close_getattr)726726- server->ops->close_getattr(xid, tcon, cifs_file);674674+ rc = server->ops->close_getattr(xid, tcon, cifs_file);727675 else if (server->ops->close)728728- server->ops->close(xid, tcon, &cifs_file->fid);676676+ rc = server->ops->close(xid, tcon, &cifs_file->fid);729677 _free_xid(xid);678678+679679+ if (rc == -EBUSY || rc == -EAGAIN) {680680+ // Server close failed, hence offloading it as an async op681681+ queue_work(serverclose_wq, &cifs_file->serverclose);682682+ serverclose_offloaded = true;683683+ }730684 }731685732686 if (oplock_break_cancelled)···741681742682 cifs_del_pending_open(&open);743683744744- if (offload)745745- queue_work(fileinfo_put_wq, &cifs_file->put);746746- else747747- cifsFileInfo_put_final(cifs_file);684684+ // if serverclose has been offloaded to wq (on failure), it will685685+ // handle offloading put as well. 
If serverclose not offloaded,686686+ // we need to handle offloading put here.687687+ if (!serverclose_offloaded) {688688+ if (offload)689689+ queue_work(fileinfo_put_wq, &cifs_file->put);690690+ else691691+ cifsFileInfo_put_final(cifs_file);692692+ }748693}749694750695int cifs_open(struct inode *inode, struct file *file)···899834use_cache:900835 fscache_use_cookie(cifs_inode_cookie(file_inode(file)),901836 file->f_mode & FMODE_WRITE);902902- if (file->f_flags & O_DIRECT &&903903- (!((file->f_flags & O_ACCMODE) != O_RDONLY) ||904904- file->f_flags & O_APPEND))905905- cifs_invalidate_cache(file_inode(file),906906- FSCACHE_INVAL_DIO_WRITE);837837+ if (!(file->f_flags & O_DIRECT))838838+ goto out;839839+ if ((file->f_flags & (O_ACCMODE | O_APPEND)) == O_RDONLY)840840+ goto out;841841+ cifs_invalidate_cache(file_inode(file), FSCACHE_INVAL_DIO_WRITE);907842908843out:909844 free_dentry_path(page);···968903 int disposition = FILE_OPEN;969904 int create_options = CREATE_NOT_DIR;970905 struct cifs_open_parms oparms;906906+ int rdwr_for_fscache = 0;971907972908 xid = get_xid();973909 mutex_lock(&cfile->fh_mutex);···1032966 }1033967#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */103496810351035- desired_access = cifs_convert_flags(cfile->f_flags);969969+ /* If we're caching, we need to be able to fill in around partial writes. 
*/970970+ if (cifs_fscache_enabled(inode) && (cfile->f_flags & O_ACCMODE) == O_WRONLY)971971+ rdwr_for_fscache = 1;972972+973973+ desired_access = cifs_convert_flags(cfile->f_flags, rdwr_for_fscache);10369741037975 /* O_SYNC also has bit for O_DSYNC so following check picks up either */1038976 if (cfile->f_flags & O_SYNC)···1048978 if (server->ops->get_lease_key)1049979 server->ops->get_lease_key(inode, &cfile->fid);1050980981981+retry_open:1051982 oparms = (struct cifs_open_parms) {1052983 .tcon = tcon,1053984 .cifs_sb = cifs_sb,···10741003 /* indicate that we need to relock the file */10751004 oparms.reconnect = true;10761005 }10061006+ if (rc == -EACCES && rdwr_for_fscache == 1) {10071007+ desired_access = cifs_convert_flags(cfile->f_flags, 0);10081008+ rdwr_for_fscache = 2;10091009+ goto retry_open;10101010+ }1077101110781012 if (rc) {10791013 mutex_unlock(&cfile->fh_mutex);···10861010 cifs_dbg(FYI, "oplock: %d\n", oplock);10871011 goto reopen_error_exit;10881012 }10131013+10141014+ if (rdwr_for_fscache == 2)10151015+ cifs_invalidate_cache(inode, FSCACHE_INVAL_DIO_WRITE);1089101610901017#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY10911018reopen_success:
···247247 spin_lock(&cifs_tcp_ses_lock);248248 list_for_each_entry(server_it, &cifs_tcp_ses_list, tcp_ses_list) {249249 list_for_each_entry(ses_it, &server_it->smb_ses_list, smb_ses_list) {250250- if (ses_it->Suid == out.session_id) {250250+ spin_lock(&ses_it->ses_lock);251251+ if (ses_it->ses_status != SES_EXITING &&252252+ ses_it->Suid == out.session_id) {251253 ses = ses_it;252254 /*253255 * since we are using the session outside the crit···257255 * so increment its refcount258256 */259257 cifs_smb_ses_inc_refcount(ses);258258+ spin_unlock(&ses_it->ses_lock);260259 found = true;261260 goto search_end;262261 }262262+ spin_unlock(&ses_it->ses_lock);263263 }264264 }265265search_end:
+2-6
fs/smb/client/misc.c
···138138 atomic_set(&ret_buf->num_local_opens, 0);139139 atomic_set(&ret_buf->num_remote_opens, 0);140140 ret_buf->stats_from_time = ktime_get_real_seconds();141141-#ifdef CONFIG_CIFS_DFS_UPCALL142142- INIT_LIST_HEAD(&ret_buf->dfs_ses_list);143143-#endif144141145142 return ret_buf;146143}···153156 atomic_dec(&tconInfoAllocCount);154157 kfree(tcon->nativeFileSystem);155158 kfree_sensitive(tcon->password);156156-#ifdef CONFIG_CIFS_DFS_UPCALL157157- dfs_put_root_smb_sessions(&tcon->dfs_ses_list);158158-#endif159159 kfree(tcon->origin_fullpath);160160 kfree(tcon);161161}···481487 /* look up tcon based on tid & uid */482488 spin_lock(&cifs_tcp_ses_lock);483489 list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {490490+ if (cifs_ses_exiting(ses))491491+ continue;484492 list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {485493 if (tcon->tid != buf->Tid)486494 continue;
···622622 /* look up tcon based on tid & uid */623623 spin_lock(&cifs_tcp_ses_lock);624624 list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {625625+ if (cifs_ses_exiting(ses))626626+ continue;625627 list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {626628 spin_lock(&tcon->open_file_lock);627629 cifs_stats_inc(···699697 /* look up tcon based on tid & uid */700698 spin_lock(&cifs_tcp_ses_lock);701699 list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {700700+ if (cifs_ses_exiting(ses))701701+ continue;702702 list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {703703704704 spin_lock(&tcon->open_file_lock);
+8-5
fs/smb/client/smb2ops.c
···14121412 memcpy(cfile->fid.create_guid, fid->create_guid, 16);14131413}1414141414151415-static void14151415+static int14161416smb2_close_file(const unsigned int xid, struct cifs_tcon *tcon,14171417 struct cifs_fid *fid)14181418{14191419- SMB2_close(xid, tcon, fid->persistent_fid, fid->volatile_fid);14191419+ return SMB2_close(xid, tcon, fid->persistent_fid, fid->volatile_fid);14201420}1421142114221422-static void14221422+static int14231423smb2_close_getattr(const unsigned int xid, struct cifs_tcon *tcon,14241424 struct cifsFileInfo *cfile)14251425{···14301430 rc = __SMB2_close(xid, tcon, cfile->fid.persistent_fid,14311431 cfile->fid.volatile_fid, &file_inf);14321432 if (rc)14331433- return;14331433+ return rc;1434143414351435 inode = d_inode(cfile->dentry);14361436···1459145914601460 /* End of file and Attributes should not have to be updated on close */14611461 spin_unlock(&inode->i_lock);14621462+ return rc;14621463}1463146414641465static int···2481248024822481 spin_lock(&cifs_tcp_ses_lock);24832482 list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {24832483+ if (cifs_ses_exiting(ses))24842484+ continue;24842485 list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {24852486 if (tcon->tid == le32_to_cpu(shdr->Id.SyncId.TreeId)) {24862487 spin_lock(&tcon->tc_lock);···39163913 strcat(message, "W");39173914 }39183915 if (!new_oplock)39193919- strncpy(message, "None", sizeof(message));39163916+ strscpy(message, "None");3920391739213918 cinode->oplock = new_oplock;39223919 cifs_dbg(FYI, "%s Lease granted on inode %p\n", message,
···20302030 fs_put_dax(btp->bt_daxdev, btp->bt_mount);20312031 /* the main block device is closed by kill_block_super */20322032 if (btp->bt_bdev != btp->bt_mount->m_super->s_bdev)20332033- fput(btp->bt_bdev_file);20332033+ bdev_fput(btp->bt_bdev_file);20342034 kfree(btp);20352035}20362036
+13-2
fs/xfs/xfs_inode.c
···13011301 */13021302 if (unlikely((tdp->i_diflags & XFS_DIFLAG_PROJINHERIT) &&13031303 tdp->i_projid != sip->i_projid)) {13041304- error = -EXDEV;13051305- goto error_return;13041304+ /*13051305+ * Project quota setup skips special files which can13061306+ * leave inodes in a PROJINHERIT directory without a13071307+ * project ID set. We need to allow links to be made13081308+ * to these "project-less" inodes because userspace13091309+ * expects them to succeed after project ID setup,13101310+ * but everything else should be rejected.13111311+ */13121312+ if (!special_file(VFS_I(sip)->i_mode) ||13131313+ sip->i_projid != 0) {13141314+ error = -EXDEV;13151315+ goto error_return;13161316+ }13061317 }1307131813081319 if (!resblks) {
+3-3
fs/xfs/xfs_super.c
···485485 mp->m_logdev_targp = mp->m_ddev_targp;486486 /* Handle won't be used, drop it */487487 if (logdev_file)488488- fput(logdev_file);488488+ bdev_fput(logdev_file);489489 }490490491491 return 0;···497497 xfs_free_buftarg(mp->m_ddev_targp);498498 out_close_rtdev:499499 if (rtdev_file)500500- fput(rtdev_file);500500+ bdev_fput(rtdev_file);501501 out_close_logdev:502502 if (logdev_file)503503- fput(logdev_file);503503+ bdev_fput(logdev_file);504504 return error;505505}506506
+1-1
include/kvm/arm_pmu.h
···8686 */8787#define kvm_pmu_update_vcpu_events(vcpu) \8888 do { \8989- if (!has_vhe() && kvm_vcpu_has_pmu(vcpu)) \8989+ if (!has_vhe() && kvm_arm_support_pmu_v3()) \9090 vcpu->arch.pmu.events = *kvm_get_pmu_events(); \9191 } while (0)9292
+1-10
include/linux/blkdev.h
···15051505 * Thaw the file system mounted on the block device.15061506 */15071507 int (*thaw)(struct block_device *bdev);15081508-15091509- /*15101510- * If needed, get a reference to the holder.15111511- */15121512- void (*get_holder)(void *holder);15131513-15141514- /*15151515- * Release the holder.15161516- */15171517- void (*put_holder)(void *holder);15181508};1519150915201510/*···1575158515761586int bdev_freeze(struct block_device *bdev);15771587int bdev_thaw(struct block_device *bdev);15881588+void bdev_fput(struct file *bdev_file);1578158915791590struct io_comp_batch {15801591 struct request *req_list;
+15-1
include/linux/bpf.h
···15741574 enum bpf_link_type type;15751575 const struct bpf_link_ops *ops;15761576 struct bpf_prog *prog;15771577- struct work_struct work;15771577+ /* rcu is used before freeing, work can be used to schedule that15781578+ * RCU-based freeing before that, so they never overlap15791579+ */15801580+ union {15811581+ struct rcu_head rcu;15821582+ struct work_struct work;15831583+ };15781584};1579158515801586struct bpf_link_ops {15811587 void (*release)(struct bpf_link *link);15881588+ /* deallocate link resources callback, called without RCU grace period15891589+ * waiting15901590+ */15821591 void (*dealloc)(struct bpf_link *link);15921592+ /* deallocate link resources callback, called after RCU grace period;15931593+ * if underlying BPF program is sleepable we go through tasks trace15941594+ * RCU GP and then "classic" RCU GP15951595+ */15961596+ void (*dealloc_deferred)(struct bpf_link *link);15831597 int (*detach)(struct bpf_link *link);15841598 int (*update_prog)(struct bpf_link *link, struct bpf_prog *new_prog,15851599 struct bpf_prog *old_prog);
+12
include/linux/cc_platform.h
···9090 * Examples include TDX Guest.9191 */9292 CC_ATTR_HOTPLUG_DISABLED,9393+9494+ /**9595+ * @CC_ATTR_HOST_SEV_SNP: AMD SNP enabled on the host.9696+ *9797+ * The host kernel is running with the necessary features9898+ * enabled to run SEV-SNP guests.9999+ */100100+ CC_ATTR_HOST_SEV_SNP,93101};9410295103#ifdef CONFIG_ARCH_HAS_CC_PLATFORM···115107 * * FALSE - Specified Confidential Computing attribute is not active116108 */117109bool cc_platform_has(enum cc_attr attr);110110+void cc_platform_set(enum cc_attr attr);111111+void cc_platform_clear(enum cc_attr attr);118112119113#else /* !CONFIG_ARCH_HAS_CC_PLATFORM */120114121115static inline bool cc_platform_has(enum cc_attr attr) { return false; }116116+static inline void cc_platform_set(enum cc_attr attr) { }117117+static inline void cc_platform_clear(enum cc_attr attr) { }122118123119#endif /* CONFIG_ARCH_HAS_CC_PLATFORM */124120
+1
include/linux/device.h
···12471247void device_link_remove(void *consumer, struct device *supplier);12481248void device_links_supplier_sync_state_pause(void);12491249void device_links_supplier_sync_state_resume(void);12501250+void device_link_wait_removal(void);1250125112511252/* Create alias, so I can be autoloaded. */12521253#define MODULE_ALIAS_CHARDEV(major,minor) \
-1
include/linux/energy_model.h
···245245 * max utilization to the allowed CPU capacity before calculating246246 * effective performance.247247 */248248- max_util = map_util_perf(max_util);249248 max_util = min(max_util, allowed_cpu_cap);250249251250 /*
+2
include/linux/fs.h
···121121#define FMODE_PWRITE ((__force fmode_t)0x10)122122/* File is opened for execution with sys_execve / sys_uselib */123123#define FMODE_EXEC ((__force fmode_t)0x20)124124+/* File writes are restricted (block device specific) */125125+#define FMODE_WRITE_RESTRICTED ((__force fmode_t)0x40)124126/* 32bit hashes as llseek() offset (for directories) */125127#define FMODE_32BITHASH ((__force fmode_t)0x200)126128/* 64bit hashes as llseek() offset (for directories) */
···12301230int regmap_raw_write_async(struct regmap *map, unsigned int reg,12311231 const void *val, size_t val_len);12321232int regmap_read(struct regmap *map, unsigned int reg, unsigned int *val);12331233+int regmap_read_bypassed(struct regmap *map, unsigned int reg, unsigned int *val);12331234int regmap_raw_read(struct regmap *map, unsigned int reg,12341235 void *val, size_t val_len);12351236int regmap_noinc_read(struct regmap *map, unsigned int reg,···1735173417361735static inline int regmap_read(struct regmap *map, unsigned int reg,17371736 unsigned int *val)17371737+{17381738+ WARN_ONCE(1, "regmap API is disabled");17391739+ return -EINVAL;17401740+}17411741+17421742+static inline int regmap_read_bypassed(struct regmap *map, unsigned int reg,17431743+ unsigned int *val)17381744{17391745 WARN_ONCE(1, "regmap API is disabled");17401746 return -EINVAL;
+2-2
include/linux/secretmem.h
···1313 /*1414 * Using folio_mapping() is quite slow because of the actual call1515 * instruction.1616- * We know that secretmem pages are not compound and LRU so we can1616+ * We know that secretmem pages are not compound, so we can1717 * save a couple of cycles here.1818 */1919- if (folio_test_large(folio) || !folio_test_lru(folio))1919+ if (folio_test_large(folio))2020 return false;21212222 mapping = (struct address_space *)
+3-4
include/linux/stackdepot.h
···4444union handle_parts {4545 depot_stack_handle_t handle;4646 struct {4747- /* pool_index is offset by 1 */4848- u32 pool_index : DEPOT_POOL_INDEX_BITS;4949- u32 offset : DEPOT_OFFSET_BITS;5050- u32 extra : STACK_DEPOT_EXTRA_BITS;4747+ u32 pool_index_plus_1 : DEPOT_POOL_INDEX_BITS;4848+ u32 offset : DEPOT_OFFSET_BITS;4949+ u32 extra : STACK_DEPOT_EXTRA_BITS;5150 };5251};5352
+9-2
include/linux/timecounter.h
···2222 *2323 * @read: returns the current cycle value2424 * @mask: bitmask for two's complement2525- * subtraction of non 64 bit counters,2525+ * subtraction of non-64-bit counters,2626 * see CYCLECOUNTER_MASK() helper macro2727 * @mult: cycle to nanosecond multiplier2828 * @shift: cycle to nanosecond divisor (power of two)···3535};36363737/**3838- * struct timecounter - layer above a %struct cyclecounter which counts nanoseconds3838+ * struct timecounter - layer above a &struct cyclecounter which counts nanoseconds3939 * Contains the state needed by timecounter_read() to detect4040 * cycle counter wrap around. Initialize with4141 * timecounter_init(). Also used to convert cycle counts into the···6666 * @cycles: Cycles6767 * @mask: bit mask for maintaining the 'frac' field6868 * @frac: pointer to storage for the fractional nanoseconds.6969+ *7070+ * Returns: cycle counter cycles converted to nanoseconds6971 */7072static inline u64 cyclecounter_cyc2ns(const struct cyclecounter *cc,7173 u64 cycles, u64 mask, u64 *frac)···81798280/**8381 * timecounter_adjtime - Shifts the time of the clock.8282+ * @tc: The &struct timecounter to adjust8483 * @delta: Desired change in nanoseconds.8584 */8685static inline void timecounter_adjtime(struct timecounter *tc, s64 delta)···110107 *111108 * In other words, keeps track of time since the same epoch as112109 * the function which generated the initial time stamp.110110+ *111111+ * Returns: nanoseconds since the initial time stamp113112 */114113extern u64 timecounter_read(struct timecounter *tc);115114···128123 *129124 * This allows conversion of cycle counter values which were generated130125 * in the past.126126+ *127127+ * Returns: cycle counter converted to nanoseconds since the initial time stamp131128 */132129extern u64 timecounter_cyc2time(const struct timecounter *tc,133130 u64 cycle_tstamp);
+42-7
include/linux/timekeeping.h
···2222 const struct timezone *tz);23232424/*2525- * ktime_get() family: read the current time in a multitude of ways,2525+ * ktime_get() family - read the current time in a multitude of ways.2626 *2727 * The default time reference is CLOCK_MONOTONIC, starting at2828 * boot time but not counting the time spent in suspend.2929 * For other references, use the functions with "real", "clocktai",3030 * "boottime" and "raw" suffixes.3131 *3232- * To get the time in a different format, use the ones wit3232+ * To get the time in a different format, use the ones with3333 * "ns", "ts64" and "seconds" suffix.3434 *3535 * See Documentation/core-api/timekeeping.rst for more details.···74747575/**7676 * ktime_get_real - get the real (wall-) time in ktime_t format7777+ *7878+ * Returns: real (wall) time in ktime_t format7779 */7880static inline ktime_t ktime_get_real(void)7981{···8886}89879088/**9191- * ktime_get_boottime - Returns monotonic time since boot in ktime_t format8989+ * ktime_get_boottime - Get monotonic time since boot in ktime_t format9290 *9391 * This is similar to CLOCK_MONTONIC/ktime_get, but also includes the9492 * time spent in suspend.9393+ *9494+ * Returns: monotonic time since boot in ktime_t format9595 */9696static inline ktime_t ktime_get_boottime(void)9797{···106102}107103108104/**109109- * ktime_get_clocktai - Returns the TAI time of day in ktime_t format105105+ * ktime_get_clocktai - Get the TAI time of day in ktime_t format106106+ *107107+ * Returns: the TAI time of day in ktime_t format110108 */111109static inline ktime_t ktime_get_clocktai(void)112110{···150144151145/**152146 * ktime_mono_to_real - Convert monotonic time to clock realtime147147+ * @mono: monotonic time to convert148148+ *149149+ * Returns: time converted to realtime clock153150 */154151static inline ktime_t ktime_mono_to_real(ktime_t mono)155152{156153 return ktime_mono_to_any(mono, TK_OFFS_REAL);157154}158155156156+/**157157+ * ktime_get_ns - Get the current time in 
nanoseconds158158+ *159159+ * Returns: current time converted to nanoseconds160160+ */159161static inline u64 ktime_get_ns(void)160162{161163 return ktime_to_ns(ktime_get());162164}163165166166+/**167167+ * ktime_get_real_ns - Get the current real/wall time in nanoseconds168168+ *169169+ * Returns: current real time converted to nanoseconds170170+ */164171static inline u64 ktime_get_real_ns(void)165172{166173 return ktime_to_ns(ktime_get_real());167174}168175176176+/**177177+ * ktime_get_boottime_ns - Get the monotonic time since boot in nanoseconds178178+ *179179+ * Returns: current boottime converted to nanoseconds180180+ */169181static inline u64 ktime_get_boottime_ns(void)170182{171183 return ktime_to_ns(ktime_get_boottime());172184}173185186186+/**187187+ * ktime_get_clocktai_ns - Get the current TAI time of day in nanoseconds188188+ *189189+ * Returns: current TAI time converted to nanoseconds190190+ */174191static inline u64 ktime_get_clocktai_ns(void)175192{176193 return ktime_to_ns(ktime_get_clocktai());177194}178195196196+/**197197+ * ktime_get_raw_ns - Get the raw monotonic time in nanoseconds198198+ *199199+ * Returns: current raw monotonic time converted to nanoseconds200200+ */179201static inline u64 ktime_get_raw_ns(void)180202{181203 return ktime_to_ns(ktime_get_raw());···258224259225extern void timekeeping_inject_sleeptime64(const struct timespec64 *delta);260226261261-/*262262- * struct ktime_timestanps - Simultaneous mono/boot/real timestamps227227+/**228228+ * struct ktime_timestamps - Simultaneous mono/boot/real timestamps263229 * @mono: Monotonic timestamp264230 * @boot: Boottime timestamp265231 * @real: Realtime timestamp···276242 * @cycles: Clocksource counter value to produce the system times277243 * @real: Realtime system time278244 * @raw: Monotonic raw system time279279- * @clock_was_set_seq: The sequence number of clock was set events245245+ * @cs_id: Clocksource ID246246+ * @clock_was_set_seq: The sequence number of clock-was-set 
events280247 * @cs_was_changed_seq: The sequence number of clocksource change events281248 */282249struct system_time_snapshot {
+10-2
include/linux/timer.h
···2222#define __TIMER_LOCKDEP_MAP_INITIALIZER(_kn)2323#endif24242525-/**2525+/*2626 * @TIMER_DEFERRABLE: A deferrable timer will work normally when the2727 * system is busy, but will not cause a CPU to come out of idle just2828 * to service it; instead, the timer will be serviced when the CPU···140140 * or not. Callers must ensure serialization wrt. other operations done141141 * to this timer, eg. interrupt contexts, or other CPUs on SMP.142142 *143143- * return value: 1 if the timer is pending, 0 if not.143143+ * Returns: 1 if the timer is pending, 0 if not.144144 */145145static inline int timer_pending(const struct timer_list * timer)146146{···175175 * See timer_delete_sync() for detailed explanation.176176 *177177 * Do not use in new code. Use timer_delete_sync() instead.178178+ *179179+ * Returns:180180+ * * %0 - The timer was not pending181181+ * * %1 - The timer was pending and deactivated178182 */179183static inline int del_timer_sync(struct timer_list *timer)180184{···192188 * See timer_delete() for detailed explanation.193189 *194190 * Do not use in new code. Use timer_delete() instead.191191+ *192192+ * Returns:193193+ * * %0 - The timer was not pending194194+ * * %1 - The timer was pending and deactivated195195 */196196static inline int del_timer(struct timer_list *timer)197197{
+28
include/linux/udp.h
···150150 }151151}152152153153+DECLARE_STATIC_KEY_FALSE(udp_encap_needed_key);154154+#if IS_ENABLED(CONFIG_IPV6)155155+DECLARE_STATIC_KEY_FALSE(udpv6_encap_needed_key);156156+#endif157157+158158+static inline bool udp_encap_needed(void)159159+{160160+ if (static_branch_unlikely(&udp_encap_needed_key))161161+ return true;162162+163163+#if IS_ENABLED(CONFIG_IPV6)164164+ if (static_branch_unlikely(&udpv6_encap_needed_key))165165+ return true;166166+#endif167167+168168+ return false;169169+}170170+153171static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)154172{155173 if (!skb_is_gso(skb))···179161180162 if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST &&181163 !udp_test_bit(ACCEPT_FRAGLIST, sk))164164+ return true;165165+166166+ /* GSO packets lacking the SKB_GSO_UDP_TUNNEL/_CSUM bits might still167167+ * land in a tunnel as the socket check in udp_gro_receive cannot be168168+ * foolproof.169169+ */170170+ if (udp_encap_needed() &&171171+ READ_ONCE(udp_sk(sk)->encap_rcv) &&172172+ !(skb_shinfo(skb)->gso_type &173173+ (SKB_GSO_UDP_TUNNEL | SKB_GSO_UDP_TUNNEL_CSUM)))182174 return true;183175184176 return false;
+9
include/net/bluetooth/hci.h
···176176 */177177 HCI_QUIRK_USE_BDADDR_PROPERTY,178178179179+ /* When this quirk is set, the Bluetooth Device Address provided by180180+ * the 'local-bd-address' fwnode property is incorrectly specified in181181+ * big-endian order.182182+ *183183+ * This quirk can be set before hci_register_dev is called or184184+ * during the hdev->setup vendor callback.185185+ */186186+ HCI_QUIRK_BDADDR_PROPERTY_BROKEN,187187+179188 /* When this quirk is set, the duplicate filtering during180189 * scanning is based on Bluetooth devices addresses. To allow181190 * RSSI based updates, restart scanning if needed.
···367367 if (S_ISREG(mode)) {368368 int ml = maybe_link();369369 if (ml >= 0) {370370- int openflags = O_WRONLY|O_CREAT;370370+ int openflags = O_WRONLY|O_CREAT|O_LARGEFILE;371371 if (ml != 1)372372 openflags |= O_TRUNC;373373 wfile = filp_open(collected, openflags, mode);
···17171818#define IO_BUFFER_LIST_BUF_PER_PAGE (PAGE_SIZE / sizeof(struct io_uring_buf))19192020-#define BGID_ARRAY 642121-2220/* BIDs are addressed by a 16-bit field in a CQE */2321#define MAX_BIDS_PER_BGID (1 << 16)2422···3840 int inuse;3941};40424141-static struct io_buffer_list *__io_buffer_get_list(struct io_ring_ctx *ctx,4242- struct io_buffer_list *bl,4343- unsigned int bgid)4343+static inline struct io_buffer_list *__io_buffer_get_list(struct io_ring_ctx *ctx,4444+ unsigned int bgid)4445{4545- if (bl && bgid < BGID_ARRAY)4646- return &bl[bgid];4747-4846 return xa_load(&ctx->io_bl_xa, bgid);4947}5048···4955{5056 lockdep_assert_held(&ctx->uring_lock);51575252- return __io_buffer_get_list(ctx, ctx->io_bl, bgid);5858+ return __io_buffer_get_list(ctx, bgid);5359}54605561static int io_buffer_add_list(struct io_ring_ctx *ctx,···6167 * always under the ->uring_lock, but the RCU lookup from mmap does.6268 */6369 bl->bgid = bgid;6464- smp_store_release(&bl->is_ready, 1);6565-6666- if (bgid < BGID_ARRAY)6767- return 0;6868-7070+ atomic_set(&bl->refs, 1);6971 return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));7072}7173···198208 return ret;199209}200210201201-static __cold int io_init_bl_list(struct io_ring_ctx *ctx)202202-{203203- struct io_buffer_list *bl;204204- int i;205205-206206- bl = kcalloc(BGID_ARRAY, sizeof(struct io_buffer_list), GFP_KERNEL);207207- if (!bl)208208- return -ENOMEM;209209-210210- for (i = 0; i < BGID_ARRAY; i++) {211211- INIT_LIST_HEAD(&bl[i].buf_list);212212- bl[i].bgid = i;213213- }214214-215215- smp_store_release(&ctx->io_bl, bl);216216- return 0;217217-}218218-219211/*220212 * Mark the given mapped range as free for reuse221213 */···266294 return i;267295}268296297297+void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl)298298+{299299+ if (atomic_dec_and_test(&bl->refs)) {300300+ __io_remove_buffers(ctx, bl, -1U);301301+ kfree_rcu(bl, rcu);302302+ }303303+}304304+269305void io_destroy_buffers(struct io_ring_ctx 
*ctx)270306{271307 struct io_buffer_list *bl;272308 struct list_head *item, *tmp;273309 struct io_buffer *buf;274310 unsigned long index;275275- int i;276276-277277- for (i = 0; i < BGID_ARRAY; i++) {278278- if (!ctx->io_bl)279279- break;280280- __io_remove_buffers(ctx, &ctx->io_bl[i], -1U);281281- }282311283312 xa_for_each(&ctx->io_bl_xa, index, bl) {284313 xa_erase(&ctx->io_bl_xa, bl->bgid);285285- __io_remove_buffers(ctx, bl, -1U);286286- kfree_rcu(bl, rcu);314314+ io_put_bl(ctx, bl);287315 }288316289317 /*···461489462490 io_ring_submit_lock(ctx, issue_flags);463491464464- if (unlikely(p->bgid < BGID_ARRAY && !ctx->io_bl)) {465465- ret = io_init_bl_list(ctx);466466- if (ret)467467- goto err;468468- }469469-470492 bl = io_buffer_get_list(ctx, p->bgid);471493 if (unlikely(!bl)) {472494 bl = kzalloc(sizeof(*bl), GFP_KERNEL_ACCOUNT);···473507 if (ret) {474508 /*475509 * Doesn't need rcu free as it was never visible, but476476- * let's keep it consistent throughout. Also can't477477- * be a lower indexed array group, as adding one478478- * where lookup failed cannot happen.510510+ * let's keep it consistent throughout.479511 */480480- if (p->bgid >= BGID_ARRAY)481481- kfree_rcu(bl, rcu);482482- else483483- WARN_ON_ONCE(1);512512+ kfree_rcu(bl, rcu);484513 goto err;485514 }486515 }···640679 if (reg.ring_entries >= 65536)641680 return -EINVAL;642681643643- if (unlikely(reg.bgid < BGID_ARRAY && !ctx->io_bl)) {644644- int ret = io_init_bl_list(ctx);645645- if (ret)646646- return ret;647647- }648648-649682 bl = io_buffer_get_list(ctx, reg.bgid);650683 if (bl) {651684 /* if mapped buffer ring OR classic exists, don't allow */···688733 if (!bl->is_buf_ring)689734 return -EINVAL;690735691691- __io_remove_buffers(ctx, bl, -1U);692692- if (bl->bgid >= BGID_ARRAY) {693693- xa_erase(&ctx->io_bl_xa, bl->bgid);694694- kfree_rcu(bl, rcu);695695- }736736+ xa_erase(&ctx->io_bl_xa, bl->bgid);737737+ io_put_bl(ctx, bl);696738 return 0;697739}698740···719767 return 
0;720768}721769722722-void *io_pbuf_get_address(struct io_ring_ctx *ctx, unsigned long bgid)770770+struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,771771+ unsigned long bgid)723772{724773 struct io_buffer_list *bl;774774+ bool ret;725775726726- bl = __io_buffer_get_list(ctx, smp_load_acquire(&ctx->io_bl), bgid);727727-728728- if (!bl || !bl->is_mmap)729729- return NULL;730776 /*731731- * Ensure the list is fully setup. Only strictly needed for RCU lookup732732- * via mmap, and in that case only for the array indexed groups. For733733- * the xarray lookups, it's either visible and ready, or not at all.777777+ * We have to be a bit careful here - we're inside mmap and cannot grab778778+ * the uring_lock. This means the buffer_list could be simultaneously779779+ * going away, if someone is trying to be sneaky. Look it up under rcu780780+ * so we know it's not going away, and attempt to grab a reference to781781+ * it. If the ref is already zero, then fail the mapping. If successful,782782+ * the caller will call io_put_bl() to drop the the reference at at the783783+ * end. This may then safely free the buffer_list (and drop the pages)784784+ * at that point, vm_insert_pages() would've already grabbed the785785+ * necessary vma references.734786 */735735- if (!smp_load_acquire(&bl->is_ready))736736- return NULL;787787+ rcu_read_lock();788788+ bl = xa_load(&ctx->io_bl_xa, bgid);789789+ /* must be a mmap'able buffer ring and have pages */790790+ ret = false;791791+ if (bl && bl->is_mmap)792792+ ret = atomic_inc_not_zero(&bl->refs);793793+ rcu_read_unlock();737794738738- return bl->buf_ring;795795+ if (ret)796796+ return bl;797797+798798+ return ERR_PTR(-EINVAL);739799}740800741801/*
+5-3
io_uring/kbuf.h
···2525 __u16 head;2626 __u16 mask;27272828+ atomic_t refs;2929+2830 /* ring mapped provided buffers */2931 __u8 is_buf_ring;3032 /* ring mapped provided buffers, but mmap'ed by application */3133 __u8 is_mmap;3232- /* bl is visible from an RCU point of view for lookup */3333- __u8 is_ready;3434};35353636struct io_buffer {···61616262bool io_kbuf_recycle_legacy(struct io_kiocb *req, unsigned issue_flags);63636464-void *io_pbuf_get_address(struct io_ring_ctx *ctx, unsigned long bgid);6464+void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl);6565+struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,6666+ unsigned long bgid);65676668static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)6769{
+8-1
io_uring/rw.c
···937937 ret = __io_read(req, issue_flags);938938939939 /*940940+ * If the file doesn't support proper NOWAIT, then disable multishot941941+ * and stay in single shot mode.942942+ */943943+ if (!io_file_supports_nowait(req))944944+ req->flags &= ~REQ_F_APOLL_MULTISHOT;945945+946946+ /*940947 * If we get -EAGAIN, recycle our buffer and just let normal poll941948 * handling arm it.942949 */···962955 /*963956 * Any successful return value will keep the multishot read armed.964957 */965965- if (ret > 0) {958958+ if (ret > 0 && req->flags & REQ_F_APOLL_MULTISHOT) {966959 /*967960 * Put our buffer and post a CQE. If we fail to post a CQE, then968961 * jump to the termination path. This request is then done.
+32-3
kernel/bpf/syscall.c
···30243024 atomic64_inc(&link->refcnt);30253025}3026302630273027+static void bpf_link_defer_dealloc_rcu_gp(struct rcu_head *rcu)30283028+{30293029+ struct bpf_link *link = container_of(rcu, struct bpf_link, rcu);30303030+30313031+ /* free bpf_link and its containing memory */30323032+ link->ops->dealloc_deferred(link);30333033+}30343034+30353035+static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu)30363036+{30373037+ if (rcu_trace_implies_rcu_gp())30383038+ bpf_link_defer_dealloc_rcu_gp(rcu);30393039+ else30403040+ call_rcu(rcu, bpf_link_defer_dealloc_rcu_gp);30413041+}30423042+30273043/* bpf_link_free is guaranteed to be called from process context */30283044static void bpf_link_free(struct bpf_link *link)30293045{30463046+ bool sleepable = false;30473047+30303048 bpf_link_free_id(link->id);30313049 if (link->prog) {30503050+ sleepable = link->prog->sleepable;30323051 /* detach BPF program, clean up used resources */30333052 link->ops->release(link);30343053 bpf_prog_put(link->prog);30353054 }30363036- /* free bpf_link and its containing memory */30373037- link->ops->dealloc(link);30553055+ if (link->ops->dealloc_deferred) {30563056+ /* schedule BPF link deallocation; if underlying BPF program30573057+ * is sleepable, we need to first wait for RCU tasks trace30583058+ * sync, then go through "classic" RCU grace period30593059+ */30603060+ if (sleepable)30613061+ call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp);30623062+ else30633063+ call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp);30643064+ }30653065+ if (link->ops->dealloc)30663066+ link->ops->dealloc(link);30383067}3039306830403069static void bpf_link_put_deferred(struct work_struct *work)···3573354435743545static const struct bpf_link_ops bpf_raw_tp_link_lops = {35753546 .release = bpf_raw_tp_link_release,35763576- .dealloc = bpf_raw_tp_link_dealloc,35473547+ .dealloc_deferred = bpf_raw_tp_link_dealloc,35773548 .show_fdinfo = bpf_raw_tp_link_show_fdinfo,35783549 
.fill_link_info = bpf_raw_tp_link_fill_link_info,35793550};
···697697698698/**699699 * tick_nohz_update_jiffies - update jiffies when idle was interrupted700700+ * @now: current ktime_t700701 *701702 * Called from interrupt entry when the CPU was idle702703 *···795794 * This time is measured via accounting rather than sampling,796795 * and is as accurate as ktime_get() is.797796 *798798- * This function returns -1 if NOHZ is not enabled.797797+ * Return: -1 if NOHZ is not enabled, else total idle time of the @cpu799798 */800799u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time)801800{···821820 * This time is measured via accounting rather than sampling,822821 * and is as accurate as ktime_get() is.823822 *824824- * This function returns -1 if NOHZ is not enabled.823823+ * Return: -1 if NOHZ is not enabled, else total iowait time of @cpu825824 */826825u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)827826{···1288128712891288/**12901289 * tick_nohz_idle_got_tick - Check whether or not the tick handler has run12901290+ *12911291+ * Return: %true if the tick handler has run, otherwise %false12911292 */12921293bool tick_nohz_idle_got_tick(void)12931294{···13081305 * stopped, it returns the next hrtimer.13091306 *13101307 * Called from power state control code with interrupts disabled13081308+ *13091309+ * Return: the next expiration time13111310 */13121311ktime_t tick_nohz_get_next_hrtimer(void)13131312{···13251320 * The return value of this function and/or the value returned by it through the13261321 * @delta_next pointer can be negative which must be taken into account by its13271322 * callers.13231323+ *13241324+ * Return: the expected length of the current sleep13281325 */13291326ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next)13301327{···13641357/**13651358 * tick_nohz_get_idle_calls_cpu - return the current idle calls counter value13661359 * for a particular CPU.13601360+ * @cpu: target CPU number13671361 *13681362 * Called from the schedutil frequency scaling governor in scheduler context.13631363+ 
*13641364+ * Return: the current idle calls counter value for @cpu13691365 */13701366unsigned long tick_nohz_get_idle_calls_cpu(int cpu)13711367{···13811371 * tick_nohz_get_idle_calls - return the current idle calls counter value13821372 *13831373 * Called from the schedutil frequency scaling governor in scheduler context.13741374+ *13751375+ * Return: the current idle calls counter value for the current CPU13841376 */13851377unsigned long tick_nohz_get_idle_calls(void)13861378{···1571155915721560/**15731561 * tick_setup_sched_timer - setup the tick emulation timer15741574- * @mode: tick_nohz_mode to setup for15621562+ * @hrtimer: whether to use the hrtimer or not15751563 */15761564void tick_setup_sched_timer(bool hrtimer)15771565{
+1-1
kernel/time/tick-sched.h
···4646 * @next_tick: Next tick to be fired when in dynticks mode.4747 * @idle_jiffies: jiffies at the entry to idle for idle time accounting4848 * @idle_waketime: Time when the idle was interrupted4949+ * @idle_sleeptime_seq: sequence counter for data consistency4950 * @idle_entrytime: Time when the idle call was entered5050- * @nohz_mode: Mode - one state of tick_nohz_mode5151 * @last_jiffies: Base jiffies snapshot when next event was last computed5252 * @timer_expires_base: Base time clock monotonic for @timer_expires5353 * @timer_expires: Anticipated timer expiration time (in case sched tick is stopped)
+11-11
kernel/time/timer.c
···64646565/*6666 * The timer wheel has LVL_DEPTH array levels. Each level provides an array of6767- * LVL_SIZE buckets. Each level is driven by its own clock and therefor each6767+ * LVL_SIZE buckets. Each level is driven by its own clock and therefore each6868 * level has a different granularity.6969 *7070- * The level granularity is: LVL_CLK_DIV ^ lvl7070+ * The level granularity is: LVL_CLK_DIV ^ level7171 * The level clock frequency is: HZ / (LVL_CLK_DIV ^ level)7272 *7373 * The array level of a newly armed timer depends on the relative expiry7474 * time. The farther the expiry time is away the higher the array level and7575- * therefor the granularity becomes.7575+ * therefore the granularity becomes.7676 *7777 * Contrary to the original timer wheel implementation, which aims for 'exact'7878 * expiry of the timers, this implementation removes the need for recascading···207207 * struct timer_base - Per CPU timer base (number of base depends on config)208208 * @lock: Lock protecting the timer_base209209 * @running_timer: When expiring timers, the lock is dropped. To make210210- * sure not to race agains deleting/modifying a210210+ * sure not to race against deleting/modifying a211211 * currently running timer, the pointer is set to the212212 * timer, which expires at the moment. 
If no timer is213213 * running, the pointer is NULL.···737737}738738739739/*740740- * fixup_init is called when:740740+ * timer_fixup_init is called when:741741 * - an active object is initialized742742 */743743static bool timer_fixup_init(void *addr, enum debug_obj_state state)···761761}762762763763/*764764- * fixup_activate is called when:764764+ * timer_fixup_activate is called when:765765 * - an active object is activated766766 * - an unknown non-static object is activated767767 */···783783}784784785785/*786786- * fixup_free is called when:786786+ * timer_fixup_free is called when:787787 * - an active object is freed788788 */789789static bool timer_fixup_free(void *addr, enum debug_obj_state state)···801801}802802803803/*804804- * fixup_assert_init is called when:804804+ * timer_fixup_assert_init is called when:805805 * - an untracked/uninit-ed object is found806806 */807807static bool timer_fixup_assert_init(void *addr, enum debug_obj_state state)···914914 * @key: lockdep class key of the fake lock used for tracking timer915915 * sync lock dependencies916916 *917917- * init_timer_key() must be done to a timer prior calling *any* of the917917+ * init_timer_key() must be done to a timer prior to calling *any* of the918918 * other timer functions.919919 */920920void init_timer_key(struct timer_list *timer,···14171417 * If @shutdown is set then the lock has to be taken whether the14181418 * timer is pending or not to protect against a concurrent rearm14191419 * which might hit between the lockless pending check and the lock14201420- * aquisition. By taking the lock it is ensured that such a newly14201420+ * acquisition. 
By taking the lock it is ensured that such a newly14211421 * enqueued timer is dequeued and cannot end up with14221422 * timer->function == NULL in the expiry code.14231423 *···2306230623072307 /*23082308 * When timer base is not set idle, undo the effect of23092309- * tmigr_cpu_deactivate() to prevent inconsitent states - active23092309+ * tmigr_cpu_deactivate() to prevent inconsistent states - active23102310 * timer base but inactive timer migration hierarchy.23112311 *23122312 * When timer base was already marked idle, nothing will be
+31-1
kernel/time/timer_migration.c
···751751752752 first_childevt = evt = data->evt;753753754754+ /*755755+ * Walking the hierarchy is required in any case when a756756+ * remote expiry was done before. This ensures to not lose757757+ * already queued events in non active groups (see section758758+ * "Required event and timerqueue update after a remote759759+ * expiry" in the documentation at the top).760760+ *761761+ * The two call sites which are executed without a remote expiry762762+ * before, are not prevented from propagating changes through763763+ * the hierarchy by the return:764764+ * - When entering this path by tmigr_new_timer(), @evt->ignore765765+ * is never set.766766+ * - tmigr_inactive_up() takes care of the propagation by767767+ * itself and ignores the return value. But an immediate768768+ * return is possible if there is a parent, sparing group769769+ * locking at this level, because the upper walking call to770770+ * the parent will take care about removing this event from771771+ * within the group and update next_expiry accordingly.772772+ *773773+ * However if there is no parent, ie: the hierarchy has only a774774+ * single level so @group is the top level group, make sure the775775+ * first event information of the group is updated properly and776776+ * also handled properly, so skip this fast return path.777777+ */778778+ if (evt->ignore && !remote && group->parent)779779+ return true;780780+754781 raw_spin_lock(&group->lock);755782756783 childstate.state = 0;···789762 * queue when the expiry time changed only or when it could be ignored.790763 */791764 if (timerqueue_node_queued(&evt->nextevt)) {792792- if ((evt->nextevt.expires == nextexp) && !evt->ignore)765765+ if ((evt->nextevt.expires == nextexp) && !evt->ignore) {766766+ /* Make sure not to miss a new CPU event with the same expiry */767767+ evt->cpu = first_childevt->cpu;793768 goto check_toplvl;769769+ }794770795771 if (!timerqueue_del(&group->events, &evt->nextevt))796772 WRITE_ONCE(group->next_expiry, KTIME_MAX);
···59735973 goto out;59745974 pte = ptep_get(ptep);5975597559765976+ /* Never return PFNs of anon folios in COW mappings. */59775977+ if (vm_normal_folio(vma, address, pte))59785978+ goto unlock;59795979+59765980 if ((flags & FOLL_WRITE) && !pte_write(pte))59775981 goto unlock;59785982
+45-29
mm/vmalloc.c
···989989 return atomic_long_read(&nr_vmalloc_pages);990990}991991992992+static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)993993+{994994+ struct rb_node *n = root->rb_node;995995+996996+ addr = (unsigned long)kasan_reset_tag((void *)addr);997997+998998+ while (n) {999999+ struct vmap_area *va;10001000+10011001+ va = rb_entry(n, struct vmap_area, rb_node);10021002+ if (addr < va->va_start)10031003+ n = n->rb_left;10041004+ else if (addr >= va->va_end)10051005+ n = n->rb_right;10061006+ else10071007+ return va;10081008+ }10091009+10101010+ return NULL;10111011+}10121012+9921013/* Look up the first VA which satisfies addr < va_end, NULL if none. */9931014static struct vmap_area *9941015__find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)···10461025static struct vmap_node *10471026find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)10481027{10491049- struct vmap_node *vn, *va_node = NULL;10501050- struct vmap_area *va_lowest;10281028+ unsigned long va_start_lowest;10291029+ struct vmap_node *vn;10511030 int i;1052103110531053- for (i = 0; i < nr_vmap_nodes; i++) {10321032+repeat:10331033+ for (i = 0, va_start_lowest = 0; i < nr_vmap_nodes; i++) {10541034 vn = &vmap_nodes[i];1055103510561036 spin_lock(&vn->busy.lock);10571057- va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);10581058- if (va_lowest) {10591059- if (!va_node || va_lowest->va_start < (*va)->va_start) {10601060- if (va_node)10611061- spin_unlock(&va_node->busy.lock);10371037+ *va = __find_vmap_area_exceed_addr(addr, &vn->busy.root);1062103810631063- *va = va_lowest;10641064- va_node = vn;10651065- continue;10661066- }10671067- }10391039+ if (*va)10401040+ if (!va_start_lowest || (*va)->va_start < va_start_lowest)10411041+ va_start_lowest = (*va)->va_start;10681042 spin_unlock(&vn->busy.lock);10691043 }1070104410711071- return va_node;10721072-}10451045+ /*10461046+ * Check if found VA exists, it might have gone away. 
In this case we10471047+ * repeat the search because a VA has been removed concurrently and we10481048+ * need to proceed to the next one, which is a rare case.10491049+ */10501050+ if (va_start_lowest) {10511051+ vn = addr_to_node(va_start_lowest);1073105210741074-static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)10751075-{10761076- struct rb_node *n = root->rb_node;10531053+ spin_lock(&vn->busy.lock);10541054+ *va = __find_vmap_area(va_start_lowest, &vn->busy.root);1077105510781078- addr = (unsigned long)kasan_reset_tag((void *)addr);10561056+ if (*va)10571057+ return vn;1079105810801080- while (n) {10811081- struct vmap_area *va;10821082-10831083- va = rb_entry(n, struct vmap_area, rb_node);10841084- if (addr < va->va_start)10851085- n = n->rb_left;10861086- else if (addr >= va->va_end)10871087- n = n->rb_right;10881088- else10891089- return va;10591059+ spin_unlock(&vn->busy.lock);10601060+ goto repeat;10901061 }1091106210921063 return NULL;···23552342 struct vmap_node *vn;23562343 struct vmap_area *va;23572344 int i, j;23452345+23462346+ if (unlikely(!vmap_initialized))23472347+ return NULL;2358234823592349 /*23602350 * An addr_to_node_id(addr) converts an address to a node index
+5-5
net/9p/client.c
···15831583 received = rsize;15841584 }1585158515861586- p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", count);15861586+ p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", received);1587158715881588 if (non_zc) {15891589 int n = copy_to_iter(dataptr, received, to);···16091609 int total = 0;16101610 *err = 0;1611161116121612- p9_debug(P9_DEBUG_9P, ">>> TWRITE fid %d offset %llu count %zd\n",16131613- fid->fid, offset, iov_iter_count(from));16141614-16151612 while (iov_iter_count(from)) {16161613 int count = iov_iter_count(from);16171614 int rsize = fid->iounit;···1619162216201623 if (count < rsize)16211624 rsize = count;16251625+16261626+ p9_debug(P9_DEBUG_9P, ">>> TWRITE fid %d offset %llu count %d (/%d)\n",16271627+ fid->fid, offset, rsize, count);1622162816231629 /* Don't bother zerocopy for small IO (< 1024) */16241630 if (clnt->trans_mod->zc_request && rsize > 1024) {···16501650 written = rsize;16511651 }1652165216531653- p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", count);16531653+ p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", written);1654165416551655 p9_req_put(clnt, req);16561656 iov_iter_revert(from, count - written - iov_iter_count(from));
-1
net/9p/trans_fd.c
···9595 * @unsent_req_list: accounting for requests that haven't been sent9696 * @rreq: read request9797 * @wreq: write request9898- * @req: current request being processed (if any)9998 * @tmp_buf: temporary buffer to read in header10099 * @rc: temporary fcall for reading current frame101100 * @wpos: write position for current frame
···429429 * PP consumers must pay attention to run APIs in the appropriate context430430 * (e.g. NAPI context).431431 */432432-static DEFINE_PER_CPU_ALIGNED(struct page_pool *, system_page_pool);432432+static DEFINE_PER_CPU(struct page_pool *, system_page_pool);433433434434#ifdef CONFIG_LOCKDEP435435/*
+2-1
net/core/gro.c
···192192 }193193194194merge:195195- /* sk owenrship - if any - completely transferred to the aggregated packet */195195+ /* sk ownership - if any - completely transferred to the aggregated packet */196196 skb->destructor = NULL;197197+ skb->sk = NULL;197198 delta_truesize = skb->truesize;198199 if (offset > headlen) {199200 unsigned int eat = offset - headlen;
+6
net/core/sock_map.c
···411411 struct sock *sk;412412 int err = 0;413413414414+ if (irqs_disabled())415415+ return -EOPNOTSUPP; /* locks here are hardirq-unsafe */416416+414417 spin_lock_bh(&stab->lock);415418 sk = *psk;416419 if (!sk_test || sk_test == sk)···935932 struct bpf_shtab_bucket *bucket;936933 struct bpf_shtab_elem *elem;937934 int ret = -ENOENT;935935+936936+ if (irqs_disabled())937937+ return -EOPNOTSUPP; /* locks here are hardirq-unsafe */938938939939 hash = sock_hash_bucket_hash(key, key_size);940940 bucket = sock_hash_select_bucket(htab, hash);
+6-7
net/hsr/hsr_device.c
···132132{133133 struct hsr_priv *hsr;134134 struct hsr_port *port;135135- char designation;135135+ const char *designation = NULL;136136137137 hsr = netdev_priv(dev);138138- designation = '\0';139138140139 hsr_for_each_port(hsr, port) {141140 if (port->type == HSR_PT_MASTER)142141 continue;143142 switch (port->type) {144143 case HSR_PT_SLAVE_A:145145- designation = 'A';144144+ designation = "Slave A";146145 break;147146 case HSR_PT_SLAVE_B:148148- designation = 'B';147147+ designation = "Slave B";149148 break;150149 default:151151- designation = '?';150150+ designation = "Unknown";152151 }153152 if (!is_slave_up(port->dev))154154- netdev_warn(dev, "Slave %c (%s) is not up; please bring it up to get a fully working HSR network\n",153153+ netdev_warn(dev, "%s (%s) is not up; please bring it up to get a fully working HSR network\n",155154 designation, port->dev->name);156155 }157156158158- if (designation == '\0')157157+ if (!designation)159158 netdev_warn(dev, "No slave devices configured\n");160159161160 return 0;
···449449 NAPI_GRO_CB(p)->count++;450450 p->data_len += skb->len;451451452452- /* sk owenrship - if any - completely transferred to the aggregated packet */452452+ /* sk ownership - if any - completely transferred to the aggregated packet */453453 skb->destructor = NULL;454454+ skb->sk = NULL;454455 p->truesize += skb->truesize;455456 p->len += skb->len;456457···552551 unsigned int off = skb_gro_offset(skb);553552 int flush = 1;554553555555- /* we can do L4 aggregation only if the packet can't land in a tunnel556556- * otherwise we could corrupt the inner stream554554+ /* We can do L4 aggregation only if the packet can't land in a tunnel555555+ * otherwise we could corrupt the inner stream. Detecting such packets556556+ * cannot be foolproof and the aggregation might still happen in some557557+ * cases. Such packets should be caught in udp_unexpected_gso later.557558 */558559 NAPI_GRO_CB(skb)->is_flist = 0;559560 if (!sk || !udp_sk(sk)->gro_receive) {561561+ /* If the packet was locally encapsulated in a UDP tunnel that562562+ * wasn't detected above, do not GRO.563563+ */564564+ if (skb->encapsulation)565565+ goto out;566566+560567 if (skb->dev->features & NETIF_F_GRO_FRAGLIST)561568 NAPI_GRO_CB(skb)->is_flist = sk ? !udp_test_bit(GRO_ENABLED, sk) : 1;562569···728719 skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4);729720 skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;730721731731- if (skb->ip_summed == CHECKSUM_UNNECESSARY) {732732- if (skb->csum_level < SKB_MAX_CSUM_LEVEL)733733- skb->csum_level++;734734- } else {735735- skb->ip_summed = CHECKSUM_UNNECESSARY;736736- skb->csum_level = 0;737737- }722722+ __skb_incr_checksum_unnecessary(skb);738723739724 return 0;740725 }
···14931493 struct mptcp_subflow_context *subflow;14941494 int space, cap;1495149514961496+ /* bpf can land here with a wrong sk type */14971497+ if (sk->sk_protocol == IPPROTO_TCP)14981498+ return -EINVAL;14991499+14961500 if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)14971501 cap = sk->sk_rcvbuf >> 1;14981502 else
+2
net/mptcp/subflow.c
···905905 return child;906906907907fallback:908908+ if (fallback)909909+ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);908910 mptcp_subflow_drop_ctx(child);909911 return child;910912}
···302302 }303303 ret = PTR_ERR(trans_private);304304 /* Trigger connection so that its ready for the next retry */305305- if (ret == -ENODEV)305305+ if (ret == -ENODEV && cp)306306 rds_conn_connect_if_down(cp->cp_conn);307307 goto out;308308 }
···809809 notify = !sch->q.qlen && !WARN_ON_ONCE(!n &&810810 !qdisc_is_offloaded);811811 /* TODO: perform the search on a per txq basis */812812- sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid));812812+ sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid));813813 if (sch == NULL) {814814 WARN_ON_ONCE(parentid != TC_H_ROOT);815815 break;
+1-9
net/sunrpc/svcsock.c
···12061206 * MSG_SPLICE_PAGES is used exclusively to reduce the number of12071207 * copy operations in this path. Therefore the caller must ensure12081208 * that the pages backing @xdr are unchanging.12091209- *12101210- * Note that the send is non-blocking. The caller has incremented12111211- * the reference count on each page backing the RPC message, and12121212- * the network layer will "put" these pages when transmission is12131213- * complete.12141214- *12151215- * This is safe for our RPC services because the memory backing12161216- * the head and tail components is never kmalloc'd. These always12171217- * come from pages in the svc_rqst::rq_pages array.12181209 */12191210static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,12201211 rpc_fraghdr marker, unsigned int *sentp)···12351244 iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,12361245 1 + count, sizeof(marker) + rqstp->rq_res.len);12371246 ret = sock_sendmsg(svsk->sk_sock, &msg);12471247+ page_frag_free(buf);12381248 if (ret < 0)12391249 return ret;12401250 *sentp += ret;
···17931793EXPORT_SYMBOL(security_path_mknod);1794179417951795/**17961796- * security_path_post_mknod() - Update inode security field after file creation17961796+ * security_path_post_mknod() - Update inode security after reg file creation17971797 * @idmap: idmap of the mount17981798 * @dentry: new file17991799 *18001800- * Update inode security field after a file has been created.18001800+ * Update inode security field after a regular file has been created.18011801 */18021802void security_path_post_mknod(struct mnt_idmap *idmap, struct dentry *dentry)18031803{
+7-5
security/selinux/selinuxfs.c
···21232123 .kill_sb = sel_kill_sb,21242124};2125212521262126-static struct vfsmount *selinuxfs_mount __ro_after_init;21272126struct path selinux_null __ro_after_init;2128212721292128static int __init init_sel_fs(void)···21442145 return err;21452146 }2146214721472147- selinux_null.mnt = selinuxfs_mount = kern_mount(&sel_fs_type);21482148- if (IS_ERR(selinuxfs_mount)) {21482148+ selinux_null.mnt = kern_mount(&sel_fs_type);21492149+ if (IS_ERR(selinux_null.mnt)) {21492150 pr_err("selinuxfs: could not mount!\n");21502150- err = PTR_ERR(selinuxfs_mount);21512151- selinuxfs_mount = NULL;21512151+ err = PTR_ERR(selinux_null.mnt);21522152+ selinux_null.mnt = NULL;21532153+ return err;21522154 }21552155+21532156 selinux_null.dentry = d_hash_and_lookup(selinux_null.mnt->mnt_root,21542157 &null_name);21552158 if (IS_ERR(selinux_null.dentry)) {21562159 pr_err("selinuxfs: could not lookup null!\n");21572160 err = PTR_ERR(selinux_null.dentry);21582161 selinux_null.dentry = NULL;21622162+ return err;21592163 }2160216421612165 return err;
+7-1
sound/oss/dmasound/dmasound_paula.c
···725725 dmasound_deinit();726726}727727728728-static struct platform_driver amiga_audio_driver = {728728+/*729729+ * amiga_audio_remove() lives in .exit.text. For drivers registered via730730+ * module_platform_driver_probe() this is ok because they cannot get unbound at731731+ * runtime. So mark the driver struct with __refdata to prevent modpost732732+ * triggering a section mismatch warning.733733+ */734734+static struct platform_driver amiga_audio_driver __refdata = {729735 .remove_new = __exit_p(amiga_audio_remove),730736 .driver = {731737 .name = "amiga-audio",
+2-5
sound/pci/emu10k1/emu10k1_callback.c
···255255 /* check if sample is finished playing (non-looping only) */256256 if (bp != best + V_OFF && bp != best + V_FREE &&257257 (vp->reg.sample_mode & SNDRV_SFNT_SAMPLE_SINGLESHOT)) {258258- val = snd_emu10k1_ptr_read(hw, CCCA_CURRADDR, vp->ch) - 64;258258+ val = snd_emu10k1_ptr_read(hw, CCCA_CURRADDR, vp->ch);259259 if (val >= vp->reg.loopstart)260260 bp = best + V_OFF;261261 }···362362363363 map = (hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);364364365365- addr = vp->reg.start + 64;365365+ addr = vp->reg.start;366366 temp = vp->reg.parm.filterQ;367367 ccca = (temp << 28) | addr;368368 if (vp->apitch < 0xe400)···429429430430 /* Q & current address (Q 4bit value, MSB) */431431 CCCA, ccca,432432-433433- /* cache */434434- CCR, REG_VAL_PUT(CCR_CACHEINVALIDSIZE, 64),435432436433 /* reset volume */437434 VTFT, vtarget | vp->ftarget,
···644644 ret = cs35l56_wait_for_firmware_boot(&cs35l56->base);645645 if (ret)646646 goto err_powered_up;647647+648648+ regcache_cache_only(cs35l56->base.regmap, false);647649 }648650649651 /* Disable auto-hibernate so that runtime_pm has control */···10031001 ret = cs35l56_wait_for_firmware_boot(&cs35l56->base);10041002 if (ret)10051003 goto err;10041004+10051005+ regcache_cache_only(cs35l56->base.regmap, false);1006100610071007 ret = cs35l56_set_patch(&cs35l56->base);10081008 if (ret)
···10931093static int cs35l41_dsp_init(struct cs35l41_private *cs35l41)10941094{10951095 struct wm_adsp *dsp;10961096+ uint32_t dsp1rx5_src;10961097 int ret;1097109810981099 dsp = &cs35l41->dsp;···11131112 return ret;11141113 }1115111411161116- ret = regmap_write(cs35l41->regmap, CS35L41_DSP1_RX5_SRC,11171117- CS35L41_INPUT_SRC_VPMON);11181118- if (ret < 0) {11191119- dev_err(cs35l41->dev, "Write INPUT_SRC_VPMON failed: %d\n", ret);11151115+ switch (cs35l41->hw_cfg.bst_type) {11161116+ case CS35L41_INT_BOOST:11171117+ case CS35L41_SHD_BOOST_ACTV:11181118+ dsp1rx5_src = CS35L41_INPUT_SRC_VPMON;11191119+ break;11201120+ case CS35L41_EXT_BOOST:11211121+ case CS35L41_SHD_BOOST_PASS:11221122+ dsp1rx5_src = CS35L41_INPUT_SRC_VBSTMON;11231123+ break;11241124+ default:11251125+ dev_err(cs35l41->dev, "wm_halo_init failed - Invalid Boost Type: %d\n",11261126+ cs35l41->hw_cfg.bst_type);11201127 goto err_dsp;11211128 }11221122- ret = regmap_write(cs35l41->regmap, CS35L41_DSP1_RX6_SRC,11231123- CS35L41_INPUT_SRC_CLASSH);11291129+11301130+ ret = regmap_write(cs35l41->regmap, CS35L41_DSP1_RX5_SRC, dsp1rx5_src);11241131 if (ret < 0) {11251125- dev_err(cs35l41->dev, "Write INPUT_SRC_CLASSH failed: %d\n", ret);11321132+ dev_err(cs35l41->dev, "Write DSP1RX5_SRC: %d failed: %d\n", dsp1rx5_src, ret);11331133+ goto err_dsp;11341134+ }11351135+ ret = regmap_write(cs35l41->regmap, CS35L41_DSP1_RX6_SRC, CS35L41_INPUT_SRC_VBSTMON);11361136+ if (ret < 0) {11371137+ dev_err(cs35l41->dev, "Write CS35L41_INPUT_SRC_VBSTMON failed: %d\n", ret);11261138 goto err_dsp;11271139 }11281140 ret = regmap_write(cs35l41->regmap, CS35L41_DSP1_RX7_SRC,
-2
sound/soc/codecs/cs35l56-sdw.c
···188188 goto out;189189 }190190191191- regcache_cache_only(cs35l56->base.regmap, false);192192-193191 ret = cs35l56_init(cs35l56);194192 if (ret < 0) {195193 regcache_cache_only(cs35l56->base.regmap, true);
+55-30
sound/soc/codecs/cs35l56-shared.c
···4141static const struct reg_default cs35l56_reg_defaults[] = {4242 /* no defaults for OTP_MEM - first read populates cache */43434444- { CS35L56_ASP1_ENABLES1, 0x00000000 },4545- { CS35L56_ASP1_CONTROL1, 0x00000028 },4646- { CS35L56_ASP1_CONTROL2, 0x18180200 },4747- { CS35L56_ASP1_CONTROL3, 0x00000002 },4848- { CS35L56_ASP1_FRAME_CONTROL1, 0x03020100 },4949- { CS35L56_ASP1_FRAME_CONTROL5, 0x00020100 },5050- { CS35L56_ASP1_DATA_CONTROL1, 0x00000018 },5151- { CS35L56_ASP1_DATA_CONTROL5, 0x00000018 },5252-5353- /* no defaults for ASP1TX mixer */4444+ /*4545+ * No defaults for ASP1 control or ASP1TX mixer. See4646+ * cs35l56_populate_asp1_register_defaults() and4747+ * cs35l56_sync_asp1_mixer_widgets_with_firmware().4848+ */54495550 { CS35L56_SWIRE_DP3_CH1_INPUT, 0x00000018 },5651 { CS35L56_SWIRE_DP3_CH2_INPUT, 0x00000019 },···206211 }207212}208213214214+static const struct reg_sequence cs35l56_asp1_defaults[] = {215215+ REG_SEQ0(CS35L56_ASP1_ENABLES1, 0x00000000),216216+ REG_SEQ0(CS35L56_ASP1_CONTROL1, 0x00000028),217217+ REG_SEQ0(CS35L56_ASP1_CONTROL2, 0x18180200),218218+ REG_SEQ0(CS35L56_ASP1_CONTROL3, 0x00000002),219219+ REG_SEQ0(CS35L56_ASP1_FRAME_CONTROL1, 0x03020100),220220+ REG_SEQ0(CS35L56_ASP1_FRAME_CONTROL5, 0x00020100),221221+ REG_SEQ0(CS35L56_ASP1_DATA_CONTROL1, 0x00000018),222222+ REG_SEQ0(CS35L56_ASP1_DATA_CONTROL5, 0x00000018),223223+};224224+225225+/*226226+ * The firmware can have control of the ASP so we don't provide regmap227227+ * with defaults for these registers, to prevent a regcache_sync() from228228+ * overwriting the firmware settings. 
But if the machine driver hooks up229229+ * the ASP it means the driver is taking control of the ASP, so then the230230+ * registers are populated with the defaults.231231+ */232232+int cs35l56_init_asp1_regs_for_driver_control(struct cs35l56_base *cs35l56_base)233233+{234234+ if (!cs35l56_base->fw_owns_asp1)235235+ return 0;236236+237237+ cs35l56_base->fw_owns_asp1 = false;238238+239239+ return regmap_multi_reg_write(cs35l56_base->regmap, cs35l56_asp1_defaults,240240+ ARRAY_SIZE(cs35l56_asp1_defaults));241241+}242242+EXPORT_SYMBOL_NS_GPL(cs35l56_init_asp1_regs_for_driver_control, SND_SOC_CS35L56_SHARED);243243+209244/*210245 * The firmware boot sequence can overwrite the ASP1 config registers so that211246 * they don't match regmap's view of their values. Rewrite the values from the···243218 */244219int cs35l56_force_sync_asp1_registers_from_cache(struct cs35l56_base *cs35l56_base)245220{246246- struct reg_sequence asp1_regs[] = {247247- { .reg = CS35L56_ASP1_ENABLES1 },248248- { .reg = CS35L56_ASP1_CONTROL1 },249249- { .reg = CS35L56_ASP1_CONTROL2 },250250- { .reg = CS35L56_ASP1_CONTROL3 },251251- { .reg = CS35L56_ASP1_FRAME_CONTROL1 },252252- { .reg = CS35L56_ASP1_FRAME_CONTROL5 },253253- { .reg = CS35L56_ASP1_DATA_CONTROL1 },254254- { .reg = CS35L56_ASP1_DATA_CONTROL5 },255255- };221221+ struct reg_sequence asp1_regs[ARRAY_SIZE(cs35l56_asp1_defaults)];256222 int i, ret;257223258258- /* Read values from regmap cache into a write sequence */224224+ if (cs35l56_base->fw_owns_asp1)225225+ return 0;226226+227227+ memcpy(asp1_regs, cs35l56_asp1_defaults, sizeof(asp1_regs));228228+229229+ /* Read current values from regmap cache into the write sequence */259230 for (i = 0; i < ARRAY_SIZE(asp1_regs); ++i) {260231 ret = regmap_read(cs35l56_base->regmap, asp1_regs[i].reg, &asp1_regs[i].def);261232 if (ret)···329308 reg = CS35L56_DSP1_HALO_STATE;330309331310 /*332332- * This can't be a regmap_read_poll_timeout() because cs35l56 will NAK333333- * I2C until it has booted 
which would terminate the poll311311+ * booted, so use a bypassed read of the status register.334313 */335335- poll_ret = read_poll_timeout(regmap_read, read_ret,314314+ poll_ret = read_poll_timeout(regmap_read_bypassed, read_ret,336315 (val < 0xFFFF) && (val >= CS35L56_HALO_STATE_BOOT_DONE),337316 CS35L56_HALO_STATE_POLL_US,338317 CS35L56_HALO_STATE_TIMEOUT_US,···384363 return;385364386365 cs35l56_wait_control_port_ready();387387- regcache_cache_only(cs35l56_base->regmap, false);366366+367367+ /* Leave in cache-only. This will be revoked when the chip has rebooted. */388368}389369EXPORT_SYMBOL_NS_GPL(cs35l56_system_reset, SND_SOC_CS35L56_SHARED);390370···600578 cs35l56_issue_wake_event(cs35l56_base);601579602580out_sync:603603- regcache_cache_only(cs35l56_base->regmap, false);604604-605581 ret = cs35l56_wait_for_firmware_boot(cs35l56_base);606582 if (ret) {607583 dev_err(cs35l56_base->dev, "Hibernate wake failed: %d\n", ret);608584 goto err;609585 }586586+587587+ regcache_cache_only(cs35l56_base->regmap, false);610588611589 ret = cs35l56_mbox_send(cs35l56_base, CS35L56_MBOX_CMD_PREVENT_AUTO_HIBERNATE);612590 if (ret)···707685708686int cs35l56_get_calibration(struct cs35l56_base *cs35l56_base)709687{710710- u64 silicon_uid;688688+ u64 silicon_uid = 0;711689 int ret;712690713691 /* Driver can't apply calibration to a secured part, so skip */···780758 * devices so the REVID needs to be determined before waiting for the781759 * firmware to boot.782760 */783783- ret = regmap_read(cs35l56_base->regmap, CS35L56_REVID, &revid);761761+ ret = regmap_read_bypassed(cs35l56_base->regmap, CS35L56_REVID, &revid);784762 if (ret < 0) {785763 dev_err(cs35l56_base->dev, "Get Revision ID failed\n");786764 return ret;···791769 if (ret)792770 return ret;793771794794- ret = regmap_read(cs35l56_base->regmap, CS35L56_DEVID, &devid);772772+ ret = regmap_read_bypassed(cs35l56_base->regmap, CS35L56_DEVID, &devid);795773 if (ret < 0) {796774 dev_err(cs35l56_base->dev, "Get Device ID failed\n");797775 return ret;···809787 }810788811789 cs35l56_base->type = devid & 0xFF;790790+791791+ /* Silicon is now identified and booted so exit cache-only */792792+ regcache_cache_only(cs35l56_base->regmap, false);812793813794 ret = regmap_read(cs35l56_base->regmap, CS35L56_DSP_RESTRICT_STS1, &secured);814795 if (ret) {
+25-1
sound/soc/codecs/cs35l56.c
···455455{456456 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(codec_dai->component);457457 unsigned int val;458458+ int ret;458459459460 dev_dbg(cs35l56->base.dev, "%s: %#x\n", __func__, fmt);461461+462462+ ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);463463+ if (ret)464464+ return ret;460465461466 switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) {462467 case SND_SOC_DAIFMT_CBC_CFC:···536531 unsigned int rx_mask, int slots, int slot_width)537532{538533 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(dai->component);534534+ int ret;535535+536536+ ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);537537+ if (ret)538538+ return ret;539539540540 if ((slots == 0) || (slot_width == 0)) {541541 dev_dbg(cs35l56->base.dev, "tdm config cleared\n");···589579 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(dai->component);590580 unsigned int rate = params_rate(params);591581 u8 asp_width, asp_wl;582582+ int ret;583583+584584+ ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);585585+ if (ret)586586+ return ret;592587593588 asp_wl = params_width(params);594589 if (cs35l56->asp_slot_width)···650635 int clk_id, unsigned int freq, int dir)651636{652637 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(dai->component);653653- int freq_id;638638+ int freq_id, ret;639639+640640+ ret = cs35l56_init_asp1_regs_for_driver_control(&cs35l56->base);641641+ if (ret)642642+ return ret;654643655644 if (freq == 0) {656645 cs35l56->sysclk_set = false;···14231404 cs35l56->base.cal_index = -1;14241405 cs35l56->speaker_id = -ENOENT;1425140614071407+ /* Assume that the firmware owns ASP1 until we know different */14081408+ cs35l56->base.fw_owns_asp1 = true;14091409+14261410 dev_set_drvdata(cs35l56->base.dev, cs35l56);1427141114281412 cs35l56_fill_supply_names(cs35l56->supplies);···15541532 return ret;1555153315561534 dev_dbg(cs35l56->base.dev, "Firmware rebooted after soft reset\n");15351535+15361536+ regcache_cache_only(cs35l56->base.regmap, false);15571537 }1558153815591539 /* Disable auto-hibernate so that runtime_pm has control */
+25
sound/soc/codecs/rt5645.c
···444444 struct regmap *regmap;445445 struct i2c_client *i2c;446446 struct gpio_desc *gpiod_hp_det;447447+ struct gpio_desc *gpiod_cbj_sleeve;447448 struct snd_soc_jack *hp_jack;448449 struct snd_soc_jack *mic_jack;449450 struct snd_soc_jack *btn_jack;···31873186 regmap_update_bits(rt5645->regmap, RT5645_IN1_CTRL2,31883187 RT5645_CBJ_MN_JD, 0);3189318831893189+ if (rt5645->gpiod_cbj_sleeve)31903190+ gpiod_set_value(rt5645->gpiod_cbj_sleeve, 1);31913191+31903192 msleep(600);31913193 regmap_read(rt5645->regmap, RT5645_IN1_CTRL3, &val);31923194 val &= 0x7;···32063202 snd_soc_dapm_disable_pin(dapm, "Mic Det Power");32073203 snd_soc_dapm_sync(dapm);32083204 rt5645->jack_type = SND_JACK_HEADPHONE;32053205+ if (rt5645->gpiod_cbj_sleeve)32063206+ gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);32093207 }32103208 if (rt5645->pdata.level_trigger_irq)32113209 regmap_update_bits(rt5645->regmap, RT5645_IRQ_CTRL2,···32353229 if (rt5645->pdata.level_trigger_irq)32363230 regmap_update_bits(rt5645->regmap, RT5645_IRQ_CTRL2,32373231 RT5645_JD_1_1_MASK, RT5645_JD_1_1_INV);32323232+32333233+ if (rt5645->gpiod_cbj_sleeve)32343234+ gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);32383235 }3239323632403237 return rt5645->jack_type;···40214012 return ret;40224013 }4023401440154015+ rt5645->gpiod_cbj_sleeve = devm_gpiod_get_optional(&i2c->dev, "cbj-sleeve",40164016+ GPIOD_OUT_LOW);40174017+40184018+ if (IS_ERR(rt5645->gpiod_cbj_sleeve)) {40194019+ ret = PTR_ERR(rt5645->gpiod_cbj_sleeve);40204020+ dev_info(&i2c->dev, "failed to initialize gpiod, ret=%d\n", ret);40214021+ if (ret != -ENOENT)40224022+ return ret;40234023+ }40244024+40244025 for (i = 0; i < ARRAY_SIZE(rt5645->supplies); i++)40254026 rt5645->supplies[i].supply = rt5645_supply_names[i];40264027···42784259 cancel_delayed_work_sync(&rt5645->jack_detect_work);42794260 cancel_delayed_work_sync(&rt5645->rcclock_work);4280426142624262+ if (rt5645->gpiod_cbj_sleeve)42634263+ gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);42644264+42814265 
regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies);42824266}42834267···42964274 0);42974275 msleep(20);42984276 regmap_write(rt5645->regmap, RT5645_RESET, 0);42774277+42784278+ if (rt5645->gpiod_cbj_sleeve)42794279+ gpiod_set_value(rt5645->gpiod_cbj_sleeve, 0);42994280}4300428143014282static int __maybe_unused rt5645_sys_suspend(struct device *dev)
···15821582 if (!le32_to_cpu(dw->priv.size))15831583 return 0;1584158415851585+ w->no_wname_in_kcontrol_name = true;15861586+15851587 if (w->ignore_suspend && !AVS_S0IX_SUPPORTED) {15861588 dev_info_once(comp->dev, "Device does not support S0IX, check BIOS settings\n");15871589 w->ignore_suspend = false;
+13-11
sound/soc/intel/boards/bytcr_rt5640.c
···636636 BYT_RT5640_USE_AMCR0F28),637637 },638638 {639639- .matches = {640640- DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),641641- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"),642642- },643643- .driver_data = (void *)(BYT_RT5640_IN1_MAP |644644- BYT_RT5640_JD_SRC_JD2_IN4N |645645- BYT_RT5640_OVCD_TH_2000UA |646646- BYT_RT5640_OVCD_SF_0P75 |647647- BYT_RT5640_MCLK_EN),648648- },649649- {639639+ /* Asus T100TAF, unlike other T100TA* models this one has a mono speaker */650640 .matches = {651641 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),652642 DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TAF"),···648658 BYT_RT5640_MONO_SPEAKER |649659 BYT_RT5640_DIFF_MIC |650660 BYT_RT5640_SSP0_AIF2 |661661+ BYT_RT5640_MCLK_EN),662662+ },663663+ {664664+ /* Asus T100TA and T100TAM, must come after T100TAF (mono spk) match */665665+ .matches = {666666+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),667667+ DMI_MATCH(DMI_PRODUCT_NAME, "T100TA"),668668+ },669669+ .driver_data = (void *)(BYT_RT5640_IN1_MAP |670670+ BYT_RT5640_JD_SRC_JD2_IN4N |671671+ BYT_RT5640_OVCD_TH_2000UA |672672+ BYT_RT5640_OVCD_SF_0P75 |651673 BYT_RT5640_MCLK_EN),652674 },653675 {
+4-4
sound/soc/sof/amd/acp.c
···704704 goto unregister_dev;705705 }706706707707+ ret = acp_init(sdev);708708+ if (ret < 0)709709+ goto free_smn_dev;710710+707711 sdev->ipc_irq = pci->irq;708712 ret = request_threaded_irq(sdev->ipc_irq, acp_irq_handler, acp_irq_thread,709713 IRQF_SHARED, "AudioDSP", sdev);···716712 sdev->ipc_irq);717713 goto free_smn_dev;718714 }719719-720720- ret = acp_init(sdev);721721- if (ret < 0)722722- goto free_ipc_irq;723715724716 /* scan SoundWire capabilities exposed by DSDT */725717 ret = acp_sof_scan_sdw_devices(sdev, chip->sdw_acpi_dev_addr);
+11-7
sound/soc/sof/core.c
···339339 ret = snd_sof_probe(sdev);340340 if (ret < 0) {341341 dev_err(sdev->dev, "failed to probe DSP %d\n", ret);342342- sof_ops_free(sdev);343343- return ret;342342+ goto err_sof_probe;344343 }345344346345 /* check machine info */···350351 }351352352353 ret = sof_select_ipc_and_paths(sdev);353353- if (!ret && plat_data->ipc_type != base_profile->ipc_type) {354354+ if (ret) {355355+ goto err_machine_check;356356+ } else if (plat_data->ipc_type != base_profile->ipc_type) {354357 /* IPC type changed, re-initialize the ops */355358 sof_ops_free(sdev);356359357360 ret = validate_sof_ops(sdev);358361 if (ret < 0) {359362 snd_sof_remove(sdev);363363+ snd_sof_remove_late(sdev);360364 return ret;361365 }362366 }363367368368+ return 0;369369+364370err_machine_check:365365- if (ret) {366366- snd_sof_remove(sdev);367367- sof_ops_free(sdev);368368- }371371+ snd_sof_remove(sdev);372372+err_sof_probe:373373+ snd_sof_remove_late(sdev);374374+ sof_ops_free(sdev);369375370376 return ret;371377}
+18
sound/soc/sof/debug.c
···311311312312int snd_sof_dbg_init(struct snd_sof_dev *sdev)313313{314314+ struct snd_sof_pdata *plat_data = sdev->pdata;314315 struct snd_sof_dsp_ops *ops = sof_ops(sdev);315316 const struct snd_sof_debugfs_map *map;317317+ struct dentry *fw_profile;316318 int i;317319 int err;318320319321 /* use "sof" as top level debugFS dir */320322 sdev->debugfs_root = debugfs_create_dir("sof", NULL);323323+324324+ /* expose firmware/topology prefix/names for test purposes */325325+ fw_profile = debugfs_create_dir("fw_profile", sdev->debugfs_root);326326+327327+ debugfs_create_str("fw_path", 0444, fw_profile,328328+ (char **)&plat_data->fw_filename_prefix);329329+ debugfs_create_str("fw_lib_path", 0444, fw_profile,330330+ (char **)&plat_data->fw_lib_prefix);331331+ debugfs_create_str("tplg_path", 0444, fw_profile,332332+ (char **)&plat_data->tplg_filename_prefix);333333+ debugfs_create_str("fw_name", 0444, fw_profile,334334+ (char **)&plat_data->fw_filename);335335+ debugfs_create_str("tplg_name", 0444, fw_profile,336336+ (char **)&plat_data->tplg_filename);337337+ debugfs_create_u32("ipc_type", 0444, fw_profile,338338+ (u32 *)&plat_data->ipc_type);321339322340 /* init dfsentry list */323341 INIT_LIST_HEAD(&sdev->dfsentry_list);
+24-8
sound/soc/sof/intel/lnl.c
···3232};33333434/* this helps allows the DSP to setup DMIC/SSP */3535-static int hdac_bus_offload_dmic_ssp(struct hdac_bus *bus)3535+static int hdac_bus_offload_dmic_ssp(struct hdac_bus *bus, bool enable)3636{3737 int ret;38383939- ret = hdac_bus_eml_enable_offload(bus, true, AZX_REG_ML_LEPTR_ID_INTEL_SSP, true);3939+ ret = hdac_bus_eml_enable_offload(bus, true,4040+ AZX_REG_ML_LEPTR_ID_INTEL_SSP, enable);4041 if (ret < 0)4142 return ret;42434343- ret = hdac_bus_eml_enable_offload(bus, true, AZX_REG_ML_LEPTR_ID_INTEL_DMIC, true);4444+ ret = hdac_bus_eml_enable_offload(bus, true,4545+ AZX_REG_ML_LEPTR_ID_INTEL_DMIC, enable);4446 if (ret < 0)4547 return ret;4648···5755 if (ret < 0)5856 return ret;59576060- return hdac_bus_offload_dmic_ssp(sof_to_bus(sdev));5858+ return hdac_bus_offload_dmic_ssp(sof_to_bus(sdev), true);5959+}6060+6161+static void lnl_hda_dsp_remove(struct snd_sof_dev *sdev)6262+{6363+ int ret;6464+6565+ ret = hdac_bus_offload_dmic_ssp(sof_to_bus(sdev), false);6666+ if (ret < 0)6767+ dev_warn(sdev->dev,6868+ "Failed to disable offload for DMIC/SSP: %d\n", ret);6969+7070+ hda_dsp_remove(sdev);6171}62726373static int lnl_hda_dsp_resume(struct snd_sof_dev *sdev)···8066 if (ret < 0)8167 return ret;82688383- return hdac_bus_offload_dmic_ssp(sof_to_bus(sdev));6969+ return hdac_bus_offload_dmic_ssp(sof_to_bus(sdev), true);8470}85718672static int lnl_hda_dsp_runtime_resume(struct snd_sof_dev *sdev)···9177 if (ret < 0)9278 return ret;93799494- return hdac_bus_offload_dmic_ssp(sof_to_bus(sdev));8080+ return hdac_bus_offload_dmic_ssp(sof_to_bus(sdev), true);9581}96829783static int lnl_dsp_post_fw_run(struct snd_sof_dev *sdev)···118104 /* common defaults */119105 memcpy(&sof_lnl_ops, &sof_hda_common_ops, sizeof(struct snd_sof_dsp_ops));120106121121- /* probe */122122- if (!sdev->dspless_mode_selected)107107+ /* probe/remove */108108+ if (!sdev->dspless_mode_selected) {123109 sof_lnl_ops.probe = lnl_hda_dsp_probe;110110+ sof_lnl_ops.remove = lnl_hda_dsp_remove;111111+ }124112125113 /* shutdown */126114 sof_lnl_ops.shutdown = hda_dsp_shutdown;
···
 	snd_pcm_sframes_t delay;
 };
 
+/**
+ * struct sof_ipc4_pcm_stream_priv - IPC4 specific private data
+ * @time_info: pointer to time info struct if it is supported, otherwise NULL
+ * @chain_dma_allocated: indicates the ChainDMA allocation state
+ */
+struct sof_ipc4_pcm_stream_priv {
+	struct sof_ipc4_timestamp_info *time_info;
+
+	bool chain_dma_allocated;
+};
+
+static inline struct sof_ipc4_timestamp_info *
+sof_ipc4_sps_to_time_info(struct snd_sof_pcm_stream *sps)
+{
+	struct sof_ipc4_pcm_stream_priv *stream_priv = sps->private;
+
+	return stream_priv->time_info;
+}
+
 static int sof_ipc4_set_multi_pipeline_state(struct snd_sof_dev *sdev, u32 state,
 					     struct ipc4_pipeline_set_state_data *trigger_list)
 {
···
  */
 
 static int sof_ipc4_chain_dma_trigger(struct snd_sof_dev *sdev,
-				      int direction,
+				      struct snd_sof_pcm *spcm, int direction,
 				      struct snd_sof_pcm_stream_pipeline_list *pipeline_list,
 				      int state, int cmd)
 {
 	struct sof_ipc4_fw_data *ipc4_data = sdev->private;
+	struct sof_ipc4_pcm_stream_priv *stream_priv;
 	bool allocate, enable, set_fifo_size;
 	struct sof_ipc4_msg msg = {{ 0 }};
-	int i;
+	int ret, i;
+
+	stream_priv = spcm->stream[direction].private;
 
 	switch (state) {
 	case SOF_IPC4_PIPE_RUNNING: /* Allocate and start chained dma */
···
 		set_fifo_size = false;
 		break;
 	case SOF_IPC4_PIPE_RESET: /* Disable and free chained DMA. */
+
+		/* ChainDMA can only be reset if it has been allocated */
+		if (!stream_priv->chain_dma_allocated)
+			return 0;
+
 		allocate = false;
 		enable = false;
 		set_fifo_size = false;
···
 	if (enable)
 		msg.primary |= SOF_IPC4_GLB_CHAIN_DMA_ENABLE_MASK;
 
-	return sof_ipc_tx_message_no_reply(sdev->ipc, &msg, 0);
+	ret = sof_ipc_tx_message_no_reply(sdev->ipc, &msg, 0);
+	/* Update the ChainDMA allocation state */
+	if (!ret)
+		stream_priv->chain_dma_allocated = allocate;
+
+	return ret;
 }
 
 static int sof_ipc4_trigger_pipelines(struct snd_soc_component *component,
···
 	 * trigger function that handles the rest for the substream.
 	 */
 	if (pipeline->use_chain_dma)
-		return sof_ipc4_chain_dma_trigger(sdev, substream->stream,
+		return sof_ipc4_chain_dma_trigger(sdev, spcm, substream->stream,
 						  pipeline_list, state, cmd);
 
 	/* allocate memory for the pipeline data */
···
 	 * Invalidate the stream_start_offset to make sure that it is
 	 * going to be updated if the stream resumes
 	 */
-	time_info = spcm->stream[substream->stream].private;
+	time_info = sof_ipc4_sps_to_time_info(&spcm->stream[substream->stream]);
 	if (time_info)
 		time_info->stream_start_offset = SOF_IPC4_INVALID_STREAM_POSITION;
 
···
 static void sof_ipc4_pcm_free(struct snd_sof_dev *sdev, struct snd_sof_pcm *spcm)
 {
 	struct snd_sof_pcm_stream_pipeline_list *pipeline_list;
+	struct sof_ipc4_pcm_stream_priv *stream_priv;
 	int stream;
 
 	for_each_pcm_streams(stream) {
 		pipeline_list = &spcm->stream[stream].pipeline_list;
 		kfree(pipeline_list->pipelines);
 		pipeline_list->pipelines = NULL;
+
+		stream_priv = spcm->stream[stream].private;
+		kfree(stream_priv->time_info);
 		kfree(spcm->stream[stream].private);
 		spcm->stream[stream].private = NULL;
 	}
···
 {
 	struct snd_sof_pcm_stream_pipeline_list *pipeline_list;
 	struct sof_ipc4_fw_data *ipc4_data = sdev->private;
-	struct sof_ipc4_timestamp_info *stream_info;
+	struct sof_ipc4_pcm_stream_priv *stream_priv;
+	struct sof_ipc4_timestamp_info *time_info;
 	bool support_info = true;
 	u32 abi_version;
 	u32 abi_offset;
···
 			return -ENOMEM;
 		}
 
-		if (!support_info)
-			continue;
-
-		stream_info = kzalloc(sizeof(*stream_info), GFP_KERNEL);
-		if (!stream_info) {
+		stream_priv = kzalloc(sizeof(*stream_priv), GFP_KERNEL);
+		if (!stream_priv) {
 			sof_ipc4_pcm_free(sdev, spcm);
 			return -ENOMEM;
 		}
 
-		spcm->stream[stream].private = stream_info;
+		spcm->stream[stream].private = stream_priv;
+
+		if (!support_info)
+			continue;
+
+		time_info = kzalloc(sizeof(*time_info), GFP_KERNEL);
+		if (!time_info) {
+			sof_ipc4_pcm_free(sdev, spcm);
+			return -ENOMEM;
+		}
+
+		stream_priv->time_info = time_info;
 	}
 
 	return 0;
 }
 
-static void sof_ipc4_build_time_info(struct snd_sof_dev *sdev, struct snd_sof_pcm_stream *spcm)
+static void sof_ipc4_build_time_info(struct snd_sof_dev *sdev, struct snd_sof_pcm_stream *sps)
 {
 	struct sof_ipc4_copier *host_copier = NULL;
 	struct sof_ipc4_copier *dai_copier = NULL;
 	struct sof_ipc4_llp_reading_slot llp_slot;
-	struct sof_ipc4_timestamp_info *info;
+	struct sof_ipc4_timestamp_info *time_info;
 	struct snd_soc_dapm_widget *widget;
 	struct snd_sof_dai *dai;
 	int i;
 
 	/* find host & dai to locate info in memory window */
-	for_each_dapm_widgets(spcm->list, i, widget) {
+	for_each_dapm_widgets(sps->list, i, widget) {
 		struct snd_sof_widget *swidget = widget->dobj.private;
 
 		if (!swidget)
···
 		return;
 	}
 
-	info = spcm->private;
-	info->host_copier = host_copier;
-	info->dai_copier = dai_copier;
-	info->llp_offset = offsetof(struct sof_ipc4_fw_registers, llp_gpdma_reading_slots) +
-			   sdev->fw_info_box.offset;
+	time_info = sof_ipc4_sps_to_time_info(sps);
+	time_info->host_copier = host_copier;
+	time_info->dai_copier = dai_copier;
+	time_info->llp_offset = offsetof(struct sof_ipc4_fw_registers,
+					 llp_gpdma_reading_slots) + sdev->fw_info_box.offset;
 
 	/* find llp slot used by current dai */
 	for (i = 0; i < SOF_IPC4_MAX_LLP_GPDMA_READING_SLOTS; i++) {
-		sof_mailbox_read(sdev, info->llp_offset, &llp_slot, sizeof(llp_slot));
+		sof_mailbox_read(sdev, time_info->llp_offset, &llp_slot, sizeof(llp_slot));
 		if (llp_slot.node_id == dai_copier->data.gtw_cfg.node_id)
 			break;
 
-		info->llp_offset += sizeof(llp_slot);
+		time_info->llp_offset += sizeof(llp_slot);
 	}
 
 	if (i < SOF_IPC4_MAX_LLP_GPDMA_READING_SLOTS)
 		return;
 
 	/* if no llp gpdma slot is used, check aggregated sdw slot */
-	info->llp_offset = offsetof(struct sof_ipc4_fw_registers, llp_sndw_reading_slots) +
-			   sdev->fw_info_box.offset;
+	time_info->llp_offset = offsetof(struct sof_ipc4_fw_registers,
+					 llp_sndw_reading_slots) + sdev->fw_info_box.offset;
 	for (i = 0; i < SOF_IPC4_MAX_LLP_SNDW_READING_SLOTS; i++) {
-		sof_mailbox_read(sdev, info->llp_offset, &llp_slot, sizeof(llp_slot));
+		sof_mailbox_read(sdev, time_info->llp_offset, &llp_slot, sizeof(llp_slot));
 		if (llp_slot.node_id == dai_copier->data.gtw_cfg.node_id)
 			break;
 
-		info->llp_offset += sizeof(llp_slot);
+		time_info->llp_offset += sizeof(llp_slot);
 	}
 
 	if (i < SOF_IPC4_MAX_LLP_SNDW_READING_SLOTS)
 		return;
 
 	/* check EVAD slot */
-	info->llp_offset = offsetof(struct sof_ipc4_fw_registers, llp_evad_reading_slot) +
-			   sdev->fw_info_box.offset;
-	sof_mailbox_read(sdev, info->llp_offset, &llp_slot, sizeof(llp_slot));
+	time_info->llp_offset = offsetof(struct sof_ipc4_fw_registers,
+					 llp_evad_reading_slot) + sdev->fw_info_box.offset;
+	sof_mailbox_read(sdev, time_info->llp_offset, &llp_slot, sizeof(llp_slot));
 	if (llp_slot.node_id != dai_copier->data.gtw_cfg.node_id)
-		info->llp_offset = 0;
+		time_info->llp_offset = 0;
 }
 
 static int sof_ipc4_pcm_hw_params(struct snd_soc_component *component,
···
 	if (!spcm)
 		return -EINVAL;
 
-	time_info = spcm->stream[substream->stream].private;
+	time_info = sof_ipc4_sps_to_time_info(&spcm->stream[substream->stream]);
 	/* delay calculation is not supported by current fw_reg ABI */
 	if (!time_info)
 		return 0;
···
 
 static int sof_ipc4_get_stream_start_offset(struct snd_sof_dev *sdev,
 					    struct snd_pcm_substream *substream,
-					    struct snd_sof_pcm_stream *stream,
+					    struct snd_sof_pcm_stream *sps,
 					    struct sof_ipc4_timestamp_info *time_info)
 {
 	struct sof_ipc4_copier *host_copier = time_info->host_copier;
···
 	struct sof_ipc4_timestamp_info *time_info;
 	struct sof_ipc4_llp_reading_slot llp;
 	snd_pcm_uframes_t head_cnt, tail_cnt;
-	struct snd_sof_pcm_stream *stream;
+	struct snd_sof_pcm_stream *sps;
 	u64 dai_cnt, host_cnt, host_ptr;
 	struct snd_sof_pcm *spcm;
 	int ret;
···
 	if (!spcm)
 		return -EOPNOTSUPP;
 
-	stream = &spcm->stream[substream->stream];
-	time_info = stream->private;
+	sps = &spcm->stream[substream->stream];
+	time_info = sof_ipc4_sps_to_time_info(sps);
 	if (!time_info)
 		return -EOPNOTSUPP;
···
 	 * the statistics is complete. And it will not change after the first initiailization.
 	 */
 	if (time_info->stream_start_offset == SOF_IPC4_INVALID_STREAM_POSITION) {
-		ret = sof_ipc4_get_stream_start_offset(sdev, substream, stream, time_info);
+		ret = sof_ipc4_get_stream_start_offset(sdev, substream, sps, time_info);
 		if (ret < 0)
 			return -EOPNOTSUPP;
 	}
···
 {
 	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
 	struct sof_ipc4_timestamp_info *time_info;
-	struct snd_sof_pcm_stream *stream;
 	struct snd_sof_pcm *spcm;
 
 	spcm = snd_sof_find_spcm_dai(component, rtd);
 	if (!spcm)
 		return 0;
 
-	stream = &spcm->stream[substream->stream];
-	time_info = stream->private;
+	time_info = sof_ipc4_sps_to_time_info(&spcm->stream[substream->stream]);
 	/*
 	 * Report the stored delay value calculated in the pointer callback.
 	 * In the unlikely event that the calculation was skipped/aborted, the
+6-7
sound/soc/sof/pcm.c
···
 		ipc_first = true;
 		break;
 	case SNDRV_PCM_TRIGGER_SUSPEND:
-		if (sdev->system_suspend_target == SOF_SUSPEND_S0IX &&
+		/*
+		 * If DSP D0I3 is allowed during S0iX, set the suspend_ignored flag for
+		 * D0I3-compatible streams to keep the firmware pipeline running
+		 */
+		if (pcm_ops && pcm_ops->d0i3_supported_in_s0ix &&
+		    sdev->system_suspend_target == SOF_SUSPEND_S0IX &&
 		    spcm->stream[substream->stream].d0i3_compatible) {
-			/*
-			 * trap the event, not sending trigger stop to
-			 * prevent the FW pipelines from being stopped,
-			 * and mark the flag to ignore the upcoming DAPM
-			 * PM events.
-			 */
 			spcm->stream[substream->stream].suspend_ignored = true;
 			return 0;
 		}
+2
sound/soc/sof/sof-audio.h
···
  * triggers. The FW keeps the host DMA running in this case and
  * therefore the host must do the same and should stop the DMA during
  * hw_free.
+ * @d0i3_supported_in_s0ix: Allow DSP D0I3 during S0iX
  */
 struct sof_ipc_pcm_ops {
 	int (*hw_params)(struct snd_soc_component *component, struct snd_pcm_substream *substream,
···
 	bool reset_hw_params_during_stop;
 	bool ipc_first_on_start;
 	bool platform_stop_during_hw_free;
+	bool d0i3_supported_in_s0ix;
 };
 
 /**
+3-4
sound/soc/tegra/tegra186_dspk.c
···
 // SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 //
 // tegra186_dspk.c - Tegra186 DSPK driver
-//
-// Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
 
 #include <linux/clk.h>
 #include <linux/device.h>
···
 		return -EINVAL;
 	}
 
-	cif_conf.client_bits = TEGRA_ACIF_BITS_24;
-
 	switch (params_format(params)) {
 	case SNDRV_PCM_FORMAT_S16_LE:
 		cif_conf.audio_bits = TEGRA_ACIF_BITS_16;
+		cif_conf.client_bits = TEGRA_ACIF_BITS_16;
 		break;
 	case SNDRV_PCM_FORMAT_S32_LE:
 		cif_conf.audio_bits = TEGRA_ACIF_BITS_32;
+		cif_conf.client_bits = TEGRA_ACIF_BITS_24;
 		break;
 	default:
 		dev_err(dev, "unsupported format!\n");
+6-6
sound/soc/ti/davinci-mcasp.c
···
 
 	mcasp_reparent_fck(pdev);
 
-	ret = devm_snd_soc_register_component(&pdev->dev, &davinci_mcasp_component,
-					      &davinci_mcasp_dai[mcasp->op_mode], 1);
-
-	if (ret != 0)
-		goto err;
-
 	ret = davinci_mcasp_get_dma_type(mcasp);
 	switch (ret) {
 	case PCM_EDMA:
···
 		dev_err(&pdev->dev, "register PCM failed: %d\n", ret);
 		goto err;
 	}
+
+	ret = devm_snd_soc_register_component(&pdev->dev, &davinci_mcasp_component,
+					      &davinci_mcasp_dai[mcasp->op_mode], 1);
+
+	if (ret != 0)
+		goto err;
 
 no_audio:
 	ret = davinci_mcasp_init_gpiochip(mcasp);
···
 		irq_iter = READ_ONCE(shared_data->nr_iter);
 		__GUEST_ASSERT(config_iter + 1 == irq_iter,
 				"config_iter + 1 = 0x%x, irq_iter = 0x%x.\n"
-				"  Guest timer interrupt was not trigged within the specified\n"
+				"  Guest timer interrupt was not triggered within the specified\n"
 				"  interval, try to increase the error margin by [-e] option.\n",
 				config_iter + 1, irq_iter);
 	}
+39
tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
···
 	}
 }
 
+static void test_pv_unhalt(void)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct kvm_cpuid_entry2 *ent;
+	u32 kvm_sig_old;
+
+	pr_info("testing KVM_FEATURE_PV_UNHALT\n");
+
+	TEST_REQUIRE(KVM_CAP_X86_DISABLE_EXITS);
+
+	/* KVM_PV_UNHALT test */
+	vm = vm_create_with_one_vcpu(&vcpu, guest_main);
+	vcpu_set_cpuid_feature(vcpu, X86_FEATURE_KVM_PV_UNHALT);
+
+	TEST_ASSERT(vcpu_cpuid_has(vcpu, X86_FEATURE_KVM_PV_UNHALT),
+		    "Enabling X86_FEATURE_KVM_PV_UNHALT had no effect");
+
+	/* Make sure KVM clears vcpu->arch.kvm_cpuid */
+	ent = vcpu_get_cpuid_entry(vcpu, KVM_CPUID_SIGNATURE);
+	kvm_sig_old = ent->ebx;
+	ent->ebx = 0xdeadbeef;
+	vcpu_set_cpuid(vcpu);
+
+	vm_enable_cap(vm, KVM_CAP_X86_DISABLE_EXITS, KVM_X86_DISABLE_EXITS_HLT);
+	ent = vcpu_get_cpuid_entry(vcpu, KVM_CPUID_SIGNATURE);
+	ent->ebx = kvm_sig_old;
+	vcpu_set_cpuid(vcpu);
+
+	TEST_ASSERT(!vcpu_cpuid_has(vcpu, X86_FEATURE_KVM_PV_UNHALT),
+		    "KVM_FEATURE_PV_UNHALT is set with KVM_CAP_X86_DISABLE_EXITS");
+
+	/* FIXME: actually test KVM_FEATURE_PV_UNHALT feature */
+
+	kvm_vm_free(vm);
+}
+
 int main(void)
 {
 	struct kvm_vcpu *vcpu;
···
 
 	enter_guest(vcpu);
 	kvm_vm_free(vm);
+
+	test_pv_unhalt();
 }
···
 	fd1 = open_port(0, 1);
 	if (fd1 >= 0)
 		error(1, 0, "Was allowed to create an ipv4 reuseport on an already bound non-reuseport socket with no ipv6");
-	fprintf(stderr, "Success");
+	fprintf(stderr, "Success\n");
 	return 0;
 }
+2-8
tools/testing/selftests/net/udpgro_fwd.sh
···
 	create_vxlan_pair
 	ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on
 	ip netns exec $NS_DST ethtool -K veth$DST rx-gro-list on
-	run_test "GRO frag list over UDP tunnel" $OL_NET$DST 1 1
+	run_test "GRO frag list over UDP tunnel" $OL_NET$DST 10 10
 	cleanup
 
 	# use NAT to circumvent GRO FWD check
···
 	# load arp cache before running the test to reduce the amount of
 	# stray traffic on top of the UDP tunnel
 	ip netns exec $NS_SRC $PING -q -c 1 $OL_NET$DST_NAT >/dev/null
-	run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 1 1 $OL_NET$DST
-	cleanup
-
-	create_vxlan_pair
-	run_bench "UDP tunnel fwd perf" $OL_NET$DST
-	ip netns exec $NS_DST ethtool -K veth$DST rx-udp-gro-forwarding on
-	run_bench "UDP tunnel GRO fwd perf" $OL_NET$DST
+	run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 10 10 $OL_NET$DST
 	cleanup
done