Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v7.0-rc7 into drm-next

Thomas Zimmermann needs 2f42c1a61616 ("drm/ast: dp501: Fix
initialization of SCU2C") for drm-misc-next.

Conflicts:
- drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c

Just between e927b36ae18b ("drm/amd/display: Fix NULL pointer
dereference in dcn401_init_hw()") and its cherry-pick that confused
git.

- drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c

Deleted in 6b0a6116286e ("drm/amd/pm: Unify version check in SMUv11")
but some cherry-picks confused git. Same for v12/v14.

Signed-off-by: Simona Vetter <simona.vetter@ffwll.ch>

+5332 -1775
+1
.get_maintainer.ignore
···
 Alan Cox <alan@lxorguk.ukuu.org.uk>
 Alan Cox <root@hraefn.swansea.linux.org.uk>
 Alyssa Rosenzweig <alyssa@rosenzweig.io>
+Askar Safin <safinaskar@gmail.com>
 Christoph Hellwig <hch@lst.de>
 Jeff Kirsher <jeffrey.t.kirsher@intel.com>
 Marc Gonzalez <marc.w.gonzalez@free.fr>
+1 -1
Documentation/devicetree/bindings/auxdisplay/holtek,ht16k33.yaml
···
 required:
   - refresh-rate-hz

-additionalProperties: false
+unevaluatedProperties: false

 examples:
   - |
+1
Documentation/devicetree/bindings/connector/usb-connector.yaml
···
     maxItems: 4

 dependencies:
+  pd-disable: [typec-power-opmode]
   sink-vdos-v1: [ sink-vdos ]
   sink-vdos: [ sink-vdos-v1 ]

+2 -2
Documentation/devicetree/bindings/gpio/microchip,mpfs-gpio.yaml
···
     const: 2

   "#interrupt-cells":
-    const: 1
+    const: 2

   ngpios:
     description:
···
     gpio-controller;
     #gpio-cells = <2>;
     interrupt-controller;
-    #interrupt-cells = <1>;
+    #interrupt-cells = <2>;
     interrupts = <53>, <53>, <53>, <53>,
                  <53>, <53>, <53>, <53>,
                  <53>, <53>, <53>, <53>,
+139 -15
Documentation/process/security-bugs.rst
···

 Linux kernel developers take security very seriously. As such, we'd
 like to know when a security bug is found so that it can be fixed and
-disclosed as quickly as possible. Please report security bugs to the
-Linux kernel security team.
+disclosed as quickly as possible.
+
+Preparing your report
+---------------------
+
+Like with any bug report, a security bug report requires a lot of analysis work
+from the developers, so the more information you can share about the issue, the
+better. Please review the procedure outlined in
+Documentation/admin-guide/reporting-issues.rst if you are unclear about what
+information is helpful. The following information are absolutely necessary in
+**any** security bug report:
+
+* **affected kernel version range**: with no version indication, your report
+  will not be processed. A significant part of reports are for bugs that
+  have already been fixed, so it is extremely important that vulnerabilities
+  are verified on recent versions (development tree or latest stable
+  version), at least by verifying that the code has not changed since the
+  version where it was detected.
+
+* **description of the problem**: a detailed description of the problem, with
+  traces showing its manifestation, and why you consider that the observed
+  behavior as a problem in the kernel, is necessary.
+
+* **reproducer**: developers will need to be able to reproduce the problem to
+  consider a fix as effective. This includes both a way to trigger the issue
+  and a way to confirm it happens. A reproducer with low complexity
+  dependencies will be needed (source code, shell script, sequence of
+  instructions, file-system image etc). Binary-only executables are not
+  accepted. Working exploits are extremely helpful and will not be released
+  without consent from the reporter, unless they are already public. By
+  definition if an issue cannot be reproduced, it is not exploitable, thus it
+  is not a security bug.
+
+* **conditions**: if the bug depends on certain configuration options,
+  sysctls, permissions, timing, code modifications etc, these should be
+  indicated.
+
+In addition, the following information are highly desirable:
+
+* **suspected location of the bug**: the file names and functions where the
+  bug is suspected to be present are very important, at least to help forward
+  the report to the appropriate maintainers. When not possible (for example,
+  "system freezes each time I run this command"), the security team will help
+  identify the source of the bug.
+
+* **a proposed fix**: bug reporters who have analyzed the cause of a bug in
+  the source code almost always have an accurate idea on how to fix it,
+  because they spent a long time studying it and its implications. Proposing
+  a tested fix will save maintainers a lot of time, even if the fix ends up
+  not being the right one, because it helps understand the bug. When
+  proposing a tested fix, please always format it in a way that can be
+  immediately merged (see Documentation/process/submitting-patches.rst).
+  This will save some back-and-forth exchanges if it is accepted, and you
+  will be credited for finding and fixing this issue. Note that in this case
+  only a ``Signed-off-by:`` tag is needed, without ``Reported-by:`` when the
+  reporter and author are the same.
+
+* **mitigations**: very often during a bug analysis, some ways of mitigating
+  the issue appear. It is useful to share them, as they can be helpful to
+  keep end users protected during the time it takes them to apply the fix.
+
+Identifying contacts
+--------------------
+
+The most effective way to report a security bug is to send it directly to the
+affected subsystem's maintainers and Cc: the Linux kernel security team. Do
+not send it to a public list at this stage, unless you have good reasons to
+consider the issue as being public or trivial to discover (e.g. result of a
+widely available automated vulnerability scanning tool that can be repeated by
+anyone).
+
+If you're sending a report for issues affecting multiple parts in the kernel,
+even if they're fairly similar issues, please send individual messages (think
+that maintainers will not all work on the issues at the same time). The only
+exception is when an issue concerns closely related parts maintained by the
+exact same subset of maintainers, and these parts are expected to be fixed all
+at once by the same commit, then it may be acceptable to report them at once.
+
+One difficulty for most first-time reporters is to figure the right list of
+recipients to send a report to. In the Linux kernel, all official maintainers
+are trusted, so the consequences of accidentally including the wrong maintainer
+are essentially a bit more noise for that person, i.e. nothing dramatic. As
+such, a suitable method to figure the list of maintainers (which kernel
+security officers use) is to rely on the get_maintainer.pl script, tuned to
+only report maintainers. This script, when passed a file name, will look for
+its path in the MAINTAINERS file to figure a hierarchical list of relevant
+maintainers. Calling it a first time with the finest level of filtering will
+most of the time return a short list of this specific file's maintainers::
+
+  $ ./scripts/get_maintainer.pl --no-l --no-r --pattern-depth 1 \
+        drivers/example.c
+  Developer One <dev1@example.com> (maintainer:example driver)
+  Developer Two <dev2@example.org> (maintainer:example driver)
+
+These two maintainers should then receive the message. If the command does not
+return anything, it means the affected file is part of a wider subsystem, so we
+should be less specific::
+
+  $ ./scripts/get_maintainer.pl --no-l --no-r drivers/example.c
+  Developer One <dev1@example.com> (maintainer:example subsystem)
+  Developer Two <dev2@example.org> (maintainer:example subsystem)
+  Developer Three <dev3@example.com> (maintainer:example subsystem [GENERAL])
+  Developer Four <dev4@example.org> (maintainer:example subsystem [GENERAL])
+
+Here, picking the first, most specific ones, is sufficient. When the list is
+long, it is possible to produce a comma-delimited e-mail address list on a
+single line suitable for use in the To: field of a mailer like this::
+
+  $ ./scripts/get_maintainer.pl --no-tree --no-l --no-r --no-n --m \
+        --no-git-fallback --no-substatus --no-rolestats --no-multiline \
+        --pattern-depth 1 drivers/example.c
+  dev1@example.com, dev2@example.org
+
+or this for the wider list::
+
+  $ ./scripts/get_maintainer.pl --no-tree --no-l --no-r --no-n --m \
+        --no-git-fallback --no-substatus --no-rolestats --no-multiline \
+        drivers/example.c
+  dev1@example.com, dev2@example.org, dev3@example.com, dev4@example.org
+
+If at this point you're still facing difficulties spotting the right
+maintainers, **and only in this case**, it's possible to send your report to
+the Linux kernel security team only. Your message will be triaged, and you
+will receive instructions about whom to contact, if needed. Your message may
+equally be forwarded as-is to the relevant maintainers.
+
+Sending the report
+------------------
+
+Reports are to be sent over e-mail exclusively. Please use a working e-mail
+address, preferably the same that you want to appear in ``Reported-by`` tags
+if any. If unsure, send your report to yourself first.

 The security team and maintainers almost always require additional
 information beyond what was initially provided in a report and rely on
···
 or cannot effectively discuss their findings may be abandoned if the
 communication does not quickly improve.

-As it is with any bug, the more information provided the easier it
-will be to diagnose and fix. Please review the procedure outlined in
-'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
-information is helpful. Any exploit code is very helpful and will not
-be released without consent from the reporter unless it has already been
-made public.
-
+The report must be sent to maintainers, with the security team in ``Cc:``.
 The Linux kernel security team can be contacted by email at
 <security@kernel.org>. This is a private list of security officers
-who will help verify the bug report and develop and release a fix.
-If you already have a fix, please include it with your report, as
-that can speed up the process considerably. It is possible that the
-security team will bring in extra help from area maintainers to
-understand and fix the security vulnerability.
+who will help verify the bug report and assist developers working on a fix.
+It is possible that the security team will bring in extra help from area
+maintainers to understand and fix the security vulnerability.

 Please send **plain text** emails without attachments where possible.
 It is much harder to have a context-quoted discussion about a complex
···
 Markdown, HTML and RST formatted reports are particularly frowned upon since
 they're quite hard to read for humans and encourage to use dedicated viewers,
 sometimes online, which by definition is not acceptable for a confidential
-security report.
+security report. Note that some mailers tend to mangle formatting of plain
+text by default, please consult Documentation/process/email-clients.rst for
+more info.

 Disclosure and embargoed information
 ------------------------------------
+1 -1
Makefile
···
 VERSION = 7
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
+1
arch/arm64/Kconfig
···
 	select HAVE_RSEQ
 	select HAVE_RUST if RUSTC_SUPPORTS_ARM64
 	select HAVE_STACKPROTECTOR
+	select HAVE_STATIC_CALL if CFI
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
+31
arch/arm64/include/asm/static_call.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_STATIC_CALL_H
+#define _ASM_STATIC_CALL_H
+
+#define __ARCH_DEFINE_STATIC_CALL_TRAMP(name, target)		\
+	asm("	.pushsection .static_call.text, \"ax\"	\n"	\
+	    "	.align 4				\n"	\
+	    "	.globl " name "				\n"	\
+	    name ":					\n"	\
+	    "	hint 34	/* BTI C */			\n"	\
+	    "	adrp x16, 1f				\n"	\
+	    "	ldr x16, [x16, :lo12:1f]		\n"	\
+	    "	br x16					\n"	\
+	    "	.type " name ", %function		\n"	\
+	    "	.size " name ", . - " name "		\n"	\
+	    "	.popsection				\n"	\
+	    "	.pushsection .rodata, \"a\"		\n"	\
+	    "	.align 3				\n"	\
+	    "1:	.quad " target "			\n"	\
+	    "	.popsection				\n")
+
+#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func)		\
+	__ARCH_DEFINE_STATIC_CALL_TRAMP(STATIC_CALL_TRAMP_STR(name), #func)
+
+#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)		\
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, __static_call_return0)
+
+#define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)		\
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, __static_call_return0)
+
+#endif /* _ASM_STATIC_CALL_H */
+1
arch/arm64/kernel/Makefile
···
 obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o
 obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o
 obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
+obj-$(CONFIG_HAVE_STATIC_CALL) += static_call.o
 obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
 obj-$(CONFIG_KGDB) += kgdb.o
 obj-$(CONFIG_EFI) += efi.o efi-rt-wrapper.o
+23
arch/arm64/kernel/static_call.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/static_call.h>
+#include <linux/memory.h>
+#include <asm/text-patching.h>
+
+void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
+{
+	u64 literal;
+	int ret;
+
+	if (!func)
+		func = __static_call_return0;
+
+	/* decode the instructions to discover the literal address */
+	literal = ALIGN_DOWN((u64)tramp + 4, SZ_4K) +
+		  aarch64_insn_adrp_get_offset(le32_to_cpup(tramp + 4)) +
+		  8 * aarch64_insn_decode_immediate(AARCH64_INSN_IMM_12,
+						    le32_to_cpup(tramp + 8));
+
+	ret = aarch64_insn_write_literal_u64((void *)literal, (u64)func);
+	WARN_ON_ONCE(ret);
+}
+EXPORT_SYMBOL_GPL(arch_static_call_transform);
+1
arch/arm64/kernel/vmlinux.lds.S
···
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
+			STATIC_CALL_TEXT
 			*(.gnu.warning)
 		}

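For context: the files above are only the arch backend; kernel code reaches the new trampoline through the existing generic static-call API. A minimal consumer sketch (the handler names here are made up; DEFINE_STATIC_CALL, static_call and static_call_update are the established generic interface):

    #include <linux/static_call.h>

    static int default_handler(int arg)
    {
        return arg;
    }

    /* Emits the arm64 trampoline above via ARCH_DEFINE_STATIC_CALL_TRAMP. */
    DEFINE_STATIC_CALL(my_handler, default_handler);

    int handle(int arg)
    {
        /* Direct branch through the trampoline, no function-pointer load. */
        return static_call(my_handler)(arg);
    }

    static int faster_handler(int arg)
    {
        return arg + 1;
    }

    void retune(void)
    {
        /*
         * With this out-of-line backend, static_call_update() ends up in
         * arch_static_call_transform(), which rewrites the literal the
         * trampoline loads its branch target from.
         */
        static_call_update(my_handler, &faster_handler);
    }
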
-1
arch/mips/include/asm/cpu-features.h
···
 # endif
 # ifndef cpu_vmbits
 # define cpu_vmbits cpu_data[0].vmbits
-# define __NEED_VMBITS_PROBE
 # endif
 #endif

-2
arch/mips/include/asm/cpu-info.h
···
 	int			srsets;	/* Shadow register sets */
 	int			package;/* physical package number */
 	unsigned int		globalnumber;
-#ifdef CONFIG_64BIT
 	int			vmbits;	/* Virtual memory size in bits */
-#endif
 	void			*data;	/* Additional data */
 	unsigned int		watch_reg_count; /* Number that exist */
 	unsigned int		watch_reg_use_cnt; /* Usable by ptrace */
+2
arch/mips/include/asm/mipsregs.h
···

 #define read_c0_entryhi()	__read_ulong_c0_register($10, 0)
 #define write_c0_entryhi(val)	__write_ulong_c0_register($10, 0, val)
+#define read_c0_entryhi_64()	__read_64bit_c0_register($10, 0)
+#define write_c0_entryhi_64(val) __write_64bit_c0_register($10, 0, val)

 #define read_c0_guestctl1()	__read_32bit_c0_register($10, 4)
 #define write_c0_guestctl1(val)	__write_32bit_c0_register($10, 4, val)
+8 -5
arch/mips/kernel/cpu-probe.c
···

 static inline void cpu_probe_vmbits(struct cpuinfo_mips *c)
 {
-#ifdef __NEED_VMBITS_PROBE
-	write_c0_entryhi(0x3fffffffffffe000ULL);
-	back_to_back_c0_hazard();
-	c->vmbits = fls64(read_c0_entryhi() & 0x3fffffffffffe000ULL);
-#endif
+	int vmbits = 31;
+
+	if (cpu_has_64bits) {
+		write_c0_entryhi_64(0x3fffffffffffe000ULL);
+		back_to_back_c0_hazard();
+		vmbits = fls64(read_c0_entryhi_64() & 0x3fffffffffffe000ULL);
+	}
+	c->vmbits = vmbits;
 }

 static void set_isa(struct cpuinfo_mips *c, unsigned int isa)
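The probe works by writing an all-ones VPN2 pattern to EntryHi and reading it back: bits the MMU does not implement read back as zero, so fls64() of the surviving bits gives the number of implemented virtual-address bits. A userspace model of that arithmetic (entryhi_readback() is a stand-in for the real register round-trip; 48 VA bits is just an example value):

    #include <stdio.h>
    #include <stdint.h>

    /* Model: only the low 'vabits' of a written EntryHi value survive. */
    static uint64_t entryhi_readback(uint64_t written, int vabits)
    {
        uint64_t implemented = (vabits >= 64) ? ~0ULL : (1ULL << vabits) - 1;
        return written & implemented;
    }

    /* Same result as the kernel's fls64(): index of the highest set bit. */
    static int fls64_demo(uint64_t x)
    {
        return x ? 64 - __builtin_clzll(x) : 0;
    }

    int main(void)
    {
        uint64_t probe = 0x3fffffffffffe000ULL;  /* VPN2 field, all ones */

        /* A CPU with 48 implemented VA bits keeps bits 13..47 -> 48. */
        printf("vmbits = %d\n",
               fls64_demo(entryhi_readback(probe, 48) & probe));
        return 0;
    }
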
+2
arch/mips/kernel/cpu-r3k-probe.c
···
 	else
 		cpu_set_nofpu_opts(c);

+	c->vmbits = 31;
+
 	reserve_exception_space(0, 0x400);
 }

+3 -3
arch/mips/lib/multi3.c
···
 #include "libgcc.h"

 /*
- * GCC 7 & older can suboptimally generate __multi3 calls for mips64r6, so for
+ * GCC 9 & older can suboptimally generate __multi3 calls for mips64r6, so for
  * that specific case only we implement that intrinsic here.
  *
  * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82981
  */
-#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ < 8)
+#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ < 10)

 /* multiply 64-bit values, low 64-bits returned */
 static inline long long notrace dmulu(long long a, long long b)
···
 }
 EXPORT_SYMBOL(__multi3);

-#endif /* 64BIT && CPU_MIPSR6 && GCC7 */
+#endif /* 64BIT && CPU_MIPSR6 && GCC9 */
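For reference, __multi3 returns the low 128 bits of a 128-bit product, built from the identity lo(a*b) = a.lo*b.lo and hi(a*b) = umulhi(a.lo, b.lo) + a.hi*b.lo + a.lo*b.hi. A self-contained restatement of that textbook decomposition (portable C, not a copy of the file's MIPS-specific dmulu/dmuhu helpers):

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128;

    /* High 64 bits of a 64x64 multiply, computed via 32-bit halves. */
    static uint64_t umulhi64(uint64_t a, uint64_t b)
    {
        uint64_t al = (uint32_t)a, ah = a >> 32;
        uint64_t bl = (uint32_t)b, bh = b >> 32;
        uint64_t mid  = ah * bl + ((al * bl) >> 32);   /* cannot overflow */
        uint64_t mid2 = al * bh + (uint32_t)mid;       /* cannot overflow */

        return ah * bh + (mid >> 32) + (mid2 >> 32);
    }

    /* Low 128 bits of a 128-bit product: what __multi3 returns. */
    static u128 multi3(u128 a, u128 b)
    {
        u128 r;

        r.lo = a.lo * b.lo;
        r.hi = umulhi64(a.lo, b.lo) + a.hi * b.lo + a.lo * b.hi;
        return r;
    }
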
+17 -1
arch/mips/loongson64/env.c
···
 #include <linux/dma-map-ops.h>
 #include <linux/export.h>
 #include <linux/libfdt.h>
+#include <linux/minmax.h>
 #include <linux/pci_ids.h>
+#include <linux/serial_core.h>
 #include <linux/string_choices.h>
 #include <asm/bootinfo.h>
 #include <loongson.h>
···

 	is_loongson64g = (read_c0_prid() & PRID_IMP_MASK) == PRID_IMP_LOONGSON_64G;

-	for (i = 0; i < system->nr_uarts; i++) {
+	for (i = 0; i < min(system->nr_uarts, MAX_UARTS); i++) {
 		uartdev = &system->uarts[i];
+
+		/*
+		 * Some firmware does not set nr_uarts properly and passes empty
+		 * items. Ignore them silently.
+		 */
+		if (uartdev->uart_base == 0)
+			continue;
+
+		/* Our DT only works with UPIO_MEM. */
+		if (uartdev->iotype != UPIO_MEM) {
+			pr_warn("Ignore UART 0x%llx with iotype %u passed by firmware\n",
+				uartdev->uart_base, uartdev->iotype);
+			continue;
+		}

 		ret = lefi_fixup_fdt_serial(fdt_buf, uartdev->uart_base,
 					    uartdev->uartclk);
+2 -1
arch/mips/mm/cache.c
···
 {
 	if (IS_ENABLED(CONFIG_CPU_R3000) && cpu_has_3k_cache)
 		r3k_cache_init();
-	if (IS_ENABLED(CONFIG_CPU_R4K_CACHE_TLB) && cpu_has_4k_cache)
+	if ((IS_ENABLED(CONFIG_CPU_R4K_CACHE_TLB) ||
+	     IS_ENABLED(CONFIG_CPU_SB1)) && cpu_has_4k_cache)
 		r4k_cache_init();

 	if (IS_ENABLED(CONFIG_CPU_CAVIUM_OCTEON) && cpu_has_octeon_cache)
+231 -56
arch/mips/mm/tlb-r4k.c
···
 #include <linux/sched.h>
 #include <linux/smp.h>
 #include <linux/memblock.h>
+#include <linux/minmax.h>
 #include <linux/mm.h>
 #include <linux/hugetlb.h>
 #include <linux/export.h>
···
 #include <asm/hazards.h>
 #include <asm/mmu_context.h>
 #include <asm/tlb.h>
+#include <asm/tlbdebug.h>
 #include <asm/tlbex.h>
 #include <asm/tlbmisc.h>
 #include <asm/setup.h>
···
 __setup("ntlb=", set_ntlb);


-/* Comparison function for EntryHi VPN fields. */
-static int r4k_vpn_cmp(const void *a, const void *b)
+/* The start bit position of VPN2 and Mask in EntryHi/PageMask registers. */
+#define VPN2_SHIFT	13
+
+/* Read full EntryHi even with CONFIG_32BIT. */
+static inline unsigned long long read_c0_entryhi_native(void)
 {
-	long v = *(unsigned long *)a - *(unsigned long *)b;
-	int s = sizeof(long) > sizeof(int) ? sizeof(long) * 8 - 1 : 0;
-	return s ? (v != 0) | v >> s : v;
+	return cpu_has_64bits ? read_c0_entryhi_64() : read_c0_entryhi();
+}
+
+/* Write full EntryHi even with CONFIG_32BIT. */
+static inline void write_c0_entryhi_native(unsigned long long v)
+{
+	if (cpu_has_64bits)
+		write_c0_entryhi_64(v);
+	else
+		write_c0_entryhi(v);
+}
+
+/* TLB entry state for uniquification. */
+struct tlbent {
+	unsigned long long wired:1;
+	unsigned long long global:1;
+	unsigned long long asid:10;
+	unsigned long long vpn:51;
+	unsigned long long pagesz:5;
+	unsigned long long index:14;
+};
+
+/*
+ * Comparison function for TLB entry sorting.  Place wired entries first,
+ * then global entries, then order by the increasing VPN/ASID and the
+ * decreasing page size.  This lets us avoid clashes with wired entries
+ * easily and get entries for larger pages out of the way first.
+ *
+ * We could group bits so as to reduce the number of comparisons, but this
+ * is seldom executed and not performance-critical, so prefer legibility.
+ */
+static int r4k_entry_cmp(const void *a, const void *b)
+{
+	struct tlbent ea = *(struct tlbent *)a, eb = *(struct tlbent *)b;
+
+	if (ea.wired > eb.wired)
+		return -1;
+	else if (ea.wired < eb.wired)
+		return 1;
+	else if (ea.global > eb.global)
+		return -1;
+	else if (ea.global < eb.global)
+		return 1;
+	else if (ea.vpn < eb.vpn)
+		return -1;
+	else if (ea.vpn > eb.vpn)
+		return 1;
+	else if (ea.asid < eb.asid)
+		return -1;
+	else if (ea.asid > eb.asid)
+		return 1;
+	else if (ea.pagesz > eb.pagesz)
+		return -1;
+	else if (ea.pagesz < eb.pagesz)
+		return 1;
+	else
+		return 0;
+}
+
+/*
+ * Fetch all the TLB entries.  Mask individual VPN values retrieved with
+ * the corresponding page mask and ignoring any 1KiB extension as we'll
+ * be using 4KiB pages for uniquification.
+ */
+static void __ref r4k_tlb_uniquify_read(struct tlbent *tlb_vpns, int tlbsize)
+{
+	int start = num_wired_entries();
+	unsigned long long vpn_mask;
+	bool global;
+	int i;
+
+	vpn_mask = GENMASK(current_cpu_data.vmbits - 1, VPN2_SHIFT);
+	vpn_mask |= cpu_has_64bits ? 3ULL << 62 : 1 << 31;
+
+	for (i = 0; i < tlbsize; i++) {
+		unsigned long long entryhi, vpn, mask, asid;
+		unsigned int pagesz;
+
+		write_c0_index(i);
+		mtc0_tlbr_hazard();
+		tlb_read();
+		tlb_read_hazard();
+
+		global = !!(read_c0_entrylo0() & ENTRYLO_G);
+		entryhi = read_c0_entryhi_native();
+		mask = read_c0_pagemask();
+
+		asid = entryhi & cpu_asid_mask(&current_cpu_data);
+		vpn = (entryhi & vpn_mask & ~mask) >> VPN2_SHIFT;
+		pagesz = ilog2((mask >> VPN2_SHIFT) + 1);
+
+		tlb_vpns[i].global = global;
+		tlb_vpns[i].asid = global ? 0 : asid;
+		tlb_vpns[i].vpn = vpn;
+		tlb_vpns[i].pagesz = pagesz;
+		tlb_vpns[i].wired = i < start;
+		tlb_vpns[i].index = i;
+	}
+}
+
+/*
+ * Write unique values to all but the wired TLB entries each, using
+ * the 4KiB page size.  This size might not be supported with R6, but
+ * EHINV is mandatory for R6, so we won't ever be called in that case.
+ *
+ * A sorted table is supplied with any wired entries at the beginning,
+ * followed by any global entries, and then finally regular entries.
+ * We start at the VPN and ASID values of zero and only assign user
+ * addresses, therefore guaranteeing no clash with addresses produced
+ * by UNIQUE_ENTRYHI.  We avoid any VPN values used by wired or global
+ * entries, by increasing the VPN value beyond the span of such entry.
+ *
+ * When a VPN/ASID clash is found with a regular entry we increment the
+ * ASID instead until no VPN/ASID clash has been found or the ASID space
+ * has been exhausted, in which case we increase the VPN value beyond
+ * the span of the largest clashing entry.
+ *
+ * We do not need to be concerned about FTLB or MMID configurations as
+ * those are required to implement the EHINV feature.
+ */
+static void __ref r4k_tlb_uniquify_write(struct tlbent *tlb_vpns, int tlbsize)
+{
+	unsigned long long asid, vpn, vpn_size, pagesz;
+	int widx, gidx, idx, sidx, lidx, i;
+
+	vpn_size = 1ULL << (current_cpu_data.vmbits - VPN2_SHIFT);
+	pagesz = ilog2((PM_4K >> VPN2_SHIFT) + 1);
+
+	write_c0_pagemask(PM_4K);
+	write_c0_entrylo0(0);
+	write_c0_entrylo1(0);
+
+	asid = 0;
+	vpn = 0;
+	widx = 0;
+	gidx = 0;
+	for (sidx = 0; sidx < tlbsize && tlb_vpns[sidx].wired; sidx++)
+		;
+	for (lidx = sidx; lidx < tlbsize && tlb_vpns[lidx].global; lidx++)
+		;
+	idx = gidx = sidx + 1;
+	for (i = sidx; i < tlbsize; i++) {
+		unsigned long long entryhi, vpn_pagesz = 0;
+
+		while (1) {
+			if (WARN_ON(vpn >= vpn_size)) {
+				dump_tlb_all();
+				/* Pray local_flush_tlb_all() will cope. */
+				return;
+			}
+
+			/* VPN must be below the next wired entry. */
+			if (widx < sidx && vpn >= tlb_vpns[widx].vpn) {
+				vpn = max(vpn,
+					  (tlb_vpns[widx].vpn +
+					   (1ULL << tlb_vpns[widx].pagesz)));
+				asid = 0;
+				widx++;
+				continue;
+			}
+			/* VPN must be below the next global entry. */
+			if (gidx < lidx && vpn >= tlb_vpns[gidx].vpn) {
+				vpn = max(vpn,
+					  (tlb_vpns[gidx].vpn +
+					   (1ULL << tlb_vpns[gidx].pagesz)));
+				asid = 0;
+				gidx++;
+				continue;
+			}
+			/* Try to find a free ASID so as to conserve VPNs. */
+			if (idx < tlbsize && vpn == tlb_vpns[idx].vpn &&
+			    asid == tlb_vpns[idx].asid) {
+				unsigned long long idx_pagesz;
+
+				idx_pagesz = tlb_vpns[idx].pagesz;
+				vpn_pagesz = max(vpn_pagesz, idx_pagesz);
+				do
+					idx++;
+				while (idx < tlbsize &&
+				       vpn == tlb_vpns[idx].vpn &&
+				       asid == tlb_vpns[idx].asid);
+				asid++;
+				if (asid > cpu_asid_mask(&current_cpu_data)) {
+					vpn += vpn_pagesz;
+					asid = 0;
+					vpn_pagesz = 0;
+				}
+				continue;
+			}
+			/* VPN mustn't be above the next regular entry. */
+			if (idx < tlbsize && vpn > tlb_vpns[idx].vpn) {
+				vpn = max(vpn,
+					  (tlb_vpns[idx].vpn +
+					   (1ULL << tlb_vpns[idx].pagesz)));
+				asid = 0;
+				idx++;
+				continue;
+			}
+			break;
+		}
+
+		entryhi = (vpn << VPN2_SHIFT) | asid;
+		write_c0_entryhi_native(entryhi);
+		write_c0_index(tlb_vpns[i].index);
+		mtc0_tlbw_hazard();
+		tlb_write_indexed();
+
+		tlb_vpns[i].asid = asid;
+		tlb_vpns[i].vpn = vpn;
+		tlb_vpns[i].pagesz = pagesz;
+
+		asid++;
+		if (asid > cpu_asid_mask(&current_cpu_data)) {
+			vpn += 1ULL << pagesz;
+			asid = 0;
+		}
+	}
 }

 /*
···
 {
 	int tlbsize = current_cpu_data.tlbsize;
 	bool use_slab = slab_is_available();
-	int start = num_wired_entries();
 	phys_addr_t tlb_vpn_size;
-	unsigned long *tlb_vpns;
-	unsigned long vpn_mask;
-	int cnt, ent, idx, i;
-
-	vpn_mask = GENMASK(cpu_vmbits - 1, 13);
-	vpn_mask |= IS_ENABLED(CONFIG_64BIT) ? 3ULL << 62 : 1 << 31;
+	struct tlbent *tlb_vpns;

 	tlb_vpn_size = tlbsize * sizeof(*tlb_vpns);
 	tlb_vpns = (use_slab ?
-		    kmalloc(tlb_vpn_size, GFP_KERNEL) :
+		    kmalloc(tlb_vpn_size, GFP_ATOMIC) :
 		    memblock_alloc_raw(tlb_vpn_size, sizeof(*tlb_vpns)));
 	if (WARN_ON(!tlb_vpns))
 		return;	/* Pray local_flush_tlb_all() is good enough. */

 	htw_stop();

-	for (i = start, cnt = 0; i < tlbsize; i++, cnt++) {
-		unsigned long vpn;
+	r4k_tlb_uniquify_read(tlb_vpns, tlbsize);

-		write_c0_index(i);
-		mtc0_tlbr_hazard();
-		tlb_read();
-		tlb_read_hazard();
-		vpn = read_c0_entryhi();
-		vpn &= vpn_mask & PAGE_MASK;
-		tlb_vpns[cnt] = vpn;
+	sort(tlb_vpns, tlbsize, sizeof(*tlb_vpns), r4k_entry_cmp, NULL);

-		/* Prevent any large pages from overlapping regular ones. */
-		write_c0_pagemask(read_c0_pagemask() & PM_DEFAULT_MASK);
-		mtc0_tlbw_hazard();
-		tlb_write_indexed();
-		tlbw_use_hazard();
-	}
-
-	sort(tlb_vpns, cnt, sizeof(tlb_vpns[0]), r4k_vpn_cmp, NULL);
+	r4k_tlb_uniquify_write(tlb_vpns, tlbsize);

 	write_c0_pagemask(PM_DEFAULT_MASK);
-	write_c0_entrylo0(0);
-	write_c0_entrylo1(0);
-
-	idx = 0;
-	ent = tlbsize;
-	for (i = start; i < tlbsize; i++)
-		while (1) {
-			unsigned long entryhi, vpn;
-
-			entryhi = UNIQUE_ENTRYHI(ent);
-			vpn = entryhi & vpn_mask & PAGE_MASK;
-
-			if (idx >= cnt || vpn < tlb_vpns[idx]) {
-				write_c0_entryhi(entryhi);
-				write_c0_index(i);
-				mtc0_tlbw_hazard();
-				tlb_write_indexed();
-				ent++;
-				break;
-			} else if (vpn == tlb_vpns[idx]) {
-				ent++;
-			} else {
-				idx++;
-			}
-		}

 	tlbw_use_hazard();
 	htw_start();
···
 	temp_tlb_entry = current_cpu_data.tlbsize - 1;

 	/* From this point on the ARC firmware is dead.	*/
-	r4k_tlb_uniquify();
+	if (!cpu_has_tlbinv)
+		r4k_tlb_uniquify();
 	local_flush_tlb_all();

 	/* Did I tell you that ARC SUCKS?  */
+4 -4
arch/mips/ralink/clk.c
···
 {
 	switch (ralink_soc) {
 	case RT2880_SOC:
-		*idx = 0;
+		*idx = 1;
 		return "ralink,rt2880-sysc";
 	case RT3883_SOC:
-		*idx = 0;
+		*idx = 1;
 		return "ralink,rt3883-sysc";
 	case RT305X_SOC_RT3050:
-		*idx = 0;
+		*idx = 1;
 		return "ralink,rt3050-sysc";
 	case RT305X_SOC_RT3052:
-		*idx = 0;
+		*idx = 1;
 		return "ralink,rt3052-sysc";
 	case RT305X_SOC_RT3350:
 		*idx = 1;
+2 -2
arch/powerpc/kernel/dma-iommu.c
···
 }
 bool arch_dma_alloc_direct(struct device *dev)
 {
-	if (dev->dma_ops_bypass)
+	if (dev->dma_ops_bypass && dev->bus_dma_limit)
 		return true;

 	return false;
···

 bool arch_dma_free_direct(struct device *dev, dma_addr_t dma_handle)
 {
-	if (!dev->dma_ops_bypass)
+	if (!dev->dma_ops_bypass || !dev->bus_dma_limit)
 		return false;

 	return is_direct_handle(dev, dma_handle);
+4
arch/riscv/include/asm/runtime-const.h
···
 #ifndef _ASM_RISCV_RUNTIME_CONST_H
 #define _ASM_RISCV_RUNTIME_CONST_H

+#ifdef MODULE
+#error "Cannot use runtime-const infrastructure from modules"
+#endif
+
 #include <asm/asm.h>
 #include <asm/alternative.h>
 #include <asm/cacheflush.h>
+7 -6
arch/riscv/include/uapi/asm/ptrace.h
···
 #ifndef __ASSEMBLER__

 #include <linux/types.h>
+#include <linux/const.h>

 #define PTRACE_GETFDPIC 33

···
 #define PTRACE_CFI_SS_LOCK_BIT		4
 #define PTRACE_CFI_SS_PTR_BIT		5

-#define PTRACE_CFI_LP_EN_STATE		BIT(PTRACE_CFI_LP_EN_BIT)
-#define PTRACE_CFI_LP_LOCK_STATE	BIT(PTRACE_CFI_LP_LOCK_BIT)
-#define PTRACE_CFI_ELP_STATE		BIT(PTRACE_CFI_ELP_BIT)
-#define PTRACE_CFI_SS_EN_STATE		BIT(PTRACE_CFI_SS_EN_BIT)
-#define PTRACE_CFI_SS_LOCK_STATE	BIT(PTRACE_CFI_SS_LOCK_BIT)
-#define PTRACE_CFI_SS_PTR_STATE		BIT(PTRACE_CFI_SS_PTR_BIT)
+#define PTRACE_CFI_LP_EN_STATE		_BITUL(PTRACE_CFI_LP_EN_BIT)
+#define PTRACE_CFI_LP_LOCK_STATE	_BITUL(PTRACE_CFI_LP_LOCK_BIT)
+#define PTRACE_CFI_ELP_STATE		_BITUL(PTRACE_CFI_ELP_BIT)
+#define PTRACE_CFI_SS_EN_STATE		_BITUL(PTRACE_CFI_SS_EN_BIT)
+#define PTRACE_CFI_SS_LOCK_STATE	_BITUL(PTRACE_CFI_SS_LOCK_BIT)
+#define PTRACE_CFI_SS_PTR_STATE		_BITUL(PTRACE_CFI_SS_PTR_BIT)

 #define PRACE_CFI_STATE_INVALID_MASK	~(PTRACE_CFI_LP_EN_STATE | \
 					  PTRACE_CFI_LP_LOCK_STATE | \
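Background on the substitution: BIT() comes from kernel-internal headers that are not exported to userspace, while _BITUL()/_BITULL() live in the uapi header <linux/const.h>, so uapi headers like this one must use the latter. A quick userspace check against the installed header (the printed values match the flag positions above):

    #include <linux/const.h>   /* uapi header: provides _BITUL()/_BITULL() */
    #include <stdio.h>

    int main(void)
    {
        /* Same expansion the PTRACE_CFI_*_STATE flags now rely on. */
        printf("_BITUL(4) = %lu\n", _BITUL(4));   /* 16 */
        printf("_BITUL(5) = %lu\n", _BITUL(5));   /* 32 */
        return 0;
    }
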
+4 -3
arch/riscv/kernel/kgdb.c
···
 	{DBG_REG_T1, GDB_SIZEOF_REG, offsetof(struct pt_regs, t1)},
 	{DBG_REG_T2, GDB_SIZEOF_REG, offsetof(struct pt_regs, t2)},
 	{DBG_REG_FP, GDB_SIZEOF_REG, offsetof(struct pt_regs, s0)},
-	{DBG_REG_S1, GDB_SIZEOF_REG, offsetof(struct pt_regs, a1)},
+	{DBG_REG_S1, GDB_SIZEOF_REG, offsetof(struct pt_regs, s1)},
 	{DBG_REG_A0, GDB_SIZEOF_REG, offsetof(struct pt_regs, a0)},
 	{DBG_REG_A1, GDB_SIZEOF_REG, offsetof(struct pt_regs, a1)},
 	{DBG_REG_A2, GDB_SIZEOF_REG, offsetof(struct pt_regs, a2)},
···
 	gdb_regs[DBG_REG_S6_OFF] = task->thread.s[6];
 	gdb_regs[DBG_REG_S7_OFF] = task->thread.s[7];
 	gdb_regs[DBG_REG_S8_OFF] = task->thread.s[8];
-	gdb_regs[DBG_REG_S9_OFF] = task->thread.s[10];
-	gdb_regs[DBG_REG_S10_OFF] = task->thread.s[11];
+	gdb_regs[DBG_REG_S9_OFF] = task->thread.s[9];
+	gdb_regs[DBG_REG_S10_OFF] = task->thread.s[10];
+	gdb_regs[DBG_REG_S11_OFF] = task->thread.s[11];
 	gdb_regs[DBG_REG_EPC_OFF] = task->thread.ra;
 }

+11 -10
arch/riscv/kernel/patch.c
···
 static __always_inline void *patch_map(void *addr, const unsigned int fixmap)
 {
 	uintptr_t uintaddr = (uintptr_t) addr;
-	struct page *page;
+	phys_addr_t phys;

-	if (core_kernel_text(uintaddr) || is_kernel_exittext(uintaddr))
-		page = phys_to_page(__pa_symbol(addr));
-	else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
-		page = vmalloc_to_page(addr);
-	else
+	if (core_kernel_text(uintaddr) || is_kernel_exittext(uintaddr)) {
+		phys = __pa_symbol(addr);
+	} else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) {
+		struct page *page = vmalloc_to_page(addr);
+
+		BUG_ON(!page);
+		phys = page_to_phys(page) + offset_in_page(addr);
+	} else {
 		return addr;
+	}

-	BUG_ON(!page);
-
-	return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
-					 offset_in_page(addr));
+	return (void *)set_fixmap_offset(fixmap, phys);
 }

 static void patch_unmap(int fixmap)
+3 -1
arch/riscv/kernel/process.c
···
 	if (arg & PR_TAGGED_ADDR_ENABLE && (tagged_addr_disabled || !pmlen))
 		return -EINVAL;

-	if (!(arg & PR_TAGGED_ADDR_ENABLE))
+	if (!(arg & PR_TAGGED_ADDR_ENABLE)) {
 		pmlen = PMLEN_0;
+		pmm = ENVCFG_PMM_PMLEN_0;
+	}

 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
+5 -1
arch/s390/kernel/perf_cpum_sf.c
···
 static void hw_perf_event_update(struct perf_event *event, int flush_all)
 {
 	unsigned long long event_overflow, sampl_overflow, num_sdb;
+	struct cpu_hw_sf *cpuhw = this_cpu_ptr(&cpu_hw_sf);
 	struct hw_perf_event *hwc = &event->hw;
 	union hws_trailer_header prev, new;
 	struct hws_trailer_entry *te;
···
 	 * are dropped.
 	 * Slightly increase the interval to avoid hitting this limit.
 	 */
-	if (event_overflow)
+	if (event_overflow) {
 		SAMPL_RATE(hwc) += DIV_ROUND_UP(SAMPL_RATE(hwc), 10);
+		if (SAMPL_RATE(hwc) > cpuhw->qsi.max_sampl_rate)
+			SAMPL_RATE(hwc) = cpuhw->qsi.max_sampl_rate;
+	}
 }

 static inline unsigned long aux_sdb_index(struct aux_buffer *aux,
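The adjustment above is a back-off of roughly 10% per overflow, now bounded by the QSI-reported hardware maximum. Restated as plain arithmetic (bump_interval() is an illustrative name, not a kernel helper):

    /* Grow 'rate' by ceil(rate / 10) per overflow, but never past 'max'. */
    static unsigned long bump_interval(unsigned long rate, unsigned long max)
    {
        rate += (rate + 9) / 10;   /* same as DIV_ROUND_UP(rate, 10) */
        return rate > max ? max : rate;
    }
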
+4 -2
arch/x86/events/intel/core.c
···
 		intel_pmu_set_acr_caused_constr(leader, idx++, cause_mask);

 	if (leader->nr_siblings) {
-		for_each_sibling_event(sibling, leader)
-			intel_pmu_set_acr_caused_constr(sibling, idx++, cause_mask);
+		for_each_sibling_event(sibling, leader) {
+			if (is_x86_event(sibling))
+				intel_pmu_set_acr_caused_constr(sibling, idx++, cause_mask);
+		}
 	}

 	if (leader != event)
+14
arch/x86/kernel/Makefile
···
 KCOV_INSTRUMENT_unwind_frame.o := n
 KCOV_INSTRUMENT_unwind_guess.o := n

+# Disable KCOV to prevent crashes during kexec: load_segments() invalidates
+# the GS base, which KCOV relies on for per-CPU data.
+#
+# As KCOV and KEXEC compatibility should be preserved (e.g. syzkaller is
+# using it to collect crash dumps during kernel fuzzing), disabling
+# KCOV for KEXEC kernels is not an option. Selectively disabling KCOV
+# instrumentation for individual affected functions can be fragile, while
+# adding more checks to KCOV would slow it down.
+#
+# As a compromise solution, disable KCOV instrumentation for the whole
+# source code file. If its coverage is ever needed, other approaches
+# should be considered.
+KCOV_INSTRUMENT_machine_kexec_64.o := n
+
 CFLAGS_head32.o := -fno-stack-protector
 CFLAGS_head64.o := -fno-stack-protector
 CFLAGS_irq.o := -I $(src)/../include/asm/trace
+2
arch/x86/mm/Makefile
···
 KCOV_INSTRUMENT_mem_encrypt.o := n
 KCOV_INSTRUMENT_mem_encrypt_amd.o := n
 KCOV_INSTRUMENT_pgprot.o := n
+# See the "Disable KCOV" comment in arch/x86/kernel/Makefile.
+KCOV_INSTRUMENT_physaddr.o := n

 KASAN_SANITIZE_mem_encrypt.o := n
 KASAN_SANITIZE_mem_encrypt_amd.o := n
+18 -6
arch/x86/platform/geode/geode-common.c
···
 	.properties = geode_gpio_keys_props,
 };

-static struct property_entry geode_restart_key_props[] = {
-	{ /* Placeholder for GPIO property */ },
+static struct software_node_ref_args geode_restart_gpio_ref;
+
+static const struct property_entry geode_restart_key_props[] = {
+	PROPERTY_ENTRY_REF_ARRAY_LEN("gpios", &geode_restart_gpio_ref, 1),
 	PROPERTY_ENTRY_U32("linux,code", KEY_RESTART),
 	PROPERTY_ENTRY_STRING("label", "Reset button"),
 	PROPERTY_ENTRY_U32("debounce-interval", 100),
···
 	struct platform_device *pd;
 	int err;

-	geode_restart_key_props[0] = PROPERTY_ENTRY_GPIO("gpios",
-							 &geode_gpiochip_node,
-							 pin, GPIO_ACTIVE_LOW);
+	geode_restart_gpio_ref = SOFTWARE_NODE_REFERENCE(&geode_gpiochip_node,
+							 pin, GPIO_ACTIVE_LOW);

 	err = software_node_register_node_group(geode_gpio_keys_swnodes);
···
 	const struct software_node *group[MAX_LEDS + 2] = { 0 };
 	struct software_node *swnodes;
 	struct property_entry *props;
+	struct software_node_ref_args *gpio_refs;
 	struct platform_device_info led_info = {
 		.name = "leds-gpio",
 		.id = PLATFORM_DEVID_NONE,
···
 		goto err_free_swnodes;
 	}

+	gpio_refs = kzalloc_objs(*gpio_refs, n_leds);
+	if (!gpio_refs) {
+		err = -ENOMEM;
+		goto err_free_props;
+	}
+
 	group[0] = &geode_gpio_leds_node;
 	for (i = 0; i < n_leds; i++) {
 		node_name = kasprintf(GFP_KERNEL, "%s:%d", label, i);
···
 			goto err_free_names;
 		}

+		gpio_refs[i] = SOFTWARE_NODE_REFERENCE(&geode_gpiochip_node,
+						       leds[i].pin,
+						       GPIO_ACTIVE_LOW);
 		props[i * 3 + 0] =
-			PROPERTY_ENTRY_GPIO("gpios", &geode_gpiochip_node,
-					    leds[i].pin, GPIO_ACTIVE_LOW);
+			PROPERTY_ENTRY_REF_ARRAY_LEN("gpios", &gpio_refs[i], 1);
 		props[i * 3 + 1] =
 			PROPERTY_ENTRY_STRING("linux,default-trigger",
 					      leds[i].default_on ?
···
 err_free_names:
 	while (--i >= 0)
 		kfree(swnodes[i].name);
+	kfree(gpio_refs);
+err_free_props:
 	kfree(props);
 err_free_swnodes:
 	kfree(swnodes);
+13 -40
crypto/af_alg.c
···
 	sg_init_table(sgl->sg, MAX_SGL_ENTS + 1);
 	sgl->cur = 0;

-	if (sg)
+	if (sg) {
+		sg_unmark_end(sg + MAX_SGL_ENTS - 1);
 		sg_chain(sg, MAX_SGL_ENTS + 1, sgl->sg);
+	}

 	list_add_tail(&sgl->list, &ctx->tsgl_list);
 }
···
 /**
  * af_alg_count_tsgl - Count number of TX SG entries
  *
- * The counting starts from the beginning of the SGL to @bytes. If
- * an @offset is provided, the counting of the SG entries starts at the @offset.
+ * The counting starts from the beginning of the SGL to @bytes.
  *
  * @sk: socket of connection to user space
  * @bytes: Count the number of SG entries holding given number of bytes.
- * @offset: Start the counting of SG entries from the given offset.
  * Return: Number of TX SG entries found given the constraints
  */
-unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset)
+unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes)
 {
 	const struct alg_sock *ask = alg_sk(sk);
 	const struct af_alg_ctx *ctx = ask->private;
···
 		const struct scatterlist *sg = sgl->sg;

 		for (i = 0; i < sgl->cur; i++) {
-			size_t bytes_count;
-
-			/* Skip offset */
-			if (offset >= sg[i].length) {
-				offset -= sg[i].length;
-				bytes -= sg[i].length;
-				continue;
-			}
-
-			bytes_count = sg[i].length - offset;
-
-			offset = 0;
 			sgl_count++;
-
-			/* If we have seen requested number of bytes, stop */
-			if (bytes_count >= bytes)
+			if (sg[i].length >= bytes)
 				return sgl_count;

-			bytes -= bytes_count;
+			bytes -= sg[i].length;
 		}
 	}

···
  * af_alg_pull_tsgl - Release the specified buffers from TX SGL
  *
  * If @dst is non-null, reassign the pages to @dst. The caller must release
- * the pages. If @dst_offset is given only reassign the pages to @dst starting
- * at the @dst_offset (byte). The caller must ensure that @dst is large
- * enough (e.g. by using af_alg_count_tsgl with the same offset).
+ * the pages.
  *
  * @sk: socket of connection to user space
  * @used: Number of bytes to pull from TX SGL
  * @dst: If non-NULL, buffer is reassigned to dst SGL instead of releasing. The
  *	 caller must release the buffers in dst.
- * @dst_offset: Reassign the TX SGL from given offset. All buffers before
- *		reaching the offset is released.
  */
-void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
-		      size_t dst_offset)
+void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst)
 {
 	struct alg_sock *ask = alg_sk(sk);
 	struct af_alg_ctx *ctx = ask->private;
···
 		 * SG entries in dst.
 		 */
 		if (dst) {
-			if (dst_offset >= plen) {
-				/* discard page before offset */
-				dst_offset -= plen;
-			} else {
-				/* reassign page to dst after offset */
-				get_page(page);
-				sg_set_page(dst + j, page,
-					    plen - dst_offset,
-					    sg[i].offset + dst_offset);
-				dst_offset = 0;
-				j++;
-			}
+			/* reassign page to dst after offset */
+			get_page(page);
+			sg_set_page(dst + j, page, plen, sg[i].offset);
+			j++;
 		}

 		sg[i].length -= plen;
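The simplified counting rule is: walk the entries in order and stop at the first one whose length covers the remaining byte count. A detached sketch of the same walk over a plain array (illustrative only, not the kernel scatterlist type):

    #include <stddef.h>

    /* Count entries needed to cover 'bytes', mirroring af_alg_count_tsgl(). */
    static unsigned int count_entries(const size_t *lens, size_t n, size_t bytes)
    {
        unsigned int count = 0;
        size_t i;

        for (i = 0; i < n; i++) {
            count++;
            if (lens[i] >= bytes)
                break;
            bytes -= lens[i];
        }
        return count;
    }
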
+19 -81
crypto/algif_aead.c
···
 #include <crypto/internal/aead.h>
 #include <crypto/scatterwalk.h>
 #include <crypto/if_alg.h>
-#include <crypto/skcipher.h>
 #include <linux/init.h>
 #include <linux/list.h>
 #include <linux/kernel.h>
···
 	struct alg_sock *pask = alg_sk(psk);
 	struct af_alg_ctx *ctx = ask->private;
 	struct crypto_aead *tfm = pask->private;
-	unsigned int i, as = crypto_aead_authsize(tfm);
+	unsigned int as = crypto_aead_authsize(tfm);
 	struct af_alg_async_req *areq;
-	struct af_alg_tsgl *tsgl, *tmp;
 	struct scatterlist *rsgl_src, *tsgl_src = NULL;
 	int err = 0;
 	size_t used = 0;		/* [in] TX bufs to be en/decrypted */
···
 		outlen -= less;
 	}

+	/*
+	 * Create a per request TX SGL for this request which tracks the
+	 * SG entries from the global TX SGL.
+	 */
 	processed = used + ctx->aead_assoclen;
-	list_for_each_entry_safe(tsgl, tmp, &ctx->tsgl_list, list) {
-		for (i = 0; i < tsgl->cur; i++) {
-			struct scatterlist *process_sg = tsgl->sg + i;
-
-			if (!(process_sg->length) || !sg_page(process_sg))
-				continue;
-			tsgl_src = process_sg;
-			break;
-		}
-		if (tsgl_src)
-			break;
-	}
-	if (processed && !tsgl_src) {
-		err = -EFAULT;
+	areq->tsgl_entries = af_alg_count_tsgl(sk, processed);
+	if (!areq->tsgl_entries)
+		areq->tsgl_entries = 1;
+	areq->tsgl = sock_kmalloc(sk, array_size(sizeof(*areq->tsgl),
+						 areq->tsgl_entries),
+				  GFP_KERNEL);
+	if (!areq->tsgl) {
+		err = -ENOMEM;
 		goto free;
 	}
+	sg_init_table(areq->tsgl, areq->tsgl_entries);
+	af_alg_pull_tsgl(sk, processed, areq->tsgl);
+	tsgl_src = areq->tsgl;

 	/*
 	 * Copy of AAD from source to destination
···
 	 * when user space uses an in-place cipher operation, the kernel
 	 * will copy the data as it does not see whether such in-place operation
 	 * is initiated.
-	 *
-	 * To ensure efficiency, the following implementation ensure that the
-	 * ciphers are invoked to perform a crypto operation in-place. This
-	 * is achieved by memory management specified as follows.
 	 */

 	/* Use the RX SGL as source (and destination) for crypto op. */
 	rsgl_src = areq->first_rsgl.sgl.sgt.sgl;

-	if (ctx->enc) {
-		/*
-		 * Encryption operation - The in-place cipher operation is
-		 * achieved by the following operation:
-		 *
-		 * TX SGL: AAD || PT
-		 *         |      |
-		 *         | copy |
-		 *         v      v
-		 * RX SGL: AAD || PT || Tag
-		 */
-		memcpy_sglist(areq->first_rsgl.sgl.sgt.sgl, tsgl_src,
-			      processed);
-		af_alg_pull_tsgl(sk, processed, NULL, 0);
-	} else {
-		/*
-		 * Decryption operation - To achieve an in-place cipher
-		 * operation, the following SGL structure is used:
-		 *
-		 * TX SGL: AAD || CT || Tag
-		 *         |      |     ^
-		 *         | copy |     | Create SGL link.
-		 *         v      v     |
-		 * RX SGL: AAD || CT ----+
-		 */
-
-		/* Copy AAD || CT to RX SGL buffer for in-place operation. */
-		memcpy_sglist(areq->first_rsgl.sgl.sgt.sgl, tsgl_src, outlen);
-
-		/* Create TX SGL for tag and chain it to RX SGL. */
-		areq->tsgl_entries = af_alg_count_tsgl(sk, processed,
-						       processed - as);
-		if (!areq->tsgl_entries)
-			areq->tsgl_entries = 1;
-		areq->tsgl = sock_kmalloc(sk, array_size(sizeof(*areq->tsgl),
-							 areq->tsgl_entries),
-					  GFP_KERNEL);
-		if (!areq->tsgl) {
-			err = -ENOMEM;
-			goto free;
-		}
-		sg_init_table(areq->tsgl, areq->tsgl_entries);
-
-		/* Release TX SGL, except for tag data and reassign tag data. */
-		af_alg_pull_tsgl(sk, processed, areq->tsgl, processed - as);
-
-		/* chain the areq TX SGL holding the tag with RX SGL */
-		if (usedpages) {
-			/* RX SGL present */
-			struct af_alg_sgl *sgl_prev = &areq->last_rsgl->sgl;
-			struct scatterlist *sg = sgl_prev->sgt.sgl;
-
-			sg_unmark_end(sg + sgl_prev->sgt.nents - 1);
-			sg_chain(sg, sgl_prev->sgt.nents + 1, areq->tsgl);
-		} else
-			/* no RX SGL present (e.g. authentication only) */
-			rsgl_src = areq->tsgl;
-	}
+	memcpy_sglist(rsgl_src, tsgl_src, ctx->aead_assoclen);

 	/* Initialize the crypto operation */
-	aead_request_set_crypt(&areq->cra_u.aead_req, rsgl_src,
+	aead_request_set_crypt(&areq->cra_u.aead_req, tsgl_src,
 			       areq->first_rsgl.sgl.sgt.sgl, used, ctx->iv);
 	aead_request_set_ad(&areq->cra_u.aead_req, ctx->aead_assoclen);
 	aead_request_set_tfm(&areq->cra_u.aead_req, tfm);
···
 	struct crypto_aead *tfm = pask->private;
 	unsigned int ivlen = crypto_aead_ivsize(tfm);

-	af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
+	af_alg_pull_tsgl(sk, ctx->used, NULL);
 	sock_kzfree_s(sk, ctx->iv, ivlen);
 	sock_kfree_s(sk, ctx, ctx->len);
 	af_alg_release_parent(sk);
+3 -3
crypto/algif_skcipher.c
···
 	 * Create a per request TX SGL for this request which tracks the
 	 * SG entries from the global TX SGL.
 	 */
-	areq->tsgl_entries = af_alg_count_tsgl(sk, len, 0);
+	areq->tsgl_entries = af_alg_count_tsgl(sk, len);
 	if (!areq->tsgl_entries)
 		areq->tsgl_entries = 1;
 	areq->tsgl = sock_kmalloc(sk, array_size(sizeof(*areq->tsgl),
···
 		goto free;
 	}
 	sg_init_table(areq->tsgl, areq->tsgl_entries);
-	af_alg_pull_tsgl(sk, len, areq->tsgl, 0);
+	af_alg_pull_tsgl(sk, len, areq->tsgl);

 	/* Initialize the crypto operation */
 	skcipher_request_set_tfm(&areq->cra_u.skcipher_req, tfm);
···
 	struct alg_sock *pask = alg_sk(psk);
 	struct crypto_skcipher *tfm = pask->private;

-	af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
+	af_alg_pull_tsgl(sk, ctx->used, NULL);
 	sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm));
 	if (ctx->state)
 		sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
+30 -20
crypto/authencesn.c
···
 	u8 *ohash = areq_ctx->tail;
 	unsigned int cryptlen = req->cryptlen - authsize;
 	unsigned int assoclen = req->assoclen;
+	struct scatterlist *src = req->src;
 	struct scatterlist *dst = req->dst;
 	u8 *ihash = ohash + crypto_ahash_digestsize(auth);
 	u32 tmp[2];
···
 	if (!authsize)
 		goto decrypt;

-	/* Move high-order bits of sequence number back. */
-	scatterwalk_map_and_copy(tmp, dst, 4, 4, 0);
-	scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 0);
-	scatterwalk_map_and_copy(tmp, dst, 0, 8, 1);
+	if (src == dst) {
+		/* Move high-order bits of sequence number back. */
+		scatterwalk_map_and_copy(tmp, dst, 4, 4, 0);
+		scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 0);
+		scatterwalk_map_and_copy(tmp, dst, 0, 8, 1);
+	} else
+		memcpy_sglist(dst, src, assoclen);

 	if (crypto_memneq(ihash, ohash, authsize))
 		return -EBADMSG;

 decrypt:

-	sg_init_table(areq_ctx->dst, 2);
+	if (src != dst)
+		src = scatterwalk_ffwd(areq_ctx->src, src, assoclen);
 	dst = scatterwalk_ffwd(areq_ctx->dst, dst, assoclen);

 	skcipher_request_set_tfm(skreq, ctx->enc);
 	skcipher_request_set_callback(skreq, flags,
 				      req->base.complete, req->base.data);
-	skcipher_request_set_crypt(skreq, dst, dst, cryptlen, req->iv);
+	skcipher_request_set_crypt(skreq, src, dst, cryptlen, req->iv);

 	return crypto_skcipher_decrypt(skreq);
 }
···
 	unsigned int assoclen = req->assoclen;
 	unsigned int cryptlen = req->cryptlen;
 	u8 *ihash = ohash + crypto_ahash_digestsize(auth);
+	struct scatterlist *src = req->src;
 	struct scatterlist *dst = req->dst;
 	u32 tmp[2];
 	int err;
···
 	if (assoclen < 8)
 		return -EINVAL;

-	cryptlen -= authsize;
-
-	if (req->src != dst)
-		memcpy_sglist(dst, req->src, assoclen + cryptlen);
-
-	scatterwalk_map_and_copy(ihash, req->src, assoclen + cryptlen,
-				 authsize, 0);
-
 	if (!authsize)
 		goto tail;

-	/* Move high-order bits of sequence number to the end. */
-	scatterwalk_map_and_copy(tmp, dst, 0, 8, 0);
-	scatterwalk_map_and_copy(tmp, dst, 4, 4, 1);
-	scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 1);
+	cryptlen -= authsize;
+	scatterwalk_map_and_copy(ihash, req->src, assoclen + cryptlen,
+				 authsize, 0);

-	sg_init_table(areq_ctx->dst, 2);
-	dst = scatterwalk_ffwd(areq_ctx->dst, dst, 4);
+	/* Move high-order bits of sequence number to the end. */
+	scatterwalk_map_and_copy(tmp, src, 0, 8, 0);
+	if (src == dst) {
+		scatterwalk_map_and_copy(tmp, dst, 4, 4, 1);
+		scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 1);
+		dst = scatterwalk_ffwd(areq_ctx->dst, dst, 4);
+	} else {
+		scatterwalk_map_and_copy(tmp, dst, 0, 4, 1);
+		scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen - 4, 4, 1);
+
+		src = scatterwalk_ffwd(areq_ctx->src, src, 8);
+		dst = scatterwalk_ffwd(areq_ctx->dst, dst, 4);
+		memcpy_sglist(dst, src, assoclen + cryptlen - 8);
+		dst = req->dst;
+	}

 	ahash_request_set_tfm(ahreq, auth);
 	ahash_request_set_crypt(ahreq, dst, ohash, assoclen + cryptlen);
+7 -4
crypto/deflate.c
···

 	do {
 		unsigned int dcur;
+		unsigned long avail_in;

 		dcur = acomp_walk_next_dst(&walk);
-		if (!dcur) {
-			out_of_space = true;
-			break;
-		}

 		stream->avail_out = dcur;
 		stream->next_out = walk.dst.virt.addr;
+		avail_in = stream->avail_in;

 		ret = zlib_inflate(stream, Z_NO_FLUSH);
+
+		if (!dcur && avail_in == stream->avail_in) {
+			out_of_space = true;
+			break;
+		}

 		dcur -= stream->avail_out;
 		acomp_walk_done_dst(&walk, dcur);
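The same no-progress rule applies to ordinary userspace zlib: an inflate loop that can neither consume input nor write output must bail out rather than spin. A hedged sketch with the standard zlib API (error handling trimmed; the check deliberately mirrors the kernel logic above):

    #include <string.h>
    #include <zlib.h>

    /* Decompress src into dst; returns bytes produced or -1 on failure. */
    static long inflate_all(const unsigned char *src, size_t srclen,
                            unsigned char *dst, size_t dstlen)
    {
        z_stream s;
        int rc;

        memset(&s, 0, sizeof(s));          /* default zalloc/zfree */
        if (inflateInit(&s) != Z_OK)
            return -1;
        s.next_in = (unsigned char *)src;
        s.avail_in = srclen;
        s.next_out = dst;
        s.avail_out = dstlen;

        do {
            uInt in_before = s.avail_in;
            uInt out_before = s.avail_out;

            rc = inflate(&s, Z_NO_FLUSH);
            /* No input consumed and no output written: stuck, not merely
             * between buffers, so stop instead of looping forever. */
            if (rc != Z_STREAM_END &&
                s.avail_in == in_before && s.avail_out == out_before)
                break;
        } while (rc == Z_OK);

        inflateEnd(&s);
        return rc == Z_STREAM_END ? (long)(dstlen - s.avail_out) : -1;
    }
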
+45 -2
drivers/accel/qaic/qaic_control.c
···
 	 */
 		return -ENODEV;

-	if (status) {
+	if (usr && status) {
 		/*
 		 * Releasing resources failed on the device side, which puts
 		 * us in a bind since they may still be in use, so enable the
···
 	mutex_lock(&qdev->cntl_mutex);
 	if (!list_empty(&elem.list))
 		list_del(&elem.list);
+	/* resp_worker() processed the response but the wait was interrupted */
+	else if (ret == -ERESTARTSYS)
+		ret = 0;
 	if (!ret && !elem.buf)
 		ret = -ETIMEDOUT;
 	else if (ret > 0 && !elem.buf)
···
 	}
 	mutex_unlock(&qdev->cntl_mutex);

-	if (!found)
+	if (!found) {
+		/*
+		 * The user might have gone away at this point without waiting
+		 * for QAIC_TRANS_DEACTIVATE_FROM_DEV transaction coming from
+		 * the device. If this is not handled correctly, the host will
+		 * not know that the DBC[n] has been freed on the device.
+		 * Due to this failure in synchronization between the device and
+		 * the host, if another user requests to activate a network, and
+		 * the device assigns DBC[n] again, save_dbc_buf() will hang,
+		 * waiting for dbc[n]->in_use to be set to false, which will not
+		 * happen unless the qaic_dev_reset_clean_local_state() gets
+		 * called by resetting the device (or re-inserting the module).
+		 *
+		 * As a solution, we look for QAIC_TRANS_DEACTIVATE_FROM_DEV
+		 * transactions in the message before disposing of it, then
+		 * handle releasing the DBC resources.
+		 *
+		 * Since the user has gone away, if the device could not
+		 * deactivate the network (status != 0), there is no way to
+		 * enable and reassign the DBC to the user. We can put trust in
+		 * the device that it will release all the active DBCs in
+		 * response to the QAIC_TRANS_TERMINATE_TO_DEV transaction,
+		 * otherwise, the user can issue an soc_reset to the device.
+		 */
+		u32 msg_count = le32_to_cpu(msg->hdr.count);
+		u32 msg_len = le32_to_cpu(msg->hdr.len);
+		u32 len = 0;
+		int j;
+
+		for (j = 0; j < msg_count && len < msg_len; ++j) {
+			struct wire_trans_hdr *trans_hdr;
+
+			trans_hdr = (struct wire_trans_hdr *)(msg->data + len);
+			if (le32_to_cpu(trans_hdr->type) == QAIC_TRANS_DEACTIVATE_FROM_DEV) {
+				if (decode_deactivate(qdev, trans_hdr, &len, NULL))
+					len += le32_to_cpu(trans_hdr->len);
+			} else {
+				len += le32_to_cpu(trans_hdr->len);
+			}
+		}
 		/* request must have timed out, drop packet */
 		kfree(msg);
+	}

 	kfree(resp);
 }
+7
drivers/acpi/riscv/rimt.c
···
 	if (!rimt_fwnode)
 		return -EPROBE_DEFER;

+	/*
+	 * EPROBE_DEFER ensures IOMMU is probed before the devices that
+	 * depend on them. During shutdown, however, the IOMMU may be removed
+	 * first, leading to issues. To avoid this, a device link is added
+	 * which enforces the correct removal order.
+	 */
+	device_link_add(dev, rimt_fwnode->dev, DL_FLAG_AUTOREMOVE_CONSUMER);
 	return acpi_iommu_fwspec_init(dev, deviceid, rimt_fwnode);
 }

+5 -3
drivers/android/binder/page_range.rs
···
 //
 // The shrinker will use trylock methods because it locks them in a different order.

+use crate::AssertSync;
+
 use core::{
     marker::PhantomPinned,
     mem::{size_of, size_of_val, MaybeUninit},
···
 }

 // We do not define any ops. For now, used only to check identity of vmas.
-static BINDER_VM_OPS: bindings::vm_operations_struct = pin_init::zeroed();
+static BINDER_VM_OPS: AssertSync<bindings::vm_operations_struct> = AssertSync(pin_init::zeroed());

 // To ensure that we do not accidentally install pages into or zap pages from the wrong vma, we
 // check its vm_ops and private data before using it.
 fn check_vma(vma: &virt::VmaRef, owner: *const ShrinkablePageRange) -> Option<&virt::VmaMixedMap> {
     // SAFETY: Just reading the vm_ops pointer of any active vma is safe.
     let vm_ops = unsafe { (*vma.as_ptr()).vm_ops };
-    if !ptr::eq(vm_ops, &BINDER_VM_OPS) {
+    if !ptr::eq(vm_ops, &BINDER_VM_OPS.0) {
         return None;
     }

···

         // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on
         // `vm_ops`.
-        unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS };
+        unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS.0 };

         Ok(num_pages)
     }
+1 -1
drivers/android/binder/rust_binder_main.rs
··· 306 306 /// Makes the inner type Sync. 307 307 #[repr(transparent)] 308 308 pub struct AssertSync<T>(T); 309 - // SAFETY: Used only to insert `file_operations` into a global, which is safe. 309 + // SAFETY: Used only to insert C bindings types into globals, which is safe. 310 310 unsafe impl<T> Sync for AssertSync<T> {} 311 311 312 312 /// File operations that rust_binderfs.c can use.
+12 -3
drivers/auxdisplay/lcd2s.c
··· 99 99 { 100 100 struct lcd2s_data *lcd2s = lcd->drvdata; 101 101 u8 buf[2] = { LCD2S_CMD_WRITE, c }; 102 + int ret; 102 103 103 - lcd2s_i2c_master_send(lcd2s->i2c, buf, sizeof(buf)); 104 + ret = lcd2s_i2c_master_send(lcd2s->i2c, buf, sizeof(buf)); 105 + if (ret < 0) 106 + return ret; 107 + if (ret != sizeof(buf)) 108 + return -EIO; 104 109 return 0; 105 110 } 106 111 ··· 113 108 { 114 109 struct lcd2s_data *lcd2s = lcd->drvdata; 115 110 u8 buf[3] = { LCD2S_CMD_CUR_POS, y + 1, x + 1 }; 111 + int ret; 116 112 117 - lcd2s_i2c_master_send(lcd2s->i2c, buf, sizeof(buf)); 118 - 113 + ret = lcd2s_i2c_master_send(lcd2s->i2c, buf, sizeof(buf)); 114 + if (ret < 0) 115 + return ret; 116 + if (ret != sizeof(buf)) 117 + return -EIO; 119 118 return 0; 120 119 } 121 120
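The added checks follow the usual i2c_master_send() contract, which the lcd2s wrapper evidently mirrors given the sizeof() comparison above: the call returns the number of bytes transferred on success or a negative errno, so a transfer can also complete short. In sketch form (send_all() is a hypothetical helper):

#include <linux/errno.h>
#include <linux/i2c.h>

/* Sketch: treat anything but a full transfer as an error. */
static int send_all(struct i2c_client *client, const char *buf, int len)
{
	int ret = i2c_master_send(client, buf, len);

	if (ret < 0)
		return ret;	/* adapter or bus error */
	if (ret != len)
		return -EIO;	/* short transfer */
	return 0;
}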
+1 -1
drivers/auxdisplay/line-display.c
··· 365 365 366 366 static void linedisp_release(struct device *dev) 367 367 { 368 - struct linedisp *linedisp = to_linedisp(dev); 368 + struct linedisp *linedisp = container_of(dev, struct linedisp, dev); 369 369 370 370 kfree(linedisp->map); 371 371 kfree(linedisp->message);
-3
drivers/bluetooth/hci_h4.c
··· 109 109 { 110 110 struct h4_struct *h4 = hu->priv; 111 111 112 - if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 113 - return -EUNATCH; 114 - 115 112 h4->rx_skb = h4_recv_buf(hu, h4->rx_skb, data, count, 116 113 h4_recv_pkts, ARRAY_SIZE(h4_recv_pkts)); 117 114 if (IS_ERR(h4->rx_skb)) {
+5 -3
drivers/comedi/comedi_fops.c
··· 793 793 __comedi_clear_subdevice_runflags(s, COMEDI_SRF_RUNNING | 794 794 COMEDI_SRF_BUSY); 795 795 spin_unlock_irqrestore(&s->spin_lock, flags); 796 - if (comedi_is_runflags_busy(runflags)) { 796 + if (async) { 797 797 /* 798 798 * "Run active" counter was set to 1 when setting up the 799 799 * command. Decrement it and wait for it to become 0. 800 800 */ 801 - comedi_put_is_subdevice_running(s); 802 - wait_for_completion(&async->run_complete); 801 + if (comedi_is_runflags_busy(runflags)) { 802 + comedi_put_is_subdevice_running(s); 803 + wait_for_completion(&async->run_complete); 804 + } 803 805 comedi_buf_reset(s); 804 806 async->inttrig = NULL; 805 807 kfree(async->cmd.chanlist);
+8
drivers/comedi/drivers.c
··· 1063 1063 ret = -EIO; 1064 1064 goto out; 1065 1065 } 1066 + if (IS_ENABLED(CONFIG_LOCKDEP)) { 1067 + /* 1068 + * dev->spinlock is for private use by the attached low-level 1069 + * driver. Reinitialize it to stop lock-dependency tracking 1070 + * between attachments to different low-level drivers. 1071 + */ 1072 + spin_lock_init(&dev->spinlock); 1073 + } 1066 1074 dev->driver = driv; 1067 1075 dev->board_name = dev->board_ptr ? *(const char **)dev->board_ptr 1068 1076 : dev->driver->driver_name;
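IS_ENABLED() is what keeps this re-initialization free when lockdep is compiled out: it expands to a compile-time 1 or 0 (true for both =y and =m options), so the dead branch is eliminated entirely. A trivial sketch of the idiom, with demo_dev as a hypothetical structure:

#include <linux/kconfig.h>
#include <linux/spinlock.h>

struct demo_dev {
	spinlock_t lock;
};

static void demo_reinit(struct demo_dev *d)
{
	/*
	 * Compile-time constant condition: with CONFIG_LOCKDEP unset the
	 * compiler discards the call, so non-debug builds pay nothing.
	 */
	if (IS_ENABLED(CONFIG_LOCKDEP))
		spin_lock_init(&d->lock);
}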
+12
drivers/comedi/drivers/dt2815.c
··· 175 175 ? current_range_type : voltage_range_type; 176 176 } 177 177 178 + /* 179 + * Check if hardware is present before attempting any I/O operations. 180 + * Reading 0xff from the status register typically indicates no hardware 181 + * on the bus (floating bus reads as all 1s). 182 + */ 183 + if (inb(dev->iobase + DT2815_STATUS) == 0xff) { 184 + dev_err(dev->class_dev, 185 + "No hardware detected at I/O base 0x%lx\n", 186 + dev->iobase); 187 + return -ENODEV; 188 + } 189 + 178 190 /* Init the 2815 */ 179 191 outb(0x00, dev->iobase + DT2815_STATUS); 180 192 for (i = 0; i < 100; i++) {
+12 -4
drivers/comedi/drivers/me4000.c
··· 315 315 unsigned int val; 316 316 unsigned int i; 317 317 318 + /* Get data stream length from header. */ 319 + if (size >= 4) { 320 + file_length = (((unsigned int)data[0] & 0xff) << 24) + 321 + (((unsigned int)data[1] & 0xff) << 16) + 322 + (((unsigned int)data[2] & 0xff) << 8) + 323 + ((unsigned int)data[3] & 0xff); 324 + } 325 + if (size < 16 || file_length > size - 16) { 326 + dev_err(dev->class_dev, "Firmware length inconsistency\n"); 327 + return -EINVAL; 328 + } 329 + 318 330 if (!xilinx_iobase) 319 331 return -ENODEV; 320 332 ··· 358 346 outl(val, devpriv->plx_regbase + PLX9052_CNTRL); 359 347 360 348 /* Download Xilinx firmware */ 361 - file_length = (((unsigned int)data[0] & 0xff) << 24) + 362 - (((unsigned int)data[1] & 0xff) << 16) + 363 - (((unsigned int)data[2] & 0xff) << 8) + 364 - ((unsigned int)data[3] & 0xff); 365 349 usleep_range(10, 1000); 366 350 367 351 for (i = 0; i < file_length; i++) {
+19 -16
drivers/comedi/drivers/me_daq.c
··· 344 344 unsigned int file_length; 345 345 unsigned int i; 346 346 347 + /* 348 + * Format of the firmware 349 + * Build longs from the byte-wise coded header 350 + * Byte 1-3: length of the array 351 + * Byte 4-7: version 352 + * Byte 8-11: date 353 + * Byte 12-15: reserved 354 + */ 355 + if (size >= 4) { 356 + file_length = (((unsigned int)data[0] & 0xff) << 24) + 357 + (((unsigned int)data[1] & 0xff) << 16) + 358 + (((unsigned int)data[2] & 0xff) << 8) + 359 + ((unsigned int)data[3] & 0xff); 360 + } 361 + if (size < 16 || file_length > size - 16) { 362 + dev_err(dev->class_dev, "Firmware length inconsistency\n"); 363 + return -EINVAL; 364 + } 365 + 347 366 /* disable irq's on PLX */ 348 367 writel(0x00, devpriv->plx_regbase + PLX9052_INTCSR); 349 368 ··· 375 356 /* Write a dummy value to Xilinx */ 376 357 writeb(0x00, dev->mmio + 0x0); 377 358 sleep(1); 378 - 379 - /* 380 - * Format of the firmware 381 - * Build longs from the byte-wise coded header 382 - * Byte 1-3: length of the array 383 - * Byte 4-7: version 384 - * Byte 8-11: date 385 - * Byte 12-15: reserved 386 - */ 387 - if (size < 16) 388 - return -EINVAL; 389 - 390 - file_length = (((unsigned int)data[0] & 0xff) << 24) + 391 - (((unsigned int)data[1] & 0xff) << 16) + 392 - (((unsigned int)data[2] & 0xff) << 8) + 393 - ((unsigned int)data[3] & 0xff); 394 359 395 360 /* 396 361 * Loop for writing firmware byte by byte to xilinx
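Both firmware loaders above decode the same 32-bit big-endian length from the blob header by hand and validate it before touching the payload. The open-coded shifts are equivalent to get_unaligned_be32(), and the check reduces to two conditions: the blob must contain the 16-byte header, and the declared length must fit in what remains. A sketch under those assumptions (the helper name is hypothetical; the include is <linux/unaligned.h> in recent trees, <asm/unaligned.h> in older ones):

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/unaligned.h>

#define FW_HEADER_LEN	16

/* Sketch: validate a length-prefixed firmware blob before streaming it. */
static int fw_payload_length(const u8 *data, size_t size, u32 *file_length)
{
	if (size < FW_HEADER_LEN)
		return -EINVAL;

	*file_length = get_unaligned_be32(data);	/* header bytes 0-3 */
	if (*file_length > size - FW_HEADER_LEN)
		return -EINVAL;	/* header claims more than we were given */

	return 0;
}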
+2 -1
drivers/comedi/drivers/ni_atmio16d.c
··· 698 698 699 699 static void atmio16d_detach(struct comedi_device *dev) 700 700 { 701 - reset_atmio16d(dev); 701 + if (dev->private) 702 + reset_atmio16d(dev); 702 703 comedi_legacy_detach(dev); 703 704 } 704 705
+35 -32
drivers/counter/rz-mtu3-cnt.c
··· 107 107 struct rz_mtu3_cnt *const priv = counter_priv(counter); 108 108 unsigned long tmdr; 109 109 110 - pm_runtime_get_sync(priv->ch->dev); 110 + pm_runtime_get_sync(counter->parent); 111 111 tmdr = rz_mtu3_shared_reg_read(priv->ch, RZ_MTU3_TMDR3); 112 - pm_runtime_put(priv->ch->dev); 112 + pm_runtime_put(counter->parent); 113 113 114 114 if (id == RZ_MTU3_32_BIT_CH && test_bit(RZ_MTU3_TMDR3_LWA, &tmdr)) 115 115 return false; ··· 165 165 if (ret) 166 166 return ret; 167 167 168 - pm_runtime_get_sync(ch->dev); 168 + pm_runtime_get_sync(counter->parent); 169 169 if (count->id == RZ_MTU3_32_BIT_CH) 170 170 *val = rz_mtu3_32bit_ch_read(ch, RZ_MTU3_TCNTLW); 171 171 else 172 172 *val = rz_mtu3_16bit_ch_read(ch, RZ_MTU3_TCNT); 173 - pm_runtime_put(ch->dev); 173 + pm_runtime_put(counter->parent); 174 174 mutex_unlock(&priv->lock); 175 175 176 176 return 0; ··· 187 187 if (ret) 188 188 return ret; 189 189 190 - pm_runtime_get_sync(ch->dev); 190 + pm_runtime_get_sync(counter->parent); 191 191 if (count->id == RZ_MTU3_32_BIT_CH) 192 192 rz_mtu3_32bit_ch_write(ch, RZ_MTU3_TCNTLW, val); 193 193 else 194 194 rz_mtu3_16bit_ch_write(ch, RZ_MTU3_TCNT, val); 195 - pm_runtime_put(ch->dev); 195 + pm_runtime_put(counter->parent); 196 196 mutex_unlock(&priv->lock); 197 197 198 198 return 0; 199 199 } 200 200 201 201 static int rz_mtu3_count_function_read_helper(struct rz_mtu3_channel *const ch, 202 - struct rz_mtu3_cnt *const priv, 202 + struct counter_device *const counter, 203 203 enum counter_function *function) 204 204 { 205 205 u8 timer_mode; 206 206 207 - pm_runtime_get_sync(ch->dev); 207 + pm_runtime_get_sync(counter->parent); 208 208 timer_mode = rz_mtu3_8bit_ch_read(ch, RZ_MTU3_TMDR1); 209 - pm_runtime_put(ch->dev); 209 + pm_runtime_put(counter->parent); 210 210 211 211 switch (timer_mode & RZ_MTU3_TMDR1_PH_CNT_MODE_MASK) { 212 212 case RZ_MTU3_TMDR1_PH_CNT_MODE_1: ··· 240 240 if (ret) 241 241 return ret; 242 242 243 - ret = rz_mtu3_count_function_read_helper(ch, priv, function); 243 + ret = rz_mtu3_count_function_read_helper(ch, counter, function); 244 244 mutex_unlock(&priv->lock); 245 245 246 246 return ret; ··· 279 279 return -EINVAL; 280 280 } 281 281 282 - pm_runtime_get_sync(ch->dev); 282 + pm_runtime_get_sync(counter->parent); 283 283 rz_mtu3_8bit_ch_write(ch, RZ_MTU3_TMDR1, timer_mode); 284 - pm_runtime_put(ch->dev); 284 + pm_runtime_put(counter->parent); 285 285 mutex_unlock(&priv->lock); 286 286 287 287 return 0; ··· 300 300 if (ret) 301 301 return ret; 302 302 303 - pm_runtime_get_sync(ch->dev); 303 + pm_runtime_get_sync(counter->parent); 304 304 tsr = rz_mtu3_8bit_ch_read(ch, RZ_MTU3_TSR); 305 - pm_runtime_put(ch->dev); 305 + pm_runtime_put(counter->parent); 306 306 307 307 *direction = (tsr & RZ_MTU3_TSR_TCFD) ? 
308 308 COUNTER_COUNT_DIRECTION_FORWARD : COUNTER_COUNT_DIRECTION_BACKWARD; ··· 377 377 return -EINVAL; 378 378 } 379 379 380 - pm_runtime_get_sync(ch->dev); 380 + pm_runtime_get_sync(counter->parent); 381 381 if (count->id == RZ_MTU3_32_BIT_CH) 382 382 rz_mtu3_32bit_ch_write(ch, RZ_MTU3_TGRALW, ceiling); 383 383 else 384 384 rz_mtu3_16bit_ch_write(ch, RZ_MTU3_TGRA, ceiling); 385 385 386 386 rz_mtu3_8bit_ch_write(ch, RZ_MTU3_TCR, RZ_MTU3_TCR_CCLR_TGRA); 387 - pm_runtime_put(ch->dev); 387 + pm_runtime_put(counter->parent); 388 388 mutex_unlock(&priv->lock); 389 389 390 390 return 0; ··· 495 495 static int rz_mtu3_count_enable_write(struct counter_device *counter, 496 496 struct counter_count *count, u8 enable) 497 497 { 498 - struct rz_mtu3_channel *const ch = rz_mtu3_get_ch(counter, count->id); 499 498 struct rz_mtu3_cnt *const priv = counter_priv(counter); 500 499 int ret = 0; 501 500 501 + mutex_lock(&priv->lock); 502 + 503 + if (priv->count_is_enabled[count->id] == enable) 504 + goto exit; 505 + 502 506 if (enable) { 503 - mutex_lock(&priv->lock); 504 - pm_runtime_get_sync(ch->dev); 507 + pm_runtime_get_sync(counter->parent); 505 508 ret = rz_mtu3_initialize_counter(counter, count->id); 506 509 if (ret == 0) 507 510 priv->count_is_enabled[count->id] = true; 508 - mutex_unlock(&priv->lock); 509 511 } else { 510 - mutex_lock(&priv->lock); 511 512 rz_mtu3_terminate_counter(counter, count->id); 512 513 priv->count_is_enabled[count->id] = false; 513 - pm_runtime_put(ch->dev); 514 - mutex_unlock(&priv->lock); 514 + pm_runtime_put(counter->parent); 515 515 } 516 + 517 + exit: 518 + mutex_unlock(&priv->lock); 516 519 517 520 return ret; 518 521 } ··· 543 540 if (ret) 544 541 return ret; 545 542 546 - pm_runtime_get_sync(priv->ch->dev); 543 + pm_runtime_get_sync(counter->parent); 547 544 tmdr = rz_mtu3_shared_reg_read(priv->ch, RZ_MTU3_TMDR3); 548 - pm_runtime_put(priv->ch->dev); 545 + pm_runtime_put(counter->parent); 549 546 *cascade_enable = test_bit(RZ_MTU3_TMDR3_LWA, &tmdr); 550 547 mutex_unlock(&priv->lock); 551 548 ··· 562 559 if (ret) 563 560 return ret; 564 561 565 - pm_runtime_get_sync(priv->ch->dev); 562 + pm_runtime_get_sync(counter->parent); 566 563 rz_mtu3_shared_reg_update_bit(priv->ch, RZ_MTU3_TMDR3, 567 564 RZ_MTU3_TMDR3_LWA, cascade_enable); 568 - pm_runtime_put(priv->ch->dev); 565 + pm_runtime_put(counter->parent); 569 566 mutex_unlock(&priv->lock); 570 567 571 568 return 0; ··· 582 579 if (ret) 583 580 return ret; 584 581 585 - pm_runtime_get_sync(priv->ch->dev); 582 + pm_runtime_get_sync(counter->parent); 586 583 tmdr = rz_mtu3_shared_reg_read(priv->ch, RZ_MTU3_TMDR3); 587 - pm_runtime_put(priv->ch->dev); 584 + pm_runtime_put(counter->parent); 588 585 *ext_input_phase_clock_select = test_bit(RZ_MTU3_TMDR3_PHCKSEL, &tmdr); 589 586 mutex_unlock(&priv->lock); 590 587 ··· 601 598 if (ret) 602 599 return ret; 603 600 604 - pm_runtime_get_sync(priv->ch->dev); 601 + pm_runtime_get_sync(counter->parent); 605 602 rz_mtu3_shared_reg_update_bit(priv->ch, RZ_MTU3_TMDR3, 606 603 RZ_MTU3_TMDR3_PHCKSEL, 607 604 ext_input_phase_clock_select); 608 - pm_runtime_put(priv->ch->dev); 605 + pm_runtime_put(counter->parent); 609 606 mutex_unlock(&priv->lock); 610 607 611 608 return 0; ··· 643 640 if (ret) 644 641 return ret; 645 642 646 - ret = rz_mtu3_count_function_read_helper(ch, priv, &function); 643 + ret = rz_mtu3_count_function_read_helper(ch, counter, &function); 647 644 if (ret) { 648 645 mutex_unlock(&priv->lock); 649 646 return ret;
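The recurring change in this driver is which device the runtime-PM calls target: every register access is now bracketed by a get/put on counter->parent rather than the channel's own device. A sketch of that bracket, with rz_reg_read() as a hypothetical register accessor:

#include <linux/counter.h>
#include <linux/pm_runtime.h>

/* Sketch: resume the parent around a register access, then drop it. */
static u32 read_reg_powered(struct counter_device *counter, u32 reg)
{
	u32 val;

	pm_runtime_get_sync(counter->parent);	/* resume if suspended */
	val = rz_reg_read(counter, reg);	/* hypothetical accessor */
	pm_runtime_put(counter->parent);	/* allow suspend again */

	return val;
}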
+3 -3
drivers/cpufreq/cpufreq_governor.c
··· 468 468 /* Failure, so roll back. */ 469 469 pr_err("initialization failed (dbs_data kobject init error %d)\n", ret); 470 470 471 - kobject_put(&dbs_data->attr_set.kobj); 472 - 473 471 policy->governor_data = NULL; 474 472 475 473 if (!have_governor_per_policy()) 476 474 gov->gdbs_data = NULL; 477 - gov->exit(dbs_data); 475 + 476 + kobject_put(&dbs_data->attr_set.kobj); 477 + goto free_policy_dbs_info; 478 478 479 479 free_dbs_data: 480 480 kfree(dbs_data);
+2 -1
drivers/crypto/caam/caamalg_qi2.c
··· 3326 3326 if (aligned_len < keylen) 3327 3327 return -EOVERFLOW; 3328 3328 3329 - hashed_key = kmemdup(key, aligned_len, GFP_KERNEL); 3329 + hashed_key = kmalloc(aligned_len, GFP_KERNEL); 3330 3330 if (!hashed_key) 3331 3331 return -ENOMEM; 3332 + memcpy(hashed_key, key, keylen); 3332 3333 ret = hash_digest_key(ctx, &keylen, hashed_key, digestsize); 3333 3334 if (ret) 3334 3335 goto bad_free_key;
+2 -1
drivers/crypto/caam/caamhash.c
··· 441 441 if (aligned_len < keylen) 442 442 return -EOVERFLOW; 443 443 444 - hashed_key = kmemdup(key, keylen, GFP_KERNEL); 444 + hashed_key = kmalloc(aligned_len, GFP_KERNEL); 445 445 if (!hashed_key) 446 446 return -ENOMEM; 447 + memcpy(hashed_key, key, keylen); 447 448 ret = hash_digest_key(ctx, &keylen, hashed_key, digestsize); 448 449 if (ret) 449 450 goto bad_free_key;
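The two CAAM hunks fix the same out-of-bounds read: kmemdup(key, aligned_len, ...) copies aligned_len bytes from a buffer that only holds keylen bytes, so any alignment padding was read from past the end of the caller's key. Allocating the padded size and copying only the valid bytes is the safe shape; a reduced sketch with a hypothetical helper name:

#include <linux/slab.h>
#include <linux/string.h>

/* Sketch: duplicate src into a larger allocation without over-reading. */
static void *dup_into_padded(const void *src, size_t src_len, size_t alloc_len)
{
	void *buf;

	if (alloc_len < src_len)
		return NULL;	/* caller's padding math overflowed */

	buf = kmalloc(alloc_len, GFP_KERNEL);
	if (!buf)
		return NULL;

	/* Copy only the bytes that exist; the tail is left to the caller. */
	memcpy(buf, src, src_len);
	return buf;
}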
+7 -4
drivers/crypto/tegra/tegra-se-aes.c
··· 529 529 .cra_name = "cbc(aes)", 530 530 .cra_driver_name = "cbc-aes-tegra", 531 531 .cra_priority = 500, 532 - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC, 532 + .cra_flags = CRYPTO_ALG_ASYNC, 533 533 .cra_blocksize = AES_BLOCK_SIZE, 534 534 .cra_ctxsize = sizeof(struct tegra_aes_ctx), 535 535 .cra_alignmask = 0xf, ··· 550 550 .cra_name = "ecb(aes)", 551 551 .cra_driver_name = "ecb-aes-tegra", 552 552 .cra_priority = 500, 553 - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC, 553 + .cra_flags = CRYPTO_ALG_ASYNC, 554 554 .cra_blocksize = AES_BLOCK_SIZE, 555 555 .cra_ctxsize = sizeof(struct tegra_aes_ctx), 556 556 .cra_alignmask = 0xf, ··· 572 572 .cra_name = "ctr(aes)", 573 573 .cra_driver_name = "ctr-aes-tegra", 574 574 .cra_priority = 500, 575 - .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC, 575 + .cra_flags = CRYPTO_ALG_ASYNC, 576 576 .cra_blocksize = 1, 577 577 .cra_ctxsize = sizeof(struct tegra_aes_ctx), 578 578 .cra_alignmask = 0xf, ··· 594 594 .cra_name = "xts(aes)", 595 595 .cra_driver_name = "xts-aes-tegra", 596 596 .cra_priority = 500, 597 + .cra_flags = CRYPTO_ALG_ASYNC, 597 598 .cra_blocksize = AES_BLOCK_SIZE, 598 599 .cra_ctxsize = sizeof(struct tegra_aes_ctx), 599 600 .cra_alignmask = (__alignof__(u64) - 1), ··· 1923 1922 .cra_name = "gcm(aes)", 1924 1923 .cra_driver_name = "gcm-aes-tegra", 1925 1924 .cra_priority = 500, 1925 + .cra_flags = CRYPTO_ALG_ASYNC, 1926 1926 .cra_blocksize = 1, 1927 1927 .cra_ctxsize = sizeof(struct tegra_aead_ctx), 1928 1928 .cra_alignmask = 0xf, ··· 1946 1944 .cra_name = "ccm(aes)", 1947 1945 .cra_driver_name = "ccm-aes-tegra", 1948 1946 .cra_priority = 500, 1947 + .cra_flags = CRYPTO_ALG_ASYNC, 1949 1948 .cra_blocksize = 1, 1950 1949 .cra_ctxsize = sizeof(struct tegra_aead_ctx), 1951 1950 .cra_alignmask = 0xf, ··· 1974 1971 .cra_name = "cmac(aes)", 1975 1972 .cra_driver_name = "tegra-se-cmac", 1976 1973 .cra_priority = 300, 1977 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 1974 + .cra_flags = CRYPTO_ALG_ASYNC, 1978 1975 .cra_blocksize = AES_BLOCK_SIZE, 1979 1976 .cra_ctxsize = sizeof(struct tegra_cmac_ctx), 1980 1977 .cra_alignmask = 0,
+17 -13
drivers/crypto/tegra/tegra-se-hash.c
··· 761 761 .cra_name = "sha1", 762 762 .cra_driver_name = "tegra-se-sha1", 763 763 .cra_priority = 300, 764 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 764 + .cra_flags = CRYPTO_ALG_ASYNC, 765 765 .cra_blocksize = SHA1_BLOCK_SIZE, 766 766 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 767 767 .cra_alignmask = 0, ··· 786 786 .cra_name = "sha224", 787 787 .cra_driver_name = "tegra-se-sha224", 788 788 .cra_priority = 300, 789 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 789 + .cra_flags = CRYPTO_ALG_ASYNC, 790 790 .cra_blocksize = SHA224_BLOCK_SIZE, 791 791 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 792 792 .cra_alignmask = 0, ··· 811 811 .cra_name = "sha256", 812 812 .cra_driver_name = "tegra-se-sha256", 813 813 .cra_priority = 300, 814 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 814 + .cra_flags = CRYPTO_ALG_ASYNC, 815 815 .cra_blocksize = SHA256_BLOCK_SIZE, 816 816 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 817 817 .cra_alignmask = 0, ··· 836 836 .cra_name = "sha384", 837 837 .cra_driver_name = "tegra-se-sha384", 838 838 .cra_priority = 300, 839 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 839 + .cra_flags = CRYPTO_ALG_ASYNC, 840 840 .cra_blocksize = SHA384_BLOCK_SIZE, 841 841 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 842 842 .cra_alignmask = 0, ··· 861 861 .cra_name = "sha512", 862 862 .cra_driver_name = "tegra-se-sha512", 863 863 .cra_priority = 300, 864 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 864 + .cra_flags = CRYPTO_ALG_ASYNC, 865 865 .cra_blocksize = SHA512_BLOCK_SIZE, 866 866 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 867 867 .cra_alignmask = 0, ··· 886 886 .cra_name = "sha3-224", 887 887 .cra_driver_name = "tegra-se-sha3-224", 888 888 .cra_priority = 300, 889 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 889 + .cra_flags = CRYPTO_ALG_ASYNC, 890 890 .cra_blocksize = SHA3_224_BLOCK_SIZE, 891 891 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 892 892 .cra_alignmask = 0, ··· 911 911 .cra_name = "sha3-256", 912 912 .cra_driver_name = "tegra-se-sha3-256", 913 913 .cra_priority = 300, 914 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 914 + .cra_flags = CRYPTO_ALG_ASYNC, 915 915 .cra_blocksize = SHA3_256_BLOCK_SIZE, 916 916 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 917 917 .cra_alignmask = 0, ··· 936 936 .cra_name = "sha3-384", 937 937 .cra_driver_name = "tegra-se-sha3-384", 938 938 .cra_priority = 300, 939 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 939 + .cra_flags = CRYPTO_ALG_ASYNC, 940 940 .cra_blocksize = SHA3_384_BLOCK_SIZE, 941 941 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 942 942 .cra_alignmask = 0, ··· 961 961 .cra_name = "sha3-512", 962 962 .cra_driver_name = "tegra-se-sha3-512", 963 963 .cra_priority = 300, 964 - .cra_flags = CRYPTO_ALG_TYPE_AHASH, 964 + .cra_flags = CRYPTO_ALG_ASYNC, 965 965 .cra_blocksize = SHA3_512_BLOCK_SIZE, 966 966 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 967 967 .cra_alignmask = 0, ··· 988 988 .cra_name = "hmac(sha224)", 989 989 .cra_driver_name = "tegra-se-hmac-sha224", 990 990 .cra_priority = 300, 991 - .cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_NEED_FALLBACK, 991 + .cra_flags = CRYPTO_ALG_ASYNC | 992 + CRYPTO_ALG_NEED_FALLBACK, 992 993 .cra_blocksize = SHA224_BLOCK_SIZE, 993 994 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 994 995 .cra_alignmask = 0, ··· 1016 1015 .cra_name = "hmac(sha256)", 1017 1016 .cra_driver_name = "tegra-se-hmac-sha256", 1018 1017 .cra_priority = 300, 1019 - .cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_NEED_FALLBACK, 1018 + .cra_flags = CRYPTO_ALG_ASYNC | 1019 + CRYPTO_ALG_NEED_FALLBACK, 1020 1020 .cra_blocksize = SHA256_BLOCK_SIZE, 1021 1021 
.cra_ctxsize = sizeof(struct tegra_sha_ctx), 1022 1022 .cra_alignmask = 0, ··· 1044 1042 .cra_name = "hmac(sha384)", 1045 1043 .cra_driver_name = "tegra-se-hmac-sha384", 1046 1044 .cra_priority = 300, 1047 - .cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_NEED_FALLBACK, 1045 + .cra_flags = CRYPTO_ALG_ASYNC | 1046 + CRYPTO_ALG_NEED_FALLBACK, 1048 1047 .cra_blocksize = SHA384_BLOCK_SIZE, 1049 1048 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 1050 1049 .cra_alignmask = 0, ··· 1072 1069 .cra_name = "hmac(sha512)", 1073 1070 .cra_driver_name = "tegra-se-hmac-sha512", 1074 1071 .cra_priority = 300, 1075 - .cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_NEED_FALLBACK, 1072 + .cra_flags = CRYPTO_ALG_ASYNC | 1073 + CRYPTO_ALG_NEED_FALLBACK, 1076 1074 .cra_blocksize = SHA512_BLOCK_SIZE, 1077 1075 .cra_ctxsize = sizeof(struct tegra_sha_ctx), 1078 1076 .cra_alignmask = 0,
+1
drivers/gpib/Kconfig
··· 122 122 depends on OF 123 123 select GPIB_COMMON 124 124 select GPIB_NEC7210 125 + depends on HAS_IOMEM 125 126 help 126 127 GPIB driver for Fluke based cda devices. 127 128
+73 -23
drivers/gpib/common/gpib_os.c
··· 888 888 if (read_cmd.completed_transfer_count > read_cmd.requested_transfer_count) 889 889 return -EINVAL; 890 890 891 - desc = handle_to_descriptor(file_priv, read_cmd.handle); 892 - if (!desc) 893 - return -EINVAL; 894 - 895 891 if (WARN_ON_ONCE(sizeof(userbuf) > sizeof(read_cmd.buffer_ptr))) 896 892 return -EFAULT; 897 893 ··· 899 903 /* Check write access to buffer */ 900 904 if (!access_ok(userbuf, remain)) 901 905 return -EFAULT; 906 + 907 + /* Lock descriptors to prevent concurrent close from freeing descriptor */ 908 + if (mutex_lock_interruptible(&file_priv->descriptors_mutex)) 909 + return -ERESTARTSYS; 910 + desc = handle_to_descriptor(file_priv, read_cmd.handle); 911 + if (!desc) { 912 + mutex_unlock(&file_priv->descriptors_mutex); 913 + return -EINVAL; 914 + } 915 + atomic_inc(&desc->descriptor_busy); 916 + mutex_unlock(&file_priv->descriptors_mutex); 902 917 903 918 atomic_set(&desc->io_in_progress, 1); 904 919 ··· 944 937 retval = copy_to_user((void __user *)arg, &read_cmd, sizeof(read_cmd)); 945 938 946 939 atomic_set(&desc->io_in_progress, 0); 940 + atomic_dec(&desc->descriptor_busy); 947 941 948 942 wake_up_interruptible(&board->wait); 949 943 if (retval) ··· 972 964 if (cmd.completed_transfer_count > cmd.requested_transfer_count) 973 965 return -EINVAL; 974 966 975 - desc = handle_to_descriptor(file_priv, cmd.handle); 976 - if (!desc) 977 - return -EINVAL; 978 - 979 967 userbuf = (u8 __user *)(unsigned long)cmd.buffer_ptr; 980 968 userbuf += cmd.completed_transfer_count; 981 969 ··· 983 979 /* Check read access to buffer */ 984 980 if (!access_ok(userbuf, remain)) 985 981 return -EFAULT; 982 + 983 + /* Lock descriptors to prevent concurrent close from freeing descriptor */ 984 + if (mutex_lock_interruptible(&file_priv->descriptors_mutex)) 985 + return -ERESTARTSYS; 986 + desc = handle_to_descriptor(file_priv, cmd.handle); 987 + if (!desc) { 988 + mutex_unlock(&file_priv->descriptors_mutex); 989 + return -EINVAL; 990 + } 991 + atomic_inc(&desc->descriptor_busy); 992 + mutex_unlock(&file_priv->descriptors_mutex); 986 993 987 994 /* 988 995 * Write buffer loads till we empty the user supplied buffer. 
··· 1018 1003 userbuf += bytes_written; 1019 1004 if (retval < 0) { 1020 1005 atomic_set(&desc->io_in_progress, 0); 1006 + atomic_dec(&desc->descriptor_busy); 1021 1007 1022 1008 wake_up_interruptible(&board->wait); 1023 1009 break; ··· 1038 1022 */ 1039 1023 if (!no_clear_io_in_prog || fault) 1040 1024 atomic_set(&desc->io_in_progress, 0); 1025 + atomic_dec(&desc->descriptor_busy); 1041 1026 1042 1027 wake_up_interruptible(&board->wait); 1043 1028 if (fault) ··· 1064 1047 if (write_cmd.completed_transfer_count > write_cmd.requested_transfer_count) 1065 1048 return -EINVAL; 1066 1049 1067 - desc = handle_to_descriptor(file_priv, write_cmd.handle); 1068 - if (!desc) 1069 - return -EINVAL; 1070 - 1071 1050 userbuf = (u8 __user *)(unsigned long)write_cmd.buffer_ptr; 1072 1051 userbuf += write_cmd.completed_transfer_count; 1073 1052 ··· 1072 1059 /* Check read access to buffer */ 1073 1060 if (!access_ok(userbuf, remain)) 1074 1061 return -EFAULT; 1062 + 1063 + /* Lock descriptors to prevent concurrent close from freeing descriptor */ 1064 + if (mutex_lock_interruptible(&file_priv->descriptors_mutex)) 1065 + return -ERESTARTSYS; 1066 + desc = handle_to_descriptor(file_priv, write_cmd.handle); 1067 + if (!desc) { 1068 + mutex_unlock(&file_priv->descriptors_mutex); 1069 + return -EINVAL; 1070 + } 1071 + atomic_inc(&desc->descriptor_busy); 1072 + mutex_unlock(&file_priv->descriptors_mutex); 1075 1073 1076 1074 atomic_set(&desc->io_in_progress, 1); 1077 1075 ··· 1118 1094 fault = copy_to_user((void __user *)arg, &write_cmd, sizeof(write_cmd)); 1119 1095 1120 1096 atomic_set(&desc->io_in_progress, 0); 1097 + atomic_dec(&desc->descriptor_busy); 1121 1098 1122 1099 wake_up_interruptible(&board->wait); 1123 1100 if (fault) ··· 1301 1276 { 1302 1277 struct gpib_close_dev_ioctl cmd; 1303 1278 struct gpib_file_private *file_priv = filep->private_data; 1279 + struct gpib_descriptor *desc; 1280 + unsigned int pad; 1281 + int sad; 1304 1282 int retval; 1305 1283 1306 1284 retval = copy_from_user(&cmd, (void __user *)arg, sizeof(cmd)); ··· 1312 1284 1313 1285 if (cmd.handle >= GPIB_MAX_NUM_DESCRIPTORS) 1314 1286 return -EINVAL; 1315 - if (!file_priv->descriptors[cmd.handle]) 1287 + 1288 + mutex_lock(&file_priv->descriptors_mutex); 1289 + desc = file_priv->descriptors[cmd.handle]; 1290 + if (!desc) { 1291 + mutex_unlock(&file_priv->descriptors_mutex); 1316 1292 return -EINVAL; 1317 - 1318 - retval = decrement_open_device_count(board, &board->device_list, 1319 - file_priv->descriptors[cmd.handle]->pad, 1320 - file_priv->descriptors[cmd.handle]->sad); 1321 - if (retval < 0) 1322 - return retval; 1323 - 1324 - kfree(file_priv->descriptors[cmd.handle]); 1293 + } 1294 + if (atomic_read(&desc->descriptor_busy)) { 1295 + mutex_unlock(&file_priv->descriptors_mutex); 1296 + return -EBUSY; 1297 + } 1298 + /* Remove from table while holding lock to prevent new IO from starting */ 1325 1299 file_priv->descriptors[cmd.handle] = NULL; 1300 + pad = desc->pad; 1301 + sad = desc->sad; 1302 + mutex_unlock(&file_priv->descriptors_mutex); 1326 1303 1327 - return 0; 1304 + retval = decrement_open_device_count(board, &board->device_list, pad, sad); 1305 + 1306 + kfree(desc); 1307 + return retval; 1328 1308 } 1329 1309 1330 1310 static int serial_poll_ioctl(struct gpib_board *board, unsigned long arg) ··· 1367 1331 if (retval) 1368 1332 return -EFAULT; 1369 1333 1334 + /* 1335 + * Lock descriptors to prevent concurrent close from freeing 1336 + * descriptor. 
ibwait() releases big_gpib_mutex when wait_mask 1337 + * is non-zero, so desc must be pinned with descriptor_busy. 1338 + */ 1339 + mutex_lock(&file_priv->descriptors_mutex); 1370 1340 desc = handle_to_descriptor(file_priv, wait_cmd.handle); 1371 - if (!desc) 1341 + if (!desc) { 1342 + mutex_unlock(&file_priv->descriptors_mutex); 1372 1343 return -EINVAL; 1344 + } 1345 + atomic_inc(&desc->descriptor_busy); 1346 + mutex_unlock(&file_priv->descriptors_mutex); 1373 1347 1374 1348 retval = ibwait(board, wait_cmd.wait_mask, wait_cmd.clear_mask, 1375 1349 wait_cmd.set_mask, &wait_cmd.ibsta, wait_cmd.usec_timeout, desc); 1350 + 1351 + atomic_dec(&desc->descriptor_busy); 1352 + 1376 1353 if (retval < 0) 1377 1354 return retval; 1378 1355 ··· 2084 2035 desc->is_board = 0; 2085 2036 desc->autopoll_enabled = 0; 2086 2037 atomic_set(&desc->io_in_progress, 0); 2038 + atomic_set(&desc->descriptor_busy, 0); 2087 2039 } 2088 2040 2089 2041 int gpib_register_driver(struct gpib_interface *interface, struct module *provider_module)
+8
drivers/gpib/include/gpib_types.h
··· 364 364 unsigned int pad; /* primary gpib address */ 365 365 int sad; /* secondary gpib address (negative means disabled) */ 366 366 atomic_t io_in_progress; 367 + /* 368 + * Kernel-only reference count to prevent descriptor from being 369 + * freed while IO handlers hold a pointer to it. Incremented 370 + * before each IO operation, decremented when done. Unlike 371 + * io_in_progress, this cannot be modified from userspace via 372 + * general_ibstatus(). 373 + */ 374 + atomic_t descriptor_busy; 367 375 unsigned is_board : 1; 368 376 unsigned autopoll_enabled : 1; 369 377 };
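Taken together with the gpib_os.c changes, descriptor_busy implements a small pin/unpin protocol: IO paths look the handle up under descriptors_mutex, pin the descriptor, and drop the mutex for the (potentially long) transfer; close removes the table entry under the same mutex and bails out with -EBUSY while pins remain. A condensed sketch with hypothetical pin_table/pin_desc types (the unpin after IO is an atomic_dec(); bounds checking on handle is omitted for brevity):

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct pin_desc { atomic_t busy; };

struct pin_table {
	struct mutex lock;
	struct pin_desc *entries[16];
};

/* Sketch: pin an entry so a concurrent close cannot free it mid-IO. */
static struct pin_desc *desc_pin(struct pin_table *t, unsigned int handle)
{
	struct pin_desc *d;

	mutex_lock(&t->lock);
	d = t->entries[handle];
	if (d)
		atomic_inc(&d->busy);	/* close will now see it as busy */
	mutex_unlock(&t->lock);
	return d;
}

static int desc_close(struct pin_table *t, unsigned int handle)
{
	struct pin_desc *d;

	mutex_lock(&t->lock);
	d = t->entries[handle];
	if (!d) {
		mutex_unlock(&t->lock);
		return -EINVAL;
	}
	if (atomic_read(&d->busy)) {
		mutex_unlock(&t->lock);
		return -EBUSY;	/* an IO path still pins it */
	}
	t->entries[handle] = NULL;	/* no new pin can find it */
	mutex_unlock(&t->lock);
	kfree(d);
	return 0;
}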
+2 -2
drivers/gpib/lpvo_usb_gpib/lpvo_usb_gpib.c
··· 406 406 for (j = 0 ; j < MAX_DEV ; j++) { 407 407 if ((assigned_usb_minors & 1 << j) == 0) 408 408 continue; 409 - udev = usb_get_dev(interface_to_usbdev(lpvo_usb_interfaces[j])); 409 + udev = interface_to_usbdev(lpvo_usb_interfaces[j]); 410 410 device_path = kobject_get_path(&udev->dev.kobj, GFP_KERNEL); 411 411 match = gpib_match_device_path(&lpvo_usb_interfaces[j]->dev, 412 412 config->device_path); ··· 421 421 for (j = 0 ; j < MAX_DEV ; j++) { 422 422 if ((assigned_usb_minors & 1 << j) == 0) 423 423 continue; 424 - udev = usb_get_dev(interface_to_usbdev(lpvo_usb_interfaces[j])); 424 + udev = interface_to_usbdev(lpvo_usb_interfaces[j]); 425 425 DIA_LOG(1, "dev. %d: bus %d -> %d dev: %d -> %d\n", j, 426 426 udev->bus->busnum, config->pci_bus, udev->devnum, config->pci_slot); 427 427 if (config->pci_bus == udev->bus->busnum &&
+9 -1
drivers/gpio/gpio-mxc.c
··· 584 584 unsigned long config; 585 585 bool ret = false; 586 586 int i, type; 587 + bool is_imx8qm = of_device_is_compatible(port->dev->of_node, "fsl,imx8qm-gpio"); 587 588 588 589 static const u32 pad_type_map[] = { 589 590 IMX_SCU_WAKEUP_OFF, /* 0 */ 590 591 IMX_SCU_WAKEUP_RISE_EDGE, /* IRQ_TYPE_EDGE_RISING */ 591 592 IMX_SCU_WAKEUP_FALL_EDGE, /* IRQ_TYPE_EDGE_FALLING */ 592 - IMX_SCU_WAKEUP_FALL_EDGE, /* IRQ_TYPE_EDGE_BOTH */ 593 + IMX_SCU_WAKEUP_RISE_EDGE, /* IRQ_TYPE_EDGE_BOTH */ 593 594 IMX_SCU_WAKEUP_HIGH_LVL, /* IRQ_TYPE_LEVEL_HIGH */ 594 595 IMX_SCU_WAKEUP_OFF, /* 5 */ 595 596 IMX_SCU_WAKEUP_OFF, /* 6 */ ··· 605 604 config = pad_type_map[type]; 606 605 else 607 606 config = IMX_SCU_WAKEUP_OFF; 607 + 608 + if (is_imx8qm && config == IMX_SCU_WAKEUP_FALL_EDGE) { 609 + dev_warn_once(port->dev, 610 + "No falling-edge support for wakeup on i.MX8QM\n"); 611 + config = IMX_SCU_WAKEUP_OFF; 612 + } 613 + 608 614 ret |= mxc_gpio_generic_config(port, i, config); 609 615 } 610 616 }
+2 -2
drivers/gpio/gpio-qixis-fpga.c
··· 60 60 return PTR_ERR(reg); 61 61 62 62 regmap = devm_regmap_init_mmio(&pdev->dev, reg, &regmap_config_8r_8v); 63 - if (!regmap) 64 - return -ENODEV; 63 + if (IS_ERR(regmap)) 64 + return PTR_ERR(regmap); 65 65 66 66 /* In this case, the offset of our register is 0 inside the 67 67 * regmap area that we just created.
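The bug here is the classic NULL-vs-ERR_PTR mixup: devm_regmap_init_mmio(), like most kernel constructors, does not return NULL on failure but an ERR_PTR-encoded errno, so a NULL test always passes and the error pointer gets dereferenced later. The idiom, in sketch form (the config is a placeholder):

#include <linux/err.h>
#include <linux/regmap.h>

static const struct regmap_config demo_regmap_config = {
	.reg_bits = 8,
	.val_bits = 8,
};

/* Sketch: consuming an ERR_PTR-returning constructor correctly. */
static int demo_init_regmap(struct device *dev, void __iomem *base)
{
	struct regmap *map;

	map = devm_regmap_init_mmio(dev, base, &demo_regmap_config);
	if (IS_ERR(map))
		return PTR_ERR(map);	/* e.g. -ENOMEM, -EINVAL */

	/* map is a valid pointer from here on */
	return 0;
}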
+41 -16
drivers/gpio/gpiolib-shared.c
··· 443 443 } 444 444 #endif /* CONFIG_RESET_GPIO */ 445 445 446 - int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id, 447 - unsigned long lflags) 446 + int gpio_shared_add_proxy_lookup(struct device *consumer, struct fwnode_handle *fwnode, 447 + const char *con_id, unsigned long lflags) 448 448 { 449 449 const char *dev_id = dev_name(consumer); 450 450 struct gpiod_lookup_table *lookup; ··· 458 458 if (!ref->fwnode && device_is_compatible(consumer, "reset-gpio")) { 459 459 if (!gpio_shared_dev_is_reset_gpio(consumer, entry, ref)) 460 460 continue; 461 - } else if (!device_match_fwnode(consumer, ref->fwnode)) { 461 + } else if (fwnode != ref->fwnode) { 462 462 continue; 463 463 } 464 464 ··· 506 506 auxiliary_device_uninit(adev); 507 507 } 508 508 509 - int gpio_device_setup_shared(struct gpio_device *gdev) 509 + int gpiochip_setup_shared(struct gpio_chip *gc) 510 510 { 511 + struct gpio_device *gdev = gc->gpiodev; 511 512 struct gpio_shared_entry *entry; 512 513 struct gpio_shared_ref *ref; 513 514 struct gpio_desc *desc; ··· 539 538 if (list_count_nodes(&entry->refs) <= 1) 540 539 continue; 541 540 542 - desc = &gdev->descs[entry->offset]; 541 + scoped_guard(mutex, &entry->lock) { 542 + #if IS_ENABLED(CONFIG_OF) 543 + if (is_of_node(entry->fwnode) && gc->of_xlate) { 544 + /* 545 + * This is the earliest that we can translate the 546 + * devicetree offset to the chip offset. 547 + */ 548 + struct of_phandle_args gpiospec = { }; 543 549 544 - __set_bit(GPIOD_FLAG_SHARED, &desc->flags); 545 - /* 546 - * Shared GPIOs are not requested via the normal path. Make 547 - * them inaccessible to anyone even before we register the 548 - * chip. 549 - */ 550 - ret = gpiod_request_commit(desc, "shared"); 551 - if (ret) 552 - return ret; 550 + gpiospec.np = to_of_node(entry->fwnode); 551 + gpiospec.args_count = 2; 552 + gpiospec.args[0] = entry->offset; 553 553 554 - pr_debug("GPIO %u owned by %s is shared by multiple consumers\n", 555 - entry->offset, gpio_device_get_label(gdev)); 554 + ret = gc->of_xlate(gc, &gpiospec, NULL); 555 + if (ret < 0) 556 + return ret; 557 + 558 + entry->offset = ret; 559 + } 560 + #endif /* CONFIG_OF */ 561 + 562 + desc = &gdev->descs[entry->offset]; 563 + 564 + __set_bit(GPIOD_FLAG_SHARED, &desc->flags); 565 + /* 566 + * Shared GPIOs are not requested via the normal path. Make 567 + * them inaccessible to anyone even before we register the 568 + * chip. 569 + */ 570 + ret = gpiod_request_commit(desc, "shared"); 571 + if (ret) 572 + return ret; 573 + 574 + pr_debug("GPIO %u owned by %s is shared by multiple consumers\n", 575 + entry->offset, gpio_device_get_label(gdev)); 576 + } 556 577 557 578 list_for_each_entry(ref, &entry->refs, list) { 558 579 pr_debug("Setting up a shared GPIO entry for %s (con_id: '%s')\n", ··· 598 575 struct gpio_shared_ref *ref; 599 576 600 577 list_for_each_entry(entry, &gpio_shared_list, list) { 578 + guard(mutex)(&entry->lock); 579 + 601 580 if (!device_match_fwnode(&gdev->dev, entry->fwnode)) 602 581 continue; 603 582
+7 -4
drivers/gpio/gpiolib-shared.h
··· 11 11 struct gpio_device; 12 12 struct gpio_desc; 13 13 struct device; 14 + struct fwnode_handle; 14 15 15 16 #if IS_ENABLED(CONFIG_GPIO_SHARED) 16 17 17 - int gpio_device_setup_shared(struct gpio_device *gdev); 18 + int gpiochip_setup_shared(struct gpio_chip *gc); 18 19 void gpio_device_teardown_shared(struct gpio_device *gdev); 19 - int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id, 20 - unsigned long lflags); 20 + int gpio_shared_add_proxy_lookup(struct device *consumer, 21 + struct fwnode_handle *fwnode, 22 + const char *con_id, unsigned long lflags); 21 23 22 24 #else 23 25 24 - static inline int gpio_device_setup_shared(struct gpio_device *gdev) 26 + static inline int gpiochip_setup_shared(struct gpio_chip *gc) 25 27 { 26 28 return 0; 27 29 } ··· 31 29 static inline void gpio_device_teardown_shared(struct gpio_device *gdev) { } 32 30 33 31 static inline int gpio_shared_add_proxy_lookup(struct device *consumer, 32 + struct fwnode_handle *fwnode, 34 33 const char *con_id, 35 34 unsigned long lflags) 36 35 {
+66 -69
drivers/gpio/gpiolib.c
··· 892 892 #define gcdev_unregister(gdev) device_del(&(gdev)->dev) 893 893 #endif 894 894 895 + /* 896 + * An initial reference count has been held in gpiochip_add_data_with_key(). 897 + * The caller should drop the reference via gpio_device_put() on errors. 898 + */ 895 899 static int gpiochip_setup_dev(struct gpio_device *gdev) 896 900 { 897 901 struct fwnode_handle *fwnode = dev_fwnode(&gdev->dev); 898 902 int ret; 899 - 900 - device_initialize(&gdev->dev); 901 903 902 904 /* 903 905 * If fwnode doesn't belong to another device, it's safe to clear its ··· 966 964 list_for_each_entry_srcu(gdev, &gpio_devices, list, 967 965 srcu_read_lock_held(&gpio_devices_srcu)) { 968 966 ret = gpiochip_setup_dev(gdev); 969 - if (ret) 967 + if (ret) { 968 + gpio_device_put(gdev); 970 969 dev_err(&gdev->dev, 971 970 "Failed to initialize gpio device (%d)\n", ret); 971 + } 972 972 } 973 973 } 974 974 ··· 1051 1047 int base = 0; 1052 1048 int ret; 1053 1049 1054 - /* 1055 - * First: allocate and populate the internal stat container, and 1056 - * set up the struct device. 1057 - */ 1058 1050 gdev = kzalloc(sizeof(*gdev), GFP_KERNEL); 1059 1051 if (!gdev) 1060 1052 return -ENOMEM; 1061 - 1062 - gdev->dev.type = &gpio_dev_type; 1063 - gdev->dev.bus = &gpio_bus_type; 1064 - gdev->dev.parent = gc->parent; 1065 - rcu_assign_pointer(gdev->chip, gc); 1066 - 1067 1053 gc->gpiodev = gdev; 1068 1054 gpiochip_set_data(gc, data); 1069 - 1070 - device_set_node(&gdev->dev, gpiochip_choose_fwnode(gc)); 1071 1055 1072 1056 ret = ida_alloc(&gpio_ida, GFP_KERNEL); 1073 1057 if (ret < 0) 1074 1058 goto err_free_gdev; 1075 1059 gdev->id = ret; 1076 1060 1077 - ret = dev_set_name(&gdev->dev, GPIOCHIP_NAME "%d", gdev->id); 1061 + ret = init_srcu_struct(&gdev->srcu); 1078 1062 if (ret) 1079 1063 goto err_free_ida; 1064 + rcu_assign_pointer(gdev->chip, gc); 1080 1065 1066 + ret = init_srcu_struct(&gdev->desc_srcu); 1067 + if (ret) 1068 + goto err_cleanup_gdev_srcu; 1069 + 1070 + ret = dev_set_name(&gdev->dev, GPIOCHIP_NAME "%d", gdev->id); 1071 + if (ret) 1072 + goto err_cleanup_desc_srcu; 1073 + 1074 + device_initialize(&gdev->dev); 1075 + /* 1076 + * After this point any allocated resources to `gdev` will be 1077 + * free():ed by gpiodev_release(). If you add new resources 1078 + * then make sure they get free():ed there. 
1079 + */ 1080 + gdev->dev.type = &gpio_dev_type; 1081 + gdev->dev.bus = &gpio_bus_type; 1082 + gdev->dev.parent = gc->parent; 1083 + device_set_node(&gdev->dev, gpiochip_choose_fwnode(gc)); 1084 + 1085 + ret = gpiochip_get_ngpios(gc, &gdev->dev); 1086 + if (ret) 1087 + goto err_put_device; 1088 + gdev->ngpio = gc->ngpio; 1089 + 1090 + gdev->descs = kcalloc(gc->ngpio, sizeof(*gdev->descs), GFP_KERNEL); 1091 + if (!gdev->descs) { 1092 + ret = -ENOMEM; 1093 + goto err_put_device; 1094 + } 1095 + 1096 + gdev->label = kstrdup_const(gc->label ?: "unknown", GFP_KERNEL); 1097 + if (!gdev->label) { 1098 + ret = -ENOMEM; 1099 + goto err_put_device; 1100 + } 1101 + 1102 + gdev->can_sleep = gc->can_sleep; 1103 + rwlock_init(&gdev->line_state_lock); 1104 + RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1105 + BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1106 + #ifdef CONFIG_PINCTRL 1107 + INIT_LIST_HEAD(&gdev->pin_ranges); 1108 + #endif 1081 1109 if (gc->parent && gc->parent->driver) 1082 1110 gdev->owner = gc->parent->driver->owner; 1083 1111 else if (gc->owner) ··· 1117 1081 gdev->owner = gc->owner; 1118 1082 else 1119 1083 gdev->owner = THIS_MODULE; 1120 - 1121 - ret = gpiochip_get_ngpios(gc, &gdev->dev); 1122 - if (ret) 1123 - goto err_free_dev_name; 1124 - 1125 - gdev->descs = kcalloc(gc->ngpio, sizeof(*gdev->descs), GFP_KERNEL); 1126 - if (!gdev->descs) { 1127 - ret = -ENOMEM; 1128 - goto err_free_dev_name; 1129 - } 1130 - 1131 - gdev->label = kstrdup_const(gc->label ?: "unknown", GFP_KERNEL); 1132 - if (!gdev->label) { 1133 - ret = -ENOMEM; 1134 - goto err_free_descs; 1135 - } 1136 - 1137 - gdev->ngpio = gc->ngpio; 1138 - gdev->can_sleep = gc->can_sleep; 1139 - 1140 - rwlock_init(&gdev->line_state_lock); 1141 - RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1142 - BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1143 - 1144 - ret = init_srcu_struct(&gdev->srcu); 1145 - if (ret) 1146 - goto err_free_label; 1147 - 1148 - ret = init_srcu_struct(&gdev->desc_srcu); 1149 - if (ret) 1150 - goto err_cleanup_gdev_srcu; 1151 1084 1152 1085 scoped_guard(mutex, &gpio_devices_lock) { 1153 1086 /* ··· 1132 1127 if (base < 0) { 1133 1128 ret = base; 1134 1129 base = 0; 1135 - goto err_cleanup_desc_srcu; 1130 + goto err_put_device; 1136 1131 } 1137 1132 1138 1133 /* ··· 1152 1147 ret = gpiodev_add_to_list_unlocked(gdev); 1153 1148 if (ret) { 1154 1149 gpiochip_err(gc, "GPIO integer space overlap, cannot add chip\n"); 1155 - goto err_cleanup_desc_srcu; 1150 + goto err_put_device; 1156 1151 } 1157 1152 } 1158 - 1159 - #ifdef CONFIG_PINCTRL 1160 - INIT_LIST_HEAD(&gdev->pin_ranges); 1161 - #endif 1162 1153 1163 1154 if (gc->names) 1164 1155 gpiochip_set_desc_names(gc); ··· 1211 1210 if (ret) 1212 1211 goto err_remove_irqchip_mask; 1213 1212 1214 - ret = gpio_device_setup_shared(gdev); 1213 + ret = gpiochip_setup_shared(gc); 1215 1214 if (ret) 1216 1215 goto err_remove_irqchip; 1217 1216 ··· 1249 1248 scoped_guard(mutex, &gpio_devices_lock) 1250 1249 list_del_rcu(&gdev->list); 1251 1250 synchronize_srcu(&gpio_devices_srcu); 1252 - if (gdev->dev.release) { 1253 - /* release() has been registered by gpiochip_setup_dev() */ 1254 - gpio_device_put(gdev); 1255 - goto err_print_message; 1256 - } 1251 + err_put_device: 1252 + gpio_device_put(gdev); 1253 + goto err_print_message; 1254 + 1257 1255 err_cleanup_desc_srcu: 1258 1256 cleanup_srcu_struct(&gdev->desc_srcu); 1259 1257 err_cleanup_gdev_srcu: 1260 1258 cleanup_srcu_struct(&gdev->srcu); 1261 - err_free_label: 1262 - kfree_const(gdev->label); 
1263 - err_free_descs: 1264 - kfree(gdev->descs); 1265 - err_free_dev_name: 1266 - kfree(dev_name(&gdev->dev)); 1267 1259 err_free_ida: 1268 1260 ida_free(&gpio_ida, gdev->id); 1269 1261 err_free_gdev: 1270 1262 kfree(gdev); 1263 + 1271 1264 err_print_message: 1272 1265 /* failures here can mean systems won't boot... */ 1273 1266 if (ret != -EPROBE_DEFER) { ··· 2460 2465 return -EBUSY; 2461 2466 2462 2467 offset = gpiod_hwgpio(desc); 2463 - if (!gpiochip_line_is_valid(guard.gc, offset)) 2464 - return -EINVAL; 2468 + if (!gpiochip_line_is_valid(guard.gc, offset)) { 2469 + ret = -EINVAL; 2470 + goto out_clear_bit; 2471 + } 2465 2472 2466 2473 /* NOTE: gpio_request() can be called in early boot, 2467 2474 * before IRQs are enabled, for non-sleeping (SOC) GPIOs. ··· 4714 4717 * lookup table for the proxy device as previously 4715 4718 * we only knew the consumer's fwnode. 4716 4719 */ 4717 - ret = gpio_shared_add_proxy_lookup(consumer, con_id, 4718 - lookupflags); 4720 + ret = gpio_shared_add_proxy_lookup(consumer, fwnode, 4721 + con_id, lookupflags); 4719 4722 if (ret) 4720 4723 return ERR_PTR(ret); 4721 4724
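The gpiolib restructuring follows the driver-core ownership rule: once device_initialize() has run, the object belongs to its refcount, and every subsequent error path must unwind through put_device() so that the release() callback does the freeing; freeing the fields by hand past that point risks a double free. A minimal sketch of the rule, with my_obj/my_release as hypothetical names:

#include <linux/device.h>
#include <linux/slab.h>

struct my_obj {
	struct device dev;
	char *label;
};

static void my_release(struct device *dev)
{
	struct my_obj *o = container_of(dev, struct my_obj, dev);

	kfree(o->label);	/* release() owns all the frees */
	kfree(o);
}

static struct my_obj *my_obj_create(void)
{
	struct my_obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

	if (!o)
		return NULL;

	device_initialize(&o->dev);
	o->dev.release = my_release;
	/* From here on, cleanup goes through the refcount, never kfree(). */

	o->label = kstrdup("example", GFP_KERNEL);
	if (!o->label)
		goto err_put;	/* my_release() handles the NULL label */

	if (device_add(&o->dev))
		goto err_put;	/* device_add() failure also unwinds this way */

	return o;

err_put:
	put_device(&o->dev);	/* drops the initial ref -> my_release() */
	return NULL;
}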
+2 -2
drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
··· 368 368 dc->res_pool->funcs->update_bw_bounding_box && 369 369 dc->clk_mgr && dc->clk_mgr->bw_params) { 370 370 /* update bounding box if FAMS2 disabled, or if dchub clk has changed */ 371 - if (dc->clk_mgr) 372 - dc->res_pool->funcs->update_bw_bounding_box(dc, dc->clk_mgr->bw_params); 371 + dc->res_pool->funcs->update_bw_bounding_box(dc, 372 + dc->clk_mgr->bw_params); 373 373 } 374 374 } 375 375 }
+41
drivers/gpu/drm/amd/display/dc/resource/dcn10/dcn10_resource.c
··· 71 71 #include "dce/dce_dmcu.h" 72 72 #include "dce/dce_aux.h" 73 73 #include "dce/dce_i2c.h" 74 + #include "dio/dcn10/dcn10_dio.h" 74 75 75 76 #ifndef mmDP0_DP_DPHY_INTERNAL_CTRL 76 77 #define mmDP0_DP_DPHY_INTERNAL_CTRL 0x210f ··· 444 443 static const struct dcn_hubbub_mask hubbub_mask = { 445 444 HUBBUB_MASK_SH_LIST_DCN10(_MASK) 446 445 }; 446 + 447 + static const struct dcn_dio_registers dio_regs = { 448 + DIO_REG_LIST_DCN10() 449 + }; 450 + 451 + #define DIO_MASK_SH_LIST(mask_sh)\ 452 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 453 + 454 + static const struct dcn_dio_shift dio_shift = { 455 + DIO_MASK_SH_LIST(__SHIFT) 456 + }; 457 + 458 + static const struct dcn_dio_mask dio_mask = { 459 + DIO_MASK_SH_LIST(_MASK) 460 + }; 461 + 462 + static struct dio *dcn10_dio_create(struct dc_context *ctx) 463 + { 464 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 465 + 466 + if (!dio10) 467 + return NULL; 468 + 469 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 470 + 471 + return &dio10->base; 472 + } 447 473 448 474 static int map_transmitter_id_to_phy_instance( 449 475 enum transmitter transmitter) ··· 945 917 946 918 kfree(pool->base.hubbub); 947 919 pool->base.hubbub = NULL; 920 + 921 + if (pool->base.dio != NULL) { 922 + kfree(TO_DCN10_DIO(pool->base.dio)); 923 + pool->base.dio = NULL; 924 + } 948 925 949 926 for (i = 0; i < pool->base.pipe_count; i++) { 950 927 if (pool->base.opps[i] != NULL) ··· 1693 1660 if (pool->base.hubbub == NULL) { 1694 1661 BREAK_TO_DEBUGGER(); 1695 1662 dm_error("DC: failed to create hubbub!\n"); 1663 + goto fail; 1664 + } 1665 + 1666 + /* DIO */ 1667 + pool->base.dio = dcn10_dio_create(ctx); 1668 + if (pool->base.dio == NULL) { 1669 + BREAK_TO_DEBUGGER(); 1670 + dm_error("DC: failed to create dio!\n"); 1696 1671 goto fail; 1697 1672 } 1698 1673
+42
drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
··· 82 82 #include "dce/dce_dmcu.h" 83 83 #include "dce/dce_aux.h" 84 84 #include "dce/dce_i2c.h" 85 + #include "dio/dcn10/dcn10_dio.h" 85 86 #include "vm_helper.h" 86 87 87 88 #include "link_enc_cfg.h" ··· 550 549 static const struct dcn_hubbub_mask hubbub_mask = { 551 550 HUBBUB_MASK_SH_LIST_DCN20(_MASK) 552 551 }; 552 + 553 + static const struct dcn_dio_registers dio_regs = { 554 + DIO_REG_LIST_DCN10() 555 + }; 556 + 557 + #define DIO_MASK_SH_LIST(mask_sh)\ 558 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 559 + 560 + static const struct dcn_dio_shift dio_shift = { 561 + DIO_MASK_SH_LIST(__SHIFT) 562 + }; 563 + 564 + static const struct dcn_dio_mask dio_mask = { 565 + DIO_MASK_SH_LIST(_MASK) 566 + }; 567 + 568 + static struct dio *dcn20_dio_create(struct dc_context *ctx) 569 + { 570 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 571 + 572 + if (!dio10) 573 + return NULL; 574 + 575 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 576 + 577 + return &dio10->base; 578 + } 553 579 554 580 #define vmid_regs(id)\ 555 581 [id] = {\ ··· 1133 1105 kfree(pool->base.hubbub); 1134 1106 pool->base.hubbub = NULL; 1135 1107 } 1108 + 1109 + if (pool->base.dio != NULL) { 1110 + kfree(TO_DCN10_DIO(pool->base.dio)); 1111 + pool->base.dio = NULL; 1112 + } 1113 + 1136 1114 for (i = 0; i < pool->base.pipe_count; i++) { 1137 1115 if (pool->base.dpps[i] != NULL) 1138 1116 dcn20_dpp_destroy(&pool->base.dpps[i]); ··· 2740 2706 if (pool->base.hubbub == NULL) { 2741 2707 BREAK_TO_DEBUGGER(); 2742 2708 dm_error("DC: failed to create hubbub!\n"); 2709 + goto create_fail; 2710 + } 2711 + 2712 + /* DIO */ 2713 + pool->base.dio = dcn20_dio_create(ctx); 2714 + if (pool->base.dio == NULL) { 2715 + BREAK_TO_DEBUGGER(); 2716 + dm_error("DC: failed to create dio!\n"); 2743 2717 goto create_fail; 2744 2718 } 2745 2719
+41
drivers/gpu/drm/amd/display/dc/resource/dcn201/dcn201_resource.c
··· 56 56 #include "dce/dce_aux.h" 57 57 #include "dce/dce_i2c.h" 58 58 #include "dcn10/dcn10_resource.h" 59 + #include "dio/dcn10/dcn10_dio.h" 59 60 60 61 #include "cyan_skillfish_ip_offset.h" 61 62 ··· 756 755 return &hubbub->base; 757 756 } 758 757 758 + static const struct dcn_dio_registers dio_regs = { 759 + DIO_REG_LIST_DCN10() 760 + }; 761 + 762 + #define DIO_MASK_SH_LIST(mask_sh)\ 763 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 764 + 765 + static const struct dcn_dio_shift dio_shift = { 766 + DIO_MASK_SH_LIST(__SHIFT) 767 + }; 768 + 769 + static const struct dcn_dio_mask dio_mask = { 770 + DIO_MASK_SH_LIST(_MASK) 771 + }; 772 + 773 + static struct dio *dcn201_dio_create(struct dc_context *ctx) 774 + { 775 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 776 + 777 + if (!dio10) 778 + return NULL; 779 + 780 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 781 + 782 + return &dio10->base; 783 + } 784 + 759 785 static struct timing_generator *dcn201_timing_generator_create( 760 786 struct dc_context *ctx, 761 787 uint32_t instance) ··· 956 928 if (pool->base.hubbub != NULL) { 957 929 kfree(pool->base.hubbub); 958 930 pool->base.hubbub = NULL; 931 + } 932 + 933 + if (pool->base.dio != NULL) { 934 + kfree(TO_DCN10_DIO(pool->base.dio)); 935 + pool->base.dio = NULL; 959 936 } 960 937 961 938 for (i = 0; i < pool->base.pipe_count; i++) { ··· 1307 1274 pool->base.hubbub = dcn201_hubbub_create(ctx); 1308 1275 if (pool->base.hubbub == NULL) { 1309 1276 dm_error("DC: failed to create hubbub!\n"); 1277 + goto create_fail; 1278 + } 1279 + 1280 + /* DIO */ 1281 + pool->base.dio = dcn201_dio_create(ctx); 1282 + if (pool->base.dio == NULL) { 1283 + BREAK_TO_DEBUGGER(); 1284 + dm_error("DC: failed to create dio!\n"); 1310 1285 goto create_fail; 1311 1286 } 1312 1287
+34
drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
··· 84 84 #include "dce/dce_dmcu.h" 85 85 #include "dce/dce_aux.h" 86 86 #include "dce/dce_i2c.h" 87 + #include "dio/dcn10/dcn10_dio.h" 87 88 #include "dcn21_resource.h" 88 89 #include "vm_helper.h" 89 90 #include "dcn20/dcn20_vmid.h" ··· 330 329 HUBBUB_MASK_SH_LIST_DCN21(_MASK) 331 330 }; 332 331 332 + static const struct dcn_dio_registers dio_regs = { 333 + DIO_REG_LIST_DCN10() 334 + }; 335 + 336 + static const struct dcn_dio_shift dio_shift = { 0 }; 337 + 338 + static const struct dcn_dio_mask dio_mask = { 0 }; 339 + 340 + static struct dio *dcn21_dio_create(struct dc_context *ctx) 341 + { 342 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 343 + 344 + if (!dio10) 345 + return NULL; 346 + 347 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 348 + 349 + return &dio10->base; 350 + } 333 351 334 352 #define vmid_regs(id)\ 335 353 [id] = {\ ··· 697 677 kfree(pool->base.hubbub); 698 678 pool->base.hubbub = NULL; 699 679 } 680 + 681 + if (pool->base.dio != NULL) { 682 + kfree(TO_DCN10_DIO(pool->base.dio)); 683 + pool->base.dio = NULL; 684 + } 685 + 700 686 for (i = 0; i < pool->base.pipe_count; i++) { 701 687 if (pool->base.dpps[i] != NULL) 702 688 dcn20_dpp_destroy(&pool->base.dpps[i]); ··· 1685 1659 if (pool->base.hubbub == NULL) { 1686 1660 BREAK_TO_DEBUGGER(); 1687 1661 dm_error("DC: failed to create hubbub!\n"); 1662 + goto create_fail; 1663 + } 1664 + 1665 + /* DIO */ 1666 + pool->base.dio = dcn21_dio_create(ctx); 1667 + if (pool->base.dio == NULL) { 1668 + BREAK_TO_DEBUGGER(); 1669 + dm_error("DC: failed to create dio!\n"); 1688 1670 goto create_fail; 1689 1671 } 1690 1672
+42
drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
··· 60 60 #include "dml/display_mode_vba.h" 61 61 #include "dcn30/dcn30_dccg.h" 62 62 #include "dcn10/dcn10_resource.h" 63 + #include "dio/dcn10/dcn10_dio.h" 63 64 #include "link_service.h" 64 65 #include "dce/dce_panel_cntl.h" 65 66 ··· 887 886 return &hubbub3->base; 888 887 } 889 888 889 + static const struct dcn_dio_registers dio_regs = { 890 + DIO_REG_LIST_DCN10() 891 + }; 892 + 893 + #define DIO_MASK_SH_LIST(mask_sh)\ 894 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 895 + 896 + static const struct dcn_dio_shift dio_shift = { 897 + DIO_MASK_SH_LIST(__SHIFT) 898 + }; 899 + 900 + static const struct dcn_dio_mask dio_mask = { 901 + DIO_MASK_SH_LIST(_MASK) 902 + }; 903 + 904 + static struct dio *dcn30_dio_create(struct dc_context *ctx) 905 + { 906 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 907 + 908 + if (!dio10) 909 + return NULL; 910 + 911 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 912 + 913 + return &dio10->base; 914 + } 915 + 890 916 static struct timing_generator *dcn30_timing_generator_create( 891 917 struct dc_context *ctx, 892 918 uint32_t instance) ··· 1124 1096 kfree(pool->base.hubbub); 1125 1097 pool->base.hubbub = NULL; 1126 1098 } 1099 + 1100 + if (pool->base.dio != NULL) { 1101 + kfree(TO_DCN10_DIO(pool->base.dio)); 1102 + pool->base.dio = NULL; 1103 + } 1104 + 1127 1105 for (i = 0; i < pool->base.pipe_count; i++) { 1128 1106 if (pool->base.dpps[i] != NULL) 1129 1107 dcn30_dpp_destroy(&pool->base.dpps[i]); ··· 2499 2465 if (pool->base.hubbub == NULL) { 2500 2466 BREAK_TO_DEBUGGER(); 2501 2467 dm_error("DC: failed to create hubbub!\n"); 2468 + goto create_fail; 2469 + } 2470 + 2471 + /* DIO */ 2472 + pool->base.dio = dcn30_dio_create(ctx); 2473 + if (pool->base.dio == NULL) { 2474 + BREAK_TO_DEBUGGER(); 2475 + dm_error("DC: failed to create dio!\n"); 2502 2476 goto create_fail; 2503 2477 } 2504 2478
+42
drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
··· 59 59 #include "dml/display_mode_vba.h" 60 60 #include "dcn301/dcn301_dccg.h" 61 61 #include "dcn10/dcn10_resource.h" 62 + #include "dio/dcn10/dcn10_dio.h" 62 63 #include "dcn30/dcn30_dio_stream_encoder.h" 63 64 #include "dcn301/dcn301_dio_link_encoder.h" 64 65 #include "dcn301/dcn301_panel_cntl.h" ··· 844 843 return &hubbub3->base; 845 844 } 846 845 846 + static const struct dcn_dio_registers dio_regs = { 847 + DIO_REG_LIST_DCN10() 848 + }; 849 + 850 + #define DIO_MASK_SH_LIST(mask_sh)\ 851 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 852 + 853 + static const struct dcn_dio_shift dio_shift = { 854 + DIO_MASK_SH_LIST(__SHIFT) 855 + }; 856 + 857 + static const struct dcn_dio_mask dio_mask = { 858 + DIO_MASK_SH_LIST(_MASK) 859 + }; 860 + 861 + static struct dio *dcn301_dio_create(struct dc_context *ctx) 862 + { 863 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 864 + 865 + if (!dio10) 866 + return NULL; 867 + 868 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 869 + 870 + return &dio10->base; 871 + } 872 + 847 873 static struct timing_generator *dcn301_timing_generator_create( 848 874 struct dc_context *ctx, uint32_t instance) 849 875 { ··· 1095 1067 kfree(pool->base.hubbub); 1096 1068 pool->base.hubbub = NULL; 1097 1069 } 1070 + 1071 + if (pool->base.dio != NULL) { 1072 + kfree(TO_DCN10_DIO(pool->base.dio)); 1073 + pool->base.dio = NULL; 1074 + } 1075 + 1098 1076 for (i = 0; i < pool->base.pipe_count; i++) { 1099 1077 if (pool->base.dpps[i] != NULL) 1100 1078 dcn301_dpp_destroy(&pool->base.dpps[i]); ··· 1615 1581 if (pool->base.hubbub == NULL) { 1616 1582 BREAK_TO_DEBUGGER(); 1617 1583 dm_error("DC: failed to create hubbub!\n"); 1584 + goto create_fail; 1585 + } 1586 + 1587 + /* DIO */ 1588 + pool->base.dio = dcn301_dio_create(ctx); 1589 + if (pool->base.dio == NULL) { 1590 + BREAK_TO_DEBUGGER(); 1591 + dm_error("DC: failed to create dio!\n"); 1618 1592 goto create_fail; 1619 1593 } 1620 1594
+41
drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
··· 46 46 #include "dml/dcn30/dcn30_fpu.h" 47 47 48 48 #include "dcn10/dcn10_resource.h" 49 + #include "dio/dcn10/dcn10_dio.h" 49 50 50 51 #include "link_service.h" 51 52 ··· 253 252 static const struct dcn20_vmid_mask vmid_masks = { 254 253 DCN20_VMID_MASK_SH_LIST(_MASK) 255 254 }; 255 + 256 + static const struct dcn_dio_registers dio_regs = { 257 + DIO_REG_LIST_DCN10() 258 + }; 259 + 260 + #define DIO_MASK_SH_LIST(mask_sh)\ 261 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 262 + 263 + static const struct dcn_dio_shift dio_shift = { 264 + DIO_MASK_SH_LIST(__SHIFT) 265 + }; 266 + 267 + static const struct dcn_dio_mask dio_mask = { 268 + DIO_MASK_SH_LIST(_MASK) 269 + }; 270 + 271 + static struct dio *dcn302_dio_create(struct dc_context *ctx) 272 + { 273 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 274 + 275 + if (!dio10) 276 + return NULL; 277 + 278 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 279 + 280 + return &dio10->base; 281 + } 256 282 257 283 static struct hubbub *dcn302_hubbub_create(struct dc_context *ctx) 258 284 { ··· 1051 1023 pool->hubbub = NULL; 1052 1024 } 1053 1025 1026 + if (pool->dio != NULL) { 1027 + kfree(TO_DCN10_DIO(pool->dio)); 1028 + pool->dio = NULL; 1029 + } 1030 + 1054 1031 for (i = 0; i < pool->pipe_count; i++) { 1055 1032 if (pool->dpps[i] != NULL) { 1056 1033 kfree(TO_DCN20_DPP(pool->dpps[i])); ··· 1404 1371 if (pool->hubbub == NULL) { 1405 1372 BREAK_TO_DEBUGGER(); 1406 1373 dm_error("DC: failed to create hubbub!\n"); 1374 + goto create_fail; 1375 + } 1376 + 1377 + /* DIO */ 1378 + pool->dio = dcn302_dio_create(ctx); 1379 + if (pool->dio == NULL) { 1380 + BREAK_TO_DEBUGGER(); 1381 + dm_error("DC: failed to create dio!\n"); 1407 1382 goto create_fail; 1408 1383 } 1409 1384
+41
drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
··· 46 46 #include "dml/dcn30/dcn30_fpu.h" 47 47 48 48 #include "dcn10/dcn10_resource.h" 49 + #include "dio/dcn10/dcn10_dio.h" 49 50 50 51 #include "link_service.h" 51 52 ··· 249 248 static const struct dcn20_vmid_mask vmid_masks = { 250 249 DCN20_VMID_MASK_SH_LIST(_MASK) 251 250 }; 251 + 252 + static const struct dcn_dio_registers dio_regs = { 253 + DIO_REG_LIST_DCN10() 254 + }; 255 + 256 + #define DIO_MASK_SH_LIST(mask_sh)\ 257 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 258 + 259 + static const struct dcn_dio_shift dio_shift = { 260 + DIO_MASK_SH_LIST(__SHIFT) 261 + }; 262 + 263 + static const struct dcn_dio_mask dio_mask = { 264 + DIO_MASK_SH_LIST(_MASK) 265 + }; 266 + 267 + static struct dio *dcn303_dio_create(struct dc_context *ctx) 268 + { 269 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 270 + 271 + if (!dio10) 272 + return NULL; 273 + 274 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 275 + 276 + return &dio10->base; 277 + } 252 278 253 279 static struct hubbub *dcn303_hubbub_create(struct dc_context *ctx) 254 280 { ··· 995 967 pool->hubbub = NULL; 996 968 } 997 969 970 + if (pool->dio != NULL) { 971 + kfree(TO_DCN10_DIO(pool->dio)); 972 + pool->dio = NULL; 973 + } 974 + 998 975 for (i = 0; i < pool->pipe_count; i++) { 999 976 if (pool->dpps[i] != NULL) { 1000 977 kfree(TO_DCN20_DPP(pool->dpps[i])); ··· 1336 1303 if (pool->hubbub == NULL) { 1337 1304 BREAK_TO_DEBUGGER(); 1338 1305 dm_error("DC: failed to create hubbub!\n"); 1306 + goto create_fail; 1307 + } 1308 + 1309 + /* DIO */ 1310 + pool->dio = dcn303_dio_create(ctx); 1311 + if (pool->dio == NULL) { 1312 + BREAK_TO_DEBUGGER(); 1313 + dm_error("DC: failed to create dio!\n"); 1339 1314 goto create_fail; 1340 1315 } 1341 1316
+40
drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
··· 64 64 #include "dce/dce_audio.h" 65 65 #include "dce/dce_hwseq.h" 66 66 #include "clk_mgr.h" 67 + #include "dio/dcn10/dcn10_dio.h" 67 68 #include "dio/virtual/virtual_stream_encoder.h" 68 69 #include "dce110/dce110_resource.h" 69 70 #include "dml/display_mode_vba.h" ··· 811 810 DCN20_VMID_MASK_SH_LIST(_MASK) 812 811 }; 813 812 813 + static const struct dcn_dio_registers dio_regs = { 814 + DIO_REG_LIST_DCN10() 815 + }; 816 + 817 + #define DIO_MASK_SH_LIST(mask_sh)\ 818 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 819 + 820 + static const struct dcn_dio_shift dio_shift = { 821 + DIO_MASK_SH_LIST(__SHIFT) 822 + }; 823 + 824 + static const struct dcn_dio_mask dio_mask = { 825 + DIO_MASK_SH_LIST(_MASK) 826 + }; 827 + 814 828 static const struct resource_caps res_cap_dcn31 = { 815 829 .num_timing_generator = 4, 816 830 .num_opp = 4, ··· 1035 1019 num_rmu); 1036 1020 1037 1021 return &mpc30->base; 1022 + } 1023 + 1024 + static struct dio *dcn31_dio_create(struct dc_context *ctx) 1025 + { 1026 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 1027 + 1028 + if (!dio10) 1029 + return NULL; 1030 + 1031 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 1032 + 1033 + return &dio10->base; 1038 1034 } 1039 1035 1040 1036 static struct hubbub *dcn31_hubbub_create(struct dc_context *ctx) ··· 1424 1396 if (pool->base.hubbub != NULL) { 1425 1397 kfree(pool->base.hubbub); 1426 1398 pool->base.hubbub = NULL; 1399 + } 1400 + if (pool->base.dio != NULL) { 1401 + kfree(TO_DCN10_DIO(pool->base.dio)); 1402 + pool->base.dio = NULL; 1427 1403 } 1428 1404 for (i = 0; i < pool->base.pipe_count; i++) { 1429 1405 if (pool->base.dpps[i] != NULL) ··· 2094 2062 if (pool->base.hubbub == NULL) { 2095 2063 BREAK_TO_DEBUGGER(); 2096 2064 dm_error("DC: failed to create hubbub!\n"); 2065 + goto create_fail; 2066 + } 2067 + 2068 + /* DIO */ 2069 + pool->base.dio = dcn31_dio_create(ctx); 2070 + if (pool->base.dio == NULL) { 2071 + BREAK_TO_DEBUGGER(); 2072 + dm_error("DC: failed to create dio!\n"); 2097 2073 goto create_fail; 2098 2074 } 2099 2075
+40
drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
··· 66 66 #include "dce/dce_audio.h" 67 67 #include "dce/dce_hwseq.h" 68 68 #include "clk_mgr.h" 69 + #include "dio/dcn10/dcn10_dio.h" 69 70 #include "dio/virtual/virtual_stream_encoder.h" 70 71 #include "dce110/dce110_resource.h" 71 72 #include "dml/display_mode_vba.h" ··· 823 822 DCN20_VMID_MASK_SH_LIST(_MASK) 824 823 }; 825 824 825 + static const struct dcn_dio_registers dio_regs = { 826 + DIO_REG_LIST_DCN10() 827 + }; 828 + 829 + #define DIO_MASK_SH_LIST(mask_sh)\ 830 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 831 + 832 + static const struct dcn_dio_shift dio_shift = { 833 + DIO_MASK_SH_LIST(__SHIFT) 834 + }; 835 + 836 + static const struct dcn_dio_mask dio_mask = { 837 + DIO_MASK_SH_LIST(_MASK) 838 + }; 839 + 826 840 static const struct resource_caps res_cap_dcn314 = { 827 841 .num_timing_generator = 4, 828 842 .num_opp = 4, ··· 1093 1077 num_rmu); 1094 1078 1095 1079 return &mpc30->base; 1080 + } 1081 + 1082 + static struct dio *dcn314_dio_create(struct dc_context *ctx) 1083 + { 1084 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 1085 + 1086 + if (!dio10) 1087 + return NULL; 1088 + 1089 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 1090 + 1091 + return &dio10->base; 1096 1092 } 1097 1093 1098 1094 static struct hubbub *dcn31_hubbub_create(struct dc_context *ctx) ··· 1483 1455 if (pool->base.hubbub != NULL) { 1484 1456 kfree(pool->base.hubbub); 1485 1457 pool->base.hubbub = NULL; 1458 + } 1459 + if (pool->base.dio != NULL) { 1460 + kfree(TO_DCN10_DIO(pool->base.dio)); 1461 + pool->base.dio = NULL; 1486 1462 } 1487 1463 for (i = 0; i < pool->base.pipe_count; i++) { 1488 1464 if (pool->base.dpps[i] != NULL) ··· 2021 1989 if (pool->base.hubbub == NULL) { 2022 1990 BREAK_TO_DEBUGGER(); 2023 1991 dm_error("DC: failed to create hubbub!\n"); 1992 + goto create_fail; 1993 + } 1994 + 1995 + /* DIO */ 1996 + pool->base.dio = dcn314_dio_create(ctx); 1997 + if (pool->base.dio == NULL) { 1998 + BREAK_TO_DEBUGGER(); 1999 + dm_error("DC: failed to create dio!\n"); 2024 2000 goto create_fail; 2025 2001 } 2026 2002
+40
drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
··· 63 63 #include "dce/dce_audio.h" 64 64 #include "dce/dce_hwseq.h" 65 65 #include "clk_mgr.h" 66 + #include "dio/dcn10/dcn10_dio.h" 66 67 #include "dio/virtual/virtual_stream_encoder.h" 67 68 #include "dce110/dce110_resource.h" 68 69 #include "dml/display_mode_vba.h" ··· 810 809 DCN20_VMID_MASK_SH_LIST(_MASK) 811 810 }; 812 811 812 + static const struct dcn_dio_registers dio_regs = { 813 + DIO_REG_LIST_DCN10() 814 + }; 815 + 816 + #define DIO_MASK_SH_LIST(mask_sh)\ 817 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 818 + 819 + static const struct dcn_dio_shift dio_shift = { 820 + DIO_MASK_SH_LIST(__SHIFT) 821 + }; 822 + 823 + static const struct dcn_dio_mask dio_mask = { 824 + DIO_MASK_SH_LIST(_MASK) 825 + }; 826 + 813 827 static const struct resource_caps res_cap_dcn31 = { 814 828 .num_timing_generator = 4, 815 829 .num_opp = 4, ··· 1034 1018 num_rmu); 1035 1019 1036 1020 return &mpc30->base; 1021 + } 1022 + 1023 + static struct dio *dcn315_dio_create(struct dc_context *ctx) 1024 + { 1025 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 1026 + 1027 + if (!dio10) 1028 + return NULL; 1029 + 1030 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 1031 + 1032 + return &dio10->base; 1037 1033 } 1038 1034 1039 1035 static struct hubbub *dcn31_hubbub_create(struct dc_context *ctx) ··· 1425 1397 if (pool->base.hubbub != NULL) { 1426 1398 kfree(pool->base.hubbub); 1427 1399 pool->base.hubbub = NULL; 1400 + } 1401 + if (pool->base.dio != NULL) { 1402 + kfree(TO_DCN10_DIO(pool->base.dio)); 1403 + pool->base.dio = NULL; 1428 1404 } 1429 1405 for (i = 0; i < pool->base.pipe_count; i++) { 1430 1406 if (pool->base.dpps[i] != NULL) ··· 2044 2012 if (pool->base.hubbub == NULL) { 2045 2013 BREAK_TO_DEBUGGER(); 2046 2014 dm_error("DC: failed to create hubbub!\n"); 2015 + goto create_fail; 2016 + } 2017 + 2018 + /* DIO */ 2019 + pool->base.dio = dcn315_dio_create(ctx); 2020 + if (pool->base.dio == NULL) { 2021 + BREAK_TO_DEBUGGER(); 2022 + dm_error("DC: failed to create dio!\n"); 2047 2023 goto create_fail; 2048 2024 } 2049 2025
+40
drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
··· 63 63 #include "dce/dce_audio.h" 64 64 #include "dce/dce_hwseq.h" 65 65 #include "clk_mgr.h" 66 + #include "dio/dcn10/dcn10_dio.h" 66 67 #include "dio/virtual/virtual_stream_encoder.h" 67 68 #include "dce110/dce110_resource.h" 68 69 #include "dml/display_mode_vba.h" ··· 805 804 DCN20_VMID_MASK_SH_LIST(_MASK) 806 805 }; 807 806 807 + static const struct dcn_dio_registers dio_regs = { 808 + DIO_REG_LIST_DCN10() 809 + }; 810 + 811 + #define DIO_MASK_SH_LIST(mask_sh)\ 812 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 813 + 814 + static const struct dcn_dio_shift dio_shift = { 815 + DIO_MASK_SH_LIST(__SHIFT) 816 + }; 817 + 818 + static const struct dcn_dio_mask dio_mask = { 819 + DIO_MASK_SH_LIST(_MASK) 820 + }; 821 + 808 822 static const struct resource_caps res_cap_dcn31 = { 809 823 .num_timing_generator = 4, 810 824 .num_opp = 4, ··· 1027 1011 num_rmu); 1028 1012 1029 1013 return &mpc30->base; 1014 + } 1015 + 1016 + static struct dio *dcn316_dio_create(struct dc_context *ctx) 1017 + { 1018 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 1019 + 1020 + if (!dio10) 1021 + return NULL; 1022 + 1023 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 1024 + 1025 + return &dio10->base; 1030 1026 } 1031 1027 1032 1028 static struct hubbub *dcn31_hubbub_create(struct dc_context *ctx) ··· 1420 1392 if (pool->base.hubbub != NULL) { 1421 1393 kfree(pool->base.hubbub); 1422 1394 pool->base.hubbub = NULL; 1395 + } 1396 + if (pool->base.dio != NULL) { 1397 + kfree(TO_DCN10_DIO(pool->base.dio)); 1398 + pool->base.dio = NULL; 1423 1399 } 1424 1400 for (i = 0; i < pool->base.pipe_count; i++) { 1425 1401 if (pool->base.dpps[i] != NULL) ··· 1920 1888 if (pool->base.hubbub == NULL) { 1921 1889 BREAK_TO_DEBUGGER(); 1922 1890 dm_error("DC: failed to create hubbub!\n"); 1891 + goto create_fail; 1892 + } 1893 + 1894 + /* DIO */ 1895 + pool->base.dio = dcn316_dio_create(ctx); 1896 + if (pool->base.dio == NULL) { 1897 + BREAK_TO_DEBUGGER(); 1898 + dm_error("DC: failed to create dio!\n"); 1923 1899 goto create_fail; 1924 1900 } 1925 1901
+43
drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
··· 66 66 #include "dce/dce_hwseq.h" 67 67 #include "clk_mgr.h" 68 68 #include "dio/virtual/virtual_stream_encoder.h" 69 + #include "dio/dcn10/dcn10_dio.h" 69 70 #include "dml/display_mode_vba.h" 70 71 #include "dcn32/dcn32_dccg.h" 71 72 #include "dcn10/dcn10_resource.h" ··· 644 643 DCN20_VMID_MASK_SH_LIST(_MASK) 645 644 }; 646 645 646 + static struct dcn_dio_registers dio_regs; 647 + 648 + #define DIO_MASK_SH_LIST(mask_sh)\ 649 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 650 + 651 + static const struct dcn_dio_shift dio_shift = { 652 + DIO_MASK_SH_LIST(__SHIFT) 653 + }; 654 + 655 + static const struct dcn_dio_mask dio_mask = { 656 + DIO_MASK_SH_LIST(_MASK) 657 + }; 658 + 647 659 static const struct resource_caps res_cap_dcn32 = { 648 660 .num_timing_generator = 4, 649 661 .num_opp = 4, ··· 845 831 kfree(clk_src); 846 832 BREAK_TO_DEBUGGER(); 847 833 return NULL; 834 + } 835 + 836 + static struct dio *dcn32_dio_create(struct dc_context *ctx) 837 + { 838 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 839 + 840 + if (!dio10) 841 + return NULL; 842 + 843 + #undef REG_STRUCT 844 + #define REG_STRUCT dio_regs 845 + DIO_REG_LIST_DCN10(); 846 + 847 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 848 + 849 + return &dio10->base; 848 850 } 849 851 850 852 static struct hubbub *dcn32_hubbub_create(struct dc_context *ctx) ··· 1523 1493 1524 1494 if (pool->base.dccg != NULL) 1525 1495 dcn_dccg_destroy(&pool->base.dccg); 1496 + 1497 + if (pool->base.dio != NULL) { 1498 + kfree(TO_DCN10_DIO(pool->base.dio)); 1499 + pool->base.dio = NULL; 1500 + } 1526 1501 1527 1502 if (pool->base.oem_device != NULL) { 1528 1503 struct dc *dc = pool->base.oem_device->ctx->dc; ··· 2406 2371 if (pool->base.hubbub == NULL) { 2407 2372 BREAK_TO_DEBUGGER(); 2408 2373 dm_error("DC: failed to create hubbub!\n"); 2374 + goto create_fail; 2375 + } 2376 + 2377 + /* DIO */ 2378 + pool->base.dio = dcn32_dio_create(ctx); 2379 + if (pool->base.dio == NULL) { 2380 + BREAK_TO_DEBUGGER(); 2381 + dm_error("DC: failed to create dio!\n"); 2409 2382 goto create_fail; 2410 2383 } 2411 2384
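dcn32 and the later files differ from the dcn30x variants in one detail: dio_regs is left non-const, and DIO_REG_LIST_DCN10() is expanded inside the create function after redefining REG_STRUCT, so the register list fills the struct at runtime rather than as a static initializer. A hedged stand-alone sketch of that redefine-and-expand convention (REG, MY_REG_LIST and my_regs are placeholders):

#include <stdio.h>

struct my_regs { unsigned int CTRL; unsigned int STATUS; };

/* Each entry assigns one field of whatever REG_STRUCT names at expansion time. */
#define REG(field, offset)  REG_STRUCT.field = (offset)
#define MY_REG_LIST()       REG(CTRL, 0x10); REG(STATUS, 0x14)

static struct my_regs regs;

static void init_regs(void)
{
#undef REG_STRUCT
#define REG_STRUCT regs
	MY_REG_LIST();  /* expands to: regs.CTRL = 0x10; regs.STATUS = 0x14; */
}

int main(void)
{
	init_regs();
	printf("CTRL=0x%x STATUS=0x%x\n", regs.CTRL, regs.STATUS);
	return 0;
}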
+43
drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
··· 69 69 #include "dce/dce_hwseq.h" 70 70 #include "clk_mgr.h" 71 71 #include "dio/virtual/virtual_stream_encoder.h" 72 + #include "dio/dcn10/dcn10_dio.h" 72 73 #include "dml/display_mode_vba.h" 73 74 #include "dcn32/dcn32_dccg.h" 74 75 #include "dcn10/dcn10_resource.h" ··· 640 639 DCN20_VMID_MASK_SH_LIST(_MASK) 641 640 }; 642 641 642 + static struct dcn_dio_registers dio_regs; 643 + 644 + #define DIO_MASK_SH_LIST(mask_sh)\ 645 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 646 + 647 + static const struct dcn_dio_shift dio_shift = { 648 + DIO_MASK_SH_LIST(__SHIFT) 649 + }; 650 + 651 + static const struct dcn_dio_mask dio_mask = { 652 + DIO_MASK_SH_LIST(_MASK) 653 + }; 654 + 643 655 static const struct resource_caps res_cap_dcn321 = { 644 656 .num_timing_generator = 4, 645 657 .num_opp = 4, ··· 839 825 kfree(clk_src); 840 826 BREAK_TO_DEBUGGER(); 841 827 return NULL; 828 + } 829 + 830 + static struct dio *dcn321_dio_create(struct dc_context *ctx) 831 + { 832 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 833 + 834 + if (!dio10) 835 + return NULL; 836 + 837 + #undef REG_STRUCT 838 + #define REG_STRUCT dio_regs 839 + DIO_REG_LIST_DCN10(); 840 + 841 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 842 + 843 + return &dio10->base; 842 844 } 843 845 844 846 static struct hubbub *dcn321_hubbub_create(struct dc_context *ctx) ··· 1504 1474 if (pool->base.dccg != NULL) 1505 1475 dcn_dccg_destroy(&pool->base.dccg); 1506 1476 1477 + if (pool->base.dio != NULL) { 1478 + kfree(TO_DCN10_DIO(pool->base.dio)); 1479 + pool->base.dio = NULL; 1480 + } 1481 + 1507 1482 if (pool->base.oem_device != NULL) { 1508 1483 struct dc *dc = pool->base.oem_device->ctx->dc; 1509 1484 ··· 1905 1870 if (pool->base.hubbub == NULL) { 1906 1871 BREAK_TO_DEBUGGER(); 1907 1872 dm_error("DC: failed to create hubbub!\n"); 1873 + goto create_fail; 1874 + } 1875 + 1876 + /* DIO */ 1877 + pool->base.dio = dcn321_dio_create(ctx); 1878 + if (pool->base.dio == NULL) { 1879 + BREAK_TO_DEBUGGER(); 1880 + dm_error("DC: failed to create dio!\n"); 1908 1881 goto create_fail; 1909 1882 } 1910 1883
+43
drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
··· 71 71 #include "dce/dce_hwseq.h" 72 72 #include "clk_mgr.h" 73 73 #include "dio/virtual/virtual_stream_encoder.h" 74 + #include "dio/dcn10/dcn10_dio.h" 74 75 #include "dce110/dce110_resource.h" 75 76 #include "dml/display_mode_vba.h" 76 77 #include "dcn35/dcn35_dccg.h" ··· 665 664 DCN20_VMID_MASK_SH_LIST(_MASK) 666 665 }; 667 666 667 + static struct dcn_dio_registers dio_regs; 668 + 669 + #define DIO_MASK_SH_LIST(mask_sh)\ 670 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 671 + 672 + static const struct dcn_dio_shift dio_shift = { 673 + DIO_MASK_SH_LIST(__SHIFT) 674 + }; 675 + 676 + static const struct dcn_dio_mask dio_mask = { 677 + DIO_MASK_SH_LIST(_MASK) 678 + }; 679 + 668 680 static const struct resource_caps res_cap_dcn35 = { 669 681 .num_timing_generator = 4, 670 682 .num_opp = 4, ··· 985 971 num_rmu); 986 972 987 973 return &mpc30->base; 974 + } 975 + 976 + static struct dio *dcn35_dio_create(struct dc_context *ctx) 977 + { 978 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 979 + 980 + if (!dio10) 981 + return NULL; 982 + 983 + #undef REG_STRUCT 984 + #define REG_STRUCT dio_regs 985 + DIO_REG_LIST_DCN10(); 986 + 987 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 988 + 989 + return &dio10->base; 988 990 } 989 991 990 992 static struct hubbub *dcn35_hubbub_create(struct dc_context *ctx) ··· 1593 1563 1594 1564 if (pool->base.dccg != NULL) 1595 1565 dcn_dccg_destroy(&pool->base.dccg); 1566 + 1567 + if (pool->base.dio != NULL) { 1568 + kfree(TO_DCN10_DIO(pool->base.dio)); 1569 + pool->base.dio = NULL; 1570 + } 1596 1571 } 1597 1572 1598 1573 static struct hubp *dcn35_hubp_create( ··· 2077 2042 if (pool->base.hubbub == NULL) { 2078 2043 BREAK_TO_DEBUGGER(); 2079 2044 dm_error("DC: failed to create hubbub!\n"); 2045 + goto create_fail; 2046 + } 2047 + 2048 + /* DIO */ 2049 + pool->base.dio = dcn35_dio_create(ctx); 2050 + if (pool->base.dio == NULL) { 2051 + BREAK_TO_DEBUGGER(); 2052 + dm_error("DC: failed to create dio!\n"); 2080 2053 goto create_fail; 2081 2054 } 2082 2055
+43
drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
··· 50 50 #include "dce/dce_hwseq.h" 51 51 #include "clk_mgr.h" 52 52 #include "dio/virtual/virtual_stream_encoder.h" 53 + #include "dio/dcn10/dcn10_dio.h" 53 54 #include "dce110/dce110_resource.h" 54 55 #include "dml/display_mode_vba.h" 55 56 #include "dcn35/dcn35_dccg.h" ··· 645 644 DCN20_VMID_MASK_SH_LIST(_MASK) 646 645 }; 647 646 647 + static struct dcn_dio_registers dio_regs; 648 + 649 + #define DIO_MASK_SH_LIST(mask_sh)\ 650 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 651 + 652 + static const struct dcn_dio_shift dio_shift = { 653 + DIO_MASK_SH_LIST(__SHIFT) 654 + }; 655 + 656 + static const struct dcn_dio_mask dio_mask = { 657 + DIO_MASK_SH_LIST(_MASK) 658 + }; 659 + 648 660 static const struct resource_caps res_cap_dcn351 = { 649 661 .num_timing_generator = 4, 650 662 .num_opp = 4, ··· 965 951 num_rmu); 966 952 967 953 return &mpc30->base; 954 + } 955 + 956 + static struct dio *dcn351_dio_create(struct dc_context *ctx) 957 + { 958 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 959 + 960 + if (!dio10) 961 + return NULL; 962 + 963 + #undef REG_STRUCT 964 + #define REG_STRUCT dio_regs 965 + DIO_REG_LIST_DCN10(); 966 + 967 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 968 + 969 + return &dio10->base; 968 970 } 969 971 970 972 static struct hubbub *dcn35_hubbub_create(struct dc_context *ctx) ··· 1573 1543 1574 1544 if (pool->base.dccg != NULL) 1575 1545 dcn_dccg_destroy(&pool->base.dccg); 1546 + 1547 + if (pool->base.dio != NULL) { 1548 + kfree(TO_DCN10_DIO(pool->base.dio)); 1549 + pool->base.dio = NULL; 1550 + } 1576 1551 } 1577 1552 1578 1553 static struct hubp *dcn35_hubp_create( ··· 2049 2014 if (pool->base.hubbub == NULL) { 2050 2015 BREAK_TO_DEBUGGER(); 2051 2016 dm_error("DC: failed to create hubbub!\n"); 2017 + goto create_fail; 2018 + } 2019 + 2020 + /* DIO */ 2021 + pool->base.dio = dcn351_dio_create(ctx); 2022 + if (pool->base.dio == NULL) { 2023 + BREAK_TO_DEBUGGER(); 2024 + dm_error("DC: failed to create dio!\n"); 2052 2025 goto create_fail; 2053 2026 } 2054 2027
+43
drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
··· 50 50 #include "dce/dce_hwseq.h" 51 51 #include "clk_mgr.h" 52 52 #include "dio/virtual/virtual_stream_encoder.h" 53 + #include "dio/dcn10/dcn10_dio.h" 53 54 #include "dce110/dce110_resource.h" 54 55 #include "dml/display_mode_vba.h" 55 56 #include "dcn35/dcn35_dccg.h" ··· 652 651 DCN20_VMID_MASK_SH_LIST(_MASK) 653 652 }; 654 653 654 + static struct dcn_dio_registers dio_regs; 655 + 656 + #define DIO_MASK_SH_LIST(mask_sh)\ 657 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 658 + 659 + static const struct dcn_dio_shift dio_shift = { 660 + DIO_MASK_SH_LIST(__SHIFT) 661 + }; 662 + 663 + static const struct dcn_dio_mask dio_mask = { 664 + DIO_MASK_SH_LIST(_MASK) 665 + }; 666 + 655 667 static const struct resource_caps res_cap_dcn36 = { 656 668 .num_timing_generator = 4, 657 669 .num_opp = 4, ··· 972 958 num_rmu); 973 959 974 960 return &mpc30->base; 961 + } 962 + 963 + static struct dio *dcn36_dio_create(struct dc_context *ctx) 964 + { 965 + struct dcn10_dio *dio10 = kzalloc_obj(struct dcn10_dio); 966 + 967 + if (!dio10) 968 + return NULL; 969 + 970 + #undef REG_STRUCT 971 + #define REG_STRUCT dio_regs 972 + DIO_REG_LIST_DCN10(); 973 + 974 + dcn10_dio_construct(dio10, ctx, &dio_regs, &dio_shift, &dio_mask); 975 + 976 + return &dio10->base; 975 977 } 976 978 977 979 static struct hubbub *dcn35_hubbub_create(struct dc_context *ctx) ··· 1580 1550 1581 1551 if (pool->base.dccg != NULL) 1582 1552 dcn_dccg_destroy(&pool->base.dccg); 1553 + 1554 + if (pool->base.dio != NULL) { 1555 + kfree(TO_DCN10_DIO(pool->base.dio)); 1556 + pool->base.dio = NULL; 1557 + } 1583 1558 } 1584 1559 1585 1560 static struct hubp *dcn35_hubp_create( ··· 2047 2012 if (pool->base.hubbub == NULL) { 2048 2013 BREAK_TO_DEBUGGER(); 2049 2014 dm_error("DC: failed to create hubbub!\n"); 2015 + goto create_fail; 2016 + } 2017 + 2018 + /* DIO */ 2019 + pool->base.dio = dcn36_dio_create(ctx); 2020 + if (pool->base.dio == NULL) { 2021 + BREAK_TO_DEBUGGER(); 2022 + dm_error("DC: failed to create dio!\n"); 2050 2023 goto create_fail; 2051 2024 } 2052 2025
+1 -1
drivers/gpu/drm/ast/ast_dp501.c
··· 436 436 /* Finally, clear bits [17:16] of SCU2c */ 437 437 data = ast_read32(ast, 0x1202c); 438 438 data &= 0xfffcffff; 439 - ast_write32(ast, 0, data); 439 + ast_write32(ast, 0x1202c, data); 440 440 441 441 /* Disable DVO */ 442 442 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xcf, 0x00);
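The one-liner above is a classic read-modify-write slip: the masked SCU2C value was written to register offset 0 instead of back to 0x1202c, so the clear of bits [17:16] never landed. The invariant, sketched with a plain volatile pointer standing in for the MMIO accessors:

/* A read-modify-write must end where it began: the same register. */
static void clear_bits32(volatile unsigned int *reg, unsigned int mask)
{
	unsigned int data = *reg;   /* read               */

	data &= ~mask;              /* modify: clear bits */
	*reg = data;                /* write back to *reg */
}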
+11 -5
drivers/gpu/drm/drm_bridge.c
··· 1603 1603 static void drm_bridge_debugfs_show_bridge(struct drm_printer *p, 1604 1604 struct drm_bridge *bridge, 1605 1605 unsigned int idx, 1606 - bool lingering) 1606 + bool lingering, 1607 + bool scoped) 1607 1608 { 1609 + unsigned int refcount = kref_read(&bridge->refcount); 1610 + 1611 + if (scoped) 1612 + refcount--; 1613 + 1608 1614 drm_printf(p, "bridge[%u]: %ps\n", idx, bridge->funcs); 1609 1615 1610 - drm_printf(p, "\trefcount: %u%s\n", kref_read(&bridge->refcount), 1616 + drm_printf(p, "\trefcount: %u%s\n", refcount, 1611 1617 lingering ? " [lingering]" : ""); 1612 1618 1613 1619 drm_printf(p, "\ttype: [%d] %s\n", ··· 1647 1641 mutex_lock(&bridge_lock); 1648 1642 1649 1643 list_for_each_entry(bridge, &bridge_list, list) 1650 - drm_bridge_debugfs_show_bridge(&p, bridge, idx++, false); 1644 + drm_bridge_debugfs_show_bridge(&p, bridge, idx++, false, false); 1651 1645 1652 1646 list_for_each_entry(bridge, &bridge_lingering_list, list) 1653 - drm_bridge_debugfs_show_bridge(&p, bridge, idx++, true); 1647 + drm_bridge_debugfs_show_bridge(&p, bridge, idx++, true, false); 1654 1648 1655 1649 mutex_unlock(&bridge_lock); 1656 1650 ··· 1665 1659 unsigned int idx = 0; 1666 1660 1667 1661 drm_for_each_bridge_in_chain_scoped(encoder, bridge) 1668 - drm_bridge_debugfs_show_bridge(&p, bridge, idx++, false); 1662 + drm_bridge_debugfs_show_bridge(&p, bridge, idx++, false, true); 1669 1663 1670 1664 return 0; 1671 1665 }
+1 -4
drivers/gpu/drm/drm_file.c
··· 233 233 void drm_file_free(struct drm_file *file) 234 234 { 235 235 struct drm_device *dev; 236 - int idx; 237 236 238 237 if (!file) 239 238 return; ··· 249 250 250 251 drm_events_release(file); 251 252 252 - if (drm_core_check_feature(dev, DRIVER_MODESET) && 253 - drm_dev_enter(dev, &idx)) { 253 + if (drm_core_check_feature(dev, DRIVER_MODESET)) { 254 254 drm_fb_release(file); 255 255 drm_property_destroy_user_blobs(dev, file); 256 - drm_dev_exit(idx); 257 256 } 258 257 259 258 if (drm_core_check_feature(dev, DRIVER_SYNCOBJ))
+2
drivers/gpu/drm/drm_ioc32.c
··· 28 28 * IN THE SOFTWARE. 29 29 */ 30 30 #include <linux/compat.h> 31 + #include <linux/nospec.h> 31 32 #include <linux/ratelimit.h> 32 33 #include <linux/export.h> 33 34 ··· 375 374 if (nr >= ARRAY_SIZE(drm_compat_ioctls)) 376 375 return drm_ioctl(filp, cmd, arg); 377 376 377 + nr = array_index_nospec(nr, ARRAY_SIZE(drm_compat_ioctls)); 378 378 fn = drm_compat_ioctls[nr].fn; 379 379 if (!fn) 380 380 return drm_ioctl(filp, cmd, arg);
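array_index_nospec() pins the compat-ioctl index so the CPU cannot speculatively read drm_compat_ioctls[] past the preceding bounds check. The kernel's generic fallback clamps the index without a branch, roughly as in this stand-alone sketch (simplified; the real helper may also use fences on some architectures, and the arithmetic right shift of a negative value is assumed, as the kernel does):

/* All-ones when index < size, zero otherwise, computed branch-free. */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
	return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}

/* After a bounds check, force the index into [0, size) even under speculation. */
static unsigned long index_nospec(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}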
+3 -6
drivers/gpu/drm/drm_mode_config.c
··· 589 589 */ 590 590 WARN_ON(!list_empty(&dev->mode_config.fb_list)); 591 591 list_for_each_entry_safe(fb, fbt, &dev->mode_config.fb_list, head) { 592 - if (list_empty(&fb->filp_head) || drm_framebuffer_read_refcount(fb) > 1) { 593 - struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]"); 592 + struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]"); 594 593 595 - drm_printf(&p, "framebuffer[%u]:\n", fb->base.id); 596 - drm_framebuffer_print_info(&p, 1, fb); 597 - } 598 - list_del_init(&fb->filp_head); 594 + drm_printf(&p, "framebuffer[%u]:\n", fb->base.id); 595 + drm_framebuffer_print_info(&p, 1, fb); 599 596 drm_framebuffer_free(&fb->base.refcount); 600 597 } 601 598
+1 -1
drivers/gpu/drm/i915/display/g4x_dp.c
··· 136 136 intel_dp->DP |= DP_SYNC_VS_HIGH; 137 137 intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT; 138 138 139 - if (drm_dp_enhanced_frame_cap(intel_dp->dpcd)) 139 + if (pipe_config->enhanced_framing) 140 140 intel_dp->DP |= DP_ENHANCED_FRAMING; 141 141 142 142 intel_dp->DP |= DP_PIPE_SEL_IVB(crtc->pipe);
+54
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 2972 2972 return 0; 2973 2973 } 2974 2974 2975 + static int intel_cdclk_update_crtc_min_voltage_level(struct intel_atomic_state *state, 2976 + struct intel_crtc *crtc, 2977 + u8 old_min_voltage_level, 2978 + u8 new_min_voltage_level, 2979 + bool *need_cdclk_calc) 2980 + { 2981 + struct intel_display *display = to_intel_display(state); 2982 + struct intel_cdclk_state *cdclk_state; 2983 + bool allow_voltage_level_decrease = intel_any_crtc_needs_modeset(state); 2984 + int ret; 2985 + 2986 + if (new_min_voltage_level == old_min_voltage_level) 2987 + return 0; 2988 + 2989 + if (!allow_voltage_level_decrease && 2990 + new_min_voltage_level < old_min_voltage_level) 2991 + return 0; 2992 + 2993 + cdclk_state = intel_atomic_get_cdclk_state(state); 2994 + if (IS_ERR(cdclk_state)) 2995 + return PTR_ERR(cdclk_state); 2996 + 2997 + old_min_voltage_level = cdclk_state->min_voltage_level[crtc->pipe]; 2998 + 2999 + if (new_min_voltage_level == old_min_voltage_level) 3000 + return 0; 3001 + 3002 + if (!allow_voltage_level_decrease && 3003 + new_min_voltage_level < old_min_voltage_level) 3004 + return 0; 3005 + 3006 + cdclk_state->min_voltage_level[crtc->pipe] = new_min_voltage_level; 3007 + 3008 + ret = intel_atomic_lock_global_state(&cdclk_state->base); 3009 + if (ret) 3010 + return ret; 3011 + 3012 + *need_cdclk_calc = true; 3013 + 3014 + drm_dbg_kms(display->drm, 3015 + "[CRTC:%d:%s] min voltage level: %d -> %d\n", 3016 + crtc->base.base.id, crtc->base.name, 3017 + old_min_voltage_level, new_min_voltage_level); 3018 + 3019 + return 0; 3020 + } 3021 + 2975 3022 int intel_cdclk_update_dbuf_bw_min_cdclk(struct intel_atomic_state *state, 2976 3023 int old_min_cdclk, int new_min_cdclk, 2977 3024 bool *need_cdclk_calc) ··· 3432 3385 old_crtc_state->min_cdclk, 3433 3386 new_crtc_state->min_cdclk, 3434 3387 need_cdclk_calc); 3388 + if (ret) 3389 + return ret; 3390 + 3391 + ret = intel_cdclk_update_crtc_min_voltage_level(state, crtc, 3392 + old_crtc_state->min_voltage_level, 3393 + new_crtc_state->min_voltage_level, 3394 + need_cdclk_calc); 3435 3395 if (ret) 3436 3396 return ret; 3437 3397 }
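The new cdclk helper is a standard optimistic double-check: it first compares the new level against the cheap cached value, and only after acquiring the serializing cdclk global state repeats the same comparison (and the no-decrease-without-modeset rule), because that state may already hold a newer level from another commit in flight. The shape of the pattern, as a user-space sketch with a mutex standing in for the global-state lock:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t global_state = PTHREAD_MUTEX_INITIALIZER;
static int level;   /* stands in for cdclk_state->min_voltage_level[pipe] */

static void update_min_level(int cached_old, int new_level, bool *need_recalc)
{
	if (new_level == cached_old)            /* cheap pre-check, no lock */
		return;

	pthread_mutex_lock(&global_state);      /* "acquire the global state" */
	if (new_level != level) {               /* re-check the serialized copy */
		level = new_level;
		*need_recalc = true;            /* only now force a recalculation */
	}
	pthread_mutex_unlock(&global_state);
}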
+2
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 898 898 vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle); 899 899 if (likely(vma && vma->vm == vm)) 900 900 vma = i915_vma_tryget(vma); 901 + else 902 + vma = NULL; 901 903 rcu_read_unlock(); 902 904 if (likely(vma)) 903 905 return vma;
+31 -15
drivers/gpu/drm/sysfb/efidrm.c
··· 157 157 struct drm_sysfb_device *sysfb; 158 158 struct drm_device *dev; 159 159 struct resource *mem = NULL; 160 - void __iomem *screen_base = NULL; 161 160 struct drm_plane *primary_plane; 162 161 struct drm_crtc *crtc; 163 162 struct drm_encoder *encoder; ··· 243 244 244 245 mem_flags = efidrm_get_mem_flags(dev, res->start, vsize); 245 246 246 - if (mem_flags & EFI_MEMORY_WC) 247 - screen_base = devm_ioremap_wc(&pdev->dev, mem->start, resource_size(mem)); 248 - else if (mem_flags & EFI_MEMORY_UC) 249 - screen_base = devm_ioremap(&pdev->dev, mem->start, resource_size(mem)); 250 - else if (mem_flags & EFI_MEMORY_WT) 251 - screen_base = devm_memremap(&pdev->dev, mem->start, resource_size(mem), 252 - MEMREMAP_WT); 253 - else if (mem_flags & EFI_MEMORY_WB) 254 - screen_base = devm_memremap(&pdev->dev, mem->start, resource_size(mem), 255 - MEMREMAP_WB); 256 - else 247 + if (mem_flags & EFI_MEMORY_WC) { 248 + void __iomem *screen_base = devm_ioremap_wc(&pdev->dev, mem->start, 249 + resource_size(mem)); 250 + 251 + if (!screen_base) 252 + return ERR_PTR(-ENXIO); 253 + iosys_map_set_vaddr_iomem(&sysfb->fb_addr, screen_base); 254 + } else if (mem_flags & EFI_MEMORY_UC) { 255 + void __iomem *screen_base = devm_ioremap(&pdev->dev, mem->start, 256 + resource_size(mem)); 257 + 258 + if (!screen_base) 259 + return ERR_PTR(-ENXIO); 260 + iosys_map_set_vaddr_iomem(&sysfb->fb_addr, screen_base); 261 + } else if (mem_flags & EFI_MEMORY_WT) { 262 + void *screen_base = devm_memremap(&pdev->dev, mem->start, 263 + resource_size(mem), MEMREMAP_WT); 264 + 265 + if (IS_ERR(screen_base)) 266 + return ERR_CAST(screen_base); 267 + iosys_map_set_vaddr(&sysfb->fb_addr, screen_base); 268 + } else if (mem_flags & EFI_MEMORY_WB) { 269 + void *screen_base = devm_memremap(&pdev->dev, mem->start, 270 + resource_size(mem), MEMREMAP_WB); 271 + 272 + if (IS_ERR(screen_base)) 273 + return ERR_CAST(screen_base); 274 + iosys_map_set_vaddr(&sysfb->fb_addr, screen_base); 275 + } else { 257 276 drm_err(dev, "invalid mem_flags: 0x%llx\n", mem_flags); 258 - if (!screen_base) 259 - return ERR_PTR(-ENOMEM); 260 - iosys_map_set_vaddr_iomem(&sysfb->fb_addr, screen_base); 277 + return ERR_PTR(-EINVAL); 278 + } 261 279 262 280 /* 263 281 * Modesetting
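The restructured efidrm mapping code exists because the two mapping families report failure differently: devm_ioremap()/devm_ioremap_wc() return NULL, while devm_memremap() returns an ERR_PTR(), so each branch must translate its own failure instead of sharing one NULL test. A small user-space sketch of that split convention (map_io and map_mem are stand-ins for the two families):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define ERR_PTR(err)  ((void *)(intptr_t)(err))
#define IS_ERR(p)     ((uintptr_t)(p) >= (uintptr_t)-4095)  /* caller-side test */

/* Stand-ins: ioremap-style APIs fail with NULL, memremap-style with ERR_PTR(). */
static void *map_io(size_t size)  { return malloc(size); }
static void *map_mem(size_t size) { void *p = malloc(size); return p ? p : ERR_PTR(-ENOMEM); }

static void *map_framebuffer(int cacheable, size_t size)
{
	if (!cacheable) {
		void *p = map_io(size);

		return p ? p : ERR_PTR(-ENXIO); /* translate NULL into an error */
	}
	return map_mem(size);                   /* already ERR_PTR() on failure */
}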
+13 -14
drivers/gpu/drm/xe/xe_device.c
··· 833 833 } 834 834 } 835 835 836 + static void xe_device_wedged_fini(struct drm_device *drm, void *arg) 837 + { 838 + struct xe_device *xe = arg; 839 + 840 + if (atomic_read(&xe->wedged.flag)) 841 + xe_pm_runtime_put(xe); 842 + } 843 + 836 844 int xe_device_probe(struct xe_device *xe) 837 845 { 838 846 struct xe_tile *tile; ··· 1016 1008 goto err_unregister_display; 1017 1009 1018 1010 detect_preproduction_hw(xe); 1011 + 1012 + err = drmm_add_action_or_reset(&xe->drm, xe_device_wedged_fini, xe); 1013 + if (err) 1014 + goto err_unregister_display; 1019 1015 1020 1016 return devm_add_action_or_reset(xe->drm.dev, xe_device_sanitize, xe); 1021 1017 ··· 1252 1240 return address & GENMASK_ULL(xe->info.va_bits - 1, 0); 1253 1241 } 1254 1242 1255 - static void xe_device_wedged_fini(struct drm_device *drm, void *arg) 1256 - { 1257 - struct xe_device *xe = arg; 1258 - 1259 - xe_pm_runtime_put(xe); 1260 - } 1261 - 1262 1243 /** 1263 1244 * DOC: Xe Device Wedging 1264 1245 * ··· 1329 1324 return; 1330 1325 } 1331 1326 1332 - xe_pm_runtime_get_noresume(xe); 1333 - 1334 - if (drmm_add_action_or_reset(&xe->drm, xe_device_wedged_fini, xe)) { 1335 - drm_err(&xe->drm, "Failed to register xe_device_wedged_fini clean-up. Although device is wedged.\n"); 1336 - return; 1337 - } 1338 - 1339 1327 if (!atomic_xchg(&xe->wedged.flag, 1)) { 1340 1328 xe->needs_flr_on_fini = true; 1329 + xe_pm_runtime_get_noresume(xe); 1341 1330 drm_err(&xe->drm, 1342 1331 "CRITICAL: Xe has declared device %s as wedged.\n" 1343 1332 "IOCTLs and executions are blocked.\n"
+16 -7
drivers/gpu/drm/xe/xe_pxp.c
··· 380 380 return 0; 381 381 } 382 382 383 + /* 384 + * On PTL, older GSC FWs have a bug that can cause them to crash during 385 + * PXP invalidation events, which leads to a complete loss of power 386 + * management on the media GT. Therefore, we can't use PXP on FWs that 387 + * have this bug, which was fixed in PTL GSC build 1396. 388 + */ 389 + if (xe->info.platform == XE_PANTHERLAKE && 390 + gt->uc.gsc.fw.versions.found[XE_UC_FW_VER_RELEASE].build < 1396) { 391 + drm_info(&xe->drm, "PXP requires PTL GSC build 1396 or newer\n"); 392 + return 0; 393 + } 394 + 383 395 pxp = drmm_kzalloc(&xe->drm, sizeof(struct xe_pxp), GFP_KERNEL); 384 396 if (!pxp) { 385 397 err = -ENOMEM; ··· 524 512 static int pxp_start(struct xe_pxp *pxp, u8 type) 525 513 { 526 514 int ret = 0; 527 - bool restart = false; 515 + bool restart; 528 516 529 517 if (!xe_pxp_is_enabled(pxp)) 530 518 return -ENODEV; ··· 552 540 if (!wait_for_completion_timeout(&pxp->activation, 553 541 msecs_to_jiffies(PXP_ACTIVATION_TIMEOUT_MS))) 554 542 return -ETIMEDOUT; 543 + 544 + restart = false; 555 545 556 546 mutex_lock(&pxp->mutex); 557 547 ··· 597 583 drm_err(&pxp->xe->drm, "PXP termination failed before start\n"); 598 584 mutex_lock(&pxp->mutex); 599 585 pxp->status = XE_PXP_ERROR; 586 + complete_all(&pxp->termination); 600 587 601 588 goto out_unlock; 602 589 } ··· 885 870 pxp->key_instance++; 886 871 needs_queue_inval = true; 887 872 break; 888 - default: 889 - drm_err(&pxp->xe->drm, "unexpected state during PXP suspend: %u", 890 - pxp->status); 891 - ret = -EIO; 892 - goto out; 893 873 } 894 874 895 875 /* ··· 909 899 910 900 pxp->last_suspend_key_instance = pxp->key_instance; 911 901 912 - out: 913 902 return ret; 914 903 } 915 904
+1 -1
drivers/gpu/drm/xe/xe_svm.c
··· 931 931 void xe_svm_close(struct xe_vm *vm) 932 932 { 933 933 xe_assert(vm->xe, xe_vm_is_closed(vm)); 934 - flush_work(&vm->svm.garbage_collector.work); 934 + disable_work_sync(&vm->svm.garbage_collector.work); 935 935 xe_svm_put_pagemaps(vm); 936 936 drm_pagemap_release_owner(&vm->svm.peer); 937 937 }
+6 -1
drivers/hwmon/asus-ec-sensors.c
··· 111 111 ec_sensor_temp_mb, 112 112 /* "T_Sensor" temperature sensor reading [℃] */ 113 113 ec_sensor_temp_t_sensor, 114 + /* like ec_sensor_temp_t_sensor, but at an alternate address [℃] */ 115 + ec_sensor_temp_t_sensor_alt1, 114 116 /* VRM temperature [℃] */ 115 117 ec_sensor_temp_vrm, 116 118 /* VRM east (right) temperature [℃] */ ··· 162 160 #define SENSOR_TEMP_CPU_PACKAGE BIT(ec_sensor_temp_cpu_package) 163 161 #define SENSOR_TEMP_MB BIT(ec_sensor_temp_mb) 164 162 #define SENSOR_TEMP_T_SENSOR BIT(ec_sensor_temp_t_sensor) 163 + #define SENSOR_TEMP_T_SENSOR_ALT1 BIT(ec_sensor_temp_t_sensor_alt1) 165 164 #define SENSOR_TEMP_VRM BIT(ec_sensor_temp_vrm) 166 165 #define SENSOR_TEMP_VRME BIT(ec_sensor_temp_vrme) 167 166 #define SENSOR_TEMP_VRMW BIT(ec_sensor_temp_vrmw) ··· 282 279 EC_SENSOR("VRM", hwmon_temp, 1, 0x00, 0x33), 283 280 [ec_sensor_temp_t_sensor] = 284 281 EC_SENSOR("T_Sensor", hwmon_temp, 1, 0x00, 0x36), 282 + [ec_sensor_temp_t_sensor_alt1] = 283 + EC_SENSOR("T_Sensor", hwmon_temp, 1, 0x00, 0x37), 285 284 [ec_sensor_fan_cpu_opt] = 286 285 EC_SENSOR("CPU_Opt", hwmon_fan, 2, 0x00, 0xb0), 287 286 [ec_sensor_temp_water_in] = ··· 524 519 static const struct ec_board_info board_info_prime_x670e_pro_wifi = { 525 520 .sensors = SENSOR_TEMP_CPU | SENSOR_TEMP_CPU_PACKAGE | 526 521 SENSOR_TEMP_MB | SENSOR_TEMP_VRM | 527 - SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CPU_OPT, 522 + SENSOR_TEMP_T_SENSOR_ALT1 | SENSOR_FAN_CPU_OPT, 528 523 .mutex_path = ACPI_GLOBAL_LOCK_PSEUDO_PATH, 529 524 .family = family_amd_600_series, 530 525 };
+9 -10
drivers/hwmon/occ/common.c
··· 420 420 return sysfs_emit(buf, "%u\n", val); 421 421 } 422 422 423 + static u64 occ_get_powr_avg(u64 accum, u32 samples) 424 + { 425 + return (samples == 0) ? 0 : 426 + mul_u64_u32_div(accum, 1000000UL, samples); 427 + } 428 + 423 429 static ssize_t occ_show_power_1(struct device *dev, 424 430 struct device_attribute *attr, char *buf) 425 431 { ··· 447 441 val = get_unaligned_be16(&power->sensor_id); 448 442 break; 449 443 case 1: 450 - val = get_unaligned_be32(&power->accumulator) / 451 - get_unaligned_be32(&power->update_tag); 452 - val *= 1000000ULL; 444 + val = occ_get_powr_avg(get_unaligned_be32(&power->accumulator), 445 + get_unaligned_be32(&power->update_tag)); 453 446 break; 454 447 case 2: 455 448 val = (u64)get_unaligned_be32(&power->update_tag) * ··· 462 457 } 463 458 464 459 return sysfs_emit(buf, "%llu\n", val); 465 - } 466 - 467 - static u64 occ_get_powr_avg(u64 accum, u32 samples) 468 - { 469 - return (samples == 0) ? 0 : 470 - mul_u64_u32_div(accum, 1000000UL, samples); 471 460 } 472 461 473 462 static ssize_t occ_show_power_2(struct device *dev, ··· 724 725 switch (sattr->nr) { 725 726 case 0: 726 727 if (extn->flags & EXTN_FLAG_SENSOR_ID) { 727 - rc = sysfs_emit(buf, "%u", 728 + rc = sysfs_emit(buf, "%u\n", 728 729 get_unaligned_be32(&extn->sensor_id)); 729 730 } else { 730 731 rc = sysfs_emit(buf, "%4phN\n", extn->name);
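The occ helper now guards the zero-sample case (previously a straight divide by update_tag) and moves the microwatt scaling inside the division via mul_u64_u32_div(), which keeps the intermediate product from overflowing 64 bits. The arithmetic it performs is essentially the following sketch, here using a 128-bit intermediate where the kernel helper also works on 32-bit targets:

#include <stdint.h>

/* Average in microwatts: accum * 1000000 / samples, overflow-safe. */
static uint64_t power_avg_uw(uint64_t accum, uint32_t samples)
{
	if (samples == 0)
		return 0;   /* nothing accumulated yet: report 0, don't divide */
	return (uint64_t)((unsigned __int128)accum * 1000000u / samples);
}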
+1
drivers/hwmon/pmbus/ltc4286.c
··· 173 173 MODULE_AUTHOR("Delphine CC Chiu <Delphine_CC_Chiu@wiwynn.com>"); 174 174 MODULE_DESCRIPTION("PMBUS driver for LTC4286 and compatibles"); 175 175 MODULE_LICENSE("GPL"); 176 + MODULE_IMPORT_NS("PMBUS");
+4 -1
drivers/hwmon/pmbus/pxe1610.c
··· 104 104 * By default this device doesn't boot to page 0, so set page 0 105 105 * to access all pmbus registers. 106 106 */ 107 - i2c_smbus_write_byte_data(client, PMBUS_PAGE, 0); 107 + ret = i2c_smbus_write_byte_data(client, PMBUS_PAGE, 0); 108 + if (ret < 0) 109 + return dev_err_probe(&client->dev, ret, 110 + "Failed to set page 0\n"); 108 111 109 112 /* Read Manufacturer id */ 110 113 ret = i2c_smbus_read_block_data(client, PMBUS_MFR_ID, buf);
+5 -5
drivers/hwmon/pmbus/tps53679.c
··· 103 103 } 104 104 105 105 ret = i2c_smbus_read_block_data(client, PMBUS_IC_DEVICE_ID, buf); 106 - if (ret < 0) 107 - return ret; 106 + if (ret <= 0) 107 + return ret < 0 ? ret : -EIO; 108 108 109 - /* Adjust length if null terminator if present */ 109 + /* Adjust length if null terminator is present */ 110 110 buf_len = (buf[ret - 1] != '\x00' ? ret : ret - 1); 111 111 112 112 id_len = strlen(id); ··· 175 175 ret = i2c_smbus_read_block_data(client, PMBUS_IC_DEVICE_ID, buf); 176 176 if (ret < 0) 177 177 return ret; 178 - if (strncmp("TI\x53\x67\x60", buf, 5)) { 179 - dev_err(&client->dev, "Unexpected device ID: %s\n", buf); 178 + if (ret != 6 || memcmp(buf, "TI\x53\x67\x60\x00", 6)) { 179 + dev_err(&client->dev, "Unexpected device ID: %*ph\n", ret, buf); 180 180 return -ENODEV; 181 181 } 182 182
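Two lessons in the tps53679 hunk: an SMBus block read can succeed with 0 bytes, which the old `ret < 0` test accepted, and a device ID containing an embedded NUL ("TI\x53\x67\x60\x00") cannot be compared with str* functions, which stop at the first NUL. A stand-alone sketch of the safe check (the expected array is illustrative):

#include <string.h>
#include <errno.h>

static int check_device_id(const unsigned char *buf, int len)
{
	static const unsigned char expected[] = "TI\x53\x67\x60"; /* 6 bytes incl. NUL */

	if (len <= 0)
		return len < 0 ? len : -EIO;    /* a 0-byte reply is also an error */
	if (len != sizeof(expected) || memcmp(buf, expected, sizeof(expected)))
		return -ENODEV;                 /* memcmp compares past embedded NULs */
	return 0;
}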
+2
drivers/iio/accel/adxl313_core.c
··· 998 998 999 999 ret = regmap_write(data->regmap, ADXL313_REG_FIFO_CTL, 1000 1000 FIELD_PREP(ADXL313_REG_FIFO_CTL_MODE_MSK, ADXL313_FIFO_BYPASS)); 1001 + if (ret) 1002 + return ret; 1001 1003 1002 1004 ret = regmap_write(data->regmap, ADXL313_REG_INT_ENABLE, 0); 1003 1005 if (ret)
+1 -1
drivers/iio/accel/adxl355_core.c
··· 745 745 BIT(IIO_CHAN_INFO_OFFSET), 746 746 .scan_index = 3, 747 747 .scan_type = { 748 - .sign = 's', 748 + .sign = 'u', 749 749 .realbits = 12, 750 750 .storagebits = 16, 751 751 .endianness = IIO_BE,
+1 -1
drivers/iio/accel/adxl380.c
··· 877 877 ret = regmap_update_bits(st->regmap, ADXL380_FIFO_CONFIG_0_REG, 878 878 ADXL380_FIFO_SAMPLES_8_MSK, 879 879 FIELD_PREP(ADXL380_FIFO_SAMPLES_8_MSK, 880 - (fifo_samples & BIT(8)))); 880 + !!(fifo_samples & BIT(8)))); 881 881 if (ret) 882 882 return ret; 883 883
+3 -5
drivers/iio/adc/ad4062.c
··· 719 719 } 720 720 st->gpo_irq[1] = true; 721 721 722 - return devm_request_threaded_irq(dev, ret, 723 - ad4062_irq_handler_drdy, 724 - NULL, IRQF_ONESHOT, indio_dev->name, 725 - indio_dev); 722 + return devm_request_irq(dev, ret, ad4062_irq_handler_drdy, 723 + IRQF_NO_THREAD, indio_dev->name, indio_dev); 726 724 } 727 725 728 726 static const struct iio_trigger_ops ad4062_trigger_ops = { ··· 953 955 default: 954 956 return -EINVAL; 955 957 } 956 - }; 958 + } 957 959 958 960 static int ad4062_write_raw(struct iio_dev *indio_dev, 959 961 struct iio_chan_spec const *chan, int val,
+6 -6
drivers/iio/adc/ade9000.c
··· 787 787 ADE9000_MIDDLE_PAGE_BIT); 788 788 if (ret) { 789 789 dev_err_ratelimited(dev, "IRQ0 WFB write fail"); 790 - return IRQ_HANDLED; 790 + return ret; 791 791 } 792 792 793 793 ade9000_configure_scan(indio_dev, ADE9000_REG_WF_BUFF); ··· 1123 1123 tmp &= ~ADE9000_PHASE_C_POS_BIT; 1124 1124 1125 1125 switch (tmp) { 1126 - case ADE9000_REG_AWATTOS: 1126 + case ADE9000_REG_AWATT: 1127 1127 return regmap_write(st->regmap, 1128 1128 ADE9000_ADDR_ADJUST(ADE9000_REG_AWATTOS, 1129 1129 chan->channel), val); ··· 1706 1706 1707 1707 init_completion(&st->reset_completion); 1708 1708 1709 + ret = devm_mutex_init(dev, &st->lock); 1710 + if (ret) 1711 + return ret; 1712 + 1709 1713 ret = ade9000_request_irq(dev, "irq0", ade9000_irq0_thread, indio_dev); 1710 1714 if (ret) 1711 1715 return ret; ··· 1719 1715 return ret; 1720 1716 1721 1717 ret = ade9000_request_irq(dev, "dready", ade9000_dready_thread, indio_dev); 1722 - if (ret) 1723 - return ret; 1724 - 1725 - ret = devm_mutex_init(dev, &st->lock); 1726 1718 if (ret) 1727 1719 return ret; 1728 1720
+1
drivers/iio/adc/aspeed_adc.c
··· 415 415 } 416 416 adc_engine_control_reg_val = 417 417 readl(data->base + ASPEED_REG_ENGINE_CONTROL); 418 + adc_engine_control_reg_val &= ~ASPEED_ADC_REF_VOLTAGE; 418 419 419 420 ret = devm_regulator_get_enable_read_voltage(data->dev, "vref"); 420 421 if (ret < 0 && ret != -ENODEV)
+5 -4
drivers/iio/adc/nxp-sar-adc.c
··· 718 718 struct nxp_sar_adc *info = iio_priv(indio_dev); 719 719 int ret; 720 720 721 + info->dma_chan = dma_request_chan(indio_dev->dev.parent, "rx"); 722 + if (IS_ERR(info->dma_chan)) 723 + return PTR_ERR(info->dma_chan); 724 + 721 725 nxp_sar_adc_dma_channels_enable(info, *indio_dev->active_scan_mask); 722 726 723 727 nxp_sar_adc_dma_cfg(info, true); ··· 742 738 out_dma_channels_disable: 743 739 nxp_sar_adc_dma_cfg(info, false); 744 740 nxp_sar_adc_dma_channels_disable(info, *indio_dev->active_scan_mask); 741 + dma_release_channel(info->dma_chan); 745 742 746 743 return ret; 747 744 } ··· 769 764 int current_mode = iio_device_get_current_mode(indio_dev); 770 765 unsigned long channel; 771 766 int ret; 772 - 773 - info->dma_chan = dma_request_chan(indio_dev->dev.parent, "rx"); 774 - if (IS_ERR(info->dma_chan)) 775 - return PTR_ERR(info->dma_chan); 776 767 777 768 info->channels_used = 0; 778 769
+20 -21
drivers/iio/adc/ti-adc161s626.c
··· 15 15 #include <linux/init.h> 16 16 #include <linux/err.h> 17 17 #include <linux/spi/spi.h> 18 + #include <linux/unaligned.h> 18 19 #include <linux/iio/iio.h> 19 20 #include <linux/iio/trigger.h> 20 21 #include <linux/iio/buffer.h> ··· 71 70 72 71 u8 read_size; 73 72 u8 shift; 74 - 75 - u8 buffer[16] __aligned(IIO_DMA_MINALIGN); 73 + u8 buf[3] __aligned(IIO_DMA_MINALIGN); 76 74 }; 77 75 78 76 static int ti_adc_read_measurement(struct ti_adc_data *data, ··· 80 80 int ret; 81 81 82 82 switch (data->read_size) { 83 - case 2: { 84 - __be16 buf; 85 - 86 - ret = spi_read(data->spi, (void *) &buf, 2); 83 + case 2: 84 + ret = spi_read(data->spi, data->buf, 2); 87 85 if (ret) 88 86 return ret; 89 87 90 - *val = be16_to_cpu(buf); 88 + *val = get_unaligned_be16(data->buf); 91 89 break; 92 - } 93 - case 3: { 94 - __be32 buf; 95 - 96 - ret = spi_read(data->spi, (void *) &buf, 3); 90 + case 3: 91 + ret = spi_read(data->spi, data->buf, 3); 97 92 if (ret) 98 93 return ret; 99 94 100 - *val = be32_to_cpu(buf) >> 8; 95 + *val = get_unaligned_be24(data->buf); 101 96 break; 102 - } 103 97 default: 104 98 return -EINVAL; 105 99 } ··· 108 114 struct iio_poll_func *pf = private; 109 115 struct iio_dev *indio_dev = pf->indio_dev; 110 116 struct ti_adc_data *data = iio_priv(indio_dev); 111 - int ret; 117 + struct { 118 + s16 data; 119 + aligned_s64 timestamp; 120 + } scan = { }; 121 + int ret, val; 112 122 113 - ret = ti_adc_read_measurement(data, &indio_dev->channels[0], 114 - (int *) &data->buffer); 115 - if (!ret) 116 - iio_push_to_buffers_with_timestamp(indio_dev, 117 - data->buffer, 118 - iio_get_time_ns(indio_dev)); 123 + ret = ti_adc_read_measurement(data, &indio_dev->channels[0], &val); 124 + if (ret) 125 + goto exit_notify_done; 119 126 127 + scan.data = val; 128 + iio_push_to_buffers_with_timestamp(indio_dev, &scan, iio_get_time_ns(indio_dev)); 129 + 130 + exit_notify_done: 120 131 iio_trigger_notify_done(indio_dev->trig); 121 132 122 133 return IRQ_HANDLED;
+1 -1
drivers/iio/adc/ti-ads1018.c
··· 249 249 struct iio_chan_spec const *chan, u16 *cnv) 250 250 { 251 251 u8 max_drate_mode = ads1018->chip_info->num_data_rate_mode_to_hz - 1; 252 - u8 drate = ads1018->chip_info->data_rate_mode_to_hz[max_drate_mode]; 252 + u32 drate = ads1018->chip_info->data_rate_mode_to_hz[max_drate_mode]; 253 253 u8 pga_mode = ads1018->chan_data[chan->scan_index].pga_mode; 254 254 struct spi_transfer xfer[2] = { 255 255 {
+6 -5
drivers/iio/adc/ti-ads1119.c
··· 274 274 275 275 ret = pm_runtime_resume_and_get(dev); 276 276 if (ret) 277 - goto pdown; 277 + return ret; 278 278 279 279 ret = ads1119_configure_channel(st, mux, gain, datarate); 280 280 if (ret) 281 281 goto pdown; 282 + 283 + if (st->client->irq) 284 + reinit_completion(&st->completion); 282 285 283 286 ret = i2c_smbus_write_byte(st->client, ADS1119_CMD_START_SYNC); 284 287 if (ret) ··· 738 735 return dev_err_probe(dev, ret, "Failed to setup IIO buffer\n"); 739 736 740 737 if (client->irq > 0) { 741 - ret = devm_request_threaded_irq(dev, client->irq, 742 - ads1119_irq_handler, 743 - NULL, IRQF_ONESHOT, 744 - "ads1119", indio_dev); 738 + ret = devm_request_irq(dev, client->irq, ads1119_irq_handler, 739 + IRQF_NO_THREAD, "ads1119", indio_dev); 745 740 if (ret) 746 741 return dev_err_probe(dev, ret, 747 742 "Failed to allocate irq\n");
+5 -3
drivers/iio/adc/ti-ads7950.c
··· 427 427 static int ti_ads7950_get(struct gpio_chip *chip, unsigned int offset) 428 428 { 429 429 struct ti_ads7950_state *st = gpiochip_get_data(chip); 430 + bool state; 430 431 int ret; 431 432 432 433 mutex_lock(&st->slock); 433 434 434 435 /* If set as output, return the output */ 435 436 if (st->gpio_cmd_settings_bitmask & BIT(offset)) { 436 - ret = st->cmd_settings_bitmask & BIT(offset); 437 + state = st->cmd_settings_bitmask & BIT(offset); 438 + ret = 0; 437 439 goto out; 438 440 } 439 441 ··· 446 444 if (ret) 447 445 goto out; 448 446 449 - ret = ((st->single_rx >> 12) & BIT(offset)) ? 1 : 0; 447 + state = (st->single_rx >> 12) & BIT(offset); 450 448 451 449 /* Revert back to original settings */ 452 450 st->cmd_settings_bitmask &= ~TI_ADS7950_CR_GPIO_DATA; ··· 458 456 out: 459 457 mutex_unlock(&st->slock); 460 458 461 - return ret; 459 + return ret ?: state; 462 460 } 463 461 464 462 static int ti_ads7950_get_direction(struct gpio_chip *chip,
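The ads7950 .get fix separates concerns: ret now carries only 0 or a negative errno, state carries the pin level, and the closing `return ret ?: state;` (the GNU conditional with omitted middle operand) returns the level only on success. In portable C that final line reads:

/* `ret ?: state` is GNU C for `ret ? ret : state`, evaluating ret once. */
static int gpio_get_result(int ret, int state)
{
	return ret ? ret : state;   /* negative errno wins; otherwise the level */
}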
+30 -18
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
··· 14 14 #include <linux/iio/triggered_buffer.h> 15 15 #include <linux/iio/trigger_consumer.h> 16 16 #include <linux/iio/sysfs.h> 17 + #include <linux/iio/kfifo_buf.h> 17 18 #include "hid-sensor-trigger.h" 18 19 19 20 static ssize_t _hid_sensor_set_report_latency(struct device *dev, ··· 203 202 _hid_sensor_power_state(attrb, true); 204 203 } 205 204 206 - static int hid_sensor_data_rdy_trigger_set_state(struct iio_trigger *trig, 207 - bool state) 205 + static int buffer_postenable(struct iio_dev *indio_dev) 208 206 { 209 - return hid_sensor_power_state(iio_trigger_get_drvdata(trig), state); 207 + return hid_sensor_power_state(iio_device_get_drvdata(indio_dev), 1); 210 208 } 209 + 210 + static int buffer_predisable(struct iio_dev *indio_dev) 211 + { 212 + return hid_sensor_power_state(iio_device_get_drvdata(indio_dev), 0); 213 + } 214 + 215 + static const struct iio_buffer_setup_ops hid_sensor_buffer_ops = { 216 + .postenable = buffer_postenable, 217 + .predisable = buffer_predisable, 218 + }; 211 219 212 220 void hid_sensor_remove_trigger(struct iio_dev *indio_dev, 213 221 struct hid_sensor_common *attrb) ··· 229 219 cancel_work_sync(&attrb->work); 230 220 iio_trigger_unregister(attrb->trigger); 231 221 iio_trigger_free(attrb->trigger); 232 - iio_triggered_buffer_cleanup(indio_dev); 233 222 } 234 223 EXPORT_SYMBOL_NS(hid_sensor_remove_trigger, "IIO_HID"); 235 - 236 - static const struct iio_trigger_ops hid_sensor_trigger_ops = { 237 - .set_trigger_state = &hid_sensor_data_rdy_trigger_set_state, 238 - }; 239 224 240 225 int hid_sensor_setup_trigger(struct iio_dev *indio_dev, const char *name, 241 226 struct hid_sensor_common *attrb) ··· 244 239 else 245 240 fifo_attrs = NULL; 246 241 247 - ret = iio_triggered_buffer_setup_ext(indio_dev, 248 - &iio_pollfunc_store_time, NULL, 249 - IIO_BUFFER_DIRECTION_IN, 250 - NULL, fifo_attrs); 242 + indio_dev->modes = INDIO_DIRECT_MODE | INDIO_HARDWARE_TRIGGERED; 243 + 244 + ret = devm_iio_kfifo_buffer_setup_ext(&indio_dev->dev, indio_dev, 245 + &hid_sensor_buffer_ops, 246 + fifo_attrs); 251 247 if (ret) { 252 - dev_err(&indio_dev->dev, "Triggered Buffer Setup Failed\n"); 248 + dev_err(&indio_dev->dev, "Kfifo Buffer Setup Failed\n"); 253 249 return ret; 254 250 } 251 + 252 + /* 253 + * The current user space in distro "iio-sensor-proxy" is not working in 254 + * triggerless mode and it expects 255 + * /sys/bus/iio/devices/iio:device0/trigger/current_trigger. 256 + * The change replacing iio_triggered_buffer_setup_ext() with 257 + * devm_iio_kfifo_buffer_setup_ext() will not create the attribute without 258 + * registering a trigger with INDIO_HARDWARE_TRIGGERED. 259 + * So the below code fragment is still required. 260 + */ 255 261 256 262 trig = iio_trigger_alloc(indio_dev->dev.parent, 257 263 "%s-dev%d", name, iio_device_id(indio_dev)); 258 264 if (trig == NULL) { 259 265 dev_err(&indio_dev->dev, "Trigger Allocate Failed\n"); 260 - ret = -ENOMEM; 261 - goto error_triggered_buffer_cleanup; 266 + return -ENOMEM; 262 267 } 263 268 264 269 iio_trigger_set_drvdata(trig, attrb); 265 - trig->ops = &hid_sensor_trigger_ops; 266 270 ret = iio_trigger_register(trig); 267 271 268 272 if (ret) { ··· 298 284 iio_trigger_unregister(trig); 299 285 error_free_trig: 300 286 iio_trigger_free(trig); 301 - error_triggered_buffer_cleanup: 302 - iio_triggered_buffer_cleanup(indio_dev); 303 287 return ret; 304 288 } 305 289 EXPORT_SYMBOL_NS(hid_sensor_setup_trigger, "IIO_HID");
+1 -1
drivers/iio/dac/ad5770r.c
··· 322 322 chan->address, 323 323 st->transf_buf, 2); 324 324 if (ret) 325 - return 0; 325 + return ret; 326 326 327 327 buf16 = get_unaligned_le16(st->transf_buf); 328 328 *val = buf16 >> 2;
+22 -29
drivers/iio/dac/mcp47feb02.c
··· 65 65 #define MCP47FEB02_MAX_SCALES_CH 3 66 66 #define MCP47FEB02_DAC_WIPER_UNLOCKED 0 67 67 #define MCP47FEB02_NORMAL_OPERATION 0 68 - #define MCP47FEB02_INTERNAL_BAND_GAP_mV 2440 68 + #define MCP47FEB02_INTERNAL_BAND_GAP_uV 2440000 69 69 #define NV_DAC_ADDR_OFFSET 0x10 70 70 71 71 enum mcp47feb02_vref_mode { ··· 697 697 }; 698 698 699 699 static void mcp47feb02_init_scale(struct mcp47feb02_data *data, enum mcp47feb02_scale scale, 700 - int vref_mV, int scale_avail[]) 700 + int vref_uV, int scale_avail[]) 701 701 { 702 702 u32 value_micro, value_int; 703 703 u64 tmp; 704 704 705 - /* vref_mV should not be negative */ 706 - tmp = (u64)vref_mV * MICRO >> data->chip_features->resolution; 705 + /* vref_uV should not be negative */ 706 + tmp = (u64)vref_uV * MILLI >> data->chip_features->resolution; 707 707 value_int = div_u64_rem(tmp, MICRO, &value_micro); 708 708 scale_avail[scale * 2] = value_int; 709 709 scale_avail[scale * 2 + 1] = value_micro; 710 710 } 711 711 712 - static int mcp47feb02_init_scales_avail(struct mcp47feb02_data *data, int vdd_mV, 713 - int vref_mV, int vref1_mV) 712 + static int mcp47feb02_init_scales_avail(struct mcp47feb02_data *data, int vdd_uV, 713 + int vref_uV, int vref1_uV) 714 714 { 715 - struct device *dev = regmap_get_device(data->regmap); 716 715 int tmp_vref; 717 716 718 - mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_mV, data->scale); 717 + mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_uV, data->scale); 719 718 720 719 if (data->use_vref) 721 - tmp_vref = vref_mV; 720 + tmp_vref = vref_uV; 722 721 else 723 - tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_mV; 722 + tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_uV; 724 723 725 724 mcp47feb02_init_scale(data, MCP47FEB02_SCALE_GAIN_X1, tmp_vref, data->scale); 726 725 mcp47feb02_init_scale(data, MCP47FEB02_SCALE_GAIN_X2, tmp_vref * 2, data->scale); 727 726 728 727 if (data->phys_channels >= 4) { 729 - mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_mV, data->scale_1); 730 - 731 - if (data->use_vref1 && vref1_mV <= 0) 732 - return dev_err_probe(dev, vref1_mV, "Invalid voltage for Vref1\n"); 728 + mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_uV, data->scale_1); 733 729 734 730 if (data->use_vref1) 735 - tmp_vref = vref1_mV; 731 + tmp_vref = vref1_uV; 736 732 else 737 - tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_mV; 733 + tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_uV; 738 734 739 735 mcp47feb02_init_scale(data, MCP47FEB02_SCALE_GAIN_X1, 740 736 tmp_vref, data->scale_1); ··· 951 955 u32 num_channels; 952 956 u8 chan_idx = 0; 953 957 954 - guard(mutex)(&data->lock); 955 - 956 958 num_channels = device_get_child_node_count(dev); 957 959 if (num_channels > chip_features->phys_channels) 958 960 return dev_err_probe(dev, -EINVAL, "More channels than the chip supports\n"); ··· 1074 1080 return 0; 1075 1081 } 1076 1082 1077 - static int mcp47feb02_init_ch_scales(struct mcp47feb02_data *data, int vdd_mV, 1078 - int vref_mV, int vref1_mV) 1083 + static int mcp47feb02_init_ch_scales(struct mcp47feb02_data *data, int vdd_uV, 1084 + int vref_uV, int vref1_uV) 1079 1085 { 1080 1086 unsigned int i; 1081 1087 ··· 1083 1089 struct device *dev = regmap_get_device(data->regmap); 1084 1090 int ret; 1085 1091 1086 - ret = mcp47feb02_init_scales_avail(data, vdd_mV, vref_mV, vref1_mV); 1092 + ret = mcp47feb02_init_scales_avail(data, vdd_uV, vref_uV, vref1_uV); 1087 1093 if (ret) 1088 1094 return dev_err_probe(dev, ret, "failed to init scales for ch %u\n", i); 1089 1095 } ··· 1097 1103 struct device *dev = &client->dev; 
1098 1104 struct mcp47feb02_data *data; 1099 1105 struct iio_dev *indio_dev; 1100 - int vref1_mV = 0; 1101 - int vref_mV = 0; 1102 - int vdd_mV; 1103 - int ret; 1106 + int vref1_uV, vref_uV, vdd_uV, ret; 1104 1107 1105 1108 indio_dev = devm_iio_device_alloc(dev, sizeof(*data)); 1106 1109 if (!indio_dev) ··· 1134 1143 if (ret < 0) 1135 1144 return ret; 1136 1145 1137 - vdd_mV = ret / MILLI; 1146 + vdd_uV = ret; 1138 1147 1139 1148 ret = devm_regulator_get_enable_read_voltage(dev, "vref"); 1140 1149 if (ret > 0) { 1141 - vref_mV = ret / MILLI; 1150 + vref_uV = ret; 1142 1151 data->use_vref = true; 1143 1152 } else { 1153 + vref_uV = 0; 1144 1154 dev_dbg(dev, "using internal band gap as voltage reference.\n"); 1145 1155 dev_dbg(dev, "Vref is unavailable.\n"); 1146 1156 } ··· 1149 1157 if (chip_features->have_ext_vref1) { 1150 1158 ret = devm_regulator_get_enable_read_voltage(dev, "vref1"); 1151 1159 if (ret > 0) { 1152 - vref1_mV = ret / MILLI; 1160 + vref1_uV = ret; 1153 1161 data->use_vref1 = true; 1154 1162 } else { 1163 + vref1_uV = 0; 1155 1164 dev_dbg(dev, "using internal band gap as voltage reference 1.\n"); 1156 1165 dev_dbg(dev, "Vref1 is unavailable.\n"); 1157 1166 } ··· 1162 1169 if (ret) 1163 1170 return dev_err_probe(dev, ret, "Error initialising vref register\n"); 1164 1171 1165 - ret = mcp47feb02_init_ch_scales(data, vdd_mV, vref_mV, vref1_mV); 1172 + ret = mcp47feb02_init_ch_scales(data, vdd_uV, vref_uV, vref1_uV); 1166 1173 if (ret) 1167 1174 return ret; 1168 1175
+21 -11
drivers/iio/gyro/mpu3050-core.c
··· 1129 1129 1130 1130 ret = iio_trigger_register(mpu3050->trig); 1131 1131 if (ret) 1132 - return ret; 1132 + goto err_iio_trigger; 1133 1133 1134 1134 indio_dev->trig = iio_trigger_get(mpu3050->trig); 1135 1135 1136 1136 return 0; 1137 + 1138 + err_iio_trigger: 1139 + free_irq(mpu3050->irq, mpu3050->trig); 1140 + 1141 + return ret; 1137 1142 } 1138 1143 1139 1144 int mpu3050_common_probe(struct device *dev, ··· 1226 1221 goto err_power_down; 1227 1222 } 1228 1223 1229 - ret = iio_device_register(indio_dev); 1230 - if (ret) { 1231 - dev_err(dev, "device register failed\n"); 1232 - goto err_cleanup_buffer; 1233 - } 1234 - 1235 1224 dev_set_drvdata(dev, indio_dev); 1236 1225 1237 1226 /* Check if we have an assigned IRQ to use as trigger */ ··· 1248 1249 pm_runtime_use_autosuspend(dev); 1249 1250 pm_runtime_put(dev); 1250 1251 1252 + ret = iio_device_register(indio_dev); 1253 + if (ret) { 1254 + dev_err(dev, "device register failed\n"); 1255 + goto err_iio_device_register; 1256 + } 1257 + 1251 1258 return 0; 1252 1259 1253 - err_cleanup_buffer: 1260 + err_iio_device_register: 1261 + pm_runtime_get_sync(dev); 1262 + pm_runtime_put_noidle(dev); 1263 + pm_runtime_disable(dev); 1264 + if (irq) 1265 + free_irq(mpu3050->irq, mpu3050->trig); 1254 1266 iio_triggered_buffer_cleanup(indio_dev); 1255 1267 err_power_down: 1256 1268 mpu3050_power_down(mpu3050); ··· 1274 1264 struct iio_dev *indio_dev = dev_get_drvdata(dev); 1275 1265 struct mpu3050 *mpu3050 = iio_priv(indio_dev); 1276 1266 1267 + iio_device_unregister(indio_dev); 1277 1268 pm_runtime_get_sync(dev); 1278 1269 pm_runtime_put_noidle(dev); 1279 1270 pm_runtime_disable(dev); 1280 - iio_triggered_buffer_cleanup(indio_dev); 1281 1271 if (mpu3050->irq) 1282 - free_irq(mpu3050->irq, mpu3050); 1283 - iio_device_unregister(indio_dev); 1272 + free_irq(mpu3050->irq, mpu3050->trig); 1273 + iio_triggered_buffer_cleanup(indio_dev); 1284 1274 mpu3050_power_down(mpu3050); 1285 1275 } 1286 1276
+4 -4
drivers/iio/imu/adis16550.c
··· 643 643 case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY: 644 644 switch (chan->type) { 645 645 case IIO_ANGL_VEL: 646 - ret = adis16550_get_accl_filter_freq(st, val); 646 + ret = adis16550_get_gyro_filter_freq(st, val); 647 647 if (ret) 648 648 return ret; 649 649 return IIO_VAL_INT; 650 650 case IIO_ACCEL: 651 - ret = adis16550_get_gyro_filter_freq(st, val); 651 + ret = adis16550_get_accl_filter_freq(st, val); 652 652 if (ret) 653 653 return ret; 654 654 return IIO_VAL_INT; ··· 681 681 case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY: 682 682 switch (chan->type) { 683 683 case IIO_ANGL_VEL: 684 - return adis16550_set_accl_filter_freq(st, val); 685 - case IIO_ACCEL: 686 684 return adis16550_set_gyro_filter_freq(st, val); 685 + case IIO_ACCEL: 686 + return adis16550_set_accl_filter_freq(st, val); 687 687 default: 688 688 return -EINVAL; 689 689 }
+5 -10
drivers/iio/imu/bmi160/bmi160_core.c
··· 573 573 int_out_ctrl_shift = BMI160_INT1_OUT_CTRL_SHIFT; 574 574 int_latch_mask = BMI160_INT1_LATCH_MASK; 575 575 int_map_mask = BMI160_INT1_MAP_DRDY_EN; 576 + pin_name = "INT1"; 576 577 break; 577 578 case BMI160_PIN_INT2: 578 579 int_out_ctrl_shift = BMI160_INT2_OUT_CTRL_SHIFT; 579 580 int_latch_mask = BMI160_INT2_LATCH_MASK; 580 581 int_map_mask = BMI160_INT2_MAP_DRDY_EN; 582 + pin_name = "INT2"; 581 583 break; 584 + default: 585 + return -EINVAL; 582 586 } 583 587 int_out_ctrl_mask = BMI160_INT_OUT_CTRL_MASK << int_out_ctrl_shift; 584 588 ··· 616 612 ret = bmi160_write_conf_reg(regmap, BMI160_REG_INT_MAP, 617 613 int_map_mask, int_map_mask, 618 614 write_usleep); 619 - if (ret) { 620 - switch (pin) { 621 - case BMI160_PIN_INT1: 622 - pin_name = "INT1"; 623 - break; 624 - case BMI160_PIN_INT2: 625 - pin_name = "INT2"; 626 - break; 627 - } 615 + if (ret) 628 616 dev_err(dev, "Failed to configure %s IRQ pin", pin_name); 629 - } 630 617 631 618 return ret; 632 619 }
+1 -1
drivers/iio/imu/bno055/bno055.c
··· 64 64 #define BNO055_GRAVITY_DATA_X_LSB_REG 0x2E 65 65 #define BNO055_GRAVITY_DATA_Y_LSB_REG 0x30 66 66 #define BNO055_GRAVITY_DATA_Z_LSB_REG 0x32 67 - #define BNO055_SCAN_CH_COUNT ((BNO055_GRAVITY_DATA_Z_LSB_REG - BNO055_ACC_DATA_X_LSB_REG) / 2) 67 + #define BNO055_SCAN_CH_COUNT ((BNO055_GRAVITY_DATA_Z_LSB_REG - BNO055_ACC_DATA_X_LSB_REG) / 2 + 1) 68 68 #define BNO055_TEMP_REG 0x34 69 69 #define BNO055_CALIB_STAT_REG 0x35 70 70 #define BNO055_CALIB_STAT_MAGN_SHIFT 0
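The bno055 change is a classic fence-post fix: dividing the distance between the first and last 16-bit register by two counts the gaps, not the registers, so the old macro was one channel short. A quick check, assuming BNO055_ACC_DATA_X_LSB_REG is 0x08 as in the Bosch datasheet:

#include <linux/build_bug.h>

/* 0x08..0x32 inclusive, one channel per 2 bytes: 22 channels, not 21. */
static_assert((0x32 - 0x08) / 2 + 1 == 22);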
+14 -1
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
··· 225 225 const struct st_lsm6dsx_reg *batch_reg; 226 226 u8 data; 227 227 228 + /* Only internal sensors have a FIFO ODR configuration register. */ 229 + if (sensor->id >= ARRAY_SIZE(hw->settings->batch)) 230 + return 0; 231 + 228 232 batch_reg = &hw->settings->batch[sensor->id]; 229 233 if (batch_reg->addr) { 230 234 int val; ··· 862 858 int i, ret; 863 859 864 860 for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { 861 + const struct iio_dev_attr **attrs; 862 + 865 863 if (!hw->iio_devs[i]) 866 864 continue; 867 865 866 + /* 867 + * For the accelerometer, allow setting FIFO sampling frequency 868 + * values different from the sensor sampling frequency, which 869 + * may be needed to keep FIFO data rate low while sampling 870 + * acceleration data at high rates for accurate event detection. 871 + */ 872 + attrs = i == ST_LSM6DSX_ID_ACC ? st_lsm6dsx_buffer_attrs : NULL; 868 873 ret = devm_iio_kfifo_buffer_setup_ext(hw->dev, hw->iio_devs[i], 869 874 &st_lsm6dsx_buffer_ops, 870 - st_lsm6dsx_buffer_attrs); 875 + attrs); 871 876 if (ret) 872 877 return ret; 873 878 }
+12 -6
drivers/iio/light/vcnl4035.c
··· 103 103 struct iio_dev *indio_dev = pf->indio_dev; 104 104 struct vcnl4035_data *data = iio_priv(indio_dev); 105 105 /* Ensure naturally aligned timestamp */ 106 - u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8) = { }; 106 + struct { 107 + u16 als_data; 108 + aligned_s64 timestamp; 109 + } buffer = { }; 110 + unsigned int val; 107 111 int ret; 108 112 109 - ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer); 113 + ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, &val); 110 114 if (ret < 0) { 111 115 dev_err(&data->client->dev, 112 116 "Trigger consumer can't read from sensor.\n"); 113 117 goto fail_read; 114 118 } 115 - iio_push_to_buffers_with_timestamp(indio_dev, buffer, 116 - iio_get_time_ns(indio_dev)); 119 + 120 + buffer.als_data = val; 121 + iio_push_to_buffers_with_timestamp(indio_dev, &buffer, 122 + iio_get_time_ns(indio_dev)); 117 123 118 124 fail_read: 119 125 iio_trigger_notify_done(indio_dev->trig); ··· 387 381 .sign = 'u', 388 382 .realbits = 16, 389 383 .storagebits = 16, 390 - .endianness = IIO_LE, 384 + .endianness = IIO_CPU, 391 385 }, 392 386 }, 393 387 { ··· 401 395 .sign = 'u', 402 396 .realbits = 16, 403 397 .storagebits = 16, 404 - .endianness = IIO_LE, 398 + .endianness = IIO_CPU, 405 399 }, 406 400 }, 407 401 };
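Replacing the hand-aligned byte array with a struct is the idiomatic way to build an IIO scan buffer: the samples come first, aligned_s64 guarantees the naturally aligned 64-bit timestamp the core expects, and regmap_read() gets a proper unsigned int destination instead of a cast. A hedged sketch of the general pattern, with hypothetical bar_* names:

#include <linux/iio/iio.h>
#include <linux/types.h>

struct bar_scan {
	u16 sample;		/* one 16-bit channel */
	aligned_s64 timestamp;	/* 8-byte aligned, as the core requires */
};

static void bar_push(struct iio_dev *indio_dev, u16 sample)
{
	struct bar_scan scan = { .sample = sample };

	iio_push_to_buffers_with_timestamp(indio_dev, &scan,
					   iio_get_time_ns(indio_dev));
}

The endianness switch to IIO_CPU in the same hunk follows from this: regmap hands back host-endian values, so the buffer now carries CPU-endian data rather than raw little-endian register bytes.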
+1 -3
drivers/iio/light/veml6070.c
··· 134 134 if (ret < 0) 135 135 return ret; 136 136 137 - ret = (msb << 8) | lsb; 138 - 139 - return 0; 137 + return (msb << 8) | lsb; 140 138 } 141 139 142 140 static const struct iio_chan_spec veml6070_channels[] = {
+20 -4
drivers/iio/orientation/hid-sensor-rotation.c
··· 19 19 struct hid_sensor_common common_attributes; 20 20 struct hid_sensor_hub_attribute_info quaternion; 21 21 struct { 22 - s32 sampled_vals[4]; 23 - aligned_s64 timestamp; 22 + IIO_DECLARE_QUATERNION(s32, sampled_vals); 23 + /* 24 + * ABI regression avoidance: There are two copies of the same 25 + * timestamp in case of userspace depending on broken alignment 26 + * from older kernels. 27 + */ 28 + aligned_s64 timestamp[2]; 24 29 } scan; 25 30 int scale_pre_decml; 26 31 int scale_post_decml; ··· 159 154 if (!rot_state->timestamp) 160 155 rot_state->timestamp = iio_get_time_ns(indio_dev); 161 156 162 - iio_push_to_buffers_with_timestamp(indio_dev, &rot_state->scan, 163 - rot_state->timestamp); 157 + /* 158 + * ABI regression avoidance: IIO previously had an incorrect 159 + * implementation of iio_push_to_buffers_with_timestamp() that 160 + * put the timestamp in the last 8 bytes of the buffer, which 161 + * was incorrect according to the IIO ABI. To avoid breaking 162 + * userspace that may be depending on this broken behavior, we 163 + * put the timestamp in both the correct place [0] and the old 164 + * incorrect place [1]. 165 + */ 166 + rot_state->scan.timestamp[0] = rot_state->timestamp; 167 + rot_state->scan.timestamp[1] = rot_state->timestamp; 168 + 169 + iio_push_to_buffers(indio_dev, &rot_state->scan); 164 170 165 171 rot_state->timestamp = 0; 166 172 }
+1 -1
drivers/iio/pressure/abp2030pa.c
··· 520 520 data->p_offset = div_s64(odelta * data->pmin, pdelta) - data->outmin; 521 521 522 522 if (data->irq > 0) { 523 - ret = devm_request_irq(dev, irq, abp2_eoc_handler, IRQF_ONESHOT, 523 + ret = devm_request_irq(dev, irq, abp2_eoc_handler, 0, 524 524 dev_name(dev), data); 525 525 if (ret) 526 526 return ret;
+4 -3
drivers/iio/proximity/rfd77402.c
··· 173 173 struct i2c_client *client = data->client; 174 174 int val, ret; 175 175 176 - if (data->irq_en) { 177 - reinit_completion(&data->completion); 176 + if (data->irq_en) 178 177 return rfd77402_wait_for_irq(data); 179 - } 180 178 181 179 /* 182 180 * As per RFD77402 datasheet section '3.1.1 Single Measure', the ··· 201 203 RFD77402_STATUS_MCPU_ON); 202 204 if (ret < 0) 203 205 return ret; 206 + 207 + if (data->irq_en) 208 + reinit_completion(&data->completion); 204 209 205 210 ret = i2c_smbus_write_byte_data(client, RFD77402_CMD_R, 206 211 RFD77402_CMD_SINGLE |
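The rfd77402 fix is about arming order: reinit_completion() must run after any early-return path is resolved but strictly before the measurement is kicked off, otherwise a completion left over from a previous cycle can satisfy the next wait. A sketch of the safe ordering, with hypothetical baz_* names:

#include <linux/completion.h>
#include <linux/jiffies.h>

static int baz_measure(struct baz_data *data)
{
	reinit_completion(&data->done);		/* arm first */
	baz_start_measurement(data);		/* then trigger the hardware */

	if (!wait_for_completion_timeout(&data->done,
					 msecs_to_jiffies(100)))
		return -ETIMEDOUT;
	return 0;
}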
+5
drivers/input/joystick/xpad.c
··· 313 313 { 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 314 314 { 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE }, 315 315 { 0x1532, 0x0a29, "Razer Wolverine V2", 0, XTYPE_XBOXONE }, 316 + { 0x1532, 0x0a57, "Razer Wolverine V3 Pro (Wired)", 0, XTYPE_XBOX360 }, 317 + { 0x1532, 0x0a59, "Razer Wolverine V3 Pro (2.4 GHz Dongle)", 0, XTYPE_XBOX360 }, 316 318 { 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 }, 317 319 { 0x15e4, 0x3f0a, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 }, 318 320 { 0x15e4, 0x3f10, "Batarang Xbox 360 controller", 0, XTYPE_XBOX360 }, ··· 362 360 { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 }, 363 361 { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 }, 364 362 { 0x1ee9, 0x1590, "ZOTAC Gaming Zone", 0, XTYPE_XBOX360 }, 363 + { 0x20bc, 0x5134, "BETOP BTP-KP50B Xinput Dongle", 0, XTYPE_XBOX360 }, 364 + { 0x20bc, 0x514a, "BETOP BTP-KP50C Xinput Dongle", 0, XTYPE_XBOX360 }, 365 365 { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, 366 366 { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE }, 367 367 { 0x20d6, 0x2064, "PowerA Wired Controller for Xbox", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, ··· 566 562 XPAD_XBOX360_VENDOR(0x1a86), /* Nanjing Qinheng Microelectronics (WCH) */ 567 563 XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */ 568 564 XPAD_XBOX360_VENDOR(0x1ee9), /* ZOTAC Technology Limited */ 565 + XPAD_XBOX360_VENDOR(0x20bc), /* BETOP wireless dongles */ 569 566 XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */ 570 567 XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */ 571 568 XPAD_XBOX360_VENDOR(0x2345), /* Machenike Controllers */
+41 -1
drivers/input/mouse/bcm5974.c
··· 286 286 const struct tp_finger *index[MAX_FINGERS]; /* finger index data */ 287 287 struct input_mt_pos pos[MAX_FINGERS]; /* position array */ 288 288 int slots[MAX_FINGERS]; /* slot assignments */ 289 + struct work_struct mode_reset_work; 290 + unsigned long last_mode_reset; 289 291 }; 290 292 291 293 /* trackpad finger block data, le16-aligned */ ··· 698 696 return retval; 699 697 } 700 698 699 + /* 700 + * Mode switches sent before the control response are ignored. 701 + * Fixing this state requires switching to normal mode and waiting 702 + * about 1ms before switching back to wellspring mode. 703 + */ 704 + static void bcm5974_mode_reset_work(struct work_struct *work) 705 + { 706 + struct bcm5974 *dev = container_of(work, struct bcm5974, mode_reset_work); 707 + int error; 708 + 709 + guard(mutex)(&dev->pm_mutex); 710 + dev->last_mode_reset = jiffies; 711 + 712 + error = bcm5974_wellspring_mode(dev, false); 713 + if (error) { 714 + dev_err(&dev->intf->dev, "reset to normal mode failed\n"); 715 + return; 716 + } 717 + 718 + fsleep(1000); 719 + 720 + error = bcm5974_wellspring_mode(dev, true); 721 + if (error) 722 + dev_err(&dev->intf->dev, "mode switch after reset failed\n"); 723 + } 724 + 701 725 static void bcm5974_irq_button(struct urb *urb) 702 726 { 703 727 struct bcm5974 *dev = urb->context; ··· 780 752 if (dev->tp_urb->actual_length == 2) 781 753 goto exit; 782 754 783 - if (report_tp_state(dev, dev->tp_urb->actual_length)) 755 + if (report_tp_state(dev, dev->tp_urb->actual_length)) { 784 756 dprintk(1, "bcm5974: bad trackpad package, length: %d\n", 785 757 dev->tp_urb->actual_length); 758 + 759 + /* 760 + * Receiving a HID packet means we aren't in wellspring mode. 761 + * If we haven't tried a reset in the last second, try now. 762 + */ 763 + if (dev->tp_urb->actual_length == 8 && 764 + time_after(jiffies, dev->last_mode_reset + msecs_to_jiffies(1000))) { 765 + schedule_work(&dev->mode_reset_work); 766 + } 767 + } 786 768 787 769 exit: 788 770 error = usb_submit_urb(dev->tp_urb, GFP_ATOMIC); ··· 944 906 dev->intf = iface; 945 907 dev->input = input_dev; 946 908 dev->cfg = *cfg; 909 + INIT_WORK(&dev->mode_reset_work, bcm5974_mode_reset_work); 947 910 mutex_init(&dev->pm_mutex); 948 911 949 912 /* setup urbs */ ··· 1037 998 { 1038 999 struct bcm5974 *dev = usb_get_intfdata(iface); 1039 1000 1001 + disable_work_sync(&dev->mode_reset_work); 1040 1002 usb_set_intfdata(iface, NULL); 1041 1003 1042 1004 input_unregister_device(dev->input);
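bcm5974 defers the mode reset to a workqueue (URB completion runs in atomic context) and rate-limits it with a plain jiffies comparison, so a flood of bad packets schedules at most one reset per second. The core of that throttle, with hypothetical qux_* names:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

static void qux_maybe_schedule_reset(struct qux_dev *dev)
{
	/* At most one attempt per second; the worker itself updates
	 * dev->last_reset under the device mutex, as the hunk does.
	 */
	if (time_after(jiffies,
		       dev->last_reset + msecs_to_jiffies(1000)))
		schedule_work(&dev->reset_work);
}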
+2 -2
drivers/input/rmi4/rmi_f54.c
··· 538 538 int error; 539 539 int i; 540 540 541 + mutex_lock(&f54->data_mutex); 542 + 541 543 report_size = rmi_f54_get_report_size(f54); 542 544 if (report_size == 0) { 543 545 dev_err(&fn->dev, "Bad report size, report type=%d\n", ··· 547 545 error = -EINVAL; 548 546 goto error; /* retry won't help */ 549 547 } 550 - 551 - mutex_lock(&f54->data_mutex); 552 548 553 549 /* 554 550 * Need to check if command has completed.
+7
drivers/input/serio/i8042-acpipnpio.h
··· 1189 1189 }, 1190 1190 { 1191 1191 .matches = { 1192 + DMI_MATCH(DMI_BOARD_NAME, "X6KK45xU_X6SP45xU"), 1193 + }, 1194 + .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1195 + SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1196 + }, 1197 + { 1198 + .matches = { 1192 1199 DMI_MATCH(DMI_BOARD_NAME, "WUJIE Series-X5SP4NAG"), 1193 1200 }, 1194 1201 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+2 -2
drivers/interconnect/qcom/sm8450.c
··· 800 800 .channels = 1, 801 801 .buswidth = 4, 802 802 .num_links = 1, 803 - .link_nodes = { MASTER_CDSP_NOC_CFG }, 803 + .link_nodes = { &qhm_nsp_noc_config }, 804 804 }; 805 805 806 806 static struct qcom_icc_node qhs_cpr_cx = { ··· 874 874 .channels = 1, 875 875 .buswidth = 4, 876 876 .num_links = 1, 877 - .link_nodes = { MASTER_CNOC_LPASS_AG_NOC }, 877 + .link_nodes = { &qhm_config_noc }, 878 878 }; 879 879 880 880 static struct qcom_icc_node qhs_mss_cfg = {
+1 -1
drivers/iommu/generic_pt/fmt/amdv1.h
··· 191 191 } 192 192 #define pt_load_entry_raw amdv1pt_load_entry_raw 193 193 194 - static inline void 194 + static __always_inline void 195 195 amdv1pt_install_leaf_entry(struct pt_state *pts, pt_oaddr_t oa, 196 196 unsigned int oasz_lg2, 197 197 const struct pt_write_attrs *attrs)
+1 -1
drivers/iommu/generic_pt/iommu_pt.h
··· 1057 1057 1058 1058 pt_walk_range(&range, __unmap_range, &unmap); 1059 1059 1060 - gather_range_pages(iotlb_gather, iommu_table, iova, len, 1060 + gather_range_pages(iotlb_gather, iommu_table, iova, unmap.unmapped, 1061 1061 &unmap.free_list); 1062 1062 1063 1063 return unmap.unmapped;
+2 -2
drivers/irqchip/irq-riscv-aplic-main.c
··· 150 150 struct device *dev = priv->dev; 151 151 152 152 list_del(&priv->head); 153 - if (dev->pm_domain) 153 + if (dev->pm_domain && dev->of_node) 154 154 dev_pm_genpd_remove_notifier(dev); 155 155 } 156 156 ··· 165 165 166 166 priv->saved_hw_regs.srcs = srcs; 167 167 list_add(&priv->head, &aplics); 168 - if (dev->pm_domain) { 168 + if (dev->pm_domain && dev->of_node) { 169 169 priv->genpd_nb.notifier_call = aplic_pm_notifier; 170 170 ret = dev_pm_genpd_add_notifier(dev, &priv->genpd_nb); 171 171 if (ret)
+4 -1
drivers/misc/fastrpc.c
··· 1401 1401 } 1402 1402 err_map: 1403 1403 fastrpc_buf_free(fl->cctx->remote_heap); 1404 + fl->cctx->remote_heap = NULL; 1404 1405 err_name: 1405 1406 kfree(name); 1406 1407 err: ··· 2390 2389 if (!err) { 2391 2390 src_perms = BIT(QCOM_SCM_VMID_HLOS); 2392 2391 2393 - qcom_scm_assign_mem(res.start, resource_size(&res), &src_perms, 2392 + err = qcom_scm_assign_mem(res.start, resource_size(&res), &src_perms, 2394 2393 data->vmperms, data->vmcount); 2394 + if (err) 2395 + goto err_free_data; 2395 2396 } 2396 2397 2397 2398 }
+4 -2
drivers/misc/lis3lv02d/lis3lv02d.c
··· 1230 1230 else 1231 1231 thread_fn = NULL; 1232 1232 1233 + if (thread_fn) 1234 + irq_flags |= IRQF_ONESHOT; 1235 + 1233 1236 err = request_threaded_irq(lis3->irq, lis302dl_interrupt, 1234 1237 thread_fn, 1235 - IRQF_TRIGGER_RISING | IRQF_ONESHOT | 1236 - irq_flags, 1238 + irq_flags | IRQF_TRIGGER_RISING, 1237 1239 DRIVER_NAME, lis3); 1238 1240 1239 1241 if (err < 0) {
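The lis3lv02d change stops forcing IRQF_ONESHOT unconditionally: the flag keeps the line masked until the threaded handler finishes and is required when one is supplied, but it is unnecessary when thread_fn is NULL. The resulting shape, as a sketch:

#include <linux/interrupt.h>

static int quux_request_irq(int irq, irq_handler_t hard,
			    irq_handler_t thread_fn, void *dev_id)
{
	unsigned long flags = IRQF_TRIGGER_RISING;

	/* ONESHOT only matters when a thread runs after the hard handler. */
	if (thread_fn)
		flags |= IRQF_ONESHOT;

	return request_threaded_irq(irq, hard, thread_fn, flags,
				    "quux", dev_id);
}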
+1
drivers/misc/mei/Kconfig
··· 3 3 config INTEL_MEI 4 4 tristate "Intel Management Engine Interface" 5 5 depends on PCI 6 + depends on X86 || DRM_XE!=n || COMPILE_TEST 6 7 default X86_64 || MATOM 7 8 help 8 9 The Intel Management Engine (Intel ME) provides Manageability,
+4 -10
drivers/misc/mei/hw-me.c
··· 1337 1337 /* check if we need to start the dev */ 1338 1338 if (!mei_host_is_ready(dev)) { 1339 1339 if (mei_hw_is_ready(dev)) { 1340 - /* synchronized by dev mutex */ 1341 - if (waitqueue_active(&dev->wait_hw_ready)) { 1342 - dev_dbg(&dev->dev, "we need to start the dev.\n"); 1343 - dev->recvd_hw_ready = true; 1344 - wake_up(&dev->wait_hw_ready); 1345 - } else if (dev->dev_state != MEI_DEV_UNINITIALIZED && 1346 - dev->dev_state != MEI_DEV_POWERING_DOWN && 1347 - dev->dev_state != MEI_DEV_POWER_DOWN) { 1340 + if (dev->dev_state == MEI_DEV_ENABLED) { 1348 1341 dev_dbg(&dev->dev, "Force link reset.\n"); 1349 1342 schedule_work(&dev->reset_work); 1350 1343 } else { 1351 - dev_dbg(&dev->dev, "Ignore this interrupt in state = %d\n", 1352 - dev->dev_state); 1344 + dev_dbg(&dev->dev, "we need to start the dev.\n"); 1345 + dev->recvd_hw_ready = true; 1346 + wake_up(&dev->wait_hw_ready); 1353 1347 } 1354 1348 } else { 1355 1349 dev_dbg(&dev->dev, "Spurious Interrupt\n");
+1 -1
drivers/net/bonding/bond_main.c
··· 5326 5326 if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP)) 5327 5327 continue; 5328 5328 5329 - if (bond_is_last_slave(bond, slave)) { 5329 + if (i + 1 == slaves_count) { 5330 5330 skb2 = skb; 5331 5331 skb_used = true; 5332 5332 } else {
+19 -1
drivers/net/ethernet/airoha/airoha_eth.c
··· 794 794 795 795 static void airoha_qdma_cleanup_rx_queue(struct airoha_queue *q) 796 796 { 797 - struct airoha_eth *eth = q->qdma->eth; 797 + struct airoha_qdma *qdma = q->qdma; 798 + struct airoha_eth *eth = qdma->eth; 799 + int qid = q - &qdma->q_rx[0]; 798 800 799 801 while (q->queued) { 800 802 struct airoha_queue_entry *e = &q->entry[q->tail]; 803 + struct airoha_qdma_desc *desc = &q->desc[q->tail]; 801 804 struct page *page = virt_to_head_page(e->buf); 802 805 803 806 dma_sync_single_for_cpu(eth->dev, e->dma_addr, e->dma_len, 804 807 page_pool_get_dma_dir(q->page_pool)); 805 808 page_pool_put_full_page(q->page_pool, page, false); 809 + /* Reset DMA descriptor */ 810 + WRITE_ONCE(desc->ctrl, 0); 811 + WRITE_ONCE(desc->addr, 0); 812 + WRITE_ONCE(desc->data, 0); 813 + WRITE_ONCE(desc->msg0, 0); 814 + WRITE_ONCE(desc->msg1, 0); 815 + WRITE_ONCE(desc->msg2, 0); 816 + WRITE_ONCE(desc->msg3, 0); 817 + 806 818 q->tail = (q->tail + 1) % q->ndesc; 807 819 q->queued--; 808 820 } 821 + 822 + q->head = q->tail; 823 + airoha_qdma_rmw(qdma, REG_RX_DMA_IDX(qid), RX_RING_DMA_IDX_MASK, 824 + FIELD_PREP(RX_RING_DMA_IDX_MASK, q->tail)); 809 825 } 810 826 811 827 static int airoha_qdma_init_rx(struct airoha_qdma *qdma) ··· 2961 2945 if (err) 2962 2946 return err; 2963 2947 } 2948 + 2949 + set_bit(DEV_STATE_REGISTERED, &eth->state); 2964 2950 2965 2951 return 0; 2966 2952 }
+1
drivers/net/ethernet/airoha/airoha_eth.h
··· 88 88 89 89 enum { 90 90 DEV_STATE_INITIALIZED, 91 + DEV_STATE_REGISTERED, 91 92 }; 92 93 93 94 enum {
+7
drivers/net/ethernet/airoha/airoha_ppe.c
··· 1368 1368 struct airoha_eth *eth = ppe->eth; 1369 1369 int err = 0; 1370 1370 1371 + /* Netfilter flowtable can try to offload flower rules while not all 1372 + * the net_devices are registered or initialized. Delay offloading 1373 + * until all net_devices are registered in the system. 1374 + */ 1375 + if (!test_bit(DEV_STATE_REGISTERED, &eth->state)) 1376 + return -EBUSY; 1377 + 1371 1378 mutex_lock(&flow_offload_mutex); 1372 1379 1373 1380 if (!eth->npu)
+52 -24
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 8045 8045 ulp_msix = bnxt_get_avail_msix(bp, bp->ulp_num_msix_want); 8046 8046 if (!ulp_msix) 8047 8047 bnxt_set_ulp_stat_ctxs(bp, 0); 8048 + else 8049 + bnxt_set_dflt_ulp_stat_ctxs(bp); 8048 8050 8049 8051 if (ulp_msix > bp->ulp_num_msix_want) 8050 8052 ulp_msix = bp->ulp_num_msix_want; ··· 8673 8671 struct hwrm_func_backing_store_qcaps_v2_output *resp; 8674 8672 struct hwrm_func_backing_store_qcaps_v2_input *req; 8675 8673 struct bnxt_ctx_mem_info *ctx = bp->ctx; 8676 - u16 type; 8674 + u16 type, next_type = 0; 8677 8675 int rc; 8678 8676 8679 8677 rc = hwrm_req_init(bp, req, HWRM_FUNC_BACKING_STORE_QCAPS_V2); ··· 8689 8687 8690 8688 resp = hwrm_req_hold(bp, req); 8691 8689 8692 - for (type = 0; type < BNXT_CTX_V2_MAX; ) { 8690 + for (type = 0; type < BNXT_CTX_V2_MAX; type = next_type) { 8693 8691 struct bnxt_ctx_mem_type *ctxm = &ctx->ctx_arr[type]; 8694 8692 u8 init_val, init_off, i; 8695 8693 u32 max_entries; ··· 8702 8700 if (rc) 8703 8701 goto ctx_done; 8704 8702 flags = le32_to_cpu(resp->flags); 8705 - type = le16_to_cpu(resp->next_valid_type); 8703 + next_type = le16_to_cpu(resp->next_valid_type); 8706 8704 if (!(flags & BNXT_CTX_MEM_TYPE_VALID)) { 8707 8705 bnxt_free_one_ctx_mem(bp, ctxm, true); 8708 8706 continue; ··· 8717 8715 else 8718 8716 continue; 8719 8717 } 8720 - ctxm->type = le16_to_cpu(resp->type); 8718 + ctxm->type = type; 8721 8719 ctxm->entry_size = entry_size; 8722 8720 ctxm->flags = flags; 8723 8721 ctxm->instance_bmap = le32_to_cpu(resp->instance_bit_map); ··· 12994 12992 return bp->num_tc ? bp->tx_nr_rings / bp->num_tc : bp->tx_nr_rings; 12995 12993 } 12996 12994 12995 + static void bnxt_set_xdp_tx_rings(struct bnxt *bp) 12996 + { 12997 + bp->tx_nr_rings_xdp = bp->tx_nr_rings_per_tc; 12998 + bp->tx_nr_rings += bp->tx_nr_rings_xdp; 12999 + } 13000 + 13001 + static void bnxt_adj_tx_rings(struct bnxt *bp) 13002 + { 13003 + /* Make adjustments if reserved TX rings are less than requested */ 13004 + bp->tx_nr_rings -= bp->tx_nr_rings_xdp; 13005 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 13006 + if (bp->tx_nr_rings_xdp) 13007 + bnxt_set_xdp_tx_rings(bp); 13008 + } 13009 + 12997 13010 static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init) 12998 13011 { 12999 13012 int rc = 0; ··· 13026 13009 if (rc) 13027 13010 return rc; 13028 13011 13029 - /* Make adjustments if reserved TX rings are less than requested */ 13030 - bp->tx_nr_rings -= bp->tx_nr_rings_xdp; 13031 - bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 13032 - if (bp->tx_nr_rings_xdp) { 13033 - bp->tx_nr_rings_xdp = bp->tx_nr_rings_per_tc; 13034 - bp->tx_nr_rings += bp->tx_nr_rings_xdp; 13035 - } 13012 + bnxt_adj_tx_rings(bp); 13036 13013 rc = bnxt_alloc_mem(bp, irq_re_init); 13037 13014 if (rc) { 13038 13015 netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc); ··· 15447 15436 return 0; 15448 15437 } 15449 15438 15439 + void bnxt_set_cp_rings(struct bnxt *bp, bool sh) 15440 + { 15441 + int tx_cp = bnxt_num_tx_to_cp(bp, bp->tx_nr_rings); 15442 + 15443 + bp->cp_nr_rings = sh ? max_t(int, tx_cp, bp->rx_nr_rings) : 15444 + tx_cp + bp->rx_nr_rings; 15445 + } 15446 + 15450 15447 int bnxt_setup_mq_tc(struct net_device *dev, u8 tc) 15451 15448 { 15452 15449 struct bnxt *bp = netdev_priv(dev); 15453 15450 bool sh = false; 15454 - int rc, tx_cp; 15451 + int rc; 15455 15452 15456 15453 if (tc > bp->max_tc) { 15457 15454 netdev_err(dev, "Too many traffic classes requested: %d. 
Max supported is %d.\n", ··· 15492 15473 bp->num_tc = 0; 15493 15474 } 15494 15475 bp->tx_nr_rings += bp->tx_nr_rings_xdp; 15495 - tx_cp = bnxt_num_tx_to_cp(bp, bp->tx_nr_rings); 15496 - bp->cp_nr_rings = sh ? max_t(int, tx_cp, bp->rx_nr_rings) : 15497 - tx_cp + bp->rx_nr_rings; 15476 + bnxt_set_cp_rings(bp, sh); 15498 15477 15499 15478 if (netif_running(bp->dev)) 15500 15479 return bnxt_open_nic(bp, true, false); ··· 16542 16525 bp->tx_nr_rings = bnxt_tx_nr_rings(bp); 16543 16526 } 16544 16527 16528 + static void bnxt_adj_dflt_rings(struct bnxt *bp, bool sh) 16529 + { 16530 + if (sh) 16531 + bnxt_trim_dflt_sh_rings(bp); 16532 + else 16533 + bp->cp_nr_rings = bp->tx_nr_rings_per_tc + bp->rx_nr_rings; 16534 + bp->tx_nr_rings = bnxt_tx_nr_rings(bp); 16535 + if (sh && READ_ONCE(bp->xdp_prog)) { 16536 + bnxt_set_xdp_tx_rings(bp); 16537 + bnxt_set_cp_rings(bp, true); 16538 + } 16539 + } 16540 + 16545 16541 static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh) 16546 16542 { 16547 16543 int dflt_rings, max_rx_rings, max_tx_rings, rc; ··· 16580 16550 return rc; 16581 16551 bp->rx_nr_rings = min_t(int, dflt_rings, max_rx_rings); 16582 16552 bp->tx_nr_rings_per_tc = min_t(int, dflt_rings, max_tx_rings); 16583 - if (sh) 16584 - bnxt_trim_dflt_sh_rings(bp); 16585 - else 16586 - bp->cp_nr_rings = bp->tx_nr_rings_per_tc + bp->rx_nr_rings; 16587 - bp->tx_nr_rings = bnxt_tx_nr_rings(bp); 16553 + 16554 + bnxt_adj_dflt_rings(bp, sh); 16588 16555 16589 16556 avail_msix = bnxt_get_max_func_irqs(bp) - bp->cp_nr_rings; 16590 16557 if (avail_msix >= BNXT_MIN_ROCE_CP_RINGS) { ··· 16594 16567 rc = __bnxt_reserve_rings(bp); 16595 16568 if (rc && rc != -ENODEV) 16596 16569 netdev_warn(bp->dev, "Unable to reserve tx rings\n"); 16597 - bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16570 + 16571 + bnxt_adj_tx_rings(bp); 16598 16572 if (sh) 16599 - bnxt_trim_dflt_sh_rings(bp); 16573 + bnxt_adj_dflt_rings(bp, true); 16600 16574 16601 16575 /* Rings may have been trimmed, re-reserve the trimmed rings. */ 16602 16576 if (bnxt_need_reserve_rings(bp)) { 16603 16577 rc = __bnxt_reserve_rings(bp); 16604 16578 if (rc && rc != -ENODEV) 16605 16579 netdev_warn(bp->dev, "2nd rings reservation failed.\n"); 16606 - bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16580 + bnxt_adj_tx_rings(bp); 16607 16581 } 16608 16582 if (BNXT_CHIP_TYPE_NITRO_A0(bp)) { 16609 16583 bp->rx_nr_rings++; ··· 16638 16610 if (rc) 16639 16611 goto init_dflt_ring_err; 16640 16612 16641 - bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16613 + bnxt_adj_tx_rings(bp); 16642 16614 16643 16615 bnxt_set_dflt_rfs(bp); 16644 16616
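The bnxt refactor exists because the completion-ring formula was copy-pasted across the MQPRIO, ethtool-channels and XDP paths; bnxt_set_cp_rings() now owns it. The formula itself: with shared completion rings TX and RX reuse the same rings, so the larger of the two counts suffices; dedicated mode needs their sum. For example, with tx_cp = 4 and rx = 6:

cp_shared    = max(4, 6);	/* = 6: TX and RX share CP rings */
cp_dedicated = 4 + 6;		/* = 10: each side gets its own  */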
+1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 2985 2985 int tx_xdp); 2986 2986 int bnxt_fw_init_one(struct bnxt *bp); 2987 2987 bool bnxt_hwrm_reset_permitted(struct bnxt *bp); 2988 + void bnxt_set_cp_rings(struct bnxt *bp, bool sh); 2988 2989 int bnxt_setup_mq_tc(struct net_device *dev, u8 tc); 2989 2990 struct bnxt_ntuple_filter *bnxt_lookup_ntp_filter_from_idx(struct bnxt *bp, 2990 2991 struct bnxt_ntuple_filter *fltr, u32 idx);
+1 -4
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 945 945 bool sh = false; 946 946 int tx_xdp = 0; 947 947 int rc = 0; 948 - int tx_cp; 949 948 950 949 if (channel->other_count) 951 950 return -EINVAL; ··· 1012 1013 if (tcs > 1) 1013 1014 bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tcs + tx_xdp; 1014 1015 1015 - tx_cp = bnxt_num_tx_to_cp(bp, bp->tx_nr_rings); 1016 - bp->cp_nr_rings = sh ? max_t(int, tx_cp, bp->rx_nr_rings) : 1017 - tx_cp + bp->rx_nr_rings; 1016 + bnxt_set_cp_rings(bp, sh); 1018 1017 1019 1018 /* After changing number of rx channels, update NTUPLE feature. */ 1020 1019 netdev_update_features(dev);
+2 -3
drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
··· 384 384 static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog) 385 385 { 386 386 struct net_device *dev = bp->dev; 387 - int tx_xdp = 0, tx_cp, rc, tc; 387 + int tx_xdp = 0, rc, tc; 388 388 struct bpf_prog *old; 389 389 390 390 netdev_assert_locked(dev); ··· 431 431 } 432 432 bp->tx_nr_rings_xdp = tx_xdp; 433 433 bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tc + tx_xdp; 434 - tx_cp = bnxt_num_tx_to_cp(bp, bp->tx_nr_rings); 435 - bp->cp_nr_rings = max_t(int, tx_cp, bp->rx_nr_rings); 434 + bnxt_set_cp_rings(bp, true); 436 435 bnxt_set_tpa_flags(bp); 437 436 bnxt_set_ring_params(bp); 438 437
+1 -1
drivers/net/ethernet/broadcom/tg3.c
··· 12299 12299 ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising, 12300 12300 advertising); 12301 12301 12302 - if (netif_running(dev) && tp->link_up) { 12302 + if (netif_running(dev) && netif_carrier_ok(dev)) { 12303 12303 cmd->base.speed = tp->link_config.active_speed; 12304 12304 cmd->base.duplex = tp->link_config.active_duplex; 12305 12305 ethtool_convert_legacy_u32_to_link_mode(
+6 -4
drivers/net/ethernet/cadence/macb_pci.c
··· 96 96 return 0; 97 97 98 98 err_plat_dev_register: 99 - clk_unregister(plat_data.hclk); 99 + clk_unregister_fixed_rate(plat_data.hclk); 100 100 101 101 err_hclk_register: 102 - clk_unregister(plat_data.pclk); 102 + clk_unregister_fixed_rate(plat_data.pclk); 103 103 104 104 err_pclk_register: 105 105 return err; ··· 109 109 { 110 110 struct platform_device *plat_dev = pci_get_drvdata(pdev); 111 111 struct macb_platform_data *plat_data = dev_get_platdata(&plat_dev->dev); 112 + struct clk *pclk = plat_data->pclk; 113 + struct clk *hclk = plat_data->hclk; 112 114 113 - clk_unregister(plat_data->pclk); 114 - clk_unregister(plat_data->hclk); 115 115 platform_device_unregister(plat_dev); 116 + clk_unregister_fixed_rate(pclk); 117 + clk_unregister_fixed_rate(hclk); 116 118 } 117 119 118 120 static const struct pci_device_id dev_id_table[] = {
+24 -4
drivers/net/ethernet/faraday/ftgmac100.c
··· 977 977 priv->tx_skbs = kcalloc(MAX_TX_QUEUE_ENTRIES, sizeof(void *), 978 978 GFP_KERNEL); 979 979 if (!priv->tx_skbs) 980 - return -ENOMEM; 980 + goto err_free_rx_skbs; 981 981 982 982 /* Allocate descriptors */ 983 983 priv->rxdes = dma_alloc_coherent(priv->dev, 984 984 MAX_RX_QUEUE_ENTRIES * sizeof(struct ftgmac100_rxdes), 985 985 &priv->rxdes_dma, GFP_KERNEL); 986 986 if (!priv->rxdes) 987 - return -ENOMEM; 987 + goto err_free_tx_skbs; 988 988 priv->txdes = dma_alloc_coherent(priv->dev, 989 989 MAX_TX_QUEUE_ENTRIES * sizeof(struct ftgmac100_txdes), 990 990 &priv->txdes_dma, GFP_KERNEL); 991 991 if (!priv->txdes) 992 - return -ENOMEM; 992 + goto err_free_rxdes; 993 993 994 994 /* Allocate scratch packet buffer */ 995 995 priv->rx_scratch = dma_alloc_coherent(priv->dev, ··· 997 997 &priv->rx_scratch_dma, 998 998 GFP_KERNEL); 999 999 if (!priv->rx_scratch) 1000 - return -ENOMEM; 1000 + goto err_free_txdes; 1001 1001 1002 1002 return 0; 1003 + 1004 + err_free_txdes: 1005 + dma_free_coherent(priv->dev, 1006 + MAX_TX_QUEUE_ENTRIES * 1007 + sizeof(struct ftgmac100_txdes), 1008 + priv->txdes, priv->txdes_dma); 1009 + priv->txdes = NULL; 1010 + err_free_rxdes: 1011 + dma_free_coherent(priv->dev, 1012 + MAX_RX_QUEUE_ENTRIES * 1013 + sizeof(struct ftgmac100_rxdes), 1014 + priv->rxdes, priv->rxdes_dma); 1015 + priv->rxdes = NULL; 1016 + err_free_tx_skbs: 1017 + kfree(priv->tx_skbs); 1018 + priv->tx_skbs = NULL; 1019 + err_free_rx_skbs: 1020 + kfree(priv->rx_skbs); 1021 + priv->rx_skbs = NULL; 1022 + return -ENOMEM; 1003 1023 } 1004 1024 1005 1025 static void ftgmac100_init_rings(struct ftgmac100 *priv)
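ftgmac100 previously leaked every earlier allocation on the first failure; the fix installs the canonical goto unwind ladder, releasing in reverse order and NULLing pointers so a later teardown cannot double-free. The pattern in miniature, with hypothetical corge_* names:

#include <linux/slab.h>

static int corge_alloc(struct corge *p)
{
	p->a = kcalloc(CORGE_NA, sizeof(*p->a), GFP_KERNEL);
	if (!p->a)
		return -ENOMEM;

	p->b = kcalloc(CORGE_NB, sizeof(*p->b), GFP_KERNEL);
	if (!p->b)
		goto err_free_a;

	return 0;

err_free_a:
	kfree(p->a);
	p->a = NULL;		/* make any later teardown a no-op */
	return -ENOMEM;
}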
+12 -1
drivers/net/ethernet/freescale/enetc/enetc.c
··· 2578 2578 2579 2579 static void enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring) 2580 2580 { 2581 + struct enetc_si *si = container_of(hw, struct enetc_si, hw); 2581 2582 int idx = tx_ring->index; 2582 2583 u32 tbmr; 2583 2584 ··· 2592 2591 enetc_txbdr_wr(hw, idx, ENETC_TBLENR, 2593 2592 ENETC_RTBLENR_LEN(tx_ring->bd_count)); 2594 2593 2595 - /* clearing PI/CI registers for Tx not supported, adjust sw indexes */ 2594 + /* For ENETC v1, clearing PI/CI registers for Tx not supported, 2595 + * adjust sw indexes 2596 + */ 2596 2597 tx_ring->next_to_use = enetc_txbdr_rd(hw, idx, ENETC_TBPIR); 2597 2598 tx_ring->next_to_clean = enetc_txbdr_rd(hw, idx, ENETC_TBCIR); 2599 + 2600 + if (tx_ring->next_to_use != tx_ring->next_to_clean && 2601 + !is_enetc_rev1(si)) { 2602 + tx_ring->next_to_use = 0; 2603 + tx_ring->next_to_clean = 0; 2604 + enetc_txbdr_wr(hw, idx, ENETC_TBPIR, 0); 2605 + enetc_txbdr_wr(hw, idx, ENETC_TBCIR, 0); 2606 + } 2598 2607 2599 2608 /* enable Tx ints by setting pkt thr to 1 */ 2600 2609 enetc_txbdr_wr(hw, idx, ENETC_TBICR0, ENETC_TBICR0_ICEN | 0x1);
+11
drivers/net/ethernet/freescale/enetc/enetc4_hw.h
··· 134 134 135 135 /* Port operational register */ 136 136 #define ENETC4_POR 0x4100 137 + #define POR_TXDIS BIT(0) 138 + #define POR_RXDIS BIT(1) 139 + 140 + /* Port status register */ 141 + #define ENETC4_PSR 0x4104 142 + #define PSR_RX_BUSY BIT(1) 137 143 138 144 /* Port traffic class a transmit maximum SDU register */ 139 145 #define ENETC4_PTCTMSDUR(a) ((a) * 0x20 + 0x4208) ··· 178 172 179 173 /* Port internal MDIO base address, use to access PCS */ 180 174 #define ENETC4_PM_IMDIO_BASE 0x5030 175 + 176 + /* Port MAC 0/1 Interrupt Event Register */ 177 + #define ENETC4_PM_IEVENT(mac) (0x5040 + (mac) * 0x400) 178 + #define PM_IEVENT_TX_EMPTY BIT(5) 179 + #define PM_IEVENT_RX_EMPTY BIT(6) 181 180 182 181 /* Port MAC 0/1 Pause Quanta Register */ 183 182 #define ENETC4_PM_PAUSE_QUANTA(mac) (0x5054 + (mac) * 0x400)
+104 -14
drivers/net/ethernet/freescale/enetc/enetc4_pf.c
··· 444 444 enetc4_pf_reset_tc_msdu(&si->hw); 445 445 } 446 446 447 - static void enetc4_enable_trx(struct enetc_pf *pf) 448 - { 449 - struct enetc_hw *hw = &pf->si->hw; 450 - 451 - /* Enable port transmit/receive */ 452 - enetc_port_wr(hw, ENETC4_POR, 0); 453 - } 454 - 455 447 static void enetc4_configure_port(struct enetc_pf *pf) 456 448 { 457 449 enetc4_configure_port_si(pf); 458 450 enetc4_set_trx_frame_size(pf); 459 451 enetc_set_default_rss_key(pf); 460 - enetc4_enable_trx(pf); 461 452 } 462 453 463 454 static int enetc4_init_ntmp_user(struct enetc_si *si) ··· 792 801 enetc_port_wr(hw, ENETC4_PPAUOFFTR, pause_off_thresh); 793 802 } 794 803 795 - static void enetc4_enable_mac(struct enetc_pf *pf, bool en) 804 + static void enetc4_mac_wait_tx_empty(struct enetc_si *si, int mac) 796 805 { 806 + u32 val; 807 + 808 + if (read_poll_timeout(enetc_port_rd, val, 809 + val & PM_IEVENT_TX_EMPTY, 810 + 100, 10000, false, &si->hw, 811 + ENETC4_PM_IEVENT(mac))) 812 + dev_warn(&si->pdev->dev, 813 + "MAC %d TX is not empty\n", mac); 814 + } 815 + 816 + static void enetc4_mac_tx_graceful_stop(struct enetc_pf *pf) 817 + { 818 + struct enetc_hw *hw = &pf->si->hw; 819 + struct enetc_si *si = pf->si; 820 + u32 val; 821 + 822 + val = enetc_port_rd(hw, ENETC4_POR); 823 + val |= POR_TXDIS; 824 + enetc_port_wr(hw, ENETC4_POR, val); 825 + 826 + if (enetc_is_pseudo_mac(si)) 827 + return; 828 + 829 + enetc4_mac_wait_tx_empty(si, 0); 830 + if (si->hw_features & ENETC_SI_F_QBU) 831 + enetc4_mac_wait_tx_empty(si, 1); 832 + 833 + val = enetc_port_mac_rd(si, ENETC4_PM_CMD_CFG(0)); 834 + val &= ~PM_CMD_CFG_TX_EN; 835 + enetc_port_mac_wr(si, ENETC4_PM_CMD_CFG(0), val); 836 + } 837 + 838 + static void enetc4_mac_tx_enable(struct enetc_pf *pf) 839 + { 840 + struct enetc_hw *hw = &pf->si->hw; 797 841 struct enetc_si *si = pf->si; 798 842 u32 val; 799 843 800 844 val = enetc_port_mac_rd(si, ENETC4_PM_CMD_CFG(0)); 801 - val &= ~(PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN); 802 - val |= en ? 
(PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN) : 0; 845 + val |= PM_CMD_CFG_TX_EN; 846 + enetc_port_mac_wr(si, ENETC4_PM_CMD_CFG(0), val); 803 847 848 + val = enetc_port_rd(hw, ENETC4_POR); 849 + val &= ~POR_TXDIS; 850 + enetc_port_wr(hw, ENETC4_POR, val); 851 + } 852 + 853 + static void enetc4_mac_wait_rx_empty(struct enetc_si *si, int mac) 854 + { 855 + u32 val; 856 + 857 + if (read_poll_timeout(enetc_port_rd, val, 858 + val & PM_IEVENT_RX_EMPTY, 859 + 100, 10000, false, &si->hw, 860 + ENETC4_PM_IEVENT(mac))) 861 + dev_warn(&si->pdev->dev, 862 + "MAC %d RX is not empty\n", mac); 863 + } 864 + 865 + static void enetc4_mac_rx_graceful_stop(struct enetc_pf *pf) 866 + { 867 + struct enetc_hw *hw = &pf->si->hw; 868 + struct enetc_si *si = pf->si; 869 + u32 val; 870 + 871 + if (enetc_is_pseudo_mac(si)) 872 + goto check_rx_busy; 873 + 874 + if (si->hw_features & ENETC_SI_F_QBU) { 875 + val = enetc_port_rd(hw, ENETC4_PM_CMD_CFG(1)); 876 + val &= ~PM_CMD_CFG_RX_EN; 877 + enetc_port_wr(hw, ENETC4_PM_CMD_CFG(1), val); 878 + enetc4_mac_wait_rx_empty(si, 1); 879 + } 880 + 881 + val = enetc_port_rd(hw, ENETC4_PM_CMD_CFG(0)); 882 + val &= ~PM_CMD_CFG_RX_EN; 883 + enetc_port_wr(hw, ENETC4_PM_CMD_CFG(0), val); 884 + enetc4_mac_wait_rx_empty(si, 0); 885 + 886 + check_rx_busy: 887 + if (read_poll_timeout(enetc_port_rd, val, 888 + !(val & PSR_RX_BUSY), 889 + 100, 10000, false, hw, 890 + ENETC4_PSR)) 891 + dev_warn(&si->pdev->dev, "Port RX busy\n"); 892 + 893 + val = enetc_port_rd(hw, ENETC4_POR); 894 + val |= POR_RXDIS; 895 + enetc_port_wr(hw, ENETC4_POR, val); 896 + } 897 + 898 + static void enetc4_mac_rx_enable(struct enetc_pf *pf) 899 + { 900 + struct enetc_hw *hw = &pf->si->hw; 901 + struct enetc_si *si = pf->si; 902 + u32 val; 903 + 904 + val = enetc_port_rd(hw, ENETC4_POR); 905 + val &= ~POR_RXDIS; 906 + enetc_port_wr(hw, ENETC4_POR, val); 907 + 908 + val = enetc_port_mac_rd(si, ENETC4_PM_CMD_CFG(0)); 909 + val |= PM_CMD_CFG_RX_EN; 804 910 enetc_port_mac_wr(si, ENETC4_PM_CMD_CFG(0), val); 805 911 } 806 912 ··· 941 853 enetc4_set_hd_flow_control(pf, hd_fc); 942 854 enetc4_set_tx_pause(pf, priv->num_rx_rings, tx_pause); 943 855 enetc4_set_rx_pause(pf, rx_pause); 944 - enetc4_enable_mac(pf, true); 856 + enetc4_mac_tx_enable(pf); 857 + enetc4_mac_rx_enable(pf); 945 858 } 946 859 947 860 static void enetc4_pl_mac_link_down(struct phylink_config *config, ··· 951 862 { 952 863 struct enetc_pf *pf = phylink_to_enetc_pf(config); 953 864 954 - enetc4_enable_mac(pf, false); 865 + enetc4_mac_rx_graceful_stop(pf); 866 + enetc4_mac_tx_graceful_stop(pf); 955 867 } 956 868 957 869 static const struct phylink_mac_ops enetc_pl_mac_ops = {
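The enetc4 graceful stop leans on read_poll_timeout() from <linux/iopoll.h>: disable the direction, then spin on the MAC event register until the corresponding EMPTY bit asserts, warning rather than failing if it never does. The general shape, with hypothetical grault_* names matching the 100 us / 10 ms cadence used above:

#include <linux/iopoll.h>

static void grault_wait_tx_empty(struct grault_hw *hw, struct device *dev)
{
	u32 val;

	/* args: read op, result var, exit condition, poll period (us),
	 * timeout (us), sleep-before-first-read, then the op's arguments.
	 */
	if (read_poll_timeout(grault_rd, val, val & GRAULT_TX_EMPTY,
			      100, 10000, false, hw, GRAULT_STATUS))
		dev_warn(dev, "TX path did not drain\n");
}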
+9 -1
drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
··· 795 795 struct enetc_si *si = priv->si; 796 796 int err = 0; 797 797 798 + if (rxfh->hfunc != ETH_RSS_HASH_NO_CHANGE && 799 + rxfh->hfunc != ETH_RSS_HASH_TOP) 800 + return -EOPNOTSUPP; 801 + 798 802 /* set hash key, if PF */ 799 - if (rxfh->key && enetc_si_is_pf(si)) 803 + if (rxfh->key) { 804 + if (!enetc_si_is_pf(si)) 805 + return -EOPNOTSUPP; 806 + 800 807 enetc_set_rss_key(si, rxfh->key); 808 + } 801 809 802 810 /* set RSS table */ 803 811 if (rxfh->indir)
-3
drivers/net/ethernet/freescale/fec_ptp.c
··· 545 545 if (rq->perout.flags) 546 546 return -EOPNOTSUPP; 547 547 548 - if (rq->perout.index != fep->pps_channel) 549 - return -EOPNOTSUPP; 550 - 551 548 period.tv_sec = rq->perout.period.sec; 552 549 period.tv_nsec = rq->perout.period.nsec; 553 550 period_ns = timespec64_to_ns(&period);
+20 -1
drivers/net/ethernet/mediatek/mtk_ppe_offload.c
··· 244 244 return 0; 245 245 } 246 246 247 + static bool 248 + mtk_flow_is_valid_idev(const struct mtk_eth *eth, const struct net_device *idev) 249 + { 250 + size_t i; 251 + 252 + if (!idev) 253 + return false; 254 + 255 + for (i = 0; i < ARRAY_SIZE(eth->netdev); i++) { 256 + if (!eth->netdev[i]) 257 + continue; 258 + 259 + if (idev->netdev_ops == eth->netdev[i]->netdev_ops) 260 + return true; 261 + } 262 + 263 + return false; 264 + } 265 + 247 266 static int 248 267 mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f, 249 268 int ppe_index) ··· 289 270 flow_rule_match_meta(rule, &match); 290 271 if (mtk_is_netsys_v2_or_greater(eth)) { 291 272 idev = __dev_get_by_index(&init_net, match.key->ingress_ifindex); 292 - if (idev && idev->netdev_ops == eth->netdev[0]->netdev_ops) { 273 + if (mtk_flow_is_valid_idev(eth, idev)) { 293 274 struct mtk_mac *mac = netdev_priv(idev); 294 275 295 276 if (WARN_ON(mac->ppe_idx >= eth->soc->ppe_num))
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 107 107 if (err) 108 108 return err; 109 109 110 - err = mlx5_fw_version_query(dev, &running_fw, &stored_fw); 111 - if (err) 112 - return err; 110 + mlx5_fw_version_query(dev, &running_fw, &stored_fw); 113 111 114 112 snprintf(version_str, sizeof(version_str), "%d.%d.%04d", 115 113 mlx5_fw_ver_major(running_fw), mlx5_fw_ver_minor(running_fw),
+2
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 3761 3761 return 0; 3762 3762 3763 3763 err_vports: 3764 + /* rollback to legacy, indicates don't unregister the uplink netdev */ 3765 + esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY; 3764 3766 mlx5_esw_offloads_rep_unload(esw, MLX5_VPORT_UPLINK); 3765 3767 err_uplink: 3766 3768 esw_offloads_steering_cleanup(esw);
+32 -17
drivers/net/ethernet/mellanox/mlx5/core/fw.c
··· 822 822 return 0; 823 823 } 824 824 825 - int mlx5_fw_version_query(struct mlx5_core_dev *dev, 826 - u32 *running_ver, u32 *pending_ver) 825 + void mlx5_fw_version_query(struct mlx5_core_dev *dev, 826 + u32 *running_ver, u32 *pending_ver) 827 827 { 828 828 u32 reg_mcqi_version[MLX5_ST_SZ_DW(mcqi_version)] = {}; 829 829 bool pending_version_exists; 830 830 int component_index; 831 831 int err; 832 832 833 + *running_ver = 0; 834 + *pending_ver = 0; 835 + 833 836 if (!MLX5_CAP_GEN(dev, mcam_reg) || !MLX5_CAP_MCAM_REG(dev, mcqi) || 834 837 !MLX5_CAP_MCAM_REG(dev, mcqs)) { 835 838 mlx5_core_warn(dev, "fw query isn't supported by the FW\n"); 836 - return -EOPNOTSUPP; 839 + return; 837 840 } 838 841 839 842 component_index = mlx5_get_boot_img_component_index(dev); 840 - if (component_index < 0) 841 - return component_index; 843 + if (component_index < 0) { 844 + mlx5_core_warn(dev, "fw query failed to find boot img component index, err %d\n", 845 + component_index); 846 + return; 847 + } 842 848 849 + *running_ver = U32_MAX; /* indicate failure */ 843 850 err = mlx5_reg_mcqi_version_query(dev, component_index, 844 851 MCQI_FW_RUNNING_VERSION, 845 852 reg_mcqi_version); 846 - if (err) 847 - return err; 853 + if (!err) 854 + *running_ver = MLX5_GET(mcqi_version, reg_mcqi_version, 855 + version); 856 + else 857 + mlx5_core_warn(dev, "failed to query running version, err %d\n", 858 + err); 848 859 849 - *running_ver = MLX5_GET(mcqi_version, reg_mcqi_version, version); 850 - 860 + *pending_ver = U32_MAX; /* indicate failure */ 851 861 err = mlx5_fw_image_pending(dev, component_index, &pending_version_exists); 852 - if (err) 853 - return err; 862 + if (err) { 863 + mlx5_core_warn(dev, "failed to query pending image, err %d\n", 864 + err); 865 + return; 866 + } 854 867 855 868 if (!pending_version_exists) { 856 869 *pending_ver = 0; 857 - return 0; 870 + return; 858 871 } 859 872 860 873 err = mlx5_reg_mcqi_version_query(dev, component_index, 861 874 MCQI_FW_STORED_VERSION, 862 875 reg_mcqi_version); 863 - if (err) 864 - return err; 876 + if (!err) 877 + *pending_ver = MLX5_GET(mcqi_version, reg_mcqi_version, 878 + version); 879 + else 880 + mlx5_core_warn(dev, "failed to query pending version, err %d\n", 881 + err); 865 882 866 - *pending_ver = MLX5_GET(mcqi_version, reg_mcqi_version, version); 867 - 868 - return 0; 883 + return; 869 884 }
+3
drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c
··· 160 160 161 161 void mlx5_ldev_add_debugfs(struct mlx5_core_dev *dev) 162 162 { 163 + struct mlx5_lag *ldev = mlx5_lag_dev(dev); 163 164 struct dentry *dbg; 164 165 166 + if (!ldev) 167 + return; 165 168 dbg = debugfs_create_dir("lag", mlx5_debugfs_get_dev_root(dev)); 166 169 dev->priv.dbg.lag_debugfs = dbg; 167 170
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 393 393 394 394 int mlx5_firmware_flash(struct mlx5_core_dev *dev, const struct firmware *fw, 395 395 struct netlink_ext_ack *extack); 396 - int mlx5_fw_version_query(struct mlx5_core_dev *dev, 397 - u32 *running_ver, u32 *stored_ver); 396 + void mlx5_fw_version_query(struct mlx5_core_dev *dev, u32 *running_ver, 397 + u32 *stored_ver); 398 398 399 399 #ifdef CONFIG_MLX5_CORE_EN 400 400 int mlx5e_init(void);
+1 -1
drivers/net/ethernet/meta/fbnic/fbnic_debugfs.c
··· 197 197 return 0; 198 198 } 199 199 200 - for (i = 0; i <= ring->size_mask; i++) { 200 + for (i = 0; i < (ring->size_mask + 1) * FBNIC_BD_FRAG_COUNT; i++) { 201 201 u64 bd = le64_to_cpu(ring->desc[i]); 202 202 203 203 seq_printf(s, "%04x %#04llx %#014llx\n", i,
+3 -3
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
··· 927 927 /* Force DMA writes to flush before writing to tail */ 928 928 dma_wmb(); 929 929 930 - writel(i, bdq->doorbell); 930 + writel(i * FBNIC_BD_FRAG_COUNT, bdq->doorbell); 931 931 } 932 932 } 933 933 ··· 2564 2564 hpq->tail = 0; 2565 2565 hpq->head = 0; 2566 2566 2567 - log_size = fls(hpq->size_mask); 2567 + log_size = fls(hpq->size_mask) + ilog2(FBNIC_BD_FRAG_COUNT); 2568 2568 2569 2569 /* Store descriptor ring address and size */ 2570 2570 fbnic_ring_wr32(hpq, FBNIC_QUEUE_BDQ_HPQ_BAL, lower_32_bits(hpq->dma)); ··· 2576 2576 if (!ppq->size_mask) 2577 2577 goto write_ctl; 2578 2578 2579 - log_size = fls(ppq->size_mask); 2579 + log_size = fls(ppq->size_mask) + ilog2(FBNIC_BD_FRAG_COUNT); 2580 2580 2581 2581 /* Add enabling of PPQ to BDQ control */ 2582 2582 bdq_ctl |= FBNIC_QUEUE_BDQ_CTL_PPQ_ENABLE;
+1 -1
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
··· 38 38 #define FBNIC_MAX_XDPQS 128u 39 39 40 40 /* These apply to TWQs, TCQ, RCQ */ 41 - #define FBNIC_QUEUE_SIZE_MIN 16u 41 + #define FBNIC_QUEUE_SIZE_MIN 64u 42 42 #define FBNIC_QUEUE_SIZE_MAX SZ_64K 43 43 44 44 #define FBNIC_TXQ_SIZE_DEFAULT 1024
+7
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 766 766 } 767 767 768 768 *frag_count = 1; 769 + 770 + /* In the single-buffer path, napi_build_skb() must see the 771 + * actual backing allocation size so skb->truesize reflects 772 + * the full page (or higher-order page), not just the usable 773 + * packet area. 774 + */ 775 + *alloc_size = PAGE_SIZE << get_order(*alloc_size); 769 776 return; 770 777 } 771 778
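The mana change corrects skb->truesize accounting: the page allocator hands out whole (possibly higher-order) pages, so the size fed to napi_build_skb() must be rounded up with get_order(), not left at the usable packet area. Worked numbers, assuming 4 KiB pages:

/* get_order(5000) = 1, so the real backing allocation is 8 KiB */
size_t alloc_size = 5000;
size_t truesize   = PAGE_SIZE << get_order(alloc_size);	/* 8192 */

Under-reporting truesize here would let sockets account less memory than the RX path actually pins.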
+4 -10
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 156 156 static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue); 157 157 static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode, 158 158 u32 rxmode, u32 chan); 159 - static int stmmac_vlan_restore(struct stmmac_priv *priv); 159 + static void stmmac_vlan_restore(struct stmmac_priv *priv); 160 160 161 161 #ifdef CONFIG_DEBUG_FS 162 162 static const struct net_device_ops stmmac_netdev_ops; ··· 6859 6859 return ret; 6860 6860 } 6861 6861 6862 - static int stmmac_vlan_restore(struct stmmac_priv *priv) 6862 + static void stmmac_vlan_restore(struct stmmac_priv *priv) 6863 6863 { 6864 - int ret; 6865 - 6866 6864 if (!(priv->dev->features & NETIF_F_VLAN_FEATURES)) 6867 - return 0; 6865 + return; 6868 6866 6869 6867 if (priv->hw->num_vlan) 6870 6868 stmmac_restore_hw_vlan_rx_fltr(priv, priv->dev, priv->hw); 6871 6869 6872 - ret = stmmac_vlan_update(priv, priv->num_double_vlans); 6873 - if (ret) 6874 - netdev_err(priv->dev, "Failed to restore VLANs\n"); 6875 - 6876 - return ret; 6870 + stmmac_vlan_update(priv, priv->num_double_vlans); 6877 6871 } 6878 6872 6879 6873 static int stmmac_bpf(struct net_device *dev, struct netdev_bpf *bpf)
+1 -1
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 902 902 903 903 skb_reserve(skb, headroom); 904 904 skb_put(skb, pkt_len); 905 + skb_copy_to_linear_data(skb, xdp->data, pkt_len); 905 906 skb->dev = ndev; 906 907 907 908 /* RX HW timestamp */ ··· 913 912 skb->offload_fwd_mark = emac->offload_fwd_mark; 914 913 skb->protocol = eth_type_trans(skb, ndev); 915 914 916 - skb_mark_for_recycle(skb); 917 915 napi_gro_receive(&emac->napi_rx, skb); 918 916 ndev->stats.rx_bytes += pkt_len; 919 917 ndev->stats.rx_packets++;
+2 -2
drivers/net/ethernet/xilinx/xilinx_axienet.h
··· 105 105 #define XAXIDMA_BD_HAS_DRE_MASK 0xF00 /* Whether has DRE mask */ 106 106 #define XAXIDMA_BD_WORDLEN_MASK 0xFF /* Whether has DRE mask */ 107 107 108 - #define XAXIDMA_BD_CTRL_LENGTH_MASK 0x007FFFFF /* Requested len */ 108 + #define XAXIDMA_BD_CTRL_LENGTH_MASK GENMASK(25, 0) /* Requested len */ 109 109 #define XAXIDMA_BD_CTRL_TXSOF_MASK 0x08000000 /* First tx packet */ 110 110 #define XAXIDMA_BD_CTRL_TXEOF_MASK 0x04000000 /* Last tx packet */ 111 111 #define XAXIDMA_BD_CTRL_ALL_MASK 0x0C000000 /* All control bits */ ··· 130 130 #define XAXIDMA_BD_CTRL_TXEOF_MASK 0x04000000 /* Last tx packet */ 131 131 #define XAXIDMA_BD_CTRL_ALL_MASK 0x0C000000 /* All control bits */ 132 132 133 - #define XAXIDMA_BD_STS_ACTUAL_LEN_MASK 0x007FFFFF /* Actual len */ 133 + #define XAXIDMA_BD_STS_ACTUAL_LEN_MASK GENMASK(25, 0) /* Actual len */ 134 134 #define XAXIDMA_BD_STS_COMPLETE_MASK 0x80000000 /* Completed */ 135 135 #define XAXIDMA_BD_STS_DEC_ERR_MASK 0x40000000 /* Decode error */ 136 136 #define XAXIDMA_BD_STS_SLV_ERR_MASK 0x20000000 /* Slave error */
+4 -5
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 770 770 * @first_bd: Index of first descriptor to clean up 771 771 * @nr_bds: Max number of descriptors to clean up 772 772 * @force: Whether to clean descriptors even if not complete 773 - * @sizep: Pointer to a u32 filled with the total sum of all bytes 774 - * in all cleaned-up descriptors. Ignored if NULL. 773 + * @sizep: Pointer to a u32 accumulating the total byte count of 774 + * completed packets (using skb->len). Ignored if NULL. 775 775 * @budget: NAPI budget (use 0 when not called from NAPI poll) 776 776 * 777 777 * Would either be called after a successful transmit operation, or after ··· 805 805 DMA_TO_DEVICE); 806 806 807 807 if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK)) { 808 + if (sizep) 809 + *sizep += cur_p->skb->len; 808 810 napi_consume_skb(cur_p->skb, budget); 809 811 packets++; 810 812 } ··· 820 818 wmb(); 821 819 cur_p->cntrl = 0; 822 820 cur_p->status = 0; 823 - 824 - if (sizep) 825 - *sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK; 826 821 } 827 822 828 823 if (!force) {
+6 -1
drivers/net/phy/sfp.c
··· 480 480 { 481 481 /* Ubiquiti U-Fiber Instant module claims that support all transceiver 482 482 * types including 10G Ethernet which is not truth. So clear all claimed 483 - * modes and set only one mode which module supports: 1000baseX_Full. 483 + * modes and set only one mode which module supports: 1000baseX_Full, 484 + * along with the Autoneg and pause bits. 484 485 */ 485 486 linkmode_zero(caps->link_modes); 486 487 linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT, 487 488 caps->link_modes); 489 + linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, caps->link_modes); 490 + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, caps->link_modes); 491 + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, caps->link_modes); 492 + 488 493 phy_interface_zero(caps->interfaces); 489 494 __set_bit(PHY_INTERFACE_MODE_1000BASEX, caps->interfaces); 490 495 }
+9 -11
drivers/net/virtio_net.c
··· 381 381 struct xdp_buff **xsk_buffs; 382 382 }; 383 383 384 - #define VIRTIO_NET_RSS_MAX_KEY_SIZE 40 385 - 386 384 /* Control VQ buffers: protected by the rtnl lock */ 387 385 struct control_buf { 388 386 struct virtio_net_ctrl_hdr hdr; ··· 484 486 485 487 /* Must be last as it ends in a flexible-array member. */ 486 488 TRAILING_OVERLAP(struct virtio_net_rss_config_trailer, rss_trailer, hash_key_data, 487 - u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE]; 489 + u8 rss_hash_key_data[NETDEV_RSS_KEY_LEN]; 488 490 ); 489 491 }; 490 492 static_assert(offsetof(struct virtnet_info, rss_trailer.hash_key_data) == ··· 6706 6708 struct virtnet_info *vi; 6707 6709 u16 max_queue_pairs; 6708 6710 int mtu = 0; 6711 + u16 key_sz; 6709 6712 6710 6713 /* Find if host supports multiqueue/rss virtio_net device */ 6711 6714 max_queue_pairs = 1; ··· 6841 6842 } 6842 6843 6843 6844 if (vi->has_rss || vi->has_rss_hash_report) { 6844 - vi->rss_key_size = 6845 - virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size)); 6846 - if (vi->rss_key_size > VIRTIO_NET_RSS_MAX_KEY_SIZE) { 6847 - dev_err(&vdev->dev, "rss_max_key_size=%u exceeds the limit %u.\n", 6848 - vi->rss_key_size, VIRTIO_NET_RSS_MAX_KEY_SIZE); 6849 - err = -EINVAL; 6850 - goto free; 6851 - } 6845 + key_sz = virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size)); 6846 + 6847 + vi->rss_key_size = min_t(u16, key_sz, NETDEV_RSS_KEY_LEN); 6848 + if (key_sz > vi->rss_key_size) 6849 + dev_warn(&vdev->dev, 6850 + "rss_max_key_size=%u exceeds driver limit %u, clamping\n", 6851 + key_sz, vi->rss_key_size); 6852 6852 6853 6853 vi->rss_hash_types_supported = 6854 6854 virtio_cread32(vdev, offsetof(struct virtio_net_config, supported_hash_types));
+4 -2
drivers/net/vxlan/vxlan_core.c
··· 1965 1965 ns_olen = request->len - skb_network_offset(request) - 1966 1966 sizeof(struct ipv6hdr) - sizeof(*ns); 1967 1967 for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) { 1968 - if (!ns->opt[i + 1]) { 1968 + if (!ns->opt[i + 1] || i + (ns->opt[i + 1] << 3) > ns_olen) { 1969 1969 kfree_skb(reply); 1970 1970 return NULL; 1971 1971 } 1972 1972 if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) { 1973 - daddr = ns->opt + i + sizeof(struct nd_opt_hdr); 1973 + if ((ns->opt[i + 1] << 3) >= 1974 + sizeof(struct nd_opt_hdr) + ETH_ALEN) 1975 + daddr = ns->opt + i + sizeof(struct nd_opt_hdr); 1974 1976 break; 1975 1977 } 1976 1978 }
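The vxlan fix hardens the neighbour-solicitation option walk: each option advertises its length in 8-byte units, and a crafted packet could claim a length running past the buffer, or an LL-address option too short to actually hold an address. A hedged sketch of a fully bounds-checked walk over the same layout:

#include <linux/if_ether.h>
#include <net/ndisc.h>

static const u8 *find_src_lladdr(const u8 *opt, size_t len)
{
	size_t i = 0;

	while (i + 1 < len) {
		size_t olen = opt[i + 1] * 8;	/* length in 8-byte units */

		if (!olen || i + olen > len)
			return NULL;		/* malformed option */
		if (opt[i] == ND_OPT_SOURCE_LL_ADDR &&
		    olen >= sizeof(struct nd_opt_hdr) + ETH_ALEN)
			return opt + i + sizeof(struct nd_opt_hdr);
		i += olen;
	}
	return NULL;
}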
+7 -8
drivers/net/wireless/ath/ath11k/dp_rx.c
··· 1 1 // SPDX-License-Identifier: BSD-3-Clause-Clear 2 2 /* 3 3 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved. 4 + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 5 5 */ 6 6 7 7 #include <linux/ieee80211.h> ··· 1110 1110 struct ath11k_base *ab = ar->ab; 1111 1111 struct ath11k_peer *peer; 1112 1112 struct ath11k_sta *arsta = ath11k_sta_to_arsta(params->sta); 1113 + struct dp_rx_tid *rx_tid; 1113 1114 int vdev_id = arsta->arvif->vdev_id; 1114 - dma_addr_t paddr; 1115 - bool active; 1116 1115 int ret; 1117 1116 1118 1117 spin_lock_bh(&ab->base_lock); ··· 1123 1124 return -ENOENT; 1124 1125 } 1125 1126 1126 - paddr = peer->rx_tid[params->tid].paddr; 1127 - active = peer->rx_tid[params->tid].active; 1127 + rx_tid = &peer->rx_tid[params->tid]; 1128 1128 1129 - if (!active) { 1129 + if (!rx_tid->active) { 1130 1130 spin_unlock_bh(&ab->base_lock); 1131 1131 return 0; 1132 1132 } 1133 1133 1134 - ret = ath11k_peer_rx_tid_reo_update(ar, peer, peer->rx_tid, 1, 0, false); 1134 + ret = ath11k_peer_rx_tid_reo_update(ar, peer, rx_tid, 1, 0, false); 1135 1135 spin_unlock_bh(&ab->base_lock); 1136 1136 if (ret) { 1137 1137 ath11k_warn(ab, "failed to update reo for rx tid %d: %d\n", ··· 1139 1141 } 1140 1142 1141 1143 ret = ath11k_wmi_peer_rx_reorder_queue_setup(ar, vdev_id, 1142 - params->sta->addr, paddr, 1144 + params->sta->addr, 1145 + rx_tid->paddr, 1143 1146 params->tid, 1, 1); 1144 1147 if (ret) 1145 1148 ath11k_warn(ab, "failed to send wmi to delete rx tid %d\n",
+3 -1
drivers/net/wireless/ath/ath12k/dp_rx.c
··· 735 735 struct ath12k_dp *dp = ath12k_ab_to_dp(ab); 736 736 struct ath12k_dp_link_peer *peer; 737 737 struct ath12k_sta *ahsta = ath12k_sta_to_ahsta(params->sta); 738 + struct ath12k_dp_rx_tid *rx_tid; 738 739 struct ath12k_link_sta *arsta; 739 740 int vdev_id; 740 741 bool active; ··· 771 770 return 0; 772 771 } 773 772 774 - ret = ath12k_dp_arch_peer_rx_tid_reo_update(dp, peer, peer->dp_peer->rx_tid, 773 + rx_tid = &peer->dp_peer->rx_tid[params->tid]; 774 + ret = ath12k_dp_arch_peer_rx_tid_reo_update(dp, peer, rx_tid, 775 775 1, 0, false); 776 776 spin_unlock_bh(&dp->dp_lock); 777 777 if (ret) {
+5
drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
··· 297 297 SCAN_OFFLOAD_UPDATE_PROFILES_CMD = 0x6E, 298 298 299 299 /** 300 + * @SCAN_START_NOTIFICATION_UMAC: uses &struct iwl_umac_scan_start 301 + */ 302 + SCAN_START_NOTIFICATION_UMAC = 0xb2, 303 + 304 + /** 300 305 * @MATCH_FOUND_NOTIFICATION: scan match found 301 306 */ 302 307 MATCH_FOUND_NOTIFICATION = 0xd9,
+10
drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
··· 1157 1157 }; 1158 1158 1159 1159 /** 1160 + * struct iwl_umac_scan_start - scan start notification 1161 + * @uid: scan id, &enum iwl_umac_scan_uid_offsets 1162 + * @reserved: for future use 1163 + */ 1164 + struct iwl_umac_scan_start { 1165 + __le32 uid; 1166 + __le32 reserved; 1167 + } __packed; /* SCAN_START_UMAC_API_S_VER_1 */ 1168 + 1169 + /** 1160 1170 * struct iwl_umac_scan_complete - scan complete notification 1161 1171 * @uid: scan id, &enum iwl_umac_scan_uid_offsets 1162 1172 * @last_schedule: last scheduling line
+69 -32
drivers/net/wireless/intel/iwlwifi/mld/iface.c
··· 111 111 IEEE80211_HE_MAC_CAP2_ACK_EN); 112 112 } 113 113 114 - static void iwl_mld_set_he_support(struct iwl_mld *mld, 115 - struct ieee80211_vif *vif, 116 - struct iwl_mac_config_cmd *cmd) 114 + struct iwl_mld_mac_wifi_gen_sta_iter_data { 115 + struct ieee80211_vif *vif; 116 + struct iwl_mac_wifi_gen_support *support; 117 + }; 118 + 119 + static void iwl_mld_mac_wifi_gen_sta_iter(void *_data, 120 + struct ieee80211_sta *sta) 117 121 { 118 - if (vif->type == NL80211_IFTYPE_AP) 119 - cmd->wifi_gen.he_ap_support = 1; 120 - else 121 - cmd->wifi_gen.he_support = 1; 122 + struct iwl_mld_sta *mld_sta = iwl_mld_sta_from_mac80211(sta); 123 + struct iwl_mld_mac_wifi_gen_sta_iter_data *data = _data; 124 + struct ieee80211_link_sta *link_sta; 125 + unsigned int link_id; 126 + 127 + if (mld_sta->vif != data->vif) 128 + return; 129 + 130 + for_each_sta_active_link(data->vif, sta, link_sta, link_id) { 131 + if (link_sta->he_cap.has_he) 132 + data->support->he_support = 1; 133 + if (link_sta->eht_cap.has_eht) 134 + data->support->eht_support = 1; 135 + } 136 + } 137 + 138 + static void iwl_mld_set_wifi_gen(struct iwl_mld *mld, 139 + struct ieee80211_vif *vif, 140 + struct iwl_mac_wifi_gen_support *support) 141 + { 142 + struct iwl_mld_mac_wifi_gen_sta_iter_data sta_iter_data = { 143 + .vif = vif, 144 + .support = support, 145 + }; 146 + struct ieee80211_bss_conf *link_conf; 147 + unsigned int link_id; 148 + 149 + switch (vif->type) { 150 + case NL80211_IFTYPE_MONITOR: 151 + /* for sniffer, set to HW capabilities */ 152 + support->he_support = 1; 153 + support->eht_support = mld->trans->cfg->eht_supported; 154 + break; 155 + case NL80211_IFTYPE_AP: 156 + /* for AP set according to the link configs */ 157 + for_each_vif_active_link(vif, link_conf, link_id) { 158 + support->he_ap_support |= link_conf->he_support; 159 + support->eht_support |= link_conf->eht_support; 160 + } 161 + break; 162 + default: 163 + /* 164 + * If we have MLO enabled, then the firmware needs to enable 165 + * address translation for the station(s) we add. That depends 166 + * on having EHT enabled in firmware, which in turn depends on 167 + * mac80211 in the iteration below. 168 + * However, mac80211 doesn't enable capabilities on the AP STA 169 + * until it has parsed the association response successfully, 170 + * so set EHT (and HE as a pre-requisite for EHT) when the vif 171 + * is an MLD. 172 + */ 173 + if (ieee80211_vif_is_mld(vif)) { 174 + support->he_support = 1; 175 + support->eht_support = 1; 176 + } 177 + 178 + ieee80211_iterate_stations_mtx(mld->hw, 179 + iwl_mld_mac_wifi_gen_sta_iter, 180 + &sta_iter_data); 181 + break; 182 + } 122 183 } 123 184 124 185 /* fill the common part for all interface types */ ··· 189 128 u32 action) 190 129 { 191 130 struct iwl_mld_vif *mld_vif = iwl_mld_vif_from_mac80211(vif); 192 - struct ieee80211_bss_conf *link_conf; 193 - unsigned int link_id; 194 131 195 132 lockdep_assert_wiphy(mld->wiphy); 196 133 ··· 206 147 cmd->nic_not_ack_enabled = 207 148 cpu_to_le32(!iwl_mld_is_nic_ack_enabled(mld, vif)); 208 149 209 - /* If we have MLO enabled, then the firmware needs to enable 210 - * address translation for the station(s) we add. That depends 211 - * on having EHT enabled in firmware, which in turn depends on 212 - * mac80211 in the code below. 213 - * However, mac80211 doesn't enable HE/EHT until it has parsed 214 - * the association response successfully, so just skip all that 215 - * and enable both when we have MLO. 
216 - */ 217 - if (ieee80211_vif_is_mld(vif)) { 218 - iwl_mld_set_he_support(mld, vif, cmd); 219 - cmd->wifi_gen.eht_support = 1; 220 - return; 221 - } 222 - 223 - for_each_vif_active_link(vif, link_conf, link_id) { 224 - if (!link_conf->he_support) 225 - continue; 226 - 227 - iwl_mld_set_he_support(mld, vif, cmd); 228 - 229 - /* EHT, if supported, was already set above */ 230 - break; 231 - } 150 + iwl_mld_set_wifi_gen(mld, vif, &cmd->wifi_gen); 232 151 } 233 152 234 153 static void iwl_mld_fill_mac_cmd_sta(struct iwl_mld *mld,
+19
drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
··· 1761 1761 1762 1762 if (vif->type == NL80211_IFTYPE_STATION) 1763 1763 iwl_mld_link_set_2mhz_block(mld, vif, sta); 1764 + 1765 + if (sta->tdls) { 1766 + /* 1767 + * update MAC since wifi generation flags may change, 1768 + * we also update MAC on association to the AP via the 1769 + * vif assoc change 1770 + */ 1771 + iwl_mld_mac_fw_action(mld, vif, FW_CTXT_ACTION_MODIFY); 1772 + } 1773 + 1764 1774 /* Now the link_sta's capabilities are set, update the FW */ 1765 1775 iwl_mld_config_tlc(mld, vif, sta); 1766 1776 ··· 1882 1872 if (sta->tdls && iwl_mld_tdls_sta_count(mld) == 0) { 1883 1873 /* just removed last TDLS STA, so enable PM */ 1884 1874 iwl_mld_update_mac_power(mld, vif, false); 1875 + } 1876 + 1877 + if (sta->tdls) { 1878 + /* 1879 + * update MAC since wifi generation flags may change, 1880 + * we also update MAC on disassociation to the AP via 1881 + * the vif assoc change 1882 + */ 1883 + iwl_mld_mac_fw_action(mld, vif, FW_CTXT_ACTION_MODIFY); 1885 1884 } 1886 1885 } else { 1887 1886 return -EINVAL;
+1
drivers/net/wireless/intel/iwlwifi/mld/mld.c
··· 171 171 HCMD_NAME(MISSED_BEACONS_NOTIFICATION), 172 172 HCMD_NAME(MAC_PM_POWER_TABLE), 173 173 HCMD_NAME(MFUART_LOAD_NOTIFICATION), 174 + HCMD_NAME(SCAN_START_NOTIFICATION_UMAC), 174 175 HCMD_NAME(RSS_CONFIG_CMD), 175 176 HCMD_NAME(SCAN_ITERATION_COMPLETE_UMAC), 176 177 HCMD_NAME(REPLY_RX_MPDU_CMD),
+2 -2
drivers/net/wireless/intel/iwlwifi/mld/mlo.c
··· 739 739 740 740 /* Ignore any BSS that was not seen in the last MLO scan */ 741 741 if (ktime_before(link_conf->bss->ts_boottime, 742 - mld->scan.last_mlo_scan_time)) 742 + mld->scan.last_mlo_scan_start_time)) 743 743 continue; 744 744 745 745 data[n_data].link_id = link_id; ··· 945 945 if (!mld_vif->authorized || hweight16(usable_links) <= 1) 946 946 return; 947 947 948 - if (WARN(ktime_before(mld->scan.last_mlo_scan_time, 948 + if (WARN(ktime_before(mld->scan.last_mlo_scan_start_time, 949 949 ktime_sub_ns(ktime_get_boottime_ns(), 950 950 5ULL * NSEC_PER_SEC)), 951 951 "Last MLO scan was too long ago, can't select links\n"))
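The rename spells out what the timestamp records: when the last MLO scan started, not when its results arrived. The WARN() above then rejects link selection if that start is more than five seconds old. A small sketch of the same staleness test, with hypothetical names:

    #include <linux/ktime.h>

    /* True if @last_start (boottime, ns) is older than @max_age_sec. */
    static bool scan_is_stale(u64 last_start, u64 max_age_sec)
    {
            return ktime_before(last_start,
                                ktime_sub_ns(ktime_get_boottime_ns(),
                                             max_age_sec * NSEC_PER_SEC));
    }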
+5
drivers/net/wireless/intel/iwlwifi/mld/notif.c
··· 287 287 * at least enough bytes to cover the structure listed in the CMD_VER_ENTRY. 288 288 */ 289 289 290 + CMD_VERSIONS(scan_start_notif, 291 + CMD_VER_ENTRY(1, iwl_umac_scan_start)) 290 292 CMD_VERSIONS(scan_complete_notif, 291 293 CMD_VER_ENTRY(1, iwl_umac_scan_complete)) 292 294 CMD_VERSIONS(scan_iter_complete_notif, ··· 362 360 link_id) 363 361 DEFINE_SIMPLE_CANCELLATION(roc, iwl_roc_notif, activity) 364 362 DEFINE_SIMPLE_CANCELLATION(scan_complete, iwl_umac_scan_complete, uid) 363 + DEFINE_SIMPLE_CANCELLATION(scan_start, iwl_umac_scan_start, uid) 365 364 DEFINE_SIMPLE_CANCELLATION(probe_resp_data, iwl_probe_resp_data_notif, 366 365 mac_id) 367 366 DEFINE_SIMPLE_CANCELLATION(uapsd_misbehaving_ap, iwl_uapsd_misbehaving_ap_notif, ··· 405 402 RX_HANDLER_SYNC) 406 403 RX_HANDLER_NO_OBJECT(LEGACY_GROUP, BA_NOTIF, compressed_ba_notif, 407 404 RX_HANDLER_SYNC) 405 + RX_HANDLER_OF_SCAN(LEGACY_GROUP, SCAN_START_NOTIFICATION_UMAC, 406 + scan_start_notif) 408 407 RX_HANDLER_OF_SCAN(LEGACY_GROUP, SCAN_COMPLETE_UMAC, 409 408 scan_complete_notif) 410 409 RX_HANDLER_NO_OBJECT(LEGACY_GROUP, SCAN_ITERATION_COMPLETE_UMAC,
+27 -3
drivers/net/wireless/intel/iwlwifi/mld/scan.c
··· 473 473 params->flags & NL80211_SCAN_FLAG_COLOCATED_6GHZ) 474 474 flags |= IWL_UMAC_SCAN_GEN_FLAGS_V2_TRIGGER_UHB_SCAN; 475 475 476 + if (scan_status == IWL_MLD_SCAN_INT_MLO) 477 + flags |= IWL_UMAC_SCAN_GEN_FLAGS_V2_NTF_START; 478 + 476 479 if (params->enable_6ghz_passive) 477 480 flags |= IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN; 478 481 ··· 1820 1817 ret = _iwl_mld_single_scan_start(mld, vif, req, &ies, 1821 1818 IWL_MLD_SCAN_INT_MLO); 1822 1819 1823 - if (!ret) 1824 - mld->scan.last_mlo_scan_time = ktime_get_boottime_ns(); 1825 - 1826 1820 IWL_DEBUG_SCAN(mld, "Internal MLO scan: ret=%d\n", ret); 1827 1821 } 1828 1822 ··· 1902 1902 { 1903 1903 IWL_DEBUG_SCAN(mld, "Scheduled scan results\n"); 1904 1904 ieee80211_sched_scan_results(mld->hw); 1905 + } 1906 + 1907 + void iwl_mld_handle_scan_start_notif(struct iwl_mld *mld, 1908 + struct iwl_rx_packet *pkt) 1909 + { 1910 + struct iwl_umac_scan_start *notif = (void *)pkt->data; 1911 + u32 uid = le32_to_cpu(notif->uid); 1912 + 1913 + if (IWL_FW_CHECK(mld, uid >= ARRAY_SIZE(mld->scan.uid_status), 1914 + "FW reports out-of-range scan UID %d\n", uid)) 1915 + return; 1916 + 1917 + if (IWL_FW_CHECK(mld, !(mld->scan.uid_status[uid] & mld->scan.status), 1918 + "FW reports scan UID %d we didn't trigger\n", uid)) 1919 + return; 1920 + 1921 + IWL_DEBUG_SCAN(mld, "Scan started: uid=%u type=%u\n", uid, 1922 + mld->scan.uid_status[uid]); 1923 + if (IWL_FW_CHECK(mld, mld->scan.uid_status[uid] != IWL_MLD_SCAN_INT_MLO, 1924 + "unexpected scan start notification for scan type %d\n", 1925 + mld->scan.uid_status[uid])) 1926 + return; 1927 + 1928 + mld->scan.last_mlo_scan_start_time = ktime_get_boottime_ns(); 1905 1929 } 1906 1930 1907 1931 void iwl_mld_handle_scan_complete_notif(struct iwl_mld *mld,
+6 -3
drivers/net/wireless/intel/iwlwifi/mld/scan.h
··· 27 27 void iwl_mld_handle_match_found_notif(struct iwl_mld *mld, 28 28 struct iwl_rx_packet *pkt); 29 29 30 + void iwl_mld_handle_scan_start_notif(struct iwl_mld *mld, 31 + struct iwl_rx_packet *pkt); 32 + 30 33 void iwl_mld_handle_scan_complete_notif(struct iwl_mld *mld, 31 34 struct iwl_rx_packet *pkt); 32 35 ··· 117 114 * in jiffies. 118 115 * @last_start_time_jiffies: stores the last start time in jiffies 119 116 * (interface up/reset/resume). 120 - * @last_mlo_scan_time: start time of the last MLO scan in nanoseconds since 121 - * boot. 117 + * @last_mlo_scan_start_time: start time of the last MLO scan in nanoseconds 118 + * since boot. 122 119 */ 123 120 struct iwl_mld_scan { 124 121 /* Add here fields that need clean up on restart */ ··· 139 136 void *cmd; 140 137 unsigned long last_6ghz_passive_jiffies; 141 138 unsigned long last_start_time_jiffies; 142 - u64 last_mlo_scan_time; 139 + u64 last_mlo_scan_start_time; 143 140 }; 144 141 145 142 /**
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 2807 2807 if (IS_ERR_OR_NULL(vif)) 2808 2808 return; 2809 2809 2810 - if (len < sizeof(struct iwl_scan_offload_match_info)) { 2810 + if (len < sizeof(struct iwl_scan_offload_match_info) + matches_len) { 2811 2811 IWL_ERR(mvm, "Invalid scan match info notification\n"); 2812 2812 return; 2813 2813 }
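The old bound only covered the fixed header, so a notification shorter than header plus its variable match data still passed. When the element count is at hand, struct_size() expresses the same check overflow-safely; a hedged sketch with a hypothetical layout:

    #include <linux/overflow.h>

    struct match_info {                     /* hypothetical layout */
            __le32 n_matches;
            u8 matches[];
    };

    static bool notif_len_ok(const struct match_info *info, size_t len,
                             u32 n_matches)
    {
            /* header plus trailing elements, saturating on overflow */
            return len >= struct_size(info, matches, n_matches);
    }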
+2 -1
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 470 470 .dataflags[0] = IWL_HCMD_DFL_NOCOPY, 471 471 }; 472 472 473 - if (mvm->trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210) { 473 + if (mvm->trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210 || 474 + !mvm->trans->cfg->uhb_supported) { 474 475 IWL_DEBUG_RADIO(mvm, "UATS feature is not supported\n"); 475 476 return; 476 477 }
+1 -1
drivers/net/wireless/microchip/wilc1000/hif.c
··· 163 163 u32 index = 0; 164 164 u32 i, scan_timeout; 165 165 u8 *buffer; 166 - u8 valuesize = 0; 166 + u32 valuesize = 0; 167 167 u8 *search_ssid_vals = NULL; 168 168 const u8 ch_list_len = request->n_channels; 169 169 struct host_if_drv *hif_drv = vif->hif_drv;
+5 -3
drivers/net/wireless/ti/wl1251/tx.c
··· 402 402 int hdrlen; 403 403 u8 *frame; 404 404 405 - skb = wl->tx_frames[result->id]; 406 - if (skb == NULL) { 407 - wl1251_error("SKB for packet %d is NULL", result->id); 405 + if (unlikely(result->id >= ARRAY_SIZE(wl->tx_frames) || 406 + wl->tx_frames[result->id] == NULL)) { 407 + wl1251_error("invalid packet id %u", result->id); 408 408 return; 409 409 } 410 + 411 + skb = wl->tx_frames[result->id]; 410 412 411 413 info = IEEE80211_SKB_CB(skb); 412 414
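result->id arrives from the firmware, so it is untrusted input: the fix bounds-checks the index before the array access rather than only testing the looked-up pointer. The pattern in isolation, as a hypothetical helper:

    /* Look up a tx frame by firmware-supplied id; NULL if invalid. */
    static struct sk_buff *get_tx_frame(struct wl1251 *wl, u32 id)
    {
            if (unlikely(id >= ARRAY_SIZE(wl->tx_frames)))
                    return NULL;            /* out-of-range id */
            return wl->tx_frames[id];       /* may legitimately be NULL */
    }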
-1
drivers/net/wireless/virtual/virt_wifi.c
··· 557 557 eth_hw_addr_inherit(dev, priv->lowerdev); 558 558 netif_stacked_transfer_operstate(priv->lowerdev, dev); 559 559 560 - SET_NETDEV_DEV(dev, &priv->lowerdev->dev); 561 560 dev->ieee80211_ptr = kzalloc_obj(*dev->ieee80211_ptr); 562 561 563 562 if (!dev->ieee80211_ptr) {
+3
drivers/nfc/pn533/uart.c
··· 211 211 212 212 timer_delete(&dev->cmd_timeout); 213 213 for (i = 0; i < count; i++) { 214 + if (unlikely(!skb_tailroom(dev->recv_skb))) 215 + skb_trim(dev->recv_skb, 0); 216 + 214 217 skb_put_u8(dev->recv_skb, *data++); 215 218 if (!pn532_uart_rx_is_frame(dev->recv_skb)) 216 219 continue;
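Without the check, a malformed UART stream longer than the preallocated skb would make skb_put_u8() hit skb_over_panic(). Testing skb_tailroom() and truncating the stale partial frame keeps the receive path bounded; the guard reduced to a sketch:

    /* Append one byte, resetting the buffer if a frame overruns it. */
    static void uart_rx_byte(struct sk_buff *recv_skb, u8 byte)
    {
            if (unlikely(!skb_tailroom(recv_skb)))
                    skb_trim(recv_skb, 0);  /* drop oversized partial frame */
            skb_put_u8(recv_skb, byte);
    }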
+1
drivers/nvmem/imx-ocotp-ele.c
··· 131 131 static void imx_ocotp_fixup_dt_cell_info(struct nvmem_device *nvmem, 132 132 struct nvmem_cell_info *cell) 133 133 { 134 + cell->raw_len = round_up(cell->bytes, 4); 134 135 cell->read_post_process = imx_ocotp_cell_pp; 135 136 } 136 137
+1
drivers/nvmem/imx-ocotp.c
··· 589 589 static void imx_ocotp_fixup_dt_cell_info(struct nvmem_device *nvmem, 590 590 struct nvmem_cell_info *cell) 591 591 { 592 + cell->raw_len = round_up(cell->bytes, 4); 592 593 cell->read_post_process = imx_ocotp_cell_pp; 593 594 } 594 595
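Both OCOTP variants read fuses in 4-byte words and post-process the result, so the raw read backing a cell must span whole words even when the cell itself is shorter. A one-line sketch of the rule:

    #include <linux/math.h>

    /* Word-granular OTP: back a cell read with whole 4-byte words. */
    static unsigned int otp_raw_len(unsigned int cell_bytes)
    {
            return round_up(cell_bytes, 4); /* e.g. 3 -> 4, 5 -> 8 */
    }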
+4 -4
drivers/nvmem/zynqmp_nvmem.c
··· 66 66 dma_addr_t dma_buf; 67 67 size_t words = bytes / WORD_INBYTES; 68 68 int ret; 69 - int value; 69 + unsigned int value; 70 70 char *data; 71 71 72 72 if (bytes % WORD_INBYTES != 0) { ··· 80 80 } 81 81 82 82 if (pufflag == 1 && flag == EFUSE_WRITE) { 83 - memcpy(&value, val, bytes); 83 + memcpy(&value, val, sizeof(value)); 84 84 if ((offset == EFUSE_PUF_START_OFFSET || 85 85 offset == EFUSE_PUF_MID_OFFSET) && 86 86 value & P_USER_0_64_UPPER_MASK) { ··· 100 100 if (!efuse) 101 101 return -ENOMEM; 102 102 103 - data = dma_alloc_coherent(dev, sizeof(bytes), 103 + data = dma_alloc_coherent(dev, bytes, 104 104 &dma_buf, GFP_KERNEL); 105 105 if (!data) { 106 106 ret = -ENOMEM; ··· 134 134 if (flag == EFUSE_READ) 135 135 memcpy(val, data, bytes); 136 136 efuse_access_err: 137 - dma_free_coherent(dev, sizeof(bytes), 137 + dma_free_coherent(dev, bytes, 138 138 data, dma_buf); 139 139 efuse_data_fail: 140 140 dma_free_coherent(dev, sizeof(struct xilinx_efuse),
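Two of these hunks fix the classic sizeof-of-a-length bug: sizeof(bytes) is the width of the size_t variable (8 on 64-bit), not the requested transfer size, so the DMA buffer was allocated and freed at 8 bytes regardless of the transfer. Sketch of the corrected call:

    #include <linux/dma-mapping.h>

    static void *alloc_xfer_buf(struct device *dev, size_t bytes,
                                dma_addr_t *dma_handle)
    {
            /* pass the length itself; sizeof(bytes) would always be 8 */
            return dma_alloc_coherent(dev, bytes, dma_handle, GFP_KERNEL);
    }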
+14 -18
drivers/s390/crypto/zcrypt_msgtype6.c
··· 953 953 /* 954 954 * The request distributor calls this function if it picked the CEXxC 955 955 * device to handle a modexpo request. 956 + * This function assumes that ap_msg has been initialized with 957 + * ap_init_apmsg() and thus a valid buffer with the size of 958 + * ap_msg->bufsize is available within ap_msg. Also the caller has 959 + * to make sure ap_release_apmsg() is always called even on failure. 956 960 * @zq: pointer to zcrypt_queue structure that identifies the 957 961 * CEXxC device to the request distributor 958 962 * @mex: pointer to the modexpo request buffer ··· 968 964 struct ap_response_type *resp_type = &ap_msg->response; 969 965 int rc; 970 966 971 - ap_msg->msg = (void *)get_zeroed_page(GFP_KERNEL); 972 - if (!ap_msg->msg) 973 - return -ENOMEM; 974 - ap_msg->bufsize = PAGE_SIZE; 975 967 ap_msg->receive = zcrypt_msgtype6_receive; 976 968 ap_msg->psmid = (((unsigned long)current->pid) << 32) + 977 969 atomic_inc_return(&zcrypt_step); 978 970 rc = icamex_msg_to_type6mex_msgx(zq, ap_msg, mex); 979 971 if (rc) 980 - goto out_free; 972 + goto out; 981 973 resp_type->type = CEXXC_RESPONSE_TYPE_ICA; 982 974 init_completion(&resp_type->work); 983 975 rc = ap_queue_message(zq->queue, ap_msg); 984 976 if (rc) 985 - goto out_free; 977 + goto out; 986 978 rc = wait_for_completion_interruptible(&resp_type->work); 987 979 if (rc == 0) { 988 980 rc = ap_msg->rc; ··· 991 991 ap_cancel_message(zq->queue, ap_msg); 992 992 } 993 993 994 - out_free: 995 - free_page((unsigned long)ap_msg->msg); 996 - ap_msg->msg = NULL; 994 + out: 997 995 return rc; 998 996 } 999 997 1000 998 /* 1001 999 * The request distributor calls this function if it picked the CEXxC 1002 1000 * device to handle a modexpo_crt request. 1001 + * This function assumes that ap_msg has been initialized with 1002 + * ap_init_apmsg() and thus a valid buffer with the size of 1003 + * ap_msg->bufsize is available within ap_msg. Also the caller has 1004 + * to make sure ap_release_apmsg() is always called even on failure. 1003 1005 * @zq: pointer to zcrypt_queue structure that identifies the 1004 1006 * CEXxC device to the request distributor 1005 1007 * @crt: pointer to the modexpoc_crt request buffer ··· 1013 1011 struct ap_response_type *resp_type = &ap_msg->response; 1014 1012 int rc; 1015 1013 1016 - ap_msg->msg = (void *)get_zeroed_page(GFP_KERNEL); 1017 - if (!ap_msg->msg) 1018 - return -ENOMEM; 1019 - ap_msg->bufsize = PAGE_SIZE; 1020 1014 ap_msg->receive = zcrypt_msgtype6_receive; 1021 1015 ap_msg->psmid = (((unsigned long)current->pid) << 32) + 1022 1016 atomic_inc_return(&zcrypt_step); 1023 1017 rc = icacrt_msg_to_type6crt_msgx(zq, ap_msg, crt); 1024 1018 if (rc) 1025 - goto out_free; 1019 + goto out; 1026 1020 resp_type->type = CEXXC_RESPONSE_TYPE_ICA; 1027 1021 init_completion(&resp_type->work); 1028 1022 rc = ap_queue_message(zq->queue, ap_msg); 1029 1023 if (rc) 1030 - goto out_free; 1024 + goto out; 1031 1025 rc = wait_for_completion_interruptible(&resp_type->work); 1032 1026 if (rc == 0) { 1033 1027 rc = ap_msg->rc; ··· 1036 1038 ap_cancel_message(zq->queue, ap_msg); 1037 1039 } 1038 1040 1039 - out_free: 1040 - free_page((unsigned long)ap_msg->msg); 1041 - ap_msg->msg = NULL; 1041 + out: 1042 1042 return rc; 1043 1043 } 1044 1044
+12
drivers/spi/spi-amlogic-spifc-a4.c
··· 1066 1066 .finish_io_req = aml_sfc_ecc_finish_io_req, 1067 1067 }; 1068 1068 1069 + static void aml_sfc_unregister_ecc_engine(void *data) 1070 + { 1071 + struct nand_ecc_engine *eng = data; 1072 + 1073 + nand_ecc_unregister_on_host_hw_engine(eng); 1074 + } 1075 + 1069 1076 static int aml_sfc_clk_init(struct aml_sfc *sfc) 1070 1077 { 1071 1078 sfc->gate_clk = devm_clk_get_enabled(sfc->dev, "gate"); ··· 1155 1148 ret = nand_ecc_register_on_host_hw_engine(&sfc->ecc_eng); 1156 1149 if (ret) 1157 1150 return dev_err_probe(&pdev->dev, ret, "failed to register Aml host ecc engine.\n"); 1151 + 1152 + ret = devm_add_action_or_reset(dev, aml_sfc_unregister_ecc_engine, 1153 + &sfc->ecc_eng); 1154 + if (ret) 1155 + return dev_err_probe(dev, ret, "failed to add ECC unregister action\n"); 1158 1156 1159 1157 ret = of_property_read_u32(np, "amlogic,rx-adj", &val); 1160 1158 if (!ret)
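Recording the teardown with devm_add_action_or_reset() ties it to the device lifetime: if the action cannot be recorded it runs immediately, otherwise it runs on any later probe failure and on removal, so no explicit unwind label is needed. A generic sketch with hypothetical helpers:

    static void widget_unregister_action(void *data)
    {
            widget_unregister(data);        /* hypothetical teardown */
    }

    static int widget_setup(struct device *dev, struct widget *w)
    {
            int ret = widget_register(w);   /* hypothetical setup */

            if (ret)
                    return ret;
            /* undone automatically on probe error or device removal */
            return devm_add_action_or_reset(dev, widget_unregister_action, w);
    }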
+9 -8
drivers/spi/spi-cadence-quadspi.c
··· 1483 1483 if (refcount_read(&cqspi->inflight_ops) == 0) 1484 1484 return -ENODEV; 1485 1485 1486 - if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) { 1487 - ret = pm_runtime_resume_and_get(dev); 1488 - if (ret) { 1489 - dev_err(&mem->spi->dev, "resume failed with %d\n", ret); 1490 - return ret; 1491 - } 1492 - } 1493 - 1494 1486 if (!refcount_read(&cqspi->refcount)) 1495 1487 return -EBUSY; 1496 1488 ··· 1494 1502 return -EBUSY; 1495 1503 } 1496 1504 1505 + if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) { 1506 + ret = pm_runtime_resume_and_get(dev); 1507 + if (ret) { 1508 + dev_err(&mem->spi->dev, "resume failed with %d\n", ret); 1509 + goto dec_inflight_refcount; 1510 + } 1511 + } 1512 + 1497 1513 ret = cqspi_mem_process(mem, op); 1498 1514 1499 1515 if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) ··· 1510 1510 if (ret) 1511 1511 dev_err(&mem->spi->dev, "operation failed with %d\n", ret); 1512 1512 1513 + dec_inflight_refcount: 1513 1514 if (refcount_read(&cqspi->inflight_ops) > 1) 1514 1515 refcount_dec(&cqspi->inflight_ops); 1515 1516
+6 -6
drivers/spi/spi-stm32-ospi.c
··· 928 928 dma_cfg.dst_addr = ospi->regs_phys_base + OSPI_DR; 929 929 ret = stm32_ospi_dma_setup(ospi, &dma_cfg); 930 930 if (ret) 931 - return ret; 931 + goto err_dma_free; 932 932 933 933 mutex_init(&ospi->lock); 934 934 ··· 965 965 if (ret) { 966 966 /* Disable ospi */ 967 967 writel_relaxed(0, ospi->regs_base + OSPI_CR); 968 - goto err_pm_resume; 968 + goto err_reset_control; 969 969 } 970 970 971 971 pm_runtime_put_autosuspend(ospi->dev); 972 972 973 973 return 0; 974 974 975 + err_reset_control: 976 + reset_control_release(ospi->rstc); 975 977 err_pm_resume: 976 978 pm_runtime_put_sync_suspend(ospi->dev); 977 979 978 980 err_pm_enable: 979 981 pm_runtime_force_suspend(ospi->dev); 980 982 mutex_destroy(&ospi->lock); 983 + err_dma_free: 981 984 if (ospi->dma_chtx) 982 985 dma_release_channel(ospi->dma_chtx); 983 986 if (ospi->dma_chrx) ··· 992 989 static void stm32_ospi_remove(struct platform_device *pdev) 993 990 { 994 991 struct stm32_ospi *ospi = platform_get_drvdata(pdev); 995 - int ret; 996 992 997 - ret = pm_runtime_resume_and_get(ospi->dev); 998 - if (ret < 0) 999 - return; 993 + pm_runtime_resume_and_get(ospi->dev); 1000 994 1001 995 spi_unregister_controller(ospi->ctrl); 1002 996 /* Disable ospi */
+27 -5
drivers/thermal/thermal_core.c
··· 41 41 42 42 static bool thermal_pm_suspended; 43 43 44 + static struct workqueue_struct *thermal_wq __ro_after_init; 45 + 44 46 /* 45 47 * Governor section: set of functions to handle thermal governors 46 48 * ··· 315 313 if (delay > HZ) 316 314 delay = round_jiffies_relative(delay); 317 315 318 - mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, delay); 316 + mod_delayed_work(thermal_wq, &tz->poll_queue, delay); 319 317 } 320 318 321 319 static void thermal_zone_recheck(struct thermal_zone_device *tz, int error) ··· 1642 1640 device_del(&tz->device); 1643 1641 release_device: 1644 1642 put_device(&tz->device); 1643 + wait_for_completion(&tz->removal); 1645 1644 remove_id: 1646 1645 ida_free(&thermal_tz_ida, id); 1647 1646 free_tzp: ··· 1788 1785 1789 1786 guard(thermal_zone)(tz); 1790 1787 1788 + /* If the thermal zone is going away, there's nothing to do. */ 1789 + if (tz->state & TZ_STATE_FLAG_EXIT) 1790 + return; 1791 1792 tz->state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING); 1792 1793 1793 1794 thermal_debug_tz_resume(tz); ··· 1818 1811 } 1819 1812 1820 1813 tz->state |= TZ_STATE_FLAG_SUSPENDED; 1814 + 1815 + /* Prevent new work from getting to the workqueue subsequently. */ 1816 + cancel_delayed_work(&tz->poll_queue); 1821 1817 } 1822 1818 1823 1819 static void thermal_pm_notify_prepare(void) ··· 1839 1829 { 1840 1830 guard(thermal_zone)(tz); 1841 1831 1842 - cancel_delayed_work(&tz->poll_queue); 1843 - 1844 1832 reinit_completion(&tz->resume); 1845 1833 tz->state |= TZ_STATE_FLAG_RESUMING; 1846 1834 ··· 1848 1840 */ 1849 1841 INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_resume); 1850 1842 /* Queue up the work without a delay. */ 1851 - mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, 0); 1843 + mod_delayed_work(thermal_wq, &tz->poll_queue, 0); 1852 1844 } 1853 1845 1854 1846 static void thermal_pm_notify_complete(void) ··· 1871 1863 case PM_RESTORE_PREPARE: 1872 1864 case PM_SUSPEND_PREPARE: 1873 1865 thermal_pm_notify_prepare(); 1866 + /* 1867 + * Allow any leftover thermal work items already on the 1868 + * workqueue to complete so they don't get in the way later. 1869 + */ 1870 + flush_workqueue(thermal_wq); 1874 1871 break; 1875 1872 case PM_POST_HIBERNATION: 1876 1873 case PM_POST_RESTORE: ··· 1908 1895 if (result) 1909 1896 goto error; 1910 1897 1898 + thermal_wq = alloc_workqueue("thermal_events", 1899 + WQ_FREEZABLE | WQ_POWER_EFFICIENT | WQ_PERCPU, 0); 1900 + if (!thermal_wq) { 1901 + result = -ENOMEM; 1902 + goto unregister_netlink; 1903 + } 1904 + 1911 1905 result = thermal_register_governors(); 1912 1906 if (result) 1913 - goto unregister_netlink; 1907 + goto destroy_workqueue; 1914 1908 1915 1909 thermal_class = kzalloc_obj(*thermal_class); 1916 1910 if (!thermal_class) { ··· 1944 1924 1945 1925 unregister_governors: 1946 1926 thermal_unregister_governors(); 1927 + destroy_workqueue: 1928 + destroy_workqueue(thermal_wq); 1947 1929 unregister_netlink: 1948 1930 thermal_netlink_exit(); 1949 1931 error:
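Switching from the shared system_freezable_power_efficient_wq to a private queue is what makes the flush_workqueue() call in the PM notifier legal: the system workqueues host unrelated work and must not be flushed, while a driver-owned queue can be flushed and destroyed freely. A reduced sketch of the setup (WQ_PERCPU omitted, as its availability depends on the kernel version):

    #include <linux/workqueue.h>

    static struct workqueue_struct *my_wq;

    static int my_wq_init(void)
    {
            /* private queue: flush_workqueue()/destroy_workqueue() are safe */
            my_wq = alloc_workqueue("my_events",
                                    WQ_FREEZABLE | WQ_POWER_EFFICIENT, 0);
            return my_wq ? 0 : -ENOMEM;
    }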
+1 -1
drivers/thunderbolt/nhi.c
··· 1020 1020 * If power rails are sustainable for wakeup from S4 this 1021 1021 * property is set by the BIOS. 1022 1022 */ 1023 - if (device_property_read_u8(&pdev->dev, "WAKE_SUPPORTED", &val)) 1023 + if (!device_property_read_u8(&pdev->dev, "WAKE_SUPPORTED", &val)) 1024 1024 return !!val; 1025 1025 1026 1026 return true;
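A one-character fix with inverted semantics behind it: device_property_read_u8() returns 0 on success, so the old condition honored val only when the property had never been read. The corrected idiom:

    #include <linux/property.h>

    static bool wake_supported(struct device *dev)
    {
            u8 val;

            /* a zero return means the property exists and was read */
            if (!device_property_read_u8(dev, "WAKE_SUPPORTED", &val))
                    return !!val;
            return true;    /* property absent: default to supported */
    }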
+18
drivers/tty/vt/vt.c
··· 1909 1909 dest = ((u16 *)vc->vc_origin) + r * vc->vc_cols; 1910 1910 memcpy(dest, src, 2 * cols); 1911 1911 } 1912 + /* 1913 + * If the console was resized while in the alternate screen, 1914 + * resize the saved unicode buffer to the current dimensions. 1915 + * On allocation failure new_uniscr is NULL, causing the old 1916 + * buffer to be freed and vc_uni_lines to be lazily rebuilt 1917 + * via vc_uniscr_check() when next needed. 1918 + */ 1919 + if (vc->vc_saved_uni_lines && 1920 + (vc->vc_saved_rows != vc->vc_rows || 1921 + vc->vc_saved_cols != vc->vc_cols)) { 1922 + u32 **new_uniscr = vc_uniscr_alloc(vc->vc_cols, vc->vc_rows); 1923 + 1924 + if (new_uniscr) 1925 + vc_uniscr_copy_area(new_uniscr, vc->vc_cols, vc->vc_rows, 1926 + vc->vc_saved_uni_lines, cols, 0, rows); 1927 + vc_uniscr_free(vc->vc_saved_uni_lines); 1928 + vc->vc_saved_uni_lines = new_uniscr; 1929 + } 1912 1930 vc_uniscr_set(vc, vc->vc_saved_uni_lines); 1913 1931 vc->vc_saved_uni_lines = NULL; 1914 1932 restore_cur(vc);
+4
drivers/usb/cdns3/cdns3-gadget.c
··· 2589 2589 struct cdns3_request *priv_req; 2590 2590 int ret = 0; 2591 2591 2592 + if (!ep->desc) 2593 + return -ESHUTDOWN; 2594 + 2592 2595 request->actual = 0; 2593 2596 request->status = -EINPROGRESS; 2594 2597 priv_req = to_cdns3_request(request); ··· 3431 3428 ret = cdns3_gadget_start(cdns); 3432 3429 if (ret) { 3433 3430 pm_runtime_put_sync(cdns->dev); 3431 + cdns_drd_gadget_off(cdns); 3434 3432 return ret; 3435 3433 } 3436 3434
+9
drivers/usb/class/cdc-acm.c
··· 1225 1225 if (!data_interface || !control_interface) 1226 1226 return -ENODEV; 1227 1227 goto skip_normal_probe; 1228 + } else if (quirks == NO_UNION_12) { 1229 + data_interface = usb_ifnum_to_if(usb_dev, 2); 1230 + control_interface = usb_ifnum_to_if(usb_dev, 1); 1231 + if (!data_interface || !control_interface) 1232 + return -ENODEV; 1233 + goto skip_normal_probe; 1228 1234 } 1229 1235 1230 1236 /* normal probing*/ ··· 1753 1747 }, 1754 1748 { USB_DEVICE(0x045b, 0x024D), /* Renesas R-Car E3 USB Download mode */ 1755 1749 .driver_info = DISABLE_ECHO, /* Don't echo banner */ 1750 + }, 1751 + { USB_DEVICE(0x04b8, 0x0d12), /* EPSON HMD Com&Sens */ 1752 + .driver_info = NO_UNION_12, /* union descriptor is garbage */ 1756 1753 }, 1757 1754 { USB_DEVICE(0x0e8d, 0x0003), /* FIREFLY, MediaTek Inc; andrey.arapov@gmail.com */ 1758 1755 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+1
drivers/usb/class/cdc-acm.h
··· 114 114 #define SEND_ZERO_PACKET BIT(6) 115 115 #define DISABLE_ECHO BIT(7) 116 116 #define MISSING_CAP_BRK BIT(8) 117 + #define NO_UNION_12 BIT(9)
+3
drivers/usb/class/usbtmc.c
··· 254 254 list_del(&file_data->file_elem); 255 255 256 256 spin_unlock_irq(&file_data->data->dev_lock); 257 + 258 + /* flush anchored URBs */ 259 + usbtmc_draw_down(file_data); 257 260 mutex_unlock(&file_data->data->io_mutex); 258 261 259 262 kref_put(&file_data->data->kref, usbtmc_delete);
+2 -3
drivers/usb/common/ulpi.c
··· 331 331 ulpi->ops = ops; 332 332 333 333 ret = ulpi_register(dev, ulpi); 334 - if (ret) { 335 - kfree(ulpi); 334 + if (ret) 336 335 return ERR_PTR(ret); 337 - } 336 + 338 337 339 338 return ulpi; 340 339 }
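The dropped kfree() was a double free, on the assumption that ulpi_register() already releases the allocation on its own failure paths through the device refcount. The general rule: once device_initialize() has run, the refcount owns the memory, error paths use put_device(), and the caller must not free again. A hypothetical sketch:

    #include <linux/device.h>

    struct widget {
            struct device dev;      /* dev.release kfree()s the widget */
    };

    static int widget_register(struct widget *w)
    {
            int ret;

            device_initialize(&w->dev);
            ret = device_add(&w->dev);
            if (ret)
                    put_device(&w->dev);    /* frees via dev.release */
            return ret;                     /* caller never kfree()s */
    }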
+14 -9
drivers/usb/core/driver.c
··· 1415 1415 int status = 0; 1416 1416 int i = 0, n = 0; 1417 1417 struct usb_interface *intf; 1418 + bool offload_active = false; 1418 1419 1419 1420 if (udev->state == USB_STATE_NOTATTACHED || 1420 1421 udev->state == USB_STATE_SUSPENDED) 1421 1422 goto done; 1422 1423 1424 + usb_offload_set_pm_locked(udev, true); 1423 1425 if (msg.event == PM_EVENT_SUSPEND && usb_offload_check(udev)) { 1424 1426 dev_dbg(&udev->dev, "device offloaded, skip suspend.\n"); 1425 - udev->offload_at_suspend = 1; 1427 + offload_active = true; 1426 1428 } 1427 1429 1428 1430 /* Suspend all the interfaces and then udev itself */ ··· 1438 1436 * interrupt urbs, allowing interrupt events to be 1439 1437 * handled during system suspend. 1440 1438 */ 1441 - if (udev->offload_at_suspend && 1442 - intf->needs_remote_wakeup) { 1439 + if (offload_active && intf->needs_remote_wakeup) { 1443 1440 dev_dbg(&intf->dev, 1444 1441 "device offloaded, skip suspend.\n"); 1445 1442 continue; ··· 1453 1452 } 1454 1453 } 1455 1454 if (status == 0) { 1456 - if (!udev->offload_at_suspend) 1455 + if (!offload_active) 1457 1456 status = usb_suspend_device(udev, msg); 1458 1457 1459 1458 /* ··· 1499 1498 */ 1500 1499 } else { 1501 1500 udev->can_submit = 0; 1502 - if (!udev->offload_at_suspend) { 1501 + if (!offload_active) { 1503 1502 for (i = 0; i < 16; ++i) { 1504 1503 usb_hcd_flush_endpoint(udev, udev->ep_out[i]); 1505 1504 usb_hcd_flush_endpoint(udev, udev->ep_in[i]); ··· 1508 1507 } 1509 1508 1510 1509 done: 1510 + if (status != 0) 1511 + usb_offload_set_pm_locked(udev, false); 1511 1512 dev_vdbg(&udev->dev, "%s: status %d\n", __func__, status); 1512 1513 return status; 1513 1514 } ··· 1539 1536 int status = 0; 1540 1537 int i; 1541 1538 struct usb_interface *intf; 1539 + bool offload_active = false; 1542 1540 1543 1541 if (udev->state == USB_STATE_NOTATTACHED) { 1544 1542 status = -ENODEV; 1545 1543 goto done; 1546 1544 } 1547 1545 udev->can_submit = 1; 1546 + if (msg.event == PM_EVENT_RESUME) 1547 + offload_active = usb_offload_check(udev); 1548 1548 1549 1549 /* Resume the device */ 1550 1550 if (udev->state == USB_STATE_SUSPENDED || udev->reset_resume) { 1551 - if (!udev->offload_at_suspend) 1551 + if (!offload_active) 1552 1552 status = usb_resume_device(udev, msg); 1553 1553 else 1554 1554 dev_dbg(&udev->dev, ··· 1568 1562 * pending interrupt urbs, allowing interrupt events 1569 1563 * to be handled during system suspend. 1570 1564 */ 1571 - if (udev->offload_at_suspend && 1572 - intf->needs_remote_wakeup) { 1565 + if (offload_active && intf->needs_remote_wakeup) { 1573 1566 dev_dbg(&intf->dev, 1574 1567 "device offloaded, skip resume.\n"); 1575 1568 continue; ··· 1577 1572 udev->reset_resume); 1578 1573 } 1579 1574 } 1580 - udev->offload_at_suspend = 0; 1581 1575 usb_mark_last_busy(udev); 1582 1576 1583 1577 done: 1584 1578 dev_vdbg(&udev->dev, "%s: status %d\n", __func__, status); 1579 + usb_offload_set_pm_locked(udev, false); 1585 1580 if (!status) 1586 1581 udev->reset_resume = 0; 1587 1582 return status;
+1 -1
drivers/usb/core/hcd.c
··· 2403 2403 if (hcd->rh_registered) { 2404 2404 pm_wakeup_event(&hcd->self.root_hub->dev, 0); 2405 2405 set_bit(HCD_FLAG_WAKEUP_PENDING, &hcd->flags); 2406 - queue_work(pm_wq, &hcd->wakeup_work); 2406 + queue_work(system_freezable_wq, &hcd->wakeup_work); 2407 2407 } 2408 2408 spin_unlock_irqrestore (&hcd_root_hub_lock, flags); 2409 2409 }
+59 -43
drivers/usb/core/offload.c
··· 25 25 */ 26 26 int usb_offload_get(struct usb_device *udev) 27 27 { 28 - int ret; 28 + int ret = 0; 29 29 30 - usb_lock_device(udev); 31 - if (udev->state == USB_STATE_NOTATTACHED) { 32 - usb_unlock_device(udev); 30 + if (!usb_get_dev(udev)) 33 31 return -ENODEV; 32 + 33 + if (pm_runtime_get_if_active(&udev->dev) != 1) { 34 + ret = -EBUSY; 35 + goto err_rpm; 34 36 } 35 37 36 - if (udev->state == USB_STATE_SUSPENDED || 37 - udev->offload_at_suspend) { 38 - usb_unlock_device(udev); 39 - return -EBUSY; 40 - } 38 + spin_lock(&udev->offload_lock); 41 39 42 - /* 43 - * offload_usage could only be modified when the device is active, since 44 - * it will alter the suspend flow of the device. 45 - */ 46 - ret = usb_autoresume_device(udev); 47 - if (ret < 0) { 48 - usb_unlock_device(udev); 49 - return ret; 40 + if (udev->offload_pm_locked) { 41 + ret = -EAGAIN; 42 + goto err; 50 43 } 51 44 52 45 udev->offload_usage++; 53 - usb_autosuspend_device(udev); 54 - usb_unlock_device(udev); 46 + 47 + err: 48 + spin_unlock(&udev->offload_lock); 49 + pm_runtime_put_autosuspend(&udev->dev); 50 + err_rpm: 51 + usb_put_dev(udev); 55 52 56 53 return ret; 57 54 } ··· 66 69 */ 67 70 int usb_offload_put(struct usb_device *udev) 68 71 { 69 - int ret; 72 + int ret = 0; 70 73 71 - usb_lock_device(udev); 72 - if (udev->state == USB_STATE_NOTATTACHED) { 73 - usb_unlock_device(udev); 74 + if (!usb_get_dev(udev)) 74 75 return -ENODEV; 76 + 77 + if (pm_runtime_get_if_active(&udev->dev) != 1) { 78 + ret = -EBUSY; 79 + goto err_rpm; 75 80 } 76 81 77 - if (udev->state == USB_STATE_SUSPENDED || 78 - udev->offload_at_suspend) { 79 - usb_unlock_device(udev); 80 - return -EBUSY; 81 - } 82 + spin_lock(&udev->offload_lock); 82 83 83 - /* 84 - * offload_usage could only be modified when the device is active, since 85 - * it will alter the suspend flow of the device. 86 - */ 87 - ret = usb_autoresume_device(udev); 88 - if (ret < 0) { 89 - usb_unlock_device(udev); 90 - return ret; 84 + if (udev->offload_pm_locked) { 85 + ret = -EAGAIN; 86 + goto err; 91 87 } 92 88 93 89 /* Drop the count when it wasn't 0, ignore the operation otherwise. */ 94 90 if (udev->offload_usage) 95 91 udev->offload_usage--; 96 - usb_autosuspend_device(udev); 97 - usb_unlock_device(udev); 92 + 93 + err: 94 + spin_unlock(&udev->offload_lock); 95 + pm_runtime_put_autosuspend(&udev->dev); 96 + err_rpm: 97 + usb_put_dev(udev); 98 98 99 99 return ret; 100 100 } ··· 106 112 * management. 107 113 * 108 114 * The caller must hold @udev's device lock. In addition, the caller should 109 - * ensure downstream usb devices are all either suspended or marked as 110 - * "offload_at_suspend" to ensure the correctness of the return value. 115 + * ensure the device itself and the downstream usb devices are all marked as 116 + * "offload_pm_locked" to ensure the correctness of the return value. 111 117 * 112 118 * Returns true on any offload activity, false otherwise. 
113 119 */ 114 120 bool usb_offload_check(struct usb_device *udev) __must_hold(&udev->dev->mutex) 115 121 { 116 122 struct usb_device *child; 117 - bool active; 123 + bool active = false; 118 124 int port1; 125 + 126 + if (udev->offload_usage) 127 + return true; 119 128 120 129 usb_hub_for_each_child(udev, port1, child) { 121 130 usb_lock_device(child); 122 131 active = usb_offload_check(child); 123 132 usb_unlock_device(child); 133 + 124 134 if (active) 125 - return true; 135 + break; 126 136 } 127 137 128 - return !!udev->offload_usage; 138 + return active; 129 139 } 130 140 EXPORT_SYMBOL_GPL(usb_offload_check); 141 + 142 + /** 143 + * usb_offload_set_pm_locked - set the PM lock state of a USB device 144 + * @udev: the USB device to modify 145 + * @locked: the new lock state 146 + * 147 + * Setting @locked to true prevents offload_usage from being modified. This 148 + * ensures that offload activities cannot be started or stopped during critical 149 + * power management transitions, maintaining a stable state for the duration 150 + * of the transition. 151 + */ 152 + void usb_offload_set_pm_locked(struct usb_device *udev, bool locked) 153 + { 154 + spin_lock(&udev->offload_lock); 155 + udev->offload_pm_locked = locked; 156 + spin_unlock(&udev->offload_lock); 157 + } 158 + EXPORT_SYMBOL_GPL(usb_offload_set_pm_locked);
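The rework narrows the locking: instead of taking the whole device lock, usb_offload_get()/usb_offload_put() only touch offload_usage while the device is demonstrably active. pm_runtime_get_if_active() returns 1 only when it took a usage count on an RPM_ACTIVE device, and offload_pm_locked (under the new spinlock) freezes the counter across system-wide transitions. The gating pattern in isolation, as a sketch:

    #include <linux/pm_runtime.h>

    /* Modify state that must stay stable across suspend (sketch). */
    static int touch_while_active(struct device *dev)
    {
            if (pm_runtime_get_if_active(dev) != 1)
                    return -EBUSY;          /* suspended or transitioning */

            /* ... update the PM-sensitive state here ... */

            pm_runtime_put_autosuspend(dev);
            return 0;
    }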
+11 -1
drivers/usb/core/phy.c
··· 114 114 struct usb_phy_roothub *usb_phy_roothub_alloc_usb3_phy(struct device *dev) 115 115 { 116 116 struct usb_phy_roothub *phy_roothub; 117 - int num_phys; 117 + int num_phys, usb2_phy_index; 118 118 119 119 if (!IS_ENABLED(CONFIG_GENERIC_PHY)) 120 120 return NULL; ··· 122 122 num_phys = of_count_phandle_with_args(dev->of_node, "phys", 123 123 "#phy-cells"); 124 124 if (num_phys <= 0) 125 + return NULL; 126 + 127 + /* 128 + * If 'usb2-phy' is not present, usb_phy_roothub_alloc() added 129 + * all PHYs to the primary HCD's phy_roothub already, so skip 130 + * adding 'usb3-phy' here to avoid double use of that. 131 + */ 132 + usb2_phy_index = of_property_match_string(dev->of_node, "phy-names", 133 + "usb2-phy"); 134 + if (usb2_phy_index < 0) 125 135 return NULL; 126 136 127 137 phy_roothub = devm_kzalloc(dev, sizeof(*phy_roothub), GFP_KERNEL);
+3
drivers/usb/core/quirks.c
··· 401 401 402 402 /* Silicon Motion Flash Drive */ 403 403 { USB_DEVICE(0x090c, 0x1000), .driver_info = USB_QUIRK_DELAY_INIT }, 404 + { USB_DEVICE(0x090c, 0x2000), .driver_info = USB_QUIRK_DELAY_INIT }, 404 405 405 406 /* Sound Devices USBPre2 */ 406 407 { USB_DEVICE(0x0926, 0x0202), .driver_info = ··· 493 492 /* Razer - Razer Blade Keyboard */ 494 493 { USB_DEVICE(0x1532, 0x0116), .driver_info = 495 494 USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, 495 + /* Razer - Razer Kiyo Pro Webcam */ 496 + { USB_DEVICE(0x1532, 0x0e05), .driver_info = USB_QUIRK_NO_LPM }, 496 497 497 498 /* Lenovo ThinkPad OneLink+ Dock twin hub controllers (VIA Labs VL812) */ 498 499 { USB_DEVICE(0x17ef, 0x1018), .driver_info = USB_QUIRK_RESET_RESUME },
+1
drivers/usb/core/usb.c
··· 671 671 set_dev_node(&dev->dev, dev_to_node(bus->sysdev)); 672 672 dev->state = USB_STATE_ATTACHED; 673 673 dev->lpm_disable_count = 1; 674 + spin_lock_init(&dev->offload_lock); 674 675 dev->offload_usage = 0; 675 676 atomic_set(&dev->urbnum, 0); 676 677
+2
drivers/usb/dwc2/gadget.c
··· 4607 4607 /* Exit clock gating when driver is stopped. */ 4608 4608 if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE && 4609 4609 hsotg->bus_suspended && !hsotg->params.no_clock_gating) { 4610 + spin_lock_irqsave(&hsotg->lock, flags); 4610 4611 dwc2_gadget_exit_clock_gating(hsotg, 0); 4612 + spin_unlock_irqrestore(&hsotg->lock, flags); 4611 4613 } 4612 4614 4613 4615 /* all endpoints should be shutdown */
+3 -2
drivers/usb/dwc3/dwc3-google.c
··· 385 385 "google,usb-cfg-csr", 386 386 ARRAY_SIZE(args), args); 387 387 if (IS_ERR(google->usb_cfg_regmap)) { 388 - return dev_err_probe(dev, PTR_ERR(google->usb_cfg_regmap), 389 - "invalid usb cfg csr\n"); 388 + ret = dev_err_probe(dev, PTR_ERR(google->usb_cfg_regmap), 389 + "invalid usb cfg csr\n"); 390 + goto err_deinit_pdom; 390 391 } 391 392 392 393 google->host_cfg_offset = args[0];
+1 -1
drivers/usb/dwc3/dwc3-imx8mp.c
··· 263 263 dwc3 = platform_get_drvdata(dwc3_imx->dwc3_pdev); 264 264 if (!dwc3) { 265 265 err = dev_err_probe(dev, -EPROBE_DEFER, "failed to get dwc3 platform data\n"); 266 - goto depopulate; 266 + goto put_dwc3; 267 267 } 268 268 269 269 dwc3->glue_ops = &dwc3_imx_glue_ops;
+23 -12
drivers/usb/gadget/function/f_ecm.c
··· 681 681 struct usb_ep *ep; 682 682 683 683 struct f_ecm_opts *ecm_opts; 684 + struct net_device *net __free(detach_gadget) = NULL; 684 685 struct usb_request *request __free(free_usb_request) = NULL; 685 686 686 687 if (!can_support_ecm(cdev->gadget)) ··· 689 688 690 689 ecm_opts = container_of(f->fi, struct f_ecm_opts, func_inst); 691 690 692 - mutex_lock(&ecm_opts->lock); 691 + scoped_guard(mutex, &ecm_opts->lock) 692 + if (ecm_opts->bind_count == 0 && !ecm_opts->bound) { 693 + if (!device_is_registered(&ecm_opts->net->dev)) { 694 + gether_set_gadget(ecm_opts->net, cdev->gadget); 695 + status = gether_register_netdev(ecm_opts->net); 696 + } else 697 + status = gether_attach_gadget(ecm_opts->net, cdev->gadget); 693 698 694 - gether_set_gadget(ecm_opts->net, cdev->gadget); 695 - 696 - if (!ecm_opts->bound) { 697 - status = gether_register_netdev(ecm_opts->net); 698 - ecm_opts->bound = true; 699 - } 700 - 701 - mutex_unlock(&ecm_opts->lock); 702 - if (status) 703 - return status; 699 + if (status) 700 + return status; 701 + net = ecm_opts->net; 702 + } 704 703 705 704 ecm_string_defs[1].s = ecm->ethaddr; 706 705 ··· 791 790 792 791 ecm->notify_req = no_free_ptr(request); 793 792 793 + ecm_opts->bind_count++; 794 + retain_and_null_ptr(net); 795 + 794 796 DBG(cdev, "CDC Ethernet: IN/%s OUT/%s NOTIFY/%s\n", 795 797 ecm->port.in_ep->name, ecm->port.out_ep->name, 796 798 ecm->notify->name); ··· 840 836 struct f_ecm_opts *opts; 841 837 842 838 opts = container_of(f, struct f_ecm_opts, func_inst); 843 - if (opts->bound) 839 + if (device_is_registered(&opts->net->dev)) 844 840 gether_cleanup(netdev_priv(opts->net)); 845 841 else 846 842 free_netdev(opts->net); ··· 910 906 static void ecm_unbind(struct usb_configuration *c, struct usb_function *f) 911 907 { 912 908 struct f_ecm *ecm = func_to_ecm(f); 909 + struct f_ecm_opts *ecm_opts; 913 910 914 911 DBG(c->cdev, "ecm unbind\n"); 912 + 913 + ecm_opts = container_of(f->fi, struct f_ecm_opts, func_inst); 915 914 916 915 usb_free_all_descriptors(f); 917 916 ··· 925 918 926 919 kfree(ecm->notify_req->buf); 927 920 usb_ep_free_request(ecm->notify, ecm->notify_req); 921 + 922 + ecm_opts->bind_count--; 923 + if (ecm_opts->bind_count == 0 && !ecm_opts->bound) 924 + gether_detach_gadget(ecm_opts->net); 928 925 } 929 926 930 927 static struct usb_function *ecm_alloc(struct usb_function_instance *fi)
+31 -28
drivers/usb/gadget/function/f_eem.c
··· 7 7 * Copyright (C) 2009 EF Johnson Technologies 8 8 */ 9 9 10 + #include <linux/cleanup.h> 10 11 #include <linux/kernel.h> 11 12 #include <linux/module.h> 12 13 #include <linux/device.h> ··· 252 251 struct usb_ep *ep; 253 252 254 253 struct f_eem_opts *eem_opts; 254 + struct net_device *net __free(detach_gadget) = NULL; 255 255 256 256 eem_opts = container_of(f->fi, struct f_eem_opts, func_inst); 257 - /* 258 - * in drivers/usb/gadget/configfs.c:configfs_composite_bind() 259 - * configurations are bound in sequence with list_for_each_entry, 260 - * in each configuration its functions are bound in sequence 261 - * with list_for_each_entry, so we assume no race condition 262 - * with regard to eem_opts->bound access 263 - */ 264 - if (!eem_opts->bound) { 265 - mutex_lock(&eem_opts->lock); 266 - gether_set_gadget(eem_opts->net, cdev->gadget); 267 - status = gether_register_netdev(eem_opts->net); 268 - mutex_unlock(&eem_opts->lock); 269 - if (status) 270 - return status; 271 - eem_opts->bound = true; 272 - } 257 + 258 + scoped_guard(mutex, &eem_opts->lock) 259 + if (eem_opts->bind_count == 0 && !eem_opts->bound) { 260 + if (!device_is_registered(&eem_opts->net->dev)) { 261 + gether_set_gadget(eem_opts->net, cdev->gadget); 262 + status = gether_register_netdev(eem_opts->net); 263 + } else 264 + status = gether_attach_gadget(eem_opts->net, cdev->gadget); 265 + 266 + if (status) 267 + return status; 268 + net = eem_opts->net; 269 + } 273 270 274 271 us = usb_gstrings_attach(cdev, eem_strings, 275 272 ARRAY_SIZE(eem_string_defs)); ··· 278 279 /* allocate instance-specific interface IDs */ 279 280 status = usb_interface_id(c, f); 280 281 if (status < 0) 281 - goto fail; 282 + return status; 282 283 eem->ctrl_id = status; 283 284 eem_intf.bInterfaceNumber = status; 284 - 285 - status = -ENODEV; 286 285 287 286 /* allocate instance-specific endpoints */ 288 287 ep = usb_ep_autoconfig(cdev->gadget, &eem_fs_in_desc); 289 288 if (!ep) 290 - goto fail; 289 + return -ENODEV; 291 290 eem->port.in_ep = ep; 292 291 293 292 ep = usb_ep_autoconfig(cdev->gadget, &eem_fs_out_desc); 294 293 if (!ep) 295 - goto fail; 294 + return -ENODEV; 296 295 eem->port.out_ep = ep; 297 296 298 297 /* support all relevant hardware speeds... 
we expect that when ··· 306 309 status = usb_assign_descriptors(f, eem_fs_function, eem_hs_function, 307 310 eem_ss_function, eem_ss_function); 308 311 if (status) 309 - goto fail; 312 + return status; 313 + 314 + eem_opts->bind_count++; 315 + retain_and_null_ptr(net); 310 316 311 317 DBG(cdev, "CDC Ethernet (EEM): IN/%s OUT/%s\n", 312 318 eem->port.in_ep->name, eem->port.out_ep->name); 313 319 return 0; 314 - 315 - fail: 316 - ERROR(cdev, "%s: can't bind, err %d\n", f->name, status); 317 - 318 - return status; 319 320 } 320 321 321 322 static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req) ··· 592 597 struct f_eem_opts *opts; 593 598 594 599 opts = container_of(f, struct f_eem_opts, func_inst); 595 - if (opts->bound) 600 + if (device_is_registered(&opts->net->dev)) 596 601 gether_cleanup(netdev_priv(opts->net)); 597 602 else 598 603 free_netdev(opts->net); ··· 635 640 636 641 static void eem_unbind(struct usb_configuration *c, struct usb_function *f) 637 642 { 643 + struct f_eem_opts *opts; 644 + 638 645 DBG(c->cdev, "eem unbind\n"); 639 646 647 + opts = container_of(f->fi, struct f_eem_opts, func_inst); 648 + 640 649 usb_free_all_descriptors(f); 650 + 651 + opts->bind_count--; 652 + if (opts->bind_count == 0 && !opts->bound) 653 + gether_detach_gadget(opts->net); 641 654 } 642 655 643 656 static struct usb_function *eem_alloc(struct usb_function_instance *fi)
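The ECM/EEM/RNDIS/subset bind paths now share one scope-based cleanup shape: a local pointer tagged __free(detach_gadget) stays NULL until the netdev is attached, so every early error return detaches automatically, and retain_and_null_ptr() disarms the cleanup on success. A sketch of the shape, assuming a DEFINE_FREE(detach_gadget, ...) wrapper around gether_detach_gadget() in the u_ether headers and a stand-in struct f_opts:

    #include <linux/cleanup.h>

    /* assumed elsewhere:
     * DEFINE_FREE(detach_gadget, struct net_device *,
     *             if (_T) gether_detach_gadget(_T))
     */
    struct f_opts { struct net_device *net; };  /* stand-in */

    static int example_bind(struct f_opts *opts, struct usb_gadget *g)
    {
            struct net_device *net __free(detach_gadget) = NULL;
            int status;

            status = gether_attach_gadget(opts->net, g);
            if (status)
                    return status;          /* net is NULL: nothing to undo */
            net = opts->net;                /* armed: early returns detach */

            /* ... endpoint and descriptor setup that may fail ... */

            retain_and_null_ptr(net);       /* success: keep the attachment */
            return 0;
    }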
+10 -9
drivers/usb/gadget/function/f_hid.c
··· 1262 1262 if (status) 1263 1263 goto fail; 1264 1264 1265 - spin_lock_init(&hidg->write_spinlock); 1266 1265 hidg->write_pending = 1; 1267 1266 hidg->req = NULL; 1268 - spin_lock_init(&hidg->read_spinlock); 1269 - spin_lock_init(&hidg->get_report_spinlock); 1270 - init_waitqueue_head(&hidg->write_queue); 1271 - init_waitqueue_head(&hidg->read_queue); 1272 - init_waitqueue_head(&hidg->get_queue); 1273 - init_waitqueue_head(&hidg->get_id_queue); 1274 - INIT_LIST_HEAD(&hidg->completed_out_req); 1275 - INIT_LIST_HEAD(&hidg->report_list); 1276 1267 1277 1268 INIT_WORK(&hidg->work, get_report_workqueue_handler); 1278 1269 hidg->workqueue = alloc_workqueue("report_work", ··· 1598 1607 opts = container_of(fi, struct f_hid_opts, func_inst); 1599 1608 1600 1609 mutex_lock(&opts->lock); 1610 + 1611 + spin_lock_init(&hidg->write_spinlock); 1612 + spin_lock_init(&hidg->read_spinlock); 1613 + spin_lock_init(&hidg->get_report_spinlock); 1614 + init_waitqueue_head(&hidg->write_queue); 1615 + init_waitqueue_head(&hidg->read_queue); 1616 + init_waitqueue_head(&hidg->get_queue); 1617 + init_waitqueue_head(&hidg->get_id_queue); 1618 + INIT_LIST_HEAD(&hidg->completed_out_req); 1619 + INIT_LIST_HEAD(&hidg->report_list); 1601 1620 1602 1621 device_initialize(&hidg->dev); 1603 1622 hidg->dev.release = hidg_release;
+30 -19
drivers/usb/gadget/function/f_rndis.c
··· 11 11 12 12 /* #define VERBOSE_DEBUG */ 13 13 14 + #include <linux/cleanup.h> 14 15 #include <linux/slab.h> 15 16 #include <linux/kernel.h> 16 17 #include <linux/module.h> ··· 666 665 667 666 struct f_rndis_opts *rndis_opts; 668 667 struct usb_os_desc_table *os_desc_table __free(kfree) = NULL; 668 + struct net_device *net __free(detach_gadget) = NULL; 669 669 struct usb_request *request __free(free_usb_request) = NULL; 670 670 671 671 if (!can_support_rndis(c)) ··· 680 678 return -ENOMEM; 681 679 } 682 680 683 - rndis_iad_descriptor.bFunctionClass = rndis_opts->class; 684 - rndis_iad_descriptor.bFunctionSubClass = rndis_opts->subclass; 685 - rndis_iad_descriptor.bFunctionProtocol = rndis_opts->protocol; 681 + scoped_guard(mutex, &rndis_opts->lock) { 682 + rndis_iad_descriptor.bFunctionClass = rndis_opts->class; 683 + rndis_iad_descriptor.bFunctionSubClass = rndis_opts->subclass; 684 + rndis_iad_descriptor.bFunctionProtocol = rndis_opts->protocol; 686 685 687 - /* 688 - * in drivers/usb/gadget/configfs.c:configfs_composite_bind() 689 - * configurations are bound in sequence with list_for_each_entry, 690 - * in each configuration its functions are bound in sequence 691 - * with list_for_each_entry, so we assume no race condition 692 - * with regard to rndis_opts->bound access 693 - */ 694 - if (!rndis_opts->bound) { 695 - gether_set_gadget(rndis_opts->net, cdev->gadget); 696 - status = gether_register_netdev(rndis_opts->net); 697 - if (status) 698 - return status; 699 - rndis_opts->bound = true; 686 + if (rndis_opts->bind_count == 0 && !rndis_opts->borrowed_net) { 687 + if (!device_is_registered(&rndis_opts->net->dev)) { 688 + gether_set_gadget(rndis_opts->net, cdev->gadget); 689 + status = gether_register_netdev(rndis_opts->net); 690 + } else 691 + status = gether_attach_gadget(rndis_opts->net, cdev->gadget); 692 + 693 + if (status) 694 + return status; 695 + net = rndis_opts->net; 696 + } 700 697 } 701 698 702 699 us = usb_gstrings_attach(cdev, rndis_strings, ··· 794 793 } 795 794 rndis->notify_req = no_free_ptr(request); 796 795 796 + rndis_opts->bind_count++; 797 + retain_and_null_ptr(net); 798 + 797 799 /* NOTE: all that is done without knowing or caring about 798 800 * the network link ... which is unavailable to this code 799 801 * until we're activated via set_alt(). 
··· 813 809 struct f_rndis_opts *opts; 814 810 815 811 opts = container_of(f, struct f_rndis_opts, func_inst); 816 - if (opts->bound) 812 + if (device_is_registered(&opts->net->dev)) 817 813 gether_cleanup(netdev_priv(opts->net)); 818 814 else 819 815 free_netdev(opts->net); 820 - opts->borrowed_net = opts->bound = true; 816 + opts->borrowed_net = true; 821 817 opts->net = net; 822 818 } 823 819 EXPORT_SYMBOL_GPL(rndis_borrow_net); ··· 875 871 876 872 opts = container_of(f, struct f_rndis_opts, func_inst); 877 873 if (!opts->borrowed_net) { 878 - if (opts->bound) 874 + if (device_is_registered(&opts->net->dev)) 879 875 gether_cleanup(netdev_priv(opts->net)); 880 876 else 881 877 free_netdev(opts->net); ··· 944 940 static void rndis_unbind(struct usb_configuration *c, struct usb_function *f) 945 941 { 946 942 struct f_rndis *rndis = func_to_rndis(f); 943 + struct f_rndis_opts *rndis_opts; 944 + 945 + rndis_opts = container_of(f->fi, struct f_rndis_opts, func_inst); 947 946 948 947 kfree(f->os_desc_table); 949 948 f->os_desc_n = 0; ··· 954 947 955 948 kfree(rndis->notify_req->buf); 956 949 usb_ep_free_request(rndis->notify, rndis->notify_req); 950 + 951 + rndis_opts->bind_count--; 952 + if (rndis_opts->bind_count == 0 && !rndis_opts->borrowed_net) 953 + gether_detach_gadget(rndis_opts->net); 957 954 } 958 955 959 956 static struct usb_function *rndis_alloc(struct usb_function_instance *fi)
+35 -28
drivers/usb/gadget/function/f_subset.c
··· 6 6 * Copyright (C) 2008 Nokia Corporation 7 7 */ 8 8 9 + #include <linux/cleanup.h> 9 10 #include <linux/slab.h> 10 11 #include <linux/kernel.h> 11 12 #include <linux/module.h> ··· 299 298 struct usb_ep *ep; 300 299 301 300 struct f_gether_opts *gether_opts; 301 + struct net_device *net __free(detach_gadget) = NULL; 302 302 303 303 gether_opts = container_of(f->fi, struct f_gether_opts, func_inst); 304 304 305 - /* 306 - * in drivers/usb/gadget/configfs.c:configfs_composite_bind() 307 - * configurations are bound in sequence with list_for_each_entry, 308 - * in each configuration its functions are bound in sequence 309 - * with list_for_each_entry, so we assume no race condition 310 - * with regard to gether_opts->bound access 311 - */ 312 - if (!gether_opts->bound) { 313 - mutex_lock(&gether_opts->lock); 314 - gether_set_gadget(gether_opts->net, cdev->gadget); 315 - status = gether_register_netdev(gether_opts->net); 316 - mutex_unlock(&gether_opts->lock); 317 - if (status) 318 - return status; 319 - gether_opts->bound = true; 320 - } 305 + scoped_guard(mutex, &gether_opts->lock) 306 + if (gether_opts->bind_count == 0 && !gether_opts->bound) { 307 + if (!device_is_registered(&gether_opts->net->dev)) { 308 + gether_set_gadget(gether_opts->net, cdev->gadget); 309 + status = gether_register_netdev(gether_opts->net); 310 + } else 311 + status = gether_attach_gadget(gether_opts->net, cdev->gadget); 312 + 313 + if (status) 314 + return status; 315 + net = gether_opts->net; 316 + } 321 317 322 318 us = usb_gstrings_attach(cdev, geth_strings, 323 319 ARRAY_SIZE(geth_string_defs)); ··· 327 329 /* allocate instance-specific interface IDs */ 328 330 status = usb_interface_id(c, f); 329 331 if (status < 0) 330 - goto fail; 332 + return status; 331 333 subset_data_intf.bInterfaceNumber = status; 332 - 333 - status = -ENODEV; 334 334 335 335 /* allocate instance-specific endpoints */ 336 336 ep = usb_ep_autoconfig(cdev->gadget, &fs_subset_in_desc); 337 337 if (!ep) 338 - goto fail; 338 + return -ENODEV; 339 339 geth->port.in_ep = ep; 340 340 341 341 ep = usb_ep_autoconfig(cdev->gadget, &fs_subset_out_desc); 342 342 if (!ep) 343 - goto fail; 343 + return -ENODEV; 344 344 geth->port.out_ep = ep; 345 345 346 346 /* support all relevant hardware speeds... we expect that when ··· 356 360 status = usb_assign_descriptors(f, fs_eth_function, hs_eth_function, 357 361 ss_eth_function, ss_eth_function); 358 362 if (status) 359 - goto fail; 363 + return status; 360 364 361 365 /* NOTE: all that is done without knowing or caring about 362 366 * the network link ... which is unavailable to this code 363 367 * until we're activated via set_alt(). 
364 368 */ 365 369 370 + gether_opts->bind_count++; 371 + retain_and_null_ptr(net); 372 + 366 373 DBG(cdev, "CDC Subset: IN/%s OUT/%s\n", 367 374 geth->port.in_ep->name, geth->port.out_ep->name); 368 375 return 0; 369 - 370 - fail: 371 - ERROR(cdev, "%s: can't bind, err %d\n", f->name, status); 372 - 373 - return status; 374 376 } 375 377 376 378 static inline struct f_gether_opts *to_f_gether_opts(struct config_item *item) ··· 411 417 struct f_gether_opts *opts; 412 418 413 419 opts = container_of(f, struct f_gether_opts, func_inst); 414 - if (opts->bound) 420 + if (device_is_registered(&opts->net->dev)) 415 421 gether_cleanup(netdev_priv(opts->net)); 416 422 else 417 423 free_netdev(opts->net); ··· 443 449 static void geth_free(struct usb_function *f) 444 450 { 445 451 struct f_gether *eth; 452 + struct f_gether_opts *opts; 453 + 454 + opts = container_of(f->fi, struct f_gether_opts, func_inst); 446 455 447 456 eth = func_to_geth(f); 457 + scoped_guard(mutex, &opts->lock) 458 + opts->refcnt--; 448 459 kfree(eth); 449 460 } 450 461 451 462 static void geth_unbind(struct usb_configuration *c, struct usb_function *f) 452 463 { 464 + struct f_gether_opts *opts; 465 + 466 + opts = container_of(f->fi, struct f_gether_opts, func_inst); 467 + 453 468 geth_string_defs[0].id = 0; 454 469 usb_free_all_descriptors(f); 470 + 471 + opts->bind_count--; 472 + if (opts->bind_count == 0 && !opts->bound) 473 + gether_detach_gadget(opts->net); 455 474 } 456 475 457 476 static struct usb_function *geth_alloc(struct usb_function_instance *fi)
+37 -10
drivers/usb/gadget/function/f_uac1_legacy.c
··· 360 360 static void f_audio_complete(struct usb_ep *ep, struct usb_request *req) 361 361 { 362 362 struct f_audio *audio = req->context; 363 - int status = req->status; 364 - u32 data = 0; 365 363 struct usb_ep *out_ep = audio->out_ep; 366 364 367 - switch (status) { 368 - 369 - case 0: /* normal completion? */ 370 - if (ep == out_ep) 365 + switch (req->status) { 366 + case 0: 367 + if (ep == out_ep) { 371 368 f_audio_out_ep_complete(ep, req); 372 - else if (audio->set_con) { 373 - memcpy(&data, req->buf, req->length); 374 - audio->set_con->set(audio->set_con, audio->set_cmd, 375 - le16_to_cpu(data)); 369 + } else if (audio->set_con) { 370 + struct usb_audio_control *con = audio->set_con; 371 + u8 type = con->type; 372 + u32 data; 373 + bool valid_request = false; 374 + 375 + switch (type) { 376 + case UAC_FU_MUTE: { 377 + u8 value; 378 + 379 + if (req->actual == sizeof(value)) { 380 + memcpy(&value, req->buf, sizeof(value)); 381 + data = value; 382 + valid_request = true; 383 + } 384 + break; 385 + } 386 + case UAC_FU_VOLUME: { 387 + __le16 value; 388 + 389 + if (req->actual == sizeof(value)) { 390 + memcpy(&value, req->buf, sizeof(value)); 391 + data = le16_to_cpu(value); 392 + valid_request = true; 393 + } 394 + break; 395 + } 396 + } 397 + 398 + if (valid_request) 399 + con->set(con, audio->set_cmd, data); 400 + else 401 + usb_ep_set_halt(ep); 402 + 376 403 audio->set_con = NULL; 377 404 } 378 405 break;
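The rewritten completion handler validates req->actual against the exact payload size of the control being set (one byte for mute, a little-endian u16 for volume) before trusting the data, and stalls the endpoint otherwise. The parsing step as a standalone sketch:

    /* Extract a mute/volume payload; false if the length is wrong. */
    static bool parse_uac1_ctrl(const struct usb_request *req, u8 type,
                                u32 *out)
    {
            if (type == UAC_FU_MUTE && req->actual == sizeof(u8)) {
                    *out = *(const u8 *)req->buf;
                    return true;
            }
            if (type == UAC_FU_VOLUME && req->actual == sizeof(__le16)) {
                    __le16 v;

                    memcpy(&v, req->buf, sizeof(v));
                    *out = le16_to_cpu(v);
                    return true;
            }
            return false;
    }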
+36 -3
drivers/usb/gadget/function/f_uvc.c
··· 413 413 { 414 414 int ret; 415 415 416 + guard(mutex)(&uvc->lock); 417 + if (uvc->func_unbound) { 418 + dev_dbg(&uvc->vdev.dev, "skipping function deactivate (unbound)\n"); 419 + return; 420 + } 421 + 416 422 if ((ret = usb_function_deactivate(&uvc->func)) < 0) 417 423 uvcg_info(&uvc->func, "UVC disconnect failed with %d\n", ret); 418 424 } ··· 437 431 438 432 static DEVICE_ATTR_RO(function_name); 439 433 434 + static void uvc_vdev_release(struct video_device *vdev) 435 + { 436 + struct uvc_device *uvc = video_get_drvdata(vdev); 437 + 438 + /* Signal uvc_function_unbind() that the video device has been released */ 439 + if (uvc->vdev_release_done) 440 + complete(uvc->vdev_release_done); 441 + } 442 + 440 443 static int 441 444 uvc_register_video(struct uvc_device *uvc) 442 445 { ··· 458 443 uvc->vdev.v4l2_dev->dev = &cdev->gadget->dev; 459 444 uvc->vdev.fops = &uvc_v4l2_fops; 460 445 uvc->vdev.ioctl_ops = &uvc_v4l2_ioctl_ops; 461 - uvc->vdev.release = video_device_release_empty; 446 + uvc->vdev.release = uvc_vdev_release; 462 447 uvc->vdev.vfl_dir = VFL_DIR_TX; 463 448 uvc->vdev.lock = &uvc->video.mutex; 464 449 uvc->vdev.device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING; ··· 674 659 int ret = -EINVAL; 675 660 676 661 uvcg_info(f, "%s()\n", __func__); 662 + scoped_guard(mutex, &uvc->lock) 663 + uvc->func_unbound = false; 677 664 678 665 opts = fi_to_f_uvc_opts(f->fi); 679 666 /* Sanity check the streaming endpoint module parameters. */ ··· 1005 988 static void uvc_function_unbind(struct usb_configuration *c, 1006 989 struct usb_function *f) 1007 990 { 991 + DECLARE_COMPLETION_ONSTACK(vdev_release_done); 1008 992 struct usb_composite_dev *cdev = c->cdev; 1009 993 struct uvc_device *uvc = to_uvc(f); 1010 994 struct uvc_video *video = &uvc->video; 1011 995 long wait_ret = 1; 996 + bool connected; 1012 997 1013 998 uvcg_info(f, "%s()\n", __func__); 999 + scoped_guard(mutex, &uvc->lock) { 1000 + uvc->func_unbound = true; 1001 + uvc->vdev_release_done = &vdev_release_done; 1002 + connected = uvc->func_connected; 1003 + } 1014 1004 1015 1005 kthread_cancel_work_sync(&video->hw_submit); 1016 1006 ··· 1030 1006 * though the video device removal uevent. Allow some time for the 1031 1007 * application to close out before things get deleted. 
1032 1008 */ 1033 - if (uvc->func_connected) { 1009 + if (connected) { 1034 1010 uvcg_dbg(f, "waiting for clean disconnect\n"); 1035 1011 wait_ret = wait_event_interruptible_timeout(uvc->func_connected_queue, 1036 1012 uvc->func_connected == false, msecs_to_jiffies(500)); ··· 1041 1017 video_unregister_device(&uvc->vdev); 1042 1018 v4l2_device_unregister(&uvc->v4l2_dev); 1043 1019 1044 - if (uvc->func_connected) { 1020 + scoped_guard(mutex, &uvc->lock) 1021 + connected = uvc->func_connected; 1022 + 1023 + if (connected) { 1045 1024 /* 1046 1025 * Wait for the release to occur to ensure there are no longer any 1047 1026 * pending operations that may cause panics when resources are cleaned ··· 1055 1028 uvc->func_connected == false, msecs_to_jiffies(1000)); 1056 1029 uvcg_dbg(f, "done waiting for release with ret: %ld\n", wait_ret); 1057 1030 } 1031 + 1032 + /* Wait for the video device to be released */ 1033 + wait_for_completion(&vdev_release_done); 1034 + uvc->vdev_release_done = NULL; 1058 1035 1059 1036 usb_ep_free_request(cdev->gadget->ep0, uvc->control_req); 1060 1037 kfree(uvc->control_buf); ··· 1078 1047 return ERR_PTR(-ENOMEM); 1079 1048 1080 1049 mutex_init(&uvc->video.mutex); 1050 + mutex_init(&uvc->lock); 1051 + uvc->func_unbound = true; 1081 1052 uvc->state = UVC_STATE_DISCONNECTED; 1082 1053 init_waitqueue_head(&uvc->func_connected_queue); 1083 1054 opts = fi_to_f_uvc_opts(fi);
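Unbind now has a hard rendezvous with the V4L2 core: the video_device release callback fires an on-stack completion and unbind blocks on it, so the function cannot be torn down while a release is still pending. The synchronization shape, reduced to a hypothetical object:

    #include <linux/completion.h>

    struct my_obj {
            struct completion *release_done;
    };

    static void my_obj_release(struct my_obj *obj)  /* last-ref callback */
    {
            if (obj->release_done)
                    complete(obj->release_done);
    }

    static void my_obj_unbind(struct my_obj *obj)
    {
            DECLARE_COMPLETION_ONSTACK(done);

            obj->release_done = &done;
            my_obj_unregister(obj);         /* hypothetical; drops last ref */
            wait_for_completion(&done);     /* release may run async */
            obj->release_done = NULL;
    }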
+15 -6
drivers/usb/gadget/function/u_ecm.h
··· 15 15 16 16 #include <linux/usb/composite.h> 17 17 18 + /** 19 + * struct f_ecm_opts - ECM function options 20 + * @func_inst: USB function instance. 21 + * @net: The net_device associated with the ECM function. 22 + * @bound: True if the net_device is shared and pre-registered during the 23 + * legacy composite driver's bind phase (e.g., multi.c). If false, 24 + * the ECM function will register the net_device during its own 25 + * bind phase. 26 + * @bind_count: Tracks the number of configurations the ECM function is 27 + * bound to, preventing double-registration of the @net device. 28 + * @lock: Protects the data from concurrent access by configfs read/write 29 + * and create symlink/remove symlink operations. 30 + * @refcnt: Reference counter for the function instance. 31 + */ 18 32 struct f_ecm_opts { 19 33 struct usb_function_instance func_inst; 20 34 struct net_device *net; 21 35 bool bound; 36 + int bind_count; 22 37 23 - /* 24 - * Read/write access to configfs attributes is handled by configfs. 25 - * 26 - * This is to protect the data from concurrent access by read/write 27 - * and create symlink/remove symlink. 28 - */ 29 38 struct mutex lock; 30 39 int refcnt; 31 40 };
+15 -6
drivers/usb/gadget/function/u_eem.h
··· 15 15 16 16 #include <linux/usb/composite.h> 17 17 18 + /** 19 + * struct f_eem_opts - EEM function options 20 + * @func_inst: USB function instance. 21 + * @net: The net_device associated with the EEM function. 22 + * @bound: True if the net_device is shared and pre-registered during the 23 + * legacy composite driver's bind phase (e.g., multi.c). If false, 24 + * the EEM function will register the net_device during its own 25 + * bind phase. 26 + * @bind_count: Tracks the number of configurations the EEM function is 27 + * bound to, preventing double-registration of the @net device. 28 + * @lock: Protects the data from concurrent access by configfs read/write 29 + * and create symlink/remove symlink operations. 30 + * @refcnt: Reference counter for the function instance. 31 + */ 18 32 struct f_eem_opts { 19 33 struct usb_function_instance func_inst; 20 34 struct net_device *net; 21 35 bool bound; 36 + int bind_count; 22 37 23 - /* 24 - * Read/write access to configfs attributes is handled by configfs. 25 - * 26 - * This is to protect the data from concurrent access by read/write 27 - * and create symlink/remove symlink. 28 - */ 29 38 struct mutex lock; 30 39 int refcnt; 31 40 };
+9 -7
drivers/usb/gadget/function/u_ether.c
··· 113 113 114 114 strscpy(p->driver, "g_ether", sizeof(p->driver)); 115 115 strscpy(p->version, UETH__VERSION, sizeof(p->version)); 116 - strscpy(p->fw_version, dev->gadget->name, sizeof(p->fw_version)); 117 - strscpy(p->bus_info, dev_name(&dev->gadget->dev), sizeof(p->bus_info)); 116 + if (dev->gadget) { 117 + strscpy(p->fw_version, dev->gadget->name, sizeof(p->fw_version)); 118 + strscpy(p->bus_info, dev_name(&dev->gadget->dev), sizeof(p->bus_info)); 119 + } 118 120 } 119 121 120 122 /* REVISIT can also support: ··· 1225 1223 1226 1224 DBG(dev, "%s\n", __func__); 1227 1225 1226 + spin_lock(&dev->lock); 1227 + dev->port_usb = NULL; 1228 + link->is_suspend = false; 1229 + spin_unlock(&dev->lock); 1230 + 1228 1231 netif_stop_queue(dev->net); 1229 1232 netif_carrier_off(dev->net); 1230 1233 ··· 1267 1260 dev->header_len = 0; 1268 1261 dev->unwrap = NULL; 1269 1262 dev->wrap = NULL; 1270 - 1271 - spin_lock(&dev->lock); 1272 - dev->port_usb = NULL; 1273 - link->is_suspend = false; 1274 - spin_unlock(&dev->lock); 1275 1263 } 1276 1264 EXPORT_SYMBOL_GPL(gether_disconnect); 1277 1265
+15 -7
drivers/usb/gadget/function/u_gether.h
··· 15 15 16 16 #include <linux/usb/composite.h> 17 17 18 + /** 19 + * struct f_gether_opts - subset function options 20 + * @func_inst: USB function instance. 21 + * @net: The net_device associated with the subset function. 22 + * @bound: True if the net_device is shared and pre-registered during the 23 + * legacy composite driver's bind phase (e.g., multi.c). If false, 24 + * the subset function will register the net_device during its own 25 + * bind phase. 26 + * @bind_count: Tracks the number of configurations the subset function is 27 + * bound to, preventing double-registration of the @net device. 28 + * @lock: Protects the data from concurrent access by configfs read/write 29 + * and create symlink/remove symlink operations. 30 + * @refcnt: Reference counter for the function instance. 31 + */ 18 32 struct f_gether_opts { 19 33 struct usb_function_instance func_inst; 20 34 struct net_device *net; 21 35 bool bound; 22 - 23 - /* 24 - * Read/write access to configfs attributes is handled by configfs. 25 - * 26 - * This is to protect the data from concurrent access by read/write 27 - * and create symlink/remove symlink. 28 - */ 36 + int bind_count; 29 37 struct mutex lock; 30 38 int refcnt; 31 39 };
+15 -6
drivers/usb/gadget/function/u_ncm.h
··· 15 15 16 16 #include <linux/usb/composite.h> 17 17 18 + /** 19 + * struct f_ncm_opts - NCM function options 20 + * @func_inst: USB function instance. 21 + * @net: The net_device associated with the NCM function. 22 + * @bind_count: Tracks the number of configurations the NCM function is 23 + * bound to, preventing double-registration of the @net device. 24 + * @ncm_interf_group: ConfigFS group for NCM interface. 25 + * @ncm_os_desc: USB OS descriptor for NCM. 26 + * @ncm_ext_compat_id: Extended compatibility ID. 27 + * @lock: Protects the data from concurrent access by configfs read/write 28 + * and create symlink/remove symlink operations. 29 + * @refcnt: Reference counter for the function instance. 30 + * @max_segment_size: Maximum segment size. 31 + */ 18 32 struct f_ncm_opts { 19 33 struct usb_function_instance func_inst; 20 34 struct net_device *net; ··· 37 23 struct config_group *ncm_interf_group; 38 24 struct usb_os_desc ncm_os_desc; 39 25 char ncm_ext_compat_id[16]; 40 - /* 41 - * Read/write access to configfs attributes is handled by configfs. 42 - * 43 - * This is to protect the data from concurrent access by read/write 44 - * and create symlink/remove symlink. 45 - */ 26 + 46 27 struct mutex lock; 47 28 int refcnt; 48 29
+23 -8
drivers/usb/gadget/function/u_rndis.h
··· 15 15 16 16 #include <linux/usb/composite.h> 17 17 18 + /** 19 + * struct f_rndis_opts - RNDIS function options 20 + * @func_inst: USB function instance. 21 + * @vendor_id: Vendor ID. 22 + * @manufacturer: Manufacturer string. 23 + * @net: The net_device associated with the RNDIS function. 24 + * @bind_count: Tracks the number of configurations the RNDIS function is 25 + * bound to, preventing double-registration of the @net device. 26 + * @borrowed_net: True if the net_device is shared and pre-registered during 27 + * the legacy composite driver's bind phase (e.g., multi.c). 28 + * If false, the RNDIS function will register the net_device 29 + * during its own bind phase. 30 + * @rndis_interf_group: ConfigFS group for RNDIS interface. 31 + * @rndis_os_desc: USB OS descriptor for RNDIS. 32 + * @rndis_ext_compat_id: Extended compatibility ID. 33 + * @class: USB class. 34 + * @subclass: USB subclass. 35 + * @protocol: USB protocol. 36 + * @lock: Protects the data from concurrent access by configfs read/write 37 + * and create symlink/remove symlink operations. 38 + * @refcnt: Reference counter for the function instance. 39 + */ 18 40 struct f_rndis_opts { 19 41 struct usb_function_instance func_inst; 20 42 u32 vendor_id; 21 43 const char *manufacturer; 22 44 struct net_device *net; 23 - bool bound; 45 + int bind_count; 24 46 bool borrowed_net; 25 47 26 48 struct config_group *rndis_interf_group; ··· 52 30 u8 class; 53 31 u8 subclass; 54 32 u8 protocol; 55 - 56 - /* 57 - * Read/write access to configfs attributes is handled by configfs. 58 - * 59 - * This is to protect the data from concurrent access by read/write 60 - * and create symlink/remove symlink. 61 - */ 62 33 struct mutex lock; 63 34 int refcnt; 64 35 };
+3
drivers/usb/gadget/function/uvc.h
··· 155 155 enum uvc_state state; 156 156 struct usb_function func; 157 157 struct uvc_video video; 158 + struct completion *vdev_release_done; 159 + struct mutex lock; /* protects func_unbound and func_connected */ 160 + bool func_unbound; 158 161 bool func_connected; 159 162 wait_queue_head_t func_connected_queue; 160 163
+4 -1
drivers/usb/gadget/function/uvc_v4l2.c
··· 574 574 if (sub->type < UVC_EVENT_FIRST || sub->type > UVC_EVENT_LAST) 575 575 return -EINVAL; 576 576 577 + guard(mutex)(&uvc->lock); 578 + 577 579 if (sub->type == UVC_EVENT_SETUP && uvc->func_connected) 578 580 return -EBUSY; 579 581 ··· 597 595 uvc_function_disconnect(uvc); 598 596 uvcg_video_disable(&uvc->video); 599 597 uvcg_free_buffers(&uvc->video.queue); 600 - uvc->func_connected = false; 598 + scoped_guard(mutex, &uvc->lock) 599 + uvc->func_connected = false; 601 600 wake_up_interruptible(&uvc->func_connected_queue); 602 601 } 603 602
+26 -16
drivers/usb/gadget/udc/dummy_hcd.c
··· 462 462 463 463 /* Report reset and disconnect events to the driver */ 464 464 if (dum->ints_enabled && (disconnect || reset)) { 465 - stop_activity(dum); 466 465 ++dum->callback_usage; 466 + /* 467 + * stop_activity() can drop dum->lock, so it must 468 + * not come between the dum->ints_enabled test 469 + * and the ++dum->callback_usage. 470 + */ 471 + stop_activity(dum); 467 472 spin_unlock(&dum->lock); 468 473 if (reset) 469 474 usb_gadget_udc_reset(&dum->gadget, dum->driver); ··· 913 908 spin_lock_irqsave(&dum->lock, flags); 914 909 dum->pullup = (value != 0); 915 910 set_link_state(dum_hcd); 916 - if (value == 0) { 917 - /* 918 - * Emulate synchronize_irq(): wait for callbacks to finish. 919 - * This seems to be the best place to emulate the call to 920 - * synchronize_irq() that's in usb_gadget_remove_driver(). 921 - * Doing it in dummy_udc_stop() would be too late since it 922 - * is called after the unbind callback and unbind shouldn't 923 - * be invoked until all the other callbacks are finished. 924 - */ 925 - while (dum->callback_usage > 0) { 926 - spin_unlock_irqrestore(&dum->lock, flags); 927 - usleep_range(1000, 2000); 928 - spin_lock_irqsave(&dum->lock, flags); 929 - } 930 - } 931 911 spin_unlock_irqrestore(&dum->lock, flags); 932 912 933 913 usb_hcd_poll_rh_status(dummy_hcd_to_hcd(dum_hcd)); ··· 935 945 936 946 spin_lock_irq(&dum->lock); 937 947 dum->ints_enabled = enable; 948 + if (!enable) { 949 + /* 950 + * Emulate synchronize_irq(): wait for callbacks to finish. 951 + * This has to happen after emulated interrupts are disabled 952 + * (dum->ints_enabled is clear) and before the unbind callback, 953 + * just like the call to synchronize_irq() in 954 + * gadget/udc/core:gadget_unbind_driver(). 955 + */ 956 + while (dum->callback_usage > 0) { 957 + spin_unlock_irq(&dum->lock); 958 + usleep_range(1000, 2000); 959 + spin_lock_irq(&dum->lock); 960 + } 961 + } 938 962 spin_unlock_irq(&dum->lock); 939 963 } 940 964 ··· 1538 1534 /* rescan to continue with any other queued i/o */ 1539 1535 if (rescan) 1540 1536 goto top; 1537 + 1538 + /* request not fully transferred; stop iterating to 1539 + * preserve data ordering across queued requests. 1540 + */ 1541 + if (req->req.actual < req->req.length) 1542 + break; 1541 1543 } 1542 1544 return sent; 1543 1545 }
+2 -2
drivers/usb/host/ehci-brcm.c
··· 31 31 int res; 32 32 33 33 /* Wait for next microframe (every 125 usecs) */ 34 - res = readl_relaxed_poll_timeout(&ehci->regs->frame_index, val, 35 - val != frame_idx, 1, 130); 34 + res = readl_relaxed_poll_timeout_atomic(&ehci->regs->frame_index, 35 + val, val != frame_idx, 1, 130); 36 36 if (res) 37 37 ehci_err(ehci, "Error waiting for SOF\n"); 38 38 udelay(delay);
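The switch to the _atomic poll variant matters because readl_relaxed_poll_timeout() may sleep between reads, which is forbidden in the atomic context this helper runs in; the _atomic form busy-waits with udelay() instead. Minimal usage sketch from <linux/iopoll.h> (register offset and flag are hypothetical):

    u32 val;
    int ret;

    /* poll every 1 us, give up after 100 us, never sleeping */
    ret = readl_relaxed_poll_timeout_atomic(base + STATUS_REG, val,
                                            val & STATUS_READY, 1, 100);
    if (ret)
            pr_err("timed out waiting for STATUS_READY\n");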
+3 -15
drivers/usb/host/xhci-sideband.c
··· 93 93 static void 94 94 __xhci_sideband_remove_interrupter(struct xhci_sideband *sb) 95 95 { 96 - struct usb_device *udev; 97 - 98 96 lockdep_assert_held(&sb->mutex); 99 97 100 98 if (!sb->ir) ··· 100 102 101 103 xhci_remove_secondary_interrupter(xhci_to_hcd(sb->xhci), sb->ir); 102 104 sb->ir = NULL; 103 - udev = sb->vdev->udev; 104 - 105 - if (udev->state != USB_STATE_NOTATTACHED) 106 - usb_offload_put(udev); 107 105 } 108 106 109 107 /* sideband api functions */ ··· 285 291 * Allow other drivers, such as usb controller driver, to check if there are 286 292 * any sideband activity on the host controller. This information could be used 287 293 * for power management or other forms of resource management. The caller should 288 - * ensure downstream usb devices are all either suspended or marked as 289 - * "offload_at_suspend" to ensure the correctness of the return value. 294 + * ensure downstream usb devices are all marked as "offload_pm_locked" to ensure 295 + * the correctness of the return value. 290 296 * 291 297 * Returns true on any active sideband existence, false otherwise. 292 298 */ ··· 322 328 xhci_sideband_create_interrupter(struct xhci_sideband *sb, int num_seg, 323 329 bool ip_autoclear, u32 imod_interval, int intr_num) 324 330 { 325 - int ret = 0; 326 - struct usb_device *udev; 327 - 328 331 if (!sb || !sb->xhci) 329 332 return -ENODEV; 330 333 ··· 339 348 if (!sb->ir) 340 349 return -ENOMEM; 341 350 342 - udev = sb->vdev->udev; 343 - ret = usb_offload_get(udev); 344 - 345 351 sb->ir->ip_autoclear = ip_autoclear; 346 352 347 - return ret; 353 + return 0; 348 354 } 349 355 EXPORT_SYMBOL_GPL(xhci_sideband_create_interrupter); 350 356
+5 -2
drivers/usb/misc/usbio.c
··· 614 614 usb_fill_bulk_urb(usbio->urb, udev, usbio->rx_pipe, usbio->rxbuf, 615 615 usbio->rxbuf_len, usbio_bulk_recv, usbio); 616 616 ret = usb_submit_urb(usbio->urb, GFP_KERNEL); 617 - if (ret) 618 - return dev_err_probe(dev, ret, "Submitting usb urb\n"); 617 + if (ret) { 618 + dev_err_probe(dev, ret, "Submitting usb urb\n"); 619 + goto err_free_urb; 620 + } 619 621 620 622 mutex_lock(&usbio->ctrl_mutex); 621 623 ··· 665 663 err_unlock: 666 664 mutex_unlock(&usbio->ctrl_mutex); 667 665 usb_kill_urb(usbio->urb); 666 + err_free_urb: 668 667 usb_free_urb(usbio->urb); 669 668 670 669 return ret;
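The usbio fix adds a dedicated unwind label so a failed usb_submit_urb() still frees the URB. This is the canonical goto-unwind shape, shown here as a generic sketch with hypothetical step functions:

    static int demo_probe(void)
    {
            int ret;

            ret = alloc_a();                /* hypothetical steps */
            if (ret)
                    return ret;

            ret = alloc_b();
            if (ret)
                    goto err_free_a;

            ret = submit_work();
            if (ret)
                    goto err_free_b;

            return 0;

    err_free_b:                             /* undo in reverse order */
            free_b();
    err_free_a:
            free_a();
            return ret;
    }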
+3
drivers/usb/serial/io_edgeport.c
··· 73 73 { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_EDGEPORT_22I) }, 74 74 { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_EDGEPORT_412_4) }, 75 75 { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_EDGEPORT_COMPATIBLE) }, 76 + { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_BLACKBOX_IC135A) }, 76 77 { } 77 78 }; 78 79 ··· 122 121 { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_EDGEPORT_8R) }, 123 122 { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_EDGEPORT_8RR) }, 124 123 { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_EDGEPORT_412_8) }, 124 + { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_BLACKBOX_IC135A) }, 125 125 { USB_DEVICE(USB_VENDOR_ID_NCR, NCR_DEVICE_ID_EPIC_0202) }, 126 126 { USB_DEVICE(USB_VENDOR_ID_NCR, NCR_DEVICE_ID_EPIC_0203) }, 127 127 { USB_DEVICE(USB_VENDOR_ID_NCR, NCR_DEVICE_ID_EPIC_0310) }, ··· 472 470 case ION_DEVICE_ID_EDGEPORT_2_DIN: 473 471 case ION_DEVICE_ID_EDGEPORT_4_DIN: 474 472 case ION_DEVICE_ID_EDGEPORT_16_DUAL_CPU: 473 + case ION_DEVICE_ID_BLACKBOX_IC135A: 475 474 product_info->IsRS232 = 1; 476 475 break; 477 476
+1
drivers/usb/serial/io_usbvend.h
··· 211 211 212 212 // 213 213 // Definitions for other product IDs 214 + #define ION_DEVICE_ID_BLACKBOX_IC135A 0x0801 // OEM device (rebranded Edgeport/4) 214 215 #define ION_DEVICE_ID_MT4X56USB 0x1403 // OEM device 215 216 #define ION_DEVICE_ID_E5805A 0x1A01 // OEM device (rebranded Edgeport/4) 216 217
+4
drivers/usb/serial/option.c
··· 2441 2441 { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x30) }, /* MeiG Smart SRM815 and SRM825L */ 2442 2442 { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x40) }, /* MeiG Smart SRM825L */ 2443 2443 { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x60) }, /* MeiG Smart SRM825L */ 2444 + { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d38, 0xff, 0xff, 0x30) }, /* MeiG Smart SRM825WN (Diag) */ 2445 + { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d38, 0xff, 0xff, 0x40) }, /* MeiG Smart SRM825WN (AT) */ 2446 + { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d38, 0xff, 0xff, 0x60) }, /* MeiG Smart SRM825WN (NMEA) */ 2444 2447 { USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) }, /* LongSung M5710 */ 2445 2448 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */ 2446 2449 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */ ··· 2464 2461 { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0302, 0xff) }, /* Rolling RW101R-GL (laptop MBIM) */ 2465 2462 { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0802, 0xff), /* Rolling RW350-GL (laptop MBIM) */ 2466 2463 .driver_info = RSVD(5) }, 2464 + { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x1003, 0xff) }, /* Rolling RW135R-GL (laptop MBIM) */ 2467 2465 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for Global */ 2468 2466 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0x00, 0x40) }, 2469 2467 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0100, 0xff, 0xff, 0x40) },
+22 -22
drivers/usb/typec/altmodes/thunderbolt.c
··· 39 39 40 40 static int tbt_enter_mode(struct tbt_altmode *tbt) 41 41 { 42 - struct typec_altmode *plug = tbt->plug[TYPEC_PLUG_SOP_P]; 43 - u32 vdo; 44 - 45 - vdo = tbt->alt->vdo & (TBT_VENDOR_SPECIFIC_B0 | TBT_VENDOR_SPECIFIC_B1); 46 - vdo |= tbt->alt->vdo & TBT_INTEL_SPECIFIC_B0; 47 - vdo |= TBT_MODE; 48 - 49 - if (plug) { 50 - if (typec_cable_is_active(tbt->cable)) 51 - vdo |= TBT_ENTER_MODE_ACTIVE_CABLE; 52 - 53 - vdo |= TBT_ENTER_MODE_CABLE_SPEED(TBT_CABLE_SPEED(plug->vdo)); 54 - vdo |= plug->vdo & TBT_CABLE_ROUNDED; 55 - vdo |= plug->vdo & TBT_CABLE_OPTICAL; 56 - vdo |= plug->vdo & TBT_CABLE_RETIMER; 57 - vdo |= plug->vdo & TBT_CABLE_LINK_TRAINING; 58 - } else { 59 - vdo |= TBT_ENTER_MODE_CABLE_SPEED(TBT_CABLE_USB3_PASSIVE); 60 - } 61 - 62 - tbt->enter_vdo = vdo; 63 - return typec_altmode_enter(tbt->alt, &vdo); 42 + return typec_altmode_enter(tbt->alt, &tbt->enter_vdo); 64 43 } 65 44 66 45 static void tbt_altmode_work(struct work_struct *work) ··· 316 337 { 317 338 struct tbt_altmode *tbt = typec_altmode_get_drvdata(alt); 318 339 struct typec_altmode *plug; 340 + u32 vdo; 319 341 320 342 if (tbt->cable) 321 343 return true; ··· 343 363 344 364 tbt->plug[i] = plug; 345 365 } 366 + 367 + vdo = tbt->alt->vdo & (TBT_VENDOR_SPECIFIC_B0 | TBT_VENDOR_SPECIFIC_B1); 368 + vdo |= tbt->alt->vdo & TBT_INTEL_SPECIFIC_B0; 369 + vdo |= TBT_MODE; 370 + plug = tbt->plug[TYPEC_PLUG_SOP_P]; 371 + 372 + if (plug) { 373 + if (typec_cable_is_active(tbt->cable)) 374 + vdo |= TBT_ENTER_MODE_ACTIVE_CABLE; 375 + 376 + vdo |= TBT_ENTER_MODE_CABLE_SPEED(TBT_CABLE_SPEED(plug->vdo)); 377 + vdo |= plug->vdo & TBT_CABLE_ROUNDED; 378 + vdo |= plug->vdo & TBT_CABLE_OPTICAL; 379 + vdo |= plug->vdo & TBT_CABLE_RETIMER; 380 + vdo |= plug->vdo & TBT_CABLE_LINK_TRAINING; 381 + } else { 382 + vdo |= TBT_ENTER_MODE_CABLE_SPEED(TBT_CABLE_USB3_PASSIVE); 383 + } 384 + 385 + tbt->enter_vdo = vdo; 346 386 347 387 return true; 348 388 }
-4
drivers/usb/typec/class.c
··· 686 686 687 687 alt->adev.dev.bus = &typec_bus; 688 688 689 - /* Plug alt modes need a class to generate udev events. */ 690 - if (is_typec_plug(parent)) 691 - alt->adev.dev.class = &typec_class; 692 - 693 689 ret = device_register(&alt->adev.dev); 694 690 if (ret) { 695 691 dev_err(parent, "failed to register alternate mode (%d)\n",
+7 -2
drivers/usb/typec/ucsi/ucsi.c
··· 43 43 if (cci & UCSI_CCI_BUSY) 44 44 return; 45 45 46 - if (UCSI_CCI_CONNECTOR(cci)) 47 - ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci)); 46 + if (UCSI_CCI_CONNECTOR(cci)) { 47 + if (UCSI_CCI_CONNECTOR(cci) <= ucsi->cap.num_connectors) 48 + ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci)); 49 + else 50 + dev_err(ucsi->dev, "bogus connector number in CCI: %lu\n", 51 + UCSI_CCI_CONNECTOR(cci)); 52 + } 48 53 49 54 if (cci & UCSI_CCI_ACK_COMPLETE && 50 55 test_and_clear_bit(ACK_PENDING, &ucsi->flags))
+1 -1
fs/btrfs/extent-tree.c
··· 495 495 btrfs_item_key_to_cpu(leaf, &key, path->slots[0]); 496 496 if (key.objectid != bytenr || 497 497 key.type != BTRFS_EXTENT_DATA_REF_KEY) 498 - return ret; 498 + return -ENOENT; 499 499 500 500 ref = btrfs_item_ptr(leaf, path->slots[0], 501 501 struct btrfs_extent_data_ref);
+24 -6
fs/mpage.c
··· 646 646 } 647 647 648 648 /** 649 - * mpage_writepages - walk the list of dirty pages of the given address space & writepage() all of them 649 + * __mpage_writepages - walk the list of dirty pages of the given address space 650 + * & writepage() all of them 650 651 * @mapping: address space structure to write 651 652 * @wbc: subtract the number of written pages from *@wbc->nr_to_write 652 653 * @get_block: the filesystem's block mapper function. 654 + * @write_folio: handler to call for each folio before calling 655 + * mpage_write_folio() 653 656 * 654 657 * This is a library function, which implements the writepages() 655 - * address_space_operation. 658 + * address_space_operation. It calls @write_folio handler for each folio. If 659 + * the handler returns value > 0, it calls mpage_write_folio() to do the 660 + * folio writeback. 656 661 */ 657 662 int 658 - mpage_writepages(struct address_space *mapping, 659 - struct writeback_control *wbc, get_block_t get_block) 663 + __mpage_writepages(struct address_space *mapping, 664 + struct writeback_control *wbc, get_block_t get_block, 665 + int (*write_folio)(struct folio *folio, 666 + struct writeback_control *wbc)) 660 667 { 661 668 struct mpage_data mpd = { 662 669 .get_block = get_block, ··· 673 666 int error; 674 667 675 668 blk_start_plug(&plug); 676 - while ((folio = writeback_iter(mapping, wbc, folio, &error))) 669 + while ((folio = writeback_iter(mapping, wbc, folio, &error))) { 670 + if (write_folio) { 671 + error = write_folio(folio, wbc); 672 + /* 673 + * == 0 means folio is handled, < 0 means error. In 674 + * both cases hand back control to writeback_iter() 675 + */ 676 + if (error <= 0) 677 + continue; 678 + /* Let mpage_write_folio() handle the folio. */ 679 + } 677 680 error = mpage_write_folio(wbc, folio, &mpd); 681 + } 678 682 if (mpd.bio) 679 683 mpage_bio_submit_write(mpd.bio); 680 684 blk_finish_plug(&plug); 681 685 return error; 682 686 } 683 - EXPORT_SYMBOL(mpage_writepages); 687 + EXPORT_SYMBOL(__mpage_writepages);
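The @write_folio hook added to __mpage_writepages() follows a three-way return convention: < 0 propagates an error, 0 means the handler fully dealt with the folio (and must unlock it), and > 0 falls through to mpage_write_folio(). A sketch of a conforming handler with a hypothetical predicate (the real in-tree user is udf_handle_page_wb(), further down):

    static int demo_write_folio(struct folio *folio,
                                struct writeback_control *wbc)
    {
            struct inode *inode = folio->mapping->host;

            if (!inode_uses_inline_data(inode))      /* hypothetical check */
                    return 1;       /* let mpage_write_folio() handle it */

            copy_folio_to_inline_area(inode, folio); /* hypothetical copy */
            folio_unlock(folio);    /* a 0 return must unlock the folio */
            mark_inode_dirty(inode);
            return 0;
    }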
+4
fs/smb/client/fs_context.c
··· 588 588 while (IS_DELIM(*cursor1)) 589 589 cursor1++; 590 590 591 + /* exit in case of only delimiters */ 592 + if (!*cursor1) 593 + return NULL; 594 + 591 595 /* copy the first letter */ 592 596 *cursor2 = *cursor1; 593 597
+89 -32
fs/smb/server/smb2pdu.c
··· 3402 3402 KSMBD_SHARE_FLAG_ACL_XATTR)) { 3403 3403 struct smb_fattr fattr; 3404 3404 struct smb_ntsd *pntsd; 3405 - int pntsd_size, ace_num = 0; 3405 + int pntsd_size; 3406 + size_t scratch_len; 3406 3407 3407 3408 ksmbd_acls_fattr(&fattr, idmap, inode); 3408 - if (fattr.cf_acls) 3409 - ace_num = fattr.cf_acls->a_count; 3410 - if (fattr.cf_dacls) 3411 - ace_num += fattr.cf_dacls->a_count; 3409 + scratch_len = smb_acl_sec_desc_scratch_len(&fattr, 3410 + NULL, 0, 3411 + OWNER_SECINFO | GROUP_SECINFO | 3412 + DACL_SECINFO); 3413 + if (!scratch_len || scratch_len == SIZE_MAX) { 3414 + rc = -EFBIG; 3415 + posix_acl_release(fattr.cf_acls); 3416 + posix_acl_release(fattr.cf_dacls); 3417 + goto err_out; 3418 + } 3412 3419 3413 - pntsd = kmalloc(sizeof(struct smb_ntsd) + 3414 - sizeof(struct smb_sid) * 3 + 3415 - sizeof(struct smb_acl) + 3416 - sizeof(struct smb_ace) * ace_num * 2, 3417 - KSMBD_DEFAULT_GFP); 3420 + pntsd = kvzalloc(scratch_len, KSMBD_DEFAULT_GFP); 3418 3421 if (!pntsd) { 3422 + rc = -ENOMEM; 3419 3423 posix_acl_release(fattr.cf_acls); 3420 3424 posix_acl_release(fattr.cf_dacls); 3421 3425 goto err_out; ··· 3434 3430 posix_acl_release(fattr.cf_acls); 3435 3431 posix_acl_release(fattr.cf_dacls); 3436 3432 if (rc) { 3437 - kfree(pntsd); 3433 + kvfree(pntsd); 3438 3434 goto err_out; 3439 3435 } 3440 3436 ··· 3444 3440 pntsd, 3445 3441 pntsd_size, 3446 3442 false); 3447 - kfree(pntsd); 3443 + kvfree(pntsd); 3448 3444 if (rc) 3449 3445 pr_err("failed to store ntacl in xattr : %d\n", 3450 3446 rc); ··· 5376 5372 if (test_share_config_flag(work->tcon->share_conf, 5377 5373 KSMBD_SHARE_FLAG_PIPE)) { 5378 5374 /* smb2 info file called for pipe */ 5379 - return smb2_get_info_file_pipe(work->sess, req, rsp, 5375 + rc = smb2_get_info_file_pipe(work->sess, req, rsp, 5380 5376 work->response_buf); 5377 + goto iov_pin_out; 5381 5378 } 5382 5379 5383 5380 if (work->next_smb2_rcv_hdr_off) { ··· 5478 5473 rc = buffer_check_err(le32_to_cpu(req->OutputBufferLength), 5479 5474 rsp, work->response_buf); 5480 5475 ksmbd_fd_put(work, fp); 5476 + 5477 + iov_pin_out: 5478 + if (!rc) 5479 + rc = ksmbd_iov_pin_rsp(work, (void *)rsp, 5480 + offsetof(struct smb2_query_info_rsp, Buffer) + 5481 + le32_to_cpu(rsp->OutputBufferLength)); 5481 5482 return rc; 5482 5483 } 5483 5484 ··· 5710 5699 rc = buffer_check_err(le32_to_cpu(req->OutputBufferLength), 5711 5700 rsp, work->response_buf); 5712 5701 path_put(&path); 5702 + 5703 + if (!rc) 5704 + rc = ksmbd_iov_pin_rsp(work, (void *)rsp, 5705 + offsetof(struct smb2_query_info_rsp, Buffer) + 5706 + le32_to_cpu(rsp->OutputBufferLength)); 5713 5707 return rc; 5714 5708 } 5715 5709 ··· 5724 5708 { 5725 5709 struct ksmbd_file *fp; 5726 5710 struct mnt_idmap *idmap; 5727 - struct smb_ntsd *pntsd = (struct smb_ntsd *)rsp->Buffer, *ppntsd = NULL; 5711 + struct smb_ntsd *pntsd = NULL, *ppntsd = NULL; 5728 5712 struct smb_fattr fattr = {{0}}; 5729 5713 struct inode *inode; 5730 5714 __u32 secdesclen = 0; 5731 5715 unsigned int id = KSMBD_NO_FID, pid = KSMBD_NO_FID; 5732 5716 int addition_info = le32_to_cpu(req->AdditionalInformation); 5733 - int rc = 0, ppntsd_size = 0; 5717 + int rc = 0, ppntsd_size = 0, max_len; 5718 + size_t scratch_len = 0; 5734 5719 5735 5720 if (addition_info & ~(OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO | 5736 5721 PROTECTED_DACL_SECINFO | 5737 5722 UNPROTECTED_DACL_SECINFO)) { 5738 5723 ksmbd_debug(SMB, "Unsupported addition info: 0x%x)\n", 5739 5724 addition_info); 5725 + 5726 + pntsd = kzalloc(ALIGN(sizeof(struct smb_ntsd), 8), 5727 + 
KSMBD_DEFAULT_GFP); 5728 + if (!pntsd) 5729 + return -ENOMEM; 5740 5730 5741 5731 pntsd->revision = cpu_to_le16(1); 5742 5732 pntsd->type = cpu_to_le16(SELF_RELATIVE | DACL_PROTECTED); ··· 5752 5730 pntsd->dacloffset = 0; 5753 5731 5754 5732 secdesclen = sizeof(struct smb_ntsd); 5755 - rsp->OutputBufferLength = cpu_to_le32(secdesclen); 5756 - 5757 - return 0; 5733 + goto iov_pin; 5758 5734 } 5759 5735 5760 5736 if (work->next_smb2_rcv_hdr_off) { ··· 5784 5764 &ppntsd); 5785 5765 5786 5766 /* Check if sd buffer size exceeds response buffer size */ 5787 - if (smb2_resp_buf_len(work, 8) > ppntsd_size) 5788 - rc = build_sec_desc(idmap, pntsd, ppntsd, ppntsd_size, 5789 - addition_info, &secdesclen, &fattr); 5767 + max_len = smb2_calc_max_out_buf_len(work, 5768 + offsetof(struct smb2_query_info_rsp, Buffer), 5769 + le32_to_cpu(req->OutputBufferLength)); 5770 + if (max_len < 0) { 5771 + rc = -EINVAL; 5772 + goto release_acl; 5773 + } 5774 + 5775 + scratch_len = smb_acl_sec_desc_scratch_len(&fattr, ppntsd, 5776 + ppntsd_size, addition_info); 5777 + if (!scratch_len || scratch_len == SIZE_MAX) { 5778 + rc = -EFBIG; 5779 + goto release_acl; 5780 + } 5781 + 5782 + pntsd = kvzalloc(scratch_len, KSMBD_DEFAULT_GFP); 5783 + if (!pntsd) { 5784 + rc = -ENOMEM; 5785 + goto release_acl; 5786 + } 5787 + 5788 + rc = build_sec_desc(idmap, pntsd, ppntsd, ppntsd_size, 5789 + addition_info, &secdesclen, &fattr); 5790 + 5791 + release_acl: 5790 5792 posix_acl_release(fattr.cf_acls); 5791 5793 posix_acl_release(fattr.cf_dacls); 5792 5794 kfree(ppntsd); 5793 5795 ksmbd_fd_put(work, fp); 5794 - if (rc) 5795 - return rc; 5796 5796 5797 + if (!rc && ALIGN(secdesclen, 8) > scratch_len) 5798 + rc = -EFBIG; 5799 + if (rc) 5800 + goto err_out; 5801 + 5802 + iov_pin: 5797 5803 rsp->OutputBufferLength = cpu_to_le32(secdesclen); 5798 - return 0; 5804 + rc = buffer_check_err(le32_to_cpu(req->OutputBufferLength), 5805 + rsp, work->response_buf); 5806 + if (rc) 5807 + goto err_out; 5808 + 5809 + rc = ksmbd_iov_pin_rsp_read(work, (void *)rsp, 5810 + offsetof(struct smb2_query_info_rsp, Buffer), 5811 + pntsd, secdesclen); 5812 + err_out: 5813 + if (rc) { 5814 + rsp->OutputBufferLength = 0; 5815 + kvfree(pntsd); 5816 + } 5817 + 5818 + return rc; 5799 5819 } 5800 5820 5801 5821 /** ··· 5859 5799 goto err_out; 5860 5800 } 5861 5801 5802 + rsp->StructureSize = cpu_to_le16(9); 5803 + rsp->OutputBufferOffset = cpu_to_le16(72); 5804 + 5862 5805 switch (req->InfoType) { 5863 5806 case SMB2_O_INFO_FILE: 5864 5807 ksmbd_debug(SMB, "GOT SMB2_O_INFO_FILE\n"); ··· 5882 5819 } 5883 5820 ksmbd_revert_fsids(work); 5884 5821 5885 - if (!rc) { 5886 - rsp->StructureSize = cpu_to_le16(9); 5887 - rsp->OutputBufferOffset = cpu_to_le16(72); 5888 - rc = ksmbd_iov_pin_rsp(work, (void *)rsp, 5889 - offsetof(struct smb2_query_info_rsp, Buffer) + 5890 - le32_to_cpu(rsp->OutputBufferLength)); 5891 - } 5892 - 5893 5822 err_out: 5894 5823 if (rc < 0) { 5895 5824 if (rc == -EACCES) ··· 5892 5837 rsp->hdr.Status = STATUS_UNEXPECTED_IO_ERROR; 5893 5838 else if (rc == -ENOMEM) 5894 5839 rsp->hdr.Status = STATUS_INSUFFICIENT_RESOURCES; 5840 + else if (rc == -EINVAL && rsp->hdr.Status == 0) 5841 + rsp->hdr.Status = STATUS_INVALID_PARAMETER; 5895 5842 else if (rc == -EOPNOTSUPP || rsp->hdr.Status == 0) 5896 5843 rsp->hdr.Status = STATUS_INVALID_INFO_CLASS; 5897 5844 smb2_set_err_rsp(work);
+43
fs/smb/server/smbacl.c
··· 915 915 return 0; 916 916 } 917 917 918 + size_t smb_acl_sec_desc_scratch_len(struct smb_fattr *fattr, 919 + struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info) 920 + { 921 + size_t len = sizeof(struct smb_ntsd); 922 + size_t tmp; 923 + 924 + if (addition_info & OWNER_SECINFO) 925 + len += sizeof(struct smb_sid); 926 + if (addition_info & GROUP_SECINFO) 927 + len += sizeof(struct smb_sid); 928 + if (!(addition_info & DACL_SECINFO)) 929 + return len; 930 + 931 + len += sizeof(struct smb_acl); 932 + if (ppntsd && ppntsd_size > 0) { 933 + unsigned int dacl_offset = le32_to_cpu(ppntsd->dacloffset); 934 + 935 + if (dacl_offset < ppntsd_size && 936 + check_add_overflow(len, ppntsd_size - dacl_offset, &len)) 937 + return 0; 938 + } 939 + 940 + if (fattr->cf_acls) { 941 + if (check_mul_overflow((size_t)fattr->cf_acls->a_count, 942 + 2 * sizeof(struct smb_ace), &tmp) || 943 + check_add_overflow(len, tmp, &len)) 944 + return 0; 945 + } else { 946 + /* default/minimum DACL */ 947 + if (check_add_overflow(len, 5 * sizeof(struct smb_ace), &len)) 948 + return 0; 949 + } 950 + 951 + if (fattr->cf_dacls) { 952 + if (check_mul_overflow((size_t)fattr->cf_dacls->a_count, 953 + sizeof(struct smb_ace), &tmp) || 954 + check_add_overflow(len, tmp, &len)) 955 + return 0; 956 + } 957 + 958 + return len; 959 + } 960 + 918 961 /* Convert permission bits from mode to equivalent CIFS ACL */ 919 962 int build_sec_desc(struct mnt_idmap *idmap, 920 963 struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd,
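smb_acl_sec_desc_scratch_len() signals arithmetic overflow by returning 0, which the smb2pdu.c callers translate to -EFBIG. The <linux/overflow.h> helpers it builds on work as in this sketch (variable names hypothetical):

    size_t total, tmp;

    /* total = hdr_len + nr_aces * sizeof(struct smb_ace), rejecting wraparound */
    if (check_mul_overflow(nr_aces, sizeof(struct smb_ace), &tmp) ||
        check_add_overflow(hdr_len, tmp, &total))
            return 0;       /* caller maps a zero length to -EFBIG */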
+2
fs/smb/server/smbacl.h
··· 101 101 bool type_check, bool get_write); 102 102 void id_to_sid(unsigned int cid, uint sidtype, struct smb_sid *ssid); 103 103 void ksmbd_init_domain(u32 *sub_auth); 104 + size_t smb_acl_sec_desc_scratch_len(struct smb_fattr *fattr, 105 + struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info); 104 106 105 107 static inline uid_t posix_acl_uid_translate(struct mnt_idmap *idmap, 106 108 struct posix_acl_entry *pace)
+15 -18
fs/udf/inode.c
··· 181 181 } 182 182 } 183 183 184 - static int udf_adinicb_writepages(struct address_space *mapping, 185 - struct writeback_control *wbc) 184 + static int udf_handle_page_wb(struct folio *folio, 185 + struct writeback_control *wbc) 186 186 { 187 - struct inode *inode = mapping->host; 187 + struct inode *inode = folio->mapping->host; 188 188 struct udf_inode_info *iinfo = UDF_I(inode); 189 - struct folio *folio = NULL; 190 - int error = 0; 191 189 192 - while ((folio = writeback_iter(mapping, wbc, folio, &error))) { 193 - BUG_ON(!folio_test_locked(folio)); 194 - BUG_ON(folio->index != 0); 195 - memcpy_from_file_folio(iinfo->i_data + iinfo->i_lenEAttr, folio, 190 + /* 191 + * Inodes in the normal format are handled by the generic code. This 192 + * check is race-free as the folio lock protects us from inode type 193 + * conversion. 194 + */ 195 + if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) 196 + return 1; 197 + 198 + memcpy_from_file_folio(iinfo->i_data + iinfo->i_lenEAttr, folio, 196 199 0, i_size_read(inode)); 197 - folio_unlock(folio); 198 - } 199 - 200 + folio_unlock(folio); 200 201 mark_inode_dirty(inode); 201 202 return 0; 202 203 } ··· 205 204 static int udf_writepages(struct address_space *mapping, 206 205 struct writeback_control *wbc) 207 206 { 208 - struct inode *inode = mapping->host; 209 - struct udf_inode_info *iinfo = UDF_I(inode); 210 - 211 - if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 212 - return udf_adinicb_writepages(mapping, wbc); 213 - return mpage_writepages(mapping, wbc, udf_get_block_wb); 207 + return __mpage_writepages(mapping, wbc, udf_get_block_wb, 208 + udf_handle_page_wb); 214 209 } 215 210 216 211 static void udf_adinicb_read_folio(struct folio *folio)
+2 -3
include/crypto/if_alg.h
··· 230 230 return PAGE_SIZE <= af_alg_rcvbuf(sk); 231 231 } 232 232 233 - unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset); 234 - void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst, 235 - size_t dst_offset); 233 + unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes); 234 + void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst); 236 235 void af_alg_wmem_wakeup(struct sock *sk); 237 236 int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min); 238 237 int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+4
include/linux/bpf.h
··· 1854 1854 * target hook is sleepable, we'll go through tasks trace RCU GP and 1855 1855 * then "classic" RCU GP; this need for chaining tasks trace and 1856 1856 * classic RCU GPs is designated by setting bpf_link->sleepable flag 1857 + * 1858 + * For non-sleepable tracepoint links we go through SRCU gp instead, 1859 + * since RCU is not used in that case. Sleepable tracepoints still 1860 + * follow the scheme above. 1857 1861 */ 1858 1862 void (*dealloc_deferred)(struct bpf_link *link); 1859 1863 int (*detach)(struct bpf_link *link);
+3
include/linux/cgroup-defs.h
··· 609 609 /* used to wait for offlining of csses */ 610 610 wait_queue_head_t offline_waitq; 611 611 612 + /* used by cgroup_rmdir() to wait for dying tasks to leave */ 613 + wait_queue_head_t dying_populated_waitq; 614 + 612 615 /* used to schedule release agent */ 613 616 struct work_struct release_agent_work; 614 617
+5 -7
include/linux/gpio/gpio-nomadik.h
··· 114 114 } 115 115 116 116 /** 117 - * enum prcm_gpiocr_reg_index 118 - * Used to reference an PRCM GPIOCR register address. 117 + * enum prcm_gpiocr_reg_index - Used to reference a PRCM GPIOCR register address. 119 118 */ 120 119 enum prcm_gpiocr_reg_index { 121 120 PRCM_IDX_GPIOCR1, ··· 122 123 PRCM_IDX_GPIOCR3 123 124 }; 124 125 /** 125 - * enum prcm_gpiocr_altcx_index 126 - * Used to reference an Other alternate-C function. 126 + * enum prcm_gpiocr_altcx_index - Used to reference an Other alternate-C function. 127 127 */ 128 128 enum prcm_gpiocr_altcx_index { 129 129 PRCM_IDX_GPIOCR_ALTC1, ··· 133 135 }; 134 136 135 137 /** 136 - * struct prcm_gpio_altcx - Other alternate-C function 138 + * struct prcm_gpiocr_altcx - Other alternate-C function 137 139 * @used: other alternate-C function availability 138 140 * @reg_index: PRCM GPIOCR register index used to control the function 139 141 * @control_bit: PRCM GPIOCR bit used to control the function ··· 145 147 } __packed; 146 148 147 149 /** 148 - * struct prcm_gpio_altcx_pin_desc - Other alternate-C pin 150 + * struct prcm_gpiocr_altcx_pin_desc - Other alternate-C pin 149 151 * @pin: The pin number 150 152 * @altcx: array of other alternate-C[1-4] functions 151 153 */ ··· 191 193 * numbering. 192 194 * @npins: The number of entries in @pins. 193 195 * @functions: The functions supported on this SoC. 194 - * @nfunction: The number of entries in @functions. 196 + * @nfunctions: The number of entries in @functions. 195 197 * @groups: An array describing all pin groups the pin SoC supports. 196 198 * @ngroups: The number of entries in @groups. 197 199 * @altcx_pins: The pins that support Other alternate-C function on this SoC
+12
include/linux/iio/iio.h
··· 931 931 #define IIO_DECLARE_DMA_BUFFER_WITH_TS(type, name, count) \ 932 932 __IIO_DECLARE_BUFFER_WITH_TS(type, name, count) __aligned(IIO_DMA_MINALIGN) 933 933 934 + /** 935 + * IIO_DECLARE_QUATERNION() - Declare a quaternion element 936 + * @type: element type of the individual vectors 937 + * @name: identifier name 938 + * 939 + * Quaternions are a vector composed of 4 elements (W, X, Y, Z). Use this macro 940 + * to declare a quaternion element in a struct to ensure proper alignment in 941 + * an IIO buffer. 942 + */ 943 + #define IIO_DECLARE_QUATERNION(type, name) \ 944 + type name[4] __aligned(sizeof(type) * 4) 945 + 934 946 struct iio_dev *iio_device_alloc(struct device *parent, int sizeof_priv); 935 947 936 948 /* The information at the returned address is guaranteed to be cacheline aligned */
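The macro guarantees the 4-element vector is aligned to its own size, so a quaternion can sit in a scan structure without misaligning the fields around it. Illustrative layout:

    struct {
            IIO_DECLARE_QUATERNION(__le16, rot);    /* 8 bytes, 8-byte aligned */
            s64 timestamp __aligned(8);
    } scan;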
+2 -1
include/linux/iommu.h
··· 980 980 static inline void iommu_iotlb_sync(struct iommu_domain *domain, 981 981 struct iommu_iotlb_gather *iotlb_gather) 982 982 { 983 - if (domain->ops->iotlb_sync) 983 + if (domain->ops->iotlb_sync && 984 + likely(iotlb_gather->start < iotlb_gather->end)) 984 985 domain->ops->iotlb_sync(domain, iotlb_gather); 985 986 986 987 iommu_iotlb_gather_init(iotlb_gather);
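The new range test works because iommu_iotlb_gather_init() leaves the gather at { .start = ULONG_MAX, .end = 0 }: an untouched gather fails the start < end check, and the potentially expensive ->iotlb_sync driver callback is skipped. In short:

    struct iommu_iotlb_gather gather;

    iommu_iotlb_gather_init(&gather);       /* start = ULONG_MAX, end = 0 */
    /* ... no unmap added a range ... */
    iommu_iotlb_sync(domain, &gather);      /* empty range: no driver call */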
+2 -2
include/linux/lis3lv02d.h
··· 30 30 * @default_rate: Default sampling rate. 0 means reset default 31 31 * @setup_resources: Interrupt line setup call back function 32 32 * @release_resources: Interrupt line release call back function 33 - * @st_min_limits[3]: Selftest acceptance minimum values 34 - * @st_max_limits[3]: Selftest acceptance maximum values 33 + * @st_min_limits: Selftest acceptance minimum values (x, y, z) 34 + * @st_max_limits: Selftest acceptance maximum values (x, y, z) 35 35 * @irq2: Irq line 2 number 36 36 * 37 37 * Platform data is used to setup the sensor chip. Meaning of the different
+9 -2
include/linux/mpage.h
··· 17 17 18 18 void mpage_readahead(struct readahead_control *, get_block_t get_block); 19 19 int mpage_read_folio(struct folio *folio, get_block_t get_block); 20 - int mpage_writepages(struct address_space *mapping, 21 - struct writeback_control *wbc, get_block_t get_block); 20 + int __mpage_writepages(struct address_space *mapping, 21 + struct writeback_control *wbc, get_block_t get_block, 22 + int (*write_folio)(struct folio *folio, 23 + struct writeback_control *wbc)); 24 + static inline int mpage_writepages(struct address_space *mapping, 25 + struct writeback_control *wbc, get_block_t get_block) 26 + { 27 + return __mpage_writepages(mapping, wbc, get_block, NULL); 28 + } 22 29 23 30 #endif
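Existing filesystems keep calling the three-argument mpage_writepages() unchanged; only callers that need a per-folio hook use __mpage_writepages() directly. A typical aops implementation (myfs names hypothetical):

    static int myfs_writepages(struct address_space *mapping,
                               struct writeback_control *wbc)
    {
            /* NULL hook: behaves exactly like the old mpage_writepages() */
            return mpage_writepages(mapping, wbc, myfs_get_block);
    }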
+1 -1
include/linux/netfilter/ipset/ip_set.h
··· 309 309 310 310 /* register and unregister set references */ 311 311 extern ip_set_id_t ip_set_get_byname(struct net *net, 312 - const char *name, struct ip_set **set); 312 + const struct nlattr *name, struct ip_set **set); 313 313 extern void ip_set_put_byindex(struct net *net, ip_set_id_t index); 314 314 extern void ip_set_name_byindex(struct net *net, ip_set_id_t index, char *name); 315 315 extern ip_set_id_t ip_set_nfnl_get_byindex(struct net *net, ip_set_id_t index);
+1
include/linux/skbuff.h
··· 5097 5097 return unlikely(skb->active_extensions); 5098 5098 } 5099 5099 #else 5100 + static inline void __skb_ext_put(struct skb_ext *ext) {} 5100 5101 static inline void skb_ext_put(struct sk_buff *skb) {} 5101 5102 static inline void skb_ext_reset(struct sk_buff *skb) {} 5102 5103 static inline void skb_ext_del(struct sk_buff *skb, int unused) {}
+3 -3
include/linux/timb_gpio.h
··· 9 9 10 10 /** 11 11 * struct timbgpio_platform_data - Platform data of the Timberdale GPIO driver 12 - * @gpio_base The number of the first GPIO pin, set to -1 for 12 + * @gpio_base: The number of the first GPIO pin, set to -1 for 13 13 * dynamic number allocation. 14 - * @nr_pins Number of pins that is supported by the hardware (1-32) 15 - * @irq_base If IRQ is supported by the hardware, this is the base 14 + * @nr_pins: Number of pins that is supported by the hardware (1-32) 15 + * @irq_base: If IRQ is supported by the hardware, this is the base 16 16 * number of IRQ:s. One IRQ per pin will be used. Set to 17 17 * -1 if IRQ:s is not supported. 18 18 */
+20
include/linux/tracepoint.h
··· 122 122 { 123 123 return tp->ext && tp->ext->faultable; 124 124 } 125 + /* 126 + * Run RCU callback with the appropriate grace period wait for non-faultable 127 + * tracepoints, e.g., those used in atomic context. 128 + */ 129 + static inline void call_tracepoint_unregister_atomic(struct rcu_head *rcu, rcu_callback_t func) 130 + { 131 + call_srcu(&tracepoint_srcu, rcu, func); 132 + } 133 + /* 134 + * Run RCU callback with the appropriate grace period wait for faultable 135 + * tracepoints, e.g., those used in syscall context. 136 + */ 137 + static inline void call_tracepoint_unregister_syscall(struct rcu_head *rcu, rcu_callback_t func) 138 + { 139 + call_rcu_tasks_trace(rcu, func); 140 + } 125 141 #else 126 142 static inline void tracepoint_synchronize_unregister(void) 127 143 { } ··· 145 129 { 146 130 return false; 147 131 } 132 + static inline void call_tracepoint_unregister_atomic(struct rcu_head *rcu, rcu_callback_t func) 133 + { } 134 + static inline void call_tracepoint_unregister_syscall(struct rcu_head *rcu, rcu_callback_t func) 135 + { } 148 136 #endif 149 137 150 138 #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
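The two wrappers pick the grace period that matches how the tracepoint is protected: SRCU (tracepoint_srcu) for non-faultable, atomic-context tracepoints, and RCU Tasks Trace for faultable, syscall-context ones. A hedged caller sketch (probe structure and callback are hypothetical):

    static void free_probe_rcu(struct rcu_head *rcu)
    {
            kfree(container_of(rcu, struct demo_probe, rcu));
    }

    /* defer the free past the grace period matching the tracepoint type */
    if (tracepoint_is_faultable(tp))
            call_tracepoint_unregister_syscall(&probe->rcu, free_probe_rcu);
    else
            call_tracepoint_unregister_atomic(&probe->rcu, free_probe_rcu);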
+8 -2
include/linux/usb.h
··· 21 21 #include <linux/completion.h> /* for struct completion */ 22 22 #include <linux/sched.h> /* for current && schedule_timeout */ 23 23 #include <linux/mutex.h> /* for struct mutex */ 24 + #include <linux/spinlock.h> /* for spinlock_t */ 24 25 #include <linux/pm_runtime.h> /* for runtime PM */ 25 26 26 27 struct usb_device; ··· 637 636 * @do_remote_wakeup: remote wakeup should be enabled 638 637 * @reset_resume: needs reset instead of resume 639 638 * @port_is_suspended: the upstream port is suspended (L2 or U3) 640 - * @offload_at_suspend: offload activities during suspend is enabled. 639 + * @offload_pm_locked: prevents offload_usage changes during PM transitions. 641 640 * @offload_usage: number of offload activities happening on this usb device. 641 + * @offload_lock: protects offload_usage and offload_pm_locked 642 642 * @slot_id: Slot ID assigned by xHCI 643 643 * @l1_params: best effor service latency for USB2 L1 LPM state, and L1 timeout. 644 644 * @u1_params: exit latencies for USB3 U1 LPM state, and hub-initiated timeout. ··· 728 726 unsigned do_remote_wakeup:1; 729 727 unsigned reset_resume:1; 730 728 unsigned port_is_suspended:1; 731 - unsigned offload_at_suspend:1; 729 + unsigned offload_pm_locked:1; 732 730 int offload_usage; 731 + spinlock_t offload_lock; 733 732 enum usb_link_tunnel_mode tunnel_mode; 734 733 struct device_link *usb4_link; 735 734 ··· 852 849 int usb_offload_get(struct usb_device *udev); 853 850 int usb_offload_put(struct usb_device *udev); 854 851 bool usb_offload_check(struct usb_device *udev); 852 + void usb_offload_set_pm_locked(struct usb_device *udev, bool locked); 855 853 #else 856 854 857 855 static inline int usb_offload_get(struct usb_device *udev) ··· 861 857 { return 0; } 862 858 static inline bool usb_offload_check(struct usb_device *udev) 863 859 { return false; } 860 + static inline void usb_offload_set_pm_locked(struct usb_device *udev, bool locked) 861 + { } 864 862 #endif 865 863 866 864 extern int usb_disable_lpm(struct usb_device *udev);
+1
include/net/netns/mpls.h
··· 17 17 size_t platform_labels; 18 18 struct mpls_route __rcu * __rcu *platform_label; 19 19 struct mutex platform_mutex; 20 + seqcount_mutex_t platform_label_seq; 20 21 21 22 struct ctl_table_header *ctl; 22 23 };
+5 -2
io_uring/io_uring.c
··· 2015 2015 if (ctx->flags & IORING_SETUP_SQ_REWIND) 2016 2016 entries = ctx->sq_entries; 2017 2017 else 2018 - entries = io_sqring_entries(ctx); 2018 + entries = __io_sqring_entries(ctx); 2019 2019 2020 2020 entries = min(nr, entries); 2021 2021 if (unlikely(!entries)) ··· 2250 2250 */ 2251 2251 poll_wait(file, &ctx->poll_wq, wait); 2252 2252 2253 - if (!io_sqring_full(ctx)) 2253 + rcu_read_lock(); 2254 + 2255 + if (!__io_sqring_full(ctx)) 2254 2256 mask |= EPOLLOUT | EPOLLWRNORM; 2255 2257 2256 2258 /* ··· 2272 2270 if (__io_cqring_events_user(ctx) || io_has_work(ctx)) 2273 2271 mask |= EPOLLIN | EPOLLRDNORM; 2274 2272 2273 + rcu_read_unlock(); 2275 2274 return mask; 2276 2275 } 2277 2276
+29 -5
io_uring/io_uring.h
··· 142 142 #endif 143 143 }; 144 144 145 + static inline struct io_rings *io_get_rings(struct io_ring_ctx *ctx) 146 + { 147 + return rcu_dereference_check(ctx->rings_rcu, 148 + lockdep_is_held(&ctx->uring_lock) || 149 + lockdep_is_held(&ctx->completion_lock)); 150 + } 151 + 145 152 static inline bool io_should_wake(struct io_wait_queue *iowq) 146 153 { 147 154 struct io_ring_ctx *ctx = iowq->ctx; 148 - int dist = READ_ONCE(ctx->rings->cq.tail) - (int) iowq->cq_tail; 155 + struct io_rings *rings; 156 + int dist; 157 + 158 + guard(rcu)(); 159 + rings = io_get_rings(ctx); 149 160 150 161 /* 151 162 * Wake up if we have enough events, or if a timeout occurred since we 152 163 * started waiting. For timeouts, we always want to return to userspace, 153 164 * regardless of event count. 154 165 */ 166 + dist = READ_ONCE(rings->cq.tail) - (int) iowq->cq_tail; 155 167 return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts; 156 168 } 157 169 ··· 443 431 __io_wq_wake(&ctx->cq_wait); 444 432 } 445 433 446 - static inline bool io_sqring_full(struct io_ring_ctx *ctx) 434 + static inline bool __io_sqring_full(struct io_ring_ctx *ctx) 447 435 { 448 - struct io_rings *r = ctx->rings; 436 + struct io_rings *r = io_get_rings(ctx); 449 437 450 438 /* 451 439 * SQPOLL must use the actual sqring head, as using the cached_sq_head ··· 457 445 return READ_ONCE(r->sq.tail) - READ_ONCE(r->sq.head) == ctx->sq_entries; 458 446 } 459 447 460 - static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx) 448 + static inline bool io_sqring_full(struct io_ring_ctx *ctx) 461 449 { 462 - struct io_rings *rings = ctx->rings; 450 + guard(rcu)(); 451 + return __io_sqring_full(ctx); 452 + } 453 + 454 + static inline unsigned int __io_sqring_entries(struct io_ring_ctx *ctx) 455 + { 456 + struct io_rings *rings = io_get_rings(ctx); 463 457 unsigned int entries; 464 458 465 459 /* make sure SQ entry isn't read before tail */ 466 460 entries = smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head; 467 461 return min(entries, ctx->sq_entries); 462 + } 463 + 464 + static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx) 465 + { 466 + guard(rcu)(); 467 + return __io_sqring_entries(ctx); 468 468 } 469 469 470 470 /*
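With the rings moved behind ctx->rings_rcu, any reader outside uring_lock or completion_lock must hold RCU across both the io_get_rings() dereference and every access through the returned pointer:

    unsigned int tail;

    scoped_guard(rcu) {
            struct io_rings *rings = io_get_rings(ctx);

            /* rings must not escape the RCU section */
            tail = READ_ONCE(rings->cq.tail);
    }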
+4
io_uring/net.c
··· 421 421 422 422 sr->done_io = 0; 423 423 sr->len = READ_ONCE(sqe->len); 424 + if (unlikely(sr->len < 0)) 425 + return -EINVAL; 424 426 sr->flags = READ_ONCE(sqe->ioprio); 425 427 if (sr->flags & ~SENDMSG_FLAGS) 426 428 return -EINVAL; ··· 793 791 794 792 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 795 793 sr->len = READ_ONCE(sqe->len); 794 + if (unlikely(sr->len < 0)) 795 + return -EINVAL; 796 796 sr->flags = READ_ONCE(sqe->ioprio); 797 797 if (sr->flags & ~RECVMSG_FLAGS) 798 798 return -EINVAL;
+9 -1
io_uring/register.c
··· 178 178 return -EBUSY; 179 179 180 180 ret = io_parse_restrictions(arg, nr_args, &ctx->restrictions); 181 - /* Reset all restrictions if an error happened */ 181 + /* 182 + * Reset all restrictions if an error happened, but retain any COW'ed 183 + * settings. 184 + */ 182 185 if (ret < 0) { 186 + struct io_bpf_filters *bpf = ctx->restrictions.bpf_filters; 187 + bool cowed = ctx->restrictions.bpf_filters_cow; 188 + 183 189 memset(&ctx->restrictions, 0, sizeof(ctx->restrictions)); 190 + ctx->restrictions.bpf_filters = bpf; 191 + ctx->restrictions.bpf_filters_cow = cowed; 184 192 return ret; 185 193 } 186 194 if (ctx->restrictions.op_registered)
+4
io_uring/rsrc.c
··· 1061 1061 return ret; 1062 1062 if (!(imu->dir & (1 << ddir))) 1063 1063 return -EFAULT; 1064 + if (unlikely(!len)) { 1065 + iov_iter_bvec(iter, ddir, NULL, 0, 0); 1066 + return 0; 1067 + } 1064 1068 1065 1069 offset = buf_addr - imu->ubuf; 1066 1070
+31 -19
io_uring/wait.c
··· 79 79 if (io_has_work(ctx)) 80 80 goto out_wake; 81 81 /* got events since we started waiting, min timeout is done */ 82 - if (iowq->cq_min_tail != READ_ONCE(ctx->rings->cq.tail)) 83 - goto out_wake; 84 - /* if we have any events and min timeout expired, we're done */ 85 - if (io_cqring_events(ctx)) 86 - goto out_wake; 82 + scoped_guard(rcu) { 83 + struct io_rings *rings = io_get_rings(ctx); 87 84 85 + if (iowq->cq_min_tail != READ_ONCE(rings->cq.tail)) 86 + goto out_wake; 87 + /* if we have any events and min timeout expired, we're done */ 88 + if (io_cqring_events(ctx)) 89 + goto out_wake; 90 + } 88 91 /* 89 92 * If using deferred task_work running and application is waiting on 90 93 * more than one request, ensure we reset it now where we are switching ··· 189 186 struct ext_arg *ext_arg) 190 187 { 191 188 struct io_wait_queue iowq; 192 - struct io_rings *rings = ctx->rings; 189 + struct io_rings *rings; 193 190 ktime_t start_time; 194 - int ret; 191 + int ret, nr_wait; 195 192 196 193 min_events = min_t(int, min_events, ctx->cq_entries); 197 194 ··· 204 201 205 202 if (unlikely(test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))) 206 203 io_cqring_do_overflow_flush(ctx); 207 - if (__io_cqring_events_user(ctx) >= min_events) 204 + 205 + rcu_read_lock(); 206 + rings = io_get_rings(ctx); 207 + if (__io_cqring_events_user(ctx) >= min_events) { 208 + rcu_read_unlock(); 208 209 return 0; 210 + } 209 211 210 212 init_waitqueue_func_entry(&iowq.wq, io_wake_function); 211 213 iowq.wq.private = current; 212 214 INIT_LIST_HEAD(&iowq.wq.entry); 213 215 iowq.ctx = ctx; 214 - iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events; 215 - iowq.cq_min_tail = READ_ONCE(ctx->rings->cq.tail); 216 + iowq.cq_tail = READ_ONCE(rings->cq.head) + min_events; 217 + iowq.cq_min_tail = READ_ONCE(rings->cq.tail); 218 + nr_wait = (int) iowq.cq_tail - READ_ONCE(rings->cq.tail); 219 + rcu_read_unlock(); 220 + rings = NULL; 216 221 iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts); 217 222 iowq.hit_timeout = 0; 218 223 iowq.min_timeout = ext_arg->min_time; ··· 251 240 trace_io_uring_cqring_wait(ctx, min_events); 252 241 do { 253 242 unsigned long check_cq; 254 - int nr_wait; 255 - 256 - /* if min timeout has been hit, don't reset wait count */ 257 - if (!iowq.hit_timeout) 258 - nr_wait = (int) iowq.cq_tail - 259 - READ_ONCE(ctx->rings->cq.tail); 260 - else 261 - nr_wait = 1; 262 243 263 244 if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) { 264 245 atomic_set(&ctx->cq_wait_nr, nr_wait); ··· 301 298 break; 302 299 } 303 300 cond_resched(); 301 + 302 + /* if min timeout has been hit, don't reset wait count */ 303 + if (!iowq.hit_timeout) 304 + scoped_guard(rcu) 305 + nr_wait = (int) iowq.cq_tail - 306 + READ_ONCE(io_get_rings(ctx)->cq.tail); 307 + else 308 + nr_wait = 1; 304 309 } while (1); 305 310 306 311 if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) 307 312 finish_wait(&ctx->cq_wait, &iowq.wq); 308 313 restore_saved_sigmask_unless(ret == -EINTR); 309 314 310 - return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0; 315 + guard(rcu)(); 316 + return READ_ONCE(io_get_rings(ctx)->cq.head) == READ_ONCE(io_get_rings(ctx)->cq.tail) ? ret : 0; 311 317 }
+5 -2
io_uring/wait.h
··· 28 28 29 29 static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx) 30 30 { 31 - return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head); 31 + struct io_rings *rings = io_get_rings(ctx); 32 + return ctx->cached_cq_tail - READ_ONCE(rings->cq.head); 32 33 } 33 34 34 35 static inline unsigned int __io_cqring_events_user(struct io_ring_ctx *ctx) 35 36 { 36 - return READ_ONCE(ctx->rings->cq.tail) - READ_ONCE(ctx->rings->cq.head); 37 + struct io_rings *rings = io_get_rings(ctx); 38 + 39 + return READ_ONCE(rings->cq.tail) - READ_ONCE(rings->cq.head); 37 40 } 38 41 39 42 /*
+23 -2
kernel/bpf/syscall.c
··· 3261 3261 bpf_link_dealloc(link); 3262 3262 } 3263 3263 3264 + static bool bpf_link_is_tracepoint(struct bpf_link *link) 3265 + { 3266 + /* 3267 + * Only these combinations support a tracepoint bpf_link. 3268 + * BPF_LINK_TYPE_TRACING raw_tp progs are hardcoded to use 3269 + * bpf_raw_tp_link_lops and thus dealloc_deferred(), see 3270 + * bpf_raw_tp_link_attach(). 3271 + */ 3272 + return link->type == BPF_LINK_TYPE_RAW_TRACEPOINT || 3273 + (link->type == BPF_LINK_TYPE_TRACING && link->attach_type == BPF_TRACE_RAW_TP); 3274 + } 3275 + 3264 3276 static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu) 3265 3277 { 3266 3278 if (rcu_trace_implies_rcu_gp()) ··· 3291 3279 if (link->prog) 3292 3280 ops->release(link); 3293 3281 if (ops->dealloc_deferred) { 3294 - /* Schedule BPF link deallocation, which will only then 3282 + /* 3283 + * Schedule BPF link deallocation, which will only then 3295 3284 * trigger putting BPF program refcount. 3296 3285 * If underlying BPF program is sleepable or BPF link's target 3297 3286 * attach hookpoint is sleepable or otherwise requires RCU GPs 3298 3287 * to ensure link and its underlying BPF program is not 3299 3288 * reachable anymore, we need to first wait for RCU tasks 3300 - * trace sync, and then go through "classic" RCU grace period 3289 + * trace sync, and then go through "classic" RCU grace period. 3290 + * 3291 + * For tracepoint BPF links, we need to go through SRCU grace 3292 + * period wait instead when non-faultable tracepoint is used. We 3293 + * don't need to chain SRCU grace period waits, however, for the 3294 + * faultable case, since it exclusively uses RCU Tasks Trace. 3301 3295 */ 3302 3296 if (link->sleepable || (link->prog && link->prog->sleepable)) 3303 3297 call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp); 3298 + /* We need to do a SRCU grace period wait for non-faultable tracepoint BPF links. */ 3299 + else if (bpf_link_is_tracepoint(link)) 3300 + call_tracepoint_unregister_atomic(&link->rcu, bpf_link_defer_dealloc_rcu_gp); 3304 3301 else 3305 3302 call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp); 3306 3303 } else if (ops->dealloc) {
+32 -5
kernel/bpf/verifier.c
··· 617 617 insn->imm == BPF_LOAD_ACQ; 618 618 } 619 619 620 + static bool is_atomic_fetch_insn(const struct bpf_insn *insn) 621 + { 622 + return BPF_CLASS(insn->code) == BPF_STX && 623 + BPF_MODE(insn->code) == BPF_ATOMIC && 624 + (insn->imm & BPF_FETCH); 625 + } 626 + 620 627 static int __get_spi(s32 off) 621 628 { 622 629 return (-off - 1) / BPF_REG_SIZE; ··· 4454 4447 * dreg still needs precision before this insn 4455 4448 */ 4456 4449 } 4457 - } else if (class == BPF_LDX || is_atomic_load_insn(insn)) { 4458 - if (!bt_is_reg_set(bt, dreg)) 4450 + } else if (class == BPF_LDX || 4451 + is_atomic_load_insn(insn) || 4452 + is_atomic_fetch_insn(insn)) { 4453 + u32 load_reg = dreg; 4454 + 4455 + /* 4456 + * Atomic fetch operation writes the old value into 4457 + * a register (sreg or r0) and if it was tracked for 4458 + * precision, propagate to the stack slot like we do 4459 + * in regular ldx. 4460 + */ 4461 + if (is_atomic_fetch_insn(insn)) 4462 + load_reg = insn->imm == BPF_CMPXCHG ? 4463 + BPF_REG_0 : sreg; 4464 + 4465 + if (!bt_is_reg_set(bt, load_reg)) 4459 4466 return 0; 4460 - bt_clear_reg(bt, dreg); 4467 + bt_clear_reg(bt, load_reg); 4461 4468 4462 4469 /* scalars can only be spilled into stack w/o losing precision. 4463 4470 * Load from any other memory can be zero extended. ··· 7926 7905 } else if (reg->type == CONST_PTR_TO_MAP) { 7927 7906 err = check_ptr_to_map_access(env, regs, regno, off, size, t, 7928 7907 value_regno); 7929 - } else if (base_type(reg->type) == PTR_TO_BUF) { 7908 + } else if (base_type(reg->type) == PTR_TO_BUF && 7909 + !type_may_be_null(reg->type)) { 7930 7910 bool rdonly_mem = type_is_rdonly_mem(reg->type); 7931 7911 u32 *max_access; 7932 7912 ··· 19937 19915 * since someone could have accessed through (ptr - k), or 19938 19916 * even done ptr -= k in a register, to get a safe access. 19939 19917 */ 19940 - if (rold->range > rcur->range) 19918 + if (rold->range < 0 || rcur->range < 0) { 19919 + /* special case for [BEYOND|AT]_PKT_END */ 19920 + if (rold->range != rcur->range) 19921 + return false; 19922 + } else if (rold->range > rcur->range) { 19941 19923 return false; 19924 + } 19942 19925 /* If the offsets don't match, we can't trust our alignment; 19943 19926 * nor can we be sure that we won't fall out of range. 19944 19927 */
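For the precision-tracking change: an atomic fetch writes the old memory value into a register, so backtracking must clear that register and mark the spilled stack slot precise, exactly as for a plain load. Which register receives the value depends on the operation; sketched with the insn macros from include/linux/filter.h:

    /* lock *(u64 *)(r10 - 8) += r2, old value fetched back into r2 */
    BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_2, -8),

    /* BPF_CMPXCHG: r0 always receives the old value, not the src register */
    BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_2, -8),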
+85 -3
kernel/cgroup/cgroup.c
··· 2126 2126 #endif 2127 2127 2128 2128 init_waitqueue_head(&cgrp->offline_waitq); 2129 + init_waitqueue_head(&cgrp->dying_populated_waitq); 2129 2130 INIT_WORK(&cgrp->release_agent_work, cgroup1_release_agent); 2130 2131 } 2131 2132 ··· 6225 6224 return 0; 6226 6225 }; 6227 6226 6227 + /** 6228 + * cgroup_drain_dying - wait for dying tasks to leave before rmdir 6229 + * @cgrp: the cgroup being removed 6230 + * 6231 + * cgroup.procs and cgroup.threads use css_task_iter which filters out 6232 + * PF_EXITING tasks so that userspace doesn't see tasks that have already been 6233 + * reaped via waitpid(). However, cgroup_has_tasks() - which tests whether the 6234 + * cgroup has non-empty css_sets - is only updated when dying tasks pass through 6235 + * cgroup_task_dead() in finish_task_switch(). This creates a window where 6236 + * cgroup.procs reads empty but cgroup_has_tasks() is still true, making rmdir 6237 + * fail with -EBUSY from cgroup_destroy_locked() even though userspace sees no 6238 + * tasks. 6239 + * 6240 + * This function aligns cgroup_has_tasks() with what userspace can observe. If 6241 + * cgroup_has_tasks() but the task iterator sees nothing (all remaining tasks are 6242 + * PF_EXITING), we wait for cgroup_task_dead() to finish processing them. As the 6243 + * window between PF_EXITING and cgroup_task_dead() is short, the wait is brief. 6244 + * 6245 + * This function only concerns itself with this cgroup's own dying tasks. 6246 + * Whether the cgroup has children is cgroup_destroy_locked()'s problem. 6247 + * 6248 + * Each cgroup_task_dead() kicks the waitqueue via cset->cgrp_links, and we 6249 + * retry the full check from scratch. 6250 + * 6251 + * Must be called with cgroup_mutex held. 6252 + */ 6253 + static int cgroup_drain_dying(struct cgroup *cgrp) 6254 + __releases(&cgroup_mutex) __acquires(&cgroup_mutex) 6255 + { 6256 + struct css_task_iter it; 6257 + struct task_struct *task; 6258 + DEFINE_WAIT(wait); 6259 + 6260 + lockdep_assert_held(&cgroup_mutex); 6261 + retry: 6262 + if (!cgroup_has_tasks(cgrp)) 6263 + return 0; 6264 + 6265 + /* Same iterator as cgroup.threads - if any task is visible, it's busy */ 6266 + css_task_iter_start(&cgrp->self, 0, &it); 6267 + task = css_task_iter_next(&it); 6268 + css_task_iter_end(&it); 6269 + 6270 + if (task) 6271 + return -EBUSY; 6272 + 6273 + /* 6274 + * All remaining tasks are PF_EXITING and will pass through 6275 + * cgroup_task_dead() shortly. Wait for a kick and retry. 6276 + * 6277 + * cgroup_has_tasks() can't transition from false to true while we're 6278 + * holding cgroup_mutex, but the true to false transition happens 6279 + * under css_set_lock (via cgroup_task_dead()). We must retest and 6280 + * prepare_to_wait() under css_set_lock. Otherwise, the transition 6281 + * can happen between our first test and prepare_to_wait(), and we 6282 + * sleep with no one to wake us. 
6283 + */ 6284 + spin_lock_irq(&css_set_lock); 6285 + if (!cgroup_has_tasks(cgrp)) { 6286 + spin_unlock_irq(&css_set_lock); 6287 + return 0; 6288 + } 6289 + prepare_to_wait(&cgrp->dying_populated_waitq, &wait, 6290 + TASK_UNINTERRUPTIBLE); 6291 + spin_unlock_irq(&css_set_lock); 6292 + mutex_unlock(&cgroup_mutex); 6293 + schedule(); 6294 + finish_wait(&cgrp->dying_populated_waitq, &wait); 6295 + mutex_lock(&cgroup_mutex); 6296 + goto retry; 6297 + } 6298 + 6228 6299 int cgroup_rmdir(struct kernfs_node *kn) 6229 6300 { 6230 6301 struct cgroup *cgrp; ··· 6306 6233 if (!cgrp) 6307 6234 return 0; 6308 6235 6309 - ret = cgroup_destroy_locked(cgrp); 6310 - if (!ret) 6311 - TRACE_CGROUP_PATH(rmdir, cgrp); 6236 + ret = cgroup_drain_dying(cgrp); 6237 + if (!ret) { 6238 + ret = cgroup_destroy_locked(cgrp); 6239 + if (!ret) 6240 + TRACE_CGROUP_PATH(rmdir, cgrp); 6241 + } 6312 6242 6313 6243 cgroup_kn_unlock(kn); 6314 6244 return ret; ··· 7071 6995 7072 6996 static void do_cgroup_task_dead(struct task_struct *tsk) 7073 6997 { 6998 + struct cgrp_cset_link *link; 7074 6999 struct css_set *cset; 7075 7000 unsigned long flags; 7076 7001 ··· 7084 7007 /* matches the signal->live check in css_task_iter_advance() */ 7085 7008 if (thread_group_leader(tsk) && atomic_read(&tsk->signal->live)) 7086 7009 list_add_tail(&tsk->cg_list, &cset->dying_tasks); 7010 + 7011 + /* kick cgroup_drain_dying() waiters, see cgroup_rmdir() */ 7012 + list_for_each_entry(link, &cset->cgrp_links, cgrp_link) 7013 + if (waitqueue_active(&link->cgrp->dying_populated_waitq)) 7014 + wake_up(&link->cgrp->dying_populated_waitq); 7087 7015 7088 7016 if (dl_task(tsk)) 7089 7017 dec_dl_tasks_cs(tsk);
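The cgroup_drain_dying() hunk above is an instance of the classic lost-wakeup discipline: retest the condition and arm the waiter under the same lock (css_set_lock) that serializes the condition's only true-to-false transition, and only then drop the locks and sleep. A rough userspace analogue using a pthread mutex and condvar, where the mutex plays css_set_lock; every name below is illustrative, not kernel API:

    /* Userspace sketch of the "retest and arm under the lock" pattern in
     * cgroup_drain_dying(). The mutex plays css_set_lock, the condvar
     * plays dying_populated_waitq; all names are illustrative. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t waitq = PTHREAD_COND_INITIALIZER;
    static int dying_tasks = 1;

    static void drain(void)
    {
            pthread_mutex_lock(&lock);
            /* The count only drops under 'lock', so testing it here cannot
             * race with the wakeup below: no lost-wakeup window. */
            while (dying_tasks > 0)
                    pthread_cond_wait(&waitq, &lock);
            pthread_mutex_unlock(&lock);
    }

    static void *task_dead(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            dying_tasks--;                  /* plays cgroup_task_dead() */
            pthread_cond_broadcast(&waitq); /* plays wake_up() */
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, task_dead, NULL);
            drain();
            pthread_join(t, NULL);
            puts("drained");
            return 0;
    }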
+20 -9
kernel/cgroup/cpuset.c
··· 2988 2988 struct cgroup_subsys_state *css; 2989 2989 struct cpuset *cs, *oldcs; 2990 2990 struct task_struct *task; 2991 - bool cpus_updated, mems_updated; 2991 + bool setsched_check; 2992 2992 int ret; 2993 2993 2994 2994 /* used later by cpuset_attach() */ ··· 3003 3003 if (ret) 3004 3004 goto out_unlock; 3005 3005 3006 - cpus_updated = !cpumask_equal(cs->effective_cpus, oldcs->effective_cpus); 3007 - mems_updated = !nodes_equal(cs->effective_mems, oldcs->effective_mems); 3006 + /* 3007 + * Skip the security_task_setscheduler() check in v2 when nothing 3008 + * changes; migration permission derives from hierarchy ownership in 3009 + * cgroup_procs_write_permission(). 3010 + */ 3011 + setsched_check = !cpuset_v2() || 3012 + !cpumask_equal(cs->effective_cpus, oldcs->effective_cpus) || 3013 + !nodes_equal(cs->effective_mems, oldcs->effective_mems); 3014 + 3015 + /* 3016 + * A v1 cpuset with tasks will have no CPU left only when CPU hotplug 3017 + * brings the last online CPU offline, as users are not allowed to 3018 + * empty cpuset.cpus while there are active tasks inside. When that 3019 + * happens, we should allow tasks to migrate out without the security 3020 + * check, to make sure they will be able to run after migration. 3021 + */ 3022 + if (!is_in_v2_mode() && cpumask_empty(oldcs->effective_cpus)) 3023 + setsched_check = false; 3008 3024 3009 3025 cgroup_taskset_for_each(task, css, tset) { 3010 3026 ret = task_can_attach(task); 3011 3027 if (ret) 3012 3028 goto out_unlock; 3013 3029 3014 - /* 3015 - * Skip rights over task check in v2 when nothing changes, 3016 - * migration permission derives from hierarchy ownership in 3017 - * cgroup_procs_write_permission()). 3018 - */ 3019 - if (!cpuset_v2() || (cpus_updated || mems_updated)) { 3030 + if (setsched_check) { 3020 3031 ret = security_task_setscheduler(task); 3021 3032 if (ret) 3022 3033 goto out_unlock;
+2
kernel/power/em_netlink.c
··· 109 109 110 110 id = nla_get_u32(info->attrs[DEV_ENERGYMODEL_A_PERF_DOMAIN_PERF_DOMAIN_ID]); 111 111 pd = em_perf_domain_get_by_id(id); 112 + if (!pd) 113 + return -EINVAL; 112 114 113 115 __em_nl_get_pd_size(pd, &msg_sz); 114 116 msg = genlmsg_new(msg_sz, GFP_KERNEL);
+3 -1
kernel/sched/debug.c
··· 902 902 void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq) 903 903 { 904 904 s64 left_vruntime = -1, zero_vruntime, right_vruntime = -1, left_deadline = -1, spread; 905 + u64 avruntime; 905 906 struct sched_entity *last, *first, *root; 906 907 struct rq *rq = cpu_rq(cpu); 907 908 unsigned long flags; ··· 926 925 if (last) 927 926 right_vruntime = last->vruntime; 928 927 zero_vruntime = cfs_rq->zero_vruntime; 928 + avruntime = avg_vruntime(cfs_rq); 929 929 raw_spin_rq_unlock_irqrestore(rq, flags); 930 930 931 931 SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "left_deadline", ··· 936 934 SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "zero_vruntime", 937 935 SPLIT_NS(zero_vruntime)); 938 936 SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "avg_vruntime", 939 - SPLIT_NS(avg_vruntime(cfs_rq))); 937 + SPLIT_NS(avruntime)); 940 938 SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "right_vruntime", 941 939 SPLIT_NS(right_vruntime)); 942 940 spread = right_vruntime - left_vruntime;
+105 -39
kernel/sched/ext.c
··· 1110 1110 p->scx.dsq = dsq; 1111 1111 1112 1112 /* 1113 - * scx.ddsp_dsq_id and scx.ddsp_enq_flags are only relevant on the 1114 - * direct dispatch path, but we clear them here because the direct 1115 - * dispatch verdict may be overridden on the enqueue path during e.g. 1116 - * bypass. 1117 - */ 1118 - p->scx.ddsp_dsq_id = SCX_DSQ_INVALID; 1119 - p->scx.ddsp_enq_flags = 0; 1120 - 1121 - /* 1122 1113 * We're transitioning out of QUEUEING or DISPATCHING. store_release to 1123 1114 * match waiters' load_acquire. 1124 1115 */ ··· 1274 1283 p->scx.ddsp_enq_flags = enq_flags; 1275 1284 } 1276 1285 1286 + /* 1287 + * Clear @p direct dispatch state when leaving the scheduler. 1288 + * 1289 + * Direct dispatch state must be cleared in the following cases: 1290 + * - direct_dispatch(): cleared on the synchronous enqueue path, deferred 1291 + * dispatch keeps the state until consumed 1292 + * - process_ddsp_deferred_locals(): cleared after consuming deferred state, 1293 + * - do_enqueue_task(): cleared on enqueue fallbacks where the dispatch 1294 + * verdict is ignored (local/global/bypass) 1295 + * - dequeue_task_scx(): cleared after dispatch_dequeue(), covering deferred 1296 + * cancellation and holding_cpu races 1297 + * - scx_disable_task(): cleared for queued wakeup tasks, which are excluded by 1298 + * the scx_bypass() loop, so that stale state is not reused by a subsequent 1299 + * scheduler instance 1300 + */ 1301 + static inline void clear_direct_dispatch(struct task_struct *p) 1302 + { 1303 + p->scx.ddsp_dsq_id = SCX_DSQ_INVALID; 1304 + p->scx.ddsp_enq_flags = 0; 1305 + } 1306 + 1277 1307 static void direct_dispatch(struct scx_sched *sch, struct task_struct *p, 1278 1308 u64 enq_flags) 1279 1309 { 1280 1310 struct rq *rq = task_rq(p); 1281 1311 struct scx_dispatch_q *dsq = 1282 1312 find_dsq_for_dispatch(sch, rq, p->scx.ddsp_dsq_id, p); 1313 + u64 ddsp_enq_flags; 1283 1314 1284 1315 touch_core_sched_dispatch(rq, p); 1285 1316 ··· 1342 1329 return; 1343 1330 } 1344 1331 1345 - dispatch_enqueue(sch, dsq, p, 1346 - p->scx.ddsp_enq_flags | SCX_ENQ_CLEAR_OPSS); 1332 + ddsp_enq_flags = p->scx.ddsp_enq_flags; 1333 + clear_direct_dispatch(p); 1334 + 1335 + dispatch_enqueue(sch, dsq, p, ddsp_enq_flags | SCX_ENQ_CLEAR_OPSS); 1347 1336 } 1348 1337 1349 1338 static bool scx_rq_online(struct rq *rq) ··· 1454 1439 */ 1455 1440 touch_core_sched(rq, p); 1456 1441 refill_task_slice_dfl(sch, p); 1442 + clear_direct_dispatch(p); 1457 1443 dispatch_enqueue(sch, dsq, p, enq_flags); 1458 1444 } 1459 1445 ··· 1626 1610 sub_nr_running(rq, 1); 1627 1611 1628 1612 dispatch_dequeue(rq, p); 1613 + clear_direct_dispatch(p); 1629 1614 return true; 1630 1615 } 1631 1616 ··· 2310 2293 struct task_struct, scx.dsq_list.node))) { 2311 2294 struct scx_sched *sch = scx_root; 2312 2295 struct scx_dispatch_q *dsq; 2296 + u64 dsq_id = p->scx.ddsp_dsq_id; 2297 + u64 enq_flags = p->scx.ddsp_enq_flags; 2313 2298 2314 2299 list_del_init(&p->scx.dsq_list.node); 2300 + clear_direct_dispatch(p); 2315 2301 2316 - dsq = find_dsq_for_dispatch(sch, rq, p->scx.ddsp_dsq_id, p); 2302 + dsq = find_dsq_for_dispatch(sch, rq, dsq_id, p); 2317 2303 if (!WARN_ON_ONCE(dsq->id != SCX_DSQ_LOCAL)) 2318 - dispatch_to_local_dsq(sch, rq, dsq, p, 2319 - p->scx.ddsp_enq_flags); 2304 + dispatch_to_local_dsq(sch, rq, dsq, p, enq_flags); 2320 2305 } 2321 2306 } 2322 2307 ··· 2423 2404 { 2424 2405 struct scx_sched *sch = scx_root; 2425 2406 2426 - /* see kick_cpus_irq_workfn() */ 2407 + /* see kick_sync_wait_bal_cb() */ 2427 2408 
smp_store_release(&rq->scx.kick_sync, rq->scx.kick_sync + 1); 2428 2409 2429 2410 update_curr_scx(rq); ··· 2466 2447 switch_class(rq, next); 2467 2448 } 2468 2449 2450 + static void kick_sync_wait_bal_cb(struct rq *rq) 2451 + { 2452 + struct scx_kick_syncs __rcu *ks = __this_cpu_read(scx_kick_syncs); 2453 + unsigned long *ksyncs = rcu_dereference_sched(ks)->syncs; 2454 + bool waited; 2455 + s32 cpu; 2456 + 2457 + /* 2458 + * Drop rq lock and enable IRQs while waiting. IRQs must be enabled 2459 + * — a target CPU may be waiting for us to process an IPI (e.g. TLB 2460 + * flush) while we wait for its kick_sync to advance. 2461 + * 2462 + * Also, keep advancing our own kick_sync so that new kick_sync waits 2463 + * targeting us, which can start after we drop the lock, cannot form 2464 + * cyclic dependencies. 2465 + */ 2466 + retry: 2467 + waited = false; 2468 + for_each_cpu(cpu, rq->scx.cpus_to_sync) { 2469 + /* 2470 + * smp_load_acquire() pairs with smp_store_release() on 2471 + * kick_sync updates on the target CPUs. 2472 + */ 2473 + if (cpu == cpu_of(rq) || 2474 + smp_load_acquire(&cpu_rq(cpu)->scx.kick_sync) != ksyncs[cpu]) { 2475 + cpumask_clear_cpu(cpu, rq->scx.cpus_to_sync); 2476 + continue; 2477 + } 2478 + 2479 + raw_spin_rq_unlock_irq(rq); 2480 + while (READ_ONCE(cpu_rq(cpu)->scx.kick_sync) == ksyncs[cpu]) { 2481 + smp_store_release(&rq->scx.kick_sync, rq->scx.kick_sync + 1); 2482 + cpu_relax(); 2483 + } 2484 + raw_spin_rq_lock_irq(rq); 2485 + waited = true; 2486 + } 2487 + 2488 + if (waited) 2489 + goto retry; 2490 + } 2491 + 2469 2492 static struct task_struct *first_local_task(struct rq *rq) 2470 2493 { 2471 2494 return list_first_entry_or_null(&rq->scx.local_dsq.list, ··· 2521 2460 bool keep_prev; 2522 2461 struct task_struct *p; 2523 2462 2524 - /* see kick_cpus_irq_workfn() */ 2463 + /* see kick_sync_wait_bal_cb() */ 2525 2464 smp_store_release(&rq->scx.kick_sync, rq->scx.kick_sync + 1); 2526 2465 2527 2466 rq_modified_begin(rq, &ext_sched_class); ··· 2530 2469 balance_one(rq, prev); 2531 2470 rq_repin_lock(rq, rf); 2532 2471 maybe_queue_balance_callback(rq); 2472 + 2473 + /* 2474 + * Defer to a balance callback which can drop rq lock and enable 2475 + * IRQs. Waiting directly in the pick path would deadlock against 2476 + * CPUs sending us IPIs (e.g. TLB flushes) while we wait for them. 
2477 + */ 2478 + if (unlikely(rq->scx.kick_sync_pending)) { 2479 + rq->scx.kick_sync_pending = false; 2480 + queue_balance_callback(rq, &rq->scx.kick_sync_bal_cb, 2481 + kick_sync_wait_bal_cb); 2482 + } 2533 2483 2534 2484 /* 2535 2485 * If any higher-priority sched class enqueued a runnable task on ··· 3033 2961 3034 2962 lockdep_assert_rq_held(rq); 3035 2963 WARN_ON_ONCE(scx_get_task_state(p) != SCX_TASK_ENABLED); 2964 + 2965 + clear_direct_dispatch(p); 3036 2966 3037 2967 if (SCX_HAS_OP(sch, disable)) 3038 2968 SCX_CALL_OP_TASK(sch, SCX_KF_REST, disable, rq, p); ··· 4787 4713 if (!cpumask_empty(rq->scx.cpus_to_wait)) 4788 4714 dump_line(&ns, " cpus_to_wait : %*pb", 4789 4715 cpumask_pr_args(rq->scx.cpus_to_wait)); 4716 + if (!cpumask_empty(rq->scx.cpus_to_sync)) 4717 + dump_line(&ns, " cpus_to_sync : %*pb", 4718 + cpumask_pr_args(rq->scx.cpus_to_sync)); 4790 4719 4791 4720 used = seq_buf_used(&ns); 4792 4721 if (SCX_HAS_OP(sch, dump_cpu)) { ··· 5687 5610 5688 5611 if (cpumask_test_cpu(cpu, this_scx->cpus_to_wait)) { 5689 5612 if (cur_class == &ext_sched_class) { 5613 + cpumask_set_cpu(cpu, this_scx->cpus_to_sync); 5690 5614 ksyncs[cpu] = rq->scx.kick_sync; 5691 5615 should_wait = true; 5692 - } else { 5693 - cpumask_clear_cpu(cpu, this_scx->cpus_to_wait); 5694 5616 } 5617 + cpumask_clear_cpu(cpu, this_scx->cpus_to_wait); 5695 5618 } 5696 5619 5697 5620 resched_curr(rq); ··· 5746 5669 cpumask_clear_cpu(cpu, this_scx->cpus_to_kick_if_idle); 5747 5670 } 5748 5671 5749 - if (!should_wait) 5750 - return; 5751 - 5752 - for_each_cpu(cpu, this_scx->cpus_to_wait) { 5753 - unsigned long *wait_kick_sync = &cpu_rq(cpu)->scx.kick_sync; 5754 - 5755 - /* 5756 - * Busy-wait until the task running at the time of kicking is no 5757 - * longer running. This can be used to implement e.g. core 5758 - * scheduling. 5759 - * 5760 - * smp_cond_load_acquire() pairs with store_releases in 5761 - * pick_task_scx() and put_prev_task_scx(). The former breaks 5762 - * the wait if SCX's scheduling path is entered even if the same 5763 - * task is picked subsequently. The latter is necessary to break 5764 - * the wait when $cpu is taken by a higher sched class. 5765 - */ 5766 - if (cpu != cpu_of(this_rq)) 5767 - smp_cond_load_acquire(wait_kick_sync, VAL != ksyncs[cpu]); 5768 - 5769 - cpumask_clear_cpu(cpu, this_scx->cpus_to_wait); 5672 + /* 5673 + * Can't wait in hardirq — kick_sync can't advance, deadlocking if 5674 + * CPUs wait for each other. Defer to kick_sync_wait_bal_cb(). 5675 + */ 5676 + if (should_wait) { 5677 + raw_spin_rq_lock(this_rq); 5678 + this_scx->kick_sync_pending = true; 5679 + resched_curr(this_rq); 5680 + raw_spin_rq_unlock(this_rq); 5770 5681 } 5771 5682 } 5772 5683 ··· 5859 5794 BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_kick_if_idle, GFP_KERNEL, n)); 5860 5795 BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_preempt, GFP_KERNEL, n)); 5861 5796 BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_wait, GFP_KERNEL, n)); 5797 + BUG_ON(!zalloc_cpumask_var_node(&rq->scx.cpus_to_sync, GFP_KERNEL, n)); 5862 5798 rq->scx.deferred_irq_work = IRQ_WORK_INIT_HARD(deferred_irq_workfn); 5863 5799 rq->scx.kick_cpus_irq_work = IRQ_WORK_INIT_HARD(kick_cpus_irq_workfn); 5864 5800
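The kick_sync machinery above boils down to a per-CPU generation counter: a waiter snapshots the target's counter, and every pass through the scheduling path bumps it with a release store that the waiter observes via acquire loads. A minimal C11 sketch of that handshake, with threads standing in for CPUs and all names invented:

    /* C11 sketch of the kick_sync generation-counter handshake. Threads
     * stand in for CPUs; all names are invented. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_ulong kick_sync;  /* plays rq->scx.kick_sync */

    static void *scheduling_path(void *arg)
    {
            (void)arg;
            /* Entering the scheduling path bumps the generation with
             * release semantics, like the store_release in pick_task_scx(). */
            atomic_fetch_add_explicit(&kick_sync, 1, memory_order_release);
            return NULL;
    }

    int main(void)
    {
            unsigned long snap =
                    atomic_load_explicit(&kick_sync, memory_order_acquire);
            pthread_t target;

            pthread_create(&target, NULL, scheduling_path, NULL);

            /* Waiter: spin until the counter moves past the snapshot. */
            while (atomic_load_explicit(&kick_sync,
                                        memory_order_acquire) == snap)
                    ;       /* cpu_relax() in the kernel version */

            pthread_join(target, NULL);
            puts("target passed through its scheduling path");
            return 0;
    }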
+20 -13
kernel/sched/ext_idle.c
··· 543 543 * piled up on it even if there is an idle core elsewhere on 544 544 * the system. 545 545 */ 546 - waker_node = cpu_to_node(cpu); 546 + waker_node = scx_cpu_node_if_enabled(cpu); 547 547 if (!(current->flags & PF_EXITING) && 548 548 cpu_rq(cpu)->scx.local_dsq.nr == 0 && 549 549 (!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) && ··· 860 860 * code. 861 861 * 862 862 * We can't simply check whether @p->migration_disabled is set in a 863 - * sched_ext callback, because migration is always disabled for the current 864 - * task while running BPF code. 863 + * sched_ext callback, because the BPF prolog (__bpf_prog_enter) may disable 864 + * migration for the current task while running BPF code. 865 865 * 866 - * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively 867 - * disable and re-enable migration. For this reason, the current task 868 - * inside a sched_ext callback is always a migration-disabled task. 866 + * Since the BPF prolog calls migrate_disable() only when CONFIG_PREEMPT_RCU 867 + * is enabled (via rcu_read_lock_dont_migrate()), migration_disabled == 1 for 868 + * the current task is ambiguous only in that case: it could be from the BPF 869 + * prolog rather than a real migrate_disable() call. 869 870 * 870 - * Therefore, when @p->migration_disabled == 1, check whether @p is the 871 - * current task or not: if it is, then migration was not disabled before 872 - * entering the callback, otherwise migration was disabled. 871 + * Without CONFIG_PREEMPT_RCU, the BPF prolog never calls migrate_disable(), 872 + * so migration_disabled == 1 always means the task is truly 873 + * migration-disabled. 874 + * 875 + * Therefore, when migration_disabled == 1 and CONFIG_PREEMPT_RCU is enabled, 876 + * check whether @p is the current task or not: if it is, then migration was 877 + * not disabled before entering the callback, otherwise migration was disabled. 873 878 * 874 879 * Returns true if @p is migration-disabled, false otherwise. 875 880 */ 876 881 static bool is_bpf_migration_disabled(const struct task_struct *p) 877 882 { 878 - if (p->migration_disabled == 1) 879 - return p != current; 880 - else 881 - return p->migration_disabled; 883 + if (p->migration_disabled == 1) { 884 + if (IS_ENABLED(CONFIG_PREEMPT_RCU)) 885 + return p != current; 886 + return true; 887 + } 888 + return p->migration_disabled; 882 889 } 883 890 884 891 static s32 select_cpu_from_kfunc(struct scx_sched *sch, struct task_struct *p,
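The reworked heuristic reduces to a small decision table on (migration_disabled, is current task, CONFIG_PREEMPT_RCU). A standalone model of just that logic, with the config dependency folded into a plain macro; the struct and macro are stand-ins, not kernel definitions:

    /* Standalone model of the is_bpf_migration_disabled() decision.
     * PREEMPT_RCU and struct task are stand-ins, not kernel definitions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define PREEMPT_RCU 1   /* models IS_ENABLED(CONFIG_PREEMPT_RCU) */

    struct task {
            int migration_disabled;
            bool is_current;
    };

    static bool bpf_migration_disabled(const struct task *p)
    {
            if (p->migration_disabled == 1) {
                    /* With PREEMPT_RCU the single count may come from the
                     * BPF prolog, so it only proves anything for
                     * non-current tasks. */
                    if (PREEMPT_RCU)
                            return !p->is_current;
                    return true;    /* prolog never disables migration */
            }
            return p->migration_disabled != 0;
    }

    int main(void)
    {
            struct task cur = { .migration_disabled = 1, .is_current = true };
            struct task other = { .migration_disabled = 1, .is_current = false };

            printf("current: %d, other: %d\n",
                   bpf_migration_disabled(&cur),
                   bpf_migration_disabled(&other));
            return 0;
    }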
+3 -7
kernel/sched/fair.c
··· 707 707 * Called in: 708 708 * - place_entity() -- before enqueue 709 709 * - update_entity_lag() -- before dequeue 710 - * - entity_tick() 710 + * - update_deadline() -- slice expiration 711 711 * 712 712 * This means it is one entry 'behind' but that puts it close enough to where 713 713 * the bound on entity_key() is at most two lag bounds. ··· 1131 1131 * EEVDF: vd_i = ve_i + r_i / w_i 1132 1132 */ 1133 1133 se->deadline = se->vruntime + calc_delta_fair(se->slice, se); 1134 + avg_vruntime(cfs_rq); 1134 1135 1135 1136 /* 1136 1137 * The task has consumed its request, reschedule. ··· 5594 5593 update_load_avg(cfs_rq, curr, UPDATE_TG); 5595 5594 update_cfs_group(curr); 5596 5595 5597 - /* 5598 - * Pulls along cfs_rq::zero_vruntime. 5599 - */ 5600 - avg_vruntime(cfs_rq); 5601 - 5602 5596 #ifdef CONFIG_SCHED_HRTICK 5603 5597 /* 5604 5598 * queued ticks are scheduled to match the slice, so don't bother ··· 9124 9128 */ 9125 9129 if (entity_eligible(cfs_rq, se)) { 9126 9130 se->vruntime = se->deadline; 9127 - se->deadline += calc_delta_fair(se->slice, se); 9131 + update_deadline(cfs_rq, se); 9128 9132 } 9129 9133 } 9130 9134
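update_deadline() computes the EEVDF virtual deadline vd_i = ve_i + r_i / w_i, where the wall-clock request r_i is converted to virtual time by scaling with the nice-0 weight over the entity's weight. A toy version of that arithmetic, using made-up constants rather than the kernel's sched_prio_to_weight[] tables:

    /* Toy EEVDF deadline arithmetic: vd = ve + r/w, with the wall-clock
     * slice scaled by NICE0_WEIGHT/weight into virtual time. Constants
     * are made up for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define NICE0_WEIGHT 1024ULL

    static uint64_t calc_delta_fair(uint64_t slice_ns, uint64_t weight)
    {
            return slice_ns * NICE0_WEIGHT / weight;
    }

    int main(void)
    {
            uint64_t vruntime = 1000000;    /* ve_i, virtual time in ns */
            uint64_t slice = 3000000;       /* r_i = 3 ms of wall clock */
            uint64_t weight = 2048;         /* twice the nice-0 weight */

            /* A heavier entity burns virtual time more slowly, so its
             * virtual deadline lands closer to its vruntime: here the
             * 3 ms request becomes a 1.5 ms virtual delta. */
            uint64_t deadline = vruntime + calc_delta_fair(slice, weight);

            printf("vd_i = %llu (virtual delta %llu ns)\n",
                   (unsigned long long)deadline,
                   (unsigned long long)(deadline - vruntime));
            return 0;
    }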
+3
kernel/sched/sched.h
··· 805 805 cpumask_var_t cpus_to_kick_if_idle; 806 806 cpumask_var_t cpus_to_preempt; 807 807 cpumask_var_t cpus_to_wait; 808 + cpumask_var_t cpus_to_sync; 809 + bool kick_sync_pending; 808 810 unsigned long kick_sync; 809 811 local_t reenq_local_deferred; 810 812 struct balance_callback deferred_bal_cb; 813 + struct balance_callback kick_sync_bal_cb; 811 814 struct irq_work deferred_irq_work; 812 815 struct irq_work kick_cpus_irq_work; 813 816 struct scx_dispatch_q bypass_dsq;
+4
kernel/trace/bpf_trace.c
··· 2752 2752 if (!is_kprobe_multi(prog)) 2753 2753 return -EINVAL; 2754 2754 2755 + /* kprobe_multi is not allowed to be sleepable. */ 2756 + if (prog->sleepable) 2757 + return -EINVAL; 2758 + 2755 2759 /* Writing to context is not allowed for kprobes. */ 2756 2760 if (prog->aux->kprobe_write_ctx) 2757 2761 return -EINVAL;
+22 -3
kernel/workqueue.c
··· 7699 7699 else 7700 7700 ts = touched; 7701 7701 7702 - /* did we stall? */ 7702 + /* 7703 + * Did we stall? 7704 + * 7705 + * Do a lockless check first so as not to disturb the system. 7706 + * 7707 + * Prevent false positives by double-checking the timestamp 7708 + * under pool->lock. The lock makes sure that the check reads 7709 + * an updated pool->last_progress_ts when this CPU saw 7710 + * an already updated pool->worklist above. It seems better 7711 + * than adding another barrier into __queue_work(), which 7712 + * is a hotter path. 7713 + */ 7703 7714 if (time_after(now, ts + thresh)) { 7715 + scoped_guard(raw_spinlock_irqsave, &pool->lock) { 7716 + pool_ts = pool->last_progress_ts; 7717 + if (time_after(pool_ts, touched)) 7718 + ts = pool_ts; 7719 + else 7720 + ts = touched; 7721 + } 7722 + if (!time_after(now, ts + thresh)) 7723 + continue; 7724 + 7704 7725 lockup_detected = true; 7705 7726 stall_time = jiffies_to_msecs(now - pool_ts) / 1000; 7706 7727 max_stall_time = max(max_stall_time, stall_time); ··· 7733 7712 pr_cont_pool_info(pool); 7734 7713 pr_cont(" stuck for %us!\n", stall_time); 7735 7714 } 7736 - 7737 - 7738 7715 }
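The watchdog change above keeps the hot path lockless and takes pool->lock only to confirm a suspected stall. The same "cheap racy check, authoritative locked recheck" shape in userspace terms; the timestamp plumbing is illustrative and deliberately simplified:

    /* Userspace shape of the lockless-then-locked stall check. The
     * timestamp plumbing is deliberately simplified. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_long last_progress; /* workers update under pool_lock */

    static bool stalled(long now, long thresh)
    {
            long ts = atomic_load(&last_progress);

            if (now - ts <= thresh)
                    return false;   /* fast path: no lock touched */

            /* Suspected stall: re-read under the lock so a racing
             * progress update made under pool_lock cannot yield a
             * false positive. */
            pthread_mutex_lock(&pool_lock);
            ts = atomic_load(&last_progress);
            pthread_mutex_unlock(&pool_lock);

            return now - ts > thresh;
    }

    int main(void)
    {
            atomic_store(&last_progress, 100);
            printf("stalled: %d\n", stalled(/*now=*/500, /*thresh=*/200));
            return 0;
    }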
+4
lib/crypto/chacha-block-generic.c
··· 87 87 &out[i * sizeof(u32)]); 88 88 89 89 state->x[12]++; 90 + 91 + chacha_zeroize_state(&permuted_state); 90 92 } 91 93 EXPORT_SYMBOL(chacha_block_generic); 92 94 ··· 112 110 113 111 memcpy(&out[0], &permuted_state.x[0], 16); 114 112 memcpy(&out[4], &permuted_state.x[12], 16); 113 + 114 + chacha_zeroize_state(&permuted_state); 115 115 } 116 116 EXPORT_SYMBOL(hchacha_block_generic);
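chacha_zeroize_state() wipes key-derived state from the stack before returning. In portable C a trailing memset() is exactly the kind of dead store compilers delete, so a sketch of the same idea has to route the wipe through something the optimizer cannot see through (a volatile function pointer here; explicit_bzero() is an alternative where available). The block function below is a stand-in, not the ChaCha permutation:

    /* Sketch of wiping sensitive stack state before returning. A plain
     * memset() is a dead store the optimizer may delete; calling it
     * through a volatile pointer defeats that. Stand-in code, not ChaCha. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void *(*const volatile memset_v)(void *, int, size_t) = memset;

    static void keystream_block(const uint8_t key[32], uint8_t out[64])
    {
            uint32_t state[16];

            memcpy(state, key, 32);         /* stand-in for state setup */
            /* ... permutation rounds would run here ... */
            memcpy(out, state, sizeof(state));

            memset_v(state, 0, sizeof(state)); /* plays chacha_zeroize_state() */
    }

    int main(void)
    {
            uint8_t key[32] = { 1 }, out[64] = { 0 };

            keystream_block(key, out);
            printf("out[0] = %u\n", out[0]);
            return 0;
    }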
+7 -1
net/bluetooth/hci_conn.c
··· 1843 1843 u8 aux_num_cis = 0; 1844 1844 u8 cis_id; 1845 1845 1846 + hci_dev_lock(hdev); 1847 + 1846 1848 conn = hci_conn_hash_lookup_cig(hdev, cig_id); 1847 - if (!conn) 1849 + if (!conn) { 1850 + hci_dev_unlock(hdev); 1848 1851 return 0; 1852 + } 1849 1853 1850 1854 qos = &conn->iso_qos; 1851 1855 pdu->cig_id = cig_id; ··· 1887 1883 cis->p_rtn = qos->ucast.in.rtn; 1888 1884 } 1889 1885 pdu->num_cis = aux_num_cis; 1886 + 1887 + hci_dev_unlock(hdev); 1890 1888 1891 1889 if (!pdu->num_cis) 1892 1890 return 0;
+55 -72
net/bluetooth/hci_event.c
··· 80 80 return data; 81 81 } 82 82 83 + static void hci_store_wake_reason(struct hci_dev *hdev, 84 + const bdaddr_t *bdaddr, u8 addr_type) 85 + __must_hold(&hdev->lock); 86 + 83 87 static u8 hci_cc_inquiry_cancel(struct hci_dev *hdev, void *data, 84 88 struct sk_buff *skb) 85 89 { ··· 3115 3111 bt_dev_dbg(hdev, "status 0x%2.2x", status); 3116 3112 3117 3113 hci_dev_lock(hdev); 3114 + hci_store_wake_reason(hdev, &ev->bdaddr, BDADDR_BREDR); 3118 3115 3119 3116 /* Check for existing connection: 3120 3117 * ··· 3278 3273 __u8 flags = 0; 3279 3274 3280 3275 bt_dev_dbg(hdev, "bdaddr %pMR type 0x%x", &ev->bdaddr, ev->link_type); 3276 + 3277 + hci_dev_lock(hdev); 3278 + hci_store_wake_reason(hdev, &ev->bdaddr, BDADDR_BREDR); 3279 + hci_dev_unlock(hdev); 3281 3280 3282 3281 /* Reject incoming connection from device with same BD ADDR against 3283 3282 * CVE-2020-26555 ··· 5030 5021 bt_dev_dbg(hdev, "status 0x%2.2x", status); 5031 5022 5032 5023 hci_dev_lock(hdev); 5024 + hci_store_wake_reason(hdev, &ev->bdaddr, BDADDR_BREDR); 5033 5025 5034 5026 conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr); 5035 5027 if (!conn) { ··· 5723 5713 int err; 5724 5714 5725 5715 hci_dev_lock(hdev); 5716 + hci_store_wake_reason(hdev, bdaddr, bdaddr_type); 5726 5717 5727 5718 /* All controllers implicitly stop advertising in the event of a 5728 5719 * connection, so ensure that the state bit is cleared. ··· 6016 6005 bt_dev_dbg(hdev, "status 0x%2.2x", ev->status); 6017 6006 6018 6007 hci_dev_lock(hdev); 6008 + hci_store_wake_reason(hdev, &ev->bdaddr, ev->bdaddr_type); 6019 6009 6020 6010 hci_dev_clear_flag(hdev, HCI_PA_SYNC); 6021 6011 ··· 6415 6403 info->length + 1)) 6416 6404 break; 6417 6405 6406 + hci_store_wake_reason(hdev, &info->bdaddr, info->bdaddr_type); 6407 + 6418 6408 if (info->length <= max_adv_len(hdev)) { 6419 6409 rssi = info->data[info->length]; 6420 6410 process_adv_report(hdev, info->type, &info->bdaddr, ··· 6505 6491 info->length)) 6506 6492 break; 6507 6493 6494 + hci_store_wake_reason(hdev, &info->bdaddr, info->bdaddr_type); 6495 + 6508 6496 evt_type = __le16_to_cpu(info->type) & LE_EXT_ADV_EVT_TYPE_MASK; 6509 6497 legacy_evt_type = ext_evt_type_to_legacy(hdev, evt_type); 6510 6498 ··· 6552 6536 bt_dev_dbg(hdev, "status 0x%2.2x", ev->status); 6553 6537 6554 6538 hci_dev_lock(hdev); 6539 + hci_store_wake_reason(hdev, &ev->bdaddr, ev->bdaddr_type); 6555 6540 6556 6541 hci_dev_clear_flag(hdev, HCI_PA_SYNC); 6557 6542 ··· 6784 6767 latency = le16_to_cpu(ev->latency); 6785 6768 timeout = le16_to_cpu(ev->timeout); 6786 6769 6770 + hci_dev_lock(hdev); 6771 + 6787 6772 hcon = hci_conn_hash_lookup_handle(hdev, handle); 6788 - if (!hcon || hcon->state != BT_CONNECTED) 6789 - return send_conn_param_neg_reply(hdev, handle, 6790 - HCI_ERROR_UNKNOWN_CONN_ID); 6773 + if (!hcon || hcon->state != BT_CONNECTED) { 6774 + send_conn_param_neg_reply(hdev, handle, 6775 + HCI_ERROR_UNKNOWN_CONN_ID); 6776 + goto unlock; 6777 + } 6791 6778 6792 - if (max > hcon->le_conn_max_interval) 6793 - return send_conn_param_neg_reply(hdev, handle, 6794 - HCI_ERROR_INVALID_LL_PARAMS); 6779 + if (max > hcon->le_conn_max_interval) { 6780 + send_conn_param_neg_reply(hdev, handle, 6781 + HCI_ERROR_INVALID_LL_PARAMS); 6782 + goto unlock; 6783 + } 6795 6784 6796 - if (hci_check_conn_params(min, max, latency, timeout)) 6797 - return send_conn_param_neg_reply(hdev, handle, 6798 - HCI_ERROR_INVALID_LL_PARAMS); 6785 + if (hci_check_conn_params(min, max, latency, timeout)) { 6786 + send_conn_param_neg_reply(hdev, handle, 6787 + 
HCI_ERROR_INVALID_LL_PARAMS); 6788 + goto unlock; 6789 + } 6799 6790 6800 6791 if (hcon->role == HCI_ROLE_MASTER) { 6801 6792 struct hci_conn_params *params; 6802 6793 u8 store_hint; 6803 - 6804 - hci_dev_lock(hdev); 6805 6794 6806 6795 params = hci_conn_params_lookup(hdev, &hcon->dst, 6807 6796 hcon->dst_type); ··· 6820 6797 } else { 6821 6798 store_hint = 0x00; 6822 6799 } 6823 - 6824 - hci_dev_unlock(hdev); 6825 6800 6826 6801 mgmt_new_conn_param(hdev, &hcon->dst, hcon->dst_type, 6827 6802 store_hint, min, max, latency, timeout); ··· 6834 6813 cp.max_ce_len = 0; 6835 6814 6836 6815 hci_send_cmd(hdev, HCI_OP_LE_CONN_PARAM_REQ_REPLY, sizeof(cp), &cp); 6816 + 6817 + unlock: 6818 + hci_dev_unlock(hdev); 6837 6819 } 6838 6820 6839 6821 static void hci_le_direct_adv_report_evt(struct hci_dev *hdev, void *data, ··· 6857 6833 6858 6834 for (i = 0; i < ev->num; i++) { 6859 6835 struct hci_ev_le_direct_adv_info *info = &ev->info[i]; 6836 + 6837 + hci_store_wake_reason(hdev, &info->bdaddr, info->bdaddr_type); 6860 6838 6861 6839 process_adv_report(hdev, info->type, &info->bdaddr, 6862 6840 info->bdaddr_type, &info->direct_addr, ··· 7543 7517 return true; 7544 7518 } 7545 7519 7546 - static void hci_store_wake_reason(struct hci_dev *hdev, u8 event, 7547 - struct sk_buff *skb) 7520 + static void hci_store_wake_reason(struct hci_dev *hdev, 7521 + const bdaddr_t *bdaddr, u8 addr_type) 7522 + __must_hold(&hdev->lock) 7548 7523 { 7549 - struct hci_ev_le_advertising_info *adv; 7550 - struct hci_ev_le_direct_adv_info *direct_adv; 7551 - struct hci_ev_le_ext_adv_info *ext_adv; 7552 - const struct hci_ev_conn_complete *conn_complete = (void *)skb->data; 7553 - const struct hci_ev_conn_request *conn_request = (void *)skb->data; 7554 - 7555 - hci_dev_lock(hdev); 7524 + lockdep_assert_held(&hdev->lock); 7556 7525 7557 7526 /* If we are currently suspended and this is the first BT event seen, 7558 7527 * save the wake reason associated with the event. 7559 7528 */ 7560 7529 if (!hdev->suspended || hdev->wake_reason) 7561 - goto unlock; 7530 + return; 7531 + 7532 + if (!bdaddr) { 7533 + hdev->wake_reason = MGMT_WAKE_REASON_UNEXPECTED; 7534 + return; 7535 + } 7562 7536 7563 7537 /* Default to remote wake. Values for wake_reason are documented in the 7564 7538 * Bluez mgmt api docs. 7565 7539 */ 7566 7540 hdev->wake_reason = MGMT_WAKE_REASON_REMOTE_WAKE; 7567 - 7568 - /* Once configured for remote wakeup, we should only wake up for 7569 - * reconnections. It's useful to see which device is waking us up so 7570 - * keep track of the bdaddr of the connection event that woke us up. 
7571 - */ 7572 - if (event == HCI_EV_CONN_REQUEST) { 7573 - bacpy(&hdev->wake_addr, &conn_request->bdaddr); 7574 - hdev->wake_addr_type = BDADDR_BREDR; 7575 - } else if (event == HCI_EV_CONN_COMPLETE) { 7576 - bacpy(&hdev->wake_addr, &conn_complete->bdaddr); 7577 - hdev->wake_addr_type = BDADDR_BREDR; 7578 - } else if (event == HCI_EV_LE_META) { 7579 - struct hci_ev_le_meta *le_ev = (void *)skb->data; 7580 - u8 subevent = le_ev->subevent; 7581 - u8 *ptr = &skb->data[sizeof(*le_ev)]; 7582 - u8 num_reports = *ptr; 7583 - 7584 - if ((subevent == HCI_EV_LE_ADVERTISING_REPORT || 7585 - subevent == HCI_EV_LE_DIRECT_ADV_REPORT || 7586 - subevent == HCI_EV_LE_EXT_ADV_REPORT) && 7587 - num_reports) { 7588 - adv = (void *)(ptr + 1); 7589 - direct_adv = (void *)(ptr + 1); 7590 - ext_adv = (void *)(ptr + 1); 7591 - 7592 - switch (subevent) { 7593 - case HCI_EV_LE_ADVERTISING_REPORT: 7594 - bacpy(&hdev->wake_addr, &adv->bdaddr); 7595 - hdev->wake_addr_type = adv->bdaddr_type; 7596 - break; 7597 - case HCI_EV_LE_DIRECT_ADV_REPORT: 7598 - bacpy(&hdev->wake_addr, &direct_adv->bdaddr); 7599 - hdev->wake_addr_type = direct_adv->bdaddr_type; 7600 - break; 7601 - case HCI_EV_LE_EXT_ADV_REPORT: 7602 - bacpy(&hdev->wake_addr, &ext_adv->bdaddr); 7603 - hdev->wake_addr_type = ext_adv->bdaddr_type; 7604 - break; 7605 - } 7606 - } 7607 - } else { 7608 - hdev->wake_reason = MGMT_WAKE_REASON_UNEXPECTED; 7609 - } 7610 - 7611 - unlock: 7612 - hci_dev_unlock(hdev); 7541 + bacpy(&hdev->wake_addr, bdaddr); 7542 + hdev->wake_addr_type = addr_type; 7613 7543 } 7614 7544 7615 7545 #define HCI_EV_VL(_op, _func, _min_len, _max_len) \ ··· 7812 7830 7813 7831 skb_pull(skb, HCI_EVENT_HDR_SIZE); 7814 7832 7815 - /* Store wake reason if we're suspended */ 7816 - hci_store_wake_reason(hdev, event, skb); 7817 - 7818 7833 bt_dev_dbg(hdev, "event 0x%2.2x", event); 7819 7834 7820 7835 hci_event_func(hdev, event, skb, &opcode, &status, &req_complete, 7821 7836 &req_complete_skb); 7837 + 7838 + hci_dev_lock(hdev); 7839 + hci_store_wake_reason(hdev, NULL, 0); 7840 + hci_dev_unlock(hdev); 7822 7841 7823 7842 if (req_complete) { 7824 7843 req_complete(hdev, status, opcode);
+62 -26
net/bluetooth/hci_sync.c
··· 780 780 void *data, hci_cmd_sync_work_destroy_t destroy) 781 781 { 782 782 if (hci_cmd_sync_lookup_entry(hdev, func, data, destroy)) 783 - return 0; 783 + return -EEXIST; 784 784 785 785 return hci_cmd_sync_queue(hdev, func, data, destroy); 786 786 } ··· 801 801 return -ENETDOWN; 802 802 803 803 /* If on cmd_sync_work then run immediately otherwise queue */ 804 - if (current_work() == &hdev->cmd_sync_work) 805 - return func(hdev, data); 804 + if (current_work() == &hdev->cmd_sync_work) { 805 + int err; 806 + 807 + err = func(hdev, data); 808 + if (destroy) 809 + destroy(hdev, data, err); 810 + 811 + return 0; 812 + } 806 813 807 814 return hci_cmd_sync_submit(hdev, func, data, destroy); 808 815 } ··· 3262 3255 3263 3256 int hci_update_passive_scan(struct hci_dev *hdev) 3264 3257 { 3258 + int err; 3259 + 3265 3260 /* Only queue if it would have any effect */ 3266 3261 if (!test_bit(HCI_UP, &hdev->flags) || 3267 3262 test_bit(HCI_INIT, &hdev->flags) || ··· 3273 3264 hci_dev_test_flag(hdev, HCI_UNREGISTER)) 3274 3265 return 0; 3275 3266 3276 - return hci_cmd_sync_queue_once(hdev, update_passive_scan_sync, NULL, 3277 - NULL); 3267 + err = hci_cmd_sync_queue_once(hdev, update_passive_scan_sync, NULL, 3268 + NULL); 3269 + return (err == -EEXIST) ? 0 : err; 3278 3270 } 3279 3271 3280 3272 int hci_write_sc_support_sync(struct hci_dev *hdev, u8 val) ··· 6968 6958 6969 6959 int hci_connect_acl_sync(struct hci_dev *hdev, struct hci_conn *conn) 6970 6960 { 6971 - return hci_cmd_sync_queue_once(hdev, hci_acl_create_conn_sync, conn, 6972 - NULL); 6961 + int err; 6962 + 6963 + err = hci_cmd_sync_queue_once(hdev, hci_acl_create_conn_sync, conn, 6964 + NULL); 6965 + return (err == -EEXIST) ? 0 : err; 6973 6966 } 6974 6967 6975 6968 static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err) ··· 7008 6995 7009 6996 int hci_connect_le_sync(struct hci_dev *hdev, struct hci_conn *conn) 7010 6997 { 7011 - return hci_cmd_sync_queue_once(hdev, hci_le_create_conn_sync, conn, 7012 - create_le_conn_complete); 6998 + int err; 6999 + 7000 + err = hci_cmd_sync_queue_once(hdev, hci_le_create_conn_sync, conn, 7001 + create_le_conn_complete); 7002 + return (err == -EEXIST) ? 0 : err; 7013 7003 } 7014 7004 7015 7005 int hci_cancel_connect_sync(struct hci_dev *hdev, struct hci_conn *conn) ··· 7219 7203 7220 7204 int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn) 7221 7205 { 7222 - return hci_cmd_sync_queue_once(hdev, hci_le_pa_create_sync, conn, 7223 - create_pa_complete); 7206 + int err; 7207 + 7208 + err = hci_cmd_sync_queue_once(hdev, hci_le_pa_create_sync, conn, 7209 + create_pa_complete); 7210 + return (err == -EEXIST) ? 0 : err; 7224 7211 } 7225 7212 7226 7213 static void create_big_complete(struct hci_dev *hdev, void *data, int err) ··· 7241 7222 7242 7223 static int hci_le_big_create_sync(struct hci_dev *hdev, void *data) 7243 7224 { 7244 - DEFINE_FLEX(struct hci_cp_le_big_create_sync, cp, bis, num_bis, 0x11); 7225 + DEFINE_FLEX(struct hci_cp_le_big_create_sync, cp, bis, num_bis, 7226 + HCI_MAX_ISO_BIS); 7245 7227 struct hci_conn *conn = data; 7246 7228 struct bt_iso_qos *qos = &conn->iso_qos; 7247 7229 int err; ··· 7286 7266 7287 7267 int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn) 7288 7268 { 7289 - return hci_cmd_sync_queue_once(hdev, hci_le_big_create_sync, conn, 7290 - create_big_complete); 7269 + int err; 7270 + 7271 + err = hci_cmd_sync_queue_once(hdev, hci_le_big_create_sync, conn, 7272 + create_big_complete); 7273 + return (err == -EEXIST) ? 
0 : err; 7291 7274 } 7292 7275 7293 7276 struct past_data { ··· 7382 7359 if (err) 7383 7360 kfree(data); 7384 7361 7385 - return err; 7362 + return (err == -EEXIST) ? 0 : err; 7386 7363 } 7387 7364 7388 7365 static void le_read_features_complete(struct hci_dev *hdev, void *data, int err) ··· 7391 7368 7392 7369 bt_dev_dbg(hdev, "err %d", err); 7393 7370 7394 - if (err == -ECANCELED) 7395 - return; 7396 - 7397 7371 hci_conn_drop(conn); 7372 + hci_conn_put(conn); 7398 7373 } 7399 7374 7400 7375 static int hci_le_read_all_remote_features_sync(struct hci_dev *hdev, ··· 7459 7438 * role is possible. Otherwise just transition into the 7460 7439 * connected state without requesting the remote features. 7461 7440 */ 7462 - if (conn->out || (hdev->le_features[0] & HCI_LE_PERIPHERAL_FEATURES)) 7441 + if (conn->out || (hdev->le_features[0] & HCI_LE_PERIPHERAL_FEATURES)) { 7463 7442 err = hci_cmd_sync_queue_once(hdev, 7464 7443 hci_le_read_remote_features_sync, 7465 - hci_conn_hold(conn), 7444 + hci_conn_hold(hci_conn_get(conn)), 7466 7445 le_read_features_complete); 7467 - else 7446 + if (err) { 7447 + hci_conn_drop(conn); 7448 + hci_conn_put(conn); 7449 + } 7450 + } else { 7468 7451 err = -EOPNOTSUPP; 7452 + } 7469 7453 7470 - return err; 7454 + return (err == -EEXIST) ? 0 : err; 7471 7455 } 7472 7456 7473 7457 static void pkt_type_changed(struct hci_dev *hdev, void *data, int err) ··· 7498 7472 { 7499 7473 struct hci_dev *hdev = conn->hdev; 7500 7474 struct hci_cp_change_conn_ptype *cp; 7475 + int err; 7501 7476 7502 7477 cp = kmalloc_obj(*cp); 7503 7478 if (!cp) ··· 7507 7480 cp->handle = cpu_to_le16(conn->handle); 7508 7481 cp->pkt_type = cpu_to_le16(pkt_type); 7509 7482 7510 - return hci_cmd_sync_queue_once(hdev, hci_change_conn_ptype_sync, cp, 7511 - pkt_type_changed); 7483 + err = hci_cmd_sync_queue_once(hdev, hci_change_conn_ptype_sync, cp, 7484 + pkt_type_changed); 7485 + if (err) 7486 + kfree(cp); 7487 + 7488 + return (err == -EEXIST) ? 0 : err; 7512 7489 } 7513 7490 7514 7491 static void le_phy_update_complete(struct hci_dev *hdev, void *data, int err) ··· 7538 7507 { 7539 7508 struct hci_dev *hdev = conn->hdev; 7540 7509 struct hci_cp_le_set_phy *cp; 7510 + int err; 7541 7511 7542 7512 cp = kmalloc_obj(*cp); 7543 7513 if (!cp) ··· 7549 7517 cp->tx_phys = tx_phys; 7550 7518 cp->rx_phys = rx_phys; 7551 7519 7552 - return hci_cmd_sync_queue_once(hdev, hci_le_set_phy_sync, cp, 7553 - le_phy_update_complete); 7520 + err = hci_cmd_sync_queue_once(hdev, hci_le_set_phy_sync, cp, 7521 + le_phy_update_complete); 7522 + if (err) 7523 + kfree(cp); 7524 + 7525 + return (err == -EEXIST) ? 0 : err; 7554 7526 }
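A recurring shape in the hci_sync changes: hci_cmd_sync_queue_once() now reports a duplicate submission as -EEXIST, and each caller decides whether "already queued" counts as success. A toy model of that contract; names mimic but are not the hci_sync API:

    /* Toy model of the queue-once contract: duplicates report -EEXIST
     * and callers decide whether "already pending" is success. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool queued;

    static int cmd_queue_once(void (*fn)(void))
    {
            (void)fn;       /* a real queue would remember fn and run it */
            if (queued)
                    return -EEXIST;
            queued = true;
            return 0;
    }

    static void connect_work(void) { }

    static int connect_sync(void)
    {
            int err = cmd_queue_once(connect_work);

            /* Idempotent operation: already queued is as good as queued. */
            return (err == -EEXIST) ? 0 : err;
    }

    int main(void)
    {
            printf("%d %d\n", connect_sync(), connect_sync()); /* 0 0 */
            return 0;
    }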
+14 -3
net/bluetooth/mgmt.c
··· 2478 2478 struct mgmt_mesh_tx *mesh_tx; 2479 2479 struct mgmt_cp_mesh_send *send = data; 2480 2480 struct mgmt_rp_mesh_read_features rp; 2481 + u16 expected_len; 2481 2482 bool sending; 2482 2483 int err = 0; 2483 2484 ··· 2486 2485 !hci_dev_test_flag(hdev, HCI_MESH_EXPERIMENTAL)) 2487 2486 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_MESH_SEND, 2488 2487 MGMT_STATUS_NOT_SUPPORTED); 2489 - if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED) || 2490 - len <= MGMT_MESH_SEND_SIZE || 2491 - len > (MGMT_MESH_SEND_SIZE + 31)) 2488 + if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED)) 2492 2489 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_MESH_SEND, 2493 2490 MGMT_STATUS_REJECTED); 2491 + 2492 + if (!send->adv_data_len || send->adv_data_len > 31) 2493 + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_MESH_SEND, 2494 + MGMT_STATUS_REJECTED); 2495 + 2496 + expected_len = struct_size(send, adv_data, send->adv_data_len); 2497 + if (expected_len != len) 2498 + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_MESH_SEND, 2499 + MGMT_STATUS_INVALID_PARAMS); 2494 2500 2495 2501 hci_dev_lock(hdev); 2496 2502 ··· 7254 7246 static bool ltk_is_valid(struct mgmt_ltk_info *key) 7255 7247 { 7256 7248 if (key->initiator != 0x00 && key->initiator != 0x01) 7249 + return false; 7250 + 7251 + if (key->enc_size > sizeof(key->val)) 7257 7252 return false; 7258 7253 7259 7254 switch (key->addr.type) {
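The MESH_SEND fix validates a flexible-array message by computing the exact expected length from the claimed element count (struct_size()) and rejecting anything else. The same check in plain C with offsetof(); the struct layout is illustrative:

    /* Plain-C version of the struct_size()-style length check applied
     * to MGMT_OP_MESH_SEND above. The struct layout is illustrative. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct mesh_send {
            uint8_t addr[7];
            uint16_t delay;
            uint8_t cnt;
            uint8_t adv_data_len;
            uint8_t adv_data[];     /* flexible array member */
    };

    static int validate(const struct mesh_send *send, size_t len)
    {
            if (!send->adv_data_len || send->adv_data_len > 31)
                    return -1;      /* MGMT_STATUS_REJECTED */
            /* offsetof() + n elements is what struct_size(send,
             * adv_data, n) computes, minus the kernel helper's
             * overflow checking. */
            if (len != offsetof(struct mesh_send, adv_data) +
                       send->adv_data_len)
                    return -2;      /* MGMT_STATUS_INVALID_PARAMS */
            return 0;
    }

    int main(void)
    {
            size_t len = offsetof(struct mesh_send, adv_data) + 4;
            struct mesh_send *send = calloc(1, len);

            if (!send)
                    return 1;
            send->adv_data_len = 4;
            printf("valid: %d, short-by-one: %d\n",
                   validate(send, len), validate(send, len - 1));
            free(send);
            return 0;
    }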
+23 -7
net/bluetooth/sco.c
··· 298 298 int err = 0; 299 299 300 300 sco_conn_lock(conn); 301 - if (conn->sk) 301 + if (conn->sk || sco_pi(sk)->conn) 302 302 err = -EBUSY; 303 303 else 304 304 __sco_chan_add(conn, sk, parent); ··· 353 353 354 354 lock_sock(sk); 355 355 356 + /* Recheck state after reacquiring the socket lock, as another 357 + * thread may have changed it (e.g., closed the socket). 358 + */ 359 + if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) { 360 + release_sock(sk); 361 + hci_conn_drop(hcon); 362 + err = -EBADFD; 363 + goto unlock; 364 + } 365 + 356 366 err = sco_chan_add(conn, sk, NULL); 357 367 if (err) { 358 368 release_sock(sk); 369 + hci_conn_drop(hcon); 359 370 goto unlock; 360 371 } 361 372 ··· 667 656 addr->sa_family != AF_BLUETOOTH) 668 657 return -EINVAL; 669 658 670 - if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) 671 - return -EBADFD; 672 - 673 - if (sk->sk_type != SOCK_SEQPACKET) 674 - err = -EINVAL; 675 - 676 659 lock_sock(sk); 660 + 661 + if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) { 662 + release_sock(sk); 663 + return -EBADFD; 664 + } 665 + 666 + if (sk->sk_type != SOCK_SEQPACKET) { 667 + release_sock(sk); 668 + return -EINVAL; 669 + } 670 + 677 671 /* Set destination address and psm */ 678 672 bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr); 679 673 release_sock(sk);
+6 -5
net/bluetooth/smp.c
··· 1018 1018 1019 1019 smp_s1(smp->tk, smp->prnd, smp->rrnd, stk); 1020 1020 1021 - if (hcon->pending_sec_level == BT_SECURITY_HIGH) 1022 - auth = 1; 1023 - else 1024 - auth = 0; 1021 + auth = test_bit(SMP_FLAG_MITM_AUTH, &smp->flags) ? 1 : 0; 1025 1022 1026 1023 /* Even though there's no _RESPONDER suffix this is the 1027 1024 * responder STK we're adding for later lookup (the initiator ··· 1823 1826 if (sec_level > conn->hcon->pending_sec_level) 1824 1827 conn->hcon->pending_sec_level = sec_level; 1825 1828 1826 - /* If we need MITM check that it can be achieved */ 1829 + /* If we need MITM, check that it can be achieved. */ 1827 1830 if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { 1828 1831 u8 method; ··· 1831 1834 req->io_capability); 1832 1835 if (method == JUST_WORKS || method == JUST_CFM) 1833 1836 return SMP_AUTH_REQUIREMENTS; 1837 + 1838 + /* Force MITM bit if it isn't set by the initiator. */ 1839 + auth |= SMP_AUTH_MITM; 1840 + rsp.auth_req |= SMP_AUTH_MITM; 1834 1841 } 1835 1842 1836 1843 key_size = min(req->max_key_size, rsp.max_key_size);
+11 -7
net/bridge/br_arp_nd_proxy.c
··· 251 251 252 252 static void br_nd_send(struct net_bridge *br, struct net_bridge_port *p, 253 253 struct sk_buff *request, struct neighbour *n, 254 - __be16 vlan_proto, u16 vlan_tci, struct nd_msg *ns) 254 + __be16 vlan_proto, u16 vlan_tci) 255 255 { 256 256 struct net_device *dev = request->dev; 257 257 struct net_bridge_vlan_group *vg; 258 + struct nd_msg *na, *ns; 258 259 struct sk_buff *reply; 259 - struct nd_msg *na; 260 260 struct ipv6hdr *pip6; 261 261 int na_olen = 8; /* opt hdr + ETH_ALEN for target */ 262 262 int ns_olen; ··· 264 264 u8 *daddr; 265 265 u16 pvid; 266 266 267 - if (!dev) 267 + if (!dev || skb_linearize(request)) 268 268 return; 269 269 270 270 len = LL_RESERVED_SPACE(dev) + sizeof(struct ipv6hdr) + ··· 281 281 skb_set_mac_header(reply, 0); 282 282 283 283 daddr = eth_hdr(request)->h_source; 284 + ns = (struct nd_msg *)(skb_network_header(request) + 285 + sizeof(struct ipv6hdr)); 284 286 285 287 /* Do we need option processing ? */ 286 288 ns_olen = request->len - (skb_network_offset(request) + 287 289 sizeof(struct ipv6hdr)) - sizeof(*ns); 288 290 for (i = 0; i < ns_olen - 1; i += (ns->opt[i + 1] << 3)) { 289 - if (!ns->opt[i + 1]) { 291 + if (!ns->opt[i + 1] || i + (ns->opt[i + 1] << 3) > ns_olen) { 290 292 kfree_skb(reply); 291 293 return; 292 294 } 293 295 if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) { 294 - daddr = ns->opt + i + sizeof(struct nd_opt_hdr); 296 + if ((ns->opt[i + 1] << 3) >= 297 + sizeof(struct nd_opt_hdr) + ETH_ALEN) 298 + daddr = ns->opt + i + sizeof(struct nd_opt_hdr); 295 299 break; 296 300 } 297 301 } ··· 476 472 if (vid != 0) 477 473 br_nd_send(br, p, skb, n, 478 474 skb->vlan_proto, 479 - skb_vlan_tag_get(skb), msg); 475 + skb_vlan_tag_get(skb)); 480 476 else 481 - br_nd_send(br, p, skb, n, 0, 0, msg); 477 + br_nd_send(br, p, skb, n, 0, 0); 482 478 replied = true; 483 479 } 484 480
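The neighbour-discovery fix above hardens a TLV walk: reject zero-length options, reject options that run past the buffer, and require the source link-layer option to be big enough before trusting its payload. The same bounded walk as a standalone function, with option codes and sizes simplified:

    /* Standalone version of the bounded ND-option walk: options are
     * (type, len-in-8-byte-units, payload); reject len 0 and overruns,
     * and only trust a source-LL option big enough to hold a MAC. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define OPT_SOURCE_LL   1
    #define ETH_ALEN        6

    static const uint8_t *find_source_ll(const uint8_t *opt, size_t olen)
    {
            for (size_t i = 0; i + 1 < olen; i += (size_t)opt[i + 1] << 3) {
                    size_t optlen = (size_t)opt[i + 1] << 3;

                    if (!optlen || i + optlen > olen)
                            return NULL;    /* malformed: bail out */
                    if (opt[i] == OPT_SOURCE_LL)
                            return optlen >= 2 + ETH_ALEN ?
                                   &opt[i + 2] : NULL;
            }
            return NULL;
    }

    int main(void)
    {
            uint8_t good[8] = { OPT_SOURCE_LL, 1,
                                0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff };
            uint8_t evil[8] = { OPT_SOURCE_LL, 0 }; /* zero-length option */

            printf("good: %s, evil: %s\n",
                   find_source_ll(good, 8) ? "found" : "rejected",
                   find_source_ll(evil, 8) ? "found" : "rejected");
            return 0;
    }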
+2 -2
net/bridge/br_mrp_netlink.c
··· 196 196 br_mrp_start_test_policy[IFLA_BRIDGE_MRP_START_TEST_MAX + 1] = { 197 197 [IFLA_BRIDGE_MRP_START_TEST_UNSPEC] = { .type = NLA_REJECT }, 198 198 [IFLA_BRIDGE_MRP_START_TEST_RING_ID] = { .type = NLA_U32 }, 199 - [IFLA_BRIDGE_MRP_START_TEST_INTERVAL] = { .type = NLA_U32 }, 199 + [IFLA_BRIDGE_MRP_START_TEST_INTERVAL] = NLA_POLICY_MIN(NLA_U32, 1), 200 200 [IFLA_BRIDGE_MRP_START_TEST_MAX_MISS] = { .type = NLA_U32 }, 201 201 [IFLA_BRIDGE_MRP_START_TEST_PERIOD] = { .type = NLA_U32 }, 202 202 [IFLA_BRIDGE_MRP_START_TEST_MONITOR] = { .type = NLA_U32 }, ··· 316 316 br_mrp_start_in_test_policy[IFLA_BRIDGE_MRP_START_IN_TEST_MAX + 1] = { 317 317 [IFLA_BRIDGE_MRP_START_IN_TEST_UNSPEC] = { .type = NLA_REJECT }, 318 318 [IFLA_BRIDGE_MRP_START_IN_TEST_IN_ID] = { .type = NLA_U32 }, 319 - [IFLA_BRIDGE_MRP_START_IN_TEST_INTERVAL] = { .type = NLA_U32 }, 319 + [IFLA_BRIDGE_MRP_START_IN_TEST_INTERVAL] = NLA_POLICY_MIN(NLA_U32, 1), 320 320 [IFLA_BRIDGE_MRP_START_IN_TEST_MAX_MISS] = { .type = NLA_U32 }, 321 321 [IFLA_BRIDGE_MRP_START_IN_TEST_PERIOD] = { .type = NLA_U32 }, 322 322 };
+8 -3
net/core/dev.c
··· 3821 3821 * segmentation-offloads.rst). 3822 3822 */ 3823 3823 if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) { 3824 - struct iphdr *iph = skb->encapsulation ? 3825 - inner_ip_hdr(skb) : ip_hdr(skb); 3824 + const struct iphdr *iph; 3825 + struct iphdr _iph; 3826 + int nhoff = skb->encapsulation ? 3827 + skb_inner_network_offset(skb) : 3828 + skb_network_offset(skb); 3826 3829 3827 - if (!(iph->frag_off & htons(IP_DF))) 3830 + iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph); 3831 + 3832 + if (!iph || !(iph->frag_off & htons(IP_DF))) 3828 3833 features &= ~dev->mangleid_features; 3829 3834 } 3830 3835
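skb_header_pointer() returns a pointer to the requested header, copying it into a caller-supplied buffer when the skb is not linear, and NULL when the packet is too short, which the hunk above now handles. A simplified analogue over a flat buffer; struct and field names are invented:

    /* Simplified skb_header_pointer() analogue: yield 'len' bytes at
     * 'offset', copied into 'buf', or NULL when the packet is too
     * short. Struct and field names are invented. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct pkt {
            const uint8_t *data;
            size_t len;
    };

    static const void *header_pointer(const struct pkt *p, size_t offset,
                                      size_t len, void *buf)
    {
            if (offset > p->len || len > p->len - offset)
                    return NULL;    /* truncated: caller must handle NULL */
            /* A real skb may be paged/non-linear, hence the copy. */
            memcpy(buf, p->data + offset, len);
            return buf;
    }

    int main(void)
    {
            const uint8_t raw[] = { 0x45, 0x00, 0x00, 0x14 };
            struct pkt p = { raw, sizeof(raw) };
            uint8_t vihl;

            if (header_pointer(&p, 0, sizeof(vihl), &vihl))
                    printf("IP version %u\n", vihl >> 4);
            return 0;
    }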
+8 -5
net/core/skmsg.c
··· 1267 1267 1268 1268 static void sk_psock_verdict_data_ready(struct sock *sk) 1269 1269 { 1270 - struct socket *sock = sk->sk_socket; 1271 - const struct proto_ops *ops; 1270 + const struct proto_ops *ops = NULL; 1271 + struct socket *sock; 1272 1272 int copied; 1273 1273 1274 1274 trace_sk_data_ready(sk); 1275 1275 1276 - if (unlikely(!sock)) 1277 - return; 1278 - ops = READ_ONCE(sock->ops); 1276 + rcu_read_lock(); 1277 + sock = READ_ONCE(sk->sk_socket); 1278 + if (likely(sock)) 1279 + ops = READ_ONCE(sock->ops); 1280 + rcu_read_unlock(); 1279 1281 if (!ops || !ops->read_skb) 1280 1282 return; 1283 + 1281 1284 copied = ops->read_skb(sk, sk_psock_verdict_recv); 1282 1285 if (copied >= 0) { 1283 1286 struct sk_psock *psock;
+17 -15
net/hsr/hsr_device.c
··· 532 532 static int hsr_ndo_vlan_rx_add_vid(struct net_device *dev, 533 533 __be16 proto, u16 vid) 534 534 { 535 - bool is_slave_a_added = false; 536 - bool is_slave_b_added = false; 535 + struct net_device *slave_a_dev = NULL; 536 + struct net_device *slave_b_dev = NULL; 537 537 struct hsr_port *port; 538 538 struct hsr_priv *hsr; 539 539 int ret = 0; ··· 549 549 switch (port->type) { 550 550 case HSR_PT_SLAVE_A: 551 551 if (ret) { 552 - /* clean up Slave-B */ 553 552 netdev_err(dev, "add vid failed for Slave-A\n"); 554 - if (is_slave_b_added) 555 - vlan_vid_del(port->dev, proto, vid); 556 - return ret; 553 + goto unwind; 557 554 } 558 - 559 - is_slave_a_added = true; 555 + slave_a_dev = port->dev; 560 556 break; 561 - 562 557 case HSR_PT_SLAVE_B: 563 558 if (ret) { 564 - /* clean up Slave-A */ 565 559 netdev_err(dev, "add vid failed for Slave-B\n"); 566 - if (is_slave_a_added) 567 - vlan_vid_del(port->dev, proto, vid); 568 - return ret; 560 + goto unwind; 569 561 } 570 - 571 - is_slave_b_added = true; 562 + slave_b_dev = port->dev; 572 563 break; 573 564 default: 565 + if (ret) 566 + goto unwind; 574 567 break; 575 568 } 576 569 } 577 570 578 571 return 0; 572 + 573 + unwind: 574 + if (slave_a_dev) 575 + vlan_vid_del(slave_a_dev, proto, vid); 576 + 577 + if (slave_b_dev) 578 + vlan_vid_del(slave_b_dev, proto, vid); 579 + 580 + return ret; 579 581 } 580 582 581 583 static int hsr_ndo_vlan_rx_kill_vid(struct net_device *dev,
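The HSR rewrite records which slave devices actually took the VID and unwinds exactly those on failure, instead of guessing from booleans. The same record-then-unwind shape in miniature; device names and stubs are illustrative:

    /* Miniature of the record-then-unwind error handling now used on
     * the HSR VLAN add path. Device names and stubs are illustrative. */
    #include <stdio.h>

    static int add_vid(const char *dev)
    {
            (void)dev;
            return 0;       /* pretend success; a real call can fail */
    }

    static void del_vid(const char *dev)
    {
            printf("unwinding %s\n", dev);
    }

    static int add_vid_all_ports(void)
    {
            const char *slave_a = NULL, *slave_b = NULL;
            int ret;

            ret = add_vid("slave-a");
            if (ret)
                    goto unwind;
            slave_a = "slave-a";    /* record only after success */

            ret = add_vid("slave-b");
            if (ret)
                    goto unwind;
            slave_b = "slave-b";

            ret = add_vid("interlink");     /* later ports can still fail */
            if (ret)
                    goto unwind;

            return 0;

    unwind:
            /* Undo exactly what demonstrably succeeded, nothing else. */
            if (slave_a)
                    del_vid(slave_a);
            if (slave_b)
                    del_vid(slave_b);
            return ret;
    }

    int main(void)
    {
            return add_vid_all_ports();
    }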
+36 -2
net/hsr/hsr_framereg.c
··· 123 123 hsr_free_node(node); 124 124 } 125 125 126 + static void hsr_lock_seq_out_pair(struct hsr_node *node_a, 127 + struct hsr_node *node_b) 128 + { 129 + if (node_a == node_b) { 130 + spin_lock_bh(&node_a->seq_out_lock); 131 + return; 132 + } 133 + 134 + if (node_a < node_b) { 135 + spin_lock_bh(&node_a->seq_out_lock); 136 + spin_lock_nested(&node_b->seq_out_lock, SINGLE_DEPTH_NESTING); 137 + } else { 138 + spin_lock_bh(&node_b->seq_out_lock); 139 + spin_lock_nested(&node_a->seq_out_lock, SINGLE_DEPTH_NESTING); 140 + } 141 + } 142 + 143 + static void hsr_unlock_seq_out_pair(struct hsr_node *node_a, 144 + struct hsr_node *node_b) 145 + { 146 + if (node_a == node_b) { 147 + spin_unlock_bh(&node_a->seq_out_lock); 148 + return; 149 + } 150 + 151 + if (node_a < node_b) { 152 + spin_unlock(&node_b->seq_out_lock); 153 + spin_unlock_bh(&node_a->seq_out_lock); 154 + } else { 155 + spin_unlock(&node_a->seq_out_lock); 156 + spin_unlock_bh(&node_b->seq_out_lock); 157 + } 158 + } 159 + 126 160 void hsr_del_nodes(struct list_head *node_db) 127 161 { 128 162 struct hsr_node *node; ··· 466 432 } 467 433 468 434 ether_addr_copy(node_real->macaddress_B, ethhdr->h_source); 469 - spin_lock_bh(&node_real->seq_out_lock); 435 + hsr_lock_seq_out_pair(node_real, node_curr); 470 436 for (i = 0; i < HSR_PT_PORTS; i++) { 471 437 if (!node_curr->time_in_stale[i] && 472 438 time_after(node_curr->time_in[i], node_real->time_in[i])) { ··· 489 455 src_blk->seq_nrs[i], HSR_SEQ_BLOCK_SIZE); 490 456 } 491 457 } 492 - spin_unlock_bh(&node_real->seq_out_lock); 458 + hsr_unlock_seq_out_pair(node_real, node_curr); 493 459 node_real->addr_B_port = port_rcv->type; 494 460 495 461 spin_lock_bh(&hsr->list_lock);
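hsr_lock_seq_out_pair() is the textbook ABBA-deadlock avoidance: when two locks must be held at once, always take the lower-addressed one first, degenerating to a single lock when both nodes are the same. A pthread rendition of the same ordering rule:

    /* pthread rendition of the address-ordered pair locking in
     * hsr_lock_seq_out_pair(): a consistent global order makes ABBA
     * deadlock impossible. */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    struct node {
            pthread_mutex_t lock;
            int seq;
    };

    static void lock_pair(struct node *a, struct node *b)
    {
            if (a == b) {
                    pthread_mutex_lock(&a->lock);
                    return;
            }
            if ((uintptr_t)a < (uintptr_t)b) {
                    pthread_mutex_lock(&a->lock);
                    pthread_mutex_lock(&b->lock);
            } else {
                    pthread_mutex_lock(&b->lock);
                    pthread_mutex_lock(&a->lock);
            }
    }

    static void unlock_pair(struct node *a, struct node *b)
    {
            pthread_mutex_unlock(&a->lock);
            if (a != b)
                    pthread_mutex_unlock(&b->lock);
    }

    int main(void)
    {
            struct node n1 = { PTHREAD_MUTEX_INITIALIZER, 0 };
            struct node n2 = { PTHREAD_MUTEX_INITIALIZER, 0 };

            lock_pair(&n1, &n2);    /* same order whatever the call order */
            n1.seq = n2.seq = 1;    /* merge state under both locks */
            unlock_pair(&n1, &n2);
            puts("merged");
            return 0;
    }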
+3 -3
net/ipv6/addrconf.c
··· 3625 3625 if ((ifp->flags & IFA_F_PERMANENT) && 3626 3626 fixup_permanent_addr(net, idev, ifp) < 0) { 3627 3627 write_unlock_bh(&idev->lock); 3628 - in6_ifa_hold(ifp); 3629 - ipv6_del_addr(ifp); 3630 - write_lock_bh(&idev->lock); 3631 3628 3632 3629 net_info_ratelimited("%s: Failed to add prefix route for address %pI6c; dropping\n", 3633 3630 idev->dev->name, &ifp->addr); 3631 + in6_ifa_hold(ifp); 3632 + ipv6_del_addr(ifp); 3633 + write_lock_bh(&idev->lock); 3634 3634 } 3635 3635 } 3636 3636
+10
net/ipv6/datagram.c
··· 763 763 { 764 764 struct in6_pktinfo *src_info; 765 765 struct cmsghdr *cmsg; 766 + struct ipv6_rt_hdr *orthdr; 766 767 struct ipv6_rt_hdr *rthdr; 767 768 struct ipv6_opt_hdr *hdr; 768 769 struct ipv6_txoptions *opt = ipc6->opt; ··· 925 924 goto exit_f; 926 925 } 927 926 if (cmsg->cmsg_type == IPV6_DSTOPTS) { 927 + if (opt->dst1opt) 928 + opt->opt_flen -= ipv6_optlen(opt->dst1opt); 928 929 opt->opt_flen += len; 929 930 opt->dst1opt = hdr; 930 931 } else { 932 + if (opt->dst0opt) 933 + opt->opt_nflen -= ipv6_optlen(opt->dst0opt); 931 934 opt->opt_nflen += len; 932 935 opt->dst0opt = hdr; 933 936 } ··· 974 969 goto exit_f; 975 970 } 976 971 972 + orthdr = opt->srcrt; 973 + if (orthdr) 974 + opt->opt_nflen -= ((orthdr->hdrlen + 1) << 3); 977 975 opt->opt_nflen += len; 978 976 opt->srcrt = rthdr; 979 977 980 978 if (cmsg->cmsg_type == IPV6_2292RTHDR && opt->dst1opt) { 981 979 int dsthdrlen = ((opt->dst1opt->hdrlen+1)<<3); 982 980 981 + if (opt->dst0opt) 982 + opt->opt_nflen -= ipv6_optlen(opt->dst0opt); 983 983 opt->opt_nflen += dsthdrlen; 984 984 opt->dst0opt = opt->dst1opt; 985 985 opt->dst1opt = NULL;
+3
net/ipv6/icmp.c
··· 875 875 if (!skb2) 876 876 return 1; 877 877 878 + /* Remove debris left by IPv4 stack. */ 879 + memset(IP6CB(skb2), 0, sizeof(*IP6CB(skb2))); 880 + 878 881 skb_dst_drop(skb2); 879 882 skb_pull(skb2, nhs); 880 883 skb_reset_network_header(skb2);
+2 -2
net/ipv6/ioam6.c
··· 708 708 struct ioam6_namespace *ns, 709 709 struct ioam6_trace_hdr *trace, 710 710 struct ioam6_schema *sc, 711 - u8 sclen, bool is_input) 711 + unsigned int sclen, bool is_input) 712 712 { 713 713 struct net_device *dev = skb_dst_dev(skb); 714 714 struct timespec64 ts; ··· 939 939 bool is_input) 940 940 { 941 941 struct ioam6_schema *sc; 942 - u8 sclen = 0; 942 + unsigned int sclen = 0; 943 943 944 944 /* Skip if Overflow flag is set 945 945 */
+11 -3
net/ipv6/ip6_fib.c
··· 727 727 728 728 void fib6_metric_set(struct fib6_info *f6i, int metric, u32 val) 729 729 { 730 + struct dst_metrics *m; 731 + 730 732 if (!f6i) 731 733 return; 732 734 733 - if (f6i->fib6_metrics == &dst_default_metrics) { 735 + if (READ_ONCE(f6i->fib6_metrics) == &dst_default_metrics) { 736 + struct dst_metrics *dflt = (struct dst_metrics *)&dst_default_metrics; 734 737 struct dst_metrics *p = kzalloc_obj(*p, GFP_ATOMIC); 735 738 736 739 if (!p) 737 740 return; 738 741 742 + p->metrics[metric - 1] = val; 739 743 refcount_set(&p->refcnt, 1); 740 - f6i->fib6_metrics = p; 744 + if (cmpxchg(&f6i->fib6_metrics, dflt, p) != dflt) 745 + kfree(p); 746 + else 747 + return; 741 748 } 742 749 743 - f6i->fib6_metrics->metrics[metric - 1] = val; 750 + m = READ_ONCE(f6i->fib6_metrics); 751 + WRITE_ONCE(m->metrics[metric - 1], val); 744 752 } 745 753 746 754 /*
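The fib6_metric_set() fix publishes the lazily allocated metrics block with cmpxchg() and frees the copy that loses the race, so concurrent writers never leak or double-install. A C11 rendition with atomic_compare_exchange_strong(); the metrics struct is illustrative:

    /* C11 rendition of install-with-cmpxchg, free-on-lose, as in the
     * fib6_metric_set() hunk. The metrics struct is illustrative. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct metrics {
            unsigned int vals[16];
    };

    static const struct metrics default_metrics; /* shared read-only default */
    static _Atomic(struct metrics *) cur =
            (struct metrics *)&default_metrics;

    static void metric_set(int metric, unsigned int val)
    {
            struct metrics *m = atomic_load(&cur);

            if (m == (struct metrics *)&default_metrics) {
                    struct metrics *expected =
                            (struct metrics *)&default_metrics;
                    struct metrics *p = calloc(1, sizeof(*p));

                    if (!p)
                            return;
                    p->vals[metric - 1] = val;
                    if (atomic_compare_exchange_strong(&cur, &expected, p))
                            return; /* we won: value already in place */
                    free(p);        /* lost the race: drop our copy */
            }
            atomic_load(&cur)->vals[metric - 1] = val;
    }

    int main(void)
    {
            metric_set(1, 1500);
            printf("metric 1 = %u\n", atomic_load(&cur)->vals[0]);
            return 0;
    }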
-5
net/ipv6/ip6_flowlabel.c
··· 133 133 if (time_after(ttd, fl->expires)) 134 134 fl->expires = ttd; 135 135 ttd = fl->expires; 136 - if (fl->opt && fl->share == IPV6_FL_S_EXCL) { 137 - struct ipv6_txoptions *opt = fl->opt; 138 - fl->opt = NULL; 139 - kfree(opt); 140 - } 141 136 if (!timer_pending(&ip6_fl_gc_timer) || 142 137 time_after(ip6_fl_gc_timer.expires, ttd)) 143 138 mod_timer(&ip6_fl_gc_timer, ttd);
+5
net/ipv6/ip6_tunnel.c
··· 601 601 if (!skb2) 602 602 return 0; 603 603 604 + /* Remove debris left by IPv6 stack. */ 605 + memset(IPCB(skb2), 0, sizeof(*IPCB(skb2))); 606 + 604 607 skb_dst_drop(skb2); 605 608 606 609 skb_pull(skb2, offset); 607 610 skb_reset_network_header(skb2); 608 611 eiph = ip_hdr(skb2); 612 + if (eiph->version != 4 || eiph->ihl < 5) 613 + goto out; 609 614 610 615 /* Try to guess incoming interface */ 611 616 rt = ip_route_output_ports(dev_net(skb->dev), &fl4, NULL, eiph->saddr,
+3
net/ipv6/ndisc.c
··· 1209 1209 ndmsg->nduseropt_icmp_type = icmp6h->icmp6_type; 1210 1210 ndmsg->nduseropt_icmp_code = icmp6h->icmp6_code; 1211 1211 ndmsg->nduseropt_opts_len = opt->nd_opt_len << 3; 1212 + ndmsg->nduseropt_pad1 = 0; 1213 + ndmsg->nduseropt_pad2 = 0; 1214 + ndmsg->nduseropt_pad3 = 0; 1212 1215 1213 1216 memcpy(ndmsg + 1, opt, opt->nd_opt_len << 3); 1214 1217
+25 -4
net/mpls/af_mpls.c
··· 83 83 return mpls_dereference(net, platform_label[index]); 84 84 } 85 85 86 + static struct mpls_route __rcu **mpls_platform_label_rcu(struct net *net, size_t *platform_labels) 87 + { 88 + struct mpls_route __rcu **platform_label; 89 + unsigned int sequence; 90 + 91 + do { 92 + sequence = read_seqcount_begin(&net->mpls.platform_label_seq); 93 + platform_label = rcu_dereference(net->mpls.platform_label); 94 + *platform_labels = net->mpls.platform_labels; 95 + } while (read_seqcount_retry(&net->mpls.platform_label_seq, sequence)); 96 + 97 + return platform_label; 98 + } 99 + 86 100 static struct mpls_route *mpls_route_input_rcu(struct net *net, unsigned int index) 87 101 { 88 102 struct mpls_route __rcu **platform_label; 103 + size_t platform_labels; 89 104 90 - if (index >= net->mpls.platform_labels) 105 + platform_label = mpls_platform_label_rcu(net, &platform_labels); 106 + 107 + if (index >= platform_labels) 91 108 return NULL; 92 109 93 - platform_label = rcu_dereference(net->mpls.platform_label); 94 110 return rcu_dereference(platform_label[index]); 95 111 } 96 112 ··· 2256 2240 if (index < MPLS_LABEL_FIRST_UNRESERVED) 2257 2241 index = MPLS_LABEL_FIRST_UNRESERVED; 2258 2242 2259 - platform_label = rcu_dereference(net->mpls.platform_label); 2260 - platform_labels = net->mpls.platform_labels; 2243 + platform_label = mpls_platform_label_rcu(net, &platform_labels); 2261 2244 2262 2245 if (filter.filter_set) 2263 2246 flags |= NLM_F_DUMP_FILTERED; ··· 2660 2645 } 2661 2646 2662 2647 /* Update the global pointers */ 2648 + local_bh_disable(); 2649 + write_seqcount_begin(&net->mpls.platform_label_seq); 2663 2650 net->mpls.platform_labels = limit; 2664 2651 rcu_assign_pointer(net->mpls.platform_label, labels); 2652 + write_seqcount_end(&net->mpls.platform_label_seq); 2653 + local_bh_enable(); 2665 2654 2666 2655 mutex_unlock(&net->mpls.platform_mutex); 2667 2656 ··· 2747 2728 int i; 2748 2729 2749 2730 mutex_init(&net->mpls.platform_mutex); 2731 + seqcount_mutex_init(&net->mpls.platform_label_seq, &net->mpls.platform_mutex); 2732 + 2750 2733 net->mpls.platform_labels = 0; 2751 2734 net->mpls.platform_label = NULL; 2752 2735 net->mpls.ip_ttl_propagate = 1;
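mpls_platform_label_rcu() guards the (table pointer, table size) pair with a sequence counter so a reader can never combine a new table with a stale length across a resize. A C11 sketch of that seqcount read/retry protocol; names are invented and the memory ordering is simplified to the classic even/odd scheme:

    /* C11 sketch of the seqcount protocol guarding the (table, size)
     * pair in mpls_platform_label_rcu(): the writer makes 'seq' odd
     * while updating, readers retry until they see a stable even value. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_uint seq;
    static _Atomic(int *) table;
    static atomic_size_t table_len;

    static void writer_resize(int *new_tbl, size_t new_len)
    {
            atomic_fetch_add(&seq, 1);      /* odd: update in progress */
            atomic_store(&table, new_tbl);
            atomic_store(&table_len, new_len);
            atomic_fetch_add(&seq, 1);      /* even: pair is stable again */
    }

    static int *read_snapshot(size_t *len)
    {
            unsigned int s;
            int *t;

            do {
                    while ((s = atomic_load(&seq)) & 1)
                            ;       /* writer active: wait it out */
                    t = atomic_load(&table);
                    *len = atomic_load(&table_len);
            } while (atomic_load(&seq) != s); /* raced a writer: retry */
            return t;
    }

    int main(void)
    {
            static int tbl[4];
            size_t len;
            int *t;

            writer_resize(tbl, 4);
            t = read_snapshot(&len);
            printf("table %p, len %zu\n", (void *)t, len);
            return 0;
    }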
+8 -3
net/mptcp/protocol.c
··· 2006 2006 static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg, 2007 2007 size_t len, int flags, int copied_total, 2008 2008 struct scm_timestamping_internal *tss, 2009 - int *cmsg_flags) 2009 + int *cmsg_flags, struct sk_buff **last) 2010 2010 { 2011 2011 struct mptcp_sock *msk = mptcp_sk(sk); 2012 2012 struct sk_buff *skb, *tmp; ··· 2023 2023 /* skip already peeked skbs */ 2024 2024 if (total_data_len + data_len <= copied_total) { 2025 2025 total_data_len += data_len; 2026 + *last = skb; 2026 2027 continue; 2027 2028 } 2028 2029 ··· 2059 2058 } 2060 2059 2061 2060 mptcp_eat_recv_skb(sk, skb); 2061 + } else { 2062 + *last = skb; 2062 2063 } 2063 2064 2064 2065 if (copied >= len) ··· 2291 2288 cmsg_flags = MPTCP_CMSG_INQ; 2292 2289 2293 2290 while (copied < len) { 2291 + struct sk_buff *last = NULL; 2294 2292 int err, bytes_read; 2295 2293 2296 2294 bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags, 2297 - copied, &tss, &cmsg_flags); 2295 + copied, &tss, &cmsg_flags, 2296 + &last); 2298 2297 if (unlikely(bytes_read < 0)) { 2299 2298 if (!copied) 2300 2299 copied = bytes_read; ··· 2348 2343 2349 2344 pr_debug("block timeout %ld\n", timeo); 2350 2345 mptcp_cleanup_rbuf(msk, copied); 2351 - err = sk_wait_data(sk, &timeo, NULL); 2346 + err = sk_wait_data(sk, &timeo, last); 2352 2347 if (err < 0) { 2353 2348 err = copied ? : err; 2354 2349 goto out_err;
+2 -2
net/netfilter/ipset/ip_set_core.c
··· 821 821 * 822 822 */ 823 823 ip_set_id_t 824 - ip_set_get_byname(struct net *net, const char *name, struct ip_set **set) 824 + ip_set_get_byname(struct net *net, const struct nlattr *name, struct ip_set **set) 825 825 { 826 826 ip_set_id_t i, index = IPSET_INVALID_ID; 827 827 struct ip_set *s; ··· 830 830 rcu_read_lock(); 831 831 for (i = 0; i < inst->ip_set_max; i++) { 832 832 s = rcu_dereference(inst->ip_set_list)[i]; 833 - if (s && STRNCMP(s->name, name)) { 833 + if (s && nla_strcmp(name, s->name) == 0) { 834 834 __ip_set_get(s); 835 835 index = i; 836 836 *set = s;
+1 -1
net/netfilter/ipset/ip_set_hash_gen.h
··· 1098 1098 if (!test_bit(i, n->used)) 1099 1099 k++; 1100 1100 } 1101 - if (n->pos == 0 && k == 0) { 1101 + if (k == n->pos) { 1102 1102 t->hregion[r].ext_size -= ext_size(n->size, dsize); 1103 1103 rcu_assign_pointer(hbucket(t, key), NULL); 1104 1104 kfree_rcu(n, rcu);
+2 -2
net/netfilter/ipset/ip_set_list_set.c
··· 367 367 ret = ip_set_get_extensions(set, tb, &ext); 368 368 if (ret) 369 369 return ret; 370 - e.id = ip_set_get_byname(map->net, nla_data(tb[IPSET_ATTR_NAME]), &s); 370 + e.id = ip_set_get_byname(map->net, tb[IPSET_ATTR_NAME], &s); 371 371 if (e.id == IPSET_INVALID_ID) 372 372 return -IPSET_ERR_NAME; 373 373 /* "Loop detection" */ ··· 389 389 390 390 if (tb[IPSET_ATTR_NAMEREF]) { 391 391 e.refid = ip_set_get_byname(map->net, 392 - nla_data(tb[IPSET_ATTR_NAMEREF]), 392 + tb[IPSET_ATTR_NAMEREF], 393 393 &s); 394 394 if (e.refid == IPSET_INVALID_ID) { 395 395 ret = -IPSET_ERR_NAMEREF;
+1 -1
net/netfilter/nf_conntrack_helper.c
··· 415 415 */ 416 416 synchronize_rcu(); 417 417 418 - nf_ct_expect_iterate_destroy(expect_iter_me, NULL); 418 + nf_ct_expect_iterate_destroy(expect_iter_me, me); 419 419 nf_ct_iterate_destroy(unhelp, me); 420 420 421 421 /* nf_ct_iterate_destroy() does an unconditional synchronize_rcu() as
+15 -45
net/netfilter/nf_conntrack_netlink.c
··· 2636 2636 2637 2637 static struct nf_conntrack_expect * 2638 2638 ctnetlink_alloc_expect(const struct nlattr *const cda[], struct nf_conn *ct, 2639 - struct nf_conntrack_helper *helper, 2640 2639 struct nf_conntrack_tuple *tuple, 2641 2640 struct nf_conntrack_tuple *mask); 2642 2641 ··· 2864 2865 { 2865 2866 struct nlattr *cda[CTA_EXPECT_MAX+1]; 2866 2867 struct nf_conntrack_tuple tuple, mask; 2867 - struct nf_conntrack_helper *helper = NULL; 2868 2868 struct nf_conntrack_expect *exp; 2869 2869 int err; 2870 2870 ··· 2877 2879 if (err < 0) 2878 2880 return err; 2879 2881 2880 - if (cda[CTA_EXPECT_HELP_NAME]) { 2881 - const char *helpname = nla_data(cda[CTA_EXPECT_HELP_NAME]); 2882 - 2883 - helper = __nf_conntrack_helper_find(helpname, nf_ct_l3num(ct), 2884 - nf_ct_protonum(ct)); 2885 - if (helper == NULL) 2886 - return -EOPNOTSUPP; 2887 - } 2888 - 2889 2882 exp = ctnetlink_alloc_expect((const struct nlattr * const *)cda, ct, 2890 - helper, &tuple, &mask); 2883 + &tuple, &mask); 2891 2884 if (IS_ERR(exp)) 2892 2885 return PTR_ERR(exp); 2893 2886 ··· 3517 3528 3518 3529 static struct nf_conntrack_expect * 3519 3530 ctnetlink_alloc_expect(const struct nlattr * const cda[], struct nf_conn *ct, 3520 - struct nf_conntrack_helper *helper, 3521 3531 struct nf_conntrack_tuple *tuple, 3522 3532 struct nf_conntrack_tuple *mask) 3523 3533 { 3524 3534 struct net *net = read_pnet(&ct->ct_net); 3535 + struct nf_conntrack_helper *helper; 3525 3536 struct nf_conntrack_expect *exp; 3526 3537 struct nf_conn_help *help; 3527 3538 u32 class = 0; ··· 3531 3542 if (!help) 3532 3543 return ERR_PTR(-EOPNOTSUPP); 3533 3544 3534 - if (cda[CTA_EXPECT_CLASS] && helper) { 3545 + helper = rcu_dereference(help->helper); 3546 + if (!helper) 3547 + return ERR_PTR(-EOPNOTSUPP); 3548 + 3549 + if (cda[CTA_EXPECT_CLASS]) { 3535 3550 class = ntohl(nla_get_be32(cda[CTA_EXPECT_CLASS])); 3536 3551 if (class > helper->expect_class_max) 3537 3552 return ERR_PTR(-EINVAL); ··· 3569 3576 #ifdef CONFIG_NF_CONNTRACK_ZONES 3570 3577 exp->zone = ct->zone; 3571 3578 #endif 3572 - if (!helper) 3573 - helper = rcu_dereference(help->helper); 3574 3579 rcu_assign_pointer(exp->helper, helper); 3575 3580 exp->tuple = *tuple; 3576 3581 exp->mask.src.u3 = mask->src.u3; ··· 3579 3588 exp, nf_ct_l3num(ct)); 3580 3589 if (err < 0) 3581 3590 goto err_out; 3591 + #if IS_ENABLED(CONFIG_NF_NAT) 3592 + } else { 3593 + memset(&exp->saved_addr, 0, sizeof(exp->saved_addr)); 3594 + memset(&exp->saved_proto, 0, sizeof(exp->saved_proto)); 3595 + exp->dir = 0; 3596 + #endif 3582 3597 } 3583 3598 return exp; 3584 3599 err_out: ··· 3600 3603 { 3601 3604 struct nf_conntrack_tuple tuple, mask, master_tuple; 3602 3605 struct nf_conntrack_tuple_hash *h = NULL; 3603 - struct nf_conntrack_helper *helper = NULL; 3604 3606 struct nf_conntrack_expect *exp; 3605 3607 struct nf_conn *ct; 3606 3608 int err; ··· 3625 3629 ct = nf_ct_tuplehash_to_ctrack(h); 3626 3630 3627 3631 rcu_read_lock(); 3628 - if (cda[CTA_EXPECT_HELP_NAME]) { 3629 - const char *helpname = nla_data(cda[CTA_EXPECT_HELP_NAME]); 3630 - 3631 - helper = __nf_conntrack_helper_find(helpname, u3, 3632 - nf_ct_protonum(ct)); 3633 - if (helper == NULL) { 3634 - rcu_read_unlock(); 3635 - #ifdef CONFIG_MODULES 3636 - if (request_module("nfct-helper-%s", helpname) < 0) { 3637 - err = -EOPNOTSUPP; 3638 - goto err_ct; 3639 - } 3640 - rcu_read_lock(); 3641 - helper = __nf_conntrack_helper_find(helpname, u3, 3642 - nf_ct_protonum(ct)); 3643 - if (helper) { 3644 - err = -EAGAIN; 3645 - goto err_rcu; 3646 - } 3647 - 
rcu_read_unlock(); 3648 - #endif 3649 - err = -EOPNOTSUPP; 3650 - goto err_ct; 3651 - } 3652 - } 3653 - 3654 - exp = ctnetlink_alloc_expect(cda, ct, helper, &tuple, &mask); 3632 + exp = ctnetlink_alloc_expect(cda, ct, &tuple, &mask); 3655 3633 if (IS_ERR(exp)) { 3656 3634 err = PTR_ERR(exp); 3657 3635 goto err_rcu; ··· 3635 3665 nf_ct_expect_put(exp); 3636 3666 err_rcu: 3637 3667 rcu_read_unlock(); 3638 - err_ct: 3639 3668 nf_ct_put(ct); 3669 + 3640 3670 return err; 3641 3671 } 3642 3672
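With the CTA_EXPECT_HELP_NAME lookups gone, ctnetlink_alloc_expect() takes whichever helper is already bound to the parent conntrack, which also removes the module-autoload dance. That lookup, sketched on its own (demo_bound_helper is an illustrative name; both call sites in the patch hold rcu_read_lock()):

#include <net/netfilter/nf_conntrack.h>
#include <net/netfilter/nf_conntrack_helper.h>

static struct nf_conntrack_helper *demo_bound_helper(struct nf_conn *ct)
{
        struct nf_conn_help *help = nfct_help(ct);

        return help ? rcu_dereference(help->helper) : NULL;
}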
+130 -66
net/netfilter/nf_flow_table_offload.c
··· 14 14 #include <net/netfilter/nf_conntrack_core.h> 15 15 #include <net/netfilter/nf_conntrack_tuple.h> 16 16 17 + #define NF_FLOW_RULE_ACTION_MAX 24 18 + 17 19 static struct workqueue_struct *nf_flow_offload_add_wq; 18 20 static struct workqueue_struct *nf_flow_offload_del_wq; 19 21 static struct workqueue_struct *nf_flow_offload_stats_wq; ··· 218 216 static inline struct flow_action_entry * 219 217 flow_action_entry_next(struct nf_flow_rule *flow_rule) 220 218 { 221 - int i = flow_rule->rule->action.num_entries++; 219 + int i; 220 + 221 + if (unlikely(flow_rule->rule->action.num_entries >= NF_FLOW_RULE_ACTION_MAX)) 222 + return NULL; 223 + 224 + i = flow_rule->rule->action.num_entries++; 222 225 223 226 return &flow_rule->rule->action.entries[i]; 224 227 } ··· 240 233 const unsigned char *addr; 241 234 u32 mask, val; 242 235 u16 val16; 236 + 237 + if (!entry0 || !entry1) 238 + return -E2BIG; 243 239 244 240 this_tuple = &flow->tuplehash[dir].tuple; 245 241 ··· 294 284 u8 nud_state; 295 285 u16 val16; 296 286 287 + if (!entry0 || !entry1) 288 + return -E2BIG; 289 + 297 290 this_tuple = &flow->tuplehash[dir].tuple; 298 291 299 292 switch (this_tuple->xmit_type) { ··· 338 325 return 0; 339 326 } 340 327 341 - static void flow_offload_ipv4_snat(struct net *net, 342 - const struct flow_offload *flow, 343 - enum flow_offload_tuple_dir dir, 344 - struct nf_flow_rule *flow_rule) 328 + static int flow_offload_ipv4_snat(struct net *net, 329 + const struct flow_offload *flow, 330 + enum flow_offload_tuple_dir dir, 331 + struct nf_flow_rule *flow_rule) 345 332 { 346 333 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 347 334 u32 mask = ~htonl(0xffffffff); 348 335 __be32 addr; 349 336 u32 offset; 337 + 338 + if (!entry) 339 + return -E2BIG; 350 340 351 341 switch (dir) { 352 342 case FLOW_OFFLOAD_DIR_ORIGINAL: ··· 361 345 offset = offsetof(struct iphdr, daddr); 362 346 break; 363 347 default: 364 - return; 348 + return -EOPNOTSUPP; 365 349 } 366 350 367 351 flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP4, offset, 368 352 &addr, &mask); 353 + return 0; 369 354 } 370 355 371 - static void flow_offload_ipv4_dnat(struct net *net, 372 - const struct flow_offload *flow, 373 - enum flow_offload_tuple_dir dir, 374 - struct nf_flow_rule *flow_rule) 356 + static int flow_offload_ipv4_dnat(struct net *net, 357 + const struct flow_offload *flow, 358 + enum flow_offload_tuple_dir dir, 359 + struct nf_flow_rule *flow_rule) 375 360 { 376 361 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 377 362 u32 mask = ~htonl(0xffffffff); 378 363 __be32 addr; 379 364 u32 offset; 365 + 366 + if (!entry) 367 + return -E2BIG; 380 368 381 369 switch (dir) { 382 370 case FLOW_OFFLOAD_DIR_ORIGINAL: ··· 392 372 offset = offsetof(struct iphdr, saddr); 393 373 break; 394 374 default: 395 - return; 375 + return -EOPNOTSUPP; 396 376 } 397 377 398 378 flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP4, offset, 399 379 &addr, &mask); 380 + return 0; 400 381 } 401 382 402 - static void flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule, 383 + static int flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule, 403 384 unsigned int offset, 404 385 const __be32 *addr, const __be32 *mask) 405 386 { ··· 409 388 410 389 for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i++) { 411 390 entry = flow_action_entry_next(flow_rule); 391 + if (!entry) 392 + return -E2BIG; 393 + 412 394 flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6, 413 395 offset + i * sizeof(u32), &addr[i], 
mask); 414 396 } 397 + 398 + return 0; 415 399 } 416 400 417 - static void flow_offload_ipv6_snat(struct net *net, 418 - const struct flow_offload *flow, 419 - enum flow_offload_tuple_dir dir, 420 - struct nf_flow_rule *flow_rule) 401 + static int flow_offload_ipv6_snat(struct net *net, 402 + const struct flow_offload *flow, 403 + enum flow_offload_tuple_dir dir, 404 + struct nf_flow_rule *flow_rule) 421 405 { 422 406 u32 mask = ~htonl(0xffffffff); 423 407 const __be32 *addr; ··· 438 412 offset = offsetof(struct ipv6hdr, daddr); 439 413 break; 440 414 default: 441 - return; 415 + return -EOPNOTSUPP; 442 416 } 443 417 444 - flow_offload_ipv6_mangle(flow_rule, offset, addr, &mask); 418 + return flow_offload_ipv6_mangle(flow_rule, offset, addr, &mask); 445 419 } 446 420 447 - static void flow_offload_ipv6_dnat(struct net *net, 448 - const struct flow_offload *flow, 449 - enum flow_offload_tuple_dir dir, 450 - struct nf_flow_rule *flow_rule) 421 + static int flow_offload_ipv6_dnat(struct net *net, 422 + const struct flow_offload *flow, 423 + enum flow_offload_tuple_dir dir, 424 + struct nf_flow_rule *flow_rule) 451 425 { 452 426 u32 mask = ~htonl(0xffffffff); 453 427 const __be32 *addr; ··· 463 437 offset = offsetof(struct ipv6hdr, saddr); 464 438 break; 465 439 default: 466 - return; 440 + return -EOPNOTSUPP; 467 441 } 468 442 469 - flow_offload_ipv6_mangle(flow_rule, offset, addr, &mask); 443 + return flow_offload_ipv6_mangle(flow_rule, offset, addr, &mask); 470 444 } 471 445 472 446 static int flow_offload_l4proto(const struct flow_offload *flow) ··· 488 462 return type; 489 463 } 490 464 491 - static void flow_offload_port_snat(struct net *net, 492 - const struct flow_offload *flow, 493 - enum flow_offload_tuple_dir dir, 494 - struct nf_flow_rule *flow_rule) 465 + static int flow_offload_port_snat(struct net *net, 466 + const struct flow_offload *flow, 467 + enum flow_offload_tuple_dir dir, 468 + struct nf_flow_rule *flow_rule) 495 469 { 496 470 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 497 471 u32 mask, port; 498 472 u32 offset; 473 + 474 + if (!entry) 475 + return -E2BIG; 499 476 500 477 switch (dir) { 501 478 case FLOW_OFFLOAD_DIR_ORIGINAL: ··· 514 485 mask = ~htonl(0xffff); 515 486 break; 516 487 default: 517 - return; 488 + return -EOPNOTSUPP; 518 489 } 519 490 520 491 flow_offload_mangle(entry, flow_offload_l4proto(flow), offset, 521 492 &port, &mask); 493 + return 0; 522 494 } 523 495 524 - static void flow_offload_port_dnat(struct net *net, 525 - const struct flow_offload *flow, 526 - enum flow_offload_tuple_dir dir, 527 - struct nf_flow_rule *flow_rule) 496 + static int flow_offload_port_dnat(struct net *net, 497 + const struct flow_offload *flow, 498 + enum flow_offload_tuple_dir dir, 499 + struct nf_flow_rule *flow_rule) 528 500 { 529 501 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 530 502 u32 mask, port; 531 503 u32 offset; 504 + 505 + if (!entry) 506 + return -E2BIG; 532 507 533 508 switch (dir) { 534 509 case FLOW_OFFLOAD_DIR_ORIGINAL: ··· 548 515 mask = ~htonl(0xffff0000); 549 516 break; 550 517 default: 551 - return; 518 + return -EOPNOTSUPP; 552 519 } 553 520 554 521 flow_offload_mangle(entry, flow_offload_l4proto(flow), offset, 555 522 &port, &mask); 523 + return 0; 556 524 } 557 525 558 - static void flow_offload_ipv4_checksum(struct net *net, 559 - const struct flow_offload *flow, 560 - struct nf_flow_rule *flow_rule) 526 + static int flow_offload_ipv4_checksum(struct net *net, 527 + const struct flow_offload *flow, 
528 + struct nf_flow_rule *flow_rule) 561 529 { 562 530 u8 protonum = flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.l4proto; 563 531 struct flow_action_entry *entry = flow_action_entry_next(flow_rule); 532 + 533 + if (!entry) 534 + return -E2BIG; 564 535 565 536 entry->id = FLOW_ACTION_CSUM; 566 537 entry->csum_flags = TCA_CSUM_UPDATE_FLAG_IPV4HDR; ··· 577 540 entry->csum_flags |= TCA_CSUM_UPDATE_FLAG_UDP; 578 541 break; 579 542 } 543 + 544 + return 0; 580 545 } 581 546 582 - static void flow_offload_redirect(struct net *net, 583 - const struct flow_offload *flow, 584 - enum flow_offload_tuple_dir dir, 585 - struct nf_flow_rule *flow_rule) 547 + static int flow_offload_redirect(struct net *net, 548 + const struct flow_offload *flow, 549 + enum flow_offload_tuple_dir dir, 550 + struct nf_flow_rule *flow_rule) 586 551 { 587 552 const struct flow_offload_tuple *this_tuple, *other_tuple; 588 553 struct flow_action_entry *entry; ··· 602 563 ifindex = other_tuple->iifidx; 603 564 break; 604 565 default: 605 - return; 566 + return -EOPNOTSUPP; 606 567 } 607 568 608 569 dev = dev_get_by_index(net, ifindex); 609 570 if (!dev) 610 - return; 571 + return -ENODEV; 611 572 612 573 entry = flow_action_entry_next(flow_rule); 574 + if (!entry) { 575 + dev_put(dev); 576 + return -E2BIG; 577 + } 578 + 613 579 entry->id = FLOW_ACTION_REDIRECT; 614 580 entry->dev = dev; 581 + 582 + return 0; 615 583 } 616 584 617 - static void flow_offload_encap_tunnel(const struct flow_offload *flow, 618 - enum flow_offload_tuple_dir dir, 619 - struct nf_flow_rule *flow_rule) 585 + static int flow_offload_encap_tunnel(const struct flow_offload *flow, 586 + enum flow_offload_tuple_dir dir, 587 + struct nf_flow_rule *flow_rule) 620 588 { 621 589 const struct flow_offload_tuple *this_tuple; 622 590 struct flow_action_entry *entry; ··· 631 585 632 586 this_tuple = &flow->tuplehash[dir].tuple; 633 587 if (this_tuple->xmit_type == FLOW_OFFLOAD_XMIT_DIRECT) 634 - return; 588 + return 0; 635 589 636 590 dst = this_tuple->dst_cache; 637 591 if (dst && dst->lwtstate) { ··· 640 594 tun_info = lwt_tun_info(dst->lwtstate); 641 595 if (tun_info && (tun_info->mode & IP_TUNNEL_INFO_TX)) { 642 596 entry = flow_action_entry_next(flow_rule); 597 + if (!entry) 598 + return -E2BIG; 643 599 entry->id = FLOW_ACTION_TUNNEL_ENCAP; 644 600 entry->tunnel = tun_info; 645 601 } 646 602 } 603 + 604 + return 0; 647 605 } 648 606 649 - static void flow_offload_decap_tunnel(const struct flow_offload *flow, 650 - enum flow_offload_tuple_dir dir, 651 - struct nf_flow_rule *flow_rule) 607 + static int flow_offload_decap_tunnel(const struct flow_offload *flow, 608 + enum flow_offload_tuple_dir dir, 609 + struct nf_flow_rule *flow_rule) 652 610 { 653 611 const struct flow_offload_tuple *other_tuple; 654 612 struct flow_action_entry *entry; ··· 660 610 661 611 other_tuple = &flow->tuplehash[!dir].tuple; 662 612 if (other_tuple->xmit_type == FLOW_OFFLOAD_XMIT_DIRECT) 663 - return; 613 + return 0; 664 614 665 615 dst = other_tuple->dst_cache; 666 616 if (dst && dst->lwtstate) { ··· 669 619 tun_info = lwt_tun_info(dst->lwtstate); 670 620 if (tun_info && (tun_info->mode & IP_TUNNEL_INFO_TX)) { 671 621 entry = flow_action_entry_next(flow_rule); 622 + if (!entry) 623 + return -E2BIG; 672 624 entry->id = FLOW_ACTION_TUNNEL_DECAP; 673 625 } 674 626 } 627 + 628 + return 0; 675 629 } 676 630 677 631 static int ··· 687 633 const struct flow_offload_tuple *tuple; 688 634 int i; 689 635 690 - flow_offload_decap_tunnel(flow, dir, flow_rule); 691 - 
flow_offload_encap_tunnel(flow, dir, flow_rule); 636 + if (flow_offload_decap_tunnel(flow, dir, flow_rule) < 0 || 637 + flow_offload_encap_tunnel(flow, dir, flow_rule) < 0) 638 + return -1; 692 639 693 640 if (flow_offload_eth_src(net, flow, dir, flow_rule) < 0 || 694 641 flow_offload_eth_dst(net, flow, dir, flow_rule) < 0) ··· 705 650 706 651 if (tuple->encap[i].proto == htons(ETH_P_8021Q)) { 707 652 entry = flow_action_entry_next(flow_rule); 653 + if (!entry) 654 + return -1; 708 655 entry->id = FLOW_ACTION_VLAN_POP; 709 656 } 710 657 } ··· 720 663 continue; 721 664 722 665 entry = flow_action_entry_next(flow_rule); 666 + if (!entry) 667 + return -1; 723 668 724 669 switch (other_tuple->encap[i].proto) { 725 670 case htons(ETH_P_PPP_SES): ··· 747 688 return -1; 748 689 749 690 if (test_bit(NF_FLOW_SNAT, &flow->flags)) { 750 - flow_offload_ipv4_snat(net, flow, dir, flow_rule); 751 - flow_offload_port_snat(net, flow, dir, flow_rule); 691 + if (flow_offload_ipv4_snat(net, flow, dir, flow_rule) < 0 || 692 + flow_offload_port_snat(net, flow, dir, flow_rule) < 0) 693 + return -1; 752 694 } 753 695 if (test_bit(NF_FLOW_DNAT, &flow->flags)) { 754 - flow_offload_ipv4_dnat(net, flow, dir, flow_rule); 755 - flow_offload_port_dnat(net, flow, dir, flow_rule); 696 + if (flow_offload_ipv4_dnat(net, flow, dir, flow_rule) < 0 || 697 + flow_offload_port_dnat(net, flow, dir, flow_rule) < 0) 698 + return -1; 756 699 } 757 700 if (test_bit(NF_FLOW_SNAT, &flow->flags) || 758 701 test_bit(NF_FLOW_DNAT, &flow->flags)) 759 - flow_offload_ipv4_checksum(net, flow, flow_rule); 702 + if (flow_offload_ipv4_checksum(net, flow, flow_rule) < 0) 703 + return -1; 760 704 761 - flow_offload_redirect(net, flow, dir, flow_rule); 705 + if (flow_offload_redirect(net, flow, dir, flow_rule) < 0) 706 + return -1; 762 707 763 708 return 0; 764 709 } ··· 776 713 return -1; 777 714 778 715 if (test_bit(NF_FLOW_SNAT, &flow->flags)) { 779 - flow_offload_ipv6_snat(net, flow, dir, flow_rule); 780 - flow_offload_port_snat(net, flow, dir, flow_rule); 716 + if (flow_offload_ipv6_snat(net, flow, dir, flow_rule) < 0 || 717 + flow_offload_port_snat(net, flow, dir, flow_rule) < 0) 718 + return -1; 781 719 } 782 720 if (test_bit(NF_FLOW_DNAT, &flow->flags)) { 783 - flow_offload_ipv6_dnat(net, flow, dir, flow_rule); 784 - flow_offload_port_dnat(net, flow, dir, flow_rule); 721 + if (flow_offload_ipv6_dnat(net, flow, dir, flow_rule) < 0 || 722 + flow_offload_port_dnat(net, flow, dir, flow_rule) < 0) 723 + return -1; 785 724 } 786 725 787 - flow_offload_redirect(net, flow, dir, flow_rule); 726 + if (flow_offload_redirect(net, flow, dir, flow_rule) < 0) 727 + return -1; 788 728 789 729 return 0; 790 730 } 791 731 EXPORT_SYMBOL_GPL(nf_flow_rule_route_ipv6); 792 - 793 - #define NF_FLOW_RULE_ACTION_MAX 16 794 732 795 733 static struct nf_flow_rule * 796 734 nf_flow_offload_rule_alloc(struct net *net,
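All of the nf_flow_table_offload.c helpers above change signature for one reason: flow_action_entry_next() can now return NULL once a rule holds NF_FLOW_RULE_ACTION_MAX (raised from 16 to 24) entries, so every append must be checked. The repeated caller pattern, reduced to a sketch; flow_action_entry_next() is file-local, so this only illustrates the shape:

/* Append one action; -E2BIG once the fixed-size action table is full. */
static int demo_add_action(struct nf_flow_rule *flow_rule,
                           enum flow_action_id id)
{
        struct flow_action_entry *entry;

        entry = flow_action_entry_next(flow_rule);
        if (!entry)
                return -E2BIG;

        entry->id = id;
        return 0;
}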
+5 -2
net/netfilter/nf_tables_api.c
··· 11667 11667 switch (data->verdict.code) { 11668 11668 case NF_ACCEPT: 11669 11669 case NF_DROP: 11670 - case NF_QUEUE: 11671 - break; 11672 11670 case NFT_CONTINUE: 11673 11671 case NFT_BREAK: 11674 11672 case NFT_RETURN: ··· 11701 11703 11702 11704 data->verdict.chain = chain; 11703 11705 break; 11706 + case NF_QUEUE: 11707 + /* The nft_queue expression is used for this purpose; an 11708 + * immediate NF_QUEUE verdict should never be seen here. 11709 + */ 11710 + fallthrough; 11704 11711 default: 11705 11712 return -EINVAL; 11706 11713 }
+23
net/netfilter/x_tables.c
··· 501 501 par->match->table, par->table); 502 502 return -EINVAL; 503 503 } 504 + 505 + /* NFPROTO_UNSPEC implies NF_INET_* hooks which do not overlap with 506 + * NF_ARP_IN,OUT,FORWARD, allow explicit extensions with NFPROTO_ARP 507 + * support. 508 + */ 509 + if (par->family == NFPROTO_ARP && 510 + par->match->family != NFPROTO_ARP) { 511 + pr_info_ratelimited("%s_tables: %s match: not valid for this family\n", 512 + xt_prefix[par->family], par->match->name); 513 + return -EINVAL; 514 + } 504 515 if (par->match->hooks && (par->hook_mask & ~par->match->hooks) != 0) { 505 516 char used[64], allow[64]; 506 517 ··· 1027 1016 par->target->table, par->table); 1028 1017 return -EINVAL; 1029 1018 } 1019 + 1020 + /* NFPROTO_UNSPEC implies NF_INET_* hooks which do not overlap with 1021 + * NF_ARP_IN,OUT,FORWARD, allow explicit extensions with NFPROTO_ARP 1022 + * support. 1023 + */ 1024 + if (par->family == NFPROTO_ARP && 1025 + par->target->family != NFPROTO_ARP) { 1026 + pr_info_ratelimited("%s_tables: %s target: not valid for this family\n", 1027 + xt_prefix[par->family], par->target->name); 1028 + return -EINVAL; 1029 + } 1030 + 1030 1031 if (par->target->hooks && (par->hook_mask & ~par->target->hooks) != 0) { 1031 1032 char used[64], allow[64]; 1032 1033
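Both x_tables.c hunks add the same guard for matches and targets: an extension registered for NFPROTO_UNSPEC only covers the NF_INET_* hooks, so instantiating it from an arptables rule (whose hooks are NF_ARP_*) must fail unless the extension explicitly declared NFPROTO_ARP. The check in isolation, with illustrative parameter names:

#include <linux/netfilter.h>
#include <linux/errno.h>

/* table_family: family of the table the rule lives in;
 * ext_family:   family the match/target registered with. */
static int demo_check_arp_family(u8 table_family, u8 ext_family)
{
        if (table_family == NFPROTO_ARP && ext_family != NFPROTO_ARP)
                return -EINVAL;
        return 0;
}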
+6
net/netfilter/xt_cgroup.c
··· 65 65 66 66 info->priv = NULL; 67 67 if (info->has_path) { 68 + if (strnlen(info->path, sizeof(info->path)) >= sizeof(info->path)) 69 + return -ENAMETOOLONG; 70 + 68 71 cgrp = cgroup_get_from_path(info->path); 69 72 if (IS_ERR(cgrp)) { 70 73 pr_info_ratelimited("invalid path, errno=%ld\n", ··· 105 102 106 103 info->priv = NULL; 107 104 if (info->has_path) { 105 + if (strnlen(info->path, sizeof(info->path)) >= sizeof(info->path)) 106 + return -ENAMETOOLONG; 107 + 108 108 cgrp = cgroup_get_from_path(info->path); 109 109 if (IS_ERR(cgrp)) { 110 110 pr_info_ratelimited("invalid path, errno=%ld\n",
+5
net/netfilter/xt_rateest.c
··· 91 91 goto err1; 92 92 } 93 93 94 + if (strnlen(info->name1, sizeof(info->name1)) >= sizeof(info->name1)) 95 + return -ENAMETOOLONG; 96 + if (strnlen(info->name2, sizeof(info->name2)) >= sizeof(info->name2)) 97 + return -ENAMETOOLONG; 98 + 94 99 ret = -ENOENT; 95 100 est1 = xt_rateest_lookup(par->net, info->name1); 96 101 if (!est1)
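xt_cgroup.c and xt_rateest.c above gain the same defensive check: fixed-size strings arriving in match-info from userspace are only used after strnlen() proves a terminating NUL exists inside the buffer. As a standalone helper, the idiom is:

#include <linux/string.h>
#include <linux/errno.h>

/* 0 if buf is properly NUL-terminated within its fixed capacity,
 * -ENAMETOOLONG otherwise. */
static int demo_check_terminated(const char *buf, size_t size)
{
        return strnlen(buf, size) < size ? 0 : -ENAMETOOLONG;
}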
+13 -18
net/qrtr/af_qrtr.c
··· 118 118 * @ep: endpoint 119 119 * @ref: reference count for node 120 120 * @nid: node id 121 - * @qrtr_tx_flow: tree of qrtr_tx_flow, keyed by node << 32 | port 121 + * @qrtr_tx_flow: xarray of qrtr_tx_flow, keyed by node << 32 | port 122 122 * @qrtr_tx_lock: lock for qrtr_tx_flow inserts 123 123 * @rx_queue: receive queue 124 124 * @item: list item for broadcast list ··· 129 129 struct kref ref; 130 130 unsigned int nid; 131 131 132 - struct radix_tree_root qrtr_tx_flow; 132 + struct xarray qrtr_tx_flow; 133 133 struct mutex qrtr_tx_lock; /* for qrtr_tx_flow */ 134 134 135 135 struct sk_buff_head rx_queue; ··· 172 172 struct qrtr_tx_flow *flow; 173 173 unsigned long flags; 174 174 void __rcu **slot; 175 + unsigned long index; 175 176 176 177 spin_lock_irqsave(&qrtr_nodes_lock, flags); 177 178 /* If the node is a bridge for other nodes, there are possibly ··· 190 189 skb_queue_purge(&node->rx_queue); 191 190 192 191 /* Free tx flow counters */ 193 - radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) { 194 - flow = *slot; 195 - radix_tree_iter_delete(&node->qrtr_tx_flow, &iter, slot); 192 + xa_for_each(&node->qrtr_tx_flow, index, flow) 196 193 kfree(flow); 197 - } 194 + xa_destroy(&node->qrtr_tx_flow); 198 195 kfree(node); 199 196 } 200 197 ··· 227 228 228 229 key = remote_node << 32 | remote_port; 229 230 230 - rcu_read_lock(); 231 - flow = radix_tree_lookup(&node->qrtr_tx_flow, key); 232 - rcu_read_unlock(); 231 + flow = xa_load(&node->qrtr_tx_flow, key); 233 232 if (flow) { 234 233 spin_lock(&flow->resume_tx.lock); 235 234 flow->pending = 0; ··· 266 269 return 0; 267 270 268 271 mutex_lock(&node->qrtr_tx_lock); 269 - flow = radix_tree_lookup(&node->qrtr_tx_flow, key); 272 + flow = xa_load(&node->qrtr_tx_flow, key); 270 273 if (!flow) { 271 274 flow = kzalloc_obj(*flow); 272 275 if (flow) { 273 276 init_waitqueue_head(&flow->resume_tx); 274 - if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) { 277 + if (xa_err(xa_store(&node->qrtr_tx_flow, key, flow, 278 + GFP_KERNEL))) { 275 279 kfree(flow); 276 280 flow = NULL; 277 281 } ··· 324 326 unsigned long key = (u64)dest_node << 32 | dest_port; 325 327 struct qrtr_tx_flow *flow; 326 328 327 - rcu_read_lock(); 328 - flow = radix_tree_lookup(&node->qrtr_tx_flow, key); 329 - rcu_read_unlock(); 329 + flow = xa_load(&node->qrtr_tx_flow, key); 330 330 if (flow) { 331 331 spin_lock_irq(&flow->resume_tx.lock); 332 332 flow->tx_failed = 1; ··· 595 599 node->nid = QRTR_EP_NID_AUTO; 596 600 node->ep = ep; 597 601 598 - INIT_RADIX_TREE(&node->qrtr_tx_flow, GFP_KERNEL); 602 + xa_init(&node->qrtr_tx_flow); 599 603 mutex_init(&node->qrtr_tx_lock); 600 604 601 605 qrtr_node_assign(node, nid); ··· 623 627 struct qrtr_tx_flow *flow; 624 628 struct sk_buff *skb; 625 629 unsigned long flags; 630 + unsigned long index; 626 631 void __rcu **slot; 627 632 628 633 mutex_lock(&node->ep_lock); ··· 646 649 647 650 /* Wake up any transmitters waiting for resume-tx from the node */ 648 651 mutex_lock(&node->qrtr_tx_lock); 649 - radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) { 650 - flow = *slot; 652 + xa_for_each(&node->qrtr_tx_flow, index, flow) 651 653 wake_up_interruptible_all(&flow->resume_tx); 652 - } 653 654 mutex_unlock(&node->qrtr_tx_lock); 654 655 655 656 qrtr_node_release(node);
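The af_qrtr.c conversion swaps the radix tree for an XArray, whose internal locking is why the xa_load() call sites drop their explicit rcu_read_lock()/unlock() pairs. The complete set of XArray operations the patch leans on, gathered into one illustrative unit (the demo_* names are not from the patch):

#include <linux/xarray.h>
#include <linux/slab.h>

static DEFINE_XARRAY(demo_flows);       /* xa_init() for the dynamic case */

static int demo_store(unsigned long key, void *flow)
{
        /* xa_store() returns the displaced entry or an xa_err()-encoded
         * error such as -ENOMEM. */
        return xa_err(xa_store(&demo_flows, key, flow, GFP_KERNEL));
}

static void *demo_lookup(unsigned long key)
{
        return xa_load(&demo_flows, key);       /* no caller-side RCU needed */
}

static void demo_teardown(void)
{
        unsigned long index;
        void *flow;

        xa_for_each(&demo_flows, index, flow)
                kfree(flow);
        xa_destroy(&demo_flows);
}

One nit worth flagging in the hunks above: the void __rcu **slot declarations that served radix_tree_for_each_slot() remain in both functions and now appear unused.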
+6 -1
net/rds/ib_rdma.c
··· 604 604 return ibmr; 605 605 } 606 606 607 - if (conn) 607 + if (conn) { 608 608 ic = conn->c_transport_data; 609 + if (!ic || !ic->i_cm_id || !ic->i_cm_id->qp) { 610 + ret = -ENODEV; 611 + goto out; 612 + } 613 + } 609 614 610 615 if (!rds_ibdev->mr_8k_pool || !rds_ibdev->mr_1m_pool) { 611 616 ret = -ENODEV;
+1
net/sched/cls_api.c
··· 2969 2969 tcm->tcm__pad1 = 0; 2970 2970 tcm->tcm__pad2 = 0; 2971 2971 tcm->tcm_handle = 0; 2972 + tcm->tcm_info = 0; 2972 2973 if (block->q) { 2973 2974 tcm->tcm_ifindex = qdisc_dev(block->q)->ifindex; 2974 2975 tcm->tcm_parent = block->q->handle;
+9 -1
net/sched/cls_flow.c
··· 503 503 } 504 504 505 505 if (TC_H_MAJ(baseclass) == 0) { 506 - struct Qdisc *q = tcf_block_q(tp->chain->block); 506 + struct tcf_block *block = tp->chain->block; 507 + struct Qdisc *q; 507 508 509 + if (tcf_block_shared(block)) { 510 + NL_SET_ERR_MSG(extack, 511 + "Must specify baseclass when attaching flow filter to block"); 512 + goto err2; 513 + } 514 + 515 + q = tcf_block_q(block); 508 516 baseclass = TC_H_MAKE(q->handle, baseclass); 509 517 } 510 518 if (TC_H_MIN(baseclass) == 0)
+12 -2
net/sched/cls_fw.c
··· 247 247 struct nlattr *tb[TCA_FW_MAX + 1]; 248 248 int err; 249 249 250 - if (!opt) 251 - return handle ? -EINVAL : 0; /* Succeed if it is old method. */ 250 + if (!opt) { 251 + if (handle) 252 + return -EINVAL; 253 + 254 + if (tcf_block_shared(tp->chain->block)) { 255 + NL_SET_ERR_MSG(extack, 256 + "Must specify mark when attaching fw filter to block"); 257 + return -EINVAL; 258 + } 259 + 260 + return 0; /* Succeed if it is old method. */ 261 + } 252 262 253 263 err = nla_parse_nested_deprecated(tb, TCA_FW_MAX, opt, fw_policy, 254 264 NULL);
+2 -2
net/sched/sch_hfsc.c
··· 555 555 rtsc_min(struct runtime_sc *rtsc, struct internal_sc *isc, u64 x, u64 y) 556 556 { 557 557 u64 y1, y2, dx, dy; 558 - u32 dsm; 558 + u64 dsm; 559 559 560 560 if (isc->sm1 <= isc->sm2) { 561 561 /* service curve is convex */ ··· 598 598 */ 599 599 dx = (y1 - y) << SM_SHIFT; 600 600 dsm = isc->sm1 - isc->sm2; 601 - do_div(dx, dsm); 601 + dx = div64_u64(dx, dsm); 602 602 /* 603 603 * check if (x, y1) belongs to the 1st segment of rtsc. 604 604 * if so, add the offset.
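Widening dsm to u64 in rtsc_min() forces the divide to change as well: do_div() takes a 32-bit divisor and would silently truncate sm1 - sm2, whereas div64_u64() performs the full 64-by-64 division. The distinction, reduced to a sketch:

#include <linux/math64.h>

static u64 demo_slope_div(u64 dx, u64 dsm)
{
        /* do_div(dx, (u32)dsm) would discard the upper 32 bits of the
         * divisor; div64_u64() keeps them. */
        return div64_u64(dx, dsm);
}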
+3 -2
net/sched/sch_netem.c
··· 519 519 goto finish_segs; 520 520 } 521 521 522 - skb->data[get_random_u32_below(skb_headlen(skb))] ^= 523 - 1<<get_random_u32_below(8); 522 + if (skb_headlen(skb)) 523 + skb->data[get_random_u32_below(skb_headlen(skb))] ^= 524 + 1 << get_random_u32_below(8); 524 525 } 525 526 526 527 if (unlikely(q->t_len >= sch->limit)) {
+1
net/vmw_vsock/af_vsock.c
··· 2928 2928 net->vsock.mode = vsock_net_child_mode(current->nsproxy->net_ns); 2929 2929 2930 2930 net->vsock.child_ns_mode = net->vsock.mode; 2931 + net->vsock.child_ns_mode_locked = 0; 2931 2932 } 2932 2933 2933 2934 static __net_init int vsock_sysctl_init_net(struct net *net)
+6 -3
net/x25/x25_in.c
··· 34 34 struct sk_buff *skbo, *skbn = skb; 35 35 struct x25_sock *x25 = x25_sk(sk); 36 36 37 + /* make sure we don't overflow */ 38 + if (x25->fraglen + skb->len > USHRT_MAX) 39 + return 1; 40 + 37 41 if (more) { 38 42 x25->fraglen += skb->len; 39 43 skb_queue_tail(&x25->fragment_queue, skb); ··· 48 44 if (x25->fraglen > 0) { /* End of fragment */ 49 45 int len = x25->fraglen + skb->len; 50 46 51 - if ((skbn = alloc_skb(len, GFP_ATOMIC)) == NULL){ 52 - kfree_skb(skb); 47 + skbn = alloc_skb(len, GFP_ATOMIC); 48 + if (!skbn) 53 49 return 1; 54 - } 55 50 56 51 skb_queue_tail(&x25->fragment_queue, skb); 57 52
+1
net/x25/x25_subr.c
··· 40 40 skb_queue_purge(&x25->interrupt_in_queue); 41 41 skb_queue_purge(&x25->interrupt_out_queue); 42 42 skb_queue_purge(&x25->fragment_queue); 43 + x25->fraglen = 0; 43 44 } 44 45 45 46
+27 -1
sound/hda/codecs/realtek/alc269.c
··· 4122 4122 ALC233_FIXUP_LENOVO_GPIO2_MIC_HOTKEY, 4123 4123 ALC245_FIXUP_BASS_HP_DAC, 4124 4124 ALC245_FIXUP_ACER_MICMUTE_LED, 4125 + ALC245_FIXUP_CS35L41_I2C_2_MUTE_LED, 4126 + ALC236_FIXUP_HP_DMIC, 4125 4127 }; 4126 4128 4127 4129 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 6653 6651 .v.func = alc285_fixup_hp_coef_micmute_led, 6654 6652 .chained = true, 6655 6653 .chain_id = ALC2XX_FIXUP_HEADSET_MIC, 6654 + }, 6655 + [ALC245_FIXUP_CS35L41_I2C_2_MUTE_LED] = { 6656 + .type = HDA_FIXUP_FUNC, 6657 + .v.func = alc245_fixup_hp_mute_led_coefbit, 6658 + .chained = true, 6659 + .chain_id = ALC287_FIXUP_CS35L41_I2C_2, 6660 + }, 6661 + [ALC236_FIXUP_HP_DMIC] = { 6662 + .type = HDA_FIXUP_PINS, 6663 + .v.pins = (const struct hda_pintbl[]) { 6664 + { 0x12, 0x90a60160 }, /* use as internal mic */ 6665 + { } 6666 + }, 6656 6667 } 6657 6668 }; 6658 6669 ··· 6720 6705 SND_PCI_QUIRK(0x1025, 0x1597, "Acer Nitro 5 AN517-55", ALC2XX_FIXUP_HEADSET_MIC), 6721 6706 SND_PCI_QUIRK(0x1025, 0x169a, "Acer Swift SFG16", ALC256_FIXUP_ACER_SFG16_MICMUTE_LED), 6722 6707 SND_PCI_QUIRK(0x1025, 0x171e, "Acer Nitro ANV15-51", ALC245_FIXUP_ACER_MICMUTE_LED), 6708 + SND_PCI_QUIRK(0x1025, 0x173a, "Acer Swift SFG14-73", ALC245_FIXUP_ACER_MICMUTE_LED), 6723 6709 SND_PCI_QUIRK(0x1025, 0x1826, "Acer Helios ZPC", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2), 6724 6710 SND_PCI_QUIRK(0x1025, 0x182c, "Acer Helios ZPD", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2), 6725 6711 SND_PCI_QUIRK(0x1025, 0x1844, "Acer Helios ZPS", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2), ··· 7018 7002 SND_PCI_QUIRK(0x103c, 0x8a30, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2), 7019 7003 SND_PCI_QUIRK(0x103c, 0x8a31, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2), 7020 7004 SND_PCI_QUIRK(0x103c, 0x8a34, "HP Pavilion x360 2-in-1 Laptop 14-ek0xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 7005 + SND_PCI_QUIRK(0x103c, 0x8a3d, "HP Victus 15-fb0xxx (MB 8A3D)", ALC245_FIXUP_HP_MUTE_LED_V2_COEFBIT), 7021 7006 SND_PCI_QUIRK(0x103c, 0x8a4f, "HP Victus 15-fa0xxx (MB 8A4F)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 7022 7007 SND_PCI_QUIRK(0x103c, 0x8a6e, "HP EDNA 360", ALC287_FIXUP_CS35L41_I2C_4), 7023 7008 SND_PCI_QUIRK(0x103c, 0x8a74, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), ··· 7162 7145 SND_PCI_QUIRK(0x103c, 0x8da1, "HP 16 Clipper OmniBook X", ALC287_FIXUP_CS35L41_I2C_2), 7163 7146 SND_PCI_QUIRK(0x103c, 0x8da7, "HP 14 Enstrom OmniBook X", ALC287_FIXUP_CS35L41_I2C_2), 7164 7147 SND_PCI_QUIRK(0x103c, 0x8da8, "HP 16 Piston OmniBook X", ALC287_FIXUP_CS35L41_I2C_2), 7148 + SND_PCI_QUIRK(0x103c, 0x8dc9, "HP Laptop 15-fc0xxx", ALC236_FIXUP_HP_DMIC), 7165 7149 SND_PCI_QUIRK(0x103c, 0x8dd4, "HP EliteStudio 8 AIO", ALC274_FIXUP_HP_AIO_BIND_DACS), 7166 7150 SND_PCI_QUIRK(0x103c, 0x8dd7, "HP Laptop 15-fd0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 7167 7151 SND_PCI_QUIRK(0x103c, 0x8de8, "HP Gemtree", ALC245_FIXUP_TAS2781_SPI_2), ··· 7195 7177 SND_PCI_QUIRK(0x103c, 0x8e37, "HP 16 Piston OmniBook X", ALC287_FIXUP_CS35L41_I2C_2), 7196 7178 SND_PCI_QUIRK(0x103c, 0x8e3a, "HP Agusta", ALC287_FIXUP_CS35L41_I2C_2), 7197 7179 SND_PCI_QUIRK(0x103c, 0x8e3b, "HP Agusta", ALC287_FIXUP_CS35L41_I2C_2), 7198 - SND_PCI_QUIRK(0x103c, 0x8e60, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 7180 + SND_PCI_QUIRK(0x103c, 0x8e60, "HP OmniBook 7 Laptop 16-bh0xxx", ALC245_FIXUP_CS35L41_I2C_2_MUTE_LED), 7199 7181 SND_PCI_QUIRK(0x103c, 0x8e61, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 7200 7182 SND_PCI_QUIRK(0x103c, 0x8e62, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 7201 7183 SND_PCI_QUIRK(0x103c, 0x8e8a, 
"HP NexusX", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), ··· 7279 7261 SND_PCI_QUIRK(0x1043, 0x1533, "ASUS GV302XA/XJ/XQ/XU/XV/XI", ALC287_FIXUP_CS35L41_I2C_2), 7280 7262 SND_PCI_QUIRK(0x1043, 0x1573, "ASUS GZ301VV/VQ/VU/VJ/VA/VC/VE/VVC/VQC/VUC/VJC/VEC/VCC", ALC285_FIXUP_ASUS_HEADSET_MIC), 7281 7263 SND_PCI_QUIRK(0x1043, 0x1584, "ASUS UM3406GA ", ALC287_FIXUP_CS35L41_I2C_2), 7264 + SND_PCI_QUIRK(0x1043, 0x1602, "ASUS ROG Strix SCAR 15", ALC285_FIXUP_ASUS_G533Z_PINS), 7282 7265 SND_PCI_QUIRK(0x1043, 0x1652, "ASUS ROG Zephyrus Do 15 SE", ALC289_FIXUP_ASUS_ZEPHYRUS_DUAL_SPK), 7283 7266 SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK), 7284 7267 SND_PCI_QUIRK(0x1043, 0x1663, "ASUS GU603ZI/ZJ/ZQ/ZU/ZV", ALC285_FIXUP_ASUS_HEADSET_MIC), ··· 7419 7400 SND_PCI_QUIRK(0x144d, 0xc188, "Samsung Galaxy Book Flex (NT950QCT-A38A)", ALC298_FIXUP_SAMSUNG_AMP), 7420 7401 SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Book Flex (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_AMP), 7421 7402 SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_AMP), 7403 + SND_PCI_QUIRK(0x144d, 0xc1ac, "Samsung Galaxy Book2 Pro 360 (NP950QED)", ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS), 7422 7404 SND_PCI_QUIRK(0x144d, 0xc1a3, "Samsung Galaxy Book Pro (NP935XDB-KC1SE)", ALC298_FIXUP_SAMSUNG_AMP), 7423 7405 SND_PCI_QUIRK(0x144d, 0xc1a4, "Samsung Galaxy Book Pro 360 (NT935QBD)", ALC298_FIXUP_SAMSUNG_AMP), 7424 7406 SND_PCI_QUIRK(0x144d, 0xc1a6, "Samsung Galaxy Book Pro 360 (NP930QBD)", ALC298_FIXUP_SAMSUNG_AMP), ··· 7615 7595 SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 7616 7596 SND_PCI_QUIRK(0x17aa, 0x383d, "Legion Y9000X 2019", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS), 7617 7597 SND_PCI_QUIRK(0x17aa, 0x3843, "Lenovo Yoga 9i / Yoga Book 9i", ALC287_FIXUP_LENOVO_YOGA_BOOK_9I), 7598 + /* Yoga Pro 7 14IMH9 shares PCI SSID 17aa:3847 with Legion 7 16ACHG6; 7599 + * use codec SSID to distinguish them 7600 + */ 7601 + HDA_CODEC_QUIRK(0x17aa, 0x38cf, "Lenovo Yoga Pro 7 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN), 7618 7602 SND_PCI_QUIRK(0x17aa, 0x3847, "Legion 7 16ACHG6", ALC287_FIXUP_LEGION_16ACHG6), 7619 7603 SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 7620 7604 SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), ··· 7681 7657 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 7682 7658 SND_PCI_QUIRK(0x17aa, 0x390d, "Lenovo Yoga Pro 7 14ASP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 7683 7659 SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC), 7660 + SND_PCI_QUIRK(0x17aa, 0x391a, "Lenovo Yoga Slim 7 14AKP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 7684 7661 SND_PCI_QUIRK(0x17aa, 0x391f, "Yoga S990-16 pro Quad YC Quad", ALC287_FIXUP_TXNW2781_I2C), 7685 7662 SND_PCI_QUIRK(0x17aa, 0x3920, "Yoga S990-16 pro Quad VECO Quad", ALC287_FIXUP_TXNW2781_I2C), 7686 7663 SND_PCI_QUIRK(0x17aa, 0x3929, "Thinkbook 13x Gen 5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), ··· 7775 7750 SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 7776 7751 SND_PCI_QUIRK(0xf111, 0x000b, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 7777 7752 SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 7753 + SND_PCI_QUIRK(0xf111, 0x000f, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 7778 7754 
7779 7755 #if 0 7780 7756 /* Below is a quirk table taken from the old code.
+13
sound/hda/controllers/intel.c
··· 2085 2085 {} 2086 2086 }; 2087 2087 2088 + static struct pci_device_id driver_denylist_msi_x870e[] = { 2089 + { PCI_DEVICE_SUB(0x1022, 0x15e3, 0x1462, 0xee59) }, /* MSI X870E Tomahawk WiFi */ 2090 + {} 2091 + }; 2092 + 2088 2093 /* DMI-based denylist, to be used when: 2089 2094 * - PCI subsystem IDs are zero, impossible to distinguish from valid sound cards. 2090 2095 * - Different modifications of the same laptop use different GPU models. ··· 2102 2097 DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"), 2103 2098 }, 2104 2099 .driver_data = &driver_denylist_ideapad_z570, 2100 + }, 2101 + { 2102 + /* PCI device matching alone incorrectly matches some laptops */ 2103 + .matches = { 2104 + DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."), 2105 + DMI_MATCH(DMI_BOARD_NAME, "MAG X870E TOMAHAWK WIFI (MS-7E59)"), 2106 + }, 2107 + .driver_data = &driver_denylist_msi_x870e, 2105 2108 }, 2106 2109 {} 2107 2110 };
+7 -3
sound/pci/ctxfi/ctatc.c
··· 1427 1427 daio_mgr = (struct daio_mgr *)atc->rsc_mgrs[DAIO]; 1428 1428 da_desc.msr = atc->msr; 1429 1429 for (i = 0; i < NUM_DAIOTYP; i++) { 1430 - if (((i == MIC) && !cap.dedicated_mic) || ((i == RCA) && !cap.dedicated_rca)) 1430 + if (((i == MIC) && !cap.dedicated_mic) || 1431 + ((i == RCA) && !cap.dedicated_rca) || 1432 + i == SPDIFI1) 1431 1433 continue; 1432 - da_desc.type = (atc->model != CTSB073X) ? i : 1433 - ((i == SPDIFIO) ? SPDIFI1 : i); 1434 + if (atc->model == CTSB073X && i == SPDIFIO) 1435 + da_desc.type = SPDIFI1; 1436 + else 1437 + da_desc.type = i; 1434 1438 da_desc.output = (i < LINEIM) || (i == RCA); 1435 1439 err = daio_mgr->get_daio(daio_mgr, &da_desc, 1436 1440 (struct daio **)&atc->daios[i]);
+51 -31
sound/pci/ctxfi/ctdaio.c
··· 99 99 .output_slot = daio_index, 100 100 }; 101 101 102 - static unsigned int daio_device_index(enum DAIOTYP type, struct hw *hw) 102 + static int daio_device_index(enum DAIOTYP type, struct hw *hw) 103 103 { 104 104 switch (hw->chip_type) { 105 105 case ATC20K1: ··· 112 112 case LINEO3: return 5; 113 113 case LINEO4: return 6; 114 114 case LINEIM: return 7; 115 - default: return -EINVAL; 115 + default: 116 + pr_err("ctxfi: Invalid type %d for hw20k1\n", type); 117 + return -EINVAL; 116 118 } 117 119 case ATC20K2: 118 120 switch (type) { 119 121 case SPDIFOO: return 0; 120 122 case SPDIFIO: return 0; 123 + case SPDIFI1: return 1; 121 124 case LINEO1: return 4; 122 125 case LINEO2: return 7; 123 126 case LINEO3: return 5; ··· 128 125 case LINEIM: return 4; 129 126 case MIC: return 5; 130 127 case RCA: return 3; 131 - default: return -EINVAL; 128 + default: 129 + pr_err("ctxfi: Invalid type %d for hw20k2\n", type); 130 + return -EINVAL; 132 131 } 133 132 default: 133 + pr_err("ctxfi: Invalid chip type %d\n", hw->chip_type); 134 134 return -EINVAL; 135 135 } 136 136 } ··· 154 148 155 149 static int dao_commit_write(struct dao *dao) 156 150 { 157 - dao->hw->dao_commit_write(dao->hw, 158 - daio_device_index(dao->daio.type, dao->hw), dao->ctrl_blk); 151 + int idx = daio_device_index(dao->daio.type, dao->hw); 152 + 153 + if (idx < 0) 154 + return idx; 155 + dao->hw->dao_commit_write(dao->hw, idx, dao->ctrl_blk); 159 156 return 0; 160 157 } 161 158 ··· 296 287 297 288 static int dai_commit_write(struct dai *dai) 298 289 { 299 - dai->hw->dai_commit_write(dai->hw, 300 - daio_device_index(dai->daio.type, dai->hw), dai->ctrl_blk); 290 + int idx = daio_device_index(dai->daio.type, dai->hw); 291 + 292 + if (idx < 0) 293 + return idx; 294 + dai->hw->dai_commit_write(dai->hw, idx, dai->ctrl_blk); 301 295 return 0; 302 296 } 303 297 ··· 379 367 { 380 368 struct hw *hw = mgr->mgr.hw; 381 369 unsigned int conf; 382 - int err; 370 + int idx, err; 383 371 384 372 err = daio_rsc_init(&dao->daio, desc, mgr->mgr.hw); 385 373 if (err) ··· 398 386 if (err) 399 387 goto error2; 400 388 401 - hw->daio_mgr_dsb_dao(mgr->mgr.ctrl_blk, 402 - daio_device_index(dao->daio.type, hw)); 389 + idx = daio_device_index(dao->daio.type, hw); 390 + if (idx < 0) { 391 + err = idx; 392 + goto error2; 393 + } 394 + 395 + hw->daio_mgr_dsb_dao(mgr->mgr.ctrl_blk, idx); 403 396 hw->daio_mgr_commit_write(hw, mgr->mgr.ctrl_blk); 404 397 405 398 conf = (desc->msr & 0x7) | (desc->passthru << 3); 406 - hw->daio_mgr_dao_init(hw, mgr->mgr.ctrl_blk, 407 - daio_device_index(dao->daio.type, hw), conf); 408 - hw->daio_mgr_enb_dao(mgr->mgr.ctrl_blk, 409 - daio_device_index(dao->daio.type, hw)); 399 + hw->daio_mgr_dao_init(hw, mgr->mgr.ctrl_blk, idx, conf); 400 + hw->daio_mgr_enb_dao(mgr->mgr.ctrl_blk, idx); 410 401 hw->daio_mgr_commit_write(hw, mgr->mgr.ctrl_blk); 411 402 412 403 return 0; ··· 458 443 const struct daio_desc *desc, 459 444 struct daio_mgr *mgr) 460 445 { 461 - int err; 446 + int idx, err; 462 447 struct hw *hw = mgr->mgr.hw; 463 448 unsigned int rsr, msr; 464 449 ··· 472 457 if (err) 473 458 goto error1; 474 459 460 + idx = daio_device_index(dai->daio.type, dai->hw); 461 + if (idx < 0) { 462 + err = idx; 463 + goto error1; 464 + } 465 + 475 466 for (rsr = 0, msr = desc->msr; msr > 1; msr >>= 1) 476 467 rsr++; 477 468 ··· 486 465 /* default to disabling control of a SRC */ 487 466 hw->dai_srt_set_ec(dai->ctrl_blk, 0); 488 467 hw->dai_srt_set_et(dai->ctrl_blk, 0); /* default to disabling SRT */ 489 - hw->dai_commit_write(hw, 490 - 
daio_device_index(dai->daio.type, dai->hw), dai->ctrl_blk); 468 + hw->dai_commit_write(hw, idx, dai->ctrl_blk); 491 469 492 470 return 0; 493 471 ··· 601 581 static int daio_mgr_enb_daio(struct daio_mgr *mgr, struct daio *daio) 602 582 { 603 583 struct hw *hw = mgr->mgr.hw; 584 + int idx = daio_device_index(daio->type, hw); 604 585 605 - if (daio->output) { 606 - hw->daio_mgr_enb_dao(mgr->mgr.ctrl_blk, 607 - daio_device_index(daio->type, hw)); 608 - } else { 609 - hw->daio_mgr_enb_dai(mgr->mgr.ctrl_blk, 610 - daio_device_index(daio->type, hw)); 611 - } 586 + if (idx < 0) 587 + return idx; 588 + if (daio->output) 589 + hw->daio_mgr_enb_dao(mgr->mgr.ctrl_blk, idx); 590 + else 591 + hw->daio_mgr_enb_dai(mgr->mgr.ctrl_blk, idx); 612 592 return 0; 613 593 } 614 594 615 595 static int daio_mgr_dsb_daio(struct daio_mgr *mgr, struct daio *daio) 616 596 { 617 597 struct hw *hw = mgr->mgr.hw; 598 + int idx = daio_device_index(daio->type, hw); 618 599 619 - if (daio->output) { 620 - hw->daio_mgr_dsb_dao(mgr->mgr.ctrl_blk, 621 - daio_device_index(daio->type, hw)); 622 - } else { 623 - hw->daio_mgr_dsb_dai(mgr->mgr.ctrl_blk, 624 - daio_device_index(daio->type, hw)); 625 - } 600 + if (idx < 0) 601 + return idx; 602 + if (daio->output) 603 + hw->daio_mgr_dsb_dao(mgr->mgr.ctrl_blk, idx); 604 + else 605 + hw->daio_mgr_dsb_dai(mgr->mgr.ctrl_blk, idx); 626 606 return 0; 627 607 } 628 608
+1 -1
sound/soc/amd/ps/pci-ps.c
··· 339 339 mach->mach_params.subsystem_device = acp_data->subsystem_device; 340 340 mach->mach_params.subsystem_id_set = true; 341 341 342 - dev_dbg(dev, "SSID %x%x\n", mach->mach_params.subsystem_vendor, 342 + dev_dbg(dev, "SSID %x%04x\n", mach->mach_params.subsystem_vendor, 343 343 mach->mach_params.subsystem_device); 344 344 return mach; 345 345 }
+14
sound/soc/amd/yc/acp6x-mach.c
··· 48 48 { 49 49 .driver_data = &acp6x_card, 50 50 .matches = { 51 + DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 52 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Laptop 15-fc0xxx"), 53 + } 54 + }, 55 + { 56 + .driver_data = &acp6x_card, 57 + .matches = { 51 58 DMI_MATCH(DMI_BOARD_VENDOR, "Dell Inc."), 52 59 DMI_MATCH(DMI_PRODUCT_NAME, "Dell G15 5525"), 53 60 } ··· 736 729 .matches = { 737 730 DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."), 738 731 DMI_MATCH(DMI_PRODUCT_NAME, "Thin A15 B7VE"), 732 + } 733 + }, 734 + { 735 + .driver_data = &acp6x_card, 736 + .matches = { 737 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 738 + DMI_MATCH(DMI_PRODUCT_NAME, "M7601RM"), 739 739 } 740 740 }, 741 741 {}
+24 -10
sound/soc/cirrus/ep93xx-i2s.c
··· 91 91 return __raw_readl(info->regs + reg); 92 92 } 93 93 94 - static void ep93xx_i2s_enable(struct ep93xx_i2s_info *info, int stream) 94 + static int ep93xx_i2s_enable(struct ep93xx_i2s_info *info, int stream) 95 95 { 96 96 unsigned base_reg; 97 + int err; 97 98 98 99 if ((ep93xx_i2s_read_reg(info, EP93XX_I2S_TX0EN) & 0x1) == 0 && 99 100 (ep93xx_i2s_read_reg(info, EP93XX_I2S_RX0EN) & 0x1) == 0) { 100 101 /* Enable clocks */ 101 - clk_prepare_enable(info->mclk); 102 - clk_prepare_enable(info->sclk); 103 - clk_prepare_enable(info->lrclk); 102 + err = clk_prepare_enable(info->mclk); 103 + if (err) 104 + return err; 105 + err = clk_prepare_enable(info->sclk); 106 + if (err) { 107 + clk_disable_unprepare(info->mclk); 108 + return err; 109 + } 110 + err = clk_prepare_enable(info->lrclk); 111 + if (err) { 112 + clk_disable_unprepare(info->sclk); 113 + clk_disable_unprepare(info->mclk); 114 + return err; 115 + } 104 116 105 117 /* Enable i2s */ 106 118 ep93xx_i2s_write_reg(info, EP93XX_I2S_GLCTRL, 1); ··· 131 119 ep93xx_i2s_write_reg(info, EP93XX_I2S_TXCTRL, 132 120 EP93XX_I2S_TXCTRL_TXEMPTY_LVL | 133 121 EP93XX_I2S_TXCTRL_TXUFIE); 122 + 123 + return 0; 134 124 } 135 125 136 126 static void ep93xx_i2s_disable(struct ep93xx_i2s_info *info, int stream) ··· 209 195 { 210 196 struct ep93xx_i2s_info *info = snd_soc_dai_get_drvdata(dai); 211 197 212 - ep93xx_i2s_enable(info, substream->stream); 213 - 214 - return 0; 198 + return ep93xx_i2s_enable(info, substream->stream); 215 199 } 216 200 217 201 static void ep93xx_i2s_shutdown(struct snd_pcm_substream *substream, ··· 385 373 static int ep93xx_i2s_resume(struct snd_soc_component *component) 386 374 { 387 375 struct ep93xx_i2s_info *info = snd_soc_component_get_drvdata(component); 376 + int err; 388 377 389 378 if (!snd_soc_component_active(component)) 390 379 return 0; 391 380 392 - ep93xx_i2s_enable(info, SNDRV_PCM_STREAM_PLAYBACK); 393 - ep93xx_i2s_enable(info, SNDRV_PCM_STREAM_CAPTURE); 381 + err = ep93xx_i2s_enable(info, SNDRV_PCM_STREAM_PLAYBACK); 382 + if (err) 383 + return err; 394 384 395 - return 0; 385 + return ep93xx_i2s_enable(info, SNDRV_PCM_STREAM_CAPTURE); 396 386 } 397 387 #else 398 388 #define ep93xx_i2s_suspend NULL
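ep93xx_i2s_enable() now checks each clk_prepare_enable() and unwinds already-enabled clocks in reverse order before propagating the error through startup and resume. The two-clock version of that pattern, as a sketch with illustrative names:

#include <linux/clk.h>

static int demo_enable_clk_pair(struct clk *first, struct clk *second)
{
        int err;

        err = clk_prepare_enable(first);
        if (err)
                return err;

        err = clk_prepare_enable(second);
        if (err)
                clk_disable_unprepare(first);   /* undo what succeeded */

        return err;
}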
-2
sound/soc/intel/boards/Kconfig
··· 530 530 select SND_SOC_CS42L43_SDW 531 531 select MFD_CS42L43 532 532 select MFD_CS42L43_SDW 533 - select PINCTRL_CS42L43 534 - select SPI_CS42L43 535 533 select SND_SOC_CS35L56_SPI 536 534 select SND_SOC_CS35L56_SDW 537 535 select SND_SOC_DMIC
+1 -1
sound/soc/intel/boards/ehl_rt5660.c
··· 127 127 params_rate(params) * 50, 128 128 params_rate(params) * 512); 129 129 if (ret < 0) 130 - dev_err(codec_dai->dev, "can't set codec pll: %d\n", ret); 130 + dev_err(rtd->dev, "can't set codec pll: %d\n", ret); 131 131 132 132 return ret; 133 133 }
+1
sound/soc/soc-core.c
··· 2859 2859 INIT_LIST_HEAD(&component->dobj_list); 2860 2860 INIT_LIST_HEAD(&component->card_list); 2861 2861 INIT_LIST_HEAD(&component->list); 2862 + INIT_LIST_HEAD(&component->card_aux_list); 2862 2863 mutex_init(&component->io_mutex); 2863 2864 2864 2865 if (!component->name) {
+1 -1
sound/usb/caiaq/device.c
··· 488 488 memset(id, 0, sizeof(id)); 489 489 490 490 for (c = card->shortname, len = 0; 491 - *c && len < sizeof(card->id); c++) 491 + *c && len < sizeof(card->id) - 1; c++) 492 492 if (*c != ' ') 493 493 id[len++] = *c; 494 494
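The caiaq fix above is a textbook off-by-one: looping while len < sizeof(card->id) can fill every byte of the destination, leaving no room for a terminating NUL once the buffer is exactly full. A standalone variant that terminates explicitly instead of relying on a pre-zeroed buffer:

#include <stddef.h>

/* Copy src into dst (capacity dstsz), dropping spaces and always
 * reserving the final byte for the NUL terminator. */
static void demo_copy_id(char *dst, size_t dstsz, const char *src)
{
        size_t len = 0;

        if (!dstsz)
                return;

        for (; *src && len < dstsz - 1; src++)
                if (*src != ' ')
                        dst[len++] = *src;
        dst[len] = '\0';
}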
+9 -1
sound/usb/qcom/qc_audio_offload.c
··· 699 699 uaudio_iommu_unmap(MEM_EVENT_RING, IOVA_BASE, PAGE_SIZE, 700 700 PAGE_SIZE); 701 701 xhci_sideband_remove_interrupter(uadev[dev->chip->card->number].sb); 702 + usb_offload_put(dev->udev); 702 703 } 703 704 } 704 705 ··· 1183 1182 dma_coherent = dev_is_dma_coherent(subs->dev->bus->sysdev); 1184 1183 er_pa = 0; 1185 1184 1185 + ret = usb_offload_get(subs->dev); 1186 + if (ret < 0) 1187 + goto exit; 1188 + 1186 1189 /* event ring */ 1187 1190 ret = xhci_sideband_create_interrupter(uadev[card_num].sb, 1, false, 1188 1191 0, uaudio_qdev->data->intr_num); 1189 1192 if (ret < 0) { 1190 1193 dev_err(&subs->dev->dev, "failed to fetch interrupter\n"); 1191 - goto exit; 1194 + goto put_offload; 1192 1195 } 1193 1196 1194 1197 sgt = xhci_sideband_get_event_buffer(uadev[card_num].sb); ··· 1224 1219 mem_info->dma = 0; 1225 1220 remove_interrupter: 1226 1221 xhci_sideband_remove_interrupter(uadev[card_num].sb); 1222 + put_offload: 1223 + usb_offload_put(subs->dev); 1227 1224 exit: 1228 1225 return ret; 1229 1226 } ··· 1489 1482 uaudio_iommu_unmap(MEM_EVENT_RING, IOVA_BASE, PAGE_SIZE, PAGE_SIZE); 1490 1483 free_sec_ring: 1491 1484 xhci_sideband_remove_interrupter(uadev[card_num].sb); 1485 + usb_offload_put(subs->dev); 1492 1486 drop_sync_ep: 1493 1487 if (subs->sync_endpoint) { 1494 1488 uaudio_iommu_unmap(MEM_XFER_RING,
+4
sound/usb/quirks.c
··· 2305 2305 QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB), 2306 2306 DEVICE_FLG(0x13e5, 0x0001, /* Serato Phono */ 2307 2307 QUIRK_FLAG_IGNORE_CTL_ERROR), 2308 + DEVICE_FLG(0x152a, 0x880a, /* NeuralDSP Quad Cortex */ 2309 + 0), /* Doesn't have the vendor quirk which would otherwise apply */ 2308 2310 DEVICE_FLG(0x154e, 0x1002, /* Denon DCD-1500RE */ 2309 2311 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY), 2310 2312 DEVICE_FLG(0x154e, 0x1003, /* Denon DA-300USB */ ··· 2435 2433 QUIRK_FLAG_VALIDATE_RATES), 2436 2434 DEVICE_FLG(0x1235, 0x8006, 0), /* Focusrite Scarlett 2i2 1st Gen */ 2437 2435 DEVICE_FLG(0x1235, 0x800a, 0), /* Focusrite Scarlett 2i4 1st Gen */ 2436 + DEVICE_FLG(0x1235, 0x8016, 0), /* Focusrite Scarlett 2i2 1st Gen */ 2437 + DEVICE_FLG(0x1235, 0x801c, 0), /* Focusrite Scarlett Solo 1st Gen */ 2438 2438 VENDOR_FLG(0x1235, /* Focusrite Novation */ 2439 2439 QUIRK_FLAG_SKIP_CLOCK_SELECTOR | 2440 2440 QUIRK_FLAG_SKIP_IFACE_SETUP),
+341
tools/testing/selftests/bpf/progs/verifier_precision.c
··· 5 5 #include "../../../include/linux/filter.h" 6 6 #include "bpf_misc.h" 7 7 8 + struct { 9 + __uint(type, BPF_MAP_TYPE_ARRAY); 10 + __uint(max_entries, 1); 11 + __type(key, __u32); 12 + __type(value, __u64); 13 + } precision_map SEC(".maps"); 14 + 8 15 SEC("?raw_tp") 9 16 __success __log_level(2) 10 17 __msg("mark_precise: frame0: regs=r2 stack= before 3: (bf) r1 = r10") ··· 306 299 "r0 = -r0;" 307 300 "exit;" 308 301 ::: __clobber_all); 302 + } 303 + 304 + SEC("?raw_tp") 305 + __success __log_level(2) 306 + __msg("mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10") 307 + __msg("mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)") 308 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0") 309 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 310 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 311 + __naked int bpf_atomic_fetch_add_precision(void) 312 + { 313 + asm volatile ( 314 + "r1 = 8;" 315 + "*(u64 *)(r10 - 8) = r1;" 316 + "r2 = 0;" 317 + ".8byte %[fetch_add_insn];" /* r2 = atomic_fetch_add(*(u64 *)(r10 - 8), r2) */ 318 + "r3 = r10;" 319 + "r3 += r2;" /* mark_precise */ 320 + "r0 = 0;" 321 + "exit;" 322 + : 323 + : __imm_insn(fetch_add_insn, 324 + BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_2, -8)) 325 + : __clobber_all); 326 + } 327 + 328 + SEC("?raw_tp") 329 + __success __log_level(2) 330 + __msg("mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10") 331 + __msg("mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_xchg((u64 *)(r10 -8), r2)") 332 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0") 333 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 334 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 335 + __naked int bpf_atomic_xchg_precision(void) 336 + { 337 + asm volatile ( 338 + "r1 = 8;" 339 + "*(u64 *)(r10 - 8) = r1;" 340 + "r2 = 0;" 341 + ".8byte %[xchg_insn];" /* r2 = atomic_xchg(*(u64 *)(r10 - 8), r2) */ 342 + "r3 = r10;" 343 + "r3 += r2;" /* mark_precise */ 344 + "r0 = 0;" 345 + "exit;" 346 + : 347 + : __imm_insn(xchg_insn, 348 + BPF_ATOMIC_OP(BPF_DW, BPF_XCHG, BPF_REG_10, BPF_REG_2, -8)) 349 + : __clobber_all); 350 + } 351 + 352 + SEC("?raw_tp") 353 + __success __log_level(2) 354 + __msg("mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10") 355 + __msg("mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_or((u64 *)(r10 -8), r2)") 356 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0") 357 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 358 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 359 + __naked int bpf_atomic_fetch_or_precision(void) 360 + { 361 + asm volatile ( 362 + "r1 = 8;" 363 + "*(u64 *)(r10 - 8) = r1;" 364 + "r2 = 0;" 365 + ".8byte %[fetch_or_insn];" /* r2 = atomic_fetch_or(*(u64 *)(r10 - 8), r2) */ 366 + "r3 = r10;" 367 + "r3 += r2;" /* mark_precise */ 368 + "r0 = 0;" 369 + "exit;" 370 + : 371 + : __imm_insn(fetch_or_insn, 372 + BPF_ATOMIC_OP(BPF_DW, BPF_OR | BPF_FETCH, BPF_REG_10, BPF_REG_2, -8)) 373 + : __clobber_all); 374 + } 375 + 376 + SEC("?raw_tp") 377 + __success __log_level(2) 378 + __msg("mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10") 379 + __msg("mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_and((u64 *)(r10 -8), r2)") 380 + __msg("mark_precise: frame0: 
regs= stack=-8 before 2: (b7) r2 = 0") 381 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 382 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 383 + __naked int bpf_atomic_fetch_and_precision(void) 384 + { 385 + asm volatile ( 386 + "r1 = 8;" 387 + "*(u64 *)(r10 - 8) = r1;" 388 + "r2 = 0;" 389 + ".8byte %[fetch_and_insn];" /* r2 = atomic_fetch_and(*(u64 *)(r10 - 8), r2) */ 390 + "r3 = r10;" 391 + "r3 += r2;" /* mark_precise */ 392 + "r0 = 0;" 393 + "exit;" 394 + : 395 + : __imm_insn(fetch_and_insn, 396 + BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_10, BPF_REG_2, -8)) 397 + : __clobber_all); 398 + } 399 + 400 + SEC("?raw_tp") 401 + __success __log_level(2) 402 + __msg("mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10") 403 + __msg("mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_xor((u64 *)(r10 -8), r2)") 404 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0") 405 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 406 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 407 + __naked int bpf_atomic_fetch_xor_precision(void) 408 + { 409 + asm volatile ( 410 + "r1 = 8;" 411 + "*(u64 *)(r10 - 8) = r1;" 412 + "r2 = 0;" 413 + ".8byte %[fetch_xor_insn];" /* r2 = atomic_fetch_xor(*(u64 *)(r10 - 8), r2) */ 414 + "r3 = r10;" 415 + "r3 += r2;" /* mark_precise */ 416 + "r0 = 0;" 417 + "exit;" 418 + : 419 + : __imm_insn(fetch_xor_insn, 420 + BPF_ATOMIC_OP(BPF_DW, BPF_XOR | BPF_FETCH, BPF_REG_10, BPF_REG_2, -8)) 421 + : __clobber_all); 422 + } 423 + 424 + SEC("?raw_tp") 425 + __success __log_level(2) 426 + __msg("mark_precise: frame0: regs=r0 stack= before 5: (bf) r3 = r10") 427 + __msg("mark_precise: frame0: regs=r0 stack= before 4: (db) r0 = atomic64_cmpxchg((u64 *)(r10 -8), r0, r2)") 428 + __msg("mark_precise: frame0: regs= stack=-8 before 3: (b7) r2 = 0") 429 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r0 = 0") 430 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 431 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 432 + __naked int bpf_atomic_cmpxchg_precision(void) 433 + { 434 + asm volatile ( 435 + "r1 = 8;" 436 + "*(u64 *)(r10 - 8) = r1;" 437 + "r0 = 0;" 438 + "r2 = 0;" 439 + ".8byte %[cmpxchg_insn];" /* r0 = atomic_cmpxchg(*(u64 *)(r10 - 8), r0, r2) */ 440 + "r3 = r10;" 441 + "r3 += r0;" /* mark_precise */ 442 + "r0 = 0;" 443 + "exit;" 444 + : 445 + : __imm_insn(cmpxchg_insn, 446 + BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_2, -8)) 447 + : __clobber_all); 448 + } 449 + 450 + /* Regression test for dual precision: Both the fetched value (r2) and 451 + * a reread of the same stack slot (r3) are tracked for precision. After 452 + * the atomic operation, the stack slot is STACK_MISC. Thus, the ldx at 453 + * insn 4 does NOT set INSN_F_STACK_ACCESS. Precision for the stack slot 454 + * propagates solely through the atomic fetch's load side (insn 3). 
455 + */ 456 + SEC("?raw_tp") 457 + __success __log_level(2) 458 + __msg("mark_precise: frame0: regs=r2,r3 stack= before 4: (79) r3 = *(u64 *)(r10 -8)") 459 + __msg("mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)") 460 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0") 461 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 462 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 463 + __naked int bpf_atomic_fetch_add_dual_precision(void) 464 + { 465 + asm volatile ( 466 + "r1 = 8;" 467 + "*(u64 *)(r10 - 8) = r1;" 468 + "r2 = 0;" 469 + ".8byte %[fetch_add_insn];" /* r2 = atomic_fetch_add(*(u64 *)(r10 - 8), r2) */ 470 + "r3 = *(u64 *)(r10 - 8);" 471 + "r4 = r2;" 472 + "r4 += r3;" 473 + "r4 &= 7;" 474 + "r5 = r10;" 475 + "r5 += r4;" /* mark_precise */ 476 + "r0 = 0;" 477 + "exit;" 478 + : 479 + : __imm_insn(fetch_add_insn, 480 + BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_2, -8)) 481 + : __clobber_all); 482 + } 483 + 484 + SEC("?raw_tp") 485 + __success __log_level(2) 486 + __msg("mark_precise: frame0: regs=r0,r3 stack= before 5: (79) r3 = *(u64 *)(r10 -8)") 487 + __msg("mark_precise: frame0: regs=r0 stack= before 4: (db) r0 = atomic64_cmpxchg((u64 *)(r10 -8), r0, r2)") 488 + __msg("mark_precise: frame0: regs= stack=-8 before 3: (b7) r2 = 0") 489 + __msg("mark_precise: frame0: regs= stack=-8 before 2: (b7) r0 = 8") 490 + __msg("mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1") 491 + __msg("mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8") 492 + __naked int bpf_atomic_cmpxchg_dual_precision(void) 493 + { 494 + asm volatile ( 495 + "r1 = 8;" 496 + "*(u64 *)(r10 - 8) = r1;" 497 + "r0 = 8;" 498 + "r2 = 0;" 499 + ".8byte %[cmpxchg_insn];" /* r0 = atomic_cmpxchg(*(u64 *)(r10 - 8), r0, r2) */ 500 + "r3 = *(u64 *)(r10 - 8);" 501 + "r4 = r0;" 502 + "r4 += r3;" 503 + "r4 &= 7;" 504 + "r5 = r10;" 505 + "r5 += r4;" /* mark_precise */ 506 + "r0 = 0;" 507 + "exit;" 508 + : 509 + : __imm_insn(cmpxchg_insn, 510 + BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_2, -8)) 511 + : __clobber_all); 512 + } 513 + 514 + SEC("?raw_tp") 515 + __success __log_level(2) 516 + __msg("mark_precise: frame0: regs=r1 stack= before 10: (57) r1 &= 7") 517 + __msg("mark_precise: frame0: regs=r1 stack= before 9: (db) r1 = atomic64_fetch_add((u64 *)(r0 +0), r1)") 518 + __not_msg("falling back to forcing all scalars precise") 519 + __naked int bpf_atomic_fetch_add_map_precision(void) 520 + { 521 + asm volatile ( 522 + "r1 = 0;" 523 + "*(u64 *)(r10 - 8) = r1;" 524 + "r2 = r10;" 525 + "r2 += -8;" 526 + "r1 = %[precision_map] ll;" 527 + "call %[bpf_map_lookup_elem];" 528 + "if r0 == 0 goto 1f;" 529 + "r1 = 0;" 530 + ".8byte %[fetch_add_insn];" /* r1 = atomic_fetch_add(*(u64 *)(r0 + 0), r1) */ 531 + "r1 &= 7;" 532 + "r2 = r10;" 533 + "r2 += r1;" /* mark_precise */ 534 + "1: r0 = 0;" 535 + "exit;" 536 + : 537 + : __imm_addr(precision_map), 538 + __imm(bpf_map_lookup_elem), 539 + __imm_insn(fetch_add_insn, 540 + BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_0, BPF_REG_1, 0)) 541 + : __clobber_all); 542 + } 543 + 544 + SEC("?raw_tp") 545 + __success __log_level(2) 546 + __msg("mark_precise: frame0: regs=r0 stack= before 12: (57) r0 &= 7") 547 + __msg("mark_precise: frame0: regs=r0 stack= before 11: (db) r0 = atomic64_cmpxchg((u64 *)(r6 +0), r0, r1)") 548 + __not_msg("falling back to forcing all scalars precise") 549 + __naked int 
bpf_atomic_cmpxchg_map_precision(void) 550 + { 551 + asm volatile ( 552 + "r1 = 0;" 553 + "*(u64 *)(r10 - 8) = r1;" 554 + "r2 = r10;" 555 + "r2 += -8;" 556 + "r1 = %[precision_map] ll;" 557 + "call %[bpf_map_lookup_elem];" 558 + "if r0 == 0 goto 1f;" 559 + "r6 = r0;" 560 + "r0 = 0;" 561 + "r1 = 0;" 562 + ".8byte %[cmpxchg_insn];" /* r0 = atomic_cmpxchg(*(u64 *)(r6 + 0), r0, r1) */ 563 + "r0 &= 7;" 564 + "r2 = r10;" 565 + "r2 += r0;" /* mark_precise */ 566 + "1: r0 = 0;" 567 + "exit;" 568 + : 569 + : __imm_addr(precision_map), 570 + __imm(bpf_map_lookup_elem), 571 + __imm_insn(cmpxchg_insn, 572 + BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_6, BPF_REG_1, 0)) 573 + : __clobber_all); 574 + } 575 + 576 + SEC("?raw_tp") 577 + __success __log_level(2) 578 + __msg("mark_precise: frame0: regs=r1 stack= before 10: (57) r1 &= 7") 579 + __msg("mark_precise: frame0: regs=r1 stack= before 9: (c3) r1 = atomic_fetch_add((u32 *)(r0 +0), r1)") 580 + __not_msg("falling back to forcing all scalars precise") 581 + __naked int bpf_atomic_fetch_add_32bit_precision(void) 582 + { 583 + asm volatile ( 584 + "r1 = 0;" 585 + "*(u64 *)(r10 - 8) = r1;" 586 + "r2 = r10;" 587 + "r2 += -8;" 588 + "r1 = %[precision_map] ll;" 589 + "call %[bpf_map_lookup_elem];" 590 + "if r0 == 0 goto 1f;" 591 + "r1 = 0;" 592 + ".8byte %[fetch_add_insn];" /* r1 = atomic_fetch_add(*(u32 *)(r0 + 0), r1) */ 593 + "r1 &= 7;" 594 + "r2 = r10;" 595 + "r2 += r1;" /* mark_precise */ 596 + "1: r0 = 0;" 597 + "exit;" 598 + : 599 + : __imm_addr(precision_map), 600 + __imm(bpf_map_lookup_elem), 601 + __imm_insn(fetch_add_insn, 602 + BPF_ATOMIC_OP(BPF_W, BPF_ADD | BPF_FETCH, BPF_REG_0, BPF_REG_1, 0)) 603 + : __clobber_all); 604 + } 605 + 606 + SEC("?raw_tp") 607 + __success __log_level(2) 608 + __msg("mark_precise: frame0: regs=r0 stack= before 12: (57) r0 &= 7") 609 + __msg("mark_precise: frame0: regs=r0 stack= before 11: (c3) r0 = atomic_cmpxchg((u32 *)(r6 +0), r0, r1)") 610 + __not_msg("falling back to forcing all scalars precise") 611 + __naked int bpf_atomic_cmpxchg_32bit_precision(void) 612 + { 613 + asm volatile ( 614 + "r1 = 0;" 615 + "*(u64 *)(r10 - 8) = r1;" 616 + "r2 = r10;" 617 + "r2 += -8;" 618 + "r1 = %[precision_map] ll;" 619 + "call %[bpf_map_lookup_elem];" 620 + "if r0 == 0 goto 1f;" 621 + "r6 = r0;" 622 + "r0 = 0;" 623 + "r1 = 0;" 624 + ".8byte %[cmpxchg_insn];" /* r0 = atomic_cmpxchg(*(u32 *)(r6 + 0), r0, r1) */ 625 + "r0 &= 7;" 626 + "r2 = r10;" 627 + "r2 += r0;" /* mark_precise */ 628 + "1: r0 = 0;" 629 + "exit;" 630 + : 631 + : __imm_addr(precision_map), 632 + __imm(bpf_map_lookup_elem), 633 + __imm_insn(cmpxchg_insn, 634 + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_6, BPF_REG_1, 0)) 635 + : __clobber_all); 309 636 } 310 637 311 638 char _license[] SEC("license") = "GPL";
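The precision tests above splice raw atomic instructions with ".8byte %[insn]" because BPF inline assembly lacks mnemonics for some atomic forms; __imm_insn() substitutes the encoding built by BPF_ATOMIC_OP(). As a reference sketch, the macro (as defined in tools/include/linux/filter.h) packs a single struct bpf_insn:

	/* Encode one BPF atomic instruction. SIZE is the operand width
	 * (BPF_W or BPF_DW), OP the atomic opcode (optionally | BPF_FETCH,
	 * or BPF_CMPXCHG / BPF_XCHG), DST the memory base register, SRC
	 * the operand register, and OFF the displacement from DST.
	 */
	#define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF)			\
		((struct bpf_insn) {					\
			.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
			.dst_reg = DST,					\
			.src_reg = SRC,					\
			.off   = OFF,					\
			.imm   = OP })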
+15
tools/testing/selftests/cgroup/lib/cgroup_util.c
··· 123 123 return ret; 124 124 } 125 125 126 + int cg_read_strcmp_wait(const char *cgroup, const char *control, 127 + const char *expected) 128 + { 129 + int i, ret; 130 + 131 + for (i = 0; i < 100; i++) { 132 + ret = cg_read_strcmp(cgroup, control, expected); 133 + if (!ret) 134 + return ret; 135 + usleep(10000); 136 + } 137 + 138 + return ret; 139 + } 140 + 126 141 int cg_read_strstr(const char *cgroup, const char *control, const char *needle) 127 142 { 128 143 char buf[PAGE_SIZE];
+2
tools/testing/selftests/cgroup/lib/include/cgroup_util.h
··· 61 61 char *buf, size_t len); 62 62 extern int cg_read_strcmp(const char *cgroup, const char *control, 63 63 const char *expected); 64 + extern int cg_read_strcmp_wait(const char *cgroup, const char *control, 65 + const char *expected); 64 66 extern int cg_read_strstr(const char *cgroup, const char *control, 65 67 const char *needle); 66 68 extern long cg_read_long(const char *cgroup, const char *control);
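cgroup.events reports "populated 0" only after the last member task has been fully reaped, which can race with a single read taken right after wait_for_pid(); cg_read_strcmp_wait() instead polls up to 100 times at 10 ms intervals, i.e. for roughly one second overall. A minimal usage sketch, mirroring the call sites converted below:

	/* Sketch: wait up to ~1s (100 polls x 10 ms) for the cgroup to
	 * report itself empty; a non-zero return means it never did
	 * within the window.
	 */
	if (cg_read_strcmp_wait(cgroup, "cgroup.events", "populated 0\n"))
		ret = KSFT_FAIL;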
+2 -1
tools/testing/selftests/cgroup/test_core.c
··· 233 233 if (err) 234 234 goto cleanup; 235 235 236 - if (cg_read_strcmp(cg_test_d, "cgroup.events", "populated 0\n")) 236 + if (cg_read_strcmp_wait(cg_test_d, "cgroup.events", 237 + "populated 0\n")) 237 238 goto cleanup; 238 239 239 240 /* Remove cgroup. */
+4 -3
tools/testing/selftests/cgroup/test_kill.c
··· 86 86 wait_for_pid(pids[i]); 87 87 88 88 if (ret == KSFT_PASS && 89 - cg_read_strcmp(cgroup, "cgroup.events", "populated 0\n")) 89 + cg_read_strcmp_wait(cgroup, "cgroup.events", "populated 0\n")) 90 90 ret = KSFT_FAIL; 91 91 92 92 if (cgroup) ··· 190 190 wait_for_pid(pids[i]); 191 191 192 192 if (ret == KSFT_PASS && 193 - cg_read_strcmp(cgroup[0], "cgroup.events", "populated 0\n")) 193 + cg_read_strcmp_wait(cgroup[0], "cgroup.events", 194 + "populated 0\n")) 194 195 ret = KSFT_FAIL; 195 196 196 197 for (i = 9; i >= 0 && cgroup[i]; i--) { ··· 252 251 wait_for_pid(pid); 253 252 254 253 if (ret == KSFT_PASS && 255 - cg_read_strcmp(cgroup, "cgroup.events", "populated 0\n")) 254 + cg_read_strcmp_wait(cgroup, "cgroup.events", "populated 0\n")) 256 255 ret = KSFT_FAIL; 257 256 258 257 if (cgroup)
+11 -8
tools/testing/selftests/riscv/vector/validate_v_ptrace.c
··· 290 290 291 291 /* verify initial vsetvli settings */ 292 292 293 - if (is_xtheadvector_supported()) 293 + if (is_xtheadvector_supported()) { 294 294 EXPECT_EQ(5UL, regset_data->vtype); 295 - else 295 + } else { 296 296 EXPECT_EQ(9UL, regset_data->vtype); 297 + } 297 298 298 299 EXPECT_EQ(regset_data->vlenb, regset_data->vl); 299 300 EXPECT_EQ(vlenb, regset_data->vlenb); ··· 347 346 { 348 347 } 349 348 350 - #define VECTOR_1_0 BIT(0) 351 - #define XTHEAD_VECTOR_0_7 BIT(1) 349 + #define VECTOR_1_0 _BITUL(0) 350 + #define XTHEAD_VECTOR_0_7 _BITUL(1) 352 351 353 352 #define vector_test(x) ((x) & VECTOR_1_0) 354 353 #define xthead_test(x) ((x) & XTHEAD_VECTOR_0_7) ··· 620 619 621 620 /* verify initial vsetvli settings */ 622 621 623 - if (is_xtheadvector_supported()) 622 + if (is_xtheadvector_supported()) { 624 623 EXPECT_EQ(5UL, regset_data->vtype); 625 - else 624 + } else { 626 625 EXPECT_EQ(9UL, regset_data->vtype); 626 + } 627 627 628 628 EXPECT_EQ(regset_data->vlenb, regset_data->vl); 629 629 EXPECT_EQ(vlenb, regset_data->vlenb); ··· 829 827 830 828 /* verify initial vsetvli settings */ 831 829 832 - if (is_xtheadvector_supported()) 830 + if (is_xtheadvector_supported()) { 833 831 EXPECT_EQ(5UL, regset_data->vtype); 834 - else 832 + } else { 835 833 EXPECT_EQ(9UL, regset_data->vtype); 834 + } 836 835 837 836 EXPECT_EQ(regset_data->vlenb, regset_data->vl); 838 837 EXPECT_EQ(vlenb, regset_data->vlenb);
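Two notes on the hunk above. The added braces presumably sidestep dangling-else ambiguity, since EXPECT_EQ() expands to a multi-statement construct of its own. The BIT() -> _BITUL() switch matters because BIT() comes from the kernel-internal <linux/bits.h>, which userspace selftests cannot rely on; _BITUL() is the exported equivalent. A sketch of its definition as found in the uapi <linux/const.h>:

	/* From include/uapi/linux/const.h (visible to userspace): */
	#define __AC(X,Y)	(X##Y)
	#define _AC(X,Y)	__AC(X,Y)
	#define _UL(x)		(_AC(x, UL))
	#define _BITUL(x)	(_UL(1) << (x))	/* e.g. _BITUL(1) == 2UL */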
+1
tools/testing/selftests/sched_ext/Makefile
··· 188 188 rt_stall \ 189 189 test_example \ 190 190 total_bw \ 191 + cyclic_kick_wait \ 191 192 192 193 testcase-targets := $(addsuffix .o,$(addprefix $(SCXOBJ_DIR)/,$(auto-test-targets))) 193 194
+68
tools/testing/selftests/sched_ext/cyclic_kick_wait.bpf.c
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Stress concurrent SCX_KICK_WAIT calls to reproduce wait-cycle deadlock. 4 + * 5 + * Three CPUs are designated from userspace. Every enqueue from one of the 6 + * three CPUs kicks the next CPU in the ring with SCX_KICK_WAIT, creating 7 + * persistent A -> B -> C -> A wait-cycle pressure. 8 + */ 9 + #include <scx/common.bpf.h> 10 + 11 + char _license[] SEC("license") = "GPL"; 12 + 13 + const volatile s32 test_cpu_a; 14 + const volatile s32 test_cpu_b; 15 + const volatile s32 test_cpu_c; 16 + 17 + u64 nr_enqueues; 18 + u64 nr_wait_kicks; 19 + 20 + UEI_DEFINE(uei); 21 + 22 + static s32 target_cpu(s32 cpu) 23 + { 24 + if (cpu == test_cpu_a) 25 + return test_cpu_b; 26 + if (cpu == test_cpu_b) 27 + return test_cpu_c; 28 + if (cpu == test_cpu_c) 29 + return test_cpu_a; 30 + return -1; 31 + } 32 + 33 + void BPF_STRUCT_OPS(cyclic_kick_wait_enqueue, struct task_struct *p, 34 + u64 enq_flags) 35 + { 36 + s32 this_cpu = bpf_get_smp_processor_id(); 37 + s32 tgt; 38 + 39 + __sync_fetch_and_add(&nr_enqueues, 1); 40 + 41 + if (p->flags & PF_KTHREAD) { 42 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_INF, 43 + enq_flags | SCX_ENQ_PREEMPT); 44 + return; 45 + } 46 + 47 + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 48 + 49 + tgt = target_cpu(this_cpu); 50 + if (tgt < 0 || tgt == this_cpu) 51 + return; 52 + 53 + __sync_fetch_and_add(&nr_wait_kicks, 1); 54 + scx_bpf_kick_cpu(tgt, SCX_KICK_WAIT); 55 + } 56 + 57 + void BPF_STRUCT_OPS(cyclic_kick_wait_exit, struct scx_exit_info *ei) 58 + { 59 + UEI_RECORD(uei, ei); 60 + } 61 + 62 + SEC(".struct_ops.link") 63 + struct sched_ext_ops cyclic_kick_wait_ops = { 64 + .enqueue = cyclic_kick_wait_enqueue, 65 + .exit = cyclic_kick_wait_exit, 66 + .name = "cyclic_kick_wait", 67 + .timeout_ms = 1000U, 68 + };
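To make the hazard concrete, here is a toy model (not kernel code; sched_entered() is a hypothetical predicate) of why a ring of SCX_KICK_WAIT kicks can deadlock unless the kernel breaks the cycle:

	/* Toy model only -- sched_entered() is made up for illustration.
	 * Each CPU spins until its ring successor re-enters the scheduling
	 * path; if that successor is stuck in the same spin, none advances:
	 *
	 *   CPU A: while (!sched_entered(cpu_b)) cpu_relax();
	 *   CPU B: while (!sched_entered(cpu_c)) cpu_relax();
	 *   CPU C: while (!sched_entered(cpu_a)) cpu_relax();
	 *
	 * The userspace test below passes only if forward progress is
	 * preserved under this sustained cyclic wait pressure.
	 */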
+194
tools/testing/selftests/sched_ext/cyclic_kick_wait.c
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Test SCX_KICK_WAIT forward progress under cyclic wait pressure. 4 + * 5 + * SCX_KICK_WAIT busy-waits until the target CPU enters the scheduling path. 6 + * If multiple CPUs form a wait cycle (A waits for B, B waits for C, C waits 7 + * for A), all CPUs deadlock unless the implementation breaks the cycle. 8 + * 9 + * This test creates that scenario: three CPUs are arranged in a ring. The BPF 10 + * scheduler's ops.enqueue() kicks the next CPU in the ring with SCX_KICK_WAIT 11 + * on every enqueue. Userspace pins 4 worker threads per CPU that loop calling 12 + * sched_yield(), generating a steady stream of enqueues and thus sustained 13 + * A->B->C->A kick_wait cycle pressure. The test passes if the system remains 14 + * responsive for 5 seconds without the scheduler being killed by the watchdog. 15 + */ 16 + #define _GNU_SOURCE 17 + 18 + #include <bpf/bpf.h> 19 + #include <errno.h> 20 + #include <pthread.h> 21 + #include <sched.h> 22 + #include <scx/common.h> 23 + #include <stdint.h> 24 + #include <string.h> 25 + #include <time.h> 26 + #include <unistd.h> 27 + 28 + #include "scx_test.h" 29 + #include "cyclic_kick_wait.bpf.skel.h" 30 + 31 + #define WORKERS_PER_CPU 4 32 + #define NR_TEST_CPUS 3 33 + #define NR_WORKERS (NR_TEST_CPUS * WORKERS_PER_CPU) 34 + 35 + struct worker_ctx { 36 + pthread_t tid; 37 + int cpu; 38 + volatile bool stop; 39 + volatile __u64 iters; 40 + bool started; 41 + }; 42 + 43 + static void *worker_fn(void *arg) 44 + { 45 + struct worker_ctx *worker = arg; 46 + cpu_set_t mask; 47 + 48 + CPU_ZERO(&mask); 49 + CPU_SET(worker->cpu, &mask); 50 + 51 + if (sched_setaffinity(0, sizeof(mask), &mask)) 52 + return (void *)(uintptr_t)errno; 53 + 54 + while (!worker->stop) { 55 + sched_yield(); 56 + worker->iters++; 57 + } 58 + 59 + return NULL; 60 + } 61 + 62 + static int join_worker(struct worker_ctx *worker) 63 + { 64 + void *ret; 65 + struct timespec ts; 66 + int err; 67 + 68 + if (!worker->started) 69 + return 0; 70 + 71 + if (clock_gettime(CLOCK_REALTIME, &ts)) 72 + return -errno; 73 + 74 + ts.tv_sec += 2; 75 + err = pthread_timedjoin_np(worker->tid, &ret, &ts); 76 + if (err == ETIMEDOUT) 77 + pthread_detach(worker->tid); 78 + if (err) 79 + return -err; 80 + 81 + if ((uintptr_t)ret) 82 + return -(int)(uintptr_t)ret; 83 + 84 + return 0; 85 + } 86 + 87 + static enum scx_test_status setup(void **ctx) 88 + { 89 + struct cyclic_kick_wait *skel; 90 + 91 + skel = cyclic_kick_wait__open(); 92 + SCX_FAIL_IF(!skel, "Failed to open skel"); 93 + SCX_ENUM_INIT(skel); 94 + 95 + *ctx = skel; 96 + return SCX_TEST_PASS; 97 + } 98 + 99 + static enum scx_test_status run(void *ctx) 100 + { 101 + struct cyclic_kick_wait *skel = ctx; 102 + struct worker_ctx workers[NR_WORKERS] = {}; 103 + struct bpf_link *link = NULL; 104 + enum scx_test_status status = SCX_TEST_PASS; 105 + int test_cpus[NR_TEST_CPUS]; 106 + int nr_cpus = 0; 107 + cpu_set_t mask; 108 + int ret, i; 109 + 110 + if (sched_getaffinity(0, sizeof(mask), &mask)) { 111 + SCX_ERR("Failed to get affinity (%d)", errno); 112 + return SCX_TEST_FAIL; 113 + } 114 + 115 + for (i = 0; i < CPU_SETSIZE; i++) { 116 + if (CPU_ISSET(i, &mask)) 117 + test_cpus[nr_cpus++] = i; 118 + if (nr_cpus == NR_TEST_CPUS) 119 + break; 120 + } 121 + 122 + if (nr_cpus < NR_TEST_CPUS) 123 + return SCX_TEST_SKIP; 124 + 125 + skel->rodata->test_cpu_a = test_cpus[0]; 126 + skel->rodata->test_cpu_b = test_cpus[1]; 127 + skel->rodata->test_cpu_c = test_cpus[2]; 128 + 129 + if (cyclic_kick_wait__load(skel)) { 130 + 
SCX_ERR("Failed to load skel"); 131 + return SCX_TEST_FAIL; 132 + } 133 + 134 + link = bpf_map__attach_struct_ops(skel->maps.cyclic_kick_wait_ops); 135 + if (!link) { 136 + SCX_ERR("Failed to attach scheduler"); 137 + return SCX_TEST_FAIL; 138 + } 139 + 140 + for (i = 0; i < NR_WORKERS; i++) 141 + workers[i].cpu = test_cpus[i / WORKERS_PER_CPU]; 142 + 143 + for (i = 0; i < NR_WORKERS; i++) { 144 + ret = pthread_create(&workers[i].tid, NULL, worker_fn, &workers[i]); 145 + if (ret) { 146 + SCX_ERR("Failed to create worker thread %d (%d)", i, ret); 147 + status = SCX_TEST_FAIL; 148 + goto out; 149 + } 150 + workers[i].started = true; 151 + } 152 + 153 + sleep(5); 154 + 155 + if (skel->data->uei.kind != EXIT_KIND(SCX_EXIT_NONE)) { 156 + SCX_ERR("Scheduler exited unexpectedly (kind=%llu code=%lld)", 157 + (unsigned long long)skel->data->uei.kind, 158 + (long long)skel->data->uei.exit_code); 159 + status = SCX_TEST_FAIL; 160 + } 161 + 162 + out: 163 + for (i = 0; i < NR_WORKERS; i++) 164 + workers[i].stop = true; 165 + 166 + for (i = 0; i < NR_WORKERS; i++) { 167 + ret = join_worker(&workers[i]); 168 + if (ret && status == SCX_TEST_PASS) { 169 + SCX_ERR("Failed to join worker thread %d (%d)", i, ret); 170 + status = SCX_TEST_FAIL; 171 + } 172 + } 173 + 174 + if (link) 175 + bpf_link__destroy(link); 176 + 177 + return status; 178 + } 179 + 180 + static void cleanup(void *ctx) 181 + { 182 + struct cyclic_kick_wait *skel = ctx; 183 + 184 + cyclic_kick_wait__destroy(skel); 185 + } 186 + 187 + struct scx_test cyclic_kick_wait = { 188 + .name = "cyclic_kick_wait", 189 + .description = "Verify SCX_KICK_WAIT forward progress under a 3-CPU wait cycle", 190 + .setup = setup, 191 + .run = run, 192 + .cleanup = cleanup, 193 + }; 194 + REGISTER_SCX_TEST(&cyclic_kick_wait)
+44
tools/testing/selftests/tc-testing/tc-tests/infra/filter.json
··· 22 22 "teardown": [ 23 23 "$TC qdisc del dev $DUMMY root handle 1: htb default 1" 24 24 ] 25 + }, 26 + { 27 + "id": "b7e3", 28 + "name": "Empty fw filter on shared block - rejected at config time", 29 + "category": [ 30 + "filter", 31 + "fw" 32 + ], 33 + "plugins": { 34 + "requires": "nsPlugin" 35 + }, 36 + "setup": [ 37 + "$TC qdisc add dev $DEV1 egress_block 1 clsact" 38 + ], 39 + "cmdUnderTest": "$TC filter add block 1 protocol ip prio 1 fw", 40 + "expExitCode": "2", 41 + "verifyCmd": "$TC filter show block 1", 42 + "matchPattern": "fw", 43 + "matchCount": "0", 44 + "teardown": [ 45 + "$TC qdisc del dev $DEV1 clsact" 46 + ] 47 + }, 48 + { 49 + "id": "c8f4", 50 + "name": "Flow filter on shared block without baseclass - rejected at config time", 51 + "category": [ 52 + "filter", 53 + "flow" 54 + ], 55 + "plugins": { 56 + "requires": "nsPlugin" 57 + }, 58 + "setup": [ 59 + "$TC qdisc add dev $DEV1 ingress_block 1 clsact" 60 + ], 61 + "cmdUnderTest": "$TC filter add block 1 protocol ip prio 1 handle 1 flow map key dst", 62 + "expExitCode": "2", 63 + "verifyCmd": "$TC filter show block 1", 64 + "matchPattern": "flow", 65 + "matchCount": "0", 66 + "teardown": [ 67 + "$TC qdisc del dev $DEV1 clsact" 68 + ] 25 69 } 26 70 ]
+25
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 1111 1111 "teardown": [ 1112 1112 "$TC qdisc del dev $DUMMY root handle 1:" 1113 1113 ] 1114 + }, 1115 + { 1116 + "id": "a3d7", 1117 + "name": "HFSC with large m1 - no divide-by-zero on class reactivation", 1118 + "category": [ 1119 + "qdisc", 1120 + "hfsc" 1121 + ], 1122 + "plugins": { 1123 + "requires": "nsPlugin" 1124 + }, 1125 + "setup": [ 1126 + "$TC qdisc replace dev $DUMMY root handle 1: hfsc default 1", 1127 + "$TC class replace dev $DUMMY parent 1: classid 1:1 hfsc rt m1 32gbit d 1ms m2 0bit ls m1 32gbit d 1ms m2 0bit", 1128 + "ping -I$DUMMY -f -c1 -s64 -W1 10.10.10.1 || true", 1129 + "sleep 1" 1130 + ], 1131 + "cmdUnderTest": "ping -I$DUMMY -f -c1 -s64 -W1 10.10.10.1 || true", 1132 + "expExitCode": "0", 1133 + "verifyCmd": "$TC qdisc show dev $DUMMY", 1134 + "matchPattern": "qdisc hfsc 1: root", 1135 + "matchCount": "1", 1136 + "teardown": [ 1137 + "$TC qdisc del dev $DUMMY handle 1: root" 1138 + ] 1114 1139 } 1115 1140 ]
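For a sense of scale in test a3d7, the rt curve's first segment allows

	m1 * d = 32 Gbit/s * 1 ms = 32,000,000 bits = 4,000,000 bytes (4 MB)

of service before dropping to m2 = 0bit. The setup ping activates the class and the one-second sleep lets it go idle, so the command under test exercises reactivation; slopes this steep are what provoke the divide-by-zero the test guards against (the exact faulting expression lives in HFSC's internal slope-conversion helpers and is not reproduced here).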
-1
tools/tracing/rtla/src/timerlat_bpf.h
··· 12 12 }; 13 13 14 14 #ifndef __bpf__ 15 - #include <bpf/libbpf.h> 16 15 #ifdef HAVE_BPF_SKEL 17 16 int timerlat_bpf_init(struct timerlat_params *params); 18 17 int timerlat_bpf_attach(void);