Linux kernel mirror (for testing) — git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'reset-gpio-for-v6.19' of https://git.pengutronix.de/git/pza/linux into gpio/for-next

Reset/GPIO/swnode changes for v6.19

* Extend software node implementation, allowing its properties to reference
  existing firmware nodes (see the hedged sketch after this list).
* Update the GPIO property interface to use reworked swnode macros.
* Rework reset-gpio code to use GPIO lookup via swnode.
* Fix spi-cs42l43 driver to work with swnode changes.
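
For the first item above, a hedged sketch of what a software node property referencing an already-registered firmware node can look like. This is illustrative only: the .fwnode member mirrors the reworked struct software_node_ref_args visible in the drivers/base/swnode.c hunk further down, PROPERTY_ENTRY_REF_ARRAY() is the long-standing property-entry macro, and the "acme" names plus the way the target fwnode is obtained are hypothetical.

    /* Hypothetical sketch, not code from this series. */
    static struct software_node_ref_args acme_refs[] = {
            { .fwnode = NULL },     /* point at an existing fwnode at runtime,
                                     * e.g. dev_fwnode(some_parent_device) */
    };

    static const struct property_entry acme_props[] = {
            PROPERTY_ENTRY_REF_ARRAY("acme,controller", acme_refs),
            { }
    };

    static const struct software_node acme_swnode = {
            .name = "acme",
            .properties = acme_props,
    };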

+4207 -1888
+1
.mailmap
···
 Quentin Monnet <qmo@kernel.org> <quentin.monnet@netronome.com>
 Quentin Monnet <qmo@kernel.org> <quentin@isovalent.com>
 Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
+Rae Moar <raemoar63@gmail.com> <rmoar@google.com>
 Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
 Rajeev Nandan <quic_rajeevny@quicinc.com> <rajeevny@codeaurora.org>
 Rajendra Nayak <quic_rjendra@quicinc.com> <rnayak@codeaurora.org>
+4
CREDITS
···
 S: 602 00 Brno
 S: Czech Republic

+N: Karsten Keil
+E: isdn@linux-pingi.de
+D: ISDN subsystem maintainer
+
 N: Jakob Kemi
 E: jakob.kemi@telia.com
 D: V4L W9966 Webcam driver
+1 -1
Documentation/devicetree/bindings/gpio/ti,twl4030-gpio.yaml
···
 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
 %YAML 1.2
 ---
-$id: http://devicetree.org/schemas/ti,twl4030-gpio.yaml#
+$id: http://devicetree.org/schemas/gpio/ti,twl4030-gpio.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#

 title: TI TWL4030 GPIO controller
+2 -2
Documentation/devicetree/bindings/net/microchip,sparx5-switch.yaml
···
     then:
       properties:
         reg:
-          minItems: 2
+          maxItems: 2
         reg-names:
-          minItems: 2
+          maxItems: 2
     else:
       properties:
         reg:
+2 -2
Documentation/devicetree/bindings/sound/qcom,pm4125-sdw.yaml
···

     $ref: /schemas/types.yaml#/definitions/uint32-array
     minItems: 2
-    maxItems: 2
+    maxItems: 4
     items:
       enum: [1, 2, 3, 4]

···

     $ref: /schemas/types.yaml#/definitions/uint32-array
     minItems: 2
-    maxItems: 2
+    maxItems: 5
     items:
       enum: [1, 2, 3, 4, 5]

+4 -4
Documentation/firmware-guide/acpi/i2c-muxes.rst
···
     Name (_HID, ...)
     Name (_CRS, ResourceTemplate () {
         I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
-                      AddressingMode7Bit, "\\_SB.SMB1.CH00", 0x00,
-                      ResourceConsumer,,)
+                      AddressingMode7Bit, "\\_SB.SMB1.MUX0.CH00",
+                      0x00, ResourceConsumer,,)
     }
 }
 }
···
     Name (_HID, ...)
     Name (_CRS, ResourceTemplate () {
         I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
-                      AddressingMode7Bit, "\\_SB.SMB1.CH01", 0x00,
-                      ResourceConsumer,,)
+                      AddressingMode7Bit, "\\_SB.SMB1.MUX0.CH01",
+                      0x00, ResourceConsumer,,)
     }
 }
 }
+2
Documentation/netlink/specs/dpll.yaml
···
       reply: &pin-attrs
         attributes:
           - id
+          - module-name
+          - clock-id
           - board-label
           - panel-label
           - package-label
-3
Documentation/networking/netconsole.rst
···

 Sysdata append support by Breno Leitao <leitao@debian.org>, Jan 15 2025

-Please send bug reports to Matt Mackall <mpm@selenic.com>
-Satyam Sharma <satyam.sharma@gmail.com>, and Cong Wang <xiyou.wangcong@gmail.com>
-
 Introduction:
 =============

+13 -9
MAINTAINERS
···
 F: drivers/net/dsa/bcm_sf2*
 F: include/linux/dsa/brcm.h
 F: include/linux/platform_data/b53.h
+F: net/dsa/tag_brcm.c

 BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE
 M: Florian Fainelli <florian.fainelli@broadcom.com>
···
 F: include/linux/net/intel/*/

 INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
+M: Krzysztof Czurylo <krzysztof.czurylo@intel.com>
 M: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
 L: linux-rdma@vger.kernel.org
 S: Supported
···
 K: \bSGX_

 INTEL SKYLAKE INT3472 ACPI DEVICE DRIVER
-M: Daniel Scally <djrscally@gmail.com>
+M: Daniel Scally <dan.scally@ideasonboard.com>
+M: Sakari Ailus <sakari.ailus@linux.intel.com>
 S: Maintained
 F: drivers/platform/x86/intel/int3472/
 F: include/linux/platform_data/x86/int3472.h
···
 F: drivers/infiniband/ulp/isert

 ISDN/CMTP OVER BLUETOOTH
-M: Karsten Keil <isdn@linux-pingi.de>
-L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
 L: netdev@vger.kernel.org
-S: Odd Fixes
+S: Orphan
 W: http://www.isdn4linux.de
 F: Documentation/isdn/
 F: drivers/isdn/capi/
···
 F: net/bluetooth/cmtp/

 ISDN/mISDN SUBSYSTEM
-M: Karsten Keil <isdn@linux-pingi.de>
-L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
 L: netdev@vger.kernel.org
-S: Maintained
+S: Orphan
 W: http://www.isdn4linux.de
 F: drivers/isdn/Kconfig
 F: drivers/isdn/Makefile
···
 F: scripts/Makefile.kasan

 KCONFIG
+M: Nathan Chancellor <nathan@kernel.org>
+M: Nicolas Schier <nsc@kernel.org>
 L: linux-kbuild@vger.kernel.org
-S: Orphan
+S: Odd Fixes
 Q: https://patchwork.kernel.org/project/linux-kbuild/list/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/kbuild/linux.git
 F: Documentation/kbuild/kconfig*
 F: scripts/Kconfig.include
 F: scripts/kconfig/
···
 KERNEL UNIT TESTING FRAMEWORK (KUnit)
 M: Brendan Higgins <brendan.higgins@linux.dev>
 M: David Gow <davidgow@google.com>
-R: Rae Moar <rmoar@google.com>
+R: Rae Moar <raemoar63@gmail.com>
 L: linux-kselftest@vger.kernel.org
 L: kunit-dev@googlegroups.com
 S: Maintained
···
 R: Jiri Olsa <jolsa@kernel.org>
 R: Ian Rogers <irogers@google.com>
 R: Adrian Hunter <adrian.hunter@intel.com>
+R: James Clark <james.clark@linaro.org>
 L: linux-perf-users@vger.kernel.org
 L: linux-kernel@vger.kernel.org
 S: Supported
···
 QUALCOMM WCN36XX WIRELESS DRIVER
 M: Loic Poulain <loic.poulain@oss.qualcomm.com>
 L: wcn36xx@lists.infradead.org
+L: linux-wireless@vger.kernel.org
 S: Supported
 W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
 F: drivers/net/wireless/ath/wcn36xx/
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
+7
arch/Kconfig
···
           An architecture should select this option if it requires the
           .kcfi_traps section for KCFI trap handling.

+config ARCH_USES_CFI_GENERIC_LLVM_PASS
+        bool
+        help
+          An architecture should select this option if it uses the generic
+          KCFIPass in LLVM to expand kCFI bundles instead of architecture-specific
+          lowering.
+
 config CFI
         bool "Use Kernel Control Flow Integrity (kCFI)"
         default CFI_CLANG
+2
arch/arm/Kconfig
···
         select ARCH_USE_BUILTIN_BSWAP
         select ARCH_USE_CMPXCHG_LOCKREF
         select ARCH_USE_MEMTEST
+        # https://github.com/llvm/llvm-project/commit/d130f402642fba3d065aacb506cb061c899558de
+        select ARCH_USES_CFI_GENERIC_LLVM_PASS if CLANG_VERSION < 220000
         select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
         select ARCH_WANT_GENERAL_HUGETLB
         select ARCH_WANT_IPC_PARSE_VERSION
+3 -2
arch/arm64/net/bpf_jit_comp.c
···
         u8 src = bpf2a64[insn->src_reg];
         const u8 tmp = bpf2a64[TMP_REG_1];
         const u8 tmp2 = bpf2a64[TMP_REG_2];
+        const u8 tmp3 = bpf2a64[TMP_REG_3];
         const u8 fp = bpf2a64[BPF_REG_FP];
         const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
         const u8 priv_sp = bpf2a64[PRIVATE_SP];
···
         case BPF_ST | BPF_PROBE_MEM32 | BPF_W:
         case BPF_ST | BPF_PROBE_MEM32 | BPF_DW:
                 if (BPF_MODE(insn->code) == BPF_PROBE_MEM32) {
-                        emit(A64_ADD(1, tmp2, dst, arena_vm_base), ctx);
-                        dst = tmp2;
+                        emit(A64_ADD(1, tmp3, dst, arena_vm_base), ctx);
+                        dst = tmp3;
                 }
                 if (dst == fp) {
                         dst_adj = ctx->priv_sp_used ? priv_sp : A64_SP;
+1 -1
arch/loongarch/Makefile
···
 ifdef CONFIG_RUSTC_HAS_ANNOTATE_TABLEJUMP
 KBUILD_RUSTFLAGS += -Cllvm-args=--loongarch-annotate-tablejump
 else
-KBUILD_RUSTFLAGS += -Zno-jump-tables # keep compatibility with older compilers
+KBUILD_RUSTFLAGS += $(if $(call rustc-min-version,109300),-Cjump-tables=n,-Zno-jump-tables) # keep compatibility with older compilers
 endif
 ifdef CONFIG_LTO_CLANG
 # The annotate-tablejump option can not be passed to LLVM backend when LTO is enabled.
+10 -3
arch/parisc/kernel/unwind.c
···

 #define KERNEL_START (KERNEL_BINARY_TEXT_START)

+#define ALIGNMENT_OK(ptr, type) (((ptr) & (sizeof(type) - 1)) == 0)
+
 extern struct unwind_table_entry __start___unwind[];
 extern struct unwind_table_entry __stop___unwind[];

···
         if (pc_is_kernel_fn(pc, _switch_to) ||
             pc == (unsigned long)&_switch_to_ret) {
                 info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE;
-                info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET);
+                if (ALIGNMENT_OK(info->prev_sp, long))
+                        info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET);
+                else
+                        info->prev_ip = info->prev_sp = 0;
                 return 1;
         }

 #ifdef CONFIG_IRQSTACKS
-        if (pc == (unsigned long)&_call_on_stack) {
+        if (pc == (unsigned long)&_call_on_stack && ALIGNMENT_OK(info->sp, long)) {
                 info->prev_sp = *(unsigned long *)(info->sp - FRAME_SIZE - REG_SZ);
                 info->prev_ip = *(unsigned long *)(info->sp - FRAME_SIZE - RP_OFFSET);
                 return 1;
···
                 info->prev_sp = info->sp - frame_size;
                 if (e->Millicode)
                         info->rp = info->r31;
-                else if (rpoffset)
+                else if (rpoffset && ALIGNMENT_OK(info->prev_sp, long))
                         info->rp = *(unsigned long *)(info->prev_sp - rpoffset);
+                else
+                        info->rp = 0;
                 info->prev_ip = info->rp;
                 info->rp = 0;
         }
+6
arch/riscv/include/asm/asm.h
···
 #define __ASM_STR(x)    #x
 #endif

+#ifdef CONFIG_AS_HAS_INSN
+#define ASM_INSN_I(__x) ".insn " __x
+#else
+#define ASM_INSN_I(__x) ".4byte " __x
+#endif
+
 #if __riscv_xlen == 64
 #define __REG_SEL(a, b) __ASM_STR(a)
 #elif __riscv_xlen == 32
+4 -4
arch/riscv/include/asm/insn-def.h
···
         INSN_S(OPCODE_OP_IMM, FUNC3(6), __RS2(3),       \
                SIMM12((offset) & 0xfe0), RS1(base))

-#define RISCV_PAUSE     ".4byte 0x100000f"
-#define ZAWRS_WRS_NTO   ".4byte 0x00d00073"
-#define ZAWRS_WRS_STO   ".4byte 0x01d00073"
-#define RISCV_NOP4      ".4byte 0x00000013"
+#define RISCV_PAUSE     ASM_INSN_I("0x100000f")
+#define ZAWRS_WRS_NTO   ASM_INSN_I("0x00d00073")
+#define ZAWRS_WRS_STO   ASM_INSN_I("0x01d00073")
+#define RISCV_NOP4      ASM_INSN_I("0x00000013")

 #define RISCV_INSN_NOP4 _AC(0x00000013, U)

+3 -3
arch/riscv/include/asm/vendor_extensions/mips.h
···
  * allowing any subsequent instructions to fetch.
  */

-#define MIPS_PAUSE      ".4byte 0x00501013\n\t"
-#define MIPS_EHB        ".4byte 0x00301013\n\t"
-#define MIPS_IHB        ".4byte 0x00101013\n\t"
+#define MIPS_PAUSE      ASM_INSN_I("0x00501013\n\t")
+#define MIPS_EHB        ASM_INSN_I("0x00301013\n\t")
+#define MIPS_IHB        ASM_INSN_I("0x00101013\n\t")

 #endif // _ASM_RISCV_VENDOR_EXTENSIONS_MIPS_H
+2 -2
arch/riscv/kernel/kgdb.c
···
 {
         if (!strncmp(remcom_in_buffer, gdb_xfer_read_target,
                      sizeof(gdb_xfer_read_target)))
-                strcpy(remcom_out_buffer, riscv_gdb_stub_target_desc);
+                strscpy(remcom_out_buffer, riscv_gdb_stub_target_desc, BUFMAX);
         else if (!strncmp(remcom_in_buffer, gdb_xfer_read_cpuxml,
                           sizeof(gdb_xfer_read_cpuxml)))
-                strcpy(remcom_out_buffer, riscv_gdb_stub_cpuxml);
+                strscpy(remcom_out_buffer, riscv_gdb_stub_cpuxml, BUFMAX);
 }

 static inline void kgdb_arch_update_addr(struct pt_regs *regs,
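
Both strcpy() calls above become bounded copies; the cpuidle-riscv-sbi hunk later in this set does the same. A generic illustration (not code from this series) of why strscpy() is preferred:

    char buf[8];

    strcpy(buf, src);               /* unbounded: overflows buf when src is 8+ bytes */
    strscpy(buf, src, sizeof(buf)); /* bounded: truncates and always NUL-terminates */
    strscpy(buf, src);              /* two-argument form infers sizeof(buf) for arrays */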
+6 -2
arch/riscv/kernel/module-sections.c
···
         unsigned int num_plts = 0;
         unsigned int num_gots = 0;
         Elf_Rela *scratch = NULL;
+        Elf_Rela *new_scratch;
         size_t scratch_size = 0;
         int i;

···
         scratch_size_needed = (num_scratch_relas + num_relas) * sizeof(*scratch);
         if (scratch_size_needed > scratch_size) {
                 scratch_size = scratch_size_needed;
-                scratch = kvrealloc(scratch, scratch_size, GFP_KERNEL);
-                if (!scratch)
+                new_scratch = kvrealloc(scratch, scratch_size, GFP_KERNEL);
+                if (!new_scratch) {
+                        kvfree(scratch);
                         return -ENOMEM;
+                }
+                scratch = new_scratch;
         }

         for (size_t j = 0; j < num_relas; j++)
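
This hunk and the s390 dump_pagetables.c hunk further down fix the same anti-pattern: assigning the kvrealloc() result straight back to the only pointer to the buffer leaks it when the allocation fails. A generic sketch of the rule, not code from this series:

    buf = kvrealloc(buf, new_size, GFP_KERNEL);     /* WRONG: old buffer leaked on NULL */

    tmp = kvrealloc(buf, new_size, GFP_KERNEL);     /* RIGHT: old pointer kept until */
    if (!tmp) {                                     /* the new one is known good     */
            kvfree(buf);    /* free it, or keep buf usable, whichever the caller needs */
            return -ENOMEM;
    }
    buf = tmp;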
+19 -2
arch/riscv/kernel/stacktrace.c
···

 #ifdef CONFIG_FRAME_POINTER

+/*
+ * This disables KASAN checking when reading a value from another task's stack,
+ * since the other task could be running on another CPU and could have poisoned
+ * the stack in the meantime.
+ */
+#define READ_ONCE_TASK_STACK(task, x)                   \
+({                                                      \
+        unsigned long val;                              \
+        unsigned long addr = x;                         \
+        if ((task) == current)                          \
+                val = READ_ONCE(addr);                  \
+        else                                            \
+                val = READ_ONCE_NOCHECK(addr);          \
+        val;                                            \
+})
+
 extern asmlinkage void handle_exception(void);
 extern unsigned long ret_from_exception_end;

···
                         fp = frame->ra;
                         pc = regs->ra;
                 } else {
-                        fp = frame->fp;
-                        pc = ftrace_graph_ret_addr(current, &graph_idx, frame->ra,
+                        fp = READ_ONCE_TASK_STACK(task, frame->fp);
+                        pc = READ_ONCE_TASK_STACK(task, frame->ra);
+                        pc = ftrace_graph_ret_addr(current, &graph_idx, pc,
                                                    &frame->ra);
                         if (pc >= (unsigned long)handle_exception &&
                             pc < (unsigned long)&ret_from_exception_end) {
+1 -1
arch/riscv/kernel/tests/Kconfig.debug
···
           If unsure, say N.

 config RISCV_KPROBES_KUNIT
-        bool "KUnit test for riscv kprobes" if !KUNIT_ALL_TESTS
+        tristate "KUnit test for riscv kprobes" if !KUNIT_ALL_TESTS
         depends on KUNIT
         depends on KPROBES
         default KUNIT_ALL_TESTS
+3 -1
arch/riscv/kernel/tests/kprobes/Makefile
···
-obj-y += test-kprobes.o test-kprobes-asm.o
+obj-$(CONFIG_RISCV_KPROBES_KUNIT) += kprobes_riscv_kunit.o
+
+kprobes_riscv_kunit-objs := test-kprobes.o test-kprobes-asm.o
+4 -1
arch/riscv/kernel/tests/kprobes/test-kprobes.c
···
 };

 static struct kunit_suite kprobes_test_suite = {
-        .name = "kprobes_test_riscv",
+        .name = "kprobes_riscv",
         .test_cases = kprobes_testcases,
 };

 kunit_test_suites(&kprobes_test_suite);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("KUnit test for riscv kprobes");
+1 -1
arch/riscv/mm/ptdump.c
···
 #define pt_dump_seq_puts(m, fmt)        \
 ({                                      \
         if (m)                          \
-                seq_printf(m, fmt);     \
+                seq_puts(m, fmt);       \
 })

 /*
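
seq_printf() parses its second argument as a format string, so routing a plain string through pt_dump_seq_puts() would misbehave on any literal '%'; seq_puts() writes the string verbatim and is cheaper. Illustrative only, not from this series:

    seq_printf(m, s);       /* s is parsed as a format: a "%s" inside it reads garbage */
    seq_puts(m, s);         /* s is emitted verbatim, no format interpretation */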
-1
arch/s390/Kconfig
···
         select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
         select ARCH_WANT_KERNEL_PMD_MKWRITE
         select ARCH_WANT_LD_ORPHAN_WARN
-        select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
         select ARCH_WANTS_THP_SWAP
         select BUILDTIME_TABLE_SORT
         select CLONE_BACKWARDS2
+9 -5
arch/s390/configs/debug_defconfig
···
 CONFIG_MEMORY_HOTPLUG=y
 CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_KSM=y
+CONFIG_PERSISTENT_HUGE_ZERO_FOLIO=y
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA_DEBUGFS=y
 CONFIG_CMA_SYSFS=y
···
 CONFIG_TLS_TOE=y
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
-CONFIG_XDP_SOCKETS=y
-CONFIG_XDP_SOCKETS_DIAG=m
-CONFIG_DIBS=y
-CONFIG_DIBS_LO=y
 CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
+CONFIG_DIBS=y
+CONFIG_DIBS_LO=y
+CONFIG_XDP_SOCKETS=y
+CONFIG_XDP_SOCKETS_DIAG=m
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
 CONFIG_IP_ADVANCED_ROUTER=y
···
 CONFIG_SCSI_DH_ALUA=m
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LLBITMAP=y
 # CONFIG_MD_BITMAP_FILE is not set
 CONFIG_MD_LINEAR=m
 CONFIG_MD_CLUSTER=m
···
 CONFIG_JFS_SECURITY=y
 CONFIG_JFS_STATISTICS=y
 CONFIG_XFS_FS=y
+CONFIG_XFS_SUPPORT_V4=y
+CONFIG_XFS_SUPPORT_ASCII_CI=y
 CONFIG_XFS_QUOTA=y
 CONFIG_XFS_POSIX_ACL=y
 CONFIG_XFS_RT=y
+# CONFIG_XFS_ONLINE_SCRUB is not set
 CONFIG_XFS_DEBUG=y
 CONFIG_GFS2_FS=m
 CONFIG_GFS2_FS_LOCKING_DLM=y
···
 CONFIG_BTRFS_DEBUG=y
 CONFIG_BTRFS_ASSERT=y
 CONFIG_NILFS2_FS=m
-CONFIG_FS_DAX=y
 CONFIG_EXPORTFS_BLOCK_OPS=y
 CONFIG_FS_ENCRYPTION=y
 CONFIG_FS_VERITY=y
+9 -5
arch/s390/configs/defconfig
···
 CONFIG_MEMORY_HOTPLUG=y
 CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_KSM=y
+CONFIG_PERSISTENT_HUGE_ZERO_FOLIO=y
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA_SYSFS=y
 CONFIG_CMA_AREAS=7
···
 CONFIG_TLS_TOE=y
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
-CONFIG_XDP_SOCKETS=y
-CONFIG_XDP_SOCKETS_DIAG=m
-CONFIG_DIBS=y
-CONFIG_DIBS_LO=y
 CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
+CONFIG_DIBS=y
+CONFIG_DIBS_LO=y
+CONFIG_XDP_SOCKETS=y
+CONFIG_XDP_SOCKETS_DIAG=m
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
 CONFIG_IP_ADVANCED_ROUTER=y
···
 CONFIG_SCSI_DH_ALUA=m
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LLBITMAP=y
 # CONFIG_MD_BITMAP_FILE is not set
 CONFIG_MD_LINEAR=m
 CONFIG_MD_CLUSTER=m
···
 CONFIG_JFS_SECURITY=y
 CONFIG_JFS_STATISTICS=y
 CONFIG_XFS_FS=y
+CONFIG_XFS_SUPPORT_V4=y
+CONFIG_XFS_SUPPORT_ASCII_CI=y
 CONFIG_XFS_QUOTA=y
 CONFIG_XFS_POSIX_ACL=y
 CONFIG_XFS_RT=y
+# CONFIG_XFS_ONLINE_SCRUB is not set
 CONFIG_GFS2_FS=m
 CONFIG_GFS2_FS_LOCKING_DLM=y
 CONFIG_OCFS2_FS=m
 CONFIG_BTRFS_FS=y
 CONFIG_BTRFS_FS_POSIX_ACL=y
 CONFIG_NILFS2_FS=m
-CONFIG_FS_DAX=y
 CONFIG_EXPORTFS_BLOCK_OPS=y
 CONFIG_FS_ENCRYPTION=y
 CONFIG_FS_VERITY=y
-1
arch/s390/configs/zfcpdump_defconfig
···
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_SAFE=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_DCSSBLK is not set
 # CONFIG_DASD is not set
 CONFIG_ENCLOSURE_SERVICES=y
 CONFIG_SCSI=y
+34 -18
arch/s390/crypto/phmac_s390.c
···
         u64 buflen[2];
 };

+enum async_op {
+        OP_NOP = 0,
+        OP_UPDATE,
+        OP_FINAL,
+        OP_FINUP,
+};
+
 /* phmac request context */
 struct phmac_req_ctx {
         struct hash_walk_helper hwh;
         struct kmac_sha2_ctx kmac_ctx;
-        bool final;
+        enum async_op async_op;
 };

 /*
···
          * using engine to serialize requests.
          */
         if (rc == 0 || rc == -EKEYEXPIRED) {
+                req_ctx->async_op = OP_UPDATE;
                 atomic_inc(&tfm_ctx->via_engine_ctr);
                 rc = crypto_transfer_hash_request_to_engine(phmac_crypto_engine, req);
                 if (rc != -EINPROGRESS)
···
          * using engine to serialize requests.
          */
         if (rc == 0 || rc == -EKEYEXPIRED) {
-                req->nbytes = 0;
-                req_ctx->final = true;
+                req_ctx->async_op = OP_FINAL;
                 atomic_inc(&tfm_ctx->via_engine_ctr);
                 rc = crypto_transfer_hash_request_to_engine(phmac_crypto_engine, req);
                 if (rc != -EINPROGRESS)
···
         if (rc)
                 goto out;

+        req_ctx->async_op = OP_FINUP;
+
         /* Try synchronous operations if no active engine usage */
         if (!atomic_read(&tfm_ctx->via_engine_ctr)) {
                 rc = phmac_kmac_update(req, false);
                 if (rc == 0)
-                        req->nbytes = 0;
+                        req_ctx->async_op = OP_FINAL;
         }
-        if (!rc && !req->nbytes && !atomic_read(&tfm_ctx->via_engine_ctr)) {
+        if (!rc && req_ctx->async_op == OP_FINAL &&
+            !atomic_read(&tfm_ctx->via_engine_ctr)) {
                 rc = phmac_kmac_final(req, false);
                 if (rc == 0)
                         goto out;
···
          * using engine to serialize requests.
          */
         if (rc == 0 || rc == -EKEYEXPIRED) {
-                req_ctx->final = true;
+                /* req->async_op has been set to either OP_FINUP or OP_FINAL */
                 atomic_inc(&tfm_ctx->via_engine_ctr);
                 rc = crypto_transfer_hash_request_to_engine(phmac_crypto_engine, req);
                 if (rc != -EINPROGRESS)
···

         /*
          * Three kinds of requests come in here:
-         * update when req->nbytes > 0 and req_ctx->final is false
-         * final when req->nbytes = 0 and req_ctx->final is true
-         * finup when req->nbytes > 0 and req_ctx->final is true
-         * For update and finup the hwh walk needs to be prepared and
-         * up to date but the actual nr of bytes in req->nbytes may be
-         * any non zero number. For final there is no hwh walk needed.
+         * 1. req->async_op == OP_UPDATE with req->nbytes > 0
+         * 2. req->async_op == OP_FINUP with req->nbytes > 0
+         * 3. req->async_op == OP_FINAL
+         * For update and finup the hwh walk has already been prepared
+         * by the caller. For final there is no hwh walk needed.
          */

-        if (req->nbytes) {
+        switch (req_ctx->async_op) {
+        case OP_UPDATE:
+        case OP_FINUP:
                 rc = phmac_kmac_update(req, true);
                 if (rc == -EKEYEXPIRED) {
                         /*
···
                         hwh_advance(hwh, rc);
                         goto out;
                 }
-                req->nbytes = 0;
-        }
-
-        if (req_ctx->final) {
+                if (req_ctx->async_op == OP_UPDATE)
+                        break;
+                req_ctx->async_op = OP_FINAL;
+                fallthrough;
+        case OP_FINAL:
                 rc = phmac_kmac_final(req, true);
                 if (rc == -EKEYEXPIRED) {
                         /*
···
                         cond_resched();
                         return -ENOSPC;
                 }
+                break;
+        default:
+                /* unknown/unsupported/unimplemented asynch op */
+                return -EOPNOTSUPP;
         }

 out:
-        if (rc || req_ctx->final)
+        if (rc || req_ctx->async_op == OP_FINAL)
                 memzero_explicit(kmac_ctx, sizeof(*kmac_ctx));
         pr_debug("request complete with rc=%d\n", rc);
         local_bh_disable();
-1
arch/s390/include/asm/pci.h
···
         u8              has_resources   : 1;
         u8              is_physfn       : 1;
         u8              util_str_avail  : 1;
-        u8              irqs_registered : 1;
         u8              tid_avail       : 1;
         u8              rtr_avail       : 1;    /* Relaxed translation allowed */
         unsigned int    devfn;          /* DEVFN part of the RID*/
+7 -12
arch/s390/mm/dump_pagetables.c
···

 static int add_marker(unsigned long start, unsigned long end, const char *name)
 {
-        size_t oldsize, newsize;
+        struct addr_marker *new;
+        size_t newsize;

-        oldsize = markers_cnt * sizeof(*markers);
-        newsize = oldsize + 2 * sizeof(*markers);
-        if (!oldsize)
-                markers = kvmalloc(newsize, GFP_KERNEL);
-        else
-                markers = kvrealloc(markers, newsize, GFP_KERNEL);
-        if (!markers)
-                goto error;
+        newsize = (markers_cnt + 2) * sizeof(*markers);
+        new = kvrealloc(markers, newsize, GFP_KERNEL);
+        if (!new)
+                return -ENOMEM;
+        markers = new;
         markers[markers_cnt].is_start = 1;
         markers[markers_cnt].start_address = start;
         markers[markers_cnt].size = end - start;
···
         markers[markers_cnt].name = name;
         markers_cnt++;
         return 0;
-error:
-        markers_cnt = 0;
-        return -ENOMEM;
 }

 static int pt_dump_init(void)
+2 -2
arch/s390/pci/pci_event.c
···
          * is unbound or probed and that userspace can't access its
          * configuration space while we perform recovery.
          */
-        pci_dev_lock(pdev);
+        device_lock(&pdev->dev);
         if (pdev->error_state == pci_channel_io_perm_failure) {
                 ers_res = PCI_ERS_RESULT_DISCONNECT;
                 goto out_unlock;
···
                 driver->err_handler->resume(pdev);
         pci_uevent_ers(pdev, PCI_ERS_RESULT_RECOVERED);
 out_unlock:
-        pci_dev_unlock(pdev);
+        device_unlock(&pdev->dev);
         zpci_report_status(zdev, "recovery", status_str);

         return ers_res;
+1 -8
arch/s390/pci/pci_irq.c
···
         else
                 rc = zpci_set_airq(zdev);

-        if (!rc)
-                zdev->irqs_registered = 1;
-
         return rc;
 }

···
                 rc = zpci_clear_directed_irq(zdev);
         else
                 rc = zpci_clear_airq(zdev);
-
-        if (!rc)
-                zdev->irqs_registered = 0;

         return rc;
 }
···
 {
         struct zpci_dev *zdev = to_zpci(pdev);

-        if (!zdev->irqs_registered)
-                zpci_set_irq(zdev);
+        zpci_set_irq(zdev);
         return true;
 }

+2 -2
arch/x86/Makefile
···
 #
 #    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53383
 #
-KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -mno-sse4a
 KBUILD_RUSTFLAGS += --target=$(objtree)/scripts/target.json
 KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2

···
 #    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104816
 #
 KBUILD_CFLAGS += $(call cc-option,-fcf-protection=branch -fno-jump-tables)
-KBUILD_RUSTFLAGS += -Zcf-protection=branch -Zno-jump-tables
+KBUILD_RUSTFLAGS += -Zcf-protection=branch $(if $(call rustc-min-version,109300),-Cjump-tables=n,-Zno-jump-tables)
 else
 KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
 endif
+1
arch/x86/events/intel/core.c
···
                 break;

         case INTEL_PANTHERLAKE_L:
+        case INTEL_WILDCATLAKE_L:
                 pr_cont("Pantherlake Hybrid events, ");
                 name = "pantherlake_hybrid";
                 goto lnl_common;
+2 -1
arch/x86/events/intel/ds.c
···
 {
         u64 val;

-        WARN_ON_ONCE(hybrid_pmu(event->pmu)->pmu_type == hybrid_big);
+        WARN_ON_ONCE(is_hybrid() &&
+                     hybrid_pmu(event->pmu)->pmu_type == hybrid_big);

         dse &= PERF_PEBS_DATA_SOURCE_GRT_MASK;
         val = hybrid_var(event->pmu, pebs_data_source)[dse];
+1
arch/x86/events/intel/uncore.c
···
         X86_MATCH_VFM(INTEL_ARROWLAKE_H,        &mtl_uncore_init),
         X86_MATCH_VFM(INTEL_LUNARLAKE_M,        &lnl_uncore_init),
         X86_MATCH_VFM(INTEL_PANTHERLAKE_L,      &ptl_uncore_init),
+        X86_MATCH_VFM(INTEL_WILDCATLAKE_L,      &ptl_uncore_init),
         X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X,   &spr_uncore_init),
         X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X,    &spr_uncore_init),
         X86_MATCH_VFM(INTEL_GRANITERAPIDS_X,    &gnr_uncore_init),
-1
arch/x86/include/asm/amd/node.h
···
 #define AMD_NODE0_PCI_SLOT      0x18

 struct pci_dev *amd_node_get_func(u16 node, u8 func);
-struct pci_dev *amd_node_get_root(u16 node);

 static inline u16 amd_num_nodes(void)
 {
+3 -3
arch/x86/include/asm/intel-family.h
···

 #define INTEL_LUNARLAKE_M       IFM(6, 0xBD) /* Lion Cove / Skymont */

-#define INTEL_PANTHERLAKE_L     IFM(6, 0xCC) /* Cougar Cove / Crestmont */
+#define INTEL_PANTHERLAKE_L     IFM(6, 0xCC) /* Cougar Cove / Darkmont */

 #define INTEL_WILDCATLAKE_L     IFM(6, 0xD5)

-#define INTEL_NOVALAKE          IFM(18, 0x01)
-#define INTEL_NOVALAKE_L        IFM(18, 0x03)
+#define INTEL_NOVALAKE          IFM(18, 0x01) /* Coyote Cove / Arctic Wolf */
+#define INTEL_NOVALAKE_L        IFM(18, 0x03) /* Coyote Cove / Arctic Wolf */

 /* "Small Core" Processors (Atom/E-Core) */

+3
arch/x86/include/asm/page_64.h
···
 void clear_page_orig(void *page);
 void clear_page_rep(void *page);
 void clear_page_erms(void *page);
+KCFI_REFERENCE(clear_page_orig);
+KCFI_REFERENCE(clear_page_rep);
+KCFI_REFERENCE(clear_page_erms);

 static inline void clear_page(void *page)
 {
+4
arch/x86/include/asm/runtime-const.h
···
 #ifndef _ASM_RUNTIME_CONST_H
 #define _ASM_RUNTIME_CONST_H

+#ifdef MODULE
+#error "Cannot use runtime-const infrastructure from modules"
+#endif
+
 #ifdef __ASSEMBLY__

 .macro RUNTIME_CONST_PTR sym reg
+5 -5
arch/x86/include/asm/uaccess_64.h
···
 #include <asm/cpufeatures.h>
 #include <asm/page.h>
 #include <asm/percpu.h>
-#include <asm/runtime-const.h>

-/*
- * Virtual variable: there's no actual backing store for this,
- * it can purely be used as 'runtime_const_ptr(USER_PTR_MAX)'
- */
+#ifdef MODULE
+#define runtime_const_ptr(sym) (sym)
+#else
+#include <asm/runtime-const.h>
+#endif
 extern unsigned long USER_PTR_MAX;

 #ifdef CONFIG_ADDRESS_MASKING
+51 -99
arch/x86/kernel/amd_node.c
···
         return pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(AMD_NODE0_PCI_SLOT + node, func));
 }

-#define DF_BLK_INST_CNT         0x040
-#define DF_CFG_ADDR_CNTL_LEGACY 0x084
-#define DF_CFG_ADDR_CNTL_DF4    0xC04
-
-#define DF_MAJOR_REVISION       GENMASK(27, 24)
-
-static u16 get_cfg_addr_cntl_offset(struct pci_dev *df_f0)
-{
-        u32 reg;
-
-        /*
-         * Revision fields added for DF4 and later.
-         *
-         * Major revision of '0' is found pre-DF4. Field is Read-as-Zero.
-         */
-        if (pci_read_config_dword(df_f0, DF_BLK_INST_CNT, &reg))
-                return 0;
-
-        if (reg & DF_MAJOR_REVISION)
-                return DF_CFG_ADDR_CNTL_DF4;
-
-        return DF_CFG_ADDR_CNTL_LEGACY;
-}
-
-struct pci_dev *amd_node_get_root(u16 node)
-{
-        struct pci_dev *root;
-        u16 cntl_off;
-        u8 bus;
-
-        if (!cpu_feature_enabled(X86_FEATURE_ZEN))
-                return NULL;
-
-        /*
-         * D18F0xXXX [Config Address Control] (DF::CfgAddressCntl)
-         * Bits [7:0] (SecBusNum) holds the bus number of the root device for
-         * this Data Fabric instance. The segment, device, and function will be 0.
-         */
-        struct pci_dev *df_f0 __free(pci_dev_put) = amd_node_get_func(node, 0);
-        if (!df_f0)
-                return NULL;
-
-        cntl_off = get_cfg_addr_cntl_offset(df_f0);
-        if (!cntl_off)
-                return NULL;
-
-        if (pci_read_config_byte(df_f0, cntl_off, &bus))
-                return NULL;
-
-        /* Grab the pointer for the actual root device instance. */
-        root = pci_get_domain_bus_and_slot(0, bus, 0);
-
-        pci_dbg(root, "is root for AMD node %u\n", node);
-        return root;
-}
-
 static struct pci_dev **amd_roots;

 /* Protect the PCI config register pairs used for SMN. */
···
 DEFINE_SHOW_STORE_ATTRIBUTE(smn_address);
 DEFINE_SHOW_STORE_ATTRIBUTE(smn_value);

-static int amd_cache_roots(void)
+static struct pci_dev *get_next_root(struct pci_dev *root)
 {
-        u16 node, num_nodes = amd_num_nodes();
-
-        amd_roots = kcalloc(num_nodes, sizeof(*amd_roots), GFP_KERNEL);
-        if (!amd_roots)
-                return -ENOMEM;
-
-        for (node = 0; node < num_nodes; node++)
-                amd_roots[node] = amd_node_get_root(node);
-
-        return 0;
-}
-
-static int reserve_root_config_spaces(void)
-{
-        struct pci_dev *root = NULL;
-        struct pci_bus *bus = NULL;
-
-        while ((bus = pci_find_next_bus(bus))) {
-                /* Root device is Device 0 Function 0 on each Primary Bus. */
-                root = pci_get_slot(bus, 0);
-                if (!root)
+        while ((root = pci_get_class(PCI_CLASS_BRIDGE_HOST << 8, root))) {
+                /* Root device is Device 0 Function 0. */
+                if (root->devfn)
                         continue;

                 if (root->vendor != PCI_VENDOR_ID_AMD &&
                     root->vendor != PCI_VENDOR_ID_HYGON)
                         continue;

-                pci_dbg(root, "Reserving PCI config space\n");
-
-                /*
-                 * There are a few SMN index/data pairs and other registers
-                 * that shouldn't be accessed by user space.
-                 * So reserve the entire PCI config space for simplicity rather
-                 * than covering specific registers piecemeal.
-                 */
-                if (!pci_request_config_region_exclusive(root, 0, PCI_CFG_SPACE_SIZE, NULL)) {
-                        pci_err(root, "Failed to reserve config space\n");
-                        return -EEXIST;
-                }
+                break;
         }

-        smn_exclusive = true;
-        return 0;
+        return root;
 }

 static bool enable_dfs;
···

 static int __init amd_smn_init(void)
 {
-        int err;
+        u16 count, num_roots, roots_per_node, node, num_nodes;
+        struct pci_dev *root;

         if (!cpu_feature_enabled(X86_FEATURE_ZEN))
                 return 0;
···
         if (amd_roots)
                 return 0;

-        err = amd_cache_roots();
-        if (err)
-                return err;
+        num_roots = 0;
+        root = NULL;
+        while ((root = get_next_root(root))) {
+                pci_dbg(root, "Reserving PCI config space\n");

-        err = reserve_root_config_spaces();
-        if (err)
-                return err;
+                /*
+                 * There are a few SMN index/data pairs and other registers
+                 * that shouldn't be accessed by user space. So reserve the
+                 * entire PCI config space for simplicity rather than covering
+                 * specific registers piecemeal.
+                 */
+                if (!pci_request_config_region_exclusive(root, 0, PCI_CFG_SPACE_SIZE, NULL)) {
+                        pci_err(root, "Failed to reserve config space\n");
+                        return -EEXIST;
+                }
+
+                num_roots++;
+        }
+
+        pr_debug("Found %d AMD root devices\n", num_roots);
+
+        if (!num_roots)
+                return -ENODEV;
+
+        num_nodes = amd_num_nodes();
+        amd_roots = kcalloc(num_nodes, sizeof(*amd_roots), GFP_KERNEL);
+        if (!amd_roots)
+                return -ENOMEM;
+
+        roots_per_node = num_roots / num_nodes;
+
+        count = 0;
+        node = 0;
+        root = NULL;
+        while (node < num_nodes && (root = get_next_root(root))) {
+                /* Use one root for each node and skip the rest. */
+                if (count++ % roots_per_node)
+                        continue;
+
+                pci_dbg(root, "is root for AMD node %u\n", node);
+                amd_roots[node++] = root;
+        }

         if (enable_dfs) {
                 debugfs_dir = debugfs_create_dir("amd_smn", arch_debugfs_dir);
···
                 debugfs_create_file("address", 0600, debugfs_dir, NULL, &smn_address_fops);
                 debugfs_create_file("value", 0600, debugfs_dir, NULL, &smn_value_fops);
         }
+
+        smn_exclusive = true;

         return 0;
 }
+12 -1
arch/x86/kernel/cpu/amd.c
···
                 setup_force_cpu_cap(X86_FEATURE_ZEN5);
                 break;
         case 0x50 ... 0x5f:
-        case 0x90 ... 0xaf:
+        case 0x80 ... 0xaf:
         case 0xc0 ... 0xcf:
                 setup_force_cpu_cap(X86_FEATURE_ZEN6);
                 break;
···
         }
 }

+static const struct x86_cpu_id zen5_rdseed_microcode[] = {
+        ZEN_MODEL_STEP_UCODE(0x1a, 0x02, 0x1, 0x0b00215a),
+        ZEN_MODEL_STEP_UCODE(0x1a, 0x11, 0x0, 0x0b101054),
+        {},
+};
+
 static void init_amd_zen5(struct cpuinfo_x86 *c)
 {
+        if (!x86_match_min_microcode_rev(zen5_rdseed_microcode)) {
+                clear_cpu_cap(c, X86_FEATURE_RDSEED);
+                msr_clear_bit(MSR_AMD64_CPUID_FN_7, 18);
+                pr_emerg_once("RDSEED32 is broken. Disabling the corresponding CPUID bit.\n");
+        }
 }

 static void init_amd(struct cpuinfo_x86 *c)
+5 -1
arch/x86/kernel/cpu/common.c
···
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);

+/* Used for modules: built-in code uses runtime constants */
+unsigned long USER_PTR_MAX;
+EXPORT_SYMBOL(USER_PTR_MAX);
+
 u32 elf_hwcap2 __read_mostly;

 /* Number of siblings per CPU package */
···
         alternative_instructions();

         if (IS_ENABLED(CONFIG_X86_64)) {
-                unsigned long USER_PTR_MAX = TASK_SIZE_MAX;
+                USER_PTR_MAX = TASK_SIZE_MAX;

                 /*
                  * Enable this when LAM is gated on LASS support
+21 -1
arch/x86/kernel/cpu/microcode/amd.c
···
         case 0xaa001: return cur_rev <= 0xaa00116; break;
         case 0xaa002: return cur_rev <= 0xaa00218; break;
         case 0xb0021: return cur_rev <= 0xb002146; break;
+        case 0xb0081: return cur_rev <= 0xb008111; break;
         case 0xb1010: return cur_rev <= 0xb101046; break;
         case 0xb2040: return cur_rev <= 0xb204031; break;
         case 0xb4040: return cur_rev <= 0xb404031; break;
         case 0xb6000: return cur_rev <= 0xb600031; break;
+        case 0xb6080: return cur_rev <= 0xb608031; break;
         case 0xb7000: return cur_rev <= 0xb700031; break;
         default: break;
         }
···
         return true;
 }

+static bool cpu_has_entrysign(void)
+{
+        unsigned int fam   = x86_family(bsp_cpuid_1_eax);
+        unsigned int model = x86_model(bsp_cpuid_1_eax);
+
+        if (fam == 0x17 || fam == 0x19)
+                return true;
+
+        if (fam == 0x1a) {
+                if (model <= 0x2f ||
+                    (0x40 <= model && model <= 0x4f) ||
+                    (0x60 <= model && model <= 0x6f))
+                        return true;
+        }
+
+        return false;
+}
+
 static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsigned int len)
 {
         struct patch_digest *pd = NULL;
         u8 digest[SHA256_DIGEST_SIZE];
         int i;

-        if (x86_family(bsp_cpuid_1_eax) < 0x17)
+        if (!cpu_has_entrysign())
                 return true;

         if (!need_sha_check(cur_rev))
+3
arch/x86/kernel/fpu/core.c
···
             !fpregs_state_valid(fpu, smp_processor_id()))
                 os_xrstor_supervisor(fpu->fpstate);

+        /* Ensure XFD state is in sync before reloading XSTATE */
+        xfd_update_state(fpu->fpstate);
+
         /* Reset user states in registers. */
         restore_fpregs_from_init_fpstate(XFEATURE_MASK_USER_RESTORE);

+1 -1
arch/x86/net/bpf_jit_comp.c
···
                 /* Update cleanup_addr */
                 ctx->cleanup_addr = proglen;
                 if (bpf_prog_was_classic(bpf_prog) &&
-                    !capable(CAP_SYS_ADMIN)) {
+                    !ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN)) {
                         u8 *ip = image + addrs[i - 1];

                         if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog))
+1 -1
block/blk-crypto.c
···
         }

         if (!bio_crypt_check_alignment(bio)) {
-                bio->bi_status = BLK_STS_IOERR;
+                bio->bi_status = BLK_STS_INVAL;
                 goto fail;
         }

+3
drivers/acpi/acpi_mrrm.c
···
         if (!mrrm)
                 return -ENODEV;

+        if (mrrm->header.revision != 1)
+                return -EINVAL;
+
         if (mrrm->flags & ACPI_MRRM_FLAGS_REGION_ASSIGNMENT_OS)
                 return -EOPNOTSUPP;

+3 -1
drivers/acpi/acpi_video.c
···
         struct acpi_video_device *dev;

         mutex_lock(&video->device_list_lock);
-        list_for_each_entry(dev, &video->video_device_list, entry)
+        list_for_each_entry(dev, &video->video_device_list, entry) {
                 acpi_video_dev_remove_notify_handler(dev);
+                cancel_delayed_work_sync(&dev->switch_brightness_work);
+        }
         mutex_unlock(&video->device_list_lock);

         acpi_video_bus_stop_devices(video);
+3 -1
drivers/acpi/button.c
···

         input_set_drvdata(input, device);
         error = input_register_device(input);
-        if (error)
+        if (error) {
+                input_free_device(input);
                 goto err_remove_fs;
+        }

         switch (device->device_type) {
         case ACPI_BUS_TYPE_POWER_BUTTON:
+1 -1
drivers/acpi/cppc_acpi.c
···
         }

         /*
-         * Disregard _CPC if the number of entries in the return pachage is not
+         * Disregard _CPC if the number of entries in the return package is not
          * as expected, but support future revisions being proper supersets of
          * the v3 and only causing more entries to be returned by _CPC.
          */
+4 -3
drivers/acpi/fan.h
···
 };

 struct acpi_fan {
+        acpi_handle handle;
         bool acpi4;
         bool has_fst;
         struct acpi_fan_fif fif;
···
         struct device_attribute fine_grain_control;
 };

-int acpi_fan_get_fst(struct acpi_device *device, struct acpi_fan_fst *fst);
+int acpi_fan_get_fst(acpi_handle handle, struct acpi_fan_fst *fst);
 int acpi_fan_create_attributes(struct acpi_device *device);
 void acpi_fan_delete_attributes(struct acpi_device *device);

 #if IS_REACHABLE(CONFIG_HWMON)
-int devm_acpi_fan_create_hwmon(struct acpi_device *device);
+int devm_acpi_fan_create_hwmon(struct device *dev);
 #else
-static inline int devm_acpi_fan_create_hwmon(struct acpi_device *device) { return 0; };
+static inline int devm_acpi_fan_create_hwmon(struct device *dev) { return 0; };
 #endif

 #endif
+1 -1
drivers/acpi/fan_attr.c
···
         struct acpi_fan_fst fst;
         int status;

-        status = acpi_fan_get_fst(acpi_dev, &fst);
+        status = acpi_fan_get_fst(acpi_dev->handle, &fst);
         if (status)
                 return status;

+23 -13
drivers/acpi/fan_core.c
···
         return 0;
 }

-int acpi_fan_get_fst(struct acpi_device *device, struct acpi_fan_fst *fst)
+int acpi_fan_get_fst(acpi_handle handle, struct acpi_fan_fst *fst)
 {
         struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
         union acpi_object *obj;
         acpi_status status;
         int ret = 0;

-        status = acpi_evaluate_object(device->handle, "_FST", NULL, &buffer);
-        if (ACPI_FAILURE(status)) {
-                dev_err(&device->dev, "Get fan state failed\n");
-                return -ENODEV;
-        }
+        status = acpi_evaluate_object(handle, "_FST", NULL, &buffer);
+        if (ACPI_FAILURE(status))
+                return -EIO;

         obj = buffer.pointer;
-        if (!obj || obj->type != ACPI_TYPE_PACKAGE ||
-            obj->package.count != 3 ||
-            obj->package.elements[1].type != ACPI_TYPE_INTEGER) {
-                dev_err(&device->dev, "Invalid _FST data\n");
-                ret = -EINVAL;
+        if (!obj)
+                return -ENODATA;
+
+        if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count != 3) {
+                ret = -EPROTO;
+                goto err;
+        }
+
+        if (obj->package.elements[0].type != ACPI_TYPE_INTEGER ||
+            obj->package.elements[1].type != ACPI_TYPE_INTEGER ||
+            obj->package.elements[2].type != ACPI_TYPE_INTEGER) {
+                ret = -EPROTO;
                 goto err;
         }

···
         struct acpi_fan_fst fst;
         int status, i;

-        status = acpi_fan_get_fst(device, &fst);
+        status = acpi_fan_get_fst(device->handle, &fst);
         if (status)
                 return status;

···
         struct acpi_device *device = ACPI_COMPANION(&pdev->dev);
         char *name;

+        if (!device)
+                return -ENODEV;
+
         fan = devm_kzalloc(&pdev->dev, sizeof(*fan), GFP_KERNEL);
         if (!fan) {
                 dev_err(&device->dev, "No memory for fan\n");
                 return -ENOMEM;
         }
+
+        fan->handle = device->handle;
         device->driver_data = fan;
         platform_set_drvdata(pdev, fan);

···
         }

         if (fan->has_fst) {
-                result = devm_acpi_fan_create_hwmon(device);
+                result = devm_acpi_fan_create_hwmon(&pdev->dev);
                 if (result)
                         return result;

+5 -6
drivers/acpi/fan_hwmon.c
···
 static int acpi_fan_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
                                int channel, long *val)
 {
-        struct acpi_device *adev = to_acpi_device(dev->parent);
         struct acpi_fan *fan = dev_get_drvdata(dev);
         struct acpi_fan_fps *fps;
         struct acpi_fan_fst fst;
         int ret;

-        ret = acpi_fan_get_fst(adev, &fst);
+        ret = acpi_fan_get_fst(fan->handle, &fst);
         if (ret < 0)
                 return ret;

···
         .info = acpi_fan_hwmon_info,
 };

-int devm_acpi_fan_create_hwmon(struct acpi_device *device)
+int devm_acpi_fan_create_hwmon(struct device *dev)
 {
-        struct acpi_fan *fan = acpi_driver_data(device);
+        struct acpi_fan *fan = dev_get_drvdata(dev);
         struct device *hdev;

-        hdev = devm_hwmon_device_register_with_info(&device->dev, "acpi_fan", fan,
-                                                    &acpi_fan_hwmon_chip_info, NULL);
+        hdev = devm_hwmon_device_register_with_info(dev, "acpi_fan", fan, &acpi_fan_hwmon_chip_info,
+                                                    NULL);
         return PTR_ERR_OR_ZERO(hdev);
 }
+1 -1
drivers/acpi/sbs.c
···
         if (result)
                 return result;

-        battery->present = state & (1 << battery->id);
+        battery->present = !!(state & (1 << battery->id));
         if (!battery->present)
                 return 0;

+1 -1
drivers/acpi/spcr.c
···
          * Baud Rate field. If this field is zero or not present, Configured
          * Baud Rate is used.
          */
-        if (table->precise_baudrate)
+        if (table->header.revision >= 4 && table->precise_baudrate)
                 baud_rate = table->precise_baudrate;
         else switch (table->baud_rate) {
         case 0:
+2 -4
drivers/base/regmap/regmap-slimbus.c
···
         if (IS_ERR(bus))
                 return ERR_CAST(bus);

-        return __regmap_init(&slimbus->dev, bus, &slimbus->dev, config,
-                             lock_key, lock_name);
+        return __regmap_init(&slimbus->dev, bus, slimbus, config, lock_key, lock_name);
 }
 EXPORT_SYMBOL_GPL(__regmap_init_slimbus);

···
         if (IS_ERR(bus))
                 return ERR_CAST(bus);

-        return __devm_regmap_init(&slimbus->dev, bus, &slimbus, config,
-                                  lock_key, lock_name);
+        return __devm_regmap_init(&slimbus->dev, bus, slimbus, config, lock_key, lock_name);
 }
 EXPORT_SYMBOL_GPL(__devm_regmap_init_slimbus);

+24 -6
drivers/base/swnode.c
···
         ref_array = prop->pointer;
         ref = &ref_array[index];

-        refnode = software_node_fwnode(ref->node);
+        /*
+         * A software node can reference other software nodes or firmware
+         * nodes (which are the abstraction layer sitting on top of them).
+         * This is done to ensure we can create references to static software
+         * nodes before they're registered with the firmware node framework.
+         * At the time the reference is being resolved, we expect the swnodes
+         * in question to already have been registered and to be backed by
+         * a firmware node. This is why we use the fwnode API below to read the
+         * relevant properties and bump the reference count.
+         */
+
+        if (ref->swnode)
+                refnode = software_node_fwnode(ref->swnode);
+        else if (ref->fwnode)
+                refnode = ref->fwnode;
+        else
+                return -EINVAL;
+
         if (!refnode)
                 return -ENOENT;

         if (nargs_prop) {
-                error = property_entry_read_int_array(ref->node->properties,
-                                                      nargs_prop, sizeof(u32),
-                                                      &nargs_prop_val, 1);
+                error = fwnode_property_read_u32(refnode, nargs_prop, &nargs_prop_val);
                 if (error)
                         return error;

···
         if (!args)
                 return 0;

-        args->fwnode = software_node_get(refnode);
+        args->fwnode = fwnode_handle_get(refnode);
         args->nargs = nargs;

         for (i = 0; i < nargs; i++)
···

         ref = prop->pointer;

-        return software_node_get(software_node_fwnode(ref[0].node));
+        if (!ref->swnode)
+                return NULL;
+
+        return software_node_get(software_node_fwnode(ref->swnode));
 }

 static struct fwnode_handle *
+6
drivers/bcma/main.c
···
         int err;

         list_for_each_entry(core, &bus->cores, list) {
+                struct device_node *np;
+
                 /* We support that core ourselves */
                 switch (core->id.id) {
                 case BCMA_CORE_4706_CHIPCOMMON:
···

                 /* Early cores were already registered */
                 if (bcma_is_core_needed_early(core->id.id))
+                        continue;
+
+                np = core->dev.of_node;
+                if (np && !of_device_is_available(np))
                         continue;

                 /* Only first GMAC core on BCM4706 is connected and working */
+1
drivers/block/null_blk/main.c
···
                 .logical_block_size     = dev->blocksize,
                 .physical_block_size    = dev->blocksize,
                 .max_hw_sectors         = dev->max_sectors,
+                .dma_alignment          = dev->blocksize - 1,
         };

         struct nullb *nullb;
+3 -1
drivers/bluetooth/bpa10x.c
···
         struct usb_anchor rx_anchor;

         struct sk_buff *rx_skb[2];
+        struct hci_uart hu;
 };

 static void bpa10x_tx_complete(struct urb *urb)
···
         if (urb->status == 0) {
                 bool idx = usb_pipebulk(urb->pipe);

-                data->rx_skb[idx] = h4_recv_buf(hdev, data->rx_skb[idx],
+                data->rx_skb[idx] = h4_recv_buf(&data->hu, data->rx_skb[idx],
                                                 urb->transfer_buffer,
                                                 urb->actual_length,
                                                 bpa10x_recv_pkts,
···
         hci_set_drvdata(hdev, data);

         data->hdev = hdev;
+        data->hu.hdev = hdev;

         SET_HCIDEV_DEV(hdev, &intf->dev);

+6 -5
drivers/bluetooth/btintel_pcie.c
···
         if (intr_hw & BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP1)
                 btintel_pcie_msix_gp1_handler(data);

-        /* This interrupt is triggered by the firmware after updating
-         * boot_stage register and image_response register
-         */
-        if (intr_hw & BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0)
-                btintel_pcie_msix_gp0_handler(data);

         /* For TX */
         if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0) {
···
                 if (!btintel_pcie_is_txackq_empty(data))
                         btintel_pcie_msix_tx_handle(data);
         }
+
+        /* This interrupt is triggered by the firmware after updating
+         * boot_stage register and image_response register
+         */
+        if (intr_hw & BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0)
+                btintel_pcie_msix_gp0_handler(data);

         /*
          * Before sending the interrupt the HW disables it to prevent a nested
+12
drivers/bluetooth/btmtksdio.c
···

         sdio_claim_host(bdev->func);

+        /* set drv_pmctrl if BT is closed before doing reset */
+        if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state)) {
+                sdio_enable_func(bdev->func);
+                btmtksdio_drv_pmctrl(bdev);
+        }
+
         sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, NULL);
         skb_queue_purge(&bdev->txq);
         cancel_work_sync(&bdev->txrx_work);
···
         if (err < 0) {
                 bt_dev_err(hdev, "Failed to reset (%d)", err);
                 goto err;
         }
+
+        /* set fw_pmctrl back if BT is closed after doing reset */
+        if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state)) {
+                btmtksdio_fw_pmctrl(bdev);
+                sdio_disable_func(bdev->func);
+        }

         clear_bit(BTMTKSDIO_PATCH_ENABLED, &bdev->tx_state);
+3 -1
drivers/bluetooth/btmtkuart.c
···
         u16     stp_dlen;

         const struct btmtkuart_data *data;
+        struct hci_uart hu;
 };

 #define btmtkuart_is_standalone(bdev)   \
···
         sz_left -= adv;
         p_left += adv;

-        bdev->rx_skb = h4_recv_buf(bdev->hdev, bdev->rx_skb, p_h4,
+        bdev->rx_skb = h4_recv_buf(&bdev->hu, bdev->rx_skb, p_h4,
                                    sz_h4, mtk_recv_pkts,
                                    ARRAY_SIZE(mtk_recv_pkts));
         if (IS_ERR(bdev->rx_skb)) {
···
         }

         bdev->hdev = hdev;
+        bdev->hu.hdev = hdev;

         hdev->bus = HCI_UART;
         hci_set_drvdata(hdev, bdev);
+3 -1
drivers/bluetooth/btnxpuart.c
···
         struct ps_data psdata;
         struct btnxpuart_data *nxp_data;
         struct reset_control *pdn;
+        struct hci_uart hu;
 };

 #define NXP_V1_FW_REQ_PKT       0xa5
···

         ps_start_timer(nxpdev);

-        nxpdev->rx_skb = h4_recv_buf(nxpdev->hdev, nxpdev->rx_skb, data, count,
+        nxpdev->rx_skb = h4_recv_buf(&nxpdev->hu, nxpdev->rx_skb, data, count,
                                      nxp_recv_pkts, ARRAY_SIZE(nxp_recv_pkts));
         if (IS_ERR(nxpdev->rx_skb)) {
                 int err = PTR_ERR(nxpdev->rx_skb);
···
         reset_control_deassert(nxpdev->pdn);

         nxpdev->hdev = hdev;
+        nxpdev->hu.hdev = hdev;

         hdev->bus = HCI_UART;
         hci_set_drvdata(hdev, nxpdev);
+3 -1
drivers/bluetooth/btrtl.c
···
                 len += entry->len;
         }

-        if (!len)
+        if (!len) {
+                kvfree(ptr);
                 return -EPERM;
+        }

         *_buf = ptr;
         return len;
+1 -1
drivers/bluetooth/hci_ag6xx.c
···
         if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
                 return -EUNATCH;

-        ag6xx->rx_skb = h4_recv_buf(hu->hdev, ag6xx->rx_skb, data, count,
+        ag6xx->rx_skb = h4_recv_buf(hu, ag6xx->rx_skb, data, count,
                                     ag6xx_recv_pkts,
                                     ARRAY_SIZE(ag6xx_recv_pkts));
         if (IS_ERR(ag6xx->rx_skb)) {
+1 -1
drivers/bluetooth/hci_aml.c
···
         struct aml_data *aml_data = hu->priv;
         int err;

-        aml_data->rx_skb = h4_recv_buf(hu->hdev, aml_data->rx_skb, data, count,
+        aml_data->rx_skb = h4_recv_buf(hu, aml_data->rx_skb, data, count,
                                        aml_recv_pkts,
                                        ARRAY_SIZE(aml_recv_pkts));
         if (IS_ERR(aml_data->rx_skb)) {
+1 -1
drivers/bluetooth/hci_ath.c
···
 {
         struct ath_struct *ath = hu->priv;

-        ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count,
+        ath->rx_skb = h4_recv_buf(hu, ath->rx_skb, data, count,
                                   ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts));
         if (IS_ERR(ath->rx_skb)) {
                 int err = PTR_ERR(ath->rx_skb);
+1 -1
drivers/bluetooth/hci_bcm.c
···
         if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
                 return -EUNATCH;

-        bcm->rx_skb = h4_recv_buf(hu->hdev, bcm->rx_skb, data, count,
+        bcm->rx_skb = h4_recv_buf(hu, bcm->rx_skb, data, count,
                                   bcm_recv_pkts, ARRAY_SIZE(bcm_recv_pkts));
         if (IS_ERR(bcm->rx_skb)) {
                 int err = PTR_ERR(bcm->rx_skb);
+3 -3
drivers/bluetooth/hci_h4.c
···
         if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
                 return -EUNATCH;

-        h4->rx_skb = h4_recv_buf(hu->hdev, h4->rx_skb, data, count,
+        h4->rx_skb = h4_recv_buf(hu, h4->rx_skb, data, count,
                                  h4_recv_pkts, ARRAY_SIZE(h4_recv_pkts));
         if (IS_ERR(h4->rx_skb)) {
                 int err = PTR_ERR(h4->rx_skb);
···
         return hci_uart_unregister_proto(&h4p);
 }

-struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
+struct sk_buff *h4_recv_buf(struct hci_uart *hu, struct sk_buff *skb,
                             const unsigned char *buffer, int count,
                             const struct h4_recv_pkt *pkts, int pkts_count)
 {
-        struct hci_uart *hu = hci_get_drvdata(hdev);
         u8 alignment = hu->alignment ? hu->alignment : 1;
+        struct hci_dev *hdev = hu->hdev;

         /* Check for error from previous call */
         if (IS_ERR(skb))
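
For the UART protocol drivers below this change is mechanical, since they already hold a struct hci_uart. The non-UART users in this set (bpa10x, btmtkuart, btnxpuart) instead embed a minimal hci_uart and point its hdev at their hci_dev, as their hunks above show. A condensed sketch of that pattern; the my_* names are illustrative, not from the tree:

    struct my_dev {
            struct hci_dev *hdev;
            struct sk_buff *rx_skb;
            struct hci_uart hu;     /* only hu.hdev is initialized: hu.hdev = hdev */
    };

    static const struct h4_recv_pkt my_recv_pkts[] = {
            { H4_RECV_ACL,   .recv = hci_recv_frame },
            { H4_RECV_SCO,   .recv = hci_recv_frame },
            { H4_RECV_EVENT, .recv = hci_recv_frame },
    };

    static int my_recv(struct my_dev *dev, const u8 *data, int count)
    {
            dev->rx_skb = h4_recv_buf(&dev->hu, dev->rx_skb, data, count,
                                      my_recv_pkts, ARRAY_SIZE(my_recv_pkts));
            if (IS_ERR(dev->rx_skb)) {
                    int err = PTR_ERR(dev->rx_skb);

                    dev->rx_skb = NULL;
                    return err;
            }
            return 0;
    }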
+1 -1
drivers/bluetooth/hci_intel.c
···
         if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
                 return -EUNATCH;

-        intel->rx_skb = h4_recv_buf(hu->hdev, intel->rx_skb, data, count,
+        intel->rx_skb = h4_recv_buf(hu, intel->rx_skb, data, count,
                                     intel_recv_pkts,
                                     ARRAY_SIZE(intel_recv_pkts));
         if (IS_ERR(intel->rx_skb)) {
+1 -1
drivers/bluetooth/hci_ll.c
···
         if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
                 return -EUNATCH;

-        ll->rx_skb = h4_recv_buf(hu->hdev, ll->rx_skb, data, count,
+        ll->rx_skb = h4_recv_buf(hu, ll->rx_skb, data, count,
                                  ll_recv_pkts, ARRAY_SIZE(ll_recv_pkts));
         if (IS_ERR(ll->rx_skb)) {
                 int err = PTR_ERR(ll->rx_skb);
+3 -3
drivers/bluetooth/hci_mrvl.c
···
             !test_bit(STATE_FW_LOADED, &mrvl->flags))
                 return count;

-        mrvl->rx_skb = h4_recv_buf(hu->hdev, mrvl->rx_skb, data, count,
-                                    mrvl_recv_pkts,
-                                    ARRAY_SIZE(mrvl_recv_pkts));
+        mrvl->rx_skb = h4_recv_buf(hu, mrvl->rx_skb, data, count,
+                                   mrvl_recv_pkts,
+                                   ARRAY_SIZE(mrvl_recv_pkts));
         if (IS_ERR(mrvl->rx_skb)) {
                 int err = PTR_ERR(mrvl->rx_skb);
                 bt_dev_err(hu->hdev, "Frame reassembly failed (%d)", err);
+2 -2
drivers/bluetooth/hci_nokia.c
···
         if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
                 return -EUNATCH;

-        btdev->rx_skb = h4_recv_buf(hu->hdev, btdev->rx_skb, data, count,
-                                    nokia_recv_pkts, ARRAY_SIZE(nokia_recv_pkts));
+        btdev->rx_skb = h4_recv_buf(hu, btdev->rx_skb, data, count,
+                                    nokia_recv_pkts, ARRAY_SIZE(nokia_recv_pkts));
         if (IS_ERR(btdev->rx_skb)) {
                 err = PTR_ERR(btdev->rx_skb);
                 dev_err(dev, "Frame reassembly failed (%d)", err);
+1 -1
drivers/bluetooth/hci_qca.c
··· 1277 1277 if (!test_bit(HCI_UART_REGISTERED, &hu->flags)) 1278 1278 return -EUNATCH; 1279 1279 1280 - qca->rx_skb = h4_recv_buf(hu->hdev, qca->rx_skb, data, count, 1280 + qca->rx_skb = h4_recv_buf(hu, qca->rx_skb, data, count, 1281 1281 qca_recv_pkts, ARRAY_SIZE(qca_recv_pkts)); 1282 1282 if (IS_ERR(qca->rx_skb)) { 1283 1283 int err = PTR_ERR(qca->rx_skb);
+1 -1
drivers/bluetooth/hci_uart.h
··· 162 162 int h4_init(void); 163 163 int h4_deinit(void); 164 164 165 - struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb, 165 + struct sk_buff *h4_recv_buf(struct hci_uart *hu, struct sk_buff *skb, 166 166 const unsigned char *buffer, int count, 167 167 const struct h4_recv_pkt *pkts, int pkts_count); 168 168 #endif
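Note on the h4_recv_buf() rework above: callers now hand in the hci_uart itself rather than hu->hdev, so the helper can read hu->alignment (and reach the hci_dev via hu->hdev) without a hci_get_drvdata() round-trip. A minimal sketch of a conforming ->recv() hook after this change — foo_data and foo_recv_pkts are hypothetical stand-ins for a driver's own state and packet table, mirroring the converted drivers above:

    static int foo_recv(struct hci_uart *hu, const void *data, int count)
    {
    	struct foo_data *foo = hu->priv;

    	if (!test_bit(HCI_UART_REGISTERED, &hu->flags))
    		return -EUNATCH;

    	/* Pass the hci_uart; h4_recv_buf() derives hdev and alignment. */
    	foo->rx_skb = h4_recv_buf(hu, foo->rx_skb, data, count,
    				  foo_recv_pkts, ARRAY_SIZE(foo_recv_pkts));
    	if (IS_ERR(foo->rx_skb)) {
    		int err = PTR_ERR(foo->rx_skb);

    		bt_dev_err(hu->hdev, "Frame reassembly failed (%d)", err);
    		foo->rx_skb = NULL;
    		return err;
    	}

    	return count;
    }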
+3 -2
drivers/cpuidle/cpuidle-riscv-sbi.c
··· 18 18 #include <linux/module.h> 19 19 #include <linux/of.h> 20 20 #include <linux/slab.h> 21 + #include <linux/string.h> 21 22 #include <linux/platform_device.h> 22 23 #include <linux/pm_domain.h> 23 24 #include <linux/pm_runtime.h> ··· 304 303 drv->states[0].exit_latency = 1; 305 304 drv->states[0].target_residency = 1; 306 305 drv->states[0].power_usage = UINT_MAX; 307 - strcpy(drv->states[0].name, "WFI"); 308 - strcpy(drv->states[0].desc, "RISC-V WFI"); 306 + strscpy(drv->states[0].name, "WFI"); 307 + strscpy(drv->states[0].desc, "RISC-V WFI"); 309 308 310 309 /* 311 310 * If no DT idle states are detected (ret == 0) let the driver
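The strcpy() → strscpy() switch above is the usual hardening: strscpy() bounds the copy by the destination buffer and always NUL-terminates. With the two-argument form used here, the bound is inferred from the array type, so it only works when the destination is a real array rather than a pointer. A short sketch of the semantics (CPUIDLE_NAME_LEN sizing taken from the cpuidle core):

    char name[CPUIDLE_NAME_LEN];

    /* Copies at most sizeof(name) - 1 bytes and NUL-terminates;
     * returns -E2BIG if the source had to be truncated. */
    if (strscpy(name, "RISC-V WFI") < 0)
    	pr_warn("idle state name truncated\n");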
+5 -2
drivers/cpuidle/governors/menu.c
··· 318 318 319 319 /* 320 320 * Use a physical idle state, not busy polling, unless a timer 321 - * is going to trigger soon enough. 321 + * is going to trigger soon enough or the exit latency of the 322 + * idle state in question is greater than the predicted idle 323 + * duration. 322 324 */ 323 325 if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) && 324 - s->target_residency_ns <= data->next_timer_ns) { 326 + s->target_residency_ns <= data->next_timer_ns && 327 + s->exit_latency_ns <= predicted_ns) { 325 328 predicted_ns = s->target_residency_ns; 326 329 idx = i; 327 330 break;
-2
drivers/crypto/aspeed/aspeed-acry.c
··· 787 787 err_engine_rsa_start: 788 788 crypto_engine_exit(acry_dev->crypt_engine_rsa); 789 789 clk_exit: 790 - clk_disable_unprepare(acry_dev->clk); 791 790 792 791 return rc; 793 792 } ··· 798 799 aspeed_acry_unregister(acry_dev); 799 800 crypto_engine_exit(acry_dev->crypt_engine_rsa); 800 801 tasklet_kill(&acry_dev->done_task); 801 - clk_disable_unprepare(acry_dev->clk); 802 802 } 803 803 804 804 MODULE_DEVICE_TABLE(of, aspeed_acry_of_matches);
+1 -1
drivers/dma-buf/dma-fence.c
··· 1141 1141 "RCU protection is required for safe access to returned string"); 1142 1142 1143 1143 if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) 1144 - return fence->ops->get_driver_name(fence); 1144 + return fence->ops->get_timeline_name(fence); 1145 1145 else 1146 1146 return "signaled-timeline"; 1147 1147 }
+20 -16
drivers/dpll/dpll_netlink.c
··· 1559 1559 return -EMSGSIZE; 1560 1560 } 1561 1561 pin = dpll_pin_find_from_nlattr(info); 1562 - if (!IS_ERR(pin)) { 1563 - if (!dpll_pin_available(pin)) { 1564 - nlmsg_free(msg); 1565 - return -ENODEV; 1566 - } 1567 - ret = dpll_msg_add_pin_handle(msg, pin); 1568 - if (ret) { 1569 - nlmsg_free(msg); 1570 - return ret; 1571 - } 1562 + if (IS_ERR(pin)) { 1563 + nlmsg_free(msg); 1564 + return PTR_ERR(pin); 1565 + } 1566 + if (!dpll_pin_available(pin)) { 1567 + nlmsg_free(msg); 1568 + return -ENODEV; 1569 + } 1570 + ret = dpll_msg_add_pin_handle(msg, pin); 1571 + if (ret) { 1572 + nlmsg_free(msg); 1573 + return ret; 1572 1574 } 1573 1575 genlmsg_end(msg, hdr); 1574 1576 ··· 1737 1735 } 1738 1736 1739 1737 dpll = dpll_device_find_from_nlattr(info); 1740 - if (!IS_ERR(dpll)) { 1741 - ret = dpll_msg_add_dev_handle(msg, dpll); 1742 - if (ret) { 1743 - nlmsg_free(msg); 1744 - return ret; 1745 - } 1738 + if (IS_ERR(dpll)) { 1739 + nlmsg_free(msg); 1740 + return PTR_ERR(dpll); 1741 + } 1742 + ret = dpll_msg_add_dev_handle(msg, dpll); 1743 + if (ret) { 1744 + nlmsg_free(msg); 1745 + return ret; 1746 1746 } 1747 1747 genlmsg_end(msg, hdr); 1748 1748
+1 -1
drivers/dpll/zl3073x/dpll.c
··· 1904 1904 } 1905 1905 1906 1906 is_diff = zl3073x_out_is_diff(zldev, out); 1907 - is_enabled = zl3073x_out_is_enabled(zldev, out); 1907 + is_enabled = zl3073x_output_pin_is_enabled(zldev, index); 1908 1908 } 1909 1909 1910 1910 /* Skip N-pin if the corresponding input/output is differential */
+1 -1
drivers/edac/versalnet_edac.c
··· 433 433 phys_addr_t pfn; 434 434 int err; 435 435 436 - if (WARN_ON_ONCE(ctl_num > NUM_CONTROLLERS)) 436 + if (WARN_ON_ONCE(ctl_num >= NUM_CONTROLLERS)) 437 437 return; 438 438 439 439 mci = priv->mci[ctl_num];
+1
drivers/gpio/gpio-aggregator.c
··· 723 723 chip->get_multiple = gpio_fwd_get_multiple_locked; 724 724 chip->set = gpio_fwd_set; 725 725 chip->set_multiple = gpio_fwd_set_multiple_locked; 726 + chip->set_config = gpio_fwd_set_config; 726 727 chip->to_irq = gpio_fwd_to_irq; 727 728 chip->base = -1; 728 729 chip->ngpio = ngpios;
-19
drivers/gpio/gpio-tb10x.c
··· 50 50 return ioread32(gpio->base + offs); 51 51 } 52 52 53 - static inline void tb10x_reg_write(struct tb10x_gpio *gpio, unsigned int offs, 54 - u32 val) 55 - { 56 - iowrite32(val, gpio->base + offs); 57 - } 58 - 59 - static inline void tb10x_set_bits(struct tb10x_gpio *gpio, unsigned int offs, 60 - u32 mask, u32 val) 61 - { 62 - u32 r; 63 - 64 - guard(gpio_generic_lock_irqsave)(&gpio->chip); 65 - 66 - r = tb10x_reg_read(gpio, offs); 67 - r = (r & ~mask) | (val & mask); 68 - 69 - tb10x_reg_write(gpio, offs, r); 70 - } 71 - 72 53 static int tb10x_gpio_to_irq(struct gpio_chip *chip, unsigned offset) 73 54 { 74 55 struct tb10x_gpio *tb10x_gpio = gpiochip_get_data(chip);
+3 -2
drivers/gpio/gpiolib-swnode.c
··· 31 31 32 32 gdev_node = to_software_node(fwnode); 33 33 if (!gdev_node || !gdev_node->name) 34 - return ERR_PTR(-EINVAL); 34 + goto fwnode_lookup; 35 35 36 36 /* 37 37 * Check for a special node that identifies undefined GPIOs, this is ··· 41 41 !strcmp(gdev_node->name, GPIOLIB_SWNODE_UNDEFINED_NAME)) 42 42 return ERR_PTR(-ENOENT); 43 43 44 - gdev = gpio_device_find_by_label(gdev_node->name); 44 + fwnode_lookup: 45 + gdev = gpio_device_find_by_fwnode(fwnode); 45 46 return gdev ?: ERR_PTR(-EPROBE_DEFER); 46 47 } 47 48
+7 -1
drivers/gpio/gpiolib.c
··· 5355 5355 struct gpio_device *gdev; 5356 5356 loff_t index = *pos; 5357 5357 5358 + s->private = NULL; 5359 + 5358 5360 priv = kzalloc(sizeof(*priv), GFP_KERNEL); 5359 5361 if (!priv) 5360 5362 return NULL; ··· 5390 5388 5391 5389 static void gpiolib_seq_stop(struct seq_file *s, void *v) 5392 5390 { 5393 - struct gpiolib_seq_priv *priv = s->private; 5391 + struct gpiolib_seq_priv *priv; 5392 + 5393 + priv = s->private; 5394 + if (!priv) 5395 + return; 5394 5396 5395 5397 srcu_read_unlock(&gpio_devices_srcu, priv->idx); 5396 5398 kfree(priv);
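The gpiolib debugfs fix above leans on a seq_file contract that is easy to forget: ->stop() is invoked even when ->start() returned NULL, so ->stop() must tolerate a missing private pointer and ->start() must not leave a stale one behind. The generic shape, with my_priv as an illustrative stand-in type:

    static void *my_seq_start(struct seq_file *s, loff_t *pos)
    {
    	struct my_priv *priv;

    	s->private = NULL;		/* ->stop() runs even if we fail */
    	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
    	if (!priv)
    		return NULL;
    	s->private = priv;
    	/* ... position the iterator ... */
    	return priv;
    }

    static void my_seq_stop(struct seq_file *s, void *v)
    {
    	struct my_priv *priv = s->private;

    	if (!priv)
    		return;			/* ->start() failed */
    	kfree(priv);
    }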
+1 -1
drivers/gpu/drm/Makefile
··· 245 245 quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@) 246 246 cmd_hdrtest = \ 247 247 $(CC) $(c_flags) -fsyntax-only -x c /dev/null -include $< -include $<; \ 248 - PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \ 248 + PYTHONDONTWRITEBYTECODE=1 $(PYTHON3) $(KERNELDOC) -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \ 249 249 touch $@ 250 250 251 251 $(obj)/%.hdrtest: $(src)/%.h FORCE
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1267 1267 1268 1268 (void)amdgpu_vm_bo_unmap(adev, bo_va, entry->va); 1269 1269 1270 + /* VM entity stopped if process killed, don't clear freed pt bo */ 1271 + if (!amdgpu_vm_ready(vm)) 1272 + return 0; 1273 + 1270 1274 (void)amdgpu_vm_clear_freed(adev, vm, &bo_va->last_pt_update); 1271 1275 1272 1276 (void)amdgpu_sync_fence(sync, bo_va->last_pt_update, GFP_KERNEL);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + // SPDX-License-Identifier: MIT 2 2 /* 3 3 * Copyright 2025 Advanced Micro Devices, Inc. 4 4 *
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_cper.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + /* SPDX-License-Identifier: MIT */ 2 2 /* 3 3 * Copyright 2025 Advanced Micro Devices, Inc. 4 4 *
-4
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5243 5243 if (amdgpu_sriov_vf(adev)) 5244 5244 amdgpu_virt_release_full_gpu(adev, false); 5245 5245 5246 - r = amdgpu_dpm_notify_rlc_state(adev, false); 5247 - if (r) 5248 - return r; 5249 - 5250 5246 return 0; 5251 5247 } 5252 5248
+7 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 2632 2632 { 2633 2633 struct drm_device *drm_dev = dev_get_drvdata(dev); 2634 2634 struct amdgpu_device *adev = drm_to_adev(drm_dev); 2635 + int r; 2635 2636 2636 - if (amdgpu_acpi_should_gpu_reset(adev)) 2637 - return amdgpu_asic_reset(adev); 2637 + if (amdgpu_acpi_should_gpu_reset(adev)) { 2638 + amdgpu_device_lock_reset_domain(adev->reset_domain); 2639 + r = amdgpu_asic_reset(adev); 2640 + amdgpu_device_unlock_reset_domain(adev->reset_domain); 2641 + return r; 2642 + } 2638 2643 2639 2644 return 0; 2640 2645 }
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 2355 2355 if (!ret && !psp->securedisplay_context.context.resp_status) { 2356 2356 psp->securedisplay_context.context.initialized = true; 2357 2357 mutex_init(&psp->securedisplay_context.mutex); 2358 - } else 2358 + } else { 2359 + /* don't try again */ 2360 + psp->securedisplay_context.context.bin_desc.size_bytes = 0; 2359 2361 return ret; 2362 + } 2360 2363 2361 2364 mutex_lock(&psp->securedisplay_context.mutex); 2362 2365
+30 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c
··· 322 322 return 0; 323 323 } 324 324 325 + static bool vpe_need_dpm0_at_power_down(struct amdgpu_device *adev) 326 + { 327 + switch (amdgpu_ip_version(adev, VPE_HWIP, 0)) { 328 + case IP_VERSION(6, 1, 1): 329 + return adev->pm.fw_version < 0x0a640500; 330 + default: 331 + return false; 332 + } 333 + } 334 + 335 + static int vpe_get_dpm_level(struct amdgpu_device *adev) 336 + { 337 + struct amdgpu_vpe *vpe = &adev->vpe; 338 + 339 + if (!adev->pm.dpm_enabled) 340 + return 0; 341 + 342 + return RREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_request_lv)); 343 + } 344 + 325 345 static void vpe_idle_work_handler(struct work_struct *work) 326 346 { 327 347 struct amdgpu_device *adev = ··· 349 329 unsigned int fences = 0; 350 330 351 331 fences += amdgpu_fence_count_emitted(&adev->vpe.ring); 332 + if (fences) 333 + goto reschedule; 352 334 353 - if (fences == 0) 354 - amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VPE, AMD_PG_STATE_GATE); 355 - else 356 - schedule_delayed_work(&adev->vpe.idle_work, VPE_IDLE_TIMEOUT); 335 + if (vpe_need_dpm0_at_power_down(adev) && vpe_get_dpm_level(adev) != 0) 336 + goto reschedule; 337 + 338 + amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VPE, AMD_PG_STATE_GATE); 339 + return; 340 + 341 + reschedule: 342 + schedule_delayed_work(&adev->vpe.idle_work, VPE_IDLE_TIMEOUT); 357 343 } 358 344 359 345 static int vpe_common_init(struct amdgpu_vpe *vpe)
+2 -1
drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
··· 407 407 return -EINVAL; 408 408 } 409 409 410 - if (adev->kfd.init_complete && !amdgpu_in_reset(adev)) 410 + if (adev->kfd.init_complete && !amdgpu_in_reset(adev) && 411 + !adev->in_suspend) 411 412 flags |= AMDGPU_XCP_OPS_KFD; 412 413 413 414 if (flags & AMDGPU_XCP_OPS_KFD) {
+1 -1
drivers/gpu/drm/amd/amdgpu/cyan_skillfish_reg_init.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + // SPDX-License-Identifier: MIT 2 2 /* 3 3 * Copyright 2018 Advanced Micro Devices, Inc. 4 4 *
+5
drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
··· 3102 3102 return r; 3103 3103 } 3104 3104 3105 + adev->gfx.gfx_supported_reset = 3106 + amdgpu_get_soft_full_reset_mask(&adev->gfx.gfx_ring[0]); 3107 + adev->gfx.compute_supported_reset = 3108 + amdgpu_get_soft_full_reset_mask(&adev->gfx.compute_ring[0]); 3109 + 3105 3110 return r; 3106 3111 } 3107 3112
+5
drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
··· 4399 4399 4400 4400 gfx_v7_0_gpu_early_init(adev); 4401 4401 4402 + adev->gfx.gfx_supported_reset = 4403 + amdgpu_get_soft_full_reset_mask(&adev->gfx.gfx_ring[0]); 4404 + adev->gfx.compute_supported_reset = 4405 + amdgpu_get_soft_full_reset_mask(&adev->gfx.compute_ring[0]); 4406 + 4402 4407 return r; 4403 4408 } 4404 4409
+5
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 2023 2023 if (r) 2024 2024 return r; 2025 2025 2026 + adev->gfx.gfx_supported_reset = 2027 + amdgpu_get_soft_full_reset_mask(&adev->gfx.gfx_ring[0]); 2028 + adev->gfx.compute_supported_reset = 2029 + amdgpu_get_soft_full_reset_mask(&adev->gfx.compute_ring[0]); 2030 + 2026 2031 return 0; 2027 2032 } 2028 2033
+3 -1
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
··· 2292 2292 r = amdgpu_xcp_init(adev->xcp_mgr, num_xcp, mode); 2293 2293 2294 2294 } else { 2295 - if (amdgpu_xcp_query_partition_mode(adev->xcp_mgr, 2295 + if (adev->in_suspend) 2296 + amdgpu_xcp_restore_partition_mode(adev->xcp_mgr); 2297 + else if (amdgpu_xcp_query_partition_mode(adev->xcp_mgr, 2296 2298 AMDGPU_XCP_FL_NONE) == 2297 2299 AMDGPU_UNKNOWN_COMPUTE_PARTITION_MODE) 2298 2300 r = amdgpu_xcp_switch_partition_mode(
+25 -1
drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
··· 142 142 return err; 143 143 } 144 144 145 + static int psp_v11_wait_for_tos_unload(struct psp_context *psp) 146 + { 147 + struct amdgpu_device *adev = psp->adev; 148 + uint32_t sol_reg1, sol_reg2; 149 + int retry_loop; 150 + 151 + /* Wait for the TOS to be unloaded */ 152 + for (retry_loop = 0; retry_loop < 20; retry_loop++) { 153 + sol_reg1 = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81); 154 + usleep_range(1000, 2000); 155 + sol_reg2 = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81); 156 + if (sol_reg1 == sol_reg2) 157 + return 0; 158 + } 159 + dev_err(adev->dev, "TOS unload failed, C2PMSG_33: %x C2PMSG_81: %x", 160 + RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_33), 161 + RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81)); 162 + 163 + return -ETIME; 164 + } 165 + 145 166 static int psp_v11_0_wait_for_bootloader(struct psp_context *psp) 146 167 { 147 168 struct amdgpu_device *adev = psp->adev; 148 - 149 169 int ret; 150 170 int retry_loop; 171 + 172 + /* For a reset done at the end of S3, only wait for TOS to be unloaded */ 173 + if (adev->in_s3 && !(adev->flags & AMD_IS_APU) && amdgpu_in_reset(adev)) 174 + return psp_v11_wait_for_tos_unload(psp); 151 175 152 176 for (retry_loop = 0; retry_loop < 20; retry_loop++) { 153 177 /* Wait for bootloader to signify that is
+10 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3563 3563 /* Do mst topology probing after resuming cached state*/ 3564 3564 drm_connector_list_iter_begin(ddev, &iter); 3565 3565 drm_for_each_connector_iter(connector, &iter) { 3566 + bool init = false; 3566 3567 3567 3568 if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK) 3568 3569 continue; ··· 3573 3572 aconnector->mst_root) 3574 3573 continue; 3575 3574 3576 - drm_dp_mst_topology_queue_probe(&aconnector->mst_mgr); 3575 + scoped_guard(mutex, &aconnector->mst_mgr.lock) { 3576 + init = !aconnector->mst_mgr.mst_primary; 3577 + } 3578 + if (init) 3579 + dm_helpers_dp_mst_start_top_mgr(aconnector->dc_link->ctx, 3580 + aconnector->dc_link, false); 3581 + else 3582 + drm_dp_mst_topology_queue_probe(&aconnector->mst_mgr); 3577 3583 } 3578 3584 drm_connector_list_iter_end(&iter); 3579 3585 ··· 8038 8030 "mode %dx%d@%dHz is not native, enabling scaling\n", 8039 8031 adjusted_mode->hdisplay, adjusted_mode->vdisplay, 8040 8032 drm_mode_vrefresh(adjusted_mode)); 8041 - dm_new_connector_state->scaling = RMX_FULL; 8033 + dm_new_connector_state->scaling = RMX_ASPECT; 8042 8034 } 8043 8035 return 0; 8044 8036 }
+18 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 248 248 struct vblank_control_work *vblank_work = 249 249 container_of(work, struct vblank_control_work, work); 250 250 struct amdgpu_display_manager *dm = vblank_work->dm; 251 + struct amdgpu_device *adev = drm_to_adev(dm->ddev); 252 + int r; 251 253 252 254 mutex_lock(&dm->dc_lock); 253 255 ··· 279 277 280 278 if (dm->active_vblank_irq_count == 0) { 281 279 dc_post_update_surfaces_to_stream(dm->dc); 280 + 281 + r = amdgpu_dpm_pause_power_profile(adev, true); 282 + if (r) 283 + dev_warn(adev->dev, "failed to set default power profile mode\n"); 284 + 282 285 dc_allow_idle_optimizations(dm->dc, true); 286 + 287 + r = amdgpu_dpm_pause_power_profile(adev, false); 288 + if (r) 289 + dev_warn(adev->dev, "failed to restore the power profile mode\n"); 283 290 } 284 291 285 292 mutex_unlock(&dm->dc_lock); ··· 308 297 int irq_type; 309 298 int rc = 0; 310 299 311 - if (acrtc->otg_inst == -1) 312 - goto skip; 300 + if (enable && !acrtc->base.enabled) { 301 + drm_dbg_vbl(crtc->dev, 302 + "Reject vblank enable on unconfigured CRTC %d (enabled=%d)\n", 303 + acrtc->crtc_id, acrtc->base.enabled); 304 + return -EINVAL; 305 + } 313 306 314 307 irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id); 315 308 ··· 398 383 return rc; 399 384 } 400 385 #endif 401 - skip: 386 + 402 387 if (amdgpu_in_reset(adev)) 403 388 return 0; 404 389
+2 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
··· 1302 1302 if (connector->status != connector_status_connected) 1303 1303 return -ENODEV; 1304 1304 1305 - if (pipe_ctx != NULL && pipe_ctx->stream_res.tg->funcs->get_odm_combine_segments) 1305 + if (pipe_ctx && pipe_ctx->stream_res.tg && 1306 + pipe_ctx->stream_res.tg->funcs->get_odm_combine_segments) 1306 1307 pipe_ctx->stream_res.tg->funcs->get_odm_combine_segments(pipe_ctx->stream_res.tg, &segments); 1307 1308 1308 1309 seq_printf(m, "%d\n", segments);
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 83 83 edid_caps->panel_patch.remove_sink_ext_caps = true; 84 84 break; 85 85 case drm_edid_encode_panel_id('S', 'D', 'C', 0x4154): 86 + case drm_edid_encode_panel_id('S', 'D', 'C', 0x4171): 86 87 drm_dbg_driver(dev, "Disabling VSC on monitor with panel id %X\n", panel_id); 87 88 edid_caps->panel_patch.disable_colorimetry = true; 88 89 break;
-3
drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
··· 578 578 dpp_base->ctx->dc->optimized_required = true; 579 579 dpp_base->deferred_reg_writes.bits.disable_blnd_lut = true; 580 580 } 581 - } else { 582 - REG_SET(CM_MEM_PWR_CTRL, 0, 583 - BLNDGAM_MEM_PWR_FORCE, power_on == true ? 0 : 1); 584 581 } 585 582 } 586 583
+1 -1
drivers/gpu/drm/amd/include/amd_cper.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + /* SPDX-License-Identifier: MIT */ 2 2 /* 3 3 * Copyright 2025 Advanced Micro Devices, Inc. 4 4 *
+1 -1
drivers/gpu/drm/amd/include/ivsrcid/vcn/irqsrcs_vcn_5_0.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 1 + /* SPDX-License-Identifier: MIT */ 2 2 3 3 /* 4 4 * Copyright 2024 Advanced Micro Devices, Inc. All rights reserved.
-18
drivers/gpu/drm/amd/pm/amdgpu_dpm.c
··· 195 195 return ret; 196 196 } 197 197 198 - int amdgpu_dpm_notify_rlc_state(struct amdgpu_device *adev, bool en) 199 - { 200 - int ret = 0; 201 - const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs; 202 - 203 - if (pp_funcs && pp_funcs->notify_rlc_state) { 204 - mutex_lock(&adev->pm.mutex); 205 - 206 - ret = pp_funcs->notify_rlc_state( 207 - adev->powerplay.pp_handle, 208 - en); 209 - 210 - mutex_unlock(&adev->pm.mutex); 211 - } 212 - 213 - return ret; 214 - } 215 - 216 198 int amdgpu_dpm_is_baco_supported(struct amdgpu_device *adev) 217 199 { 218 200 const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+2 -2
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 4724 4724 ret = devm_device_add_group(adev->dev, 4725 4725 &amdgpu_pm_policy_attr_group); 4726 4726 if (ret) 4727 - goto err_out0; 4727 + goto err_out1; 4728 4728 } 4729 4729 4730 4730 if (amdgpu_dpm_is_temp_metrics_supported(adev, SMU_TEMP_METRIC_GPUBOARD)) { 4731 4731 ret = devm_device_add_group(adev->dev, 4732 4732 &amdgpu_board_attr_group); 4733 4733 if (ret) 4734 - goto err_out0; 4734 + goto err_out1; 4735 4735 if (amdgpu_pm_get_sensor_generic(adev, AMDGPU_PP_SENSOR_MAXNODEPOWERLIMIT, 4736 4736 (void *)&tmp) != -EOPNOTSUPP) { 4737 4737 sysfs_add_file_to_group(&adev->dev->kobj,
-2
drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
··· 424 424 int amdgpu_dpm_set_mp1_state(struct amdgpu_device *adev, 425 425 enum pp_mp1_state mp1_state); 426 426 427 - int amdgpu_dpm_notify_rlc_state(struct amdgpu_device *adev, bool en); 428 - 429 427 int amdgpu_dpm_set_gfx_power_up_by_imu(struct amdgpu_device *adev); 430 428 431 429 int amdgpu_dpm_baco_exit(struct amdgpu_device *adev);
+1 -1
drivers/gpu/drm/amd/pm/powerplay/smumgr/fiji_smumgr.c
··· 2024 2024 table->VoltageResponseTime = 0; 2025 2025 table->PhaseResponseTime = 0; 2026 2026 table->MemoryThermThrottleEnable = 1; 2027 - table->PCIeBootLinkLevel = 0; /* 0:Gen1 1:Gen2 2:Gen3*/ 2027 + table->PCIeBootLinkLevel = (uint8_t) (data->dpm_table.pcie_speed_table.count); 2028 2028 table->PCIeGenInterval = 1; 2029 2029 table->VRConfig = 0; 2030 2030
+1 -1
drivers/gpu/drm/amd/pm/powerplay/smumgr/iceland_smumgr.c
··· 2028 2028 table->VoltageResponseTime = 0; 2029 2029 table->PhaseResponseTime = 0; 2030 2030 table->MemoryThermThrottleEnable = 1; 2031 - table->PCIeBootLinkLevel = 0; 2031 + table->PCIeBootLinkLevel = (uint8_t) (data->dpm_table.pcie_speed_table.count); 2032 2032 table->PCIeGenInterval = 1; 2033 2033 2034 2034 result = iceland_populate_smc_svi2_config(hwmgr, table);
+6
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 2040 2040 smu->is_apu && (amdgpu_in_reset(adev) || adev->in_s0ix)) 2041 2041 return 0; 2042 2042 2043 + /* vangogh s0ix */ 2044 + if ((amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(11, 5, 0) || 2045 + amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(11, 5, 2)) && 2046 + adev->in_s0ix) 2047 + return 0; 2048 + 2043 2049 /* 2044 2050 * For gpu reset, runpm and hibernation through BACO, 2045 2051 * BACO feature has to be kept enabled.
+3
drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
··· 2217 2217 uint32_t total_cu = adev->gfx.config.max_cu_per_sh * 2218 2218 adev->gfx.config.max_sh_per_se * adev->gfx.config.max_shader_engines; 2219 2219 2220 + if (adev->in_s0ix) 2221 + return 0; 2222 + 2220 2223 /* allow message will be sent after enable message on Vangogh*/ 2221 2224 if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT) && 2222 2225 (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) {
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
··· 969 969 table_index); 970 970 uint32_t table_size; 971 971 int ret = 0; 972 - if (!table_data || table_id >= SMU_TABLE_COUNT || table_id < 0) 972 + if (!table_data || table_index >= SMU_TABLE_COUNT || table_id < 0) 973 973 return -EINVAL; 974 974 975 975 table_size = smu_table->tables[table_index].size;
+4 -4
drivers/gpu/drm/ast/ast_drv.h
··· 282 282 __ast_write8(addr, reg + 1, val); 283 283 } 284 284 285 - static inline void __ast_write8_i_masked(void __iomem *addr, u32 reg, u8 index, u8 read_mask, 285 + static inline void __ast_write8_i_masked(void __iomem *addr, u32 reg, u8 index, u8 preserve_mask, 286 286 u8 val) 287 287 { 288 - u8 tmp = __ast_read8_i_masked(addr, reg, index, read_mask); 288 + u8 tmp = __ast_read8_i_masked(addr, reg, index, preserve_mask); 289 289 290 - tmp |= val; 291 - __ast_write8_i(addr, reg, index, tmp); 290 + val &= ~preserve_mask; 291 + __ast_write8_i(addr, reg, index, tmp | val); 292 292 } 293 293 294 294 static inline u32 ast_read32(struct ast_device *ast, u32 reg)
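Worth spelling out what the ast masked-write fix above changes: the old body OR'd the caller's value over the masked read, so stray bits in val could leak into the preserved field. After the rename, bits set in preserve_mask are kept from the current register contents and val only contributes the remaining bits. For example, keeping the high nibble of an indexed register while replacing the low one (the port and index values here are illustrative, not taken from the driver):

    /* Keep bits 7:4, replace bits 3:0 with 0x5. */
    __ast_write8_i_masked(ast->ioregs, 0x3d4, 0xb6, 0xf0, 0x05);

    /* Open-coded equivalent of the fixed helper: */
    u8 tmp = __ast_read8_i_masked(ast->ioregs, 0x3d4, 0xb6, 0xf0);
    __ast_write8_i(ast->ioregs, 0x3d4, 0xb6, tmp | (0x05 & ~0xf0));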
+1 -1
drivers/gpu/drm/ci/gitlab-ci.yml
··· 280 280 GIT_STRATEGY: none 281 281 script: 282 282 # ci-fairy check-commits --junit-xml=check-commits.xml 283 - - ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml 283 + # - ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml 284 284 - | 285 285 set -eu 286 286 image_tags=(
+6 -2
drivers/gpu/drm/drm_gem_atomic_helper.c
··· 310 310 void __drm_gem_reset_shadow_plane(struct drm_plane *plane, 311 311 struct drm_shadow_plane_state *shadow_plane_state) 312 312 { 313 - __drm_atomic_helper_plane_reset(plane, &shadow_plane_state->base); 314 - drm_format_conv_state_init(&shadow_plane_state->fmtcnv_state); 313 + if (shadow_plane_state) { 314 + __drm_atomic_helper_plane_reset(plane, &shadow_plane_state->base); 315 + drm_format_conv_state_init(&shadow_plane_state->fmtcnv_state); 316 + } else { 317 + __drm_atomic_helper_plane_reset(plane, NULL); 318 + } 315 319 } 316 320 EXPORT_SYMBOL(__drm_gem_reset_shadow_plane); 317 321
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_buffer.c
··· 347 347 u32 link_target, link_dwords; 348 348 bool switch_context = gpu->exec_state != exec_state; 349 349 bool switch_mmu_context = gpu->mmu_context != mmu_context; 350 - unsigned int new_flush_seq = READ_ONCE(gpu->mmu_context->flush_seq); 350 + unsigned int new_flush_seq = READ_ONCE(mmu_context->flush_seq); 351 351 bool need_flush = switch_mmu_context || gpu->flush_seq != new_flush_seq; 352 352 bool has_blt = !!(gpu->identity.minor_features5 & 353 353 chipMinorFeatures5_BLT_ENGINE);
+1 -1
drivers/gpu/drm/i915/Makefile
··· 413 413 # 414 414 # Enable locally for CONFIG_DRM_I915_WERROR=y. See also scripts/Makefile.build 415 415 ifdef CONFIG_DRM_I915_WERROR 416 - cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none -Werror $< 416 + cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(PYTHON3) $(KERNELDOC) -none -Werror $< 417 417 endif 418 418 419 419 # header test
+54 -1
drivers/gpu/drm/i915/display/intel_dmc.c
··· 546 546 REG_FIELD_GET(DMC_EVT_CTL_EVENT_ID_MASK, data) == event_id; 547 547 } 548 548 549 + static bool fixup_dmc_evt(struct intel_display *display, 550 + enum intel_dmc_id dmc_id, 551 + i915_reg_t reg_ctl, u32 *data_ctl, 552 + i915_reg_t reg_htp, u32 *data_htp) 553 + { 554 + if (!is_dmc_evt_ctl_reg(display, dmc_id, reg_ctl)) 555 + return false; 556 + 557 + if (!is_dmc_evt_htp_reg(display, dmc_id, reg_htp)) 558 + return false; 559 + 560 + /* make sure reg_ctl and reg_htp are for the same event */ 561 + if (i915_mmio_reg_offset(reg_ctl) - i915_mmio_reg_offset(DMC_EVT_CTL(display, dmc_id, 0)) != 562 + i915_mmio_reg_offset(reg_htp) - i915_mmio_reg_offset(DMC_EVT_HTP(display, dmc_id, 0))) 563 + return false; 564 + 565 + /* 566 + * On ADL-S the HRR event handler is not restored after DC6. 567 + * Clear it to zero from the beginning to avoid mismatches later. 568 + */ 569 + if (display->platform.alderlake_s && dmc_id == DMC_FW_MAIN && 570 + is_event_handler(display, dmc_id, MAINDMC_EVENT_VBLANK_A, reg_ctl, *data_ctl)) { 571 + *data_ctl = 0; 572 + *data_htp = 0; 573 + return true; 574 + } 575 + 576 + return false; 577 + } 578 + 549 579 static bool disable_dmc_evt(struct intel_display *display, 550 580 enum intel_dmc_id dmc_id, 551 581 i915_reg_t reg, u32 data) ··· 1094 1064 for (i = 0; i < mmio_count; i++) { 1095 1065 dmc_info->mmioaddr[i] = _MMIO(mmioaddr[i]); 1096 1066 dmc_info->mmiodata[i] = mmiodata[i]; 1067 + } 1097 1068 1069 + for (i = 0; i < mmio_count - 1; i++) { 1070 + u32 orig_mmiodata[2] = { 1071 + dmc_info->mmiodata[i], 1072 + dmc_info->mmiodata[i+1], 1073 + }; 1074 + 1075 + if (!fixup_dmc_evt(display, dmc_id, 1076 + dmc_info->mmioaddr[i], &dmc_info->mmiodata[i], 1077 + dmc_info->mmioaddr[i+1], &dmc_info->mmiodata[i+1])) 1078 + continue; 1079 + 1080 + drm_dbg_kms(display->drm, 1081 + " mmio[%d]: 0x%x = 0x%x->0x%x (EVT_CTL)\n", 1082 + i, i915_mmio_reg_offset(dmc_info->mmioaddr[i]), 1083 + orig_mmiodata[0], dmc_info->mmiodata[i]); 1084 + drm_dbg_kms(display->drm, 1085 + " mmio[%d]: 0x%x = 0x%x->0x%x (EVT_HTP)\n", 1086 + i+1, i915_mmio_reg_offset(dmc_info->mmioaddr[i+1]), 1087 + orig_mmiodata[1], dmc_info->mmiodata[i+1]); 1088 + } 1089 + 1090 + for (i = 0; i < mmio_count; i++) { 1098 1091 drm_dbg_kms(display->drm, " mmio[%d]: 0x%x = 0x%x%s%s\n", 1099 - i, mmioaddr[i], mmiodata[i], 1092 + i, i915_mmio_reg_offset(dmc_info->mmioaddr[i]), dmc_info->mmiodata[i], 1100 1093 is_dmc_evt_ctl_reg(display, dmc_id, dmc_info->mmioaddr[i]) ? " (EVT_CTL)" : 1101 1094 is_dmc_evt_htp_reg(display, dmc_id, dmc_info->mmioaddr[i]) ? " (EVT_HTP)" : "", 1102 1095 disable_dmc_evt(display, dmc_id, dmc_info->mmioaddr[i],
+2 -2
drivers/gpu/drm/i915/gt/intel_gt_clock_utils.c
··· 205 205 206 206 u64 intel_gt_clock_interval_to_ns(const struct intel_gt *gt, u64 count) 207 207 { 208 - return div_u64_roundup(count * NSEC_PER_SEC, gt->clock_frequency); 208 + return mul_u64_u32_div(count, NSEC_PER_SEC, gt->clock_frequency); 209 209 } 210 210 211 211 u64 intel_gt_pm_interval_to_ns(const struct intel_gt *gt, u64 count) ··· 215 215 216 216 u64 intel_gt_ns_to_clock_interval(const struct intel_gt *gt, u64 ns) 217 217 { 218 - return div_u64_roundup(gt->clock_frequency * ns, NSEC_PER_SEC); 218 + return mul_u64_u32_div(ns, gt->clock_frequency, NSEC_PER_SEC); 219 219 } 220 220 221 221 u64 intel_gt_ns_to_pm_interval(const struct intel_gt *gt, u64 ns)
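The intel_gt change above swaps div_u64_roundup() for mul_u64_u32_div() because the old expression formed the product in 64 bits first: count * NSEC_PER_SEC wraps once count exceeds roughly 1.8e10 ticks, whereas mul_u64_u32_div() keeps a wide intermediate. A rough sketch of the failure mode, with 19.2 MHz picked as a representative clock frequency:

    u64 count = 20000000000ULL;	/* ~17 min of residency at 19.2 MHz */
    u64 bad  = div_u64_roundup(count * NSEC_PER_SEC, 19200000); /* product wrapped */
    u64 good = mul_u64_u32_div(count, NSEC_PER_SEC, 19200000);  /* ~1.04e12 ns */

One side effect: the round-up of the old helper is lost in the conversion, which these callers evidently tolerate.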
+14 -2
drivers/gpu/drm/i915/i915_vma.c
··· 1595 1595 err_vma_res: 1596 1596 i915_vma_resource_free(vma_res); 1597 1597 err_fence: 1598 - if (work) 1599 - dma_fence_work_commit_imm(&work->base); 1598 + if (work) { 1599 + /* 1600 + * When pinning VMA to GGTT on CHV or BXT with VTD enabled, 1601 + * commit VMA binding asynchronously to avoid risk of lock 1602 + * inversion among reservation_ww locks held here and 1603 + * cpu_hotplug_lock acquired from stop_machine(), which we 1604 + * wrap around GGTT updates when running in those environments. 1605 + */ 1606 + if (i915_vma_is_ggtt(vma) && 1607 + intel_vm_no_concurrent_access_wa(vma->vm->i915)) 1608 + dma_fence_work_commit(&work->base); 1609 + else 1610 + dma_fence_work_commit_imm(&work->base); 1611 + } 1600 1612 err_rpm: 1601 1613 intel_runtime_pm_put(&vma->vm->i915->runtime_pm, wakeref); 1602 1614
+1
drivers/gpu/drm/imagination/Kconfig
··· 7 7 depends on DRM 8 8 depends on MMU 9 9 depends on PM 10 + depends on POWER_SEQUENCING || !POWER_SEQUENCING 10 11 select DRM_EXEC 11 12 select DRM_GEM_SHMEM_HELPER 12 13 select DRM_SCHED
+9 -9
drivers/gpu/drm/imx/ipuv3/parallel-display.c
··· 25 25 26 26 struct imx_parallel_display_encoder { 27 27 struct drm_encoder encoder; 28 - struct drm_bridge bridge; 29 - struct imx_parallel_display *pd; 30 28 }; 31 29 32 30 struct imx_parallel_display { 33 31 struct device *dev; 34 32 u32 bus_format; 35 33 struct drm_bridge *next_bridge; 34 + struct drm_bridge bridge; 36 35 }; 37 36 38 37 static inline struct imx_parallel_display *bridge_to_imxpd(struct drm_bridge *b) 39 38 { 40 - return container_of(b, struct imx_parallel_display_encoder, bridge)->pd; 39 + return container_of(b, struct imx_parallel_display, bridge); 41 40 } 42 41 43 42 static const u32 imx_pd_bus_fmts[] = { ··· 194 195 if (IS_ERR(imxpd_encoder)) 195 196 return PTR_ERR(imxpd_encoder); 196 197 197 - imxpd_encoder->pd = imxpd; 198 198 encoder = &imxpd_encoder->encoder; 199 - bridge = &imxpd_encoder->bridge; 199 + bridge = &imxpd->bridge; 200 200 201 201 ret = imx_drm_encoder_parse_of(drm, encoder, imxpd->dev->of_node); 202 202 if (ret) 203 203 return ret; 204 204 205 - bridge->funcs = &imx_pd_bridge_funcs; 206 205 drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR); 207 206 208 207 connector = drm_bridge_connector_init(drm, encoder); ··· 225 228 u32 bus_format = 0; 226 229 const char *fmt; 227 230 228 - imxpd = devm_kzalloc(dev, sizeof(*imxpd), GFP_KERNEL); 229 - if (!imxpd) 230 - return -ENOMEM; 231 + imxpd = devm_drm_bridge_alloc(dev, struct imx_parallel_display, bridge, 232 + &imx_pd_bridge_funcs); 233 + if (IS_ERR(imxpd)) 234 + return PTR_ERR(imxpd); 231 235 232 236 /* port@1 is the output port */ 233 237 imxpd->next_bridge = devm_drm_of_get_bridge(dev, np, 1, 0); ··· 255 257 imxpd->dev = dev; 256 258 257 259 platform_set_drvdata(pdev, imxpd); 260 + 261 + devm_drm_bridge_add(dev, &imxpd->bridge); 258 262 259 263 return component_add(dev, &imx_pd_ops); 260 264 }
+7
drivers/gpu/drm/mediatek/mtk_crtc.c
··· 283 283 unsigned int i; 284 284 unsigned long flags; 285 285 286 + /* release GCE HW usage and start autosuspend */ 287 + pm_runtime_mark_last_busy(cmdq_cl->chan->mbox->dev); 288 + pm_runtime_put_autosuspend(cmdq_cl->chan->mbox->dev); 289 + 286 290 if (data->sta < 0) 287 291 return; 288 292 ··· 621 617 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 622 618 mtk_crtc->config_updating = false; 623 619 spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 620 + 621 + if (pm_runtime_resume_and_get(mtk_crtc->cmdq_client.chan->mbox->dev) < 0) 622 + goto update_config_out; 624 623 625 624 mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle); 626 625 mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
-10
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 686 686 for (i = 0; i < private->data->mmsys_dev_num; i++) 687 687 private->all_drm_private[i]->drm = NULL; 688 688 err_put_dev: 689 - for (i = 0; i < private->data->mmsys_dev_num; i++) { 690 - /* For device_find_child in mtk_drm_get_all_priv() */ 691 - put_device(private->all_drm_private[i]->dev); 692 - } 693 689 put_device(private->mutex_dev); 694 690 return ret; 695 691 } ··· 693 697 static void mtk_drm_unbind(struct device *dev) 694 698 { 695 699 struct mtk_drm_private *private = dev_get_drvdata(dev); 696 - int i; 697 700 698 701 /* for multi mmsys dev, unregister drm dev in mmsys master */ 699 702 if (private->drm_master) { 700 703 drm_dev_unregister(private->drm); 701 704 mtk_drm_kms_deinit(private->drm); 702 705 drm_dev_put(private->drm); 703 - 704 - for (i = 0; i < private->data->mmsys_dev_num; i++) { 705 - /* For device_find_child in mtk_drm_get_all_priv() */ 706 - put_device(private->all_drm_private[i]->dev); 707 - } 708 706 put_device(private->mutex_dev); 709 707 } 710 708 private->mtk_drm_bound = false;
+1 -23
drivers/gpu/drm/mediatek/mtk_plane.c
··· 21 21 22 22 static const u64 modifiers[] = { 23 23 DRM_FORMAT_MOD_LINEAR, 24 - DRM_FORMAT_MOD_ARM_AFBC(AFBC_FORMAT_MOD_BLOCK_SIZE_32x8 | 25 - AFBC_FORMAT_MOD_SPLIT | 26 - AFBC_FORMAT_MOD_SPARSE), 27 24 DRM_FORMAT_MOD_INVALID, 28 25 }; 29 26 ··· 68 71 uint32_t format, 69 72 uint64_t modifier) 70 73 { 71 - if (modifier == DRM_FORMAT_MOD_LINEAR) 72 - return true; 73 - 74 - if (modifier != DRM_FORMAT_MOD_ARM_AFBC( 75 - AFBC_FORMAT_MOD_BLOCK_SIZE_32x8 | 76 - AFBC_FORMAT_MOD_SPLIT | 77 - AFBC_FORMAT_MOD_SPARSE)) 78 - return false; 79 - 80 - if (format != DRM_FORMAT_XRGB8888 && 81 - format != DRM_FORMAT_ARGB8888 && 82 - format != DRM_FORMAT_BGRX8888 && 83 - format != DRM_FORMAT_BGRA8888 && 84 - format != DRM_FORMAT_ABGR8888 && 85 - format != DRM_FORMAT_XBGR8888 && 86 - format != DRM_FORMAT_RGB888 && 87 - format != DRM_FORMAT_BGR888) 88 - return false; 89 - 90 - return true; 74 + return modifier == DRM_FORMAT_MOD_LINEAR; 91 75 } 92 76 93 77 static void mtk_plane_destroy_state(struct drm_plane *plane,
+4 -1
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 780 780 return true; 781 781 } 782 782 783 + #define NEXT_BLK(blk) \ 784 + ((const struct block_header *)((const char *)(blk) + sizeof(*(blk)) + (blk)->size)) 785 + 783 786 static int a6xx_gmu_fw_load(struct a6xx_gmu *gmu) 784 787 { 785 788 struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); ··· 814 811 815 812 for (blk = (const struct block_header *) fw_image->data; 816 813 (const u8*) blk < fw_image->data + fw_image->size; 817 - blk = (const struct block_header *) &blk->data[blk->size >> 2]) { 814 + blk = NEXT_BLK(blk)) { 818 815 if (blk->size == 0) 819 816 continue; 820 817
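The NEXT_BLK() macro above fixes the stride of the firmware-block walk: the old expression &blk->data[blk->size >> 2] advances by the payload size rounded down to a multiple of four, while the macro steps exactly sizeof(*blk) + blk->size bytes. A simplified layout to make the arithmetic visible (the header fields are abbreviated for illustration):

    struct block_header {
    	u32 addr;
    	u32 size;	/* payload bytes; need not be 4-aligned */
    	u32 data[];
    };

    /* With size = 6: the old stride lands hdr + 4 bytes in, i.e. mid-payload;
     * NEXT_BLK() lands hdr + 6 bytes in, at the start of the next header. */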
-7
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 348 348 return 0; 349 349 } 350 350 351 - static bool 352 - adreno_smmu_has_prr(struct msm_gpu *gpu) 353 - { 354 - struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev); 355 - return adreno_smmu && adreno_smmu->set_prr_addr; 356 - } 357 - 358 351 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, 359 352 uint32_t param, uint64_t *value, uint32_t *len) 360 353 {
+3
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 1545 1545 adjusted_mode_clk = dpu_core_perf_adjusted_mode_clk(mode->clock, 1546 1546 dpu_kms->perf.perf_cfg); 1547 1547 1548 + if (dpu_kms->catalog->caps->has_3d_merge) 1549 + adjusted_mode_clk /= 2; 1550 + 1548 1551 /* 1549 1552 * The given mode, adjusted for the perf clock factor, should not exceed 1550 1553 * the max core clock rate
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
··· 267 267 .base = 0x200, .len = 0xa0,}, \ 268 268 .csc_blk = {.name = "csc", \ 269 269 .base = 0x320, .len = 0x100,}, \ 270 - .format_list = plane_formats_yuv, \ 271 - .num_formats = ARRAY_SIZE(plane_formats_yuv), \ 270 + .format_list = plane_formats, \ 271 + .num_formats = ARRAY_SIZE(plane_formats), \ 272 272 .rotation_cfg = NULL, \ 273 273 } 274 274
+8 -6
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 500 500 int i; 501 501 502 502 for (i = 0; i < DPU_MAX_PLANES; i++) { 503 + uint32_t w = src_w, h = src_h; 504 + 503 505 if (i == DPU_SSPP_COMP_1_2 || i == DPU_SSPP_COMP_2) { 504 - src_w /= chroma_subsmpl_h; 505 - src_h /= chroma_subsmpl_v; 506 + w /= chroma_subsmpl_h; 507 + h /= chroma_subsmpl_v; 506 508 } 507 509 508 - pixel_ext->num_ext_pxls_top[i] = src_h; 509 - pixel_ext->num_ext_pxls_left[i] = src_w; 510 + pixel_ext->num_ext_pxls_top[i] = h; 511 + pixel_ext->num_ext_pxls_left[i] = w; 510 512 } 511 513 } 512 514 ··· 742 740 * We already have verified scaling against platform limitations. 743 741 * Now check if the SSPP supports scaling at all. 744 742 */ 745 - if (!sblk->scaler_blk.len && 743 + if (!(sblk->scaler_blk.len && pipe->sspp->ops.setup_scaler) && 746 744 ((drm_rect_width(&new_plane_state->src) >> 16 != 747 745 drm_rect_width(&new_plane_state->dst)) || 748 746 (drm_rect_height(&new_plane_state->src) >> 16 != ··· 1280 1278 state, plane_state, 1281 1279 prev_adjacent_plane_state); 1282 1280 if (ret) 1283 - break; 1281 + return ret; 1284 1282 1285 1283 prev_adjacent_plane_state = plane_state; 1286 1284 }
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
··· 842 842 843 843 if (!reqs->scale && !reqs->yuv) 844 844 hw_sspp = dpu_rm_try_sspp(rm, global_state, crtc, reqs, SSPP_TYPE_DMA); 845 - if (!hw_sspp && reqs->scale) 845 + if (!hw_sspp && !reqs->yuv) 846 846 hw_sspp = dpu_rm_try_sspp(rm, global_state, crtc, reqs, SSPP_TYPE_RGB); 847 847 if (!hw_sspp) 848 848 hw_sspp = dpu_rm_try_sspp(rm, global_state, crtc, reqs, SSPP_TYPE_VIG);
+3
drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
··· 72 72 DPU_ERROR("invalid fb w=%d, maxlinewidth=%u\n", 73 73 fb->width, dpu_wb_conn->maxlinewidth); 74 74 return -EINVAL; 75 + } else if (fb->modifier != DRM_FORMAT_MOD_LINEAR) { 76 + DPU_ERROR("unsupported fb modifier:%#llx\n", fb->modifier); 77 + return -EINVAL; 75 78 } 76 79 77 80 return drm_atomic_helper_check_wb_connector_state(conn_state->connector, conn_state->state);
-1
drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
··· 109 109 struct msm_dsi_dphy_timing timing; 110 110 const struct msm_dsi_phy_cfg *cfg; 111 111 void *tuning_cfg; 112 - void *pll_data; 113 112 114 113 enum msm_dsi_phy_usecase usecase; 115 114 bool regulator_ldo_mode;
+2 -16
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
··· 426 426 u32 data; 427 427 428 428 spin_lock_irqsave(&pll->pll_enable_lock, flags); 429 - if (pll->pll_enable_cnt++) { 430 - spin_unlock_irqrestore(&pll->pll_enable_lock, flags); 431 - WARN_ON(pll->pll_enable_cnt == INT_MAX); 432 - return; 433 - } 429 + pll->pll_enable_cnt++; 430 + WARN_ON(pll->pll_enable_cnt == INT_MAX); 434 431 435 432 data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CTRL_0); 436 433 data |= DSI_7nm_PHY_CMN_CTRL_0_PLL_SHUTDOWNB; ··· 873 876 spin_lock_init(&pll_7nm->pll_enable_lock); 874 877 875 878 pll_7nm->phy = phy; 876 - phy->pll_data = pll_7nm; 877 879 878 880 ret = pll_7nm_register(pll_7nm, phy->provided_clocks->hws); 879 881 if (ret) { ··· 961 965 u32 const delay_us = 5; 962 966 u32 const timeout_us = 1000; 963 967 struct msm_dsi_dphy_timing *timing = &phy->timing; 964 - struct dsi_pll_7nm *pll = phy->pll_data; 965 968 void __iomem *base = phy->base; 966 969 bool less_than_1500_mhz; 967 - unsigned long flags; 968 970 u32 vreg_ctrl_0, vreg_ctrl_1, lane_ctrl0; 969 971 u32 glbl_pemph_ctrl_0; 970 972 u32 glbl_str_swi_cal_sel_ctrl, glbl_hstx_str_ctrl_0; ··· 1084 1090 glbl_rescode_bot_ctrl = 0x3c; 1085 1091 } 1086 1092 1087 - spin_lock_irqsave(&pll->pll_enable_lock, flags); 1088 - pll->pll_enable_cnt = 1; 1089 1093 /* de-assert digital and pll power down */ 1090 1094 data = DSI_7nm_PHY_CMN_CTRL_0_DIGTOP_PWRDN_B | 1091 1095 DSI_7nm_PHY_CMN_CTRL_0_PLL_SHUTDOWNB; 1092 1096 writel(data, base + REG_DSI_7nm_PHY_CMN_CTRL_0); 1093 - spin_unlock_irqrestore(&pll->pll_enable_lock, flags); 1094 1097 1095 1098 /* Assert PLL core reset */ 1096 1099 writel(0x00, base + REG_DSI_7nm_PHY_CMN_PLL_CNTRL); ··· 1200 1209 1201 1210 static void dsi_7nm_phy_disable(struct msm_dsi_phy *phy) 1202 1211 { 1203 - struct dsi_pll_7nm *pll = phy->pll_data; 1204 1212 void __iomem *base = phy->base; 1205 - unsigned long flags; 1206 1213 u32 data; 1207 1214 1208 1215 DBG(""); ··· 1227 1238 writel(data, base + REG_DSI_7nm_PHY_CMN_CTRL_0); 1228 1239 writel(0, base + REG_DSI_7nm_PHY_CMN_LANE_CTRL0); 1229 1240 1230 - spin_lock_irqsave(&pll->pll_enable_lock, flags); 1231 - pll->pll_enable_cnt = 0; 1232 1241 /* Turn off all PHY blocks */ 1233 1242 writel(0x00, base + REG_DSI_7nm_PHY_CMN_CTRL_0); 1234 - spin_unlock_irqrestore(&pll->pll_enable_lock, flags); 1235 1243 1236 1244 /* make sure phy is turned off */ 1237 1245 wmb();
+7 -3
drivers/gpu/drm/msm/msm_gem.c
··· 1120 1120 put_pages(obj); 1121 1121 } 1122 1122 1123 - if (obj->resv != &obj->_resv) { 1123 + /* 1124 + * In error paths, we could end up here before msm_gem_new_handle() 1125 + * has changed obj->resv to point to the shared resv. In this case, 1126 + * we don't want to drop a ref to the shared r_obj that we haven't 1127 + * taken yet. 1128 + */ 1129 + if ((msm_obj->flags & MSM_BO_NO_SHARE) && (obj->resv != &obj->_resv)) { 1124 1130 struct drm_gem_object *r_obj = 1125 1131 container_of(obj->resv, struct drm_gem_object, _resv); 1126 - 1127 - WARN_ON(!(msm_obj->flags & MSM_BO_NO_SHARE)); 1128 1132 1129 1133 /* Drop reference we hold to shared resv obj: */ 1130 1134 drm_gem_object_put(r_obj);
+5 -4
drivers/gpu/drm/msm/msm_gem_submit.c
··· 414 414 submit->user_fence, 415 415 DMA_RESV_USAGE_BOOKKEEP, 416 416 DMA_RESV_USAGE_BOOKKEEP); 417 + 418 + last_fence = vm->last_fence; 419 + vm->last_fence = dma_fence_unwrap_merge(submit->user_fence, last_fence); 420 + dma_fence_put(last_fence); 421 + 417 422 return; 418 423 } 419 424 ··· 432 427 dma_resv_add_fence(obj->resv, submit->user_fence, 433 428 DMA_RESV_USAGE_READ); 434 429 } 435 - 436 - last_fence = vm->last_fence; 437 - vm->last_fence = dma_fence_unwrap_merge(submit->user_fence, last_fence); 438 - dma_fence_put(last_fence); 439 430 } 440 431 441 432 static int submit_bo(struct msm_gem_submit *submit, uint32_t idx,
+7 -1
drivers/gpu/drm/msm/msm_gem_vma.c
··· 971 971 lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op) 972 972 { 973 973 struct drm_device *dev = job->vm->drm; 974 + struct msm_drm_private *priv = dev->dev_private; 974 975 int i = job->nr_ops++; 975 976 int ret = 0; 976 977 ··· 1016 1015 default: 1017 1016 ret = UERR(EINVAL, dev, "invalid op: %u\n", op->op); 1018 1017 break; 1018 + } 1019 + 1020 + if ((op->op == MSM_VM_BIND_OP_MAP_NULL) && 1021 + !adreno_smmu_has_prr(priv->gpu)) { 1022 + ret = UERR(EINVAL, dev, "PRR not supported\n"); 1019 1023 } 1020 1024 1021 1025 return ret; ··· 1427 1421 * Maybe we could allow just UNMAP ops? OTOH userspace should just 1428 1422 * immediately close the device file and all will be torn down. 1429 1423 */ 1430 - if (to_msm_vm(ctx->vm)->unusable) 1424 + if (to_msm_vm(msm_context_vm(dev, ctx))->unusable) 1431 1425 return UERR(EPIPE, dev, "context is unusable"); 1432 1426 1433 1427 /*
+11
drivers/gpu/drm/msm/msm_gpu.h
··· 299 299 return container_of(adreno_smmu, struct msm_gpu, adreno_smmu); 300 300 } 301 301 302 + static inline bool 303 + adreno_smmu_has_prr(struct msm_gpu *gpu) 304 + { 305 + struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev); 306 + 307 + if (!adreno_smmu) 308 + return false; 309 + 310 + return adreno_smmu && adreno_smmu->set_prr_addr; 311 + } 312 + 302 313 /* It turns out that all targets use the same ringbuffer size */ 303 314 #define MSM_GPU_RINGBUFFER_SZ SZ_32K 304 315 #define MSM_GPU_RINGBUFFER_BLKSIZE 32
+5
drivers/gpu/drm/msm/msm_iommu.c
··· 338 338 339 339 ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages); 340 340 if (ret != p->count) { 341 + kfree(p->pages); 342 + p->pages = NULL; 341 343 p->count = ret; 342 344 return -ENOMEM; 343 345 } ··· 352 350 { 353 351 struct kmem_cache *pt_cache = get_pt_cache(mmu); 354 352 uint32_t remaining_pt_count = p->count - p->ptr; 353 + 354 + if (!p->pages) 355 + return; 355 356 356 357 if (p->count > 0) 357 358 trace_msm_mmu_prealloc_cleanup(p->count, remaining_pt_count);
+3 -1
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 2867 2867 } 2868 2868 2869 2869 /* Assign the correct format modifiers */ 2870 - if (disp->disp->object.oclass >= TU102_DISP) 2870 + if (disp->disp->object.oclass >= GB202_DISP) 2871 + nouveau_display(dev)->format_modifiers = wndwca7e_modifiers; 2872 + else if (disp->disp->object.oclass >= TU102_DISP) 2871 2873 nouveau_display(dev)->format_modifiers = wndwc57e_modifiers; 2872 2874 else 2873 2875 if (drm->client.device.info.family >= NV_DEVICE_INFO_V0_FERMI)
+1
drivers/gpu/drm/nouveau/dispnv50/disp.h
··· 104 104 extern const u64 disp50xx_modifiers[]; 105 105 extern const u64 disp90xx_modifiers[]; 106 106 extern const u64 wndwc57e_modifiers[]; 107 + extern const u64 wndwca7e_modifiers[]; 107 108 #endif
+22 -2
drivers/gpu/drm/nouveau/dispnv50/wndw.c
··· 786 786 } 787 787 788 788 /* This function assumes the format has already been validated against the plane 789 - * and the modifier was validated against the device-wides modifier list at FB 789 + * and the modifier was validated against the device-wide modifier list at FB 790 790 * creation time. 791 791 */ 792 792 static bool nv50_plane_format_mod_supported(struct drm_plane *plane, 793 793 u32 format, u64 modifier) 794 794 { 795 795 struct nouveau_drm *drm = nouveau_drm(plane->dev); 796 + const struct drm_format_info *info = drm_format_info(format); 796 797 uint8_t i; 797 798 798 799 /* All chipsets can display all formats in linear layout */ ··· 801 800 return true; 802 801 803 802 if (drm->client.device.info.chipset < 0xc0) { 804 - const struct drm_format_info *info = drm_format_info(format); 805 803 const uint8_t kind = (modifier >> 12) & 0xff; 806 804 807 805 if (!format) return false; 808 806 809 807 for (i = 0; i < info->num_planes; i++) 810 808 if ((info->cpp[i] != 4) && kind != 0x70) return false; 809 + } else if (drm->client.device.info.chipset >= 0x1b2) { 810 + const uint8_t slayout = ((modifier >> 22) & 0x1) | 811 + ((modifier >> 25) & 0x6); 812 + 813 + if (!format) 814 + return false; 815 + 816 + /* 817 + * Note in practice this implies only formats where cpp is equal 818 + * for each plane, or >= 4 for all planes, are supported. 819 + */ 820 + for (i = 0; i < info->num_planes; i++) { 821 + if (((info->cpp[i] == 2) && slayout != 3) || 822 + ((info->cpp[i] == 1) && slayout != 2) || 823 + ((info->cpp[i] >= 4) && slayout != 1)) 824 + return false; 825 + 826 + /* 24-bit not supported. It has yet another layout */ 827 + WARN_ON(info->cpp[i] == 3); 828 + } 811 829 } 812 830 813 831 return true;
+33
drivers/gpu/drm/nouveau/dispnv50/wndwca7e.c
··· 179 179 return 0; 180 180 } 181 181 182 + /**************************************************************** 183 + * Log2(block height) ----------------------------+ * 184 + * Page Kind ----------------------------------+ | * 185 + * Gob Height/Page Kind Generation ------+ | | * 186 + * Sector layout -------+ | | | * 187 + * Compression ------+ | | | | */ 188 + const u64 wndwca7e_modifiers[] = { /* | | | | | */ 189 + /* 4cpp+ modifiers */ 190 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 0), 191 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 1), 192 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 2), 193 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 3), 194 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 4), 195 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 1, 2, 0x06, 5), 196 + /* 1cpp/8bpp modifiers */ 197 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 2, 2, 0x06, 0), 198 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 2, 2, 0x06, 1), 199 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 2, 2, 0x06, 2), 200 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 2, 2, 0x06, 3), 201 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 2, 2, 0x06, 4), 202 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 2, 2, 0x06, 5), 203 + /* 2cpp/16bpp modifiers */ 204 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 3, 2, 0x06, 0), 205 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 3, 2, 0x06, 1), 206 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 3, 2, 0x06, 2), 207 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 3, 2, 0x06, 3), 208 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 3, 2, 0x06, 4), 209 + DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(0, 3, 2, 0x06, 5), 210 + /* All formats support linear */ 211 + DRM_FORMAT_MOD_LINEAR, 212 + DRM_FORMAT_MOD_INVALID 213 + }; 214 + 182 215 static const struct nv50_wndw_func 183 216 wndwca7e = { 184 217 .acquire = wndwc37e_acquire,
+12 -2
drivers/gpu/drm/nouveau/nouveau_sched.c
··· 482 482 return 0; 483 483 } 484 484 485 + static bool 486 + nouveau_sched_job_list_empty(struct nouveau_sched *sched) 487 + { 488 + bool empty; 489 + 490 + spin_lock(&sched->job.list.lock); 491 + empty = list_empty(&sched->job.list.head); 492 + spin_unlock(&sched->job.list.lock); 493 + 494 + return empty; 495 + } 485 496 486 497 static void 487 498 nouveau_sched_fini(struct nouveau_sched *sched) ··· 500 489 struct drm_gpu_scheduler *drm_sched = &sched->base; 501 490 struct drm_sched_entity *entity = &sched->entity; 502 491 503 - rmb(); /* for list_empty to work without lock */ 504 - wait_event(sched->job.wq, list_empty(&sched->job.list.head)); 492 + wait_event(sched->job.wq, nouveau_sched_job_list_empty(sched)); 505 493 506 494 drm_sched_entity_fini(entity); 507 495 drm_sched_fini(drm_sched);
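The nouveau_sched fix above replaces an unlocked list_empty() (plus a read barrier that did not actually order anything useful) with a predicate that samples the list under its spinlock; wait_event() re-evaluates the condition after every wake-up, so taking the lock inside the predicate is the standard way to get a consistent read. The waker side has to pair with it roughly like this (the list linkage field is illustrative):

    spin_lock(&sched->job.list.lock);
    list_del(&job->entry);		/* illustrative list linkage */
    spin_unlock(&sched->job.list.lock);
    wake_up(&sched->job.wq);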
+1 -1
drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
··· 359 359 dsi->lanes = 4; 360 360 dsi->format = MIPI_DSI_FMT_RGB888; 361 361 dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 362 - MIPI_DSI_MODE_LPM; 362 + MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET; 363 363 364 364 kingdisplay = devm_drm_panel_alloc(&dsi->dev, __typeof(*kingdisplay), base, 365 365 &kingdisplay_panel_funcs,
+6 -1
drivers/gpu/drm/panel/panel-sitronix-st7789v.c
··· 249 249 .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC, 250 250 }; 251 251 252 + /* 253 + * The mode data for this panel has been reverse engineered without access 254 + * to the panel datasheet / manual. Using DRM_MODE_FLAG_PHSYNC like all 255 + * other panels results in garbage data on the display. 256 + */ 252 257 static const struct drm_display_mode t28cp45tn89_mode = { 253 258 .clock = 6008, 254 259 .hdisplay = 240, ··· 266 261 .vtotal = 320 + 8 + 4 + 4, 267 262 .width_mm = 43, 268 263 .height_mm = 57, 269 - .flags = DRM_MODE_FLAG_PVSYNC | DRM_MODE_FLAG_NVSYNC, 264 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC, 270 265 }; 271 266 272 267 static const struct drm_display_mode et028013dma_mode = {
+4 -21
drivers/gpu/drm/radeon/radeon_drv.c
··· 314 314 315 315 ret = pci_enable_device(pdev); 316 316 if (ret) 317 - goto err_free; 317 + return ret; 318 318 319 319 pci_set_drvdata(pdev, ddev); 320 320 321 321 ret = radeon_driver_load_kms(ddev, flags); 322 322 if (ret) 323 - goto err_agp; 323 + goto err; 324 324 325 325 ret = drm_dev_register(ddev, flags); 326 326 if (ret) 327 - goto err_agp; 327 + goto err; 328 328 329 329 if (rdev->mc.real_vram_size <= (8 * 1024 * 1024)) 330 330 format = drm_format_info(DRM_FORMAT_C8); ··· 337 337 338 338 return 0; 339 339 340 - err_agp: 340 + err: 341 341 pci_disable_device(pdev); 342 - err_free: 343 - drm_dev_put(ddev); 344 342 return ret; 345 - } 346 - 347 - static void 348 - radeon_pci_remove(struct pci_dev *pdev) 349 - { 350 - struct drm_device *dev = pci_get_drvdata(pdev); 351 - 352 - drm_put_dev(dev); 353 343 } 354 344 355 345 static void 356 346 radeon_pci_shutdown(struct pci_dev *pdev) 357 347 { 358 - /* if we are running in a VM, make sure the device 359 - * torn down properly on reboot/shutdown 360 - */ 361 - if (radeon_device_is_virtual()) 362 - radeon_pci_remove(pdev); 363 - 364 348 #if defined(CONFIG_PPC64) || defined(CONFIG_MACH_LOONGSON64) 365 349 /* 366 350 * Some adapters need to be suspended before a ··· 597 613 .name = DRIVER_NAME, 598 614 .id_table = pciidlist, 599 615 .probe = radeon_pci_probe, 600 - .remove = radeon_pci_remove, 601 616 .shutdown = radeon_pci_shutdown, 602 617 .driver.pm = &radeon_pm_ops, 603 618 };
-1
drivers/gpu/drm/radeon/radeon_kms.c
··· 84 84 rdev->agp = NULL; 85 85 86 86 done_free: 87 - kfree(rdev); 88 87 dev->dev_private = NULL; 89 88 } 90 89
+23 -17
drivers/gpu/drm/scheduler/sched_entity.c
··· 70 70 entity->guilty = guilty; 71 71 entity->num_sched_list = num_sched_list; 72 72 entity->priority = priority; 73 + entity->last_user = current->group_leader; 73 74 /* 74 75 * It's perfectly valid to initialize an entity without having a valid 75 76 * scheduler attached. It's just not valid to use the scheduler before it ··· 173 172 } 174 173 EXPORT_SYMBOL(drm_sched_entity_error); 175 174 175 + static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f, 176 + struct dma_fence_cb *cb); 177 + 176 178 static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk) 177 179 { 178 180 struct drm_sched_job *job = container_of(wrk, typeof(*job), work); 179 - 180 - drm_sched_fence_scheduled(job->s_fence, NULL); 181 - drm_sched_fence_finished(job->s_fence, -ESRCH); 182 - WARN_ON(job->s_fence->parent); 183 - job->sched->ops->free_job(job); 184 - } 185 - 186 - /* Signal the scheduler finished fence when the entity in question is killed. */ 187 - static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f, 188 - struct dma_fence_cb *cb) 189 - { 190 - struct drm_sched_job *job = container_of(cb, struct drm_sched_job, 191 - finish_cb); 181 + struct dma_fence *f; 192 182 unsigned long index; 193 - 194 - dma_fence_put(f); 195 183 196 184 /* Wait for all dependencies to avoid data corruptions */ 197 185 xa_for_each(&job->dependencies, index, f) { ··· 208 218 209 219 dma_fence_put(f); 210 220 } 221 + 222 + drm_sched_fence_scheduled(job->s_fence, NULL); 223 + drm_sched_fence_finished(job->s_fence, -ESRCH); 224 + WARN_ON(job->s_fence->parent); 225 + job->sched->ops->free_job(job); 226 + } 227 + 228 + /* Signal the scheduler finished fence when the entity in question is killed. */ 229 + static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f, 230 + struct dma_fence_cb *cb) 231 + { 232 + struct drm_sched_job *job = container_of(cb, struct drm_sched_job, 233 + finish_cb); 234 + 235 + dma_fence_put(f); 211 236 212 237 INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work); 213 238 schedule_work(&job->work); ··· 307 302 308 303 /* For a killed process disallow further enqueueing of jobs. */ 309 304 last_user = cmpxchg(&entity->last_user, current->group_leader, NULL); 310 - if ((!last_user || last_user == current->group_leader) && 305 + if (last_user == current->group_leader && 311 306 (current->flags & PF_EXITING) && (current->exit_code == SIGKILL)) 312 307 drm_sched_entity_kill(entity); 313 308 ··· 557 552 drm_sched_rq_remove_entity(entity->rq, entity); 558 553 entity->rq = rq; 559 554 } 560 - spin_unlock(&entity->lock); 561 555 562 556 if (entity->num_sched_list == 1) 563 557 entity->sched_list = NULL; 558 + 559 + spin_unlock(&entity->lock); 564 560 } 565 561 566 562 /**
+1
drivers/gpu/drm/tiny/Kconfig
··· 85 85 config DRM_PIXPAPER 86 86 tristate "DRM support for PIXPAPER display panels" 87 87 depends on DRM && SPI 88 + depends on MMU 88 89 select DRM_CLIENT_SELECTION 89 90 select DRM_GEM_SHMEM_HELPER 90 91 select DRM_KMS_HELPER
+7 -7
drivers/gpu/drm/xe/xe_device.c
··· 988 988 989 989 drm_dbg(&xe->drm, "Shutting down device\n"); 990 990 991 - if (xe_driver_flr_disabled(xe)) { 992 - xe_display_pm_shutdown(xe); 991 + xe_display_pm_shutdown(xe); 993 992 994 - xe_irq_suspend(xe); 993 + xe_irq_suspend(xe); 995 994 996 - for_each_gt(gt, xe, id) 997 - xe_gt_shutdown(gt); 995 + for_each_gt(gt, xe, id) 996 + xe_gt_shutdown(gt); 998 997 999 - xe_display_pm_shutdown_late(xe); 1000 - } else { 998 + xe_display_pm_shutdown_late(xe); 999 + 1000 + if (!xe_driver_flr_disabled(xe)) { 1001 1001 /* BOOM! */ 1002 1002 __xe_driver_flr(xe); 1003 1003 }
+2 -1
drivers/gpu/drm/xe/xe_exec.c
··· 165 165 166 166 for (num_syncs = 0; num_syncs < args->num_syncs; num_syncs++) { 167 167 err = xe_sync_entry_parse(xe, xef, &syncs[num_syncs], 168 - &syncs_user[num_syncs], SYNC_PARSE_FLAG_EXEC | 168 + &syncs_user[num_syncs], NULL, 0, 169 + SYNC_PARSE_FLAG_EXEC | 169 170 (xe_vm_in_lr_mode(vm) ? 170 171 SYNC_PARSE_FLAG_LR_MODE : 0)); 171 172 if (err)
+14
drivers/gpu/drm/xe/xe_exec_queue.c
··· 10 10 #include <drm/drm_device.h> 11 11 #include <drm/drm_drv.h> 12 12 #include <drm/drm_file.h> 13 + #include <drm/drm_syncobj.h> 13 14 #include <uapi/drm/xe_drm.h> 14 15 15 16 #include "xe_dep_scheduler.h" ··· 325 324 } 326 325 xe_vm_put(migrate_vm); 327 326 327 + if (!IS_ERR(q)) { 328 + int err = drm_syncobj_create(&q->ufence_syncobj, 329 + DRM_SYNCOBJ_CREATE_SIGNALED, 330 + NULL); 331 + if (err) { 332 + xe_exec_queue_put(q); 333 + return ERR_PTR(err); 334 + } 335 + } 336 + 328 337 return q; 329 338 } 330 339 ALLOW_ERROR_INJECTION(xe_exec_queue_create_bind, ERRNO); ··· 343 332 { 344 333 struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount); 345 334 struct xe_exec_queue *eq, *next; 335 + 336 + if (q->ufence_syncobj) 337 + drm_syncobj_put(q->ufence_syncobj); 346 338 347 339 if (xe_exec_queue_uses_pxp(q)) 348 340 xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
+7
drivers/gpu/drm/xe/xe_exec_queue_types.h
··· 15 15 #include "xe_hw_fence_types.h" 16 16 #include "xe_lrc_types.h" 17 17 18 + struct drm_syncobj; 18 19 struct xe_execlist_exec_queue; 19 20 struct xe_gt; 20 21 struct xe_guc_exec_queue; ··· 155 154 /** @pxp.link: link into the list of PXP exec queues */ 156 155 struct list_head link; 157 156 } pxp; 157 + 158 + /** @ufence_syncobj: User fence syncobj */ 159 + struct drm_syncobj *ufence_syncobj; 160 + 161 + /** @ufence_timeline_value: User fence timeline value */ 162 + u64 ufence_timeline_value; 158 163 159 164 /** @ops: submission backend exec queue operations */ 160 165 const struct xe_exec_queue_ops *ops;
+12 -7
drivers/gpu/drm/xe/xe_gt.c
··· 813 813 unsigned int fw_ref; 814 814 int err; 815 815 816 - if (xe_device_wedged(gt_to_xe(gt))) 817 - return -ECANCELED; 816 + if (xe_device_wedged(gt_to_xe(gt))) { 817 + err = -ECANCELED; 818 + goto err_pm_put; 819 + } 818 820 819 821 /* We only support GT resets with GuC submission */ 820 - if (!xe_device_uc_enabled(gt_to_xe(gt))) 821 - return -ENODEV; 822 + if (!xe_device_uc_enabled(gt_to_xe(gt))) { 823 + err = -ENODEV; 824 + goto err_pm_put; 825 + } 822 826 823 827 xe_gt_info(gt, "reset started\n"); 824 828 825 829 err = gt_wait_reset_unblock(gt); 826 830 if (!err) 827 831 xe_gt_warn(gt, "reset block failed to get lifted"); 828 - 829 - xe_pm_runtime_get(gt_to_xe(gt)); 830 832 831 833 if (xe_fault_inject_gt_reset()) { 832 834 err = -ECANCELED; ··· 876 874 xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err)); 877 875 878 876 xe_device_declare_wedged(gt_to_xe(gt)); 877 + err_pm_put: 879 878 xe_pm_runtime_put(gt_to_xe(gt)); 880 879 881 880 return err; ··· 898 895 return; 899 896 900 897 xe_gt_info(gt, "reset queued\n"); 901 - queue_work(gt->ordered_wq, &gt->reset.worker); 898 + xe_pm_runtime_get_noresume(gt_to_xe(gt)); 899 + if (!queue_work(gt->ordered_wq, &gt->reset.worker)) 900 + xe_pm_runtime_put(gt_to_xe(gt)); 902 901 } 903 902 904 903 void xe_gt_suspend_prepare(struct xe_gt *gt)
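The xe_gt.c change pairs the runtime-PM reference with the queued reset work instead of taking it inside the reset function. A sketch of the general pattern, with illustrative names (example_queue_reset(), dev, wq and work are not from the patch):

	/* Sketch: hold a runtime-PM reference for as long as the queued work
	 * might run; queue_work() returns false when the work was already
	 * pending, so the extra reference is dropped immediately. */
	static void example_queue_reset(struct device *dev,
					struct workqueue_struct *wq,
					struct work_struct *work)
	{
		pm_runtime_get_noresume(dev);	/* take a ref without waking the device */
		if (!queue_work(wq, work))
			pm_runtime_put(dev);	/* already queued: give the ref back */
	}

The worker then owns the reference and must release it on every exit path, which is why the hunk adds the err_pm_put label.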
+3
drivers/gpu/drm/xe/xe_guc_ct.c
··· 200 200 { 201 201 struct xe_guc_ct *ct = arg; 202 202 203 + #if IS_ENABLED(CONFIG_DRM_XE_DEBUG) 204 + cancel_work_sync(&ct->dead.worker); 205 + #endif 203 206 ct_exit_safe_mode(ct); 204 207 destroy_workqueue(ct->g2h_wq); 205 208 xa_destroy(&ct->fence_lookup);
+30 -15
drivers/gpu/drm/xe/xe_oa.c
··· 10 10 11 11 #include <drm/drm_drv.h> 12 12 #include <drm/drm_managed.h> 13 + #include <drm/drm_syncobj.h> 13 14 #include <uapi/drm/xe_drm.h> 14 15 15 16 #include <generated/xe_wa_oob.h> ··· 1390 1389 return 0; 1391 1390 } 1392 1391 1393 - static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param) 1392 + static int xe_oa_parse_syncs(struct xe_oa *oa, 1393 + struct xe_oa_stream *stream, 1394 + struct xe_oa_open_param *param) 1394 1395 { 1395 1396 int ret, num_syncs, num_ufence = 0; 1396 1397 ··· 1412 1409 1413 1410 for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) { 1414 1411 ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs], 1415 - &param->syncs_user[num_syncs], 0); 1412 + &param->syncs_user[num_syncs], 1413 + stream->ufence_syncobj, 1414 + ++stream->ufence_timeline_value, 0); 1416 1415 if (ret) 1417 1416 goto err_syncs; 1418 1417 ··· 1544 1539 return -ENODEV; 1545 1540 1546 1541 param.xef = stream->xef; 1547 - err = xe_oa_parse_syncs(stream->oa, &param); 1542 + err = xe_oa_parse_syncs(stream->oa, stream, &param); 1548 1543 if (err) 1549 1544 goto err_config_put; 1550 1545 ··· 1640 1635 if (stream->exec_q) 1641 1636 xe_exec_queue_put(stream->exec_q); 1642 1637 1638 + drm_syncobj_put(stream->ufence_syncobj); 1643 1639 kfree(stream); 1644 1640 } 1645 1641 ··· 1832 1826 struct xe_oa_open_param *param) 1833 1827 { 1834 1828 struct xe_oa_stream *stream; 1829 + struct drm_syncobj *ufence_syncobj; 1835 1830 int stream_fd; 1836 1831 int ret; 1837 1832 ··· 1843 1836 goto exit; 1844 1837 } 1845 1838 1839 + ret = drm_syncobj_create(&ufence_syncobj, DRM_SYNCOBJ_CREATE_SIGNALED, 1840 + NULL); 1841 + if (ret) 1842 + goto exit; 1843 + 1846 1844 stream = kzalloc(sizeof(*stream), GFP_KERNEL); 1847 1845 if (!stream) { 1848 1846 ret = -ENOMEM; 1849 - goto exit; 1847 + goto err_syncobj; 1850 1848 } 1851 - 1849 + stream->ufence_syncobj = ufence_syncobj; 1852 1850 stream->oa = oa; 1853 - ret = xe_oa_stream_init(stream, param); 1851 + 1852 + ret = xe_oa_parse_syncs(oa, stream, param); 1854 1853 if (ret) 1855 1854 goto err_free; 1855 + 1856 + ret = xe_oa_stream_init(stream, param); 1857 + if (ret) { 1858 + while (param->num_syncs--) 1859 + xe_sync_entry_cleanup(&param->syncs[param->num_syncs]); 1860 + kfree(param->syncs); 1861 + goto err_free; 1862 + } 1856 1863 1857 1864 if (!param->disabled) { 1858 1865 ret = xe_oa_enable_locked(stream); ··· 1891 1870 xe_oa_stream_destroy(stream); 1892 1871 err_free: 1893 1872 kfree(stream); 1873 + err_syncobj: 1874 + drm_syncobj_put(ufence_syncobj); 1894 1875 exit: 1895 1876 return ret; 1896 1877 } ··· 2106 2083 goto err_exec_q; 2107 2084 } 2108 2085 2109 - ret = xe_oa_parse_syncs(oa, &param); 2110 - if (ret) 2111 - goto err_exec_q; 2112 - 2113 2086 mutex_lock(&param.hwe->gt->oa.gt_lock); 2114 2087 ret = xe_oa_stream_open_ioctl_locked(oa, &param); 2115 2088 mutex_unlock(&param.hwe->gt->oa.gt_lock); 2116 2089 if (ret < 0) 2117 - goto err_sync_cleanup; 2090 + goto err_exec_q; 2118 2091 2119 2092 return ret; 2120 2093 2121 - err_sync_cleanup: 2122 - while (param.num_syncs--) 2123 - xe_sync_entry_cleanup(&param.syncs[param.num_syncs]); 2124 - kfree(param.syncs); 2125 2094 err_exec_q: 2126 2095 if (param.exec_q) 2127 2096 xe_exec_queue_put(param.exec_q);
+8
drivers/gpu/drm/xe/xe_oa_types.h
··· 15 15 #include "regs/xe_reg_defs.h" 16 16 #include "xe_hw_engine_types.h" 17 17 18 + struct drm_syncobj; 19 + 18 20 #define DEFAULT_XE_OA_BUFFER_SIZE SZ_16M 19 21 20 22 enum xe_oa_report_header { ··· 249 247 250 248 /** @xef: xe_file with which the stream was opened */ 251 249 struct xe_file *xef; 250 + 251 + /** @ufence_syncobj: User fence syncobj */ 252 + struct drm_syncobj *ufence_syncobj; 253 + 254 + /** @ufence_timeline_value: User fence timeline value */ 255 + u64 ufence_timeline_value; 252 256 253 257 /** @last_fence: fence to use in stream destroy when needed */ 254 258 struct dma_fence *last_fence;
+15 -2
drivers/gpu/drm/xe/xe_sync.c
··· 113 113 int xe_sync_entry_parse(struct xe_device *xe, struct xe_file *xef, 114 114 struct xe_sync_entry *sync, 115 115 struct drm_xe_sync __user *sync_user, 116 + struct drm_syncobj *ufence_syncobj, 117 + u64 ufence_timeline_value, 116 118 unsigned int flags) 117 119 { 118 120 struct drm_xe_sync sync_in; ··· 194 192 if (exec) { 195 193 sync->addr = sync_in.addr; 196 194 } else { 195 + sync->ufence_timeline_value = ufence_timeline_value; 197 196 sync->ufence = user_fence_create(xe, sync_in.addr, 198 197 sync_in.timeline_value); 199 198 if (XE_IOCTL_DBG(xe, IS_ERR(sync->ufence))) 200 199 return PTR_ERR(sync->ufence); 200 + sync->ufence_chain_fence = dma_fence_chain_alloc(); 201 + if (!sync->ufence_chain_fence) 202 + return -ENOMEM; 203 + sync->ufence_syncobj = ufence_syncobj; 201 204 } 202 205 203 206 break; ··· 246 239 } else if (sync->ufence) { 247 240 int err; 248 241 249 - dma_fence_get(fence); 242 + drm_syncobj_add_point(sync->ufence_syncobj, 243 + sync->ufence_chain_fence, 244 + fence, sync->ufence_timeline_value); 245 + sync->ufence_chain_fence = NULL; 246 + 247 + fence = drm_syncobj_fence_get(sync->ufence_syncobj); 250 248 user_fence_get(sync->ufence); 251 249 err = dma_fence_add_callback(fence, &sync->ufence->cb, 252 250 user_fence_cb); ··· 271 259 drm_syncobj_put(sync->syncobj); 272 260 dma_fence_put(sync->fence); 273 261 dma_fence_chain_free(sync->chain_fence); 274 - if (sync->ufence) 262 + dma_fence_chain_free(sync->ufence_chain_fence); 263 + if (!IS_ERR_OR_NULL(sync->ufence)) 275 264 user_fence_put(sync->ufence); 276 265 } 277 266
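A note on the ufence rework above: drm_syncobj_add_point() cannot fail, so the dma_fence_chain node it consumes has to be allocated earlier, at parse time, where -ENOMEM can still be returned to userspace. A condensed sketch of that split, assuming a caller-held syncobj and timeline counter:

	/* Parse time: preallocate the chain node so signalling cannot fail. */
	struct dma_fence_chain *chain = dma_fence_chain_alloc();

	if (!chain)
		return -ENOMEM;

	/* Signal time: attach the fence as the next point on the timeline. */
	drm_syncobj_add_point(syncobj, chain, fence, ++timeline_value);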
+3
drivers/gpu/drm/xe/xe_sync.h
··· 8 8 9 9 #include "xe_sync_types.h" 10 10 11 + struct drm_syncobj; 11 12 struct xe_device; 12 13 struct xe_exec_queue; 13 14 struct xe_file; ··· 22 21 int xe_sync_entry_parse(struct xe_device *xe, struct xe_file *xef, 23 22 struct xe_sync_entry *sync, 24 23 struct drm_xe_sync __user *sync_user, 24 + struct drm_syncobj *ufence_syncobj, 25 + u64 ufence_timeline_value, 25 26 unsigned int flags); 26 27 int xe_sync_entry_add_deps(struct xe_sync_entry *sync, 27 28 struct xe_sched_job *job);
+3
drivers/gpu/drm/xe/xe_sync_types.h
··· 18 18 struct drm_syncobj *syncobj; 19 19 struct dma_fence *fence; 20 20 struct dma_fence_chain *chain_fence; 21 + struct dma_fence_chain *ufence_chain_fence; 22 + struct drm_syncobj *ufence_syncobj; 21 23 struct xe_user_fence *ufence; 22 24 u64 addr; 23 25 u64 timeline_value; 26 + u64 ufence_timeline_value; 24 27 u32 type; 25 28 u32 flags; 26 29 };
+4 -4
drivers/gpu/drm/xe/xe_validation.h
··· 166 166 */ 167 167 DEFINE_CLASS(xe_validation, struct xe_validation_ctx *, 168 168 if (_T) xe_validation_ctx_fini(_T);, 169 - ({_ret = xe_validation_ctx_init(_ctx, _val, _exec, _flags); 170 - _ret ? NULL : _ctx; }), 169 + ({*_ret = xe_validation_ctx_init(_ctx, _val, _exec, _flags); 170 + *_ret ? NULL : _ctx; }), 171 171 struct xe_validation_ctx *_ctx, struct xe_validation_device *_val, 172 - struct drm_exec *_exec, const struct xe_val_flags _flags, int _ret); 172 + struct drm_exec *_exec, const struct xe_val_flags _flags, int *_ret); 173 173 static inline void *class_xe_validation_lock_ptr(class_xe_validation_t *_T) 174 174 {return *_T; } 175 175 #define class_xe_validation_is_conditional true ··· 186 186 * exhaustive eviction. 187 187 */ 188 188 #define xe_validation_guard(_ctx, _val, _exec, _flags, _ret) \ 189 - scoped_guard(xe_validation, _ctx, _val, _exec, _flags, _ret) \ 189 + scoped_guard(xe_validation, _ctx, _val, _exec, _flags, &_ret) \ 190 190 drm_exec_until_all_locked(_exec) 191 191 192 192 #endif
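The xe_validation.h fix matters because, as I read cleanup.h, DEFINE_CLASS() turns its trailing argument list into the parameters of a generated constructor function; an `int _ret` parameter is therefore a local copy, and assignments to it never reach the caller, while `int *_ret` does. A minimal illustration of the underlying C rule:

	/* Sketch: a by-value parameter is a copy; a pointer is not. */
	static void init_by_value(int ret)    { ret = -EINVAL; }   /* caller's variable unchanged */
	static void init_by_pointer(int *ret) { *ret = -EINVAL; }  /* caller sees -EINVAL */

xe_validation_guard() is adjusted to pass &_ret so its existing callers keep their signature.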
+4
drivers/gpu/drm/xe/xe_vm.c
··· 3606 3606 3607 3607 syncs_user = u64_to_user_ptr(args->syncs); 3608 3608 for (num_syncs = 0; num_syncs < args->num_syncs; num_syncs++) { 3609 + struct xe_exec_queue *__q = q ?: vm->q[0]; 3610 + 3609 3611 err = xe_sync_entry_parse(xe, xef, &syncs[num_syncs], 3610 3612 &syncs_user[num_syncs], 3613 + __q->ufence_syncobj, 3614 + ++__q->ufence_timeline_value, 3611 3615 (xe_vm_in_lr_mode(vm) ? 3612 3616 SYNC_PARSE_FLAG_LR_MODE : 0) | 3613 3617 (!args->num_binds ?
+23 -27
drivers/i2c/muxes/i2c-mux-pca954x.c
··· 118 118 raw_spinlock_t lock; 119 119 struct regulator *supply; 120 120 121 + struct gpio_desc *reset_gpio; 121 122 struct reset_control *reset_cont; 122 123 }; 123 124 ··· 316 315 return 1 << chan; 317 316 } 318 317 319 - static void pca954x_reset_assert(struct pca954x *data) 320 - { 321 - if (data->reset_cont) 322 - reset_control_assert(data->reset_cont); 323 - } 324 - 325 - static void pca954x_reset_deassert(struct pca954x *data) 326 - { 327 - if (data->reset_cont) 328 - reset_control_deassert(data->reset_cont); 329 - } 330 - 331 - static void pca954x_reset_mux(struct pca954x *data) 332 - { 333 - pca954x_reset_assert(data); 334 - udelay(1); 335 - pca954x_reset_deassert(data); 336 - } 337 - 338 318 static int pca954x_select_chan(struct i2c_mux_core *muxc, u32 chan) 339 319 { 340 320 struct pca954x *data = i2c_mux_priv(muxc); ··· 329 347 ret = pca954x_reg_write(muxc->parent, client, regval); 330 348 data->last_chan = ret < 0 ? 0 : regval; 331 349 } 332 - if (ret == -ETIMEDOUT && data->reset_cont) 333 - pca954x_reset_mux(data); 334 350 335 351 return ret; 336 352 } ··· 338 358 struct pca954x *data = i2c_mux_priv(muxc); 339 359 struct i2c_client *client = data->client; 340 360 s32 idle_state; 341 - int ret = 0; 342 361 343 362 idle_state = READ_ONCE(data->idle_state); 344 363 if (idle_state >= 0) ··· 347 368 if (idle_state == MUX_IDLE_DISCONNECT) { 348 369 /* Deselect active channel */ 349 370 data->last_chan = 0; 350 - ret = pca954x_reg_write(muxc->parent, client, 351 - data->last_chan); 352 - if (ret == -ETIMEDOUT && data->reset_cont) 353 - pca954x_reset_mux(data); 371 + return pca954x_reg_write(muxc->parent, client, 372 + data->last_chan); 354 373 } 355 374 356 375 /* otherwise leave as-is */ ··· 527 550 if (IS_ERR(data->reset_cont)) 528 551 return dev_err_probe(dev, PTR_ERR(data->reset_cont), 529 552 "Failed to get reset\n"); 553 + else if (data->reset_cont) 554 + return 0; 555 + 556 + /* 557 + * fallback to legacy reset-gpios 558 + */ 559 + data->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 560 + if (IS_ERR(data->reset_gpio)) { 561 + return dev_err_probe(dev, PTR_ERR(data->reset_gpio), 562 + "Failed to get reset gpio"); 563 + } 530 564 531 565 return 0; 566 + } 567 + 568 + static void pca954x_reset_deassert(struct pca954x *data) 569 + { 570 + if (data->reset_cont) 571 + reset_control_deassert(data->reset_cont); 572 + else 573 + gpiod_set_value_cansleep(data->reset_gpio, 0); 532 574 } 533 575 534 576 /* ··· 589 593 if (ret) 590 594 goto fail_cleanup; 591 595 592 - if (data->reset_cont) { 596 + if (data->reset_cont || data->reset_gpio) { 593 597 udelay(1); 594 598 pca954x_reset_deassert(data); 595 599 /* Give the chip some time to recover. */
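The pca954x rework above matches the reset/GPIO theme of this pull: probe prefers a reset controller and only falls back to the legacy "reset" GPIO when none is described. A condensed sketch of that lookup order (the rc/gpio names and the particular optional reset-controller getter are illustrative; the hunk does not show which variant the driver uses):

	rc = devm_reset_control_get_optional_shared(dev, NULL);
	if (IS_ERR(rc))
		return dev_err_probe(dev, PTR_ERR(rc), "Failed to get reset\n");

	if (!rc) {
		/* No reset controller described: try a legacy reset GPIO. */
		gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
		if (IS_ERR(gpio))
			return dev_err_probe(dev, PTR_ERR(gpio),
					     "Failed to get reset gpio\n");
	}

Both getters are optional, so an absent property yields NULL rather than an error, and the deassert path simply uses whichever handle is non-NULL.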
+1
drivers/infiniband/core/uverbs_std_types_cq.c
··· 206 206 return ret; 207 207 208 208 err_free: 209 + ib_umem_release(umem); 209 210 rdma_restrack_put(&cq->res); 210 211 kfree(cq); 211 212 err_event_file:
+3 -8
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 913 913 spin_unlock_irqrestore(&qp->scq->cq_lock, flags); 914 914 } 915 915 916 - static int bnxt_re_destroy_gsi_sqp(struct bnxt_re_qp *qp) 916 + static void bnxt_re_destroy_gsi_sqp(struct bnxt_re_qp *qp) 917 917 { 918 918 struct bnxt_re_qp *gsi_sqp; 919 919 struct bnxt_re_ah *gsi_sah; ··· 933 933 934 934 ibdev_dbg(&rdev->ibdev, "Destroy the shadow QP\n"); 935 935 rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &gsi_sqp->qplib_qp); 936 - if (rc) { 936 + if (rc) 937 937 ibdev_err(&rdev->ibdev, "Destroy Shadow QP failed"); 938 - goto fail; 939 - } 938 + 940 939 bnxt_qplib_free_qp_res(&rdev->qplib_res, &gsi_sqp->qplib_qp); 941 940 942 941 /* remove from active qp list */ ··· 950 951 rdev->gsi_ctx.gsi_sqp = NULL; 951 952 rdev->gsi_ctx.gsi_sah = NULL; 952 953 rdev->gsi_ctx.sqp_tbl = NULL; 953 - 954 - return 0; 955 - fail: 956 - return rc; 957 954 } 958 955 959 956 static void bnxt_re_del_unique_gid(struct bnxt_re_dev *rdev)
+7 -9
drivers/infiniband/hw/efa/efa_verbs.c
··· 1216 1216 if (umem->length < cq->size) { 1217 1217 ibdev_dbg(&dev->ibdev, "External memory too small\n"); 1218 1218 err = -EINVAL; 1219 - goto err_free_mem; 1219 + goto err_out; 1220 1220 } 1221 1221 1222 1222 if (!ib_umem_is_contiguous(umem)) { 1223 1223 ibdev_dbg(&dev->ibdev, "Non contiguous CQ unsupported\n"); 1224 1224 err = -EINVAL; 1225 - goto err_free_mem; 1225 + goto err_out; 1226 1226 } 1227 1227 1228 1228 cq->cpu_addr = NULL; ··· 1251 1251 1252 1252 err = efa_com_create_cq(&dev->edev, &params, &result); 1253 1253 if (err) 1254 - goto err_free_mem; 1254 + goto err_free_mapped; 1255 1255 1256 1256 resp.db_off = result.db_off; 1257 1257 resp.cq_idx = result.cq_idx; ··· 1299 1299 efa_cq_user_mmap_entries_remove(cq); 1300 1300 err_destroy_cq: 1301 1301 efa_destroy_cq_idx(dev, cq->cq_idx); 1302 - err_free_mem: 1303 - if (umem) 1304 - ib_umem_release(umem); 1305 - else 1306 - efa_free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size, DMA_FROM_DEVICE); 1307 - 1302 + err_free_mapped: 1303 + if (!umem) 1304 + efa_free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size, 1305 + DMA_FROM_DEVICE); 1308 1306 err_out: 1309 1307 atomic64_inc(&dev->stats.create_cq_err); 1310 1308 return err;
+55 -3
drivers/infiniband/hw/hns/hns_roce_cq.c
··· 30 30 * SOFTWARE. 31 31 */ 32 32 33 + #include <linux/pci.h> 33 34 #include <rdma/ib_umem.h> 34 35 #include <rdma/uverbs_ioctl.h> 35 36 #include "hns_roce_device.h" 36 37 #include "hns_roce_cmd.h" 37 38 #include "hns_roce_hem.h" 38 39 #include "hns_roce_common.h" 40 + 41 + void hns_roce_put_cq_bankid_for_uctx(struct hns_roce_ucontext *uctx) 42 + { 43 + struct hns_roce_dev *hr_dev = to_hr_dev(uctx->ibucontext.device); 44 + struct hns_roce_cq_table *cq_table = &hr_dev->cq_table; 45 + 46 + if (hr_dev->pci_dev->revision < PCI_REVISION_ID_HIP09) 47 + return; 48 + 49 + mutex_lock(&cq_table->bank_mutex); 50 + cq_table->ctx_num[uctx->cq_bank_id]--; 51 + mutex_unlock(&cq_table->bank_mutex); 52 + } 53 + 54 + void hns_roce_get_cq_bankid_for_uctx(struct hns_roce_ucontext *uctx) 55 + { 56 + struct hns_roce_dev *hr_dev = to_hr_dev(uctx->ibucontext.device); 57 + struct hns_roce_cq_table *cq_table = &hr_dev->cq_table; 58 + u32 least_load = cq_table->ctx_num[0]; 59 + u8 bankid = 0; 60 + u8 i; 61 + 62 + if (hr_dev->pci_dev->revision < PCI_REVISION_ID_HIP09) 63 + return; 64 + 65 + mutex_lock(&cq_table->bank_mutex); 66 + for (i = 1; i < HNS_ROCE_CQ_BANK_NUM; i++) { 67 + if (cq_table->ctx_num[i] < least_load) { 68 + least_load = cq_table->ctx_num[i]; 69 + bankid = i; 70 + } 71 + } 72 + cq_table->ctx_num[bankid]++; 73 + mutex_unlock(&cq_table->bank_mutex); 74 + 75 + uctx->cq_bank_id = bankid; 76 + } 39 77 40 78 static u8 get_least_load_bankid_for_cq(struct hns_roce_bank *bank) 41 79 { ··· 93 55 return bankid; 94 56 } 95 57 96 - static int alloc_cqn(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq) 58 + static u8 select_cq_bankid(struct hns_roce_dev *hr_dev, 59 + struct hns_roce_bank *bank, struct ib_udata *udata) 60 + { 61 + struct hns_roce_ucontext *uctx = udata ? 62 + rdma_udata_to_drv_context(udata, struct hns_roce_ucontext, 63 + ibucontext) : NULL; 64 + 65 + if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09) 66 + return uctx ? uctx->cq_bank_id : 0; 67 + 68 + return get_least_load_bankid_for_cq(bank); 69 + } 70 + 71 + static int alloc_cqn(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq, 72 + struct ib_udata *udata) 97 73 { 98 74 struct hns_roce_cq_table *cq_table = &hr_dev->cq_table; 99 75 struct hns_roce_bank *bank; ··· 115 63 int id; 116 64 117 65 mutex_lock(&cq_table->bank_mutex); 118 - bankid = get_least_load_bankid_for_cq(cq_table->bank); 66 + bankid = select_cq_bankid(hr_dev, cq_table->bank, udata); 119 67 bank = &cq_table->bank[bankid]; 120 68 121 69 id = ida_alloc_range(&bank->ida, bank->min, bank->max, GFP_KERNEL); ··· 448 396 goto err_cq_buf; 449 397 } 450 398 451 - ret = alloc_cqn(hr_dev, hr_cq); 399 + ret = alloc_cqn(hr_dev, hr_cq, udata); 452 400 if (ret) { 453 401 ibdev_err(ibdev, "failed to alloc CQN, ret = %d.\n", ret); 454 402 goto err_cq_db;
+4
drivers/infiniband/hw/hns/hns_roce_device.h
··· 217 217 struct mutex page_mutex; 218 218 struct hns_user_mmap_entry *db_mmap_entry; 219 219 u32 config; 220 + u8 cq_bank_id; 220 221 }; 221 222 222 223 struct hns_roce_pd { ··· 496 495 struct hns_roce_hem_table table; 497 496 struct hns_roce_bank bank[HNS_ROCE_CQ_BANK_NUM]; 498 497 struct mutex bank_mutex; 498 + u32 ctx_num[HNS_ROCE_CQ_BANK_NUM]; 499 499 }; 500 500 501 501 struct hns_roce_srq_table { ··· 1307 1305 size_t length, 1308 1306 enum hns_roce_mmap_type mmap_type); 1309 1307 bool check_sl_valid(struct hns_roce_dev *hr_dev, u8 sl); 1308 + void hns_roce_put_cq_bankid_for_uctx(struct hns_roce_ucontext *uctx); 1309 + void hns_roce_get_cq_bankid_for_uctx(struct hns_roce_ucontext *uctx); 1310 1310 1311 1311 #endif /* _HNS_ROCE_DEVICE_H */
+8 -4
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 165 165 hr_reg_write(fseg, FRMR_PBL_BUF_PG_SZ, 166 166 to_hr_hw_page_shift(mr->pbl_mtr.hem_cfg.buf_pg_shift)); 167 167 hr_reg_clear(fseg, FRMR_BLK_MODE); 168 + hr_reg_clear(fseg, FRMR_BLOCK_SIZE); 169 + hr_reg_clear(fseg, FRMR_ZBVA); 168 170 } 169 171 170 172 static void set_atomic_seg(const struct ib_send_wr *wr, ··· 340 338 struct hns_roce_qp *qp = to_hr_qp(ibqp); 341 339 int j = 0; 342 340 int i; 343 - 344 - hr_reg_write(rc_sq_wqe, RC_SEND_WQE_MSG_START_SGE_IDX, 345 - (*sge_ind) & (qp->sge.sge_cnt - 1)); 346 341 347 342 hr_reg_write(rc_sq_wqe, RC_SEND_WQE_INLINE, 348 343 !!(wr->send_flags & IB_SEND_INLINE)); ··· 585 586 hr_reg_write(rc_sq_wqe, RC_SEND_WQE_CQE, 586 587 (wr->send_flags & IB_SEND_SIGNALED) ? 1 : 0); 587 588 589 + hr_reg_write(rc_sq_wqe, RC_SEND_WQE_MSG_START_SGE_IDX, 590 + curr_idx & (qp->sge.sge_cnt - 1)); 591 + 588 592 if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP || 589 593 wr->opcode == IB_WR_ATOMIC_FETCH_AND_ADD) { 590 594 if (msg_len != ATOMIC_WR_LEN) ··· 735 733 qp->sq.wrid[wqe_idx] = wr->wr_id; 736 734 owner_bit = 737 735 ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1); 736 + 737 + /* RC and UD share the same DirectWQE field layout */ 738 + ((struct hns_roce_v2_rc_send_wqe *)wqe)->byte_4 = 0; 738 739 739 740 /* Corresponding to the QP type, wqe process separately */ 740 741 if (ibqp->qp_type == IB_QPT_RC) ··· 7052 7047 dev_err(hr_dev->dev, "RoCE Engine init failed!\n"); 7053 7048 goto error_failed_roce_init; 7054 7049 } 7055 - 7056 7050 7057 7051 handle->priv = hr_dev; 7058 7052
+4
drivers/infiniband/hw/hns/hns_roce_main.c
··· 425 425 if (ret) 426 426 goto error_fail_copy_to_udata; 427 427 428 + hns_roce_get_cq_bankid_for_uctx(context); 429 + 428 430 return 0; 429 431 430 432 error_fail_copy_to_udata: ··· 448 446 { 449 447 struct hns_roce_ucontext *context = to_hr_ucontext(ibcontext); 450 448 struct hns_roce_dev *hr_dev = to_hr_dev(ibcontext->device); 449 + 450 + hns_roce_put_cq_bankid_for_uctx(context); 451 451 452 452 if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB || 453 453 hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB)
-2
drivers/infiniband/hw/hns/hns_roce_qp.c
··· 662 662 663 663 hr_qp->sq.wqe_shift = ucmd->log_sq_stride; 664 664 hr_qp->sq.wqe_cnt = cnt; 665 - cap->max_send_sge = hr_qp->sq.max_gs; 666 665 667 666 return 0; 668 667 } ··· 743 744 744 745 /* sync the parameters of kernel QP to user's configuration */ 745 746 cap->max_send_wr = cnt; 746 - cap->max_send_sge = hr_qp->sq.max_gs; 747 747 748 748 return 0; 749 749 }
+1 -1
drivers/infiniband/hw/irdma/pble.c
··· 71 71 static void get_sd_pd_idx(struct irdma_hmc_pble_rsrc *pble_rsrc, 72 72 struct sd_pd_idx *idx) 73 73 { 74 - idx->sd_idx = (u32)pble_rsrc->next_fpm_addr / IRDMA_HMC_DIRECT_BP_SIZE; 74 + idx->sd_idx = pble_rsrc->next_fpm_addr / IRDMA_HMC_DIRECT_BP_SIZE; 75 75 idx->pd_idx = (u32)(pble_rsrc->next_fpm_addr / IRDMA_HMC_PAGED_BP_SIZE); 76 76 idx->rel_pd_idx = (idx->pd_idx % IRDMA_HMC_PD_CNT_IN_SD); 77 77 }
+1 -1
drivers/infiniband/hw/irdma/type.h
··· 706 706 u32 vchnl_ver; 707 707 u16 num_vfs; 708 708 u16 hmc_fn_id; 709 - u8 vf_id; 709 + u16 vf_id; 710 710 bool privileged:1; 711 711 bool vchnl_up:1; 712 712 bool ceq_valid:1;
+1
drivers/infiniband/hw/irdma/verbs.c
··· 2503 2503 spin_lock_init(&iwcq->lock); 2504 2504 INIT_LIST_HEAD(&iwcq->resize_list); 2505 2505 INIT_LIST_HEAD(&iwcq->cmpl_generated); 2506 + iwcq->cq_num = cq_num; 2506 2507 info.dev = dev; 2507 2508 ukinfo->cq_size = max(entries, 4); 2508 2509 ukinfo->cq_id = cq_num;
+1 -1
drivers/infiniband/hw/irdma/verbs.h
··· 140 140 struct irdma_cq { 141 141 struct ib_cq ibcq; 142 142 struct irdma_sc_cq sc_cq; 143 - u16 cq_num; 143 + u32 cq_num; 144 144 bool user_mode; 145 145 atomic_t armed; 146 146 enum irdma_cmpl_notify last_notify;
+3 -9
drivers/iommu/iommufd/io_pagetable.c
··· 707 707 struct iopt_area *area; 708 708 unsigned long unmapped_bytes = 0; 709 709 unsigned int tries = 0; 710 - int rc = -ENOENT; 710 + /* If there are no mapped entries then success */ 711 + int rc = 0; 711 712 712 713 /* 713 714 * The domains_rwsem must be held in read mode any time any area->pages ··· 778 777 779 778 down_write(&iopt->iova_rwsem); 780 779 } 781 - if (unmapped_bytes) 782 - rc = 0; 783 780 784 781 out_unlock_iova: 785 782 up_write(&iopt->iova_rwsem); ··· 814 815 815 816 int iopt_unmap_all(struct io_pagetable *iopt, unsigned long *unmapped) 816 817 { 817 - int rc; 818 - 819 - rc = iopt_unmap_iova_range(iopt, 0, ULONG_MAX, unmapped); 820 818 /* If the IOVAs are empty then unmap all succeeds */ 821 - if (rc == -ENOENT) 822 - return 0; 823 - return rc; 819 + return iopt_unmap_iova_range(iopt, 0, ULONG_MAX, unmapped); 824 820 } 825 821 826 822 /* The caller must always free all the nodes in the allowed_iova rb_root. */
+4
drivers/iommu/iommufd/ioas.c
··· 367 367 &unmapped); 368 368 if (rc) 369 369 goto out_put; 370 + if (!unmapped) { 371 + rc = -ENOENT; 372 + goto out_put; 373 + } 370 374 } 371 375 372 376 cmd->length = unmapped;
+2 -3
drivers/iommu/iommufd/iova_bitmap.c
··· 130 130 static unsigned long iova_bitmap_offset_to_index(struct iova_bitmap *bitmap, 131 131 unsigned long iova) 132 132 { 133 - unsigned long pgsize = 1UL << bitmap->mapped.pgshift; 134 - 135 - return iova / (BITS_PER_TYPE(*bitmap->bitmap) * pgsize); 133 + return (iova >> bitmap->mapped.pgshift) / 134 + BITS_PER_TYPE(*bitmap->bitmap); 136 135 } 137 136 138 137 /*
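The iova_bitmap rewrite relies on the floor-division identity a / (b * c) == (a / c) / b for unsigned integer division, here with c = 2^pgshift (so a / c becomes iova >> pgshift) and b = BITS_PER_TYPE(*bitmap->bitmap) = 64 for a u64 word. For example, with pgshift = 12 and iova = 0x50000: (0x50000 >> 12) / 64 = 80 / 64 = 1, matching 0x50000 / (64 * 4096) = 327680 / 262144 = 1. The new form returns the same index without ever materialising the combined divisor.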
+13 -5
drivers/isdn/hardware/mISDN/hfcsusb.c
··· 1904 1904 mISDN_freebchannel(&hw->bch[1]); 1905 1905 mISDN_freebchannel(&hw->bch[0]); 1906 1906 mISDN_freedchannel(&hw->dch); 1907 - kfree(hw); 1908 1907 return err; 1909 1908 } 1910 1909 1911 1910 static int 1912 1911 hfcsusb_probe(struct usb_interface *intf, const struct usb_device_id *id) 1913 1912 { 1913 + int err; 1914 1914 struct hfcsusb *hw; 1915 1915 struct usb_device *dev = interface_to_usbdev(intf); 1916 1916 struct usb_host_interface *iface = intf->cur_altsetting; ··· 2101 2101 if (!hw->ctrl_urb) { 2102 2102 pr_warn("%s: No memory for control urb\n", 2103 2103 driver_info->vend_name); 2104 - kfree(hw); 2105 - return -ENOMEM; 2104 + err = -ENOMEM; 2105 + goto err_free_hw; 2106 2106 } 2107 2107 2108 2108 pr_info("%s: %s: detected \"%s\" (%s, if=%d alt=%d)\n", 2109 2109 hw->name, __func__, driver_info->vend_name, 2110 2110 conf_str[small_match], ifnum, alt_used); 2111 2111 2112 - if (setup_instance(hw, dev->dev.parent)) 2113 - return -EIO; 2112 + if (setup_instance(hw, dev->dev.parent)) { 2113 + err = -EIO; 2114 + goto err_free_urb; 2115 + } 2114 2116 2115 2117 hw->intf = intf; 2116 2118 usb_set_intfdata(hw->intf, hw); 2117 2119 return 0; 2120 + 2121 + err_free_urb: 2122 + usb_free_urb(hw->ctrl_urb); 2123 + err_free_hw: 2124 + kfree(hw); 2125 + return err; 2118 2126 } 2119 2127 2120 2128 /* function called when an active device is removed */
+5
drivers/media/common/videobuf2/videobuf2-v4l2.c
··· 1010 1010 if (vb2_queue_is_busy(vdev->queue, file)) 1011 1011 return -EBUSY; 1012 1012 1013 + if (vb2_fileio_is_active(vdev->queue)) { 1014 + dprintk(vdev->queue, 1, "file io in progress\n"); 1015 + return -EBUSY; 1016 + } 1017 + 1013 1018 return vb2_core_remove_bufs(vdev->queue, d->index, d->count); 1014 1019 } 1015 1020 EXPORT_SYMBOL_GPL(vb2_ioctl_remove_bufs);
+3 -6
drivers/media/pci/cx18/cx18-driver.c
··· 1136 1136 int video_input; 1137 1137 int fw_retry_count = 3; 1138 1138 struct v4l2_frequency vf; 1139 - struct cx18_open_id fh; 1140 1139 v4l2_std_id std; 1141 - 1142 - fh.cx = cx; 1143 1140 1144 1141 if (test_bit(CX18_F_I_FAILED, &cx->i_flags)) 1145 1142 return -ENXIO; ··· 1217 1220 1218 1221 video_input = cx->active_input; 1219 1222 cx->active_input++; /* Force update of input */ 1220 - cx18_s_input(NULL, &fh, video_input); 1223 + cx18_do_s_input(cx, video_input); 1221 1224 1222 1225 /* Let the VIDIOC_S_STD ioctl do all the work, keeps the code 1223 1226 in one place. */ 1224 1227 cx->std++; /* Force full standard initialization */ 1225 1228 std = (cx->tuner_std == V4L2_STD_ALL) ? V4L2_STD_NTSC_M : cx->tuner_std; 1226 - cx18_s_std(NULL, &fh, std); 1227 - cx18_s_frequency(NULL, &fh, &vf); 1229 + cx18_do_s_std(cx, std); 1230 + cx18_do_s_frequency(cx, &vf); 1228 1231 return 0; 1229 1232 } 1230 1233
+19 -11
drivers/media/pci/cx18/cx18-ioctl.c
··· 521 521 return 0; 522 522 } 523 523 524 - int cx18_s_input(struct file *file, void *fh, unsigned int inp) 524 + int cx18_do_s_input(struct cx18 *cx, unsigned int inp) 525 525 { 526 - struct cx18_open_id *id = file2id(file); 527 - struct cx18 *cx = id->cx; 528 526 v4l2_std_id std = V4L2_STD_ALL; 529 527 const struct cx18_card_video_input *card_input = 530 528 cx->card->video_inputs + inp; ··· 556 558 return 0; 557 559 } 558 560 561 + static int cx18_s_input(struct file *file, void *fh, unsigned int inp) 562 + { 563 + return cx18_do_s_input(file2id(file)->cx, inp); 564 + } 565 + 559 566 static int cx18_g_frequency(struct file *file, void *fh, 560 567 struct v4l2_frequency *vf) 561 568 { ··· 573 570 return 0; 574 571 } 575 572 576 - int cx18_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf) 573 + int cx18_do_s_frequency(struct cx18 *cx, const struct v4l2_frequency *vf) 577 574 { 578 - struct cx18_open_id *id = file2id(file); 579 - struct cx18 *cx = id->cx; 580 - 581 575 if (vf->tuner != 0) 582 576 return -EINVAL; 583 577 ··· 585 585 return 0; 586 586 } 587 587 588 + static int cx18_s_frequency(struct file *file, void *fh, 589 + const struct v4l2_frequency *vf) 590 + { 591 + return cx18_do_s_frequency(file2id(file)->cx, vf); 592 + } 593 + 588 594 static int cx18_g_std(struct file *file, void *fh, v4l2_std_id *std) 589 595 { 590 596 struct cx18 *cx = file2id(file)->cx; ··· 599 593 return 0; 600 594 } 601 595 602 - int cx18_s_std(struct file *file, void *fh, v4l2_std_id std) 596 + int cx18_do_s_std(struct cx18 *cx, v4l2_std_id std) 603 597 { 604 - struct cx18_open_id *id = file2id(file); 605 - struct cx18 *cx = id->cx; 606 - 607 598 if ((std & V4L2_STD_ALL) == 0) 608 599 return -EINVAL; 609 600 ··· 643 640 /* Tuner */ 644 641 cx18_call_all(cx, video, s_std, cx->std); 645 642 return 0; 643 + } 644 + 645 + static int cx18_s_std(struct file *file, void *fh, v4l2_std_id std) 646 + { 647 + return cx18_do_s_std(file2id(file)->cx, std); 646 648 } 647 649 648 650 static int cx18_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt)
+5 -3
drivers/media/pci/cx18/cx18-ioctl.h
··· 12 12 void cx18_expand_service_set(struct v4l2_sliced_vbi_format *fmt, int is_pal); 13 13 u16 cx18_get_service_set(struct v4l2_sliced_vbi_format *fmt); 14 14 void cx18_set_funcs(struct video_device *vdev); 15 - int cx18_s_std(struct file *file, void *fh, v4l2_std_id std); 16 - int cx18_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf); 17 - int cx18_s_input(struct file *file, void *fh, unsigned int inp); 15 + 16 + struct cx18; 17 + int cx18_do_s_std(struct cx18 *cx, v4l2_std_id std); 18 + int cx18_do_s_frequency(struct cx18 *cx, const struct v4l2_frequency *vf); 19 + int cx18_do_s_input(struct cx18 *cx, unsigned int inp);
+4 -7
drivers/media/pci/ivtv/ivtv-driver.c
··· 1247 1247 1248 1248 int ivtv_init_on_first_open(struct ivtv *itv) 1249 1249 { 1250 - struct v4l2_frequency vf; 1251 1250 /* Needed to call ioctls later */ 1252 - struct ivtv_open_id fh; 1251 + struct ivtv_stream *s = &itv->streams[IVTV_ENC_STREAM_TYPE_MPG]; 1252 + struct v4l2_frequency vf; 1253 1253 int fw_retry_count = 3; 1254 1254 int video_input; 1255 - 1256 - fh.itv = itv; 1257 - fh.type = IVTV_ENC_STREAM_TYPE_MPG; 1258 1255 1259 1256 if (test_bit(IVTV_F_I_FAILED, &itv->i_flags)) 1260 1257 return -ENXIO; ··· 1294 1297 1295 1298 video_input = itv->active_input; 1296 1299 itv->active_input++; /* Force update of input */ 1297 - ivtv_s_input(NULL, &fh, video_input); 1300 + ivtv_do_s_input(itv, video_input); 1298 1301 1299 1302 /* Let the VIDIOC_S_STD ioctl do all the work, keeps the code 1300 1303 in one place. */ 1301 1304 itv->std++; /* Force full standard initialization */ 1302 1305 itv->std_out = itv->std; 1303 - ivtv_s_frequency(NULL, &fh, &vf); 1306 + ivtv_do_s_frequency(s, &vf); 1304 1307 1305 1308 if (itv->card->v4l2_capabilities & V4L2_CAP_VIDEO_OUTPUT) { 1306 1309 /* Turn on the TV-out: ivtv_init_mpeg_decoder() initializes
+17 -5
drivers/media/pci/ivtv/ivtv-ioctl.c
··· 974 974 return 0; 975 975 } 976 976 977 - int ivtv_s_input(struct file *file, void *fh, unsigned int inp) 977 + int ivtv_do_s_input(struct ivtv *itv, unsigned int inp) 978 978 { 979 - struct ivtv *itv = file2id(file)->itv; 980 979 v4l2_std_id std; 981 980 int i; 982 981 ··· 1014 1015 ivtv_unmute(itv); 1015 1016 1016 1017 return 0; 1018 + } 1019 + 1020 + static int ivtv_s_input(struct file *file, void *fh, unsigned int inp) 1021 + { 1022 + return ivtv_do_s_input(file2id(file)->itv, inp); 1017 1023 } 1018 1024 1019 1025 static int ivtv_g_output(struct file *file, void *fh, unsigned int *i) ··· 1069 1065 return 0; 1070 1066 } 1071 1067 1072 - int ivtv_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf) 1068 + int ivtv_do_s_frequency(struct ivtv_stream *s, const struct v4l2_frequency *vf) 1073 1069 { 1074 - struct ivtv *itv = file2id(file)->itv; 1075 - struct ivtv_stream *s = &itv->streams[file2id(file)->type]; 1070 + struct ivtv *itv = s->itv; 1076 1071 1077 1072 if (s->vdev.vfl_dir) 1078 1073 return -ENOTTY; ··· 1083 1080 ivtv_call_all(itv, tuner, s_frequency, vf); 1084 1081 ivtv_unmute(itv); 1085 1082 return 0; 1083 + } 1084 + 1085 + static int ivtv_s_frequency(struct file *file, void *fh, 1086 + const struct v4l2_frequency *vf) 1087 + { 1088 + struct ivtv_open_id *id = file2id(file); 1089 + struct ivtv *itv = id->itv; 1090 + 1091 + return ivtv_do_s_frequency(&itv->streams[id->type], vf); 1086 1092 } 1087 1093 1088 1094 static int ivtv_g_std(struct file *file, void *fh, v4l2_std_id *std)
+4 -2
drivers/media/pci/ivtv/ivtv-ioctl.h
··· 9 9 #ifndef IVTV_IOCTL_H 10 10 #define IVTV_IOCTL_H 11 11 12 + struct ivtv; 13 + 12 14 u16 ivtv_service2vbi(int type); 13 15 void ivtv_expand_service_set(struct v4l2_sliced_vbi_format *fmt, int is_pal); 14 16 u16 ivtv_get_service_set(struct v4l2_sliced_vbi_format *fmt); ··· 19 17 void ivtv_set_funcs(struct video_device *vdev); 20 18 void ivtv_s_std_enc(struct ivtv *itv, v4l2_std_id std); 21 19 void ivtv_s_std_dec(struct ivtv *itv, v4l2_std_id std); 22 - int ivtv_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf); 23 - int ivtv_s_input(struct file *file, void *fh, unsigned int inp); 20 + int ivtv_do_s_frequency(struct ivtv_stream *s, const struct v4l2_frequency *vf); 21 + int ivtv_do_s_input(struct ivtv *itv, unsigned int inp); 24 22 25 23 #endif
+14 -1
drivers/media/usb/uvc/uvc_driver.c
··· 167 167 168 168 static struct uvc_streaming *uvc_stream_by_id(struct uvc_device *dev, int id) 169 169 { 170 - struct uvc_streaming *stream; 170 + struct uvc_streaming *stream, *last_stream; 171 + unsigned int count = 0; 171 172 172 173 list_for_each_entry(stream, &dev->streams, list) { 174 + count += 1; 175 + last_stream = stream; 173 176 if (stream->header.bTerminalLink == id) 174 177 return stream; 178 + } 179 + 180 + /* 181 + * If the streaming entity is referenced by an invalid ID, notify the 182 + * user and use heuristics to guess the correct entity. 183 + */ 184 + if (count == 1 && id == UVC_INVALID_ENTITY_ID) { 185 + dev_warn(&dev->intf->dev, 186 + "UVC non compliance: Invalid USB header. The streaming entity has an invalid ID, guessing the correct one."); 187 + return last_stream; 175 188 } 176 189 177 190 return NULL;
+1 -1
drivers/media/v4l2-core/v4l2-subdev.c
··· 2608 2608 int v4l2_subdev_get_privacy_led(struct v4l2_subdev *sd) 2609 2609 { 2610 2610 #if IS_REACHABLE(CONFIG_LEDS_CLASS) 2611 - sd->privacy_led = led_get(sd->dev, "privacy-led"); 2611 + sd->privacy_led = led_get(sd->dev, "privacy"); 2612 2612 if (IS_ERR(sd->privacy_led) && PTR_ERR(sd->privacy_led) != -ENOENT) 2613 2613 return dev_err_probe(sd->dev, PTR_ERR(sd->privacy_led), 2614 2614 "getting privacy LED\n");
+1 -8
drivers/net/bonding/bond_options.c
··· 225 225 { NULL, -1, 0}, 226 226 }; 227 227 228 - static const struct bond_opt_value bond_actor_port_prio_tbl[] = { 229 - { "minval", 0, BOND_VALFLAG_MIN}, 230 - { "maxval", 65535, BOND_VALFLAG_MAX}, 231 - { "default", 255, BOND_VALFLAG_DEFAULT}, 232 - { NULL, -1, 0}, 233 - }; 234 - 235 228 static const struct bond_opt_value bond_ad_user_port_key_tbl[] = { 236 229 { "minval", 0, BOND_VALFLAG_MIN | BOND_VALFLAG_DEFAULT}, 237 230 { "maxval", 1023, BOND_VALFLAG_MAX}, ··· 490 497 .id = BOND_OPT_ACTOR_PORT_PRIO, 491 498 .name = "actor_port_prio", 492 499 .unsuppmodes = BOND_MODE_ALL_EX(BIT(BOND_MODE_8023AD)), 493 - .values = bond_actor_port_prio_tbl, 500 + .flags = BOND_OPTFLAG_RAWVAL, 494 501 .set = bond_option_actor_port_prio_set, 495 502 }, 496 503 [BOND_OPT_AD_ACTOR_SYSTEM] = {
+29 -7
drivers/net/dsa/b53/b53_common.c
··· 371 371 * frames should be flooded or not. 372 372 */ 373 373 b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt); 374 - mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN; 374 + mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IP_MC; 375 375 b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt); 376 376 } else { 377 377 b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt); 378 - mgmt |= B53_IP_MCAST_25; 378 + mgmt |= B53_IP_MC; 379 379 b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt); 380 380 } 381 381 } ··· 1372 1372 else 1373 1373 reg &= ~PORT_OVERRIDE_FULL_DUPLEX; 1374 1374 1375 + reg &= ~(0x3 << GMII_PO_SPEED_S); 1376 + if (is5301x(dev) || is58xx(dev)) 1377 + reg &= ~PORT_OVERRIDE_SPEED_2000M; 1378 + 1375 1379 switch (speed) { 1376 1380 case 2000: 1377 1381 reg |= PORT_OVERRIDE_SPEED_2000M; ··· 1393 1389 dev_err(dev->dev, "unknown speed: %d\n", speed); 1394 1390 return; 1395 1391 } 1392 + 1393 + if (is5325(dev)) 1394 + reg &= ~PORT_OVERRIDE_LP_FLOW_25; 1395 + else 1396 + reg &= ~(PORT_OVERRIDE_RX_FLOW | PORT_OVERRIDE_TX_FLOW); 1396 1397 1397 1398 if (rx_pause) { 1398 1399 if (is5325(dev)) ··· 1602 1593 struct b53_device *dev = dp->ds->priv; 1603 1594 int port = dp->index; 1604 1595 1605 - if (mode == MLO_AN_PHY) 1596 + if (mode == MLO_AN_PHY) { 1597 + if (is63xx(dev) && in_range(port, B53_63XX_RGMII0, 4)) 1598 + b53_force_link(dev, port, false); 1606 1599 return; 1600 + } 1607 1601 1608 1602 if (mode == MLO_AN_FIXED) { 1609 1603 b53_force_link(dev, port, false); ··· 1634 1622 if (mode == MLO_AN_PHY) { 1635 1623 /* Re-negotiate EEE if it was enabled already */ 1636 1624 p->eee_enabled = b53_eee_init(ds, port, phydev); 1625 + 1626 + if (is63xx(dev) && in_range(port, B53_63XX_RGMII0, 4)) { 1627 + b53_force_port_config(dev, port, speed, duplex, 1628 + tx_pause, rx_pause); 1629 + b53_force_link(dev, port, true); 1630 + } 1631 + 1637 1632 return; 1638 1633 } 1639 1634 ··· 2037 2018 do { 2038 2019 b53_read8(dev, B53_ARLIO_PAGE, offset, &reg); 2039 2020 if (!(reg & ARL_SRCH_STDN)) 2040 - return 0; 2021 + return -ENOENT; 2041 2022 2042 2023 if (reg & ARL_SRCH_VLID) 2043 2024 return 0; ··· 2087 2068 int b53_fdb_dump(struct dsa_switch *ds, int port, 2088 2069 dsa_fdb_dump_cb_t *cb, void *data) 2089 2070 { 2071 + unsigned int count = 0, results_per_hit = 1; 2090 2072 struct b53_device *priv = ds->priv; 2091 2073 struct b53_arl_entry results[2]; 2092 - unsigned int count = 0; 2093 2074 u8 offset; 2094 2075 int ret; 2095 2076 u8 reg; 2077 + 2078 + if (priv->num_arl_bins > 2) 2079 + results_per_hit = 2; 2096 2080 2097 2081 mutex_lock(&priv->arl_mutex); 2098 2082 ··· 2118 2096 if (ret) 2119 2097 break; 2120 2098 2121 - if (priv->num_arl_bins > 2) { 2099 + if (results_per_hit == 2) { 2122 2100 b53_arl_search_rd(priv, 1, &results[1]); 2123 2101 ret = b53_fdb_copy(port, &results[1], cb, data); 2124 2102 if (ret) ··· 2128 2106 break; 2129 2107 } 2130 2108 2131 - } while (count++ < b53_max_arl_entries(priv) / 2); 2109 + } while (count++ < b53_max_arl_entries(priv) / results_per_hit); 2132 2110 2133 2111 mutex_unlock(&priv->arl_mutex); 2134 2112
+1 -2
drivers/net/dsa/b53/b53_regs.h
··· 111 111 112 112 /* IP Multicast control (8 bit) */ 113 113 #define B53_IP_MULTICAST_CTRL 0x21 114 - #define B53_IP_MCAST_25 BIT(0) 115 - #define B53_IPMC_FWD_EN BIT(1) 114 + #define B53_IP_MC BIT(0) 116 115 #define B53_UC_FWD_EN BIT(6) 117 116 #define B53_MC_FWD_EN BIT(7) 118 117
+84 -14
drivers/net/dsa/microchip/ksz9477.c
··· 1355 1355 }
 1356 1356 
 1357 1357 
 1358 + #define RESV_MCAST_CNT 8
 1359 + 
 1360 + static u8 reserved_mcast_map[RESV_MCAST_CNT] = { 0, 1, 3, 16, 32, 33, 2, 17 };
 1361 + 
 1358 1362 int ksz9477_enable_stp_addr(struct ksz_device *dev)
 1359 1363 {
 1364 + u8 i, ports, update;
 1360 1365 const u32 *masks;
 1366 + bool override;
 1361 1367 u32 data;
 1362 1368 int ret;
 1363 1369 
 ··· 1372 1366 /* Enable Reserved multicast table */
 1373 1367 ksz_cfg(dev, REG_SW_LUE_CTRL_0, SW_RESV_MCAST_ENABLE, true);
 1374 1368 
 1375 - /* Set the Override bit for forwarding BPDU packet to CPU */
 1376 - ret = ksz_write32(dev, REG_SW_ALU_VAL_B,
 1377 - ALU_V_OVERRIDE | BIT(dev->cpu_port));
 1378 - if (ret < 0)
 1379 - return ret;
 1369 + /* The reserved multicast address table has 8 entries. Each entry has
 1370 + * a default value of which port to forward. It is assumed the host
 1371 + * port is the last port in most of the switches, but that is not the
 1372 + * case for KSZ9477 or maybe KSZ9897. For LAN937X family the default
 1373 + * port is port 5, the first RGMII port. It is okay for LAN9370, a
 1374 + * 5-port switch, but may not be correct for the other 8-port
 1375 + * versions. It is necessary to update the whole table to forward to
 1376 + * the right ports.
 1377 + * Furthermore PTP messages can use a reserved multicast address and
 1378 + * the host will not receive them if this table is not correct.
 1379 + */
 1380 + for (i = 0; i < RESV_MCAST_CNT; i++) {
 1381 + data = reserved_mcast_map[i] <<
 1382 + dev->info->shifts[ALU_STAT_INDEX];
 1383 + data |= ALU_STAT_START |
 1384 + masks[ALU_STAT_DIRECT] |
 1385 + masks[ALU_RESV_MCAST_ADDR] |
 1386 + masks[ALU_STAT_READ];
 1387 + ret = ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
 1388 + if (ret < 0)
 1389 + return ret;
 1380 1390 
 1391 + /* wait to be finished */
 1392 + ret = ksz9477_wait_alu_sta_ready(dev);
 1393 + if (ret < 0)
 1394 + return ret;
 1381 1395 
 1382 - data = ALU_STAT_START | ALU_RESV_MCAST_ADDR | masks[ALU_STAT_WRITE];
 1396 + ret = ksz_read32(dev, REG_SW_ALU_VAL_B, &data);
 1397 + if (ret < 0)
 1398 + return ret;
 1383 1399 
 1384 - ret = ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
 1385 - if (ret < 0)
 1386 - return ret;
 1400 + override = false;
 1401 + ports = data & dev->port_mask;
 1402 + switch (i) {
 1403 + case 0:
 1404 + case 6:
 1405 + /* Change the host port. */
 1406 + update = BIT(dev->cpu_port);
 1407 + override = true;
 1408 + break;
 1409 + case 2:
 1410 + /* Change the host port. */
 1411 + update = BIT(dev->cpu_port);
 1412 + break;
 1413 + case 4:
 1414 + case 5:
 1415 + case 7:
 1416 + /* Skip the host port. */
 1417 + update = dev->port_mask & ~BIT(dev->cpu_port);
 1418 + break;
 1419 + default:
 1420 + update = ports;
 1421 + break;
 1422 + }
 1423 + if (update != ports || override) {
 1424 + data &= ~dev->port_mask;
 1425 + data |= update;
 1426 + /* Set Override bit to receive frame even when port is
 1427 + * closed.
1428 + */ 1429 + if (override) 1430 + data |= ALU_V_OVERRIDE; 1431 + ret = ksz_write32(dev, REG_SW_ALU_VAL_B, data); 1432 + if (ret < 0) 1433 + return ret; 1434 + 1435 + data = reserved_mcast_map[i] << 1436 + dev->info->shifts[ALU_STAT_INDEX]; 1437 + data |= ALU_STAT_START | 1438 + masks[ALU_STAT_DIRECT] | 1439 + masks[ALU_RESV_MCAST_ADDR] | 1440 + masks[ALU_STAT_WRITE]; 1441 + ret = ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data); 1442 + if (ret < 0) 1443 + return ret; 1444 + 1445 + /* wait to be finished */ 1446 + ret = ksz9477_wait_alu_sta_ready(dev); 1447 + if (ret < 0) 1448 + return ret; 1449 + } 1392 1450 } 1393 1451 1394 1452 return 0;
+1 -2
drivers/net/dsa/microchip/ksz9477_reg.h
··· 2 2 /* 3 3 * Microchip KSZ9477 register definitions 4 4 * 5 - * Copyright (C) 2017-2024 Microchip Technology Inc. 5 + * Copyright (C) 2017-2025 Microchip Technology Inc. 6 6 */ 7 7 8 8 #ifndef __KSZ9477_REGS_H ··· 397 397 398 398 #define ALU_RESV_MCAST_INDEX_M (BIT(6) - 1) 399 399 #define ALU_STAT_START BIT(7) 400 - #define ALU_RESV_MCAST_ADDR BIT(1) 401 400 402 401 #define REG_SW_ALU_VAL_A 0x0420 403 402
+4
drivers/net/dsa/microchip/ksz_common.c
··· 808 808 static const u32 ksz9477_masks[] = { 809 809 [ALU_STAT_WRITE] = 0, 810 810 [ALU_STAT_READ] = 1, 811 + [ALU_STAT_DIRECT] = 0, 812 + [ALU_RESV_MCAST_ADDR] = BIT(1), 811 813 [P_MII_TX_FLOW_CTRL] = BIT(5), 812 814 [P_MII_RX_FLOW_CTRL] = BIT(3), 813 815 }; ··· 837 835 static const u32 lan937x_masks[] = { 838 836 [ALU_STAT_WRITE] = 1, 839 837 [ALU_STAT_READ] = 2, 838 + [ALU_STAT_DIRECT] = BIT(3), 839 + [ALU_RESV_MCAST_ADDR] = BIT(2), 840 840 [P_MII_TX_FLOW_CTRL] = BIT(5), 841 841 [P_MII_RX_FLOW_CTRL] = BIT(3), 842 842 };
+2
drivers/net/dsa/microchip/ksz_common.h
··· 294 294 DYNAMIC_MAC_TABLE_TIMESTAMP, 295 295 ALU_STAT_WRITE, 296 296 ALU_STAT_READ, 297 + ALU_STAT_DIRECT, 298 + ALU_RESV_MCAST_ADDR, 297 299 P_MII_TX_FLOW_CTRL, 298 300 P_MII_RX_FLOW_CTRL, 299 301 };
+5 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 12439 12439 return -ENODEV; 12440 12440 } 12441 12441 12442 - static void bnxt_clear_reservations(struct bnxt *bp, bool fw_reset) 12442 + void bnxt_clear_reservations(struct bnxt *bp, bool fw_reset) 12443 12443 { 12444 12444 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 12445 12445 ··· 16892 16892 if (netif_running(dev)) 16893 16893 netif_close(dev); 16894 16894 16895 + if (bnxt_hwrm_func_drv_unrgtr(bp)) { 16896 + pcie_flr(pdev); 16897 + goto shutdown_exit; 16898 + } 16895 16899 bnxt_ptp_clear(bp); 16896 16900 bnxt_clear_int_mode(bp); 16897 16901 pci_disable_device(pdev);
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 2149 2149 static inline void bnxt_bs_trace_check_wrap(struct bnxt_bs_trace_info *bs_trace, 2150 2150 u32 offset) 2151 2151 { 2152 - if (!bs_trace->wrapped && 2152 + if (!bs_trace->wrapped && bs_trace->magic_byte && 2153 2153 *bs_trace->magic_byte != BNXT_TRACE_BUF_MAGIC_BYTE) 2154 2154 bs_trace->wrapped = 1; 2155 2155 bs_trace->last_offset = offset; ··· 2941 2941 int bnxt_update_link(struct bnxt *bp, bool chng_link_state); 2942 2942 int bnxt_hwrm_set_pause(struct bnxt *); 2943 2943 int bnxt_hwrm_set_link_setting(struct bnxt *, bool, bool); 2944 + void bnxt_clear_reservations(struct bnxt *bp, bool fw_reset); 2944 2945 int bnxt_cancel_reservations(struct bnxt *bp, bool fw_reset); 2945 2946 int bnxt_hwrm_alloc_wol_fltr(struct bnxt *bp); 2946 2947 int bnxt_hwrm_free_wol_fltr(struct bnxt *bp);
+3 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
··· 333 333 u32 offset = 0; 334 334 int rc = 0; 335 335 336 + record->max_entries = cpu_to_le32(ctxm->max_entries); 337 + record->entry_size = cpu_to_le32(ctxm->entry_size); 338 + 336 339 rc = bnxt_dbg_hwrm_log_buffer_flush(bp, type, 0, &offset); 337 340 if (rc) 338 341 return; 339 342 340 343 bnxt_bs_trace_check_wrap(bs_trace, offset); 341 - record->max_entries = cpu_to_le32(ctxm->max_entries); 342 - record->entry_size = cpu_to_le32(ctxm->entry_size); 343 344 record->offset = cpu_to_le32(bs_trace->last_offset); 344 345 record->wrapped = bs_trace->wrapped; 345 346 }
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
··· 461 461 rtnl_unlock(); 462 462 break; 463 463 } 464 - bnxt_cancel_reservations(bp, false); 464 + bnxt_clear_reservations(bp, false); 465 465 bnxt_free_ctx_mem(bp, false); 466 466 break; 467 467 }
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 1051 1051 if (ptp->ptp_clock) { 1052 1052 ptp_clock_unregister(ptp->ptp_clock); 1053 1053 ptp->ptp_clock = NULL; 1054 - kfree(ptp->ptp_info.pin_config); 1055 - ptp->ptp_info.pin_config = NULL; 1056 1054 } 1055 + kfree(ptp->ptp_info.pin_config); 1056 + ptp->ptp_info.pin_config = NULL; 1057 1057 } 1058 1058 1059 1059 int bnxt_ptp_init(struct bnxt *bp)
+6 -1
drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
··· 290 290 return -EINVAL; 291 291 } 292 292 293 + if (unlikely(!try_module_get(THIS_MODULE))) { 294 + NL_SET_ERR_MSG_MOD(extack, "Failed to acquire module reference"); 295 + return -ENODEV; 296 + } 297 + 293 298 sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL); 294 299 if (!sa_entry) { 295 300 res = -ENOMEM; 301 + module_put(THIS_MODULE); 296 302 goto out; 297 303 } 298 304 ··· 307 301 sa_entry->esn = 1; 308 302 ch_ipsec_setkey(x, sa_entry); 309 303 x->xso.offload_handle = (unsigned long)sa_entry; 310 - try_module_get(THIS_MODULE); 311 304 out: 312 305 return res; 313 306 }
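The ch_ipsec hunk reorders module pinning ahead of the allocation and actually checks it; previously try_module_get() ran last and its return value was ignored. Sketch of the pattern (state is an illustrative name):

	if (unlikely(!try_module_get(THIS_MODULE)))
		return -ENODEV;

	state = kzalloc(sizeof(*state), GFP_KERNEL);
	if (!state) {
		module_put(THIS_MODULE);	/* undo the pin on every failure path */
		return -ENOMEM;
	}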
+15
drivers/net/ethernet/google/gve/gve_ptp.c
··· 26 26 return 0; 27 27 } 28 28 29 + static int gve_ptp_gettimex64(struct ptp_clock_info *info, 30 + struct timespec64 *ts, 31 + struct ptp_system_timestamp *sts) 32 + { 33 + return -EOPNOTSUPP; 34 + } 35 + 36 + static int gve_ptp_settime64(struct ptp_clock_info *info, 37 + const struct timespec64 *ts) 38 + { 39 + return -EOPNOTSUPP; 40 + } 41 + 29 42 static long gve_ptp_do_aux_work(struct ptp_clock_info *info) 30 43 { 31 44 const struct gve_ptp *ptp = container_of(info, struct gve_ptp, info); ··· 60 47 static const struct ptp_clock_info gve_ptp_caps = { 61 48 .owner = THIS_MODULE, 62 49 .name = "gve clock", 50 + .gettimex64 = gve_ptp_gettimex64, 51 + .settime64 = gve_ptp_settime64, 63 52 .do_aux_work = gve_ptp_do_aux_work, 64 53 }; 65 54
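The gve stubs presumably exist because the PTP character device forwards userspace clock reads and writes straight to these callbacks, so leaving them unset is risky once a clock is registered; returning -EOPNOTSUPP turns the gap into a clean error. A userspace sketch of the visible behaviour (the FD_TO_CLOCKID() encoding below is the usual dynamic-posix-clock formula from the kernel's testptp example):

	#include <fcntl.h>
	#include <stdio.h>
	#include <time.h>

	#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | 3)

	int main(void)
	{
		struct timespec ts;
		int fd = open("/dev/ptp0", O_RDWR);

		if (fd < 0)
			return 1;
		/* With the stubs in place this fails cleanly with EOPNOTSUPP. */
		if (clock_gettime(FD_TO_CLOCKID(fd), &ts))
			perror("clock_gettime");
		return 0;
	}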
+1
drivers/net/ethernet/hisilicon/hibmcge/hbg_common.h
··· 17 17 #define HBG_PCU_CACHE_LINE_SIZE 32 18 18 #define HBG_TX_TIMEOUT_BUF_LEN 1024 19 19 #define HBG_RX_DESCR 0x01 20 + #define HBG_NO_PHY 0xFF 20 21 21 22 #define HBG_PACKET_HEAD_SIZE ((HBG_RX_SKIP1 + HBG_RX_SKIP2 + \ 22 23 HBG_RX_DESCR) * HBG_PCU_CACHE_LINE_SIZE)
+6 -4
drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
··· 136 136 { 137 137 struct net_device *netdev = pci_get_drvdata(pdev); 138 138 139 - netif_device_detach(netdev); 140 - 141 - if (state == pci_channel_io_perm_failure) 139 + if (state == pci_channel_io_perm_failure) { 140 + netif_device_detach(netdev); 142 141 return PCI_ERS_RESULT_DISCONNECT; 142 + } 143 143 144 - pci_disable_device(pdev); 145 144 return PCI_ERS_RESULT_NEED_RESET; 146 145 } 147 146 ··· 148 149 { 149 150 struct net_device *netdev = pci_get_drvdata(pdev); 150 151 struct hbg_priv *priv = netdev_priv(netdev); 152 + 153 + netif_device_detach(netdev); 154 + pci_disable_device(pdev); 151 155 152 156 if (pci_enable_device(pdev)) { 153 157 dev_err(&pdev->dev,
+3
drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
··· 244 244 245 245 hbg_hw_mac_enable(priv, HBG_STATUS_ENABLE); 246 246 247 + if (priv->mac.phy_addr == HBG_NO_PHY) 248 + return; 249 + 247 250 /* wait MAC link up */ 248 251 ret = readl_poll_timeout(priv->io_base + HBG_REG_AN_NEG_STATE_ADDR, 249 252 link_status,
+1
drivers/net/ethernet/hisilicon/hibmcge/hbg_irq.c
··· 32 32 const struct hbg_irq_info *irq_info) 33 33 { 34 34 priv->stats.rx_fifo_less_empty_thrsld_cnt++; 35 + hbg_hw_irq_enable(priv, irq_info->mask, true); 35 36 } 36 37 37 38 #define HBG_IRQ_I(name, handle) \
-1
drivers/net/ethernet/hisilicon/hibmcge/hbg_mdio.c
··· 20 20 #define HBG_MDIO_OP_INTERVAL_US (5 * 1000) 21 21 22 22 #define HBG_NP_LINK_FAIL_RETRY_TIMES 5 23 - #define HBG_NO_PHY 0xFF 24 23 25 24 static void hbg_mdio_set_command(struct hbg_mac *mac, u32 cmd) 26 25 {
+1 -2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 9429 9429 /* this command reads phy id and register at the same time */ 9430 9430 fallthrough; 9431 9431 case SIOCGMIIREG: 9432 - data->val_out = hclge_read_phy_reg(hdev, data->reg_num); 9433 - return 0; 9432 + return hclge_read_phy_reg(hdev, data->reg_num, &data->val_out); 9434 9433 9435 9434 case SIOCSMIIREG: 9436 9435 return hclge_write_phy_reg(hdev, data->reg_num, data->val_in);
+6 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
··· 274 274 phy_stop(phydev); 275 275 } 276 276 277 - u16 hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr) 277 + int hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 *val) 278 278 { 279 279 struct hclge_phy_reg_cmd *req; 280 280 struct hclge_desc desc; ··· 286 286 req->reg_addr = cpu_to_le16(reg_addr); 287 287 288 288 ret = hclge_cmd_send(&hdev->hw, &desc, 1); 289 - if (ret) 289 + if (ret) { 290 290 dev_err(&hdev->pdev->dev, 291 291 "failed to read phy reg, ret = %d.\n", ret); 292 + return ret; 293 + } 292 294 293 - return le16_to_cpu(req->reg_val); 295 + *val = le16_to_cpu(req->reg_val); 296 + return 0; 294 297 } 295 298 296 299 int hclge_write_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 val)
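The hclge change converts a value-returning register read into the usual errno-plus-out-parameter shape, so the SIOCGMIIREG path in hclge_main.c can no longer hand a failed read back to userspace as register data. Generic sketch (read_reg() is a hypothetical stand-in):

	int read_reg(u16 addr, u16 *val);	/* 0 on success, -errno on failure */

	u16 val;
	int err = read_reg(addr, &val);

	if (err)
		return err;	/* propagate instead of returning garbage */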
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.h
··· 13 13 void hclge_mac_disconnect_phy(struct hnae3_handle *handle); 14 14 void hclge_mac_start_phy(struct hclge_dev *hdev); 15 15 void hclge_mac_stop_phy(struct hclge_dev *hdev); 16 - u16 hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr); 16 + int hclge_read_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 *val); 17 17 int hclge_write_phy_reg(struct hclge_dev *hdev, u16 reg_addr, u16 val); 18 18 19 19 #endif
+2 -2
drivers/net/ethernet/intel/Kconfig
··· 146 146 tristate "Intel(R) 10GbE PCI Express adapters support" 147 147 depends on PCI 148 148 depends on PTP_1588_CLOCK_OPTIONAL 149 - select LIBIE_FWLOG 149 + select LIBIE_FWLOG if DEBUG_FS 150 150 select MDIO 151 151 select NET_DEVLINK 152 152 select PLDMFW ··· 298 298 select DIMLIB 299 299 select LIBIE 300 300 select LIBIE_ADMINQ 301 - select LIBIE_FWLOG 301 + select LIBIE_FWLOG if DEBUG_FS 302 302 select NET_DEVLINK 303 303 select PACKING 304 304 select PLDMFW
+33 -2
drivers/net/ethernet/intel/ice/ice_common.c
··· 4382 4382 unsigned int lane; 4383 4383 int err; 4384 4384 4385 + /* E82X does not have sequential IDs, lane number is PF ID. 4386 + * For E825 device, the exception is the variant with external 4387 + * PHY (0x579F), in which there is also 1:1 pf_id -> lane_number 4388 + * mapping. 4389 + */ 4390 + if (hw->mac_type == ICE_MAC_GENERIC || 4391 + hw->device_id == ICE_DEV_ID_E825C_SGMII) 4392 + return hw->pf_id; 4393 + 4385 4394 options = kcalloc(ICE_AQC_PORT_OPT_MAX, sizeof(*options), GFP_KERNEL); 4386 4395 if (!options) 4387 4396 return -ENOMEM; ··· 6506 6497 } 6507 6498 6508 6499 /** 6500 + * ice_get_dest_cgu - get destination CGU dev for given HW 6501 + * @hw: pointer to the HW struct 6502 + * 6503 + * Get CGU client id for CGU register read/write operations. 6504 + * 6505 + * Return: CGU device id to use in SBQ transactions. 6506 + */ 6507 + static enum ice_sbq_dev_id ice_get_dest_cgu(struct ice_hw *hw) 6508 + { 6509 + /* On dual complex E825 only complex 0 has functional CGU powering all 6510 + * the PHYs. 6511 + * SBQ destination device cgu points to CGU on a current complex and to 6512 + * access primary CGU from the secondary complex, the driver should use 6513 + * cgu_peer as a destination device. 6514 + */ 6515 + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825 && ice_is_dual(hw) && 6516 + !ice_is_primary(hw)) 6517 + return ice_sbq_dev_cgu_peer; 6518 + return ice_sbq_dev_cgu; 6519 + } 6520 + 6521 + /** 6509 6522 * ice_read_cgu_reg - Read a CGU register 6510 6523 * @hw: Pointer to the HW struct 6511 6524 * @addr: Register address to read ··· 6541 6510 int ice_read_cgu_reg(struct ice_hw *hw, u32 addr, u32 *val) 6542 6511 { 6543 6512 struct ice_sbq_msg_input cgu_msg = { 6513 + .dest_dev = ice_get_dest_cgu(hw), 6544 6514 .opcode = ice_sbq_msg_rd, 6545 - .dest_dev = ice_sbq_dev_cgu, 6546 6515 .msg_addr_low = addr 6547 6516 }; 6548 6517 int err; ··· 6573 6542 int ice_write_cgu_reg(struct ice_hw *hw, u32 addr, u32 val) 6574 6543 { 6575 6544 struct ice_sbq_msg_input cgu_msg = { 6545 + .dest_dev = ice_get_dest_cgu(hw), 6576 6546 .opcode = ice_sbq_msg_wr, 6577 - .dest_dev = ice_sbq_dev_cgu, 6578 6547 .msg_addr_low = addr, 6579 6548 .data = val 6580 6549 };
+1 -1
drivers/net/ethernet/intel/ice/ice_flex_pipe.c
··· 1479 1479 per_pf = ICE_PROF_MASK_COUNT / hw->dev_caps.num_funcs; 1480 1480 1481 1481 hw->blk[blk].masks.count = per_pf; 1482 - hw->blk[blk].masks.first = hw->pf_id * per_pf; 1482 + hw->blk[blk].masks.first = hw->logical_pf_id * per_pf; 1483 1483 1484 1484 memset(hw->blk[blk].masks.masks, 0, sizeof(hw->blk[blk].masks.masks)); 1485 1485
+1
drivers/net/ethernet/intel/ice/ice_sbq_cmd.h
··· 50 50 ice_sbq_dev_phy_0 = 0x02, 51 51 ice_sbq_dev_cgu = 0x06, 52 52 ice_sbq_dev_phy_0_peer = 0x0D, 53 + ice_sbq_dev_cgu_peer = 0x0F, 53 54 }; 54 55 55 56 enum ice_sbq_msg_opcode {
+1 -1
drivers/net/ethernet/intel/igb/igb_ethtool.c
··· 2281 2281 case ETH_SS_PRIV_FLAGS: 2282 2282 return IGB_PRIV_FLAGS_STR_LEN; 2283 2283 default: 2284 - return -ENOTSUPP; 2284 + return -EOPNOTSUPP; 2285 2285 } 2286 2286 } 2287 2287
+4 -1
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 810 810 case ETH_SS_PRIV_FLAGS: 811 811 return IGC_PRIV_FLAGS_STR_LEN; 812 812 default: 813 - return -ENOTSUPP; 813 + return -EOPNOTSUPP; 814 814 } 815 815 } 816 816 ··· 2093 2093 if (eth_test->flags == ETH_TEST_FL_OFFLINE) { 2094 2094 netdev_info(adapter->netdev, "Offline testing starting"); 2095 2095 set_bit(__IGC_TESTING, &adapter->state); 2096 + 2097 + /* power up PHY for link test */ 2098 + igc_power_up_phy_copper(&adapter->hw); 2096 2099 2097 2100 /* Link test performed before hardware reset so autoneg doesn't 2098 2101 * interfere with test result
-2
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 821 821 #ifdef CONFIG_IXGBE_HWMON 822 822 struct hwmon_buff *ixgbe_hwmon_buff; 823 823 #endif /* CONFIG_IXGBE_HWMON */ 824 - #ifdef CONFIG_DEBUG_FS 825 824 struct dentry *ixgbe_dbg_adapter; 826 - #endif /*CONFIG_DEBUG_FS*/ 827 825 828 826 u8 default_up; 829 827 /* Bitmask indicating in use pools */
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 11507 11507 shutdown_aci: 11508 11508 mutex_destroy(&adapter->hw.aci.lock); 11509 11509 ixgbe_release_hw_control(adapter); 11510 - devlink_free(adapter->devlink); 11511 11510 clean_up_probe: 11512 11511 disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 11513 11512 free_netdev(netdev); 11513 + devlink_free(adapter->devlink); 11514 11514 pci_release_mem_regions(pdev); 11515 11515 if (disable_dev) 11516 11516 pci_disable_device(pdev);
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
··· 641 641 * disabled 642 642 */ 643 643 if (rq->type != PTP_CLK_REQ_PPS || !adapter->ptp_setup_sdp) 644 - return -ENOTSUPP; 644 + return -EOPNOTSUPP; 645 645 646 646 if (on) 647 647 adapter->flags2 |= IXGBE_FLAG2_PTP_PPS_ENABLED;
+2 -4
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
··· 1516 1516 pool->xdp_cnt = numptrs; 1517 1517 pool->xdp = devm_kcalloc(pfvf->dev, 1518 1518 numptrs, sizeof(struct xdp_buff *), GFP_KERNEL); 1519 - if (IS_ERR(pool->xdp)) { 1520 - netdev_err(pfvf->netdev, "Creation of xsk pool failed\n"); 1521 - return PTR_ERR(pool->xdp); 1522 - } 1519 + if (!pool->xdp) 1520 + return -ENOMEM; 1523 1521 } 1524 1522 1525 1523 return 0;
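The otx2 fix above corrects a mismatched error-handling convention: devm_kcalloc() returns NULL on failure, never an ERR_PTR-encoded pointer, so the old IS_ERR() check could never fire and an allocation failure would have slipped through. A minimal sketch of the two conventions side by side (the probe function and its arguments are hypothetical); the netcp hunk further down fixes the same class of bug for knav_dma_open_channel(), and the mdio-airoha hunk adds the missing IS_ERR() check for a pointer that really is ERR_PTR-encoded:

    static int example_probe(struct device *dev, struct device_node *np)
    {
        struct regmap *map;
        u64 *buf;

        /* kmalloc-family allocators signal failure with NULL ... */
        buf = devm_kcalloc(dev, 16, sizeof(*buf), GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        /* ... while ERR_PTR-returning APIs such as
         * device_node_to_regmap() encode an errno in the pointer
         */
        map = device_node_to_regmap(np);
        if (IS_ERR(map))
            return PTR_ERR(map);

        return 0;
    }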
+3
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 634 634 struct mlx5e_shampo_hd { 635 635 struct mlx5e_frag_page *pages; 636 636 u32 hd_per_wq; 637 + u32 hd_per_page; 637 638 u16 hd_per_wqe; 639 + u8 log_hd_per_page; 640 + u8 log_hd_entry_size; 638 641 unsigned long *bitmap; 639 642 u16 pi; 640 643 u16 ci;
+35 -6
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 320 320 err_free: 321 321 kfree(buf); 322 322 err_out: 323 - priv_rx->rq_stats->tls_resync_req_skip++; 324 323 return err; 325 324 } 326 325 ··· 338 339 339 340 if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) { 340 341 mlx5e_ktls_priv_rx_put(priv_rx); 342 + priv_rx->rq_stats->tls_resync_req_skip++; 343 + tls_offload_rx_resync_async_request_cancel(&resync->core); 341 344 return; 342 345 } 343 346 344 347 c = resync->priv->channels.c[priv_rx->rxq]; 345 348 sq = &c->async_icosq; 346 349 347 - if (resync_post_get_progress_params(sq, priv_rx)) 350 + if (resync_post_get_progress_params(sq, priv_rx)) { 351 + priv_rx->rq_stats->tls_resync_req_skip++; 352 + tls_offload_rx_resync_async_request_cancel(&resync->core); 348 353 mlx5e_ktls_priv_rx_put(priv_rx); 354 + } 349 355 } 350 356 351 357 static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync, ··· 429 425 { 430 426 struct mlx5e_ktls_rx_resync_buf *buf = wi->tls_get_params.buf; 431 427 struct mlx5e_ktls_offload_context_rx *priv_rx; 428 + struct tls_offload_resync_async *async_resync; 429 + struct tls_offload_context_rx *rx_ctx; 432 430 u8 tracker_state, auth_state, *ctx; 433 431 struct device *dev; 434 432 u32 hw_seq; 435 433 436 434 priv_rx = buf->priv_rx; 437 435 dev = mlx5_core_dma_dev(sq->channel->mdev); 438 - if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) 436 + rx_ctx = tls_offload_ctx_rx(tls_get_ctx(priv_rx->sk)); 437 + async_resync = rx_ctx->resync_async; 438 + if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) { 439 + priv_rx->rq_stats->tls_resync_req_skip++; 440 + tls_offload_rx_resync_async_request_cancel(async_resync); 439 441 goto out; 442 + } 440 443 441 444 dma_sync_single_for_cpu(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, 442 445 DMA_FROM_DEVICE); ··· 454 443 if (tracker_state != MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING || 455 444 auth_state != MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD) { 456 445 priv_rx->rq_stats->tls_resync_req_skip++; 446 + tls_offload_rx_resync_async_request_cancel(async_resync); 457 447 goto out; 458 448 } 459 449 460 450 hw_seq = MLX5_GET(tls_progress_params, ctx, hw_resync_tcp_sn); 461 - tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq)); 451 + tls_offload_rx_resync_async_request_end(async_resync, 452 + cpu_to_be32(hw_seq)); 462 453 priv_rx->rq_stats->tls_resync_req_end++; 463 454 out: 464 455 mlx5e_ktls_priv_rx_put(priv_rx); ··· 485 472 486 473 resync = &priv_rx->resync; 487 474 mlx5e_ktls_priv_rx_get(priv_rx); 488 - if (unlikely(!queue_work(resync->priv->tls->rx_wq, &resync->work))) 475 + if (unlikely(!queue_work(resync->priv->tls->rx_wq, &resync->work))) { 489 476 mlx5e_ktls_priv_rx_put(priv_rx); 477 + return false; 478 + } 490 479 491 480 return true; 492 481 } ··· 497 482 static void resync_update_sn(struct mlx5e_rq *rq, struct sk_buff *skb) 498 483 { 499 484 struct ethhdr *eth = (struct ethhdr *)(skb->data); 485 + struct tls_offload_resync_async *resync_async; 500 486 struct net_device *netdev = rq->netdev; 501 487 struct net *net = dev_net(netdev); 502 488 struct sock *sk = NULL; ··· 543 527 544 528 seq = th->seq; 545 529 datalen = skb->len - depth; 546 - tls_offload_rx_resync_async_request_start(sk, seq, datalen); 530 + resync_async = tls_offload_ctx_rx(tls_get_ctx(sk))->resync_async; 531 + tls_offload_rx_resync_async_request_start(resync_async, seq, datalen); 547 532 rq->stats->tls_resync_req_start++; 548 533 549 534 unref: ··· 571 554 c = priv->channels.c[priv_rx->rxq]; 572 555 
573 556 resync_handle_seq_match(priv_rx, c); 557 + } 558 + 559 + void 560 + mlx5e_ktls_rx_resync_async_request_cancel(struct mlx5e_icosq_wqe_info *wi) 561 + { 562 + struct mlx5e_ktls_offload_context_rx *priv_rx; 563 + struct mlx5e_ktls_rx_resync_buf *buf; 564 + 565 + buf = wi->tls_get_params.buf; 566 + priv_rx = buf->priv_rx; 567 + priv_rx->rq_stats->tls_resync_req_skip++; 568 + tls_offload_rx_resync_async_request_cancel(&priv_rx->resync.core); 574 569 } 575 570 576 571 /* End of resync section */
+4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h
··· 29 29 void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 30 30 struct mlx5e_tx_wqe_info *wi, 31 31 u32 *dma_fifo_cc); 32 + 33 + void 34 + mlx5e_ktls_rx_resync_async_request_cancel(struct mlx5e_icosq_wqe_info *wi); 35 + 32 36 static inline bool 33 37 mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 34 38 struct mlx5e_tx_wqe_info *wi,
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 2125 2125 if (!size_read) 2126 2126 return i; 2127 2127 2128 - if (size_read == -EINVAL) 2129 - return -EINVAL; 2130 2128 if (size_read < 0) { 2131 2129 NL_SET_ERR_MSG_FMT_MOD( 2132 2130 extack, 2133 2131 "Query module eeprom by page failed, read %u bytes, err %d", 2134 2132 i, size_read); 2135 - return i; 2133 + return size_read; 2136 2134 } 2137 2135 2138 2136 i += size_read;
+19 -5
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 791 791 int node) 792 792 { 793 793 void *wqc = MLX5_ADDR_OF(rqc, rqp->rqc, wq); 794 + u8 log_hd_per_page, log_hd_entry_size; 795 + u16 hd_per_wq, hd_per_wqe; 794 796 u32 hd_pool_size; 795 - u16 hd_per_wq; 796 797 int wq_size; 797 798 int err; 798 799 ··· 816 815 if (err) 817 816 goto err_umr_mkey; 818 817 819 - rq->mpwqe.shampo->hd_per_wqe = 820 - mlx5e_shampo_hd_per_wqe(mdev, params, rqp); 818 + hd_per_wqe = mlx5e_shampo_hd_per_wqe(mdev, params, rqp); 821 819 wq_size = BIT(MLX5_GET(wq, wqc, log_wq_sz)); 822 - hd_pool_size = (rq->mpwqe.shampo->hd_per_wqe * wq_size) / 823 - MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; 820 + 821 + BUILD_BUG_ON(MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE > PAGE_SHIFT); 822 + if (hd_per_wqe >= MLX5E_SHAMPO_WQ_HEADER_PER_PAGE) { 823 + log_hd_per_page = MLX5E_SHAMPO_LOG_WQ_HEADER_PER_PAGE; 824 + log_hd_entry_size = MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; 825 + } else { 826 + log_hd_per_page = order_base_2(hd_per_wqe); 827 + log_hd_entry_size = order_base_2(PAGE_SIZE / hd_per_wqe); 828 + } 829 + 830 + rq->mpwqe.shampo->hd_per_wqe = hd_per_wqe; 831 + rq->mpwqe.shampo->hd_per_page = BIT(log_hd_per_page); 832 + rq->mpwqe.shampo->log_hd_per_page = log_hd_per_page; 833 + rq->mpwqe.shampo->log_hd_entry_size = log_hd_entry_size; 834 + 835 + hd_pool_size = (hd_per_wqe * wq_size) >> log_hd_per_page; 824 836 825 837 if (netif_rxq_has_unreadable_mp(rq->netdev, rq->ix)) { 826 838 /* Separate page pool for shampo headers */
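A worked example of the new sizing logic, assuming PAGE_SIZE == 4096 and, for illustration only, a small hd_per_wqe of 32 (below MLX5E_SHAMPO_WQ_HEADER_PER_PAGE, so the else branch is taken):

    log_hd_per_page   = order_base_2(32);        /* = 5 -> 32 headers/page */
    log_hd_entry_size = order_base_2(4096 / 32); /* = 7 -> 128 B per entry */
    /* 32 entries * 128 bytes == 4096: one page now holds exactly one
     * WQE's worth of headers instead of being sized for the maximum,
     * and hd_pool_size = (hd_per_wqe * wq_size) >> 5 shrinks to match.
     */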
+43 -33
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 648 648 umr_wqe->hdr.uctrl.mkey_mask = cpu_to_be64(MLX5_MKEY_MASK_FREE); 649 649 } 650 650 651 - static struct mlx5e_frag_page *mlx5e_shampo_hd_to_frag_page(struct mlx5e_rq *rq, int header_index) 651 + static struct mlx5e_frag_page *mlx5e_shampo_hd_to_frag_page(struct mlx5e_rq *rq, 652 + int header_index) 652 653 { 653 - BUILD_BUG_ON(MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE > PAGE_SHIFT); 654 + struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; 654 655 655 - return &rq->mpwqe.shampo->pages[header_index >> MLX5E_SHAMPO_LOG_WQ_HEADER_PER_PAGE]; 656 + return &shampo->pages[header_index >> shampo->log_hd_per_page]; 656 657 } 657 658 658 - static u64 mlx5e_shampo_hd_offset(int header_index) 659 + static u64 mlx5e_shampo_hd_offset(struct mlx5e_rq *rq, int header_index) 659 660 { 660 - return (header_index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) << 661 - MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE; 661 + struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; 662 + u32 hd_per_page = shampo->hd_per_page; 663 + 664 + return (header_index & (hd_per_page - 1)) << shampo->log_hd_entry_size; 662 665 } 663 666 664 667 static void mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index); ··· 674 671 u16 pi, header_offset, err, wqe_bbs; 675 672 u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey; 676 673 struct mlx5e_umr_wqe *umr_wqe; 677 - int headroom, i = 0; 674 + int headroom, i; 678 675 679 676 headroom = rq->buff.headroom; 680 677 wqe_bbs = MLX5E_KSM_UMR_WQEBBS(ksm_entries); ··· 682 679 umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi); 683 680 build_ksm_umr(sq, umr_wqe, shampo->mkey_be, index, ksm_entries); 684 681 685 - WARN_ON_ONCE(ksm_entries & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)); 686 - while (i < ksm_entries) { 687 - struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index); 682 + for (i = 0; i < ksm_entries; i++, index++) { 683 + struct mlx5e_frag_page *frag_page; 688 684 u64 addr; 689 685 690 - err = mlx5e_page_alloc_fragmented(rq->hd_page_pool, frag_page); 691 - if (unlikely(err)) 692 - goto err_unmap; 686 + frag_page = mlx5e_shampo_hd_to_frag_page(rq, index); 687 + header_offset = mlx5e_shampo_hd_offset(rq, index); 688 + if (!header_offset) { 689 + err = mlx5e_page_alloc_fragmented(rq->hd_page_pool, 690 + frag_page); 691 + if (err) 692 + goto err_unmap; 693 + } 693 694 694 695 addr = page_pool_get_dma_addr_netmem(frag_page->netmem); 695 - 696 - for (int j = 0; j < MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; j++) { 697 - header_offset = mlx5e_shampo_hd_offset(index++); 698 - 699 - umr_wqe->inline_ksms[i++] = (struct mlx5_ksm) { 700 - .key = cpu_to_be32(lkey), 701 - .va = cpu_to_be64(addr + header_offset + headroom), 702 - }; 703 - } 696 + umr_wqe->inline_ksms[i] = (struct mlx5_ksm) { 697 + .key = cpu_to_be32(lkey), 698 + .va = cpu_to_be64(addr + header_offset + headroom), 699 + }; 704 700 } 705 701 706 702 sq->db.wqe_info[pi] = (struct mlx5e_icosq_wqe_info) { ··· 715 713 return 0; 716 714 717 715 err_unmap: 718 - while (--i) { 716 + while (--i >= 0) { 719 717 --index; 720 - header_offset = mlx5e_shampo_hd_offset(index); 718 + header_offset = mlx5e_shampo_hd_offset(rq, index); 721 719 if (!header_offset) { 722 720 struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index); 723 721 ··· 737 735 struct mlx5e_icosq *sq = rq->icosq; 738 736 int i, err, max_ksm_entries, len; 739 737 740 - max_ksm_entries = ALIGN_DOWN(MLX5E_MAX_KSM_PER_WQE(rq->mdev), 741 - MLX5E_SHAMPO_WQ_HEADER_PER_PAGE); 738 + max_ksm_entries = MLX5E_MAX_KSM_PER_WQE(rq->mdev); 742 739 ksm_entries = 
bitmap_find_window(shampo->bitmap, 743 740 shampo->hd_per_wqe, 744 741 shampo->hd_per_wq, shampo->pi); 745 - ksm_entries = ALIGN_DOWN(ksm_entries, MLX5E_SHAMPO_WQ_HEADER_PER_PAGE); 742 + ksm_entries = ALIGN_DOWN(ksm_entries, shampo->hd_per_page); 746 743 if (!ksm_entries) 747 744 return 0; 748 745 ··· 859 858 { 860 859 struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; 861 860 862 - if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) { 861 + if (((header_index + 1) & (shampo->hd_per_page - 1)) == 0) { 863 862 struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index); 864 863 865 864 mlx5e_page_release_fragmented(rq->hd_page_pool, frag_page); ··· 1037 1036 netdev_WARN_ONCE(cq->netdev, 1038 1037 "Bad OP in ICOSQ CQE: 0x%x\n", 1039 1038 get_cqe_opcode(cqe)); 1039 + #ifdef CONFIG_MLX5_EN_TLS 1040 + if (wi->wqe_type == MLX5E_ICOSQ_WQE_GET_PSV_TLS) 1041 + mlx5e_ktls_rx_resync_async_request_cancel(wi); 1042 + #endif 1040 1043 mlx5e_dump_error_cqe(&sq->cq, sq->sqn, 1041 1044 (struct mlx5_err_cqe *)cqe); 1042 1045 mlx5_wq_cyc_wqe_dump(&sq->wq, ci, wi->num_wqebbs); ··· 1226 1221 static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index) 1227 1222 { 1228 1223 struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index); 1229 - u16 head_offset = mlx5e_shampo_hd_offset(header_index) + rq->buff.headroom; 1224 + u16 head_offset = mlx5e_shampo_hd_offset(rq, header_index); 1225 + void *addr = netmem_address(frag_page->netmem); 1230 1226 1231 - return netmem_address(frag_page->netmem) + head_offset; 1227 + return addr + head_offset + rq->buff.headroom; 1232 1228 } 1233 1229 1234 1230 static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4) ··· 2269 2263 struct mlx5_cqe64 *cqe, u16 header_index) 2270 2264 { 2271 2265 struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index); 2272 - u16 head_offset = mlx5e_shampo_hd_offset(header_index); 2266 + u16 head_offset = mlx5e_shampo_hd_offset(rq, header_index); 2267 + struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; 2273 2268 u16 head_size = cqe->shampo.header_size; 2274 2269 u16 rx_headroom = rq->buff.headroom; 2275 2270 struct sk_buff *skb = NULL; ··· 2286 2279 data = hdr + rx_headroom; 2287 2280 frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size); 2288 2281 2289 - if (likely(frag_size <= BIT(MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE))) { 2282 + if (likely(frag_size <= BIT(shampo->log_hd_entry_size))) { 2290 2283 /* build SKB around header */ 2291 2284 dma_sync_single_range_for_cpu(rq->pdev, dma_addr, 0, frag_size, rq->buff.map_dir); 2292 2285 net_prefetchw(hdr); ··· 2359 2352 { 2360 2353 int nr_frags = skb_shinfo(skb)->nr_frags; 2361 2354 2362 - return PAGE_SIZE * nr_frags + data_bcnt <= GRO_LEGACY_MAX_SIZE; 2355 + if (PAGE_SIZE >= GRO_LEGACY_MAX_SIZE) 2356 + return skb->len + data_bcnt <= GRO_LEGACY_MAX_SIZE; 2357 + else 2358 + return PAGE_SIZE * nr_frags + data_bcnt <= GRO_LEGACY_MAX_SIZE; 2363 2359 } 2364 2360 2365 2361 static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
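With the per-queue shifts introduced in en_main.c above, the header lookup helpers reduce to shift-and-mask arithmetic. Continuing the example values (log_hd_per_page = 5, hd_per_page = 32, log_hd_entry_size = 7), header_index 37 resolves as:

    page_idx = 37 >> 5;          /* = 1: second frag page           */
    offset   = (37 & 31) << 7;   /* = 5 * 128 = 640 bytes into page */
    /* rq->buff.headroom is then added on top of 'offset' by callers */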
-1
drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
··· 66 66 esw->fdb_table.legacy.addr_grp = NULL; 67 67 esw->fdb_table.legacy.allmulti_grp = NULL; 68 68 esw->fdb_table.legacy.promisc_grp = NULL; 69 - atomic64_set(&esw->user_count, 0); 70 69 } 71 70 72 71 static int esw_create_legacy_fdb_table(struct mlx5_eswitch *esw)
-1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1978 1978 /* Holds true only as long as DMFS is the default */ 1979 1979 mlx5_flow_namespace_set_mode(esw->fdb_table.offloads.ns, 1980 1980 MLX5_FLOW_STEERING_MODE_DMFS); 1981 - atomic64_set(&esw->user_count, 0); 1982 1981 } 1983 1982 1984 1983 static int esw_get_nr_ft_offloads_steering_src_ports(struct mlx5_eswitch *esw)
+9 -9
drivers/net/ethernet/microchip/lan966x/lan966x_ethtool.c
··· 294 294 { 295 295 int i, j; 296 296 297 - mutex_lock(&lan966x->stats_lock); 297 + spin_lock(&lan966x->stats_lock); 298 298 299 299 for (i = 0; i < lan966x->num_phys_ports; i++) { 300 300 uint idx = i * lan966x->num_stats; ··· 310 310 } 311 311 } 312 312 313 - mutex_unlock(&lan966x->stats_lock); 313 + spin_unlock(&lan966x->stats_lock); 314 314 } 315 315 316 316 static int lan966x_get_sset_count(struct net_device *dev, int sset) ··· 365 365 366 366 idx = port->chip_port * lan966x->num_stats; 367 367 368 - mutex_lock(&lan966x->stats_lock); 368 + spin_lock(&lan966x->stats_lock); 369 369 370 370 mac_stats->FramesTransmittedOK = 371 371 lan966x->stats[idx + SYS_COUNT_TX_UC] + ··· 416 416 lan966x->stats[idx + SYS_COUNT_RX_LONG] + 417 417 lan966x->stats[idx + SYS_COUNT_RX_PMAC_LONG]; 418 418 419 - mutex_unlock(&lan966x->stats_lock); 419 + spin_unlock(&lan966x->stats_lock); 420 420 } 421 421 422 422 static const struct ethtool_rmon_hist_range lan966x_rmon_ranges[] = { ··· 442 442 443 443 idx = port->chip_port * lan966x->num_stats; 444 444 445 - mutex_lock(&lan966x->stats_lock); 445 + spin_lock(&lan966x->stats_lock); 446 446 447 447 rmon_stats->undersize_pkts = 448 448 lan966x->stats[idx + SYS_COUNT_RX_SHORT] + ··· 500 500 lan966x->stats[idx + SYS_COUNT_TX_SZ_1024_1526] + 501 501 lan966x->stats[idx + SYS_COUNT_TX_PMAC_SZ_1024_1526]; 502 502 503 - mutex_unlock(&lan966x->stats_lock); 503 + spin_unlock(&lan966x->stats_lock); 504 504 505 505 *ranges = lan966x_rmon_ranges; 506 506 } ··· 603 603 604 604 idx = port->chip_port * lan966x->num_stats; 605 605 606 - mutex_lock(&lan966x->stats_lock); 606 + spin_lock(&lan966x->stats_lock); 607 607 608 608 stats->rx_bytes = lan966x->stats[idx + SYS_COUNT_RX_OCT] + 609 609 lan966x->stats[idx + SYS_COUNT_RX_PMAC_OCT]; ··· 685 685 686 686 stats->collisions = lan966x->stats[idx + SYS_COUNT_TX_COL]; 687 687 688 - mutex_unlock(&lan966x->stats_lock); 688 + spin_unlock(&lan966x->stats_lock); 689 689 } 690 690 691 691 int lan966x_stats_init(struct lan966x *lan966x) ··· 701 701 return -ENOMEM; 702 702 703 703 /* Init stats worker */ 704 - mutex_init(&lan966x->stats_lock); 704 + spin_lock_init(&lan966x->stats_lock); 705 705 snprintf(queue_name, sizeof(queue_name), "%s-stats", 706 706 dev_name(lan966x->dev)); 707 707 lan966x->stats_queue = create_singlethread_workqueue(queue_name);
-2
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
··· 1261 1261 1262 1262 cancel_delayed_work_sync(&lan966x->stats_work); 1263 1263 destroy_workqueue(lan966x->stats_queue); 1264 - mutex_destroy(&lan966x->stats_lock); 1265 1264 1266 1265 debugfs_remove_recursive(lan966x->debugfs_root); 1267 1266 ··· 1278 1279 1279 1280 cancel_delayed_work_sync(&lan966x->stats_work); 1280 1281 destroy_workqueue(lan966x->stats_queue); 1281 - mutex_destroy(&lan966x->stats_lock); 1282 1282 1283 1283 lan966x_mac_purge_entries(lan966x); 1284 1284 lan966x_mdb_deinit(lan966x);
+2 -2
drivers/net/ethernet/microchip/lan966x/lan966x_main.h
··· 295 295 const struct lan966x_stat_layout *stats_layout; 296 296 u32 num_stats; 297 297 298 - /* workqueue for reading stats */ 299 - struct mutex stats_lock; 298 + /* lock for reading stats */ 299 + spinlock_t stats_lock; 300 300 u64 *stats; 301 301 struct delayed_work stats_work; 302 302 struct workqueue_struct *stats_queue;
+4 -4
drivers/net/ethernet/microchip/lan966x/lan966x_vcap_impl.c
··· 403 403 u32 counter; 404 404 405 405 id = id & 0xff; /* counter limit */ 406 - mutex_lock(&lan966x->stats_lock); 406 + spin_lock(&lan966x->stats_lock); 407 407 lan_wr(SYS_STAT_CFG_STAT_VIEW_SET(id), lan966x, SYS_STAT_CFG); 408 408 counter = lan_rd(lan966x, SYS_CNT(LAN966X_STAT_ESDX_GRN_PKTS)) + 409 409 lan_rd(lan966x, SYS_CNT(LAN966X_STAT_ESDX_YEL_PKTS)); 410 - mutex_unlock(&lan966x->stats_lock); 410 + spin_unlock(&lan966x->stats_lock); 411 411 if (counter) 412 412 admin->cache.counter = counter; 413 413 } ··· 417 417 { 418 418 id = id & 0xff; /* counter limit */ 419 419 420 - mutex_lock(&lan966x->stats_lock); 420 + spin_lock(&lan966x->stats_lock); 421 421 lan_wr(SYS_STAT_CFG_STAT_VIEW_SET(id), lan966x, SYS_STAT_CFG); 422 422 lan_wr(0, lan966x, SYS_CNT(LAN966X_STAT_ESDX_GRN_BYTES)); 423 423 lan_wr(admin->cache.counter, lan966x, 424 424 SYS_CNT(LAN966X_STAT_ESDX_GRN_PKTS)); 425 425 lan_wr(0, lan966x, SYS_CNT(LAN966X_STAT_ESDX_YEL_BYTES)); 426 426 lan_wr(0, lan966x, SYS_CNT(LAN966X_STAT_ESDX_YEL_PKTS)); 427 - mutex_unlock(&lan966x->stats_lock); 427 + spin_unlock(&lan966x->stats_lock); 428 428 } 429 429 430 430 static void lan966x_vcap_cache_write(struct net_device *dev,
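The four lan966x hunks above convert stats_lock from a mutex to a spinlock, presumably so the counter cache can also be taken from contexts that are not allowed to sleep. The resulting pattern, sketched on the existing fields:

    spin_lock_init(&lan966x->stats_lock);   /* at init; no destroy needed */

    spin_lock(&lan966x->stats_lock);
    /* short, non-sleeping critical section over the shared counters */
    lan966x->stats[idx] = counter;
    spin_unlock(&lan966x->stats_lock);

Unlike the mutex it replaces, nothing inside the section may block, a constraint the short register read/copy sections above already satisfy.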
+4 -2
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 2557 2557 err = nfp_net_tlv_caps_parse(&nn->pdev->dev, nn->dp.ctrl_bar, 2558 2558 &nn->tlv_caps); 2559 2559 if (err) 2560 - goto err_free_nn; 2560 + goto err_free_xsk_pools; 2561 2561 2562 2562 err = nfp_ccm_mbox_alloc(nn); 2563 2563 if (err) 2564 - goto err_free_nn; 2564 + goto err_free_xsk_pools; 2565 2565 2566 2566 return nn; 2567 2567 2568 + err_free_xsk_pools: 2569 + kfree(nn->dp.xsk_pools); 2568 2570 err_free_nn: 2569 2571 if (nn->dp.netdev) 2570 2572 free_netdev(nn->dp.netdev);
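The nfp fix routes the two later failures to a new label so the xsk_pools array is no longer leaked; this is the usual goto-unwind rule that each error label releases exactly what was acquired before the failing step, in reverse order. A generic sketch with hypothetical helpers:

    a = alloc_a();
    if (!a)
        return -ENOMEM;

    b = alloc_b();
    if (!b) {
        err = -ENOMEM;
        goto err_free_a;    /* only 'a' exists at this point */
    }

    err = setup_c(a, b);
    if (err)
        goto err_free_b;    /* release 'b', then fall through to 'a' */

    return 0;

    err_free_b:
        free_b(b);
    err_free_a:
        free_a(a);
        return err;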
+17 -17
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 29 29 30 30 static inline void ionic_txq_post(struct ionic_queue *q, bool ring_dbell) 31 31 { 32 + /* Ensure TX descriptor writes reach memory before NIC reads them. 33 + * Prevents device from fetching stale descriptors. 34 + */ 35 + dma_wmb(); 32 36 ionic_q_post(q, ring_dbell); 33 37 } 34 38 ··· 1448 1444 bool encap; 1449 1445 int err; 1450 1446 1451 - desc_info = &q->tx_info[q->head_idx]; 1452 - 1453 - if (unlikely(ionic_tx_map_skb(q, skb, desc_info))) 1454 - return -EIO; 1455 - 1456 - len = skb->len; 1457 - mss = skb_shinfo(skb)->gso_size; 1458 - outer_csum = (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE | 1459 - SKB_GSO_GRE_CSUM | 1460 - SKB_GSO_IPXIP4 | 1461 - SKB_GSO_IPXIP6 | 1462 - SKB_GSO_UDP_TUNNEL | 1463 - SKB_GSO_UDP_TUNNEL_CSUM)); 1464 1447 has_vlan = !!skb_vlan_tag_present(skb); 1465 1448 vlan_tci = skb_vlan_tag_get(skb); 1466 1449 encap = skb->encapsulation; ··· 1461 1470 err = ionic_tx_tcp_inner_pseudo_csum(skb); 1462 1471 else 1463 1472 err = ionic_tx_tcp_pseudo_csum(skb); 1464 - if (unlikely(err)) { 1465 - /* clean up mapping from ionic_tx_map_skb */ 1466 - ionic_tx_desc_unmap_bufs(q, desc_info); 1473 + if (unlikely(err)) 1467 1474 return err; 1468 - } 1469 1475 1476 + desc_info = &q->tx_info[q->head_idx]; 1477 + if (unlikely(ionic_tx_map_skb(q, skb, desc_info))) 1478 + return -EIO; 1479 + 1480 + len = skb->len; 1481 + mss = skb_shinfo(skb)->gso_size; 1482 + outer_csum = (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE | 1483 + SKB_GSO_GRE_CSUM | 1484 + SKB_GSO_IPXIP4 | 1485 + SKB_GSO_IPXIP6 | 1486 + SKB_GSO_UDP_TUNNEL | 1487 + SKB_GSO_UDP_TUNNEL_CSUM)); 1470 1488 if (encap) 1471 1489 hdrlen = skb_inner_tcp_all_headers(skb); 1472 1490 else
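The dma_wmb() added to ionic_txq_post() is the canonical descriptor-publishing barrier (see Documentation/memory-barriers.txt): stores that fill a descriptor in coherent DMA memory must complete before the store that lets the device consume it. Sketched with a hypothetical descriptor layout, since ionic's real post path has more moving parts:

    desc->addr  = cpu_to_le64(dma_addr);    /* fill payload fields ...  */
    desc->len   = cpu_to_le16(len);
    dma_wmb();                              /* ... then publish, so the */
    desc->flags = cpu_to_le16(DESC_OWNED);  /* device never sees stale  */
                                            /* address/length fields    */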
+4
drivers/net/ethernet/sfc/mae.c
··· 1090 1090 kfree(mport); 1091 1091 } 1092 1092 1093 + /* 1094 + * Takes ownership of @desc, even if it returns an error 1095 + */ 1093 1096 static int efx_mae_process_mport(struct efx_nic *efx, 1094 1097 struct mae_mport_desc *desc) 1095 1098 { ··· 1103 1100 if (!IS_ERR_OR_NULL(mport)) { 1104 1101 netif_err(efx, drv, efx->net_dev, 1105 1102 "mport with id %u does exist!!!\n", desc->mport_id); 1103 + kfree(desc); 1106 1104 return -EEXIST; 1107 1105 } 1108 1106
+3
drivers/net/ethernet/spacemit/k1_emac.c
··· 1441 1441 struct emac_priv *priv = netdev_priv(dev); 1442 1442 u8 fc = 0; 1443 1443 1444 + if (!netif_running(dev)) 1445 + return -ENETDOWN; 1446 + 1444 1447 priv->flow_control_autoneg = pause->autoneg; 1445 1448 1446 1449 if (pause->autoneg) {
+14 -18
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4089 4089 static bool stmmac_vlan_insert(struct stmmac_priv *priv, struct sk_buff *skb, 4090 4090 struct stmmac_tx_queue *tx_q) 4091 4091 { 4092 - u16 tag = 0x0, inner_tag = 0x0; 4093 - u32 inner_type = 0x0; 4094 4092 struct dma_desc *p; 4093 + u16 tag = 0x0; 4095 4094 4096 - if (!priv->dma_cap.vlins) 4095 + if (!priv->dma_cap.vlins || !skb_vlan_tag_present(skb)) 4097 4096 return false; 4098 - if (!skb_vlan_tag_present(skb)) 4099 - return false; 4100 - if (skb->vlan_proto == htons(ETH_P_8021AD)) { 4101 - inner_tag = skb_vlan_tag_get(skb); 4102 - inner_type = STMMAC_VLAN_INSERT; 4103 - } 4104 4097 4105 4098 tag = skb_vlan_tag_get(skb); 4106 4099 ··· 4102 4109 else 4103 4110 p = &tx_q->dma_tx[tx_q->cur_tx]; 4104 4111 4105 - if (stmmac_set_desc_vlan_tag(priv, p, tag, inner_tag, inner_type)) 4112 + if (stmmac_set_desc_vlan_tag(priv, p, tag, 0x0, 0x0)) 4106 4113 return false; 4107 4114 4108 4115 stmmac_set_tx_owner(priv, p); ··· 4500 4507 bool has_vlan, set_ic; 4501 4508 int entry, first_tx; 4502 4509 dma_addr_t des; 4510 + u32 sdu_len; 4503 4511 4504 4512 tx_q = &priv->dma_conf.tx_queue[queue]; 4505 4513 txq_stats = &priv->xstats.txq_stats[queue]; ··· 4518 4524 } 4519 4525 4520 4526 if (priv->est && priv->est->enable && 4521 - priv->est->max_sdu[queue] && 4522 - skb->len > priv->est->max_sdu[queue]){ 4523 - priv->xstats.max_sdu_txq_drop[queue]++; 4524 - goto max_sdu_err; 4527 + priv->est->max_sdu[queue]) { 4528 + sdu_len = skb->len; 4529 + /* Add VLAN tag length if VLAN tag insertion offload is requested */ 4530 + if (priv->dma_cap.vlins && skb_vlan_tag_present(skb)) 4531 + sdu_len += VLAN_HLEN; 4532 + if (sdu_len > priv->est->max_sdu[queue]) { 4533 + priv->xstats.max_sdu_txq_drop[queue]++; 4534 + goto max_sdu_err; 4535 + } 4525 4536 } 4526 4537 4527 4538 if (unlikely(stmmac_tx_avail(priv, queue) < nfrags + 1)) { ··· 7572 7573 ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; 7573 7574 ndev->features |= NETIF_F_HW_VLAN_STAG_FILTER; 7574 7575 } 7575 - if (priv->dma_cap.vlins) { 7576 + if (priv->dma_cap.vlins) 7576 7577 ndev->features |= NETIF_F_HW_VLAN_CTAG_TX; 7577 - if (priv->dma_cap.dvlan) 7578 - ndev->features |= NETIF_F_HW_VLAN_STAG_TX; 7579 - } 7580 7578 #endif 7581 7579 priv->msg_enable = netif_msg_init(debug, default_msg_level); 7582 7580
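The max_sdu change accounts for the VLAN tag the MAC itself inserts: the tc-taprio queueMaxSDU limit applies to the frame as transmitted, not to the skb the stack hands over. A worked check, assuming queueMaxSDU = 1500 (the condition name is illustrative; the driver actually tests priv->dma_cap.vlins && skb_vlan_tag_present(skb)):

    sdu_len = skb->len;              /* e.g. 1498 bytes                */
    if (vlan_tag_will_be_inserted)   /* hw offload adds VLAN_HLEN (4)  */
        sdu_len += VLAN_HLEN;        /* 1498 + 4 = 1502                */
    if (sdu_len > 1500) {            /* over the limit on the wire     */
        priv->xstats.max_sdu_txq_drop[queue]++;
        goto max_sdu_err;
    }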
+2 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 981 981 if (qopt->cmd == TAPRIO_CMD_DESTROY) 982 982 goto disable; 983 983 984 - if (qopt->num_entries >= dep) 984 + if (qopt->num_entries > dep) 985 985 return -EINVAL; 986 986 if (!qopt->cycle_time) 987 987 return -ERANGE; ··· 1012 1012 s64 delta_ns = qopt->entries[i].interval; 1013 1013 u32 gates = qopt->entries[i].gate_mask; 1014 1014 1015 - if (delta_ns > GENMASK(wid, 0)) 1015 + if (delta_ns > GENMASK(wid - 1, 0)) 1016 1016 return -ERANGE; 1017 1017 if (gates > GENMASK(31 - wid, 0)) 1018 1018 return -ERANGE;
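Both stmmac_tc fixes are off-by-one corrections: a gate list with exactly 'dep' entries fits hardware of depth 'dep', so only 'num_entries > dep' should be rejected, and an interval field 'wid' bits wide holds at most GENMASK(wid - 1, 0). Worked with wid == 24:

    /* GENMASK(24, 0) == 0x1ffffff: 25 bits, one too many    */
    /* GENMASK(23, 0) == 0x0ffffff: the true 24-bit maximum  */
    if (delta_ns > GENMASK(wid - 1, 0))
        return -ERANGE;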
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_vlan.c
··· 212 212 213 213 value = readl(ioaddr + VLAN_INCL); 214 214 value |= VLAN_VLTI; 215 - value |= VLAN_CSVL; /* Only use SVLAN */ 215 + value &= ~VLAN_CSVL; /* Only use CVLAN */ 216 216 value &= ~VLAN_VLC; 217 217 value |= (type << VLAN_VLC_SHIFT) & VLAN_VLC; 218 218 writel(value, ioaddr + VLAN_INCL);
+7
drivers/net/ethernet/ti/icssg/icssg_config.c
··· 66 66 #define FDB_GEN_CFG1 0x60 67 67 #define SMEM_VLAN_OFFSET 8 68 68 #define SMEM_VLAN_OFFSET_MASK GENMASK(25, 8) 69 + #define FDB_HASH_SIZE_MASK GENMASK(6, 3) 70 + #define FDB_HASH_SIZE_SHIFT 3 71 + #define FDB_HASH_SIZE 3 69 72 70 73 #define FDB_GEN_CFG2 0x64 71 74 #define FDB_VLAN_EN BIT(6) ··· 466 463 /* Set VLAN TABLE address base */ 467 464 regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK, 468 465 addr << SMEM_VLAN_OFFSET); 466 + regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, FDB_HASH_SIZE_MASK, 467 + FDB_HASH_SIZE << FDB_HASH_SIZE_SHIFT); 469 468 /* Set enable VLAN aware mode, and FDBs for all PRUs */ 470 469 regmap_write(prueth->miig_rt, FDB_GEN_CFG2, (FDB_PRU0_EN | FDB_PRU1_EN | FDB_HOST_EN)); 471 470 prueth->vlan_tbl = (struct prueth_vlan_tbl __force *)(prueth->shram.va + ··· 489 484 /* Set VLAN TABLE address base */ 490 485 regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK, 491 486 addr << SMEM_VLAN_OFFSET); 487 + regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, FDB_HASH_SIZE_MASK, 488 + FDB_HASH_SIZE << FDB_HASH_SIZE_SHIFT); 492 489 /* Set enable VLAN aware mode, and FDBs for all PRUs */ 493 490 regmap_write(prueth->miig_rt, FDB_GEN_CFG2, FDB_EN_ALL); 494 491 prueth->vlan_tbl = (struct prueth_vlan_tbl __force *)(prueth->shram.va +
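The new FDB hash-size programming is a standard masked read-modify-write: regmap_update_bits() touches only the bits selected by the mask, leaving the VLAN offset field in the same register intact. With FDB_HASH_SIZE_MASK = GENMASK(6, 3) = 0x78 and FDB_HASH_SIZE = 3:

    regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1,
                       FDB_HASH_SIZE_MASK,                    /* 0x78 */
                       FDB_HASH_SIZE << FDB_HASH_SIZE_SHIFT); /* 0x18 */

FIELD_PREP(FDB_HASH_SIZE_MASK, FDB_HASH_SIZE) would compute the same 0x18 value without the explicit shift constant.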
+5 -5
drivers/net/ethernet/ti/netcp_core.c
··· 1338 1338 1339 1339 tx_pipe->dma_channel = knav_dma_open_channel(dev, 1340 1340 tx_pipe->dma_chan_name, &config); 1341 - if (IS_ERR(tx_pipe->dma_channel)) { 1341 + if (!tx_pipe->dma_channel) { 1342 1342 dev_err(dev, "failed opening tx chan(%s)\n", 1343 1343 tx_pipe->dma_chan_name); 1344 - ret = PTR_ERR(tx_pipe->dma_channel); 1344 + ret = -EINVAL; 1345 1345 goto err; 1346 1346 } 1347 1347 ··· 1359 1359 return 0; 1360 1360 1361 1361 err: 1362 - if (!IS_ERR_OR_NULL(tx_pipe->dma_channel)) 1362 + if (tx_pipe->dma_channel) 1363 1363 knav_dma_close_channel(tx_pipe->dma_channel); 1364 1364 tx_pipe->dma_channel = NULL; 1365 1365 return ret; ··· 1678 1678 1679 1679 netcp->rx_channel = knav_dma_open_channel(netcp->netcp_device->device, 1680 1680 netcp->dma_chan_name, &config); 1681 - if (IS_ERR(netcp->rx_channel)) { 1681 + if (!netcp->rx_channel) { 1682 1682 dev_err(netcp->ndev_dev, "failed opening rx chan(%s\n", 1683 1683 netcp->dma_chan_name); 1684 - ret = PTR_ERR(netcp->rx_channel); 1684 + ret = -EINVAL; 1685 1685 goto fail; 1686 1686 } 1687 1687
+2 -1
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 2427 2427 wx->oem_svid = pdev->subsystem_vendor; 2428 2428 wx->oem_ssid = pdev->subsystem_device; 2429 2429 wx->bus.device = PCI_SLOT(pdev->devfn); 2430 - wx->bus.func = PCI_FUNC(pdev->devfn); 2430 + wx->bus.func = FIELD_GET(WX_CFG_PORT_ST_LANID, 2431 + rd32(wx, WX_CFG_PORT_ST)); 2431 2432 2432 2433 if (wx->oem_svid == PCI_VENDOR_ID_WANGXUN || 2433 2434 pdev->is_virtfn) {
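The libwx change derives bus.func from the LAN ID field of the port status register rather than from the PCI function number, presumably because the two need not match on these multi-port parts. FIELD_GET() masks and right-shifts in one step; a small worked example with a hypothetical register value:

    #include <linux/bitfield.h>

    u32 st  = 0x0200;                        /* say WX_CFG_PORT_ST reads this */
    u32 lan = FIELD_GET(GENMASK(9, 8), st);  /* (0x0200 & 0x0300) >> 8 == 2   */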
+2 -2
drivers/net/ethernet/wangxun/libwx/wx_type.h
··· 97 97 #define WX_CFG_PORT_CTL_DRV_LOAD BIT(3) 98 98 #define WX_CFG_PORT_CTL_QINQ BIT(2) 99 99 #define WX_CFG_PORT_CTL_D_VLAN BIT(0) /* double vlan*/ 100 + #define WX_CFG_PORT_ST 0x14404 101 + #define WX_CFG_PORT_ST_LANID GENMASK(9, 8) 100 102 #define WX_CFG_TAG_TPID(_i) (0x14430 + ((_i) * 4)) 101 103 #define WX_CFG_PORT_CTL_NUM_VT_MASK GENMASK(13, 12) /* number of TVs */ 102 104 ··· 558 556 /* Tx Descriptors needed, worst case */ 559 557 #define TXD_USE_COUNT(S) DIV_ROUND_UP((S), WX_MAX_DATA_PER_TXD) 560 558 #define DESC_NEEDED (MAX_SKB_FRAGS + 4) 561 - 562 - #define WX_CFG_PORT_ST 0x14404 563 559 564 560 /******************* Receive Descriptor bit definitions **********************/ 565 561 #define WX_RXD_STAT_DD BIT(0) /* Done */
+5 -3
drivers/net/mctp/mctp-usb.c
··· 96 96 skb->data, skb->len, 97 97 mctp_usb_out_complete, skb); 98 98 99 + /* Stops TX queue first to prevent race condition with URB complete */ 100 + netif_stop_queue(dev); 99 101 rc = usb_submit_urb(urb, GFP_ATOMIC); 100 - if (rc) 102 + if (rc) { 103 + netif_wake_queue(dev); 101 104 goto err_drop; 102 - else 103 - netif_stop_queue(dev); 105 + } 104 106 105 107 return NETDEV_TX_OK; 106 108
+2
drivers/net/mdio/mdio-airoha.c
··· 219 219 priv = bus->priv; 220 220 priv->base_addr = addr; 221 221 priv->regmap = device_node_to_regmap(dev->parent->of_node); 222 + if (IS_ERR(priv->regmap)) 223 + return PTR_ERR(priv->regmap); 222 224 223 225 priv->clk = devm_clk_get_enabled(dev, NULL); 224 226 if (IS_ERR(priv->clk))
+23 -8
drivers/net/netconsole.c
··· 886 886 887 887 static void update_userdata(struct netconsole_target *nt) 888 888 { 889 - int complete_idx = 0, child_count = 0; 890 889 struct list_head *entry; 890 + int child_count = 0; 891 + unsigned long flags; 892 + 893 + spin_lock_irqsave(&target_list_lock, flags); 891 894 892 895 /* Clear the current string in case the last userdatum was deleted */ 893 896 nt->userdata_length = 0; ··· 900 897 struct userdatum *udm_item; 901 898 struct config_item *item; 902 899 903 - if (WARN_ON_ONCE(child_count >= MAX_EXTRADATA_ITEMS)) 904 - break; 900 + if (child_count >= MAX_EXTRADATA_ITEMS) { 901 + spin_unlock_irqrestore(&target_list_lock, flags); 902 + WARN_ON_ONCE(1); 903 + return; 904 + } 905 905 child_count++; 906 906 907 907 item = container_of(entry, struct config_item, ci_entry); ··· 918 912 * one entry length (1/MAX_EXTRADATA_ITEMS long), entry count is 919 913 * checked to not exceed MAX items with child_count above 920 914 */ 921 - complete_idx += scnprintf(&nt->extradata_complete[complete_idx], 922 - MAX_EXTRADATA_ENTRY_LEN, " %s=%s\n", 923 - item->ci_name, udm_item->value); 915 + nt->userdata_length += scnprintf(&nt->extradata_complete[nt->userdata_length], 916 + MAX_EXTRADATA_ENTRY_LEN, " %s=%s\n", 917 + item->ci_name, udm_item->value); 924 918 } 925 - nt->userdata_length = strnlen(nt->extradata_complete, 926 - sizeof(nt->extradata_complete)); 919 + spin_unlock_irqrestore(&target_list_lock, flags); 927 920 } 928 921 929 922 static ssize_t userdatum_value_store(struct config_item *item, const char *buf, ··· 936 931 if (count > MAX_EXTRADATA_VALUE_LEN) 937 932 return -EMSGSIZE; 938 933 934 + mutex_lock(&netconsole_subsys.su_mutex); 939 935 mutex_lock(&dynamic_netconsole_mutex); 940 936 941 937 ret = strscpy(udm->value, buf, sizeof(udm->value)); ··· 950 944 ret = count; 951 945 out_unlock: 952 946 mutex_unlock(&dynamic_netconsole_mutex); 947 + mutex_unlock(&netconsole_subsys.su_mutex); 953 948 return ret; 954 949 } 955 950 ··· 976 969 if (ret) 977 970 return ret; 978 971 972 + mutex_lock(&netconsole_subsys.su_mutex); 979 973 mutex_lock(&dynamic_netconsole_mutex); 980 974 curr = !!(nt->sysdata_fields & SYSDATA_MSGID); 981 975 if (msgid_enabled == curr) ··· 997 989 ret = strnlen(buf, count); 998 990 unlock: 999 991 mutex_unlock(&dynamic_netconsole_mutex); 992 + mutex_unlock(&netconsole_subsys.su_mutex); 1000 993 return ret; 1001 994 } 1002 995 ··· 1012 1003 if (ret) 1013 1004 return ret; 1014 1005 1006 + mutex_lock(&netconsole_subsys.su_mutex); 1015 1007 mutex_lock(&dynamic_netconsole_mutex); 1016 1008 curr = !!(nt->sysdata_fields & SYSDATA_RELEASE); 1017 1009 if (release_enabled == curr) ··· 1033 1023 ret = strnlen(buf, count); 1034 1024 unlock: 1035 1025 mutex_unlock(&dynamic_netconsole_mutex); 1026 + mutex_unlock(&netconsole_subsys.su_mutex); 1036 1027 return ret; 1037 1028 } 1038 1029 ··· 1048 1037 if (ret) 1049 1038 return ret; 1050 1039 1040 + mutex_lock(&netconsole_subsys.su_mutex); 1051 1041 mutex_lock(&dynamic_netconsole_mutex); 1052 1042 curr = !!(nt->sysdata_fields & SYSDATA_TASKNAME); 1053 1043 if (taskname_enabled == curr) ··· 1069 1057 ret = strnlen(buf, count); 1070 1058 unlock: 1071 1059 mutex_unlock(&dynamic_netconsole_mutex); 1060 + mutex_unlock(&netconsole_subsys.su_mutex); 1072 1061 return ret; 1073 1062 } 1074 1063 ··· 1085 1072 if (ret) 1086 1073 return ret; 1087 1074 1075 + mutex_lock(&netconsole_subsys.su_mutex); 1088 1076 mutex_lock(&dynamic_netconsole_mutex); 1089 1077 curr = !!(nt->sysdata_fields & SYSDATA_CPU_NR); 1090 1078 if (cpu_nr_enabled == curr) ··· 
1114 1100 ret = strnlen(buf, count); 1115 1101 unlock: 1116 1102 mutex_unlock(&dynamic_netconsole_mutex); 1103 + mutex_unlock(&netconsole_subsys.su_mutex); 1117 1104 return ret; 1118 1105 } 1119 1106
+6
drivers/net/phy/dp83867.c
··· 738 738 return ret; 739 739 } 740 740 741 + /* Although the DP83867 reports EEE capability through the 742 + * MDIO_PCS_EEE_ABLE and MDIO_AN_EEE_ADV registers, the feature 743 + * is not actually implemented in hardware. 744 + */ 745 + phy_disable_eee(phydev); 746 + 741 747 if (phy_interface_is_rgmii(phydev) || 742 748 phydev->interface == PHY_INTERFACE_MODE_SGMII) { 743 749 val = phy_read(phydev, MII_DP83867_PHYCTRL);
+2 -2
drivers/net/phy/dp83869.c
··· 84 84 #define DP83869_CLK_DELAY_DEF 7 85 85 86 86 /* STRAP_STS1 bits */ 87 - #define DP83869_STRAP_OP_MODE_MASK GENMASK(2, 0) 87 + #define DP83869_STRAP_OP_MODE_MASK GENMASK(11, 9) 88 88 #define DP83869_STRAP_STS1_RESERVED BIT(11) 89 89 #define DP83869_STRAP_MIRROR_ENABLED BIT(12) 90 90 ··· 528 528 if (val < 0) 529 529 return val; 530 530 531 - dp83869->mode = val & DP83869_STRAP_OP_MODE_MASK; 531 + dp83869->mode = FIELD_GET(DP83869_STRAP_OP_MODE_MASK, val); 532 532 533 533 return 0; 534 534 }
+163
drivers/net/phy/micrel.c
··· 466 466 u16 rev; 467 467 }; 468 468 469 + struct lanphy_reg_data { 470 + int page; 471 + u16 addr; 472 + u16 val; 473 + }; 474 + 469 475 static const struct kszphy_type lan8814_type = { 470 476 .led_mode_reg = ~LAN8814_LED_CTRL_1, 471 477 .cable_diag_reg = LAN8814_CABLE_DIAG, ··· 2842 2836 #define LAN8814_PAGE_PCS_DIGITAL 2 2843 2837 2844 2838 /** 2839 + * LAN8814_PAGE_EEE - Selects Extended Page 3. 2840 + * 2841 + * This page contains EEE registers 2842 + */ 2843 + #define LAN8814_PAGE_EEE 3 2844 + 2845 + /** 2845 2846 * LAN8814_PAGE_COMMON_REGS - Selects Extended Page 4. 2846 2847 * 2847 2848 * This page contains device-common registers that affect the entire chip. ··· 2865 2852 * rate adaptation FIFOs, and the per-port 1588 TSU block. 2866 2853 */ 2867 2854 #define LAN8814_PAGE_PORT_REGS 5 2855 + 2856 + /** 2857 + * LAN8814_PAGE_POWER_REGS - Selects Extended Page 28. 2858 + * 2859 + * This page contains analog control registers and power mode registers. 2860 + */ 2861 + #define LAN8814_PAGE_POWER_REGS 28 2868 2862 2869 2863 /** 2870 2864 * LAN8814_PAGE_SYSTEM_CTRL - Selects Extended Page 31. ··· 5904 5884 return 0; 5905 5885 } 5906 5886 5887 + #define LAN8814_POWER_MGMT_MODE_3_ANEG_MDI 0x13 5888 + #define LAN8814_POWER_MGMT_MODE_4_ANEG_MDIX 0x14 5889 + #define LAN8814_POWER_MGMT_MODE_5_10BT_MDI 0x15 5890 + #define LAN8814_POWER_MGMT_MODE_6_10BT_MDIX 0x16 5891 + #define LAN8814_POWER_MGMT_MODE_7_100BT_TRAIN 0x17 5892 + #define LAN8814_POWER_MGMT_MODE_8_100BT_MDI 0x18 5893 + #define LAN8814_POWER_MGMT_MODE_9_100BT_EEE_MDI_TX 0x19 5894 + #define LAN8814_POWER_MGMT_MODE_10_100BT_EEE_MDI_RX 0x1a 5895 + #define LAN8814_POWER_MGMT_MODE_11_100BT_MDIX 0x1b 5896 + #define LAN8814_POWER_MGMT_MODE_12_100BT_EEE_MDIX_TX 0x1c 5897 + #define LAN8814_POWER_MGMT_MODE_13_100BT_EEE_MDIX_RX 0x1d 5898 + #define LAN8814_POWER_MGMT_MODE_14_100BTX_EEE_TX_RX 0x1e 5899 + 5900 + #define LAN8814_POWER_MGMT_DLLPD_D BIT(0) 5901 + #define LAN8814_POWER_MGMT_ADCPD_D BIT(1) 5902 + #define LAN8814_POWER_MGMT_PGAPD_D BIT(2) 5903 + #define LAN8814_POWER_MGMT_TXPD_D BIT(3) 5904 + #define LAN8814_POWER_MGMT_DLLPD_C BIT(4) 5905 + #define LAN8814_POWER_MGMT_ADCPD_C BIT(5) 5906 + #define LAN8814_POWER_MGMT_PGAPD_C BIT(6) 5907 + #define LAN8814_POWER_MGMT_TXPD_C BIT(7) 5908 + #define LAN8814_POWER_MGMT_DLLPD_B BIT(8) 5909 + #define LAN8814_POWER_MGMT_ADCPD_B BIT(9) 5910 + #define LAN8814_POWER_MGMT_PGAPD_B BIT(10) 5911 + #define LAN8814_POWER_MGMT_TXPD_B BIT(11) 5912 + #define LAN8814_POWER_MGMT_DLLPD_A BIT(12) 5913 + #define LAN8814_POWER_MGMT_ADCPD_A BIT(13) 5914 + #define LAN8814_POWER_MGMT_PGAPD_A BIT(14) 5915 + #define LAN8814_POWER_MGMT_TXPD_A BIT(15) 5916 + 5917 + #define LAN8814_POWER_MGMT_C_D (LAN8814_POWER_MGMT_DLLPD_D | \ 5918 + LAN8814_POWER_MGMT_ADCPD_D | \ 5919 + LAN8814_POWER_MGMT_PGAPD_D | \ 5920 + LAN8814_POWER_MGMT_DLLPD_C | \ 5921 + LAN8814_POWER_MGMT_ADCPD_C | \ 5922 + LAN8814_POWER_MGMT_PGAPD_C) 5923 + 5924 + #define LAN8814_POWER_MGMT_B_C_D (LAN8814_POWER_MGMT_C_D | \ 5925 + LAN8814_POWER_MGMT_DLLPD_B | \ 5926 + LAN8814_POWER_MGMT_ADCPD_B | \ 5927 + LAN8814_POWER_MGMT_PGAPD_B) 5928 + 5929 + #define LAN8814_POWER_MGMT_VAL1 (LAN8814_POWER_MGMT_C_D | \ 5930 + LAN8814_POWER_MGMT_ADCPD_B | \ 5931 + LAN8814_POWER_MGMT_PGAPD_B | \ 5932 + LAN8814_POWER_MGMT_ADCPD_A | \ 5933 + LAN8814_POWER_MGMT_PGAPD_A) 5934 + 5935 + #define LAN8814_POWER_MGMT_VAL2 LAN8814_POWER_MGMT_C_D 5936 + 5937 + #define LAN8814_POWER_MGMT_VAL3 (LAN8814_POWER_MGMT_C_D | \ 5938 + LAN8814_POWER_MGMT_DLLPD_B | \ 5939 + 
LAN8814_POWER_MGMT_ADCPD_B | \ 5940 + LAN8814_POWER_MGMT_PGAPD_A) 5941 + 5942 + #define LAN8814_POWER_MGMT_VAL4 (LAN8814_POWER_MGMT_B_C_D | \ 5943 + LAN8814_POWER_MGMT_ADCPD_A | \ 5944 + LAN8814_POWER_MGMT_PGAPD_A) 5945 + 5946 + #define LAN8814_POWER_MGMT_VAL5 LAN8814_POWER_MGMT_B_C_D 5947 + 5948 + #define LAN8814_EEE_WAKE_TX_TIMER 0x0e 5949 + #define LAN8814_EEE_WAKE_TX_TIMER_MAX_VAL 0x1f 5950 + 5951 + static const struct lanphy_reg_data short_center_tap_errata[] = { 5952 + { LAN8814_PAGE_POWER_REGS, 5953 + LAN8814_POWER_MGMT_MODE_3_ANEG_MDI, 5954 + LAN8814_POWER_MGMT_VAL1 }, 5955 + { LAN8814_PAGE_POWER_REGS, 5956 + LAN8814_POWER_MGMT_MODE_4_ANEG_MDIX, 5957 + LAN8814_POWER_MGMT_VAL1 }, 5958 + { LAN8814_PAGE_POWER_REGS, 5959 + LAN8814_POWER_MGMT_MODE_5_10BT_MDI, 5960 + LAN8814_POWER_MGMT_VAL1 }, 5961 + { LAN8814_PAGE_POWER_REGS, 5962 + LAN8814_POWER_MGMT_MODE_6_10BT_MDIX, 5963 + LAN8814_POWER_MGMT_VAL1 }, 5964 + { LAN8814_PAGE_POWER_REGS, 5965 + LAN8814_POWER_MGMT_MODE_7_100BT_TRAIN, 5966 + LAN8814_POWER_MGMT_VAL2 }, 5967 + { LAN8814_PAGE_POWER_REGS, 5968 + LAN8814_POWER_MGMT_MODE_8_100BT_MDI, 5969 + LAN8814_POWER_MGMT_VAL3 }, 5970 + { LAN8814_PAGE_POWER_REGS, 5971 + LAN8814_POWER_MGMT_MODE_9_100BT_EEE_MDI_TX, 5972 + LAN8814_POWER_MGMT_VAL3 }, 5973 + { LAN8814_PAGE_POWER_REGS, 5974 + LAN8814_POWER_MGMT_MODE_10_100BT_EEE_MDI_RX, 5975 + LAN8814_POWER_MGMT_VAL4 }, 5976 + { LAN8814_PAGE_POWER_REGS, 5977 + LAN8814_POWER_MGMT_MODE_11_100BT_MDIX, 5978 + LAN8814_POWER_MGMT_VAL5 }, 5979 + { LAN8814_PAGE_POWER_REGS, 5980 + LAN8814_POWER_MGMT_MODE_12_100BT_EEE_MDIX_TX, 5981 + LAN8814_POWER_MGMT_VAL5 }, 5982 + { LAN8814_PAGE_POWER_REGS, 5983 + LAN8814_POWER_MGMT_MODE_13_100BT_EEE_MDIX_RX, 5984 + LAN8814_POWER_MGMT_VAL4 }, 5985 + { LAN8814_PAGE_POWER_REGS, 5986 + LAN8814_POWER_MGMT_MODE_14_100BTX_EEE_TX_RX, 5987 + LAN8814_POWER_MGMT_VAL4 }, 5988 + }; 5989 + 5990 + static const struct lanphy_reg_data waketx_timer_errata[] = { 5991 + { LAN8814_PAGE_EEE, 5992 + LAN8814_EEE_WAKE_TX_TIMER, 5993 + LAN8814_EEE_WAKE_TX_TIMER_MAX_VAL }, 5994 + }; 5995 + 5996 + static int lanphy_write_reg_data(struct phy_device *phydev, 5997 + const struct lanphy_reg_data *data, 5998 + size_t num) 5999 + { 6000 + int ret = 0; 6001 + 6002 + while (num--) { 6003 + ret = lanphy_write_page_reg(phydev, data->page, data->addr, 6004 + data->val); 6005 + if (ret) 6006 + break; 6007 + } 6008 + 6009 + return ret; 6010 + } 6011 + 6012 + static int lan8842_erratas(struct phy_device *phydev) 6013 + { 6014 + int ret; 6015 + 6016 + ret = lanphy_write_reg_data(phydev, short_center_tap_errata, 6017 + ARRAY_SIZE(short_center_tap_errata)); 6018 + if (ret) 6019 + return ret; 6020 + 6021 + return lanphy_write_reg_data(phydev, waketx_timer_errata, 6022 + ARRAY_SIZE(waketx_timer_errata)); 6023 + } 6024 + 5907 6025 static int lan8842_config_init(struct phy_device *phydev) 5908 6026 { 5909 6027 int ret; ··· 6051 5893 LAN8814_QSGMII_SOFT_RESET, 6052 5894 LAN8814_QSGMII_SOFT_RESET_BIT, 6053 5895 LAN8814_QSGMII_SOFT_RESET_BIT); 5896 + if (ret < 0) 5897 + return ret; 5898 + 5899 + /* Apply the erratas for this device */ 5900 + ret = lan8842_erratas(phydev); 6054 5901 if (ret < 0) 6055 5902 return ret; 6056 5903
+9 -3
drivers/net/usb/asix_devices.c
··· 230 230 int i; 231 231 unsigned long gpio_bits = dev->driver_info->data; 232 232 233 - usbnet_get_endpoints(dev,intf); 233 + ret = usbnet_get_endpoints(dev, intf); 234 + if (ret) 235 + goto out; 234 236 235 237 /* Toggle the GPIOs in a manufacturer/model specific way */ 236 238 for (i = 2; i >= 0; i--) { ··· 850 848 851 849 dev->driver_priv = priv; 852 850 853 - usbnet_get_endpoints(dev, intf); 851 + ret = usbnet_get_endpoints(dev, intf); 852 + if (ret) 853 + return ret; 854 854 855 855 /* Maybe the boot loader passed the MAC address via device tree */ 856 856 if (!eth_platform_get_mac_address(&dev->udev->dev, buf)) { ··· 1285 1281 int ret; 1286 1282 u8 buf[ETH_ALEN] = {0}; 1287 1283 1288 - usbnet_get_endpoints(dev,intf); 1284 + ret = usbnet_get_endpoints(dev, intf); 1285 + if (ret) 1286 + return ret; 1289 1287 1290 1288 /* Get the MAC address */ 1291 1289 ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, buf, 0);
+6
drivers/net/usb/qmi_wwan.c
··· 192 192 if (!skbn) 193 193 return 0; 194 194 195 + /* Raw IP packets don't have a MAC header, but other subsystems 196 + * (like xfrm) may still access MAC header offsets, so they must 197 + * be initialized. 198 + */ 199 + skb_reset_mac_header(skbn); 200 + 195 201 switch (skb->data[offset + qmimux_hdr_sz] & 0xf0) { 196 202 case 0x40: 197 203 skbn->protocol = htons(ETH_P_IP);
+2
drivers/net/usb/usbnet.c
··· 1659 1659 net = dev->net; 1660 1660 unregister_netdev (net); 1661 1661 1662 + cancel_work_sync(&dev->kevent); 1663 + 1662 1664 while ((urb = usb_get_from_anchor(&dev->deferred))) { 1663 1665 dev_kfree_skb(urb->context); 1664 1666 kfree(urb->sg);
+33 -18
drivers/net/virtio_net.c
··· 910 910 goto ok; 911 911 } 912 912 913 - /* 914 - * Verify that we can indeed put this data into a skb. 915 - * This is here to handle cases when the device erroneously 916 - * tries to receive more than is possible. This is usually 917 - * the case of a broken device. 918 - */ 919 - if (unlikely(len > MAX_SKB_FRAGS * PAGE_SIZE)) { 920 - net_dbg_ratelimited("%s: too much data\n", skb->dev->name); 921 - dev_kfree_skb(skb); 922 - return NULL; 923 - } 924 913 BUG_ON(offset >= PAGE_SIZE); 925 914 while (len) { 926 915 unsigned int frag_size = min((unsigned)PAGE_SIZE - offset, len); ··· 1368 1379 ret = XDP_PASS; 1369 1380 rcu_read_lock(); 1370 1381 prog = rcu_dereference(rq->xdp_prog); 1371 - /* TODO: support multi buffer. */ 1372 - if (prog && num_buf == 1) 1373 - ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats); 1382 + if (prog) { 1383 + /* TODO: support multi buffer. */ 1384 + if (num_buf == 1) 1385 + ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, 1386 + stats); 1387 + else 1388 + ret = XDP_ABORTED; 1389 + } 1374 1390 rcu_read_unlock(); 1375 1391 1376 1392 switch (ret) { ··· 2101 2107 struct virtnet_rq_stats *stats) 2102 2108 { 2103 2109 struct page *page = buf; 2104 - struct sk_buff *skb = 2105 - page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0); 2110 + struct sk_buff *skb; 2106 2111 2112 + /* Make sure that len does not exceed the size allocated in 2113 + * add_recvbuf_big. 2114 + */ 2115 + if (unlikely(len > (vi->big_packets_num_skbfrags + 1) * PAGE_SIZE)) { 2116 + pr_debug("%s: rx error: len %u exceeds allocated size %lu\n", 2117 + dev->name, len, 2118 + (vi->big_packets_num_skbfrags + 1) * PAGE_SIZE); 2119 + goto err; 2120 + } 2121 + 2122 + skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0); 2107 2123 u64_stats_add(&stats->bytes, len - vi->hdr_len); 2108 2124 if (unlikely(!skb)) 2109 2125 goto err; ··· 2538 2534 return NULL; 2539 2535 } 2540 2536 2537 + static inline u32 2538 + virtio_net_hash_value(const struct virtio_net_hdr_v1_hash *hdr_hash) 2539 + { 2540 + return __le16_to_cpu(hdr_hash->hash_value_lo) | 2541 + (__le16_to_cpu(hdr_hash->hash_value_hi) << 16); 2542 + } 2543 + 2541 2544 static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash, 2542 2545 struct sk_buff *skb) 2543 2546 { ··· 2571 2560 default: 2572 2561 rss_hash_type = PKT_HASH_TYPE_NONE; 2573 2562 } 2574 - skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type); 2563 + skb_set_hash(skb, virtio_net_hash_value(hdr_hash), rss_hash_type); 2575 2564 } 2576 2565 2577 2566 static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *rq, ··· 3316 3305 bool can_push; 3317 3306 3318 3307 pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest); 3308 + 3309 + /* Make sure it's safe to cast between formats */ 3310 + BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr)); 3311 + BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr.hdr)); 3319 3312 3320 3313 can_push = vi->any_header_sg && 3321 3314 !((unsigned long)skb->data & (__alignof__(*hdr) - 1)) && ··· 6760 6745 hash_report = VIRTIO_NET_HASH_REPORT_NONE; 6761 6746 6762 6747 *rss_type = virtnet_xdp_rss_type[hash_report]; 6763 - *hash = __le32_to_cpu(hdr_hash->hash_value); 6748 + *hash = virtio_net_hash_value(hdr_hash); 6764 6749 return 0; 6765 6750 } 6766 6751
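The virtio_net_hash_value() helper above reflects the hash now being described as two little-endian 16-bit halves instead of a single 32-bit field; reassembly is a shift-and-or. For example, hash_value_lo = 0xbeef and hash_value_hi = 0xdead combine to 0xdeadbeef:

    static inline u32 hash_from_halves(__le16 lo, __le16 hi)
    {
        return __le16_to_cpu(lo) | ((u32)__le16_to_cpu(hi) << 16);
    }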
+4 -3
drivers/net/wan/framer/pef2256/pef2256.c
··· 648 648 audio_devs[i].id = i; 649 649 } 650 650 651 - ret = mfd_add_devices(pef2256->dev, 0, audio_devs, count, NULL, 0, NULL); 651 + ret = devm_mfd_add_devices(pef2256->dev, 0, audio_devs, count, 652 + NULL, 0, NULL); 652 653 kfree(audio_devs); 653 654 return ret; 654 655 } ··· 823 822 824 823 platform_set_drvdata(pdev, pef2256); 825 824 826 - ret = mfd_add_devices(pef2256->dev, 0, pef2256_devs, 827 - ARRAY_SIZE(pef2256_devs), NULL, 0, NULL); 825 + ret = devm_mfd_add_devices(pef2256->dev, 0, pef2256_devs, 826 + ARRAY_SIZE(pef2256_devs), NULL, 0, NULL); 828 827 if (ret) { 829 828 dev_err(pef2256->dev, "add devices failed (%d)\n", ret); 830 829 return ret;
+21 -19
drivers/net/wireless/ath/ath10k/wmi.c
··· 1764 1764 1765 1765 int ath10k_wmi_wait_for_service_ready(struct ath10k *ar) 1766 1766 { 1767 - unsigned long timeout = jiffies + WMI_SERVICE_READY_TIMEOUT_HZ; 1768 1767 unsigned long time_left, i; 1769 1768 1770 - /* Sometimes the PCI HIF doesn't receive interrupt 1771 - * for the service ready message even if the buffer 1772 - * was completed. PCIe sniffer shows that it's 1773 - * because the corresponding CE ring doesn't fires 1774 - * it. Workaround here by polling CE rings. Since 1775 - * the message could arrive at any time, continue 1776 - * polling until timeout. 1777 - */ 1778 - do { 1769 + time_left = wait_for_completion_timeout(&ar->wmi.service_ready, 1770 + WMI_SERVICE_READY_TIMEOUT_HZ); 1771 + if (!time_left) { 1772 + /* Sometimes the PCI HIF doesn't receive an interrupt 1773 + * for the service ready message even if the buffer 1774 + * was completed. A PCIe sniffer shows that it's 1775 + * because the corresponding CE ring doesn't fire 1776 + * it. Work around it here by polling CE rings once. 1777 + */ 1778 + ath10k_warn(ar, "failed to receive service ready completion, polling..\n"); 1779 + 1779 1780 for (i = 0; i < CE_COUNT; i++) 1780 1781 ath10k_hif_send_complete_check(ar, i, 1); 1781 1782 1782 - /* The 100 ms granularity is a tradeoff considering scheduler 1783 - * overhead and response latency 1784 - */ 1785 1783 time_left = wait_for_completion_timeout(&ar->wmi.service_ready, 1786 - msecs_to_jiffies(100)); 1787 - if (time_left) 1788 - return 0; 1789 - } while (time_before(jiffies, timeout)); 1784 + WMI_SERVICE_READY_TIMEOUT_HZ); 1785 + if (!time_left) { 1786 + ath10k_warn(ar, "polling timed out\n"); 1787 + return -ETIMEDOUT; 1788 + } 1790 - ath10k_warn(ar, "failed to receive service ready completion\n"); 1791 - return -ETIMEDOUT; 1789 + 1790 + ath10k_warn(ar, "service ready completion received, continuing normally\n"); 1791 + } 1792 + 1793 + return 0; 1793 1794 } 1794 1795 1795 1796 int ath10k_wmi_wait_for_unified_ready(struct ath10k *ar) ··· 1938 1937 if (cmd_id == WMI_CMD_UNSUPPORTED) { 1939 1938 ath10k_warn(ar, "wmi command %d is not supported by firmware\n", 1940 1939 cmd_id); 1940 + dev_kfree_skb_any(skb); 1941 1941 return ret; 1942 1942 }
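The ath10k rework leans on the wait_for_completion_timeout() contract: it returns 0 when the timeout elapses and the remaining jiffies otherwise, so one full-length wait plus a single bounded retry replaces the old open-coded 100 ms polling loop. The core idiom:

    unsigned long left;

    left = wait_for_completion_timeout(&done, msecs_to_jiffies(5000));
    if (!left)          /* 0 == timed out */
        return -ETIMEDOUT;
    /* nonzero: completed with 'left' jiffies to spare */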
+48 -6
drivers/net/wireless/ath/ath11k/core.c
··· 912 912 static const struct dmi_system_id ath11k_pm_quirk_table[] = { 913 913 { 914 914 .driver_data = (void *)ATH11K_PM_WOW, 915 - .matches = { 915 + .matches = { /* X13 G4 AMD #1 */ 916 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 917 + DMI_MATCH(DMI_PRODUCT_NAME, "21J3"), 918 + }, 919 + }, 920 + { 921 + .driver_data = (void *)ATH11K_PM_WOW, 922 + .matches = { /* X13 G4 AMD #2 */ 916 923 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 917 924 DMI_MATCH(DMI_PRODUCT_NAME, "21J4"), 918 925 }, 919 926 }, 920 927 { 921 928 .driver_data = (void *)ATH11K_PM_WOW, 922 - .matches = { 929 + .matches = { /* T14 G4 AMD #1 */ 930 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 931 + DMI_MATCH(DMI_PRODUCT_NAME, "21K3"), 932 + }, 933 + }, 934 + { 935 + .driver_data = (void *)ATH11K_PM_WOW, 936 + .matches = { /* T14 G4 AMD #2 */ 923 937 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 924 938 DMI_MATCH(DMI_PRODUCT_NAME, "21K4"), 925 939 }, 926 940 }, 927 941 { 928 942 .driver_data = (void *)ATH11K_PM_WOW, 929 - .matches = { 943 + .matches = { /* P14s G4 AMD #1 */ 944 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 945 + DMI_MATCH(DMI_PRODUCT_NAME, "21K5"), 946 + }, 947 + }, 948 + { 949 + .driver_data = (void *)ATH11K_PM_WOW, 950 + .matches = { /* P14s G4 AMD #2 */ 930 951 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 931 952 DMI_MATCH(DMI_PRODUCT_NAME, "21K6"), 932 953 }, 933 954 }, 934 955 { 935 956 .driver_data = (void *)ATH11K_PM_WOW, 936 - .matches = { 957 + .matches = { /* T16 G2 AMD #1 */ 958 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 959 + DMI_MATCH(DMI_PRODUCT_NAME, "21K7"), 960 + }, 961 + }, 962 + { 963 + .driver_data = (void *)ATH11K_PM_WOW, 964 + .matches = { /* T16 G2 AMD #2 */ 937 965 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 938 966 DMI_MATCH(DMI_PRODUCT_NAME, "21K8"), 939 967 }, 940 968 }, 941 969 { 942 970 .driver_data = (void *)ATH11K_PM_WOW, 943 - .matches = { 971 + .matches = { /* P16s G2 AMD #1 */ 972 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 973 + DMI_MATCH(DMI_PRODUCT_NAME, "21K9"), 974 + }, 975 + }, 976 + { 977 + .driver_data = (void *)ATH11K_PM_WOW, 978 + .matches = { /* P16s G2 AMD #2 */ 944 979 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 945 980 DMI_MATCH(DMI_PRODUCT_NAME, "21KA"), 946 981 }, 947 982 }, 948 983 { 949 984 .driver_data = (void *)ATH11K_PM_WOW, 950 - .matches = { 985 + .matches = { /* T14s G4 AMD #1 */ 986 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 987 + DMI_MATCH(DMI_PRODUCT_NAME, "21F8"), 988 + }, 989 + }, 990 + { 991 + .driver_data = (void *)ATH11K_PM_WOW, 992 + .matches = { /* T14s G4 AMD #2 */ 951 993 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 952 994 DMI_MATCH(DMI_PRODUCT_NAME, "21F9"), 953 995 },
+5 -5
drivers/net/wireless/ath/ath11k/mac.c
··· 1 1 // SPDX-License-Identifier: BSD-3-Clause-Clear 2 2 /* 3 3 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved. 4 + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 5 5 */ 6 6 7 7 #include <net/mac80211.h> ··· 4417 4417 } 4418 4418 4419 4419 if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) 4420 - flags |= WMI_KEY_PAIRWISE; 4420 + flags = WMI_KEY_PAIRWISE; 4421 4421 else 4422 - flags |= WMI_KEY_GROUP; 4422 + flags = WMI_KEY_GROUP; 4423 4423 4424 4424 ath11k_dbg(ar->ab, ATH11K_DBG_MAC, 4425 4425 "%s for peer %pM on vdev %d flags 0x%X, type = %d, num_sta %d\n", ··· 4456 4456 4457 4457 is_ap_with_no_sta = (vif->type == NL80211_IFTYPE_AP && 4458 4458 !arvif->num_stations); 4459 - if ((flags & WMI_KEY_PAIRWISE) || cmd == SET_KEY || is_ap_with_no_sta) { 4459 + if (flags == WMI_KEY_PAIRWISE || cmd == SET_KEY || is_ap_with_no_sta) { 4460 4460 ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags); 4461 4461 if (ret) { 4462 4462 ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret); ··· 4470 4470 goto exit; 4471 4471 } 4472 4472 4473 - if ((flags & WMI_KEY_GROUP) && cmd == SET_KEY && is_ap_with_no_sta) 4473 + if (flags == WMI_KEY_GROUP && cmd == SET_KEY && is_ap_with_no_sta) 4474 4474 arvif->reinstall_group_keys = true; 4475 4475 } 4476 4476
+73 -83
drivers/net/wireless/ath/ath12k/mac.c
··· 4064 4064 return ret; 4065 4065 } 4066 4066 4067 - static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif) 4068 - { 4069 - struct ath12k *ar = arvif->ar; 4070 - struct ieee80211_vif *vif = arvif->ahvif->vif; 4071 - struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf; 4072 - enum wmi_sta_powersave_param param; 4073 - struct ieee80211_bss_conf *info; 4074 - enum wmi_sta_ps_mode psmode; 4075 - int ret; 4076 - int timeout; 4077 - bool enable_ps; 4078 - 4079 - lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy); 4080 - 4081 - if (vif->type != NL80211_IFTYPE_STATION) 4082 - return; 4083 - 4084 - enable_ps = arvif->ahvif->ps; 4085 - if (enable_ps) { 4086 - psmode = WMI_STA_PS_MODE_ENABLED; 4087 - param = WMI_STA_PS_PARAM_INACTIVITY_TIME; 4088 - 4089 - timeout = conf->dynamic_ps_timeout; 4090 - if (timeout == 0) { 4091 - info = ath12k_mac_get_link_bss_conf(arvif); 4092 - if (!info) { 4093 - ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n", 4094 - vif->addr, arvif->link_id); 4095 - return; 4096 - } 4097 - 4098 - /* firmware doesn't like 0 */ 4099 - timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000; 4100 - } 4101 - 4102 - ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param, 4103 - timeout); 4104 - if (ret) { 4105 - ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n", 4106 - arvif->vdev_id, ret); 4107 - return; 4108 - } 4109 - } else { 4110 - psmode = WMI_STA_PS_MODE_DISABLED; 4111 - } 4112 - 4113 - ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n", 4114 - arvif->vdev_id, psmode ? "enable" : "disable"); 4115 - 4116 - ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode); 4117 - if (ret) 4118 - ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n", 4119 - psmode, arvif->vdev_id, ret); 4120 - } 4121 - 4122 4067 static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw, 4123 4068 struct ieee80211_vif *vif, 4124 4069 u64 changed) 4125 4070 { 4126 4071 struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 4127 4072 unsigned long links = ahvif->links_map; 4128 - struct ieee80211_vif_cfg *vif_cfg; 4129 4073 struct ieee80211_bss_conf *info; 4130 4074 struct ath12k_link_vif *arvif; 4131 4075 struct ieee80211_sta *sta; ··· 4133 4189 } 4134 4190 } 4135 4191 } 4192 + } 4136 4193 4137 - if (changed & BSS_CHANGED_PS) { 4138 - links = ahvif->links_map; 4139 - vif_cfg = &vif->cfg; 4194 + static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif) 4195 + { 4196 + struct ath12k *ar = arvif->ar; 4197 + struct ieee80211_vif *vif = arvif->ahvif->vif; 4198 + struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf; 4199 + enum wmi_sta_powersave_param param; 4200 + struct ieee80211_bss_conf *info; 4201 + enum wmi_sta_ps_mode psmode; 4202 + int ret; 4203 + int timeout; 4204 + bool enable_ps; 4140 4205 4141 - for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) { 4142 - arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]); 4143 - if (!arvif || !arvif->ar) 4144 - continue; 4206 + lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy); 4145 4207 4146 - ar = arvif->ar; 4208 + if (vif->type != NL80211_IFTYPE_STATION) 4209 + return; 4147 4210 4148 - if (ar->ab->hw_params->supports_sta_ps) { 4149 - ahvif->ps = vif_cfg->ps; 4150 - ath12k_mac_vif_setup_ps(arvif); 4211 + enable_ps = arvif->ahvif->ps; 4212 + if (enable_ps) { 4213 + psmode = WMI_STA_PS_MODE_ENABLED; 4214 + param = WMI_STA_PS_PARAM_INACTIVITY_TIME; 4215 + 4216 + timeout = conf->dynamic_ps_timeout; 4217 + if (timeout == 
0) { 4218 + info = ath12k_mac_get_link_bss_conf(arvif); 4219 + if (!info) { 4220 + ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n", 4221 + vif->addr, arvif->link_id); 4222 + return; 4151 4223 } 4224 + 4225 + /* firmware doesn't like 0 */ 4226 + timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000; 4152 4227 } 4228 + 4229 + ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param, 4230 + timeout); 4231 + if (ret) { 4232 + ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n", 4233 + arvif->vdev_id, ret); 4234 + return; 4235 + } 4236 + } else { 4237 + psmode = WMI_STA_PS_MODE_DISABLED; 4153 4238 } 4239 + 4240 + ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n", 4241 + arvif->vdev_id, psmode ? "enable" : "disable"); 4242 + 4243 + ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode); 4244 + if (ret) 4245 + ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n", 4246 + psmode, arvif->vdev_id, ret); 4154 4247 } 4155 4248 4156 4249 static bool ath12k_mac_supports_tpc(struct ath12k *ar, struct ath12k_vif *ahvif, ··· 4209 4228 { 4210 4229 struct ath12k_vif *ahvif = arvif->ahvif; 4211 4230 struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif); 4231 + struct ieee80211_vif_cfg *vif_cfg = &vif->cfg; 4212 4232 struct cfg80211_chan_def def; 4213 4233 u32 param_id, param_value; 4214 4234 enum nl80211_band band; ··· 4496 4514 } 4497 4515 4498 4516 ath12k_mac_fils_discovery(arvif, info); 4517 + 4518 + if (changed & BSS_CHANGED_PS && 4519 + ar->ab->hw_params->supports_sta_ps) { 4520 + ahvif->ps = vif_cfg->ps; 4521 + ath12k_mac_vif_setup_ps(arvif); 4522 + } 4499 4523 } 4500 4524 4501 4525 static struct ath12k_vif_cache *ath12k_ahvif_get_link_cache(struct ath12k_vif *ahvif, ··· 8278 8290 wake_up(&ar->txmgmt_empty_waitq); 8279 8291 } 8280 8292 8281 - int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx) 8293 + static void ath12k_mac_tx_mgmt_free(struct ath12k *ar, int buf_id) 8282 8294 { 8283 - struct sk_buff *msdu = skb; 8295 + struct sk_buff *msdu; 8284 8296 struct ieee80211_tx_info *info; 8285 - struct ath12k *ar = ctx; 8286 - struct ath12k_base *ab = ar->ab; 8287 8297 8288 8298 spin_lock_bh(&ar->txmgmt_idr_lock); 8289 - idr_remove(&ar->txmgmt_idr, buf_id); 8299 + msdu = idr_remove(&ar->txmgmt_idr, buf_id); 8290 8300 spin_unlock_bh(&ar->txmgmt_idr_lock); 8291 - dma_unmap_single(ab->dev, ATH12K_SKB_CB(msdu)->paddr, msdu->len, 8301 + 8302 + if (!msdu) 8303 + return; 8304 + 8305 + dma_unmap_single(ar->ab->dev, ATH12K_SKB_CB(msdu)->paddr, msdu->len, 8292 8306 DMA_TO_DEVICE); 8293 8307 8294 8308 info = IEEE80211_SKB_CB(msdu); 8295 8309 memset(&info->status, 0, sizeof(info->status)); 8296 8310 8297 - ath12k_mgmt_over_wmi_tx_drop(ar, skb); 8311 + ath12k_mgmt_over_wmi_tx_drop(ar, msdu); 8312 + } 8313 + 8314 + int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx) 8315 + { 8316 + struct ath12k *ar = ctx; 8317 + 8318 + ath12k_mac_tx_mgmt_free(ar, buf_id); 8298 8319 8299 8320 return 0; 8300 8321 } ··· 8312 8315 { 8313 8316 struct ieee80211_vif *vif = ctx; 8314 8317 struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb); 8315 - struct sk_buff *msdu = skb; 8316 8318 struct ath12k *ar = skb_cb->ar; 8317 - struct ath12k_base *ab = ar->ab; 8318 8319 8319 - if (skb_cb->vif == vif) { 8320 - spin_lock_bh(&ar->txmgmt_idr_lock); 8321 - idr_remove(&ar->txmgmt_idr, buf_id); 8322 - spin_unlock_bh(&ar->txmgmt_idr_lock); 8323 - dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, 8324 - DMA_TO_DEVICE); 8325 
- } 8320 + if (skb_cb->vif == vif) 8321 + ath12k_mac_tx_mgmt_free(ar, buf_id); 8326 8322 8327 8323 return 0; 8328 8324 }
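The ath12k refactor above hinges on idr_remove() returning the pointer that was stored under the ID (or NULL if the slot was already empty), so whichever of the vif-cleanup or pending-free paths gets there first takes sole ownership of the skb. A minimal sketch of that idiom follows; my_claim_and_free and its parameters are hypothetical stand-ins, not driver API.

        #include <linux/idr.h>
        #include <linux/skbuff.h>
        #include <linux/spinlock.h>

        static void my_claim_and_free(struct idr *idr, spinlock_t *lock, int buf_id)
        {
                struct sk_buff *skb;

                spin_lock_bh(lock);
                skb = idr_remove(idr, buf_id);  /* NULL if another path removed it first */
                spin_unlock_bh(lock);

                if (!skb)
                        return;                 /* lost the race: nothing left to free */

                /* sole owner from here: unmap, fill in status, then release */
                dev_kfree_skb_any(skb);
        }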
+1 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
··· 5627 5627 *cookie, le16_to_cpu(action_frame->len), 5628 5628 le32_to_cpu(af_params->channel)); 5629 5629 5630 - ack = brcmf_p2p_send_action_frame(cfg, cfg_to_ndev(cfg), 5631 - af_params); 5630 + ack = brcmf_p2p_send_action_frame(vif->ifp, af_params); 5632 5631 5633 5632 cfg80211_mgmt_tx_status(wdev, *cookie, buf, len, ack, 5634 5633 GFP_KERNEL);
+10 -18
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
··· 1529 1529 /** 1530 1530 * brcmf_p2p_tx_action_frame() - send action frame over fil. 1531 1531 * 1532 + * @ifp: interface to transmit on. 1532 1533 * @p2p: p2p info struct for vif. 1533 1534 * @af_params: action frame data/info. 1534 1535 * ··· 1539 1538 * The WLC_E_ACTION_FRAME_COMPLETE event will be received when the action 1540 1539 * frame is transmitted. 1541 1540 */ 1542 - static s32 brcmf_p2p_tx_action_frame(struct brcmf_p2p_info *p2p, 1541 + static s32 brcmf_p2p_tx_action_frame(struct brcmf_if *ifp, 1542 + struct brcmf_p2p_info *p2p, 1543 1543 struct brcmf_fil_af_params_le *af_params) 1544 1544 { 1545 1545 struct brcmf_pub *drvr = p2p->cfg->pub; 1546 - struct brcmf_cfg80211_vif *vif; 1547 - struct brcmf_p2p_action_frame *p2p_af; 1548 1546 s32 err = 0; 1549 1547 1550 1548 brcmf_dbg(TRACE, "Enter\n"); ··· 1552 1552 clear_bit(BRCMF_P2P_STATUS_ACTION_TX_COMPLETED, &p2p->status); 1553 1553 clear_bit(BRCMF_P2P_STATUS_ACTION_TX_NOACK, &p2p->status); 1554 1554 1555 - /* check if it is a p2p_presence response */ 1556 - p2p_af = (struct brcmf_p2p_action_frame *)af_params->action_frame.data; 1557 - if (p2p_af->subtype == P2P_AF_PRESENCE_RSP) 1558 - vif = p2p->bss_idx[P2PAPI_BSSCFG_CONNECTION].vif; 1559 - else 1560 - vif = p2p->bss_idx[P2PAPI_BSSCFG_DEVICE].vif; 1561 - 1562 - err = brcmf_fil_bsscfg_data_set(vif->ifp, "actframe", af_params, 1555 + err = brcmf_fil_bsscfg_data_set(ifp, "actframe", af_params, 1563 1556 sizeof(*af_params)); 1564 1557 if (err) { 1565 1558 bphy_err(drvr, " sending action frame has failed\n"); ··· 1704 1711 /** 1705 1712 * brcmf_p2p_send_action_frame() - send action frame . 1706 1713 * 1707 - * @cfg: driver private data for cfg80211 interface. 1708 - * @ndev: net device to transmit on. 1714 + * @ifp: interface to transmit on. 1709 1715 * @af_params: configuration data for action frame. 1710 1716 */ 1711 - bool brcmf_p2p_send_action_frame(struct brcmf_cfg80211_info *cfg, 1712 - struct net_device *ndev, 1717 + bool brcmf_p2p_send_action_frame(struct brcmf_if *ifp, 1713 1718 struct brcmf_fil_af_params_le *af_params) 1714 1719 { 1720 + struct brcmf_cfg80211_info *cfg = ifp->drvr->config; 1715 1721 struct brcmf_p2p_info *p2p = &cfg->p2p; 1716 - struct brcmf_if *ifp = netdev_priv(ndev); 1717 1722 struct brcmf_fil_action_frame_le *action_frame; 1718 1723 struct brcmf_config_af_params config_af_params; 1719 1724 struct afx_hdl *afx_hdl = &p2p->afx_hdl; ··· 1848 1857 if (af_params->channel) 1849 1858 msleep(P2P_AF_RETRY_DELAY_TIME); 1850 1859 1851 - ack = !brcmf_p2p_tx_action_frame(p2p, af_params); 1860 + ack = !brcmf_p2p_tx_action_frame(ifp, p2p, af_params); 1852 1861 tx_retry++; 1853 1862 dwell_overflow = brcmf_p2p_check_dwell_overflow(requested_dwell, 1854 1863 dwell_jiffies); ··· 2208 2217 2209 2218 WARN_ON(p2p_ifp->bsscfgidx != bsscfgidx); 2210 2219 2211 - init_completion(&p2p->send_af_done); 2212 2220 INIT_WORK(&p2p->afx_hdl.afx_work, brcmf_p2p_afx_handler); 2213 2221 init_completion(&p2p->afx_hdl.act_frm_scan); 2214 2222 init_completion(&p2p->wait_next_af); ··· 2502 2512 2503 2513 pri_ifp = brcmf_get_ifp(cfg->pub, 0); 2504 2514 p2p->bss_idx[P2PAPI_BSSCFG_PRIMARY].vif = pri_ifp->vif; 2515 + 2516 + init_completion(&p2p->send_af_done); 2505 2517 2506 2518 if (p2pdev_forced) { 2507 2519 err_ptr = brcmf_p2p_create_p2pdev(p2p, NULL, NULL);
+1 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h
··· 168 168 int brcmf_p2p_notify_action_tx_complete(struct brcmf_if *ifp, 169 169 const struct brcmf_event_msg *e, 170 170 void *data); 171 - bool brcmf_p2p_send_action_frame(struct brcmf_cfg80211_info *cfg, 172 - struct net_device *ndev, 171 + bool brcmf_p2p_send_action_frame(struct brcmf_if *ifp, 173 172 struct brcmf_fil_af_params_le *af_params); 174 173 bool brcmf_p2p_scan_finding_common_channel(struct brcmf_cfg80211_info *cfg, 175 174 struct brcmf_bss_info_le *bi);
+3 -2
drivers/net/wireless/intel/iwlwifi/mld/link.c
··· 501 501 struct iwl_mld_vif *mld_vif = iwl_mld_vif_from_mac80211(bss_conf->vif); 502 502 struct iwl_mld_link *link = iwl_mld_link_from_mac80211(bss_conf); 503 503 bool is_deflink = link == &mld_vif->deflink; 504 + u8 fw_id = link->fw_id; 504 505 505 506 if (WARN_ON(!link || link->active)) 506 507 return; ··· 514 513 515 514 RCU_INIT_POINTER(mld_vif->link[bss_conf->link_id], NULL); 516 515 517 - if (WARN_ON(link->fw_id >= mld->fw->ucode_capa.num_links)) 516 + if (WARN_ON(fw_id >= mld->fw->ucode_capa.num_links)) 518 517 return; 519 518 520 - RCU_INIT_POINTER(mld->fw_id_to_bss_conf[link->fw_id], NULL); 519 + RCU_INIT_POINTER(mld->fw_id_to_bss_conf[fw_id], NULL); 521 520 } 522 521 523 522 void iwl_mld_handle_missed_beacon_notif(struct iwl_mld *mld,
+4 -3
drivers/net/wireless/virtual/mac80211_hwsim.c
··· 6698 6698 .n_mcgrps = ARRAY_SIZE(hwsim_mcgrps), 6699 6699 }; 6700 6700 6701 - static void remove_user_radios(u32 portid) 6701 + static void remove_user_radios(u32 portid, int netgroup) 6702 6702 { 6703 6703 struct mac80211_hwsim_data *entry, *tmp; 6704 6704 LIST_HEAD(list); 6705 6705 6706 6706 spin_lock_bh(&hwsim_radio_lock); 6707 6707 list_for_each_entry_safe(entry, tmp, &hwsim_radios, list) { 6708 - if (entry->destroy_on_close && entry->portid == portid) { 6708 + if (entry->destroy_on_close && entry->portid == portid && 6709 + entry->netgroup == netgroup) { 6709 6710 list_move(&entry->list, &list); 6710 6711 rhashtable_remove_fast(&hwsim_radios_rht, &entry->rht, 6711 6712 hwsim_rht_params); ··· 6731 6730 if (state != NETLINK_URELEASE) 6732 6731 return NOTIFY_DONE; 6733 6732 6734 - remove_user_radios(notify->portid); 6733 + remove_user_radios(notify->portid, hwsim_net_get_netgroup(notify->net)); 6735 6734 6736 6735 if (notify->portid == hwsim_net_get_wmediumd(notify->net)) { 6737 6736 printk(KERN_INFO "mac80211_hwsim: wmediumd released netlink"
+1
drivers/net/wireless/zydas/zd1211rw/zd_usb.c
··· 791 791 if (urbs) { 792 792 for (i = 0; i < RX_URBS_COUNT; i++) 793 793 free_rx_urb(urbs[i]); 794 + kfree(urbs); 794 795 } 795 796 return r; 796 797 }
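The zd1211rw change is the classic partial-unwind leak: the error path freed each RX URB but never the array holding them. As a hedged, generic sketch (my_alloc_rx_urbs is hypothetical; RX_URBS_COUNT stands in for the driver's constant), an error path should release resources in reverse order of acquisition, container included:

        #include <linux/slab.h>
        #include <linux/usb.h>

        static int my_alloc_rx_urbs(struct urb ***out)
        {
                struct urb **urbs;
                int i;

                urbs = kcalloc(RX_URBS_COUNT, sizeof(*urbs), GFP_KERNEL);
                if (!urbs)
                        return -ENOMEM;

                for (i = 0; i < RX_URBS_COUNT; i++) {
                        urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
                        if (!urbs[i])
                                goto err_free;
                }

                *out = urbs;
                return 0;

        err_free:
                while (--i >= 0)
                        usb_free_urb(urbs[i]);
                kfree(urbs);            /* the step the original error path missed */
                return -ENOMEM;
        }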
+10 -3
drivers/nvme/host/pci.c
··· 1042 1042 return nvme_pci_setup_data_prp(req, &iter); 1043 1043 } 1044 1044 1045 - static blk_status_t nvme_pci_setup_meta_sgls(struct request *req) 1045 + static blk_status_t nvme_pci_setup_meta_iter(struct request *req) 1046 1046 { 1047 1047 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 1048 1048 unsigned int entries = req->nr_integrity_segments; ··· 1072 1072 * descriptor provides an explicit length, so we're relying on that 1073 1073 * mechanism to catch any misunderstandings between the application and 1074 1074 * device. 1075 + * 1076 + * P2P DMA also needs to use the blk_dma_iter method, so mptr setup 1077 + * leverages this routine when that happens. 1075 1078 */ 1076 - if (entries == 1 && !(nvme_req(req)->flags & NVME_REQ_USERCMD)) { 1079 + if (!nvme_ctrl_meta_sgl_supported(&dev->ctrl) || 1080 + (entries == 1 && !(nvme_req(req)->flags & NVME_REQ_USERCMD))) { 1077 1081 iod->cmd.common.metadata = cpu_to_le64(iter.addr); 1078 1082 iod->meta_total_len = iter.len; 1079 1083 iod->meta_dma = iter.addr; ··· 1118 1114 struct nvme_queue *nvmeq = req->mq_hctx->driver_data; 1119 1115 struct bio_vec bv = rq_integrity_vec(req); 1120 1116 1117 + if (is_pci_p2pdma_page(bv.bv_page)) 1118 + return nvme_pci_setup_meta_iter(req); 1119 + 1121 1120 iod->meta_dma = dma_map_bvec(nvmeq->dev->dev, &bv, rq_dma_dir(req), 0); 1122 1121 if (dma_mapping_error(nvmeq->dev->dev, iod->meta_dma)) 1123 1122 return BLK_STS_IOERR; ··· 1135 1128 1136 1129 if ((iod->cmd.common.flags & NVME_CMD_SGL_METABUF) && 1137 1130 nvme_pci_metadata_use_sgls(req)) 1138 - return nvme_pci_setup_meta_sgls(req); 1131 + return nvme_pci_setup_meta_iter(req); 1139 1132 return nvme_pci_setup_meta_mptr(req); 1140 1133 } 1141 1134
+3 -2
drivers/nvme/target/auth.c
··· 298 298 const char *hash_name; 299 299 u8 *challenge = req->sq->dhchap_c1; 300 300 struct nvme_dhchap_key *transformed_key; 301 - u8 buf[4]; 301 + u8 buf[4], sc_c = ctrl->concat ? 1 : 0; 302 302 int ret; 303 303 304 304 hash_name = nvme_auth_hmac_name(ctrl->shash_id); ··· 367 367 ret = crypto_shash_update(shash, buf, 2); 368 368 if (ret) 369 369 goto out; 370 - memset(buf, 0, 4); 370 + *buf = sc_c; 371 371 ret = crypto_shash_update(shash, buf, 1); 372 372 if (ret) 373 373 goto out; 374 374 ret = crypto_shash_update(shash, "HostHost", 8); 375 375 if (ret) 376 376 goto out; 377 + memset(buf, 0, 4); 377 378 ret = crypto_shash_update(shash, ctrl->hostnqn, strlen(ctrl->hostnqn)); 378 379 if (ret) 379 380 goto out;
+32
drivers/pci/controller/dwc/pcie-qcom.c
··· 247 247 int (*get_resources)(struct qcom_pcie *pcie); 248 248 int (*init)(struct qcom_pcie *pcie); 249 249 int (*post_init)(struct qcom_pcie *pcie); 250 + void (*host_post_init)(struct qcom_pcie *pcie); 250 251 void (*deinit)(struct qcom_pcie *pcie); 251 252 void (*ltssm_enable)(struct qcom_pcie *pcie); 252 253 int (*config_sid)(struct qcom_pcie *pcie); ··· 1039 1038 return 0; 1040 1039 } 1041 1040 1041 + static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata) 1042 + { 1043 + /* 1044 + * Downstream devices need to be in D0 state before enabling PCI PM 1045 + * substates. 1046 + */ 1047 + pci_set_power_state_locked(pdev, PCI_D0); 1048 + pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL); 1049 + 1050 + return 0; 1051 + } 1052 + 1053 + static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie) 1054 + { 1055 + struct dw_pcie_rp *pp = &pcie->pci->pp; 1056 + 1057 + pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL); 1058 + } 1059 + 1042 1060 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie) 1043 1061 { 1044 1062 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; ··· 1332 1312 pcie->cfg->ops->deinit(pcie); 1333 1313 } 1334 1314 1315 + static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp) 1316 + { 1317 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1318 + struct qcom_pcie *pcie = to_qcom_pcie(pci); 1319 + 1320 + if (pcie->cfg->ops->host_post_init) 1321 + pcie->cfg->ops->host_post_init(pcie); 1322 + } 1323 + 1335 1324 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { 1336 1325 .init = qcom_pcie_host_init, 1337 1326 .deinit = qcom_pcie_host_deinit, 1327 + .post_init = qcom_pcie_host_post_init, 1338 1328 }; 1339 1329 1340 1330 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ ··· 1406 1376 .get_resources = qcom_pcie_get_resources_2_7_0, 1407 1377 .init = qcom_pcie_init_2_7_0, 1408 1378 .post_init = qcom_pcie_post_init_2_7_0, 1379 + .host_post_init = qcom_pcie_host_post_init_2_7_0, 1409 1380 .deinit = qcom_pcie_deinit_2_7_0, 1410 1381 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1411 1382 .config_sid = qcom_pcie_config_sid_1_9_0, ··· 1417 1386 .get_resources = qcom_pcie_get_resources_2_7_0, 1418 1387 .init = qcom_pcie_init_2_7_0, 1419 1388 .post_init = qcom_pcie_post_init_2_7_0, 1389 + .host_post_init = qcom_pcie_host_post_init_2_7_0, 1420 1390 .deinit = qcom_pcie_deinit_2_7_0, 1421 1391 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1422 1392 };
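qcom_pcie_host_post_init_2_7_0() above relies on pci_walk_bus(), which invokes the callback for the given bus's devices and everything below them, stopping early if the callback returns non-zero. A minimal sketch of the traversal helper, decoupled from ASPM (my_* names are hypothetical):

        #include <linux/pci.h>

        static int my_count_one(struct pci_dev *pdev, void *userdata)
        {
                int *count = userdata;

                (*count)++;
                return 0;       /* returning non-zero would stop the walk */
        }

        static int my_count_devices(struct pci_bus *bus)
        {
                int count = 0;

                pci_walk_bus(bus, my_count_one, &count);
                return count;
        }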
+1 -1
drivers/pci/setup-bus.c
··· 1604 1604 pbus_size_io(bus, realloc_head ? 0 : additional_io_size, 1605 1605 additional_io_size, realloc_head); 1606 1606 1607 - if (pref) { 1607 + if (pref && (pref->flags & IORESOURCE_PREFETCH)) { 1608 1608 pbus_size_mem(bus, 1609 1609 IORESOURCE_MEM | IORESOURCE_PREFETCH | 1610 1610 (pref->flags & IORESOURCE_MEM_64),
+1 -1
drivers/platform/x86/Kconfig
··· 432 432 depends on INPUT 433 433 help 434 434 This driver provides supports for the wireless buttons found on some AMD, 435 - HP, & Xioami laptops. 435 + HP, & Xiaomi laptops. 436 436 On such systems the driver should load automatically (via ACPI alias). 437 437 438 438 To compile this driver as a module, choose M here: the module will
+12
drivers/platform/x86/dell/dell-wmi-base.c
··· 365 365 /* Backlight brightness change event */ 366 366 { KE_IGNORE, 0x0003, { KEY_RESERVED } }, 367 367 368 + /* 369 + * Electronic privacy screen toggled, extended data gives state, 370 + * separate entries for on/off see handling in dell_wmi_process_key(). 371 + */ 372 + { KE_KEY, 0x000c, { KEY_EPRIVACY_SCREEN_OFF } }, 373 + { KE_KEY, 0x000c, { KEY_EPRIVACY_SCREEN_ON } }, 374 + 368 375 /* Ultra-performance mode switch request */ 369 376 { KE_IGNORE, 0x000d, { KEY_RESERVED } }, 370 377 ··· 442 435 "Dell tablet mode switch", 443 436 SW_TABLET_MODE, !buffer[0]); 444 437 return 1; 438 + } else if (type == 0x0012 && code == 0x000c && remaining > 0) { 439 + /* Eprivacy toggle, switch to "on" key entry for on events */ 440 + if (buffer[0] == 2) 441 + key++; 442 + used = 1; 445 443 } else if (type == 0x0012 && code == 0x000d && remaining > 0) { 446 444 value = (buffer[2] == 2); 447 445 used = 1;
+1 -4
drivers/platform/x86/intel/int3472/clk_and_regulator.c
··· 245 245 if (IS_ERR(regulator->rdev)) 246 246 return PTR_ERR(regulator->rdev); 247 247 248 - int3472->regulators[int3472->n_regulator_gpios].ena_gpio = gpio; 249 248 int3472->n_regulator_gpios++; 250 249 return 0; 251 250 } 252 251 253 252 void skl_int3472_unregister_regulator(struct int3472_discrete_device *int3472) 254 253 { 255 - for (int i = 0; i < int3472->n_regulator_gpios; i++) { 254 + for (int i = 0; i < int3472->n_regulator_gpios; i++) 256 255 regulator_unregister(int3472->regulators[i].rdev); 257 - gpiod_put(int3472->regulators[i].ena_gpio); 258 - } 259 256 }
+1 -1
drivers/platform/x86/intel/int3472/led.c
··· 43 43 44 44 int3472->pled.lookup.provider = int3472->pled.name; 45 45 int3472->pled.lookup.dev_id = int3472->sensor_name; 46 - int3472->pled.lookup.con_id = "privacy-led"; 46 + int3472->pled.lookup.con_id = "privacy"; 47 47 led_add_lookup(&int3472->pled.lookup); 48 48 49 49 return 0;
+4
drivers/ptp/ptp_chardev.c
··· 561 561 return ptp_mask_en_single(pccontext->private_clkdata, argptr); 562 562 563 563 case PTP_SYS_OFFSET_PRECISE_CYCLES: 564 + if (!ptp->has_cycles) 565 + return -EOPNOTSUPP; 564 566 return ptp_sys_offset_precise(ptp, argptr, 565 567 ptp->info->getcrosscycles); 566 568 567 569 case PTP_SYS_OFFSET_EXTENDED_CYCLES: 570 + if (!ptp->has_cycles) 571 + return -EOPNOTSUPP; 568 572 return ptp_sys_offset_extended(ptp, argptr, 569 573 ptp->info->getcyclesx64); 570 574 default:
+2
drivers/regulator/bd718x7-regulator.c
··· 1613 1613 step /= r1; 1614 1614 1615 1615 new[j].min = min; 1616 + new[j].min_sel = desc->linear_ranges[j].min_sel; 1617 + new[j].max_sel = desc->linear_ranges[j].max_sel; 1616 1618 new[j].step = step; 1617 1619 1618 1620 dev_dbg(dev, "%s: old range min %d, step %d\n",
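The bd718x7 fix matters because struct linear_range is a selector-to-value mapping: .min/.step describe the values, while .min_sel/.max_sel bound the valid selector window. Rebuilding the scaled ranges without copying the selector bounds leaves them zeroed, so lookups reject almost every selector. An illustrative (not driver-accurate) fully populated range:

        #include <linux/linear_range.h>

        /* Illustrative values: selectors 0..15 map to 700000 + sel * 10000 uV */
        static const struct linear_range my_buck_range = {
                .min = 700000,          /* value at .min_sel */
                .min_sel = 0,
                .max_sel = 15,
                .step = 10000,          /* value increment per selector step */
        };

A consumer then recovers a value with linear_range_get_value(&my_buck_range, sel, &uV), which fails with -EINVAL for any selector outside [min_sel, max_sel]; with the bounds left at zero, that was every selector but 0.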
+1
drivers/reset/Kconfig
··· 89 89 config RESET_GPIO 90 90 tristate "GPIO reset controller" 91 91 depends on GPIOLIB 92 + select AUXILIARY_BUS 92 93 help 93 94 This enables a generic reset controller for resets attached via 94 95 GPIOs. Typically for OF platforms this driver expects "reset-gpios"
+84 -62
drivers/reset/core.c
··· 4 4 * 5 5 * Copyright 2013 Philipp Zabel, Pengutronix 6 6 */ 7 + 8 + #include <linux/acpi.h> 7 9 #include <linux/atomic.h> 10 + #include <linux/auxiliary_bus.h> 8 11 #include <linux/cleanup.h> 9 12 #include <linux/device.h> 10 13 #include <linux/err.h> 11 14 #include <linux/export.h> 12 - #include <linux/kernel.h> 13 - #include <linux/kref.h> 14 15 #include <linux/gpio/driver.h> 15 16 #include <linux/gpio/machine.h> 17 + #include <linux/gpio/property.h> 16 18 #include <linux/idr.h> 19 + #include <linux/kernel.h> 20 + #include <linux/kref.h> 17 21 #include <linux/module.h> 18 22 #include <linux/of.h> 19 - #include <linux/acpi.h> 20 - #include <linux/platform_device.h> 21 23 #include <linux/reset.h> 22 24 #include <linux/reset-controller.h> 23 25 #include <linux/slab.h> ··· 78 76 /** 79 77 * struct reset_gpio_lookup - lookup key for ad-hoc created reset-gpio devices 80 78 * @of_args: phandle to the reset controller with all the args like GPIO number 79 + * @swnode: Software node containing the reference to the GPIO provider 81 80 * @list: list entry for the reset_gpio_lookup_list 82 81 */ 83 82 struct reset_gpio_lookup { 84 83 struct of_phandle_args of_args; 84 + struct fwnode_handle *swnode; 85 85 struct list_head list; 86 86 }; 87 87 ··· 852 848 kref_put(&rstc->refcnt, __reset_control_release); 853 849 } 854 850 855 - static int __reset_add_reset_gpio_lookup(int id, struct device_node *np, 856 - unsigned int gpio, 857 - unsigned int of_flags) 851 + static void reset_gpio_aux_device_release(struct device *dev) 858 852 { 859 - const struct fwnode_handle *fwnode = of_fwnode_handle(np); 860 - unsigned int lookup_flags; 861 - const char *label_tmp; 853 + struct auxiliary_device *adev = to_auxiliary_dev(dev); 862 854 863 - /* 864 - * Later we map GPIO flags between OF and Linux, however not all 865 - * constants from include/dt-bindings/gpio/gpio.h and 866 - * include/linux/gpio/machine.h match each other. 
867 - */ 868 - if (of_flags > GPIO_ACTIVE_LOW) { 869 - pr_err("reset-gpio code does not support GPIO flags %u for GPIO %u\n", 870 - of_flags, gpio); 871 - return -EINVAL; 855 + kfree(adev); 856 + } 857 + 858 + static int reset_add_gpio_aux_device(struct device *parent, 859 + struct fwnode_handle *swnode, 860 + int id, void *pdata) 861 + { 862 + struct auxiliary_device *adev; 863 + int ret; 864 + 865 + adev = kzalloc(sizeof(*adev), GFP_KERNEL); 866 + if (!adev) 867 + return -ENOMEM; 868 + 869 + adev->id = id; 870 + adev->name = "gpio"; 871 + adev->dev.parent = parent; 872 + adev->dev.platform_data = pdata; 873 + adev->dev.release = reset_gpio_aux_device_release; 874 + device_set_node(&adev->dev, swnode); 875 + 876 + ret = auxiliary_device_init(adev); 877 + if (ret) { 878 + kfree(adev); 879 + return ret; 872 880 } 873 881 874 - struct gpio_device *gdev __free(gpio_device_put) = gpio_device_find_by_fwnode(fwnode); 875 - if (!gdev) 876 - return -EPROBE_DEFER; 882 + ret = __auxiliary_device_add(adev, "reset"); 883 + if (ret) { 884 + auxiliary_device_uninit(adev); 885 + kfree(adev); 886 + return ret; 887 + } 877 888 878 - label_tmp = gpio_device_get_label(gdev); 879 - if (!label_tmp) 880 - return -EINVAL; 881 - 882 - char *label __free(kfree) = kstrdup(label_tmp, GFP_KERNEL); 883 - if (!label) 884 - return -ENOMEM; 885 - 886 - /* Size: one lookup entry plus sentinel */ 887 - struct gpiod_lookup_table *lookup __free(kfree) = kzalloc(struct_size(lookup, table, 2), 888 - GFP_KERNEL); 889 - if (!lookup) 890 - return -ENOMEM; 891 - 892 - lookup->dev_id = kasprintf(GFP_KERNEL, "reset-gpio.%d", id); 893 - if (!lookup->dev_id) 894 - return -ENOMEM; 895 - 896 - lookup_flags = GPIO_PERSISTENT; 897 - lookup_flags |= of_flags & GPIO_ACTIVE_LOW; 898 - lookup->table[0] = GPIO_LOOKUP(no_free_ptr(label), gpio, "reset", 899 - lookup_flags); 900 - 901 - /* Not freed on success, because it is persisent subsystem data. */ 902 - gpiod_add_lookup_table(no_free_ptr(lookup)); 903 - 904 - return 0; 889 + return ret; 905 890 } 906 891 907 892 /* ··· 898 905 */ 899 906 static int __reset_add_reset_gpio_device(const struct of_phandle_args *args) 900 907 { 908 + struct property_entry properties[2] = { }; 909 + unsigned int offset, of_flags, lflags; 901 910 struct reset_gpio_lookup *rgpio_dev; 902 - struct platform_device *pdev; 911 + struct device *parent; 903 912 int id, ret; 904 913 905 914 /* ··· 920 925 */ 921 926 lockdep_assert_not_held(&reset_list_mutex); 922 927 928 + offset = args->args[0]; 929 + of_flags = args->args[1]; 930 + 931 + /* 932 + * Later we map GPIO flags between OF and Linux, however not all 933 + * constants from include/dt-bindings/gpio/gpio.h and 934 + * include/linux/gpio/machine.h match each other. 935 + * 936 + * FIXME: Find a better way of translating OF flags to GPIO lookup 937 + * flags. 
938 + */ 939 + if (of_flags > GPIO_ACTIVE_LOW) { 940 + pr_err("reset-gpio code does not support GPIO flags %u for GPIO %u\n", 941 + of_flags, offset); 942 + return -EINVAL; 943 + } 944 + 945 + struct gpio_device *gdev __free(gpio_device_put) = 946 + gpio_device_find_by_fwnode(of_fwnode_handle(args->np)); 947 + if (!gdev) 948 + return -EPROBE_DEFER; 949 + 923 950 guard(mutex)(&reset_gpio_lookup_mutex); 924 951 925 952 list_for_each_entry(rgpio_dev, &reset_gpio_lookup_list, list) { ··· 950 933 return 0; /* Already on the list, done */ 951 934 } 952 935 } 936 + 937 + lflags = GPIO_PERSISTENT | (of_flags & GPIO_ACTIVE_LOW); 938 + parent = gpio_device_to_device(gdev); 939 + properties[0] = PROPERTY_ENTRY_GPIO("reset-gpios", parent->fwnode, offset, lflags); 953 940 954 941 id = ida_alloc(&reset_gpio_ida, GFP_KERNEL); 955 942 if (id < 0) ··· 966 945 goto err_ida_free; 967 946 } 968 947 969 - ret = __reset_add_reset_gpio_lookup(id, args->np, args->args[0], 970 - args->args[1]); 971 - if (ret < 0) 972 - goto err_kfree; 973 - 974 948 rgpio_dev->of_args = *args; 975 949 /* 976 950 * We keep the device_node reference, but of_args.np is put at the end ··· 973 957 * Hold reference as long as rgpio_dev memory is valid. 974 958 */ 975 959 of_node_get(rgpio_dev->of_args.np); 976 - pdev = platform_device_register_data(NULL, "reset-gpio", id, 977 - &rgpio_dev->of_args, 978 - sizeof(rgpio_dev->of_args)); 979 - ret = PTR_ERR_OR_ZERO(pdev); 960 + 961 + rgpio_dev->swnode = fwnode_create_software_node(properties, NULL); 962 + if (IS_ERR(rgpio_dev->swnode)) { 963 + ret = PTR_ERR(rgpio_dev->swnode); 964 + goto err_put_of_node; 965 + } 966 + 967 + ret = reset_add_gpio_aux_device(parent, rgpio_dev->swnode, id, 968 + &rgpio_dev->of_args); 980 969 if (ret) 981 - goto err_put; 970 + goto err_del_swnode; 982 971 983 972 list_add(&rgpio_dev->list, &reset_gpio_lookup_list); 984 973 985 974 return 0; 986 975 987 - err_put: 976 + err_del_swnode: 977 + fwnode_remove_software_node(rgpio_dev->swnode); 978 + err_put_of_node: 988 979 of_node_put(rgpio_dev->of_args.np); 989 - err_kfree: 990 980 kfree(rgpio_dev); 991 981 err_ida_free: 992 982 ida_free(&reset_gpio_ida, id);
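reset_add_gpio_aux_device() above follows the auxiliary bus's two-phase registration: auxiliary_device_init() first, then the add; once init has succeeded, teardown should go through auxiliary_device_uninit() so the device core's refcounting ends in the release callback. A stripped-down sketch of that lifecycle, assuming a release callback that frees the whole device (my_* names hypothetical):

        #include <linux/auxiliary_bus.h>
        #include <linux/slab.h>

        static void my_adev_release(struct device *dev)
        {
                kfree(to_auxiliary_dev(dev));   /* final teardown lives here */
        }

        static int my_add_adev(struct device *parent, int id)
        {
                struct auxiliary_device *adev;
                int ret;

                adev = kzalloc(sizeof(*adev), GFP_KERNEL);
                if (!adev)
                        return -ENOMEM;

                adev->name = "gpio";            /* drivers match "<modname>.gpio" */
                adev->id = id;
                adev->dev.parent = parent;
                adev->dev.release = my_adev_release;

                ret = auxiliary_device_init(adev);
                if (ret) {
                        kfree(adev);            /* before init: still plain memory */
                        return ret;
                }

                ret = auxiliary_device_add(adev);
                if (ret)
                        auxiliary_device_uninit(adev);
                return ret;
        }

auxiliary_device_add() is the KBUILD_MODNAME-prefixed wrapper around __auxiliary_device_add(); the diff calls the latter with "reset" explicitly, producing the "reset.gpio" match id that reset-gpio.c's table below expects.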
+10 -9
drivers/reset/reset-gpio.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 + #include <linux/auxiliary_bus.h> 3 4 #include <linux/gpio/consumer.h> 4 5 #include <linux/mod_devicetable.h> 5 6 #include <linux/module.h> 6 7 #include <linux/of.h> 7 - #include <linux/platform_device.h> 8 8 #include <linux/reset-controller.h> 9 9 10 10 struct reset_gpio_priv { ··· 61 61 of_node_put(data); 62 62 } 63 63 64 - static int reset_gpio_probe(struct platform_device *pdev) 64 + static int reset_gpio_probe(struct auxiliary_device *adev, 65 + const struct auxiliary_device_id *id) 65 66 { 66 - struct device *dev = &pdev->dev; 67 + struct device *dev = &adev->dev; 67 68 struct of_phandle_args *platdata = dev_get_platdata(dev); 68 69 struct reset_gpio_priv *priv; 69 70 int ret; ··· 76 75 if (!priv) 77 76 return -ENOMEM; 78 77 79 - platform_set_drvdata(pdev, &priv->rc); 78 + auxiliary_set_drvdata(adev, &priv->rc); 80 79 81 80 priv->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 82 81 if (IS_ERR(priv->reset)) ··· 100 99 return devm_reset_controller_register(dev, &priv->rc); 101 100 } 102 101 103 - static const struct platform_device_id reset_gpio_ids[] = { 104 - { .name = "reset-gpio", }, 102 + static const struct auxiliary_device_id reset_gpio_ids[] = { 103 + { .name = "reset.gpio" }, 105 104 {} 106 105 }; 107 - MODULE_DEVICE_TABLE(platform, reset_gpio_ids); 106 + MODULE_DEVICE_TABLE(auxiliary, reset_gpio_ids); 108 107 109 - static struct platform_driver reset_gpio_driver = { 108 + static struct auxiliary_driver reset_gpio_driver = { 110 109 .probe = reset_gpio_probe, 111 110 .id_table = reset_gpio_ids, 112 111 .driver = { 113 112 .name = "reset-gpio", 114 113 }, 115 114 }; 116 - module_platform_driver(reset_gpio_driver); 115 + module_auxiliary_driver(reset_gpio_driver); 117 116 118 117 MODULE_AUTHOR("Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>"); 119 118 MODULE_DESCRIPTION("Generic GPIO reset driver");
-1
drivers/rtc/rtc-cpcap.c
··· 268 268 return err; 269 269 270 270 rtc->alarm_irq = platform_get_irq(pdev, 0); 271 - rtc->alarm_enabled = true; 272 271 err = devm_request_threaded_irq(dev, rtc->alarm_irq, NULL, 273 272 cpcap_rtc_alarm_irq, 274 273 IRQF_TRIGGER_NONE | IRQF_ONESHOT,
+1 -1
drivers/rtc/rtc-rx8025.c
··· 316 316 return hour_reg; 317 317 rx8025->is_24 = (hour_reg & RX8035_BIT_HOUR_1224); 318 318 } else { 319 - rx8025->is_24 = (ctrl[1] & RX8025_BIT_CTRL1_1224); 319 + rx8025->is_24 = (ctrl[0] & RX8025_BIT_CTRL1_1224); 320 320 } 321 321 out: 322 322 return err;
-1
drivers/rtc/rtc-tps6586x.c
··· 258 258 259 259 irq_set_status_flags(rtc->irq, IRQ_NOAUTOEN); 260 260 261 - rtc->irq_en = true; 262 261 ret = devm_request_threaded_irq(&pdev->dev, rtc->irq, NULL, 263 262 tps6586x_rtc_irq, 264 263 IRQF_ONESHOT,
+3 -2
drivers/scsi/hosts.c
··· 611 611 { 612 612 int cnt = 0; 613 613 614 - blk_mq_tagset_busy_iter(&shost->tag_set, 615 - scsi_host_check_in_flight, &cnt); 614 + if (shost->tag_set.ops) 615 + blk_mq_tagset_busy_iter(&shost->tag_set, 616 + scsi_host_check_in_flight, &cnt); 616 617 return cnt; 617 618 } 618 619 EXPORT_SYMBOL(scsi_host_busy);
+2 -2
drivers/scsi/scsi_error.c
··· 554 554 * happened, even if someone else gets the sense data. 555 555 */ 556 556 if (sshdr.asc == 0x28) 557 - scmd->device->ua_new_media_ctr++; 557 + atomic_inc(&sdev->ua_new_media_ctr); 558 558 else if (sshdr.asc == 0x29) 559 - scmd->device->ua_por_ctr++; 559 + atomic_inc(&sdev->ua_por_ctr); 560 560 } 561 561 562 562 if (scsi_sense_is_deferred(&sshdr))
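The scsi_error change turns the unit-attention counters into atomics: sense parsing can run concurrently for different commands against the same device, and a plain ++ on a shared integer is a lost-update race. The pattern, as a trivial hedged sketch (my_stats is hypothetical):

        #include <linux/atomic.h>

        struct my_stats {
                atomic_t new_media;     /* was: a plain unsigned int */
        };

        static void my_note_new_media(struct my_stats *s)
        {
                atomic_inc(&s->new_media);      /* safe against concurrent callers */
        }

        static int my_read_new_media(struct my_stats *s)
        {
                return atomic_read(&s->new_media);
        }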
+7 -7
drivers/soc/ti/knav_dma.c
··· 402 402 * @name: slave channel name 403 403 * @config: dma configuration parameters 404 404 * 405 - * Returns pointer to appropriate DMA channel on success or error. 405 + * Return: Pointer to appropriate DMA channel on success or NULL on error. 406 406 */ 407 407 void *knav_dma_open_channel(struct device *dev, const char *name, 408 408 struct knav_dma_cfg *config) ··· 414 414 415 415 if (!kdev) { 416 416 pr_err("keystone-navigator-dma driver not registered\n"); 417 - return (void *)-EINVAL; 417 + return NULL; 418 418 } 419 419 420 420 chan_num = of_channel_match_helper(dev->of_node, name, &instance); 421 421 if (chan_num < 0) { 422 422 dev_err(kdev->dev, "No DMA instance with name %s\n", name); 423 - return (void *)-EINVAL; 423 + return NULL; 424 424 } 425 425 426 426 dev_dbg(kdev->dev, "initializing %s channel %d from DMA %s\n", ··· 431 431 if (config->direction != DMA_MEM_TO_DEV && 432 432 config->direction != DMA_DEV_TO_MEM) { 433 433 dev_err(kdev->dev, "bad direction\n"); 434 - return (void *)-EINVAL; 434 + return NULL; 435 435 } 436 436 437 437 /* Look for correct dma instance */ ··· 443 443 } 444 444 if (!dma) { 445 445 dev_err(kdev->dev, "No DMA instance with name %s\n", instance); 446 - return (void *)-EINVAL; 446 + return NULL; 447 447 } 448 448 449 449 /* Look for correct dma channel from dma instance */ ··· 463 463 if (!chan) { 464 464 dev_err(kdev->dev, "channel %d is not in DMA %s\n", 465 465 chan_num, instance); 466 - return (void *)-EINVAL; 466 + return NULL; 467 467 } 468 468 469 469 if (atomic_read(&chan->ref_count) >= 1) { 470 470 if (!check_config(chan, config)) { 471 471 dev_err(kdev->dev, "channel %d config miss-match\n", 472 472 chan_num); 473 - return (void *)-EINVAL; 473 + return NULL; 474 474 } 475 475 } 476 476
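knav_dma_open_channel() now returns NULL instead of (void *)-EINVAL, which changes the caller contract: errno-cast pointers are only safe if every caller uses IS_ERR()/PTR_ERR(), and a bare if (!chan) test would have accepted the old error cookie as a valid channel. A caller-side sketch under the new contract (my_open and the "my-chan" name are hypothetical):

        static int my_open(struct device *dev, struct knav_dma_cfg *cfg, void **out)
        {
                void *chan = knav_dma_open_channel(dev, "my-chan", cfg);

                if (!chan)              /* previously: IS_ERR(chan) / PTR_ERR(chan) */
                        return -EINVAL;

                *out = chan;
                return 0;
        }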
+10 -30
drivers/spi/spi-cs42l43.c
··· 52 52 .mode = SPI_MODE_0, 53 53 }; 54 54 55 - static const struct software_node cs42l43_gpiochip_swnode = { 56 - .name = "cs42l43-pinctrl", 57 - }; 58 - 59 - static const struct software_node_ref_args cs42l43_cs_refs[] = { 60 - SOFTWARE_NODE_REFERENCE(&cs42l43_gpiochip_swnode, 0, GPIO_ACTIVE_LOW), 61 - SOFTWARE_NODE_REFERENCE(&swnode_gpio_undefined), 62 - }; 63 - 64 - static const struct property_entry cs42l43_cs_props[] = { 65 - PROPERTY_ENTRY_REF_ARRAY("cs-gpios", cs42l43_cs_refs), 66 - {} 67 - }; 68 - 69 55 static int cs42l43_spi_tx(struct regmap *regmap, const u8 *buf, unsigned int len) 70 56 { 71 57 const u8 *end = buf + len; ··· 310 324 fwnode_handle_put(data); 311 325 } 312 326 313 - static void cs42l43_release_sw_node(void *data) 314 - { 315 - software_node_unregister(&cs42l43_gpiochip_swnode); 316 - } 317 - 318 327 static int cs42l43_spi_probe(struct platform_device *pdev) 319 328 { 320 329 struct cs42l43 *cs42l43 = dev_get_drvdata(pdev->dev.parent); ··· 372 391 fwnode_property_read_u32(xu_fwnode, "01fa-sidecar-instances", &nsidecars); 373 392 374 393 if (nsidecars) { 394 + struct software_node_ref_args args[] = { 395 + SOFTWARE_NODE_REFERENCE(fwnode, 0, GPIO_ACTIVE_LOW), 396 + SOFTWARE_NODE_REFERENCE(&swnode_gpio_undefined), 397 + }; 398 + struct property_entry props[] = { 399 + PROPERTY_ENTRY_REF_ARRAY("cs-gpios", args), 400 + { } 401 + }; 402 + 375 403 ret = fwnode_property_read_u32(xu_fwnode, "01fa-spk-id-val", &spkid); 376 404 if (!ret) { 377 405 dev_dbg(priv->dev, "01fa-spk-id-val = %d\n", spkid); ··· 393 403 "Failed to get spk-id-gpios\n"); 394 404 } 395 405 396 - ret = software_node_register(&cs42l43_gpiochip_swnode); 397 - if (ret) 398 - return dev_err_probe(priv->dev, ret, 399 - "Failed to register gpio swnode\n"); 400 - 401 - ret = devm_add_action_or_reset(priv->dev, cs42l43_release_sw_node, NULL); 402 - if (ret) 403 - return ret; 404 - 405 - ret = device_create_managed_software_node(&priv->ctlr->dev, 406 - cs42l43_cs_props, NULL); 406 + ret = device_create_managed_software_node(&priv->ctlr->dev, props, NULL); 407 407 if (ret) 408 408 return dev_err_probe(priv->dev, ret, "Failed to add swnode\n"); 409 409 } else {
+1
drivers/spi/spi-intel-pci.c
··· 80 80 { PCI_VDEVICE(INTEL, 0x51a4), (unsigned long)&cnl_info }, 81 81 { PCI_VDEVICE(INTEL, 0x54a4), (unsigned long)&cnl_info }, 82 82 { PCI_VDEVICE(INTEL, 0x5794), (unsigned long)&cnl_info }, 83 + { PCI_VDEVICE(INTEL, 0x5825), (unsigned long)&cnl_info }, 83 84 { PCI_VDEVICE(INTEL, 0x7723), (unsigned long)&cnl_info }, 84 85 { PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info }, 85 86 { PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info },
+1 -1
drivers/ufs/core/ufs-sysfs.c
··· 1949 1949 return hba->dev_info.hid_sup ? attr->mode : 0; 1950 1950 } 1951 1951 1952 - const struct attribute_group ufs_sysfs_hid_group = { 1952 + static const struct attribute_group ufs_sysfs_hid_group = { 1953 1953 .name = "hid", 1954 1954 .attrs = ufs_sysfs_hid, 1955 1955 .is_visible = ufs_sysfs_hid_is_visible,
-1
drivers/ufs/core/ufs-sysfs.h
··· 14 14 15 15 extern const struct attribute_group ufs_sysfs_unit_descriptor_group; 16 16 extern const struct attribute_group ufs_sysfs_lun_attributes_group; 17 - extern const struct attribute_group ufs_sysfs_hid_group; 18 17 19 18 #endif
+23 -22
drivers/ufs/core/ufshcd.c
··· 4282 4282 get, UIC_GET_ATTR_ID(attr_sel), 4283 4283 UFS_UIC_COMMAND_RETRIES - retries); 4284 4284 4285 - if (mib_val && !ret) 4286 - *mib_val = uic_cmd.argument3; 4285 + if (mib_val) 4286 + *mib_val = ret == 0 ? uic_cmd.argument3 : 0; 4287 4287 4288 4288 if (peer && (hba->quirks & UFSHCD_QUIRK_DME_PEER_ACCESS_AUTO_MODE) 4289 4289 && pwr_mode_change) ··· 4999 4999 5000 5000 static int ufshcd_disable_tx_lcc(struct ufs_hba *hba, bool peer) 5001 5001 { 5002 - int tx_lanes = 0, i, err = 0; 5002 + int tx_lanes, i, err = 0; 5003 5003 5004 5004 if (!peer) 5005 5005 ufshcd_dme_get(hba, UIC_ARG_MIB(PA_CONNECTEDTXDATALANES), ··· 5066 5066 * If UFS device isn't active then we will have to issue link startup 5067 5067 * 2 times to make sure the device state move to active. 5068 5068 */ 5069 - if (!ufshcd_is_ufs_dev_active(hba)) 5069 + if (!(hba->quirks & UFSHCD_QUIRK_PERFORM_LINK_STARTUP_ONCE) && 5070 + !ufshcd_is_ufs_dev_active(hba)) 5070 5071 link_startup_again = true; 5071 5072 5072 5073 link_startup: ··· 5132 5131 ufshcd_readl(hba, REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER); 5133 5132 ret = ufshcd_make_hba_operational(hba); 5134 5133 out: 5135 - if (ret) { 5134 + if (ret) 5136 5135 dev_err(hba->dev, "link startup failed %d\n", ret); 5137 - ufshcd_print_host_state(hba); 5138 - ufshcd_print_pwr_info(hba); 5139 - ufshcd_print_evt_hist(hba); 5140 - } 5141 5136 return ret; 5142 5137 } 5143 5138 ··· 6670 6673 hba->saved_uic_err, hba->force_reset, 6671 6674 ufshcd_is_link_broken(hba) ? "; link is broken" : ""); 6672 6675 6676 + /* 6677 + * Use ufshcd_rpm_get_noresume() here to safely perform link recovery 6678 + * even if an error occurs during runtime suspend or runtime resume. 6679 + * This avoids potential deadlocks that could happen if we tried to 6680 + * resume the device while a PM operation is already in progress. 6681 + */ 6682 + ufshcd_rpm_get_noresume(hba); 6683 + if (hba->pm_op_in_progress) { 6684 + ufshcd_link_recovery(hba); 6685 + ufshcd_rpm_put(hba); 6686 + return; 6687 + } 6688 + ufshcd_rpm_put(hba); 6689 + 6673 6690 down(&hba->host_sem); 6674 6691 spin_lock_irqsave(hba->host->host_lock, flags); 6675 6692 if (ufshcd_err_handling_should_stop(hba)) { ··· 6694 6683 return; 6695 6684 } 6696 6685 spin_unlock_irqrestore(hba->host->host_lock, flags); 6697 - 6698 - ufshcd_rpm_get_noresume(hba); 6699 - if (hba->pm_op_in_progress) { 6700 - ufshcd_link_recovery(hba); 6701 - ufshcd_rpm_put(hba); 6702 - return; 6703 - } 6704 - ufshcd_rpm_put(hba); 6705 6686 6706 6687 ufshcd_err_handling_prepare(hba); 6707 6688 ··· 8499 8496 dev_info->hid_sup = get_unaligned_be32(desc_buf + 8500 8497 DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP) & 8501 8498 UFS_DEV_HID_SUPPORT; 8502 - 8503 - sysfs_update_group(&hba->dev->kobj, &ufs_sysfs_hid_group); 8504 8499 8505 8500 model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME]; 8506 8501 ··· 10656 10655 * @mmio_base: base register address 10657 10656 * @irq: Interrupt line of device 10658 10657 * 10659 - * Return: 0 on success, non-zero value on failure. 10658 + * Return: 0 on success; < 0 on failure. 
10660 10659 */ 10661 10660 int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) 10662 10661 { ··· 10886 10885 if (err) 10887 10886 goto out_disable; 10888 10887 10889 - async_schedule(ufshcd_async_scan, hba); 10890 10888 ufs_sysfs_add_nodes(hba->dev); 10889 + async_schedule(ufshcd_async_scan, hba); 10891 10890 10892 10891 device_enable_async_suspend(dev); 10893 10892 ufshcd_pm_qos_init(hba); ··· 10897 10896 hba->is_irq_enabled = false; 10898 10897 ufshcd_hba_exit(hba); 10899 10898 out_error: 10900 - return err; 10899 + return err > 0 ? -EIO : err; 10901 10900 } 10902 10901 EXPORT_SYMBOL_GPL(ufshcd_init); 10903 10902
+14 -1
drivers/ufs/host/ufs-qcom.c
··· 740 740 741 741 742 742 /* reset the connected UFS device during power down */ 743 - if (ufs_qcom_is_link_off(hba) && host->device_reset) 743 + if (ufs_qcom_is_link_off(hba) && host->device_reset) { 744 744 ufs_qcom_device_reset_ctrl(hba, true); 745 + /* 746 + * After sending the SSU command, asserting the rst_n 747 + * line causes the device firmware to wake up and 748 + * execute its reset routine. 749 + * 750 + * During this process, the device may draw current 751 + * beyond the permissible limit for low-power mode (LPM). 752 + * A 10ms delay, based on experimental observations, 753 + * allows the UFS device to complete its hardware reset 754 + * before transitioning the power rail to LPM. 755 + */ 756 + usleep_range(10000, 11000); 757 + } 745 758 746 759 return ufs_qcom_ice_suspend(host); 747 760 }
+67 -3
drivers/ufs/host/ufshcd-pci.c
··· 15 15 #include <linux/pci.h> 16 16 #include <linux/pm_runtime.h> 17 17 #include <linux/pm_qos.h> 18 + #include <linux/suspend.h> 18 19 #include <linux/debugfs.h> 19 20 #include <linux/uuid.h> 20 21 #include <linux/acpi.h> ··· 32 31 u32 dsm_fns; 33 32 u32 active_ltr; 34 33 u32 idle_ltr; 34 + int saved_spm_lvl; 35 35 struct dentry *debugfs_root; 36 36 struct gpio_desc *reset_gpio; 37 37 }; ··· 349 347 host = devm_kzalloc(hba->dev, sizeof(*host), GFP_KERNEL); 350 348 if (!host) 351 349 return -ENOMEM; 350 + host->saved_spm_lvl = -1; 352 351 ufshcd_set_variant(hba, host); 353 352 intel_dsm_init(host, hba->dev); 354 353 if (INTEL_DSM_SUPPORTED(host, RESET)) { ··· 428 425 static int ufs_intel_adl_init(struct ufs_hba *hba) 429 426 { 430 427 hba->nop_out_timeout = 200; 431 - hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8; 428 + hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8 | 429 + UFSHCD_QUIRK_PERFORM_LINK_STARTUP_ONCE; 432 430 hba->caps |= UFSHCD_CAP_WB_EN; 433 431 return ufs_intel_common_init(hba); 434 432 } ··· 542 538 543 539 return ufshcd_system_resume(dev); 544 540 } 541 + 542 + static int ufs_intel_suspend_prepare(struct device *dev) 543 + { 544 + struct ufs_hba *hba = dev_get_drvdata(dev); 545 + struct intel_host *host = ufshcd_get_variant(hba); 546 + int err; 547 + 548 + /* 549 + * Only s2idle (S0ix) retains link state. Force power-off 550 + * (UFS_PM_LVL_5) for any other case. 551 + */ 552 + if (pm_suspend_target_state != PM_SUSPEND_TO_IDLE && hba->spm_lvl < UFS_PM_LVL_5) { 553 + host->saved_spm_lvl = hba->spm_lvl; 554 + hba->spm_lvl = UFS_PM_LVL_5; 555 + } 556 + 557 + err = ufshcd_suspend_prepare(dev); 558 + 559 + if (err < 0 && host->saved_spm_lvl != -1) { 560 + hba->spm_lvl = host->saved_spm_lvl; 561 + host->saved_spm_lvl = -1; 562 + } 563 + 564 + return err; 565 + } 566 + 567 + static void ufs_intel_resume_complete(struct device *dev) 568 + { 569 + struct ufs_hba *hba = dev_get_drvdata(dev); 570 + struct intel_host *host = ufshcd_get_variant(hba); 571 + 572 + ufshcd_resume_complete(dev); 573 + 574 + if (host->saved_spm_lvl != -1) { 575 + hba->spm_lvl = host->saved_spm_lvl; 576 + host->saved_spm_lvl = -1; 577 + } 578 + } 579 + 580 + static int ufshcd_pci_suspend_prepare(struct device *dev) 581 + { 582 + struct ufs_hba *hba = dev_get_drvdata(dev); 583 + 584 + if (!strcmp(hba->vops->name, "intel-pci")) 585 + return ufs_intel_suspend_prepare(dev); 586 + 587 + return ufshcd_suspend_prepare(dev); 588 + } 589 + 590 + static void ufshcd_pci_resume_complete(struct device *dev) 591 + { 592 + struct ufs_hba *hba = dev_get_drvdata(dev); 593 + 594 + if (!strcmp(hba->vops->name, "intel-pci")) { 595 + ufs_intel_resume_complete(dev); 596 + return; 597 + } 598 + 599 + ufshcd_resume_complete(dev); 600 + } 545 601 #endif 546 602 547 603 /** ··· 675 611 .thaw = ufshcd_system_resume, 676 612 .poweroff = ufshcd_system_suspend, 677 613 .restore = ufshcd_pci_restore, 678 - .prepare = ufshcd_suspend_prepare, 679 - .complete = ufshcd_resume_complete, 614 + .prepare = ufshcd_pci_suspend_prepare, 615 + .complete = ufshcd_pci_resume_complete, 680 616 #endif 681 617 }; 682 618
+110 -63
drivers/vfio/vfio_iommu_type1.c
··· 38 38 #include <linux/workqueue.h> 39 39 #include <linux/notifier.h> 40 40 #include <linux/mm_inline.h> 41 + #include <linux/overflow.h> 41 42 #include "vfio.h" 42 43 43 44 #define DRIVER_VERSION "0.2" ··· 168 167 { 169 168 struct rb_node *node = iommu->dma_list.rb_node; 170 169 170 + WARN_ON(!size); 171 + 171 172 while (node) { 172 173 struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node); 173 174 174 - if (start + size <= dma->iova) 175 + if (start + size - 1 < dma->iova) 175 176 node = node->rb_left; 176 - else if (start >= dma->iova + dma->size) 177 + else if (start > dma->iova + dma->size - 1) 177 178 node = node->rb_right; 178 179 else 179 180 return dma; ··· 185 182 } 186 183 187 184 static struct rb_node *vfio_find_dma_first_node(struct vfio_iommu *iommu, 188 - dma_addr_t start, u64 size) 185 + dma_addr_t start, 186 + dma_addr_t end) 189 187 { 190 188 struct rb_node *res = NULL; 191 189 struct rb_node *node = iommu->dma_list.rb_node; 192 190 struct vfio_dma *dma_res = NULL; 193 191 192 + WARN_ON(end < start); 193 + 194 194 while (node) { 195 195 struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node); 196 196 197 - if (start < dma->iova + dma->size) { 197 + if (start <= dma->iova + dma->size - 1) { 198 198 res = node; 199 199 dma_res = dma; 200 200 if (start >= dma->iova) ··· 207 201 node = node->rb_right; 208 202 } 209 203 } 210 - if (res && size && dma_res->iova >= start + size) 204 + if (res && dma_res->iova > end) 211 205 res = NULL; 212 206 return res; 213 207 } ··· 217 211 struct rb_node **link = &iommu->dma_list.rb_node, *parent = NULL; 218 212 struct vfio_dma *dma; 219 213 214 + WARN_ON(new->size != 0); 215 + 220 216 while (*link) { 221 217 parent = *link; 222 218 dma = rb_entry(parent, struct vfio_dma, node); 223 219 224 - if (new->iova + new->size <= dma->iova) 220 + if (new->iova <= dma->iova) 225 221 link = &(*link)->rb_left; 226 222 else 227 223 link = &(*link)->rb_right; ··· 903 895 unsigned long remote_vaddr; 904 896 struct vfio_dma *dma; 905 897 bool do_accounting; 898 + dma_addr_t iova_end; 899 + size_t iova_size; 906 900 907 - if (!iommu || !pages) 901 + if (!iommu || !pages || npage <= 0) 908 902 return -EINVAL; 909 903 910 904 /* Supported for v2 version only */ 911 905 if (!iommu->v2) 912 906 return -EACCES; 907 + 908 + if (check_mul_overflow(npage, PAGE_SIZE, &iova_size) || 909 + check_add_overflow(user_iova, iova_size - 1, &iova_end)) 910 + return -EOVERFLOW; 913 911 914 912 mutex_lock(&iommu->lock); 915 913 ··· 1022 1008 { 1023 1009 struct vfio_iommu *iommu = iommu_data; 1024 1010 bool do_accounting; 1011 + dma_addr_t iova_end; 1012 + size_t iova_size; 1025 1013 int i; 1026 1014 1027 1015 /* Supported for v2 version only */ 1028 1016 if (WARN_ON(!iommu->v2)) 1017 + return; 1018 + 1019 + if (WARN_ON(npage <= 0)) 1020 + return; 1021 + 1022 + if (WARN_ON(check_mul_overflow(npage, PAGE_SIZE, &iova_size) || 1023 + check_add_overflow(user_iova, iova_size - 1, &iova_end))) 1029 1024 return; 1030 1025 1031 1026 mutex_lock(&iommu->lock); ··· 1090 1067 #define VFIO_IOMMU_TLB_SYNC_MAX 512 1091 1068 1092 1069 static size_t unmap_unpin_fast(struct vfio_domain *domain, 1093 - struct vfio_dma *dma, dma_addr_t *iova, 1070 + struct vfio_dma *dma, dma_addr_t iova, 1094 1071 size_t len, phys_addr_t phys, long *unlocked, 1095 1072 struct list_head *unmapped_list, 1096 1073 int *unmapped_cnt, ··· 1100 1077 struct vfio_regions *entry = kzalloc(sizeof(*entry), GFP_KERNEL); 1101 1078 1102 1079 if (entry) { 1103 - unmapped = iommu_unmap_fast(domain->domain, *iova, 
len, 1080 + unmapped = iommu_unmap_fast(domain->domain, iova, len, 1104 1081 iotlb_gather); 1105 1082 1106 1083 if (!unmapped) { 1107 1084 kfree(entry); 1108 1085 } else { 1109 - entry->iova = *iova; 1086 + entry->iova = iova; 1110 1087 entry->phys = phys; 1111 1088 entry->len = unmapped; 1112 1089 list_add_tail(&entry->list, unmapped_list); 1113 1090 1114 - *iova += unmapped; 1115 1091 (*unmapped_cnt)++; 1116 1092 } 1117 1093 } ··· 1129 1107 } 1130 1108 1131 1109 static size_t unmap_unpin_slow(struct vfio_domain *domain, 1132 - struct vfio_dma *dma, dma_addr_t *iova, 1110 + struct vfio_dma *dma, dma_addr_t iova, 1133 1111 size_t len, phys_addr_t phys, 1134 1112 long *unlocked) 1135 1113 { 1136 - size_t unmapped = iommu_unmap(domain->domain, *iova, len); 1114 + size_t unmapped = iommu_unmap(domain->domain, iova, len); 1137 1115 1138 1116 if (unmapped) { 1139 - *unlocked += vfio_unpin_pages_remote(dma, *iova, 1117 + *unlocked += vfio_unpin_pages_remote(dma, iova, 1140 1118 phys >> PAGE_SHIFT, 1141 1119 unmapped >> PAGE_SHIFT, 1142 1120 false); 1143 - *iova += unmapped; 1144 1121 cond_resched(); 1145 1122 } 1146 1123 return unmapped; ··· 1148 1127 static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma, 1149 1128 bool do_accounting) 1150 1129 { 1151 - dma_addr_t iova = dma->iova, end = dma->iova + dma->size; 1152 1130 struct vfio_domain *domain, *d; 1153 1131 LIST_HEAD(unmapped_region_list); 1154 1132 struct iommu_iotlb_gather iotlb_gather; 1155 1133 int unmapped_region_cnt = 0; 1156 1134 long unlocked = 0; 1135 + size_t pos = 0; 1157 1136 1158 1137 if (!dma->size) 1159 1138 return 0; ··· 1177 1156 } 1178 1157 1179 1158 iommu_iotlb_gather_init(&iotlb_gather); 1180 - while (iova < end) { 1159 + while (pos < dma->size) { 1181 1160 size_t unmapped, len; 1182 1161 phys_addr_t phys, next; 1162 + dma_addr_t iova = dma->iova + pos; 1183 1163 1184 1164 phys = iommu_iova_to_phys(domain->domain, iova); 1185 1165 if (WARN_ON(!phys)) { 1186 - iova += PAGE_SIZE; 1166 + pos += PAGE_SIZE; 1187 1167 continue; 1188 1168 } 1189 1169 ··· 1193 1171 * may require hardware cache flushing, try to find the 1194 1172 * largest contiguous physical memory chunk to unmap. 1195 1173 */ 1196 - for (len = PAGE_SIZE; iova + len < end; len += PAGE_SIZE) { 1174 + for (len = PAGE_SIZE; pos + len < dma->size; len += PAGE_SIZE) { 1197 1175 next = iommu_iova_to_phys(domain->domain, iova + len); 1198 1176 if (next != phys + len) 1199 1177 break; ··· 1203 1181 * First, try to use fast unmap/unpin. In case of failure, 1204 1182 * switch to slow unmap/unpin path. 
1205 1183 */ 1206 - unmapped = unmap_unpin_fast(domain, dma, &iova, len, phys, 1184 + unmapped = unmap_unpin_fast(domain, dma, iova, len, phys, 1207 1185 &unlocked, &unmapped_region_list, 1208 1186 &unmapped_region_cnt, 1209 1187 &iotlb_gather); 1210 1188 if (!unmapped) { 1211 - unmapped = unmap_unpin_slow(domain, dma, &iova, len, 1189 + unmapped = unmap_unpin_slow(domain, dma, iova, len, 1212 1190 phys, &unlocked); 1213 1191 if (WARN_ON(!unmapped)) 1214 1192 break; 1215 1193 } 1194 + 1195 + pos += unmapped; 1216 1196 } 1217 1197 1218 1198 dma->iommu_mapped = false; ··· 1306 1282 } 1307 1283 1308 1284 static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, 1309 - dma_addr_t iova, size_t size, size_t pgsize) 1285 + dma_addr_t iova, dma_addr_t iova_end, size_t pgsize) 1310 1286 { 1311 1287 struct vfio_dma *dma; 1312 1288 struct rb_node *n; ··· 1323 1299 if (dma && dma->iova != iova) 1324 1300 return -EINVAL; 1325 1301 1326 - dma = vfio_find_dma(iommu, iova + size - 1, 0); 1327 - if (dma && dma->iova + dma->size != iova + size) 1302 + dma = vfio_find_dma(iommu, iova_end, 1); 1303 + if (dma && dma->iova + dma->size - 1 != iova_end) 1328 1304 return -EINVAL; 1329 1305 1330 1306 for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) { ··· 1333 1309 if (dma->iova < iova) 1334 1310 continue; 1335 1311 1336 - if (dma->iova > iova + size - 1) 1312 + if (dma->iova > iova_end) 1337 1313 break; 1338 1314 1339 1315 ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize); ··· 1398 1374 int ret = -EINVAL, retries = 0; 1399 1375 unsigned long pgshift; 1400 1376 dma_addr_t iova = unmap->iova; 1401 - u64 size = unmap->size; 1377 + dma_addr_t iova_end; 1378 + size_t size = unmap->size; 1402 1379 bool unmap_all = unmap->flags & VFIO_DMA_UNMAP_FLAG_ALL; 1403 1380 bool invalidate_vaddr = unmap->flags & VFIO_DMA_UNMAP_FLAG_VADDR; 1404 1381 struct rb_node *n, *first_n; ··· 1412 1387 goto unlock; 1413 1388 } 1414 1389 1390 + if (iova != unmap->iova || size != unmap->size) { 1391 + ret = -EOVERFLOW; 1392 + goto unlock; 1393 + } 1394 + 1415 1395 pgshift = __ffs(iommu->pgsize_bitmap); 1416 1396 pgsize = (size_t)1 << pgshift; 1417 1397 ··· 1426 1396 if (unmap_all) { 1427 1397 if (iova || size) 1428 1398 goto unlock; 1429 - size = U64_MAX; 1430 - } else if (!size || size & (pgsize - 1) || 1431 - iova + size - 1 < iova || size > SIZE_MAX) { 1432 - goto unlock; 1399 + iova_end = ~(dma_addr_t)0; 1400 + } else { 1401 + if (!size || size & (pgsize - 1)) 1402 + goto unlock; 1403 + 1404 + if (check_add_overflow(iova, size - 1, &iova_end)) { 1405 + ret = -EOVERFLOW; 1406 + goto unlock; 1407 + } 1433 1408 } 1434 1409 1435 1410 /* When dirty tracking is enabled, allow only min supported pgsize */ ··· 1481 1446 if (dma && dma->iova != iova) 1482 1447 goto unlock; 1483 1448 1484 - dma = vfio_find_dma(iommu, iova + size - 1, 0); 1485 - if (dma && dma->iova + dma->size != iova + size) 1449 + dma = vfio_find_dma(iommu, iova_end, 1); 1450 + if (dma && dma->iova + dma->size - 1 != iova_end) 1486 1451 goto unlock; 1487 1452 } 1488 1453 1489 1454 ret = 0; 1490 - n = first_n = vfio_find_dma_first_node(iommu, iova, size); 1455 + n = first_n = vfio_find_dma_first_node(iommu, iova, iova_end); 1491 1456 1492 1457 while (n) { 1493 1458 dma = rb_entry(n, struct vfio_dma, node); 1494 - if (dma->iova >= iova + size) 1459 + if (dma->iova > iova_end) 1495 1460 break; 1496 1461 1497 1462 if (!iommu->v2 && iova > dma->iova) ··· 1683 1648 { 1684 1649 bool set_vaddr = map->flags & VFIO_DMA_MAP_FLAG_VADDR; 1685 1650 dma_addr_t 
iova = map->iova; 1651 + dma_addr_t iova_end; 1686 1652 unsigned long vaddr = map->vaddr; 1653 + unsigned long vaddr_end; 1687 1654 size_t size = map->size; 1688 1655 int ret = 0, prot = 0; 1689 1656 size_t pgsize; ··· 1693 1656 1694 1657 /* Verify that none of our __u64 fields overflow */ 1695 1658 if (map->size != size || map->vaddr != vaddr || map->iova != iova) 1659 + return -EOVERFLOW; 1660 + 1661 + if (!size) 1696 1662 return -EINVAL; 1663 + 1664 + if (check_add_overflow(iova, size - 1, &iova_end) || 1665 + check_add_overflow(vaddr, size - 1, &vaddr_end)) 1666 + return -EOVERFLOW; 1697 1667 1698 1668 /* READ/WRITE from device perspective */ 1699 1669 if (map->flags & VFIO_DMA_MAP_FLAG_WRITE) ··· 1717 1673 1718 1674 WARN_ON((pgsize - 1) & PAGE_MASK); 1719 1675 1720 - if (!size || (size | iova | vaddr) & (pgsize - 1)) { 1721 - ret = -EINVAL; 1722 - goto out_unlock; 1723 - } 1724 - 1725 - /* Don't allow IOVA or virtual address wrap */ 1726 - if (iova + size - 1 < iova || vaddr + size - 1 < vaddr) { 1676 + if ((size | iova | vaddr) & (pgsize - 1)) { 1727 1677 ret = -EINVAL; 1728 1678 goto out_unlock; 1729 1679 } ··· 1748 1710 goto out_unlock; 1749 1711 } 1750 1712 1751 - if (!vfio_iommu_iova_dma_valid(iommu, iova, iova + size - 1)) { 1713 + if (!vfio_iommu_iova_dma_valid(iommu, iova, iova_end)) { 1752 1714 ret = -EINVAL; 1753 1715 goto out_unlock; 1754 1716 } ··· 1821 1783 1822 1784 for (; n; n = rb_next(n)) { 1823 1785 struct vfio_dma *dma; 1824 - dma_addr_t iova; 1786 + size_t pos = 0; 1825 1787 1826 1788 dma = rb_entry(n, struct vfio_dma, node); 1827 - iova = dma->iova; 1828 1789 1829 - while (iova < dma->iova + dma->size) { 1790 + while (pos < dma->size) { 1791 + dma_addr_t iova = dma->iova + pos; 1830 1792 phys_addr_t phys; 1831 1793 size_t size; 1832 1794 ··· 1842 1804 phys = iommu_iova_to_phys(d->domain, iova); 1843 1805 1844 1806 if (WARN_ON(!phys)) { 1845 - iova += PAGE_SIZE; 1807 + pos += PAGE_SIZE; 1846 1808 continue; 1847 1809 } 1848 1810 1849 1811 size = PAGE_SIZE; 1850 1812 p = phys + size; 1851 1813 i = iova + size; 1852 - while (i < dma->iova + dma->size && 1814 + while (pos + size < dma->size && 1853 1815 p == iommu_iova_to_phys(d->domain, i)) { 1854 1816 size += PAGE_SIZE; 1855 1817 p += PAGE_SIZE; ··· 1857 1819 } 1858 1820 } else { 1859 1821 unsigned long pfn; 1860 - unsigned long vaddr = dma->vaddr + 1861 - (iova - dma->iova); 1862 - size_t n = dma->iova + dma->size - iova; 1822 + unsigned long vaddr = dma->vaddr + pos; 1823 + size_t n = dma->size - pos; 1863 1824 long npage; 1864 1825 1865 1826 npage = vfio_pin_pages_remote(dma, vaddr, ··· 1889 1852 goto unwind; 1890 1853 } 1891 1854 1892 - iova += size; 1855 + pos += size; 1893 1856 } 1894 1857 } 1895 1858 ··· 1906 1869 unwind: 1907 1870 for (; n; n = rb_prev(n)) { 1908 1871 struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node); 1909 - dma_addr_t iova; 1872 + size_t pos = 0; 1910 1873 1911 1874 if (dma->iommu_mapped) { 1912 1875 iommu_unmap(domain->domain, dma->iova, dma->size); 1913 1876 continue; 1914 1877 } 1915 1878 1916 - iova = dma->iova; 1917 - while (iova < dma->iova + dma->size) { 1879 + while (pos < dma->size) { 1880 + dma_addr_t iova = dma->iova + pos; 1918 1881 phys_addr_t phys, p; 1919 1882 size_t size; 1920 1883 dma_addr_t i; 1921 1884 1922 1885 phys = iommu_iova_to_phys(domain->domain, iova); 1923 1886 if (!phys) { 1924 - iova += PAGE_SIZE; 1887 + pos += PAGE_SIZE; 1925 1888 continue; 1926 1889 } 1927 1890 1928 1891 size = PAGE_SIZE; 1929 1892 p = phys + size; 1930 1893 i = iova + size; 1931 - 
while (i < dma->iova + dma->size && 1894 + while (pos + size < dma->size && 1932 1895 p == iommu_iova_to_phys(domain->domain, i)) { 1933 1896 size += PAGE_SIZE; 1934 1897 p += PAGE_SIZE; ··· 3014 2977 struct vfio_iommu_type1_dirty_bitmap_get range; 3015 2978 unsigned long pgshift; 3016 2979 size_t data_size = dirty.argsz - minsz; 3017 - size_t iommu_pgsize; 2980 + size_t size, iommu_pgsize; 2981 + dma_addr_t iova, iova_end; 3018 2982 3019 2983 if (!data_size || data_size < sizeof(range)) 3020 2984 return -EINVAL; ··· 3024 2986 sizeof(range))) 3025 2987 return -EFAULT; 3026 2988 3027 - if (range.iova + range.size < range.iova) 2989 + iova = range.iova; 2990 + size = range.size; 2991 + 2992 + if (iova != range.iova || size != range.size) 2993 + return -EOVERFLOW; 2994 + 2995 + if (!size) 3028 2996 return -EINVAL; 2997 + 2998 + if (check_add_overflow(iova, size - 1, &iova_end)) 2999 + return -EOVERFLOW; 3000 + 3029 3001 if (!access_ok((void __user *)range.bitmap.data, 3030 3002 range.bitmap.size)) 3031 3003 return -EINVAL; 3032 3004 3033 3005 pgshift = __ffs(range.bitmap.pgsize); 3034 - ret = verify_bitmap_size(range.size >> pgshift, 3006 + ret = verify_bitmap_size(size >> pgshift, 3035 3007 range.bitmap.size); 3036 3008 if (ret) 3037 3009 return ret; ··· 3055 3007 ret = -EINVAL; 3056 3008 goto out_unlock; 3057 3009 } 3058 - if (range.iova & (iommu_pgsize - 1)) { 3010 + if (iova & (iommu_pgsize - 1)) { 3059 3011 ret = -EINVAL; 3060 3012 goto out_unlock; 3061 3013 } 3062 - if (!range.size || range.size & (iommu_pgsize - 1)) { 3014 + if (size & (iommu_pgsize - 1)) { 3063 3015 ret = -EINVAL; 3064 3016 goto out_unlock; 3065 3017 } 3066 3018 3067 3019 if (iommu->dirty_page_tracking) 3068 3020 ret = vfio_iova_dirty_bitmap(range.bitmap.data, 3069 - iommu, range.iova, 3070 - range.size, 3021 + iommu, iova, iova_end, 3071 3022 range.bitmap.pgsize); 3072 3023 else 3073 3024 ret = -EINVAL;
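The vfio rework consistently swaps open-coded wrap tests such as iova + size - 1 < iova for the overflow.h helpers, and compares against an inclusive end address so ranges touching the top of the address space don't wrap to zero. A small hedged sketch of the helper usage (my_check_range is hypothetical):

        #include <linux/overflow.h>

        /* Validate [iova, iova + npage * PAGE_SIZE) without wrapping. */
        static int my_check_range(dma_addr_t iova, long npage, dma_addr_t *end)
        {
                size_t len;

                if (npage <= 0)
                        return -EINVAL;

                if (check_mul_overflow(npage, PAGE_SIZE, &len) ||
                    check_add_overflow(iova, len - 1, end))     /* *end is inclusive */
                        return -EOVERFLOW;

                return 0;
        }

check_add_overflow()/check_mul_overflow() evaluate to true when the result doesn't fit the destination, which is why both failures report -EOVERFLOW rather than the old blanket -EINVAL.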
+6 -2
drivers/video/fbdev/aty/atyfb_base.c
··· 2614 2614 pr_cont("\n"); 2615 2615 } 2616 2616 #endif 2617 - if (par->pll_ops->init_pll) 2618 - par->pll_ops->init_pll(info, &par->pll); 2617 + if (par->pll_ops->init_pll) { 2618 + ret = par->pll_ops->init_pll(info, &par->pll); 2619 + if (ret) 2620 + return ret; 2621 + } 2622 + 2619 2623 if (par->pll_ops->resume_pll) 2620 2624 par->pll_ops->resume_pll(info, &par->pll); 2621 2625
+12 -4
drivers/video/fbdev/core/bitblit.c
··· 79 79 struct fb_image *image, u8 *buf, u8 *dst) 80 80 { 81 81 u16 charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff; 82 + unsigned int charcnt = vc->vc_font.charcount; 82 83 u32 idx = vc->vc_font.width >> 3; 83 84 u8 *src; 84 85 85 86 while (cnt--) { 86 - src = vc->vc_font.data + (scr_readw(s++)& 87 - charmask)*cellsize; 87 + u16 ch = scr_readw(s++) & charmask; 88 + 89 + if (ch >= charcnt) 90 + ch = 0; 91 + src = vc->vc_font.data + (unsigned int)ch * cellsize; 88 92 89 93 if (attr) { 90 94 update_attr(buf, src, attr, vc); ··· 116 112 u8 *dst) 117 113 { 118 114 u16 charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff; 115 + unsigned int charcnt = vc->vc_font.charcount; 119 116 u32 shift_low = 0, mod = vc->vc_font.width % 8; 120 117 u32 shift_high = 8; 121 118 u32 idx = vc->vc_font.width >> 3; 122 119 u8 *src; 123 120 124 121 while (cnt--) { 125 - src = vc->vc_font.data + (scr_readw(s++)& 126 - charmask)*cellsize; 122 + u16 ch = scr_readw(s++) & charmask; 123 + 124 + if (ch >= charcnt) 125 + ch = 0; 126 + src = vc->vc_font.data + (unsigned int)ch * cellsize; 127 127 128 128 if (attr) { 129 129 update_attr(buf, src, attr, vc);
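Both hunks install the same guard: mask the console cell down to a glyph index, then fall back to glyph 0 whenever the index is outside the loaded font, so a stale 512-glyph index can no longer read past vc_font.data. A self-contained sketch of the shared pattern (glyph_ptr is an illustrative helper, not fbdev API):

	/* Bounds-checked glyph lookup, as added in both hunks above. */
	static const unsigned char *
	glyph_ptr(const unsigned char *font, unsigned int charcount,
		  unsigned int cellsize, unsigned short cell,
		  unsigned short charmask)
	{
		unsigned short ch = cell & charmask;

		if (ch >= charcount)
			ch = 0;	/* never index past the font bitmap */
		return font + (unsigned int)ch * cellsize;
	}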
+19
drivers/video/fbdev/core/fbcon.c
··· 2810 2810 return found; 2811 2811 } 2812 2812 2813 + static void fbcon_delete_mode(struct fb_videomode *m) 2814 + { 2815 + struct fbcon_display *p; 2816 + 2817 + for (int i = first_fb_vc; i <= last_fb_vc; i++) { 2818 + p = &fb_display[i]; 2819 + if (p->mode == m) 2820 + p->mode = NULL; 2821 + } 2822 + } 2823 + 2824 + void fbcon_delete_modelist(struct list_head *head) 2825 + { 2826 + struct fb_modelist *modelist; 2827 + 2828 + list_for_each_entry(modelist, head, list) 2829 + fbcon_delete_mode(&modelist->mode); 2830 + } 2831 + 2813 2832 #ifdef CONFIG_VT_HW_CONSOLE_BINDING 2814 2833 static void fbcon_unbind(void) 2815 2834 {
+1
drivers/video/fbdev/core/fbmem.c
··· 544 544 fb_info->pixmap.addr = NULL; 545 545 } 546 546 547 + fbcon_delete_modelist(&fb_info->modelist); 547 548 fb_destroy_modelist(&fb_info->modelist); 548 549 registered_fb[fb_info->node] = NULL; 549 550 num_registered_fb--;
+1 -1
drivers/video/fbdev/pvr2fb.c
··· 192 192 193 193 #ifdef CONFIG_PVR2_DMA 194 194 static unsigned int shdma = PVR2_CASCADE_CHAN; 195 - static unsigned int pvr2dma = ONCHIP_NR_DMA_CHANNELS; 195 + static unsigned int pvr2dma = CONFIG_NR_ONCHIP_DMA_CHANNELS; 196 196 #endif 197 197 198 198 static struct fb_videomode pvr2_modedb[] = {
+2
drivers/video/fbdev/valkyriefb.c
··· 329 329 330 330 if (of_address_to_resource(dp, 0, &r)) { 331 331 printk(KERN_ERR "can't find address for valkyrie\n"); 332 + of_node_put(dp); 332 333 return 0; 333 334 } 334 335 335 336 frame_buffer_phys = r.start; 336 337 cmap_regs_phys = r.start + 0x304000; 338 + of_node_put(dp); 337 339 } 338 340 #endif /* ppc (!CONFIG_MAC) */ 339 341
+8
fs/btrfs/extent_io.c
··· 2228 2228 wbc_account_cgroup_owner(wbc, folio, range_len); 2229 2229 folio_unlock(folio); 2230 2230 } 2231 + /* 2232 + * If the fs is already in error status, do not submit any writeback 2233 + * but immediately finish it. 2234 + */ 2235 + if (unlikely(BTRFS_FS_ERROR(fs_info))) { 2236 + btrfs_bio_end_io(bbio, errno_to_blk_status(BTRFS_FS_ERROR(fs_info))); 2237 + return; 2238 + } 2231 2239 btrfs_submit_bbio(bbio, 0); 2232 2240 } 2233 2241
+10
fs/btrfs/file.c
··· 2854 2854 { 2855 2855 struct btrfs_trans_handle *trans; 2856 2856 struct btrfs_root *root = BTRFS_I(inode)->root; 2857 + u64 range_start; 2858 + u64 range_end; 2857 2859 int ret; 2858 2860 int ret2; 2859 2861 2860 2862 if (mode & FALLOC_FL_KEEP_SIZE || end <= i_size_read(inode)) 2861 2863 return 0; 2864 + 2865 + range_start = round_down(i_size_read(inode), root->fs_info->sectorsize); 2866 + range_end = round_up(end, root->fs_info->sectorsize); 2867 + 2868 + ret = btrfs_inode_set_file_extent_range(BTRFS_I(inode), range_start, 2869 + range_end - range_start); 2870 + if (ret) 2871 + return ret; 2862 2872 2863 2873 trans = btrfs_start_transaction(root, 1); 2864 2874 if (IS_ERR(trans))
-1
fs/btrfs/inode.c
··· 6873 6873 BTRFS_I(inode)->dir_index = 0ULL; 6874 6874 inode_inc_iversion(inode); 6875 6875 inode_set_ctime_current(inode); 6876 - set_bit(BTRFS_INODE_COPY_EVERYTHING, &BTRFS_I(inode)->runtime_flags); 6877 6876 6878 6877 ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode), 6879 6878 &fname.disk_name, 1, index);
+3 -1
fs/btrfs/qgroup.c
··· 1539 1539 ASSERT(prealloc); 1540 1540 1541 1541 /* Check the level of src and dst first */ 1542 - if (btrfs_qgroup_level(src) >= btrfs_qgroup_level(dst)) 1542 + if (btrfs_qgroup_level(src) >= btrfs_qgroup_level(dst)) { 1543 + kfree(prealloc); 1543 1544 return -EINVAL; 1545 + } 1544 1546 1545 1547 mutex_lock(&fs_info->qgroup_ioctl_lock); 1546 1548 if (!fs_info->quota_root) {
+3
fs/btrfs/tree-log.c
··· 7910 7910 bool log_pinned = false; 7911 7911 int ret; 7912 7912 7913 + /* The inode has a new name (ref/extref), so make sure we log it. */ 7914 + set_bit(BTRFS_INODE_COPY_EVERYTHING, &inode->runtime_flags); 7915 + 7913 7916 btrfs_init_log_ctx(&ctx, inode); 7914 7917 ctx.logging_new_name = true; 7915 7918
+1 -2
fs/crypto/inline_crypt.c
··· 333 333 inode = mapping->host; 334 334 335 335 *inode_ret = inode; 336 - *lblk_num_ret = ((u64)folio->index << (PAGE_SHIFT - inode->i_blkbits)) + 337 - (bh_offset(bh) >> inode->i_blkbits); 336 + *lblk_num_ret = (folio_pos(folio) + bh_offset(bh)) >> inode->i_blkbits; 338 337 return true; 339 338 } 340 339
+16 -5
fs/nfsd/nfs4proc.c
··· 988 988 static void 989 989 nfsd4_read_release(union nfsd4_op_u *u) 990 990 { 991 - if (u->read.rd_nf) 991 + if (u->read.rd_nf) { 992 + trace_nfsd_read_done(u->read.rd_rqstp, u->read.rd_fhp, 993 + u->read.rd_offset, u->read.rd_length); 992 994 nfsd_file_put(u->read.rd_nf); 993 - trace_nfsd_read_done(u->read.rd_rqstp, u->read.rd_fhp, 994 - u->read.rd_offset, u->read.rd_length); 995 + } 995 996 } 996 997 997 998 static __be32 ··· 2893 2892 2894 2893 rqstp->rq_lease_breaker = (void **)&cstate->clp; 2895 2894 2896 - trace_nfsd_compound(rqstp, args->tag, args->taglen, args->opcnt); 2895 + trace_nfsd_compound(rqstp, args->tag, args->taglen, args->client_opcnt); 2897 2896 while (!status && resp->opcnt < args->opcnt) { 2898 2897 op = &args->ops[resp->opcnt++]; 2898 + 2899 + if (unlikely(resp->opcnt == NFSD_MAX_OPS_PER_COMPOUND)) { 2900 + /* If there are still more operations to process, 2901 + * stop here and report NFS4ERR_RESOURCE. */ 2902 + if (cstate->minorversion == 0 && 2903 + args->client_opcnt > resp->opcnt) { 2904 + op->status = nfserr_resource; 2905 + goto encode_op; 2906 + } 2907 + } 2899 2908 2900 2909 /* 2901 2910 * The XDR decode routines may have pre-set op->status; ··· 2983 2972 status = op->status; 2984 2973 } 2985 2974 2986 - trace_nfsd_compound_status(args->opcnt, resp->opcnt, 2975 + trace_nfsd_compound_status(args->client_opcnt, resp->opcnt, 2987 2976 status, nfsd4_op_name(op->opnum)); 2988 2977 2989 2978 nfsd4_cstate_clear_replay(cstate);
+1
fs/nfsd/nfs4state.c
··· 3902 3902 ca->headerpadsz = 0; 3903 3903 ca->maxreq_sz = min_t(u32, ca->maxreq_sz, maxrpc); 3904 3904 ca->maxresp_sz = min_t(u32, ca->maxresp_sz, maxrpc); 3905 + ca->maxops = min_t(u32, ca->maxops, NFSD_MAX_OPS_PER_COMPOUND); 3905 3906 ca->maxresp_cached = min_t(u32, ca->maxresp_cached, 3906 3907 NFSD_SLOT_CACHE_SIZE + NFSD_MIN_HDR_SEQ_SZ); 3907 3908 ca->maxreqs = min_t(u32, ca->maxreqs, NFSD_MAX_SLOTS_PER_SESSION);
+14 -7
fs/nfsd/nfs4xdr.c
··· 2488 2488 2489 2489 if (xdr_stream_decode_u32(argp->xdr, &argp->minorversion) < 0) 2490 2490 return false; 2491 - if (xdr_stream_decode_u32(argp->xdr, &argp->opcnt) < 0) 2491 + if (xdr_stream_decode_u32(argp->xdr, &argp->client_opcnt) < 0) 2492 2492 return false; 2493 + argp->opcnt = min_t(u32, argp->client_opcnt, 2494 + NFSD_MAX_OPS_PER_COMPOUND); 2493 2495 2494 2496 if (argp->opcnt > ARRAY_SIZE(argp->iops)) { 2495 2497 argp->ops = vcalloc(argp->opcnt, sizeof(*argp->ops)); ··· 2630 2628 __be32 *p; 2631 2629 __be32 pathlen; 2632 2630 int pathlen_offset; 2633 - int strlen, count=0; 2634 2631 char *str, *end, *next; 2635 - 2636 - dprintk("nfsd4_encode_components(%s)\n", components); 2632 + int count = 0; 2637 2633 2638 2634 pathlen_offset = xdr->buf->len; 2639 2635 p = xdr_reserve_space(xdr, 4); ··· 2658 2658 for (; *end && (*end != sep); end++) 2659 2659 /* find sep or end of string */; 2660 2660 2661 - strlen = end - str; 2662 - if (strlen) { 2663 - if (xdr_stream_encode_opaque(xdr, str, strlen) < 0) 2661 + if (end > str) { 2662 + if (xdr_stream_encode_opaque(xdr, str, end - str) < 0) 2664 2663 return nfserr_resource; 2665 2664 count++; 2666 2665 } else ··· 2937 2938 2938 2939 typedef __be32(*nfsd4_enc_attr)(struct xdr_stream *xdr, 2939 2940 const struct nfsd4_fattr_args *args); 2941 + 2942 + static __be32 nfsd4_encode_fattr4__inval(struct xdr_stream *xdr, 2943 + const struct nfsd4_fattr_args *args) 2944 + { 2945 + return nfserr_inval; 2946 + } 2940 2947 2941 2948 static __be32 nfsd4_encode_fattr4__noop(struct xdr_stream *xdr, 2942 2949 const struct nfsd4_fattr_args *args) ··· 3565 3560 3566 3561 [FATTR4_MODE_UMASK] = nfsd4_encode_fattr4__noop, 3567 3562 [FATTR4_XATTR_SUPPORT] = nfsd4_encode_fattr4_xattr_support, 3563 + [FATTR4_TIME_DELEG_ACCESS] = nfsd4_encode_fattr4__inval, 3564 + [FATTR4_TIME_DELEG_MODIFY] = nfsd4_encode_fattr4__inval, 3568 3565 [FATTR4_OPEN_ARGUMENTS] = nfsd4_encode_fattr4_open_arguments, 3569 3566 }; 3570 3567
+3
fs/nfsd/nfsd.h
··· 57 57 __be32 err; /* 0, nfserr, or nfserr_eof */ 58 58 }; 59 59 60 + /* Maximum number of operations per session compound */ 61 + #define NFSD_MAX_OPS_PER_COMPOUND 200 62 + 60 63 struct nfsd_genl_rqstp { 61 64 struct sockaddr rq_daddr; 62 65 struct sockaddr rq_saddr;
+1
fs/nfsd/xdr4.h
··· 903 903 char * tag; 904 904 u32 taglen; 905 905 u32 minorversion; 906 + u32 client_opcnt; 906 907 u32 opcnt; 907 908 bool splice_ok; 908 909 struct nfsd4_op *ops;
+9 -7
fs/smb/client/cached_dir.c
··· 388 388 * lease. Release one here, and the second below. 389 389 */ 390 390 cfid->has_lease = false; 391 - kref_put(&cfid->refcount, smb2_close_cached_fid); 391 + close_cached_dir(cfid); 392 392 } 393 393 spin_unlock(&cfids->cfid_list_lock); 394 394 395 - kref_put(&cfid->refcount, smb2_close_cached_fid); 395 + close_cached_dir(cfid); 396 396 } else { 397 397 *ret_cfid = cfid; 398 398 atomic_inc(&tcon->num_remote_opens); ··· 438 438 439 439 static void 440 440 smb2_close_cached_fid(struct kref *ref) 441 + __releases(&cfid->cfids->cfid_list_lock) 441 442 { 442 443 struct cached_fid *cfid = container_of(ref, struct cached_fid, 443 444 refcount); 444 445 int rc; 445 446 446 - spin_lock(&cfid->cfids->cfid_list_lock); 447 + lockdep_assert_held(&cfid->cfids->cfid_list_lock); 448 + 447 449 if (cfid->on_list) { 448 450 list_del(&cfid->entry); 449 451 cfid->on_list = false; ··· 480 478 spin_lock(&cfid->cfids->cfid_list_lock); 481 479 if (cfid->has_lease) { 482 480 cfid->has_lease = false; 483 - kref_put(&cfid->refcount, smb2_close_cached_fid); 481 + close_cached_dir(cfid); 484 482 } 485 483 spin_unlock(&cfid->cfids->cfid_list_lock); 486 484 close_cached_dir(cfid); ··· 489 487 490 488 void close_cached_dir(struct cached_fid *cfid) 491 489 { 492 - kref_put(&cfid->refcount, smb2_close_cached_fid); 490 + kref_put_lock(&cfid->refcount, smb2_close_cached_fid, &cfid->cfids->cfid_list_lock); 493 491 } 494 492 495 493 /* ··· 598 596 599 597 WARN_ON(cfid->on_list); 600 598 601 - kref_put(&cfid->refcount, smb2_close_cached_fid); 599 + close_cached_dir(cfid); 602 600 cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cached_close); 603 601 } 604 602 ··· 764 762 * Drop the ref-count from above, either the lease-ref (if there 765 763 * was one) or the extra one acquired. 766 764 */ 767 - kref_put(&cfid->refcount, smb2_close_cached_fid); 765 + close_cached_dir(cfid); 768 766 } 769 767 queue_delayed_work(cfid_put_wq, &cfids->laundromat_work, 770 768 dir_cache_timeout * HZ);
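The rework funnels every put through close_cached_dir(), whose kref_put_lock() takes cfid_list_lock only when the count is actually about to hit zero; the release function then runs with the lock held and must drop it itself, which is what the new lockdep_assert_held() and __releases() annotations document. A generic kernel-idiom sketch of that pattern (struct obj and friends are illustrative):

	#include <linux/kref.h>
	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct obj {
		struct kref refcount;
		struct list_head entry;
		spinlock_t *list_lock;
	};

	/* Invoked with *list_lock held; the release path must unlock it. */
	static void obj_release(struct kref *ref)
	{
		struct obj *o = container_of(ref, struct obj, refcount);

		list_del(&o->entry);		/* safe: lock is held */
		spin_unlock(o->list_lock);
		kfree(o);
	}

	static void obj_put(struct obj *o)
	{
		/* The lock is taken only if this put is the final one. */
		kref_put_lock(&o->refcount, obj_release, o->list_lock);
	}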
+1 -1
fs/smb/client/cifsfs.c
··· 173 173 MODULE_PARM_DESC(enable_oplocks, "Enable or disable oplocks. Default: y/Y/1"); 174 174 175 175 module_param(enable_gcm_256, bool, 0644); 176 - MODULE_PARM_DESC(enable_gcm_256, "Enable requesting strongest (256 bit) GCM encryption. Default: y/Y/0"); 176 + MODULE_PARM_DESC(enable_gcm_256, "Enable requesting strongest (256 bit) GCM encryption. Default: y/Y/1"); 177 177 178 178 module_param(require_gcm_256, bool, 0644); 179 179 MODULE_PARM_DESC(require_gcm_256, "Require strongest (256 bit) GCM encryption. Default: n/N/0");
+2
fs/smb/client/cifsproto.h
··· 616 616 extern struct TCP_Server_Info * 617 617 cifs_find_tcp_session(struct smb3_fs_context *ctx); 618 618 619 + struct cifs_tcon *cifs_setup_ipc(struct cifs_ses *ses, bool seal); 620 + 619 621 void __cifs_put_smb_ses(struct cifs_ses *ses); 620 622 621 623 extern struct cifs_ses *
+19 -27
fs/smb/client/connect.c
··· 310 310 server->ssocket->flags); 311 311 sock_release(server->ssocket); 312 312 server->ssocket = NULL; 313 + } else if (cifs_rdma_enabled(server)) { 314 + smbd_destroy(server); 313 315 } 314 316 server->sequence_number = 0; 315 317 server->session_estab = false; ··· 339 337 list_del_init(&mid->qhead); 340 338 mid_execute_callback(mid); 341 339 release_mid(mid); 342 - } 343 - 344 - if (cifs_rdma_enabled(server)) { 345 - cifs_server_lock(server); 346 - smbd_destroy(server); 347 - cifs_server_unlock(server); 348 340 } 349 341 } 350 342 ··· 2011 2015 /** 2012 2016 * cifs_setup_ipc - helper to setup the IPC tcon for the session 2013 2017 * @ses: smb session to issue the request on 2014 - * @ctx: the superblock configuration context to use for building the 2015 - * new tree connection for the IPC (interprocess communication RPC) 2018 + * @seal: if encryption is requested 2016 2019 * 2017 2020 * A new IPC connection is made and stored in the session 2018 2021 * tcon_ipc. The IPC tcon has the same lifetime as the session. 2019 2022 */ 2020 - static int 2021 - cifs_setup_ipc(struct cifs_ses *ses, struct smb3_fs_context *ctx) 2023 + struct cifs_tcon *cifs_setup_ipc(struct cifs_ses *ses, bool seal) 2022 2024 { 2023 2025 int rc = 0, xid; 2024 2026 struct cifs_tcon *tcon; 2025 2027 char unc[SERVER_NAME_LENGTH + sizeof("//x/IPC$")] = {0}; 2026 - bool seal = false; 2027 2028 struct TCP_Server_Info *server = ses->server; 2028 2029 2029 2030 /* 2030 2031 * If the mount request that resulted in the creation of the 2031 2032 * session requires encryption, force IPC to be encrypted too. 2032 2033 */ 2033 - if (ctx->seal) { 2034 - if (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) 2035 - seal = true; 2036 - else { 2037 - cifs_server_dbg(VFS, 2038 - "IPC: server doesn't support encryption\n"); 2039 - return -EOPNOTSUPP; 2040 - } 2034 + if (seal && !(server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION)) { 2035 + cifs_server_dbg(VFS, "IPC: server doesn't support encryption\n"); 2036 + return ERR_PTR(-EOPNOTSUPP); 2041 2037 } 2042 2038 2043 2039 /* no need to setup directory caching on IPC share, so pass in false */ 2044 2040 tcon = tcon_info_alloc(false, netfs_trace_tcon_ref_new_ipc); 2045 2041 if (tcon == NULL) 2046 - return -ENOMEM; 2042 + return ERR_PTR(-ENOMEM); 2047 2043 2048 2044 spin_lock(&server->srv_lock); 2049 2045 scnprintf(unc, sizeof(unc), "\\\\%s\\IPC$", server->hostname); ··· 2045 2057 tcon->ses = ses; 2046 2058 tcon->ipc = true; 2047 2059 tcon->seal = seal; 2048 - rc = server->ops->tree_connect(xid, ses, unc, tcon, ctx->local_nls); 2060 + rc = server->ops->tree_connect(xid, ses, unc, tcon, ses->local_nls); 2049 2061 free_xid(xid); 2050 2062 2051 2063 if (rc) { 2052 - cifs_server_dbg(VFS, "failed to connect to IPC (rc=%d)\n", rc); 2064 + cifs_server_dbg(VFS | ONCE, "failed to connect to IPC (rc=%d)\n", rc); 2053 2065 tconInfoFree(tcon, netfs_trace_tcon_ref_free_ipc_fail); 2054 - goto out; 2066 + return ERR_PTR(rc); 2055 2067 } 2056 2068 2057 2069 cifs_dbg(FYI, "IPC tcon rc=%d ipc tid=0x%x\n", rc, tcon->tid); ··· 2059 2071 spin_lock(&tcon->tc_lock); 2060 2072 tcon->status = TID_GOOD; 2061 2073 spin_unlock(&tcon->tc_lock); 2062 - ses->tcon_ipc = tcon; 2063 - out: 2064 - return rc; 2074 + return tcon; 2065 2075 } 2066 2076 2067 2077 static struct cifs_ses * ··· 2333 2347 { 2334 2348 struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *)&server->dstaddr; 2335 2349 struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr; 2350 + struct cifs_tcon *ipc; 2336 2351 struct cifs_ses *ses; 2337 
2352 unsigned int xid; 2338 2353 int retries = 0; ··· 2512 2525 list_add(&ses->smb_ses_list, &server->smb_ses_list); 2513 2526 spin_unlock(&cifs_tcp_ses_lock); 2514 2527 2515 - cifs_setup_ipc(ses, ctx); 2528 + ipc = cifs_setup_ipc(ses, ctx->seal); 2529 + spin_lock(&cifs_tcp_ses_lock); 2530 + spin_lock(&ses->ses_lock); 2531 + ses->tcon_ipc = !IS_ERR(ipc) ? ipc : NULL; 2532 + spin_unlock(&ses->ses_lock); 2533 + spin_unlock(&cifs_tcp_ses_lock); 2516 2534 2517 2535 free_xid(xid); 2518 2536
+47 -8
fs/smb/client/dfs_cache.c
··· 1120 1120 return match; 1121 1121 } 1122 1122 1123 - static bool is_ses_good(struct cifs_ses *ses) 1123 + static bool is_ses_good(struct cifs_tcon *tcon, struct cifs_ses *ses) 1124 1124 { 1125 1125 struct TCP_Server_Info *server = ses->server; 1126 - struct cifs_tcon *tcon = ses->tcon_ipc; 1126 + struct cifs_tcon *ipc = NULL; 1127 1127 bool ret; 1128 1128 1129 + spin_lock(&cifs_tcp_ses_lock); 1129 1130 spin_lock(&ses->ses_lock); 1130 1131 spin_lock(&ses->chan_lock); 1132 + 1131 1133 ret = !cifs_chan_needs_reconnect(ses, server) && 1132 - ses->ses_status == SES_GOOD && 1133 - !tcon->need_reconnect; 1134 + ses->ses_status == SES_GOOD; 1135 + 1134 1136 spin_unlock(&ses->chan_lock); 1137 + 1138 + if (!ret) 1139 + goto out; 1140 + 1141 + if (likely(ses->tcon_ipc)) { 1142 + if (ses->tcon_ipc->need_reconnect) { 1143 + ret = false; 1144 + goto out; 1145 + } 1146 + } else { 1147 + spin_unlock(&ses->ses_lock); 1148 + spin_unlock(&cifs_tcp_ses_lock); 1149 + 1150 + ipc = cifs_setup_ipc(ses, tcon->seal); 1151 + 1152 + spin_lock(&cifs_tcp_ses_lock); 1153 + spin_lock(&ses->ses_lock); 1154 + if (!IS_ERR(ipc)) { 1155 + if (!ses->tcon_ipc) { 1156 + ses->tcon_ipc = ipc; 1157 + ipc = NULL; 1158 + } 1159 + } else { 1160 + ret = false; 1161 + ipc = NULL; 1162 + } 1163 + } 1164 + 1165 + out: 1135 1166 spin_unlock(&ses->ses_lock); 1167 + spin_unlock(&cifs_tcp_ses_lock); 1168 + if (ipc && server->ops->tree_disconnect) { 1169 + unsigned int xid = get_xid(); 1170 + 1171 + (void)server->ops->tree_disconnect(xid, ipc); 1172 + _free_xid(xid); 1173 + } 1174 + tconInfoFree(ipc, netfs_trace_tcon_ref_free_ipc); 1136 1175 return ret; 1137 1176 } 1138 1177 1139 1178 /* Refresh dfs referral of @ses */ 1140 - static void refresh_ses_referral(struct cifs_ses *ses) 1179 + static void refresh_ses_referral(struct cifs_tcon *tcon, struct cifs_ses *ses) 1141 1180 { 1142 1181 struct cache_entry *ce; 1143 1182 unsigned int xid; ··· 1192 1153 } 1193 1154 1194 1155 ses = CIFS_DFS_ROOT_SES(ses); 1195 - if (!is_ses_good(ses)) { 1156 + if (!is_ses_good(tcon, ses)) { 1196 1157 cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n", 1197 1158 __func__); 1198 1159 goto out; ··· 1280 1241 up_read(&htable_rw_lock); 1281 1242 1282 1243 ses = CIFS_DFS_ROOT_SES(ses); 1283 - if (!is_ses_good(ses)) { 1244 + if (!is_ses_good(tcon, ses)) { 1284 1245 cifs_dbg(FYI, "%s: skip cache refresh due to disconnected ipc\n", 1285 1246 __func__); 1286 1247 goto out; ··· 1348 1309 tcon = container_of(work, struct cifs_tcon, dfs_cache_work.work); 1349 1310 1350 1311 list_for_each_entry(ses, &tcon->dfs_ses_list, dlist) 1351 - refresh_ses_referral(ses); 1312 + refresh_ses_referral(tcon, ses); 1352 1313 refresh_tcon_referral(tcon, false); 1353 1314 1354 1315 queue_delayed_work(dfscache_wq, &tcon->dfs_cache_work,
+2
fs/smb/client/smb2inode.c
··· 1294 1294 smb2_to_name = cifs_convert_path_to_utf16(to_name, cifs_sb); 1295 1295 if (smb2_to_name == NULL) { 1296 1296 rc = -ENOMEM; 1297 + if (cfile) 1298 + cifsFileInfo_put(cfile); 1297 1299 goto smb2_rename_path; 1298 1300 } 1299 1301 in_iov.iov_base = smb2_to_name;
+2 -1
fs/smb/client/smb2ops.c
··· 2799 2799 struct cifs_fid fid; 2800 2800 int rc; 2801 2801 __le16 *utf16_path; 2802 - struct cached_fid *cfid = NULL; 2802 + struct cached_fid *cfid; 2803 2803 int retries = 0, cur_sleep = 1; 2804 2804 2805 2805 replay_again: 2806 2806 /* reinitialize for possible replay */ 2807 + cfid = NULL; 2807 2808 flags = CIFS_CP_CREATE_CLOSE_OP; 2808 2809 oplock = SMB2_OPLOCK_LEVEL_NONE; 2809 2810 server = cifs_pick_channel(ses);
+5 -2
fs/smb/client/smb2pdu.c
··· 4054 4054 4055 4055 smb_rsp = (struct smb2_change_notify_rsp *)rsp_iov.iov_base; 4056 4056 4057 - smb2_validate_iov(le16_to_cpu(smb_rsp->OutputBufferOffset), 4058 - le32_to_cpu(smb_rsp->OutputBufferLength), &rsp_iov, 4057 + rc = smb2_validate_iov(le16_to_cpu(smb_rsp->OutputBufferOffset), 4058 + le32_to_cpu(smb_rsp->OutputBufferLength), 4059 + &rsp_iov, 4059 4060 sizeof(struct file_notify_information)); 4061 + if (rc) 4062 + goto cnotify_exit; 4060 4063 4061 4064 *out_data = kmemdup((char *)smb_rsp + le16_to_cpu(smb_rsp->OutputBufferOffset), 4062 4065 le32_to_cpu(smb_rsp->OutputBufferLength), GFP_KERNEL);
+7 -1
fs/smb/server/transport_ipc.c
··· 263 263 264 264 static int handle_response(int type, void *payload, size_t sz) 265 265 { 266 - unsigned int handle = *(unsigned int *)payload; 266 + unsigned int handle; 267 267 struct ipc_msg_table_entry *entry; 268 268 int ret = 0; 269 + 270 + /* Prevent 4-byte read beyond declared payload size */ 271 + if (sz < sizeof(unsigned int)) 272 + return -EINVAL; 273 + 274 + handle = *(unsigned int *)payload; 269 275 270 276 ipc_update_last_active(); 271 277 down_read(&ipc_msg_table_lock);
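The ksmbd fix is the classic "validate the declared length before a typed read": the handler used to pull a 4-byte handle out of the payload before checking that the payload was even that big. A self-contained sketch:

	#include <errno.h>
	#include <stddef.h>
	#include <string.h>

	static int read_handle(const void *payload, size_t sz,
			       unsigned int *handle)
	{
		if (sz < sizeof(*handle))
			return -EINVAL;	/* would read past the declared payload */
		memcpy(handle, payload, sizeof(*handle));
		return 0;
	}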
+59 -12
fs/smb/server/transport_rdma.c
··· 418 418 419 419 sc->ib.dev = sc->rdma.cm_id->device; 420 420 421 - INIT_WORK(&sc->recv_io.posted.refill_work, 422 - smb_direct_post_recv_credits); 423 - INIT_WORK(&sc->idle.immediate_work, smb_direct_send_immediate_work); 424 421 INIT_DELAYED_WORK(&sc->idle.timer_work, smb_direct_idle_connection_timer); 425 422 426 423 conn = ksmbd_conn_alloc(); ··· 466 469 disable_delayed_work_sync(&sc->idle.timer_work); 467 470 disable_work_sync(&sc->idle.immediate_work); 468 471 472 + if (sc->rdma.cm_id) 473 + rdma_lock_handler(sc->rdma.cm_id); 474 + 469 475 if (sc->ib.qp) { 470 476 ib_drain_qp(sc->ib.qp); 471 477 sc->ib.qp = NULL; ··· 497 497 ib_free_cq(sc->ib.recv_cq); 498 498 if (sc->ib.pd) 499 499 ib_dealloc_pd(sc->ib.pd); 500 - if (sc->rdma.cm_id) 500 + if (sc->rdma.cm_id) { 501 + rdma_unlock_handler(sc->rdma.cm_id); 501 502 rdma_destroy_id(sc->rdma.cm_id); 503 + } 502 504 503 505 smb_direct_destroy_pools(sc); 504 506 ksmbd_conn_free(KSMBD_TRANS(t)->conn); ··· 1729 1727 } 1730 1728 case RDMA_CM_EVENT_DEVICE_REMOVAL: 1731 1729 case RDMA_CM_EVENT_DISCONNECTED: { 1732 - ib_drain_qp(sc->ib.qp); 1733 - 1734 1730 sc->status = SMBDIRECT_SOCKET_DISCONNECTED; 1735 1731 smb_direct_disconnect_rdma_work(&sc->disconnect_work); 1732 + if (sc->ib.qp) 1733 + ib_drain_qp(sc->ib.qp); 1736 1734 break; 1737 1735 } 1738 1736 case RDMA_CM_EVENT_CONNECT_ERROR: { ··· 1906 1904 goto out_err; 1907 1905 } 1908 1906 1909 - smb_direct_post_recv_credits(&sc->recv_io.posted.refill_work); 1910 1907 return 0; 1911 1908 out_err: 1912 1909 put_recvmsg(sc, recvmsg); ··· 2250 2249 return -ECONNABORTED; 2251 2250 2252 2251 ret = smb_direct_check_recvmsg(recvmsg); 2253 - if (ret == -ECONNABORTED) 2254 - goto out; 2252 + if (ret) 2253 + goto put; 2255 2254 2256 2255 req = (struct smbdirect_negotiate_req *)recvmsg->packet; 2257 2256 sp->max_recv_size = min_t(int, sp->max_recv_size, ··· 2266 2265 sc->recv_io.credits.target = min_t(u16, sc->recv_io.credits.target, sp->recv_credit_max); 2267 2266 sc->recv_io.credits.target = max_t(u16, sc->recv_io.credits.target, 1); 2268 2267 2269 - ret = smb_direct_send_negotiate_response(sc, ret); 2270 - out: 2268 + put: 2271 2269 spin_lock_irqsave(&sc->recv_io.reassembly.lock, flags); 2272 2270 sc->recv_io.reassembly.queue_length--; 2273 2271 list_del(&recvmsg->list); 2274 2272 spin_unlock_irqrestore(&sc->recv_io.reassembly.lock, flags); 2275 2273 put_recvmsg(sc, recvmsg); 2274 + 2275 + if (ret == -ECONNABORTED) 2276 + return ret; 2277 + 2278 + if (ret) 2279 + goto respond; 2280 + 2281 + /* 2282 + * We negotiated with success, so we need to refill the recv queue. 2283 + * We do that with sc->idle.immediate_work still being disabled 2284 + * via smbdirect_socket_init(), so that queue_work(sc->workqueue, 2285 + * &sc->idle.immediate_work) in smb_direct_post_recv_credits() 2286 + * is a no-op. 2287 + * 2288 + * The message that grants the credits to the client is 2289 + * the negotiate response. 
2290 + */ 2291 + INIT_WORK(&sc->recv_io.posted.refill_work, smb_direct_post_recv_credits); 2292 + smb_direct_post_recv_credits(&sc->recv_io.posted.refill_work); 2293 + if (unlikely(sc->first_error)) 2294 + return sc->first_error; 2295 + INIT_WORK(&sc->idle.immediate_work, smb_direct_send_immediate_work); 2296 + 2297 + respond: 2298 + ret = smb_direct_send_negotiate_response(sc, ret); 2276 2299 2277 2300 return ret; 2278 2301 } ··· 2606 2581 } 2607 2582 } 2608 2583 2609 - bool ksmbd_rdma_capable_netdev(struct net_device *netdev) 2584 + static bool ksmbd_find_rdma_capable_netdev(struct net_device *netdev) 2610 2585 { 2611 2586 struct smb_direct_device *smb_dev; 2612 2587 int i; ··· 2646 2621 netdev->name, str_true_false(rdma_capable)); 2647 2622 2648 2623 return rdma_capable; 2624 + } 2625 + 2626 + bool ksmbd_rdma_capable_netdev(struct net_device *netdev) 2627 + { 2628 + struct net_device *lower_dev; 2629 + struct list_head *iter; 2630 + 2631 + if (ksmbd_find_rdma_capable_netdev(netdev)) 2632 + return true; 2633 + 2634 + /* check if netdev is bridge or VLAN */ 2635 + if (netif_is_bridge_master(netdev) || 2636 + netdev->priv_flags & IFF_802_1Q_VLAN) 2637 + netdev_for_each_lower_dev(netdev, lower_dev, iter) 2638 + if (ksmbd_find_rdma_capable_netdev(lower_dev)) 2639 + return true; 2640 + 2641 + /* check if netdev is IPoIB safely without layer violation */ 2642 + if (netdev->type == ARPHRD_INFINIBAND) 2643 + return true; 2644 + 2645 + return false; 2649 2646 } 2650 2647 2651 2648 static const struct ksmbd_transport_ops ksmbd_smb_direct_transport_ops = {
+6
fs/xfs/libxfs/xfs_rtgroup.h
··· 50 50 uint8_t *rtg_rsum_cache;
51 51 struct xfs_open_zone *rtg_open_zone;
52 52 };
53 +
54 + /*
55 + * Count of outstanding GC operations for zoned XFS. Any RTG with a
56 + * non-zero rtg_gccount will not be picked as a new GC victim.
57 + */
58 + atomic_t rtg_gccount;
53 59 };
+3 -1
fs/xfs/xfs_discard.c
··· 726 726 break; 727 727 } 728 728 729 - if (!tr.queued) 729 + if (!tr.queued) { 730 + kfree(tr.extents); 730 731 break; 732 + } 731 733 732 734 /* 733 735 * We hand the extent list to the discard function here so the
+69 -13
fs/xfs/xfs_iomap.c
··· 1091 1091 }; 1092 1092 #endif /* CONFIG_XFS_RT */ 1093 1093 1094 + #ifdef DEBUG 1095 + static void 1096 + xfs_check_atomic_cow_conversion( 1097 + struct xfs_inode *ip, 1098 + xfs_fileoff_t offset_fsb, 1099 + xfs_filblks_t count_fsb, 1100 + const struct xfs_bmbt_irec *cmap) 1101 + { 1102 + struct xfs_iext_cursor icur; 1103 + struct xfs_bmbt_irec cmap2 = { }; 1104 + 1105 + if (xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb, &icur, &cmap2)) 1106 + xfs_trim_extent(&cmap2, offset_fsb, count_fsb); 1107 + 1108 + ASSERT(cmap2.br_startoff == cmap->br_startoff); 1109 + ASSERT(cmap2.br_blockcount == cmap->br_blockcount); 1110 + ASSERT(cmap2.br_startblock == cmap->br_startblock); 1111 + ASSERT(cmap2.br_state == cmap->br_state); 1112 + } 1113 + #else 1114 + # define xfs_check_atomic_cow_conversion(...) ((void)0) 1115 + #endif 1116 + 1094 1117 static int 1095 1118 xfs_atomic_write_cow_iomap_begin( 1096 1119 struct inode *inode, ··· 1125 1102 { 1126 1103 struct xfs_inode *ip = XFS_I(inode); 1127 1104 struct xfs_mount *mp = ip->i_mount; 1128 - const xfs_fileoff_t offset_fsb = XFS_B_TO_FSBT(mp, offset); 1129 - xfs_fileoff_t end_fsb = xfs_iomap_end_fsb(mp, offset, length); 1130 - xfs_filblks_t count_fsb = end_fsb - offset_fsb; 1105 + const xfs_fileoff_t offset_fsb = XFS_B_TO_FSBT(mp, offset); 1106 + const xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + length); 1107 + const xfs_filblks_t count_fsb = end_fsb - offset_fsb; 1108 + xfs_filblks_t hole_count_fsb; 1131 1109 int nmaps = 1; 1132 1110 xfs_filblks_t resaligned; 1133 1111 struct xfs_bmbt_irec cmap; ··· 1154 1130 return -EAGAIN; 1155 1131 1156 1132 trace_xfs_iomap_atomic_write_cow(ip, offset, length); 1157 - 1133 + retry: 1158 1134 xfs_ilock(ip, XFS_ILOCK_EXCL); 1159 1135 1160 1136 if (!ip->i_cowfp) { ··· 1165 1141 if (!xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb, &icur, &cmap)) 1166 1142 cmap.br_startoff = end_fsb; 1167 1143 if (cmap.br_startoff <= offset_fsb) { 1144 + if (isnullstartblock(cmap.br_startblock)) 1145 + goto convert_delay; 1146 + 1147 + /* 1148 + * cmap could extend outside the write range due to previous 1149 + * speculative preallocations. We must trim cmap to the write 1150 + * range because the cow fork treats written mappings to mean 1151 + * "write in progress". 1152 + */ 1168 1153 xfs_trim_extent(&cmap, offset_fsb, count_fsb); 1169 1154 goto found; 1170 1155 } 1171 1156 1172 - end_fsb = cmap.br_startoff; 1173 - count_fsb = end_fsb - offset_fsb; 1157 + hole_count_fsb = cmap.br_startoff - offset_fsb; 1174 1158 1175 - resaligned = xfs_aligned_fsb_count(offset_fsb, count_fsb, 1159 + resaligned = xfs_aligned_fsb_count(offset_fsb, hole_count_fsb, 1176 1160 xfs_get_cowextsz_hint(ip)); 1177 1161 xfs_iunlock(ip, XFS_ILOCK_EXCL); 1178 1162 ··· 1201 1169 if (!xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb, &icur, &cmap)) 1202 1170 cmap.br_startoff = end_fsb; 1203 1171 if (cmap.br_startoff <= offset_fsb) { 1204 - xfs_trim_extent(&cmap, offset_fsb, count_fsb); 1205 1172 xfs_trans_cancel(tp); 1173 + if (isnullstartblock(cmap.br_startblock)) 1174 + goto convert_delay; 1175 + xfs_trim_extent(&cmap, offset_fsb, count_fsb); 1206 1176 goto found; 1207 1177 } 1208 1178 ··· 1216 1182 * atomic writes to that same range will be aligned (and don't require 1217 1183 * this COW-based method). 
1218 1184 */ 1219 - error = xfs_bmapi_write(tp, ip, offset_fsb, count_fsb, 1185 + error = xfs_bmapi_write(tp, ip, offset_fsb, hole_count_fsb, 1220 1186 XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC | 1221 1187 XFS_BMAPI_EXTSZALIGN, 0, &cmap, &nmaps); 1222 1188 if (error) { ··· 1229 1195 if (error) 1230 1196 goto out_unlock; 1231 1197 1198 + /* 1199 + * cmap could map more blocks than the range we passed into bmapi_write 1200 + * because of EXTSZALIGN or adjacent pre-existing unwritten mappings 1201 + * that were merged. Trim cmap to the original write range so that we 1202 + * don't convert more than we were asked to do for this write. 1203 + */ 1204 + xfs_trim_extent(&cmap, offset_fsb, count_fsb); 1205 + 1232 1206 found: 1233 1207 if (cmap.br_state != XFS_EXT_NORM) { 1234 - error = xfs_reflink_convert_cow_locked(ip, offset_fsb, 1235 - count_fsb); 1208 + error = xfs_reflink_convert_cow_locked(ip, cmap.br_startoff, 1209 + cmap.br_blockcount); 1236 1210 if (error) 1237 1211 goto out_unlock; 1238 1212 cmap.br_state = XFS_EXT_NORM; 1213 + xfs_check_atomic_cow_conversion(ip, offset_fsb, count_fsb, 1214 + &cmap); 1239 1215 } 1240 1216 1241 - length = XFS_FSB_TO_B(mp, cmap.br_startoff + cmap.br_blockcount); 1242 - trace_xfs_iomap_found(ip, offset, length - offset, XFS_COW_FORK, &cmap); 1217 + trace_xfs_iomap_found(ip, offset, length, XFS_COW_FORK, &cmap); 1243 1218 seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED); 1244 1219 xfs_iunlock(ip, XFS_ILOCK_EXCL); 1245 1220 return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, IOMAP_F_SHARED, seq); 1246 1221 1222 + convert_delay: 1223 + xfs_iunlock(ip, XFS_ILOCK_EXCL); 1224 + error = xfs_bmapi_convert_delalloc(ip, XFS_COW_FORK, offset, iomap, 1225 + NULL); 1226 + if (error) 1227 + return error; 1228 + 1229 + /* 1230 + * Try the lookup again, because the delalloc conversion might have 1231 + * turned the COW mapping into unwritten, but we need it to be in 1232 + * written state. 1233 + */ 1234 + goto retry; 1247 1235 out_unlock: 1248 1236 xfs_iunlock(ip, XFS_ILOCK_EXCL); 1249 1237 return error;
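xfs_check_atomic_cow_conversion() uses a common kernel idiom worth calling out: the checker is a real function only under #ifdef DEBUG and collapses to ((void)0) otherwise, so call sites need no conditional compilation. A standalone sketch (check_invariant is an illustrative name):

	#include <assert.h>

	#ifdef DEBUG
	static void check_invariant(long seen, long expected)
	{
		assert(seen == expected);
	}
	#else
	/* The arguments vanish entirely, so they must be free of side effects. */
	# define check_invariant(...)	((void)0)
	#endif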
+12 -2
fs/xfs/xfs_zone_alloc.c
··· 246 246 * If a data write raced with this GC write, keep the existing data in 247 247 * the data fork, mark our newly written GC extent as reclaimable, then 248 248 * move on to the next extent. 249 + * 250 + * Note that this can also happen when racing with operations that do 251 + * not actually invalidate the data, but just move it to a different 252 + * inode (XFS_IOC_EXCHANGE_RANGE), or to a different offset inside the 253 + * inode (FALLOC_FL_COLLAPSE_RANGE / FALLOC_FL_INSERT_RANGE). If the 254 + * data was just moved around, GC fails to free the zone, but the zone 255 + * becomes a GC candidate again as soon as all previous GC I/O has 256 + * finished and these blocks will be moved out eventually. 249 257 */ 250 258 if (old_startblock != NULLFSBLOCK && 251 259 old_startblock != data.br_startblock) ··· 615 607 lockdep_assert_held(&zi->zi_open_zones_lock); 616 608 617 609 list_for_each_entry_reverse(oz, &zi->zi_open_zones, oz_entry) 618 - if (xfs_try_use_zone(zi, file_hint, oz, false)) 610 + if (xfs_try_use_zone(zi, file_hint, oz, XFS_ZONE_ALLOC_OK)) 619 611 return oz; 620 612 621 613 cond_resched_lock(&zi->zi_open_zones_lock); ··· 1249 1241 1250 1242 while ((rtg = xfs_rtgroup_next(mp, rtg))) { 1251 1243 error = xfs_init_zone(&iz, rtg, NULL); 1252 - if (error) 1244 + if (error) { 1245 + xfs_rtgroup_rele(rtg); 1253 1246 goto out_free_zone_info; 1247 + } 1254 1248 } 1255 1249 } 1256 1250
+27
fs/xfs/xfs_zone_gc.c
··· 114 114 /* Open Zone being written to */ 115 115 struct xfs_open_zone *oz; 116 116 117 + struct xfs_rtgroup *victim_rtg; 118 + 117 119 /* Bio used for reads and writes, including the bvec used by it */ 118 120 struct bio_vec bv; 119 121 struct bio bio; /* must be last */ ··· 266 264 iter->rec_count = 0; 267 265 iter->rec_idx = 0; 268 266 iter->victim_rtg = victim_rtg; 267 + atomic_inc(&victim_rtg->rtg_gccount); 269 268 } 270 269 271 270 /* ··· 365 362 366 363 return 0; 367 364 done: 365 + atomic_dec(&iter->victim_rtg->rtg_gccount); 368 366 xfs_rtgroup_rele(iter->victim_rtg); 369 367 iter->victim_rtg = NULL; 370 368 return 0; ··· 454 450 455 451 if (!rtg) 456 452 continue; 453 + 454 + /* 455 + * If the zone is already undergoing GC, don't pick it again. 456 + * 457 + * This prevents us from picking one of the zones for which we 458 + * already submitted GC I/O, but for which the remapping hasn't 459 + * concluded yet. This won't cause data corruption, but 460 + * increases write amplification and slows down GC, so this is 461 + * a bad thing. 462 + */ 463 + if (atomic_read(&rtg->rtg_gccount)) { 464 + xfs_rtgroup_rele(rtg); 465 + continue; 466 + } 457 467 458 468 /* skip zones that are just waiting for a reset */ 459 469 if (rtg_rmap(rtg)->i_used_blocks == 0 || ··· 706 688 chunk->scratch = &data->scratch[data->scratch_idx]; 707 689 chunk->data = data; 708 690 chunk->oz = oz; 691 + chunk->victim_rtg = iter->victim_rtg; 692 + atomic_inc(&chunk->victim_rtg->rtg_group.xg_active_ref); 693 + atomic_inc(&chunk->victim_rtg->rtg_gccount); 709 694 710 695 bio->bi_iter.bi_sector = xfs_rtb_to_daddr(mp, chunk->old_startblock); 711 696 bio->bi_end_io = xfs_zone_gc_end_io; ··· 731 710 xfs_zone_gc_free_chunk( 732 711 struct xfs_gc_bio *chunk) 733 712 { 713 + atomic_dec(&chunk->victim_rtg->rtg_gccount); 714 + xfs_rtgroup_rele(chunk->victim_rtg); 734 715 list_del(&chunk->entry); 735 716 xfs_open_zone_put(chunk->oz); 736 717 xfs_irele(chunk->ip); ··· 792 769 split_chunk->new_daddr = chunk->new_daddr; 793 770 split_chunk->oz = chunk->oz; 794 771 atomic_inc(&chunk->oz->oz_ref); 772 + 773 + split_chunk->victim_rtg = chunk->victim_rtg; 774 + atomic_inc(&chunk->victim_rtg->rtg_group.xg_active_ref); 775 + atomic_inc(&chunk->victim_rtg->rtg_gccount); 795 776 796 777 chunk->offset += split_len; 797 778 chunk->len -= split_len;
+1 -1
include/asm-generic/vmlinux.lds.h
··· 832 832 833 833 /* Required sections not related to debugging. */ 834 834 #define ELF_DETAILS \ 835 - .modinfo : { *(.modinfo) } \ 835 + .modinfo : { *(.modinfo) . = ALIGN(8); } \ 836 836 .comment 0 : { *(.comment) } \ 837 837 .symtab 0 : { *(.symtab) } \ 838 838 .strtab 0 : { *(.strtab) } \
+1 -1
include/drm/Makefile
··· 11 11 quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@) 12 12 cmd_hdrtest = \ 13 13 $(CC) $(c_flags) -fsyntax-only -x c /dev/null -include $< -include $<; \ 14 - PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \ 14 + PYTHONDONTWRITEBYTECODE=1 $(PYTHON3) $(KERNELDOC) -none $(if $(CONFIG_WERROR)$(CONFIG_DRM_WERROR),-Werror) $<; \ 15 15 touch $@ 16 16 17 17 $(obj)/%.hdrtest: $(src)/%.h FORCE
+6 -5
include/linux/blk_types.h
··· 341 341 /* write the zero filled sector many times */ 342 342 REQ_OP_WRITE_ZEROES = (__force blk_opf_t)9, 343 343 /* Open a zone */ 344 - REQ_OP_ZONE_OPEN = (__force blk_opf_t)10, 344 + REQ_OP_ZONE_OPEN = (__force blk_opf_t)11, 345 345 /* Close a zone */ 346 - REQ_OP_ZONE_CLOSE = (__force blk_opf_t)11, 346 + REQ_OP_ZONE_CLOSE = (__force blk_opf_t)13, 347 347 /* Transition a zone to full */ 348 - REQ_OP_ZONE_FINISH = (__force blk_opf_t)13, 348 + REQ_OP_ZONE_FINISH = (__force blk_opf_t)15, 349 349 /* reset a zone write pointer */ 350 - REQ_OP_ZONE_RESET = (__force blk_opf_t)15, 350 + REQ_OP_ZONE_RESET = (__force blk_opf_t)17, 351 351 /* reset all the zone present on the device */ 352 - REQ_OP_ZONE_RESET_ALL = (__force blk_opf_t)17, 352 + REQ_OP_ZONE_RESET_ALL = (__force blk_opf_t)19, 353 353 354 354 /* Driver private requests */ 355 355 REQ_OP_DRV_IN = (__force blk_opf_t)34, ··· 478 478 { 479 479 switch (op & REQ_OP_MASK) { 480 480 case REQ_OP_ZONE_RESET: 481 + case REQ_OP_ZONE_RESET_ALL: 481 482 case REQ_OP_ZONE_OPEN: 482 483 case REQ_OP_ZONE_CLOSE: 483 484 case REQ_OP_ZONE_FINISH:
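The renumbering looks arbitrary until you recall that the block layer classifies an operation as modifying the device purely from the low bit of its opcode; giving every zone-management op an odd value makes that check return true for them. A sketch of the classification (this matches the shape of the in-kernel op_is_write() helper, as far as I know):

	#include <stdbool.h>

	typedef unsigned int blk_opf_t;

	static inline bool op_is_write(blk_opf_t op)
	{
		return (op & 1) != 0;	/* odd opcodes modify the device */
	}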
+8 -3
include/linux/compiler_types.h
··· 250 250 /* 251 251 * GCC does not warn about unused static inline functions for -Wunused-function. 252 252 * Suppress the warning in clang as well by using __maybe_unused, but enable it 253 - * for W=1 build. This will allow clang to find unused functions. Remove the 254 - * __inline_maybe_unused entirely after fixing most of -Wunused-function warnings. 253 + * for W=2 build. This will allow clang to find unused functions. 255 254 */ 256 - #ifdef KBUILD_EXTRA_WARN1 255 + #ifdef KBUILD_EXTRA_WARN2 257 256 #define __inline_maybe_unused 258 257 #else 259 258 #define __inline_maybe_unused __maybe_unused ··· 458 459 # define __nocfi __attribute__((__no_sanitize__("kcfi"))) 459 460 #else 460 461 # define __nocfi 462 + #endif 463 + 464 + #if defined(CONFIG_ARCH_USES_CFI_GENERIC_LLVM_PASS) 465 + # define __nocfi_generic __nocfi 466 + #else 467 + # define __nocfi_generic 461 468 #endif 462 469 463 470 /*
+2
include/linux/fbcon.h
··· 18 18 void fbcon_resumed(struct fb_info *info); 19 19 int fbcon_mode_deleted(struct fb_info *info, 20 20 struct fb_videomode *mode); 21 + void fbcon_delete_modelist(struct list_head *head); 21 22 void fbcon_new_modelist(struct fb_info *info); 22 23 void fbcon_get_requirement(struct fb_info *info, 23 24 struct fb_blit_caps *caps); ··· 39 38 static inline void fbcon_resumed(struct fb_info *info) {} 40 39 static inline int fbcon_mode_deleted(struct fb_info *info, 41 40 struct fb_videomode *mode) { return 0; } 41 + static inline void fbcon_delete_modelist(struct list_head *head) {} 42 42 static inline void fbcon_new_modelist(struct fb_info *info) {} 43 43 static inline void fbcon_get_requirement(struct fb_info *info, 44 44 struct fb_blit_caps *caps) {}
+12
include/linux/net/intel/libie/fwlog.h
··· 78 78 ); 79 79 }; 80 80 81 + #if IS_ENABLED(CONFIG_LIBIE_FWLOG) 81 82 int libie_fwlog_init(struct libie_fwlog *fwlog, struct libie_fwlog_api *api); 82 83 void libie_fwlog_deinit(struct libie_fwlog *fwlog); 83 84 void libie_fwlog_reregister(struct libie_fwlog *fwlog); 84 85 void libie_get_fwlog_data(struct libie_fwlog *fwlog, u8 *buf, u16 len); 86 + #else 87 + static inline int libie_fwlog_init(struct libie_fwlog *fwlog, 88 + struct libie_fwlog_api *api) 89 + { 90 + return -EOPNOTSUPP; 91 + } 92 + static inline void libie_fwlog_deinit(struct libie_fwlog *fwlog) { } 93 + static inline void libie_fwlog_reregister(struct libie_fwlog *fwlog) { } 94 + static inline void libie_get_fwlog_data(struct libie_fwlog *fwlog, u8 *buf, 95 + u16 len) { } 96 + #endif /* CONFIG_LIBIE_FWLOG */ 85 97 #endif /* _LIBIE_FWLOG_H_ */
-1
include/linux/platform_data/x86/int3472.h
··· 100 100 struct regulator_consumer_supply supply_map[GPIO_REGULATOR_SUPPLY_MAP_COUNT * 2]; 101 101 char supply_name_upper[GPIO_SUPPLY_NAME_LENGTH]; 102 102 char regulator_name[GPIO_REGULATOR_NAME_LENGTH]; 103 - struct gpio_desc *ena_gpio; 104 103 struct regulator_dev *rdev; 105 104 struct regulator_desc rdesc; 106 105 };
+10 -3
include/linux/property.h
··· 355 355 356 356 /** 357 357 * struct software_node_ref_args - Reference property with additional arguments 358 - * @node: Reference to a software node 358 + * @swnode: Reference to a software node 359 + * @fwnode: Alternative reference to a firmware node handle 359 360 * @nargs: Number of elements in @args array 360 361 * @args: Integer arguments 361 362 */ 362 363 struct software_node_ref_args { 363 - const struct software_node *node; 364 + const struct software_node *swnode; 365 + struct fwnode_handle *fwnode; 364 366 unsigned int nargs; 365 367 u64 args[NR_FWNODE_REFERENCE_ARGS]; 366 368 }; 367 369 368 370 #define SOFTWARE_NODE_REFERENCE(_ref_, ...) \ 369 371 (const struct software_node_ref_args) { \ 370 - .node = _ref_, \ 372 + .swnode = _Generic(_ref_, \ 373 + const struct software_node *: _ref_, \ 374 + default: NULL), \ 375 + .fwnode = _Generic(_ref_, \ 376 + struct fwnode_handle *: _ref_, \ 377 + default: NULL), \ 371 378 .nargs = COUNT_ARGS(__VA_ARGS__), \ 372 379 .args = { __VA_ARGS__ }, \ 373 380 }
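The updated SOFTWARE_NODE_REFERENCE() leans on C11 _Generic selection: the same macro argument initializes either the swnode or the fwnode member, with the non-matching arm folding to NULL at compile time. A self-contained illustration of the dispatch (struct ref and MAKE_REF are illustrative, not kernel API):

	#include <stdio.h>

	struct ref {
		const int  *as_int;
		const char *as_str;
	};

	/* Each field takes the argument only when the type matches exactly;
	 * otherwise the default arm yields NULL. */
	#define MAKE_REF(x) ((struct ref){				\
		.as_int = _Generic((x), const int *: (x), default: NULL), \
		.as_str = _Generic((x), const char *: (x), default: NULL), \
	})

	int main(void)
	{
		static const int answer = 42;
		struct ref a = MAKE_REF(&answer);
		struct ref b = MAKE_REF((const char *)"gpio0");

		printf("%d %s\n", a.as_int ? *a.as_int : -1,
		       b.as_str ? b.as_str : "(null)");
		return 0;
	}

Note that _Generic matches types exactly, which is why the string literal is cast to const char * before dispatch.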
+1 -1
include/linux/regmap.h
··· 1643 1643 * @status_invert: Inverted status register: cleared bits are active interrupts.
1644 1644 * @status_is_level: Status register is actual signal level: XOR status
1645 1645 * register with previous value to get active interrupts.
1646 - * @wake_invert: Inverted wake register: cleared bits are wake enabled.
1646 + * @wake_invert: Inverted wake register: cleared bits are wake disabled.
1647 1647 * @type_in_mask: Use the mask registers for controlling irq type. Use this if
1648 1648 * the hardware provides separate bits for rising/falling edge
1649 1649 * or low/high level interrupts and they should be combined into
+2 -2
include/linux/sched.h
··· 2407 2407 * be defined in kernel/sched/core.c. 2408 2408 */ 2409 2409 #ifndef INSTANTIATE_EXPORTED_MIGRATE_DISABLE 2410 - static inline void migrate_disable(void) 2410 + static __always_inline void migrate_disable(void) 2411 2411 { 2412 2412 __migrate_disable(); 2413 2413 } 2414 2414 2415 - static inline void migrate_enable(void) 2415 + static __always_inline void migrate_enable(void) 2416 2416 { 2417 2417 __migrate_enable(); 2418 2418 }
+2 -1
include/linux/virtio_net.h
··· 401 401 if (!tnl_hdr_negotiated) 402 402 return -EINVAL; 403 403 404 - vhdr->hash_hdr.hash_value = 0; 404 + vhdr->hash_hdr.hash_value_lo = 0; 405 + vhdr->hash_hdr.hash_value_hi = 0; 405 406 vhdr->hash_hdr.hash_report = 0; 406 407 vhdr->hash_hdr.padding = 0; 407 408
+1
include/net/bluetooth/hci.h
··· 434 434 HCI_USER_CHANNEL, 435 435 HCI_EXT_CONFIGURED, 436 436 HCI_LE_ADV, 437 + HCI_LE_ADV_0, 437 438 HCI_LE_PER_ADV, 438 439 HCI_LE_SCAN, 439 440 HCI_SSP_ENABLED,
+1
include/net/bluetooth/hci_core.h
··· 244 244 bool enabled; 245 245 bool pending; 246 246 bool periodic; 247 + bool periodic_enabled; 247 248 __u8 mesh; 248 249 __u8 instance; 249 250 __u8 handle;
+2 -2
include/net/bluetooth/l2cap.h
··· 38 38 #define L2CAP_DEFAULT_TX_WINDOW 63 39 39 #define L2CAP_DEFAULT_EXT_WINDOW 0x3FFF 40 40 #define L2CAP_DEFAULT_MAX_TX 3 41 - #define L2CAP_DEFAULT_RETRANS_TO 2 /* seconds */ 42 - #define L2CAP_DEFAULT_MONITOR_TO 12 /* seconds */ 41 + #define L2CAP_DEFAULT_RETRANS_TO 2000 /* 2 seconds */ 42 + #define L2CAP_DEFAULT_MONITOR_TO 12000 /* 12 seconds */ 43 43 #define L2CAP_DEFAULT_MAX_PDU_SIZE 1492 /* Sized for AMP packet */ 44 44 #define L2CAP_DEFAULT_ACK_TO 200 45 45 #define L2CAP_DEFAULT_MAX_SDU_SIZE 0xFFFF
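The constant change only makes sense together with the comment change: these timeouts are consumed in milliseconds, so the old second-denominated values armed the ERTM retransmission and monitor timers a thousand times too short. A hedged sketch of the consumer side (illustrative only; the real ERTM code arms delayed works rather than a raw timer_list):

	#include <linux/jiffies.h>
	#include <linux/timer.h>

	static void arm_retrans_timer(struct timer_list *t)
	{
		/* The ms constant must meet a ms-to-jiffies conversion. */
		mod_timer(t, jiffies +
			     msecs_to_jiffies(L2CAP_DEFAULT_RETRANS_TO));
	}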
+2 -2
include/net/bluetooth/mgmt.h
··· 780 780 __u8 ad_type; 781 781 __u8 offset; 782 782 __u8 length; 783 - __u8 value[31]; 783 + __u8 value[HCI_MAX_AD_LENGTH]; 784 784 } __packed; 785 785 786 786 #define MGMT_OP_ADD_ADV_PATTERNS_MONITOR 0x0052 ··· 853 853 __le16 window; 854 854 __le16 period; 855 855 __u8 num_ad_types; 856 - __u8 ad_types[]; 856 + __u8 ad_types[] __counted_by(num_ad_types); 857 857 } __packed; 858 858 #define MGMT_SET_MESH_RECEIVER_SIZE 6 859 859
+78
include/net/cfg80211.h
··· 6435 6435 * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can
6436 6436 * use just cancel_work() instead of cancel_work_sync(), it requires
6437 6437 * being in a section protected by wiphy_lock().
6438 + *
6439 + * Note that these are scheduled with a timer whose accuracy
6440 + * decreases the further in the future the timer fires. Use
6441 + * wiphy_hrtimer_work_queue() if the timer must not be late by more
6442 + * than approximately 10 percent.
6438 6443 */
6439 6444 void wiphy_delayed_work_queue(struct wiphy *wiphy,
6440 6445 struct wiphy_delayed_work *dwork,
··· 6510 6505 */
6511 6506 bool wiphy_delayed_work_pending(struct wiphy *wiphy,
6512 6507 struct wiphy_delayed_work *dwork);
6508 +
6509 + struct wiphy_hrtimer_work {
6510 + struct wiphy_work work;
6511 + struct wiphy *wiphy;
6512 + struct hrtimer timer;
6513 + };
6514 +
6515 + enum hrtimer_restart wiphy_hrtimer_work_timer(struct hrtimer *t);
6516 +
6517 + static inline void wiphy_hrtimer_work_init(struct wiphy_hrtimer_work *hrwork,
6518 + wiphy_work_func_t func)
6519 + {
6520 + hrtimer_setup(&hrwork->timer, wiphy_hrtimer_work_timer,
6521 + CLOCK_BOOTTIME, HRTIMER_MODE_REL);
6522 + wiphy_work_init(&hrwork->work, func);
6523 + }
6524 +
6525 + /**
6526 + * wiphy_hrtimer_work_queue - queue hrtimer work for the wiphy
6527 + * @wiphy: the wiphy to queue for
6528 + * @hrwork: the high resolution timer worker
6529 + * @delay: the delay given as a ktime_t
6530 + *
6531 + * Please refer to wiphy_delayed_work_queue(). The difference is that
6532 + * the hrtimer work uses a high resolution timer for scheduling. This
6533 + * may be needed if timeouts might be scheduled further in the future
6534 + * and the accuracy of the normal timer is not sufficient.
6535 + *
6536 + * Expect a delay of a few milliseconds as the timer is scheduled
6537 + * with some slack and some more time may pass between queueing the
6538 + * work and its start.
6539 + */
6540 + void wiphy_hrtimer_work_queue(struct wiphy *wiphy,
6541 + struct wiphy_hrtimer_work *hrwork,
6542 + ktime_t delay);
6543 +
6544 + /**
6545 + * wiphy_hrtimer_work_cancel - cancel previously queued hrtimer work
6546 + * @wiphy: the wiphy, for debug purposes
6547 + * @hrtimer: the hrtimer work to cancel
6548 + *
6549 + * Cancel the work *without* waiting for it; this assumes being
6550 + * called under the wiphy mutex acquired by wiphy_lock().
6551 + */
6552 + void wiphy_hrtimer_work_cancel(struct wiphy *wiphy,
6553 + struct wiphy_hrtimer_work *hrtimer);
6554 +
6555 + /**
6556 + * wiphy_hrtimer_work_flush - flush previously queued hrtimer work
6557 + * @wiphy: the wiphy, for debug purposes
6558 + * @hrwork: the hrtimer work to flush
6559 + *
6560 + * Flush the work (i.e. run it if pending). This must be called
6561 + * under the wiphy mutex acquired by wiphy_lock().
6562 + */
6563 + void wiphy_hrtimer_work_flush(struct wiphy *wiphy,
6564 + struct wiphy_hrtimer_work *hrwork);
6565 +
6566 + /**
6567 + * wiphy_hrtimer_work_pending - Find out whether a wiphy hrtimer
6568 + * work item is currently pending.
6569 + *
6570 + * @wiphy: the wiphy, for debug purposes
6571 + * @hrwork: the hrtimer work in question
6572 + *
6573 + * Return: true if timer is pending, false otherwise
6574 + *
6575 + * Please refer to the wiphy_delayed_work_pending() documentation as
6576 + * this is the equivalent function for hrtimer based delayed work
6578 + */ 6579 + bool wiphy_hrtimer_work_pending(struct wiphy *wiphy, 6580 + struct wiphy_hrtimer_work *hrwork); 6513 6581 6514 6582 /** 6515 6583 * enum ieee80211_ap_reg_power - regulatory power for an Access Point
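For driver authors, the new interface mirrors wiphy_delayed_work but trades the timer wheel for an hrtimer. A hedged usage sketch against the declarations above (the mydrv_* names are hypothetical):

	#include <net/cfg80211.h>

	static void mydrv_timeout(struct wiphy *wiphy, struct wiphy_work *work)
	{
		struct wiphy_hrtimer_work *hrwork =
			container_of(work, struct wiphy_hrtimer_work, work);

		/* Runs with the wiphy mutex held, like any wiphy_work. */
		wiphy_dbg(wiphy, "hrtimer work %p fired\n", hrwork);
	}

	static void mydrv_arm(struct wiphy *wiphy,
			      struct wiphy_hrtimer_work *hrwork)
	{
		wiphy_hrtimer_work_init(hrwork, mydrv_timeout);
		/* ktime-based delay; expect millisecond-level slack per the doc. */
		wiphy_hrtimer_work_queue(wiphy, hrwork, ms_to_ktime(103));
	}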
+1 -1
include/net/libeth/xdp.h
··· 513 513 * can't fail, but can send less frames if there's no enough free descriptors 514 514 * available. The actual free space is returned by @prep from the driver. 515 515 */ 516 - static __always_inline u32 516 + static __always_inline __nocfi_generic u32 517 517 libeth_xdp_tx_xmit_bulk(const struct libeth_xdp_tx_frame *bulk, void *xdpsq, 518 518 u32 n, bool unroll, u64 priv, 519 519 u32 (*prep)(void *xdpsq, struct libeth_xdpsq *sq),
+1 -1
include/net/tcp.h
··· 370 370 int tcp_ioctl(struct sock *sk, int cmd, int *karg); 371 371 enum skb_drop_reason tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb); 372 372 void tcp_rcv_established(struct sock *sk, struct sk_buff *skb); 373 - void tcp_rcvbuf_grow(struct sock *sk); 373 + void tcp_rcvbuf_grow(struct sock *sk, u32 newval); 374 374 void tcp_rcv_space_adjust(struct sock *sk); 375 375 int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp); 376 376 void tcp_twsk_destructor(struct sock *sk);
+13 -12
include/net/tls.h
··· 451 451 452 452 /* Log all TLS record header TCP sequences in [seq, seq+len] */ 453 453 static inline void 454 - tls_offload_rx_resync_async_request_start(struct sock *sk, __be32 seq, u16 len) 454 + tls_offload_rx_resync_async_request_start(struct tls_offload_resync_async *resync_async, 455 + __be32 seq, u16 len) 455 456 { 456 - struct tls_context *tls_ctx = tls_get_ctx(sk); 457 - struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx); 458 - 459 - atomic64_set(&rx_ctx->resync_async->req, ((u64)ntohl(seq) << 32) | 457 + atomic64_set(&resync_async->req, ((u64)ntohl(seq) << 32) | 460 458 ((u64)len << 16) | RESYNC_REQ | RESYNC_REQ_ASYNC); 461 - rx_ctx->resync_async->loglen = 0; 462 - rx_ctx->resync_async->rcd_delta = 0; 459 + resync_async->loglen = 0; 460 + resync_async->rcd_delta = 0; 463 461 } 464 462 465 463 static inline void 466 - tls_offload_rx_resync_async_request_end(struct sock *sk, __be32 seq) 464 + tls_offload_rx_resync_async_request_end(struct tls_offload_resync_async *resync_async, 465 + __be32 seq) 467 466 { 468 - struct tls_context *tls_ctx = tls_get_ctx(sk); 469 - struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx); 467 + atomic64_set(&resync_async->req, ((u64)ntohl(seq) << 32) | RESYNC_REQ); 468 + } 470 469 471 - atomic64_set(&rx_ctx->resync_async->req, 472 - ((u64)ntohl(seq) << 32) | RESYNC_REQ); 470 + static inline void 471 + tls_offload_rx_resync_async_request_cancel(struct tls_offload_resync_async *resync_async) 472 + { 473 + atomic64_set(&resync_async->req, 0); 473 474 } 474 475 475 476 static inline void
+4 -6
include/scsi/scsi_device.h
··· 252 252 unsigned int queue_stopped; /* request queue is quiesced */ 253 253 bool offline_already; /* Device offline message logged */ 254 254 255 - unsigned int ua_new_media_ctr; /* Counter for New Media UNIT ATTENTIONs */ 256 - unsigned int ua_por_ctr; /* Counter for Power On / Reset UAs */ 255 + atomic_t ua_new_media_ctr; /* Counter for New Media UNIT ATTENTIONs */ 256 + atomic_t ua_por_ctr; /* Counter for Power On / Reset UAs */ 257 257 258 258 atomic_t disk_events_disable_depth; /* disable depth for disk events */ 259 259 ··· 693 693 } 694 694 695 695 /* Macros to access the UNIT ATTENTION counters */ 696 - #define scsi_get_ua_new_media_ctr(sdev) \ 697 - ((const unsigned int)(sdev->ua_new_media_ctr)) 698 - #define scsi_get_ua_por_ctr(sdev) \ 699 - ((const unsigned int)(sdev->ua_por_ctr)) 696 + #define scsi_get_ua_new_media_ctr(sdev) atomic_read(&sdev->ua_new_media_ctr) 697 + #define scsi_get_ua_por_ctr(sdev) atomic_read(&sdev->ua_por_ctr) 700 698 701 699 #define MODULE_ALIAS_SCSI_DEVICE(type) \ 702 700 MODULE_ALIAS("scsi:t-" __stringify(type) "*")
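The atomic_t conversion matters because these counters are bumped from completion context, where concurrent increments of a plain unsigned int can be lost; the old (const unsigned int) casts in the accessor macros, meanwhile, accomplished nothing. A portable C11 sketch of the same fix (the kernel code uses atomic_inc()/atomic_read()):

	#include <stdatomic.h>

	static atomic_uint ua_new_media_ctr;

	/* Completion path: relaxed ordering suffices for a statistic. */
	static void note_new_media_ua(void)
	{
		atomic_fetch_add_explicit(&ua_new_media_ctr, 1,
					  memory_order_relaxed);
	}

	static unsigned int get_ua_new_media_ctr(void)
	{
		return atomic_load_explicit(&ua_new_media_ctr,
					    memory_order_relaxed);
	}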
+9
include/trace/events/tcp.h
··· 218 218 __field(__u32, space) 219 219 __field(__u32, ooo_space) 220 220 __field(__u32, rcvbuf) 221 + __field(__u32, rcv_ssthresh) 222 + __field(__u32, window_clamp) 223 + __field(__u32, rcv_wnd) 221 224 __field(__u8, scaling_ratio) 222 225 __field(__u16, sport) 223 226 __field(__u16, dport) ··· 248 245 tp->rcv_nxt; 249 246 250 247 __entry->rcvbuf = sk->sk_rcvbuf; 248 + __entry->rcv_ssthresh = tp->rcv_ssthresh; 249 + __entry->window_clamp = tp->window_clamp; 250 + __entry->rcv_wnd = tp->rcv_wnd; 251 251 __entry->scaling_ratio = tp->scaling_ratio; 252 252 __entry->sport = ntohs(inet->inet_sport); 253 253 __entry->dport = ntohs(inet->inet_dport); ··· 270 264 ), 271 265 272 266 TP_printk("time=%u rtt_us=%u copied=%u inq=%u space=%u ooo=%u scaling_ratio=%u rcvbuf=%u " 267 + "rcv_ssthresh=%u window_clamp=%u rcv_wnd=%u " 273 268 "family=%s sport=%hu dport=%hu saddr=%pI4 daddr=%pI4 " 274 269 "saddrv6=%pI6c daddrv6=%pI6c skaddr=%p sock_cookie=%llx", 275 270 __entry->time, __entry->rtt_us, __entry->copied, 276 271 __entry->inq, __entry->space, __entry->ooo_space, 277 272 __entry->scaling_ratio, __entry->rcvbuf, 273 + __entry->rcv_ssthresh, __entry->window_clamp, 274 + __entry->rcv_wnd, 278 275 show_family_name(__entry->family), 279 276 __entry->sport, __entry->dport, 280 277 __entry->saddr, __entry->daddr,
+15 -8
include/uapi/drm/drm_fourcc.h
··· 979 979 * 2 = Gob Height 8, Turing+ Page Kind mapping 980 980 * 3 = Reserved for future use. 981 981 * 982 - * 22:22 s Sector layout. On Tegra GPUs prior to Xavier, there is a further 983 - * bit remapping step that occurs at an even lower level than the 984 - * page kind and block linear swizzles. This causes the layout of 985 - * surfaces mapped in those SOC's GPUs to be incompatible with the 986 - * equivalent mapping on other GPUs in the same system. 982 + * 22:22 s Sector layout. There is a further bit remapping step that occurs 983 + * 26:27 at an even lower level than the page kind and block linear 984 + * swizzles. This causes the bit arrangement of surfaces in memory 985 + * to differ subtly, and prevents direct sharing of surfaces between 986 + * GPUs with different layouts. 987 987 * 988 - * 0 = Tegra K1 - Tegra Parker/TX2 Layout. 989 - * 1 = Desktop GPU and Tegra Xavier+ Layout 988 + * 0 = Tegra K1 - Tegra Parker/TX2 Layout 989 + * 1 = Pre-GB20x, GB20x 32+ bpp, GB10, Tegra Xavier-Orin Layout 990 + * 2 = GB20x(Blackwell 2)+ 8 bpp surface layout 991 + * 3 = GB20x(Blackwell 2)+ 16 bpp surface layout 992 + * 4 = Reserved for future use. 993 + * 5 = Reserved for future use. 994 + * 6 = Reserved for future use. 995 + * 7 = Reserved for future use. 990 996 * 991 997 * 25:23 c Lossless Framebuffer Compression type. 992 998 * ··· 1007 1001 * 6 = Reserved for future use 1008 1002 * 7 = Reserved for future use 1009 1003 * 1010 - * 55:25 - Reserved for future use. Must be zero. 1004 + * 55:28 - Reserved for future use. Must be zero. 1011 1005 */ 1012 1006 #define DRM_FORMAT_MOD_NVIDIA_BLOCK_LINEAR_2D(c, s, g, k, h) \ 1013 1007 fourcc_mod_code(NVIDIA, (0x10 | \ ··· 1015 1009 (((k) & 0xff) << 12) | \ 1016 1010 (((g) & 0x3) << 20) | \ 1017 1011 (((s) & 0x1) << 22) | \ 1012 + (((s) & 0x6) << 25) | \ 1018 1013 (((c) & 0x7) << 23))) 1019 1014 1020 1015 /* To grandfather in prior block linear format modifiers to the above layout,
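Since the widened sector-layout field is split across non-contiguous modifier bits (value bit 0 at modifier bit 22, value bits 1:2 at bits 26:27, per the (s & 0x6) << 25 term in the macro), decoding has to reassemble it. A small sketch of the matching decode (helper name is illustrative):

	#include <stdint.h>

	static inline unsigned int nv_sector_layout(uint64_t modifier)
	{
		return (unsigned int)(((modifier >> 22) & 0x1) |
				      ((modifier >> 25) & 0x6));
	}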
+1 -1
include/uapi/linux/fb.h
··· 319 319 #define FB_VBLANK_HAVE_VCOUNT 0x020 /* the vcount field is valid */ 320 320 #define FB_VBLANK_HAVE_HCOUNT 0x040 /* the hcount field is valid */ 321 321 #define FB_VBLANK_VSYNCING 0x080 /* currently in a vsync */ 322 - #define FB_VBLANK_HAVE_VSYNC 0x100 /* verical syncs can be detected */ 322 + #define FB_VBLANK_HAVE_VSYNC 0x100 /* vertical syncs can be detected */ 323 323 324 324 struct fb_vblank { 325 325 __u32 flags; /* FB_VBLANK flags */
+12
include/uapi/linux/input-event-codes.h
··· 631 631 #define KEY_BRIGHTNESS_MIN 0x250 /* Set Brightness to Minimum */
632 632 #define KEY_BRIGHTNESS_MAX 0x251 /* Set Brightness to Maximum */
633 633
634 + /*
635 + * Keycodes for hotkeys toggling the electronic privacy screen found on some
636 + * laptops on/off. Note that when the embedded controller itself turns the
637 + * eprivacy screen on/off, the state should be reported through drm connector props:
638 + * https://www.kernel.org/doc/html/latest/gpu/drm-kms.html#standard-connector-properties
639 + * Except when implementing the drm connector properties API is not possible
640 + * because e.g. the firmware does not allow querying the presence and/or status
641 + * of the eprivacy screen at boot.
642 + */
643 + #define KEY_EPRIVACY_SCREEN_ON 0x252
644 + #define KEY_EPRIVACY_SCREEN_OFF 0x253
645 +
634 646 #define KEY_KBDINPUTASSIST_PREV 0x260
635 647 #define KEY_KBDINPUTASSIST_NEXT 0x261
636 648 #define KEY_KBDINPUTASSIST_PREVGROUP 0x262
-12
include/uapi/linux/io_uring.h
··· 689 689 /* query various aspects of io_uring, see linux/io_uring/query.h */ 690 690 IORING_REGISTER_QUERY = 35, 691 691 692 - /* return zcrx buffers back into circulation */ 693 - IORING_REGISTER_ZCRX_REFILL = 36, 694 - 695 692 /* this goes last */ 696 693 IORING_REGISTER_LAST, 697 694 ··· 1068 1071 __u32 zcrx_id; 1069 1072 __u32 __resv2; 1070 1073 __u64 __resv[3]; 1071 - }; 1072 - 1073 - struct io_uring_zcrx_sync_refill { 1074 - __u32 zcrx_id; 1075 - /* the number of entries to return */ 1076 - __u32 nr_entries; 1077 - /* pointer to an array of struct io_uring_zcrx_rqe */ 1078 - __u64 rqes; 1079 - __u64 __resv[2]; 1080 1074 }; 1081 1075 1082 1076 #ifdef __cplusplus
+2 -1
include/uapi/linux/virtio_net.h
··· 193 193 194 194 struct virtio_net_hdr_v1_hash { 195 195 struct virtio_net_hdr_v1 hdr; 196 - __le32 hash_value; 196 + __le16 hash_value_lo; 197 + __le16 hash_value_hi; 197 198 #define VIRTIO_NET_HASH_REPORT_NONE 0 198 199 #define VIRTIO_NET_HASH_REPORT_IPv4 1 199 200 #define VIRTIO_NET_HASH_REPORT_TCPv4 2
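Splitting hash_value into two __le16 halves is, as far as the hunk shows, an alignment fix: a 32-bit member after the v1 header forces 4-byte struct alignment, while two 16-bit halves keep it at 2 bytes, so the header can overlay packet buffers without unaligned 32-bit loads. A standalone sketch of that effect (the hdr_v1 stand-in below is illustrative, not the real 12-byte header):

#include <stdint.h>
#include <stdio.h>

struct hdr_v1 { uint8_t flags, gso_type; uint16_t hdr_len; };	/* stand-in */
struct hash32 { struct hdr_v1 hdr; uint32_t hash_value; };
struct hash16 { struct hdr_v1 hdr; uint16_t hash_value_lo, hash_value_hi; };

int main(void)
{
	/* the 32-bit field drags the whole struct up to 4-byte alignment */
	printf("__le32 field:  align %zu\n", _Alignof(struct hash32));
	printf("__le16 halves: align %zu\n", _Alignof(struct hash16));
	return 0;
}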
+7
include/ufs/ufshcd.h
··· 688 688 * single doorbell mode. 689 689 */ 690 690 UFSHCD_QUIRK_BROKEN_LSDBS_CAP = 1 << 25, 691 + 692 + /* 693 + * This quirk indicates that DME_LINKSTARTUP should not be issued a 694 + * second time (see link_startup_again) after the first attempt 695 + * succeeded, because doing so makes link startup unreliable. 696 + */ 697 + UFSHCD_QUIRK_PERFORM_LINK_STARTUP_ONCE = 1 << 26, 691 698 }; 692 699 693 700 enum ufshcd_caps {
+1 -1
io_uring/memmap.c
··· 135 135 struct io_mapped_region *mr, 136 136 struct io_uring_region_desc *reg) 137 137 { 138 - unsigned long size = mr->nr_pages << PAGE_SHIFT; 138 + unsigned long size = (size_t) mr->nr_pages << PAGE_SHIFT; 139 139 struct page **pages; 140 140 int nr_pages; 141 141
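The cast in the memmap.c hunk is a classic shift-overflow fix: nr_pages is a 32-bit count, and shifting it by PAGE_SHIFT before widening wraps once the region reaches 4 GiB. A minimal demo of the overflow class (constants assumed, not taken from io_uring):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	uint32_t nr_pages = 1u << 20;	/* 4 GiB worth of 4 KiB pages */
	uint64_t bad = nr_pages << PAGE_SHIFT;	/* shifted in 32 bits: wraps to 0 */
	uint64_t good = (uint64_t)nr_pages << PAGE_SHIFT;	/* widened first */

	printf("bad=%llu good=%llu\n",
	       (unsigned long long)bad, (unsigned long long)good);
	return 0;
}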
-3
io_uring/register.c
··· 827 827 case IORING_REGISTER_QUERY: 828 828 ret = io_query(ctx, arg, nr_args); 829 829 break; 830 - case IORING_REGISTER_ZCRX_REFILL: 831 - ret = io_zcrx_return_bufs(ctx, arg, nr_args); 832 - break; 833 830 default: 834 831 ret = -EINVAL; 835 832 break;
+9 -2
io_uring/rsrc.c
··· 1403 1403 size_t max_segs = 0; 1404 1404 unsigned i; 1405 1405 1406 - for (i = 0; i < nr_iovs; i++) 1406 + for (i = 0; i < nr_iovs; i++) { 1407 1407 max_segs += (iov[i].iov_len >> shift) + 2; 1408 + if (max_segs > INT_MAX) 1409 + return -EOVERFLOW; 1410 + } 1408 1411 return max_segs; 1409 1412 } 1410 1413 ··· 1513 1510 if (unlikely(ret)) 1514 1511 return ret; 1515 1512 } else { 1516 - nr_segs = io_estimate_bvec_size(iov, nr_iovs, imu); 1513 + int ret = io_estimate_bvec_size(iov, nr_iovs, imu); 1514 + 1515 + if (ret < 0) 1516 + return ret; 1517 + nr_segs = ret; 1517 1518 } 1518 1519 1519 1520 if (sizeof(struct bio_vec) > sizeof(struct iovec)) {
-68
io_uring/zcrx.c
··· 928 928 .uninstall = io_pp_uninstall, 929 929 }; 930 930 931 - #define IO_ZCRX_MAX_SYS_REFILL_BUFS (1 << 16) 932 - #define IO_ZCRX_SYS_REFILL_BATCH 32 933 - 934 - static void io_return_buffers(struct io_zcrx_ifq *ifq, 935 - struct io_uring_zcrx_rqe *rqes, unsigned nr) 936 - { 937 - int i; 938 - 939 - for (i = 0; i < nr; i++) { 940 - struct net_iov *niov; 941 - netmem_ref netmem; 942 - 943 - if (!io_parse_rqe(&rqes[i], ifq, &niov)) 944 - continue; 945 - 946 - scoped_guard(spinlock_bh, &ifq->rq_lock) { 947 - if (!io_zcrx_put_niov_uref(niov)) 948 - continue; 949 - } 950 - 951 - netmem = net_iov_to_netmem(niov); 952 - if (!page_pool_unref_and_test(netmem)) 953 - continue; 954 - io_zcrx_return_niov(niov); 955 - } 956 - } 957 - 958 - int io_zcrx_return_bufs(struct io_ring_ctx *ctx, 959 - void __user *arg, unsigned nr_arg) 960 - { 961 - struct io_uring_zcrx_rqe rqes[IO_ZCRX_SYS_REFILL_BATCH]; 962 - struct io_uring_zcrx_rqe __user *user_rqes; 963 - struct io_uring_zcrx_sync_refill zr; 964 - struct io_zcrx_ifq *ifq; 965 - unsigned nr, i; 966 - 967 - if (nr_arg) 968 - return -EINVAL; 969 - if (copy_from_user(&zr, arg, sizeof(zr))) 970 - return -EFAULT; 971 - if (!zr.nr_entries || zr.nr_entries > IO_ZCRX_MAX_SYS_REFILL_BUFS) 972 - return -EINVAL; 973 - if (!mem_is_zero(&zr.__resv, sizeof(zr.__resv))) 974 - return -EINVAL; 975 - 976 - ifq = xa_load(&ctx->zcrx_ctxs, zr.zcrx_id); 977 - if (!ifq) 978 - return -EINVAL; 979 - nr = zr.nr_entries; 980 - user_rqes = u64_to_user_ptr(zr.rqes); 981 - 982 - for (i = 0; i < nr;) { 983 - unsigned batch = min(nr - i, IO_ZCRX_SYS_REFILL_BATCH); 984 - size_t size = batch * sizeof(rqes[0]); 985 - 986 - if (copy_from_user(rqes, user_rqes + i, size)) 987 - return i ? i : -EFAULT; 988 - io_return_buffers(ifq, rqes, batch); 989 - 990 - i += batch; 991 - 992 - if (fatal_signal_pending(current)) 993 - return i; 994 - cond_resched(); 995 - } 996 - return nr; 997 - } 998 - 999 931 static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov, 1000 932 struct io_zcrx_ifq *ifq, int off, int len) 1001 933 {
-7
io_uring/zcrx.h
··· 63 63 }; 64 64 65 65 #if defined(CONFIG_IO_URING_ZCRX) 66 - int io_zcrx_return_bufs(struct io_ring_ctx *ctx, 67 - void __user *arg, unsigned nr_arg); 68 66 int io_register_zcrx_ifq(struct io_ring_ctx *ctx, 69 67 struct io_uring_zcrx_ifq_reg __user *arg); 70 68 void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx); ··· 94 96 unsigned int id) 95 97 { 96 98 return NULL; 97 - } 98 - static inline int io_zcrx_return_bufs(struct io_ring_ctx *ctx, 99 - void __user *arg, unsigned nr_arg) 100 - { 101 - return -EOPNOTSUPP; 102 99 } 103 100 #endif 104 101
+2
kernel/bpf/helpers.c
··· 4345 4345 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_destroy, KF_ITER_DESTROY | KF_SLEEPABLE) 4346 4346 BTF_ID_FLAGS(func, bpf_local_irq_save) 4347 4347 BTF_ID_FLAGS(func, bpf_local_irq_restore) 4348 + #ifdef CONFIG_BPF_EVENTS 4348 4349 BTF_ID_FLAGS(func, bpf_probe_read_user_dynptr) 4349 4350 BTF_ID_FLAGS(func, bpf_probe_read_kernel_dynptr) 4350 4351 BTF_ID_FLAGS(func, bpf_probe_read_user_str_dynptr) ··· 4354 4353 BTF_ID_FLAGS(func, bpf_copy_from_user_str_dynptr, KF_SLEEPABLE) 4355 4354 BTF_ID_FLAGS(func, bpf_copy_from_user_task_dynptr, KF_SLEEPABLE | KF_TRUSTED_ARGS) 4356 4355 BTF_ID_FLAGS(func, bpf_copy_from_user_task_str_dynptr, KF_SLEEPABLE | KF_TRUSTED_ARGS) 4356 + #endif 4357 4357 #ifdef CONFIG_DMA_SHARED_BUFFER 4358 4358 BTF_ID_FLAGS(func, bpf_iter_dmabuf_new, KF_ITER_NEW | KF_SLEEPABLE) 4359 4359 BTF_ID_FLAGS(func, bpf_iter_dmabuf_next, KF_ITER_NEXT | KF_RET_NULL | KF_SLEEPABLE)
+2
kernel/bpf/ringbuf.c
··· 216 216 217 217 static void bpf_ringbuf_free(struct bpf_ringbuf *rb) 218 218 { 219 + irq_work_sync(&rb->work); 220 + 219 221 /* copy pages pointer and nr_pages to local variable, as we are going 220 222 * to unmap rb itself with vunmap() below 221 223 */
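The new irq_work_sync() closes an ordering hole: a queued irq_work callback may still dereference the ring buffer while bpf_ringbuf_free() unmaps it, so the free path must wait for the callback to finish first. A userspace analogy of the same rule, with a thread standing in for the deferred work (illustrative only):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct rb { int pending; };

static void *deferred_work(void *arg)
{
	struct rb *rb = arg;

	rb->pending = 0;	/* touches rb, like the irq_work handler */
	return NULL;
}

int main(void)
{
	struct rb *rb = calloc(1, sizeof(*rb));
	pthread_t worker;

	pthread_create(&worker, NULL, deferred_work, rb);
	pthread_join(worker, NULL);	/* the irq_work_sync() step: wait first */
	free(rb);			/* only now is the object safe to free */
	return 0;
}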
+15 -5
kernel/events/core.c
··· 11773 11773 11774 11774 event = container_of(hrtimer, struct perf_event, hw.hrtimer); 11775 11775 11776 - if (event->state != PERF_EVENT_STATE_ACTIVE) 11776 + if (event->state != PERF_EVENT_STATE_ACTIVE || 11777 + event->hw.state & PERF_HES_STOPPED) 11777 11778 return HRTIMER_NORESTART; 11778 11779 11779 11780 event->pmu->read(event); ··· 11820 11819 struct hw_perf_event *hwc = &event->hw; 11821 11820 11822 11821 /* 11823 - * The throttle can be triggered in the hrtimer handler. 11824 - * The HRTIMER_NORESTART should be used to stop the timer, 11825 - * rather than hrtimer_cancel(). See perf_swevent_hrtimer() 11822 + * Careful: this function can be triggered in the hrtimer handler, 11823 + * for cpu-clock events, so hrtimer_cancel() would cause a 11824 + * deadlock. 11825 + * 11826 + * So use hrtimer_try_to_cancel() to try to stop the hrtimer, 11827 + * and the cpu-clock handler also sets the PERF_HES_STOPPED flag, 11828 + * which guarantees that perf_swevent_hrtimer() will stop the 11829 + * hrtimer once it sees the PERF_HES_STOPPED flag. 11826 11830 */ 11827 11831 if (is_sampling_event(event) && (hwc->interrupts != MAX_INTERRUPTS)) { 11828 11832 ktime_t remaining = hrtimer_get_remaining(&hwc->hrtimer); 11829 11833 local64_set(&hwc->period_left, ktime_to_ns(remaining)); 11830 11834 11831 - hrtimer_cancel(&hwc->hrtimer); 11835 + hrtimer_try_to_cancel(&hwc->hrtimer); 11832 11836 } 11833 11837 } 11834 11838 ··· 11877 11871 11878 11872 static void cpu_clock_event_start(struct perf_event *event, int flags) 11879 11873 { 11874 + event->hw.state = 0; 11880 11875 local64_set(&event->hw.prev_count, local_clock()); 11881 11876 perf_swevent_start_hrtimer(event); 11882 11877 } 11883 11878 11884 11879 static void cpu_clock_event_stop(struct perf_event *event, int flags) 11885 11880 { 11881 + event->hw.state = PERF_HES_STOPPED; 11886 11882 perf_swevent_cancel_hrtimer(event); 11887 11883 if (flags & PERF_EF_UPDATE) 11888 11884 cpu_clock_event_update(event); ··· 11958 11950 11959 11951 static void task_clock_event_start(struct perf_event *event, int flags) 11960 11952 { 11953 + event->hw.state = 0; 11961 11954 local64_set(&event->hw.prev_count, event->ctx->time); 11962 11955 perf_swevent_start_hrtimer(event); 11963 11956 } 11964 11957 11965 11958 static void task_clock_event_stop(struct perf_event *event, int flags) 11966 11959 { 11960 + event->hw.state = PERF_HES_STOPPED; 11967 11961 perf_swevent_cancel_hrtimer(event); 11968 11962 if (flags & PERF_EF_UPDATE) 11969 11963 task_clock_event_update(event, event->ctx->time);
+6 -6
kernel/futex/core.c
··· 1680 1680 { 1681 1681 struct mm_struct *mm = fph->mm; 1682 1682 1683 - guard(rcu)(); 1683 + guard(preempt)(); 1684 1684 1685 - if (smp_load_acquire(&fph->state) == FR_PERCPU) { 1686 - this_cpu_inc(*mm->futex_ref); 1685 + if (READ_ONCE(fph->state) == FR_PERCPU) { 1686 + __this_cpu_inc(*mm->futex_ref); 1687 1687 return true; 1688 1688 } 1689 1689 ··· 1694 1694 { 1695 1695 struct mm_struct *mm = fph->mm; 1696 1696 1697 - guard(rcu)(); 1697 + guard(preempt)(); 1698 1698 1699 - if (smp_load_acquire(&fph->state) == FR_PERCPU) { 1700 - this_cpu_dec(*mm->futex_ref); 1699 + if (READ_ONCE(fph->state) == FR_PERCPU) { 1700 + __this_cpu_dec(*mm->futex_ref); 1701 1701 return false; 1702 1702 } 1703 1703
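The futex hunk swaps guard(rcu) for guard(preempt) because __this_cpu_inc()/__this_cpu_dec() are plain read-modify-write updates on a per-CPU counter: they are only safe while nothing else can run on that CPU mid-update, and rcu_read_lock() does not guarantee that on preemptible kernels. A userspace analogy of the lost-update hazard, with threads standing in for preemption (illustrative only):

#include <pthread.h>
#include <stdio.h>

static long counter;	/* plain, non-atomic, like a per-CPU ref */

static void *inc_loop(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++)
		counter++;	/* read-modify-write with no exclusion */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, inc_loop, NULL);
	pthread_create(&b, NULL, inc_loop, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("%ld\n", counter);	/* typically well under 2000000 */
	return 0;
}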
-4
kernel/power/hibernate.c
··· 706 706 707 707 #ifdef CONFIG_SUSPEND 708 708 if (hibernation_mode == HIBERNATION_SUSPEND) { 709 - pm_restore_gfp_mask(); 710 709 error = suspend_devices_and_enter(mem_sleep_current); 711 710 if (!error) 712 711 goto exit; ··· 745 746 cpu_relax(); 746 747 747 748 exit: 748 - /* Match the pm_restore_gfp_mask() call in hibernate(). */ 749 - pm_restrict_gfp_mask(); 750 - 751 749 /* Restore swap signature. */ 752 750 error = swsusp_unmark(); 753 751 if (error)
+17 -5
kernel/power/main.c
··· 31 31 * held, unless the suspend/hibernate code is guaranteed not to run in parallel 32 32 * with that modification). 33 33 */ 34 + static unsigned int saved_gfp_count; 34 35 static gfp_t saved_gfp_mask; 35 36 36 37 void pm_restore_gfp_mask(void) 37 38 { 38 39 WARN_ON(!mutex_is_locked(&system_transition_mutex)); 39 - if (saved_gfp_mask) { 40 - gfp_allowed_mask = saved_gfp_mask; 41 - saved_gfp_mask = 0; 42 - } 40 + 41 + if (WARN_ON(!saved_gfp_count) || --saved_gfp_count) 42 + return; 43 + 44 + gfp_allowed_mask = saved_gfp_mask; 45 + saved_gfp_mask = 0; 46 + 47 + pm_pr_dbg("GFP mask restored\n"); 43 48 } 44 49 45 50 void pm_restrict_gfp_mask(void) 46 51 { 47 52 WARN_ON(!mutex_is_locked(&system_transition_mutex)); 48 - WARN_ON(saved_gfp_mask); 53 + 54 + if (saved_gfp_count++) { 55 + WARN_ON((saved_gfp_mask & ~(__GFP_IO | __GFP_FS)) != gfp_allowed_mask); 56 + return; 57 + } 58 + 49 59 saved_gfp_mask = gfp_allowed_mask; 50 60 gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS); 61 + 62 + pm_pr_dbg("GFP mask restricted\n"); 51 63 } 52 64 53 65 unsigned int lock_system_sleep(void)
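pm_restrict_gfp_mask()/pm_restore_gfp_mask() now nest: only the outermost restrict saves the mask and only the matching final restore puts it back, which is what lets the hibernate.c hunk above drop its manual rebalancing. A compact userspace sketch of the counted save/restore pattern (all names and mask bits illustrative):

#include <assert.h>
#include <stdio.h>

static unsigned int mask = 0xff, saved_mask;
static unsigned int saved_count;

static void restrict_mask(void)
{
	if (saved_count++)
		return;		/* an outer caller already restricted it */
	saved_mask = mask;
	mask &= ~0x3u;		/* stand-in for clearing __GFP_IO | __GFP_FS */
}

static void restore_mask(void)
{
	assert(saved_count);
	if (--saved_count)
		return;		/* inner restore: outer caller still active */
	mask = saved_mask;
	saved_mask = 0;
}

int main(void)
{
	restrict_mask();
	restrict_mask();	/* nested, e.g. hibernate entering suspend */
	restore_mask();
	restore_mask();
	printf("mask=0x%x\n", mask);	/* back to the original 0xff */
	return 0;
}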
+1
kernel/power/process.c
··· 132 132 if (!pm_freezing) 133 133 static_branch_inc(&freezer_active); 134 134 135 + pm_wakeup_clear(0); 135 136 pm_freezing = true; 136 137 error = try_to_freeze_tasks(true); 137 138 if (!error)
-1
kernel/power/suspend.c
··· 595 595 } 596 596 597 597 pm_pr_dbg("Preparing system for sleep (%s)\n", mem_sleep_labels[state]); 598 - pm_wakeup_clear(0); 599 598 pm_suspend_clear_flags(); 600 599 error = suspend_prepare(state); 601 600 if (error)
+1 -1
kernel/sched/core.c
··· 9606 9606 9607 9607 guard(rq_lock_irq)(rq); 9608 9608 cfs_rq->runtime_enabled = runtime_enabled; 9609 - cfs_rq->runtime_remaining = 0; 9609 + cfs_rq->runtime_remaining = 1; 9610 9610 9611 9611 if (cfs_rq->throttled) 9612 9612 unthrottle_cfs_rq(cfs_rq);
+111 -15
kernel/sched/ext.c
··· 67 67 68 68 static struct delayed_work scx_watchdog_work; 69 69 70 - /* for %SCX_KICK_WAIT */ 71 - static unsigned long __percpu *scx_kick_cpus_pnt_seqs; 70 + /* 71 + * For %SCX_KICK_WAIT: Each CPU has a pointer to an array of pick_task sequence 72 + * numbers. The arrays are allocated with kvzalloc() as size can exceed percpu 73 + * allocator limits on large machines. O(nr_cpu_ids^2) allocation, allocated 74 + * lazily when enabling and freed when disabling to avoid waste when sched_ext 75 + * isn't active. 76 + */ 77 + struct scx_kick_pseqs { 78 + struct rcu_head rcu; 79 + unsigned long seqs[]; 80 + }; 81 + 82 + static DEFINE_PER_CPU(struct scx_kick_pseqs __rcu *, scx_kick_pseqs); 72 83 73 84 /* 74 85 * Direct dispatch marker. ··· 791 780 if (rq->scx.flags & SCX_RQ_IN_WAKEUP) 792 781 return; 793 782 783 + /* Don't do anything if there already is a deferred operation. */ 784 + if (rq->scx.flags & SCX_RQ_BAL_CB_PENDING) 785 + return; 786 + 794 787 /* 795 788 * If in balance, the balance callbacks will be called before rq lock is 796 789 * released. Schedule one. 790 + * 791 + * 792 + * We can't directly insert the callback into the 793 + * rq's list: The call can drop its lock and make the pending balance 794 + * callback visible to unrelated code paths that call rq_pin_lock(). 795 + * 796 + * Just let balance_one() know that it must do it itself. 797 797 */ 798 798 if (rq->scx.flags & SCX_RQ_IN_BALANCE) { 799 - queue_balance_callback(rq, &rq->scx.deferred_bal_cb, 800 - deferred_bal_cb_workfn); 799 + rq->scx.flags |= SCX_RQ_BAL_CB_PENDING; 801 800 return; 802 801 } 803 802 ··· 2024 2003 dspc->cursor = 0; 2025 2004 } 2026 2005 2006 + static inline void maybe_queue_balance_callback(struct rq *rq) 2007 + { 2008 + lockdep_assert_rq_held(rq); 2009 + 2010 + if (!(rq->scx.flags & SCX_RQ_BAL_CB_PENDING)) 2011 + return; 2012 + 2013 + queue_balance_callback(rq, &rq->scx.deferred_bal_cb, 2014 + deferred_bal_cb_workfn); 2015 + 2016 + rq->scx.flags &= ~SCX_RQ_BAL_CB_PENDING; 2017 + } 2018 + 2027 2019 static int balance_one(struct rq *rq, struct task_struct *prev) 2028 2020 { 2029 2021 struct scx_sched *sch = scx_root; ··· 2183 2149 } 2184 2150 #endif 2185 2151 rq_repin_lock(rq, rf); 2152 + 2153 + maybe_queue_balance_callback(rq); 2186 2154 2187 2155 return ret; 2188 2156 } ··· 3507 3471 struct scx_dispatch_q *dsq; 3508 3472 int node; 3509 3473 3474 + irq_work_sync(&sch->error_irq_work); 3510 3475 kthread_stop(sch->helper->task); 3476 + 3511 3477 free_percpu(sch->pcpu); 3512 3478 3513 3479 for_each_node_state(node, N_POSSIBLE) ··· 3888 3850 } 3889 3851 } 3890 3852 3853 + static void free_kick_pseqs_rcu(struct rcu_head *rcu) 3854 + { 3855 + struct scx_kick_pseqs *pseqs = container_of(rcu, struct scx_kick_pseqs, rcu); 3856 + 3857 + kvfree(pseqs); 3858 + } 3859 + 3860 + static void free_kick_pseqs(void) 3861 + { 3862 + int cpu; 3863 + 3864 + for_each_possible_cpu(cpu) { 3865 + struct scx_kick_pseqs **pseqs = per_cpu_ptr(&scx_kick_pseqs, cpu); 3866 + struct scx_kick_pseqs *to_free; 3867 + 3868 + to_free = rcu_replace_pointer(*pseqs, NULL, true); 3869 + if (to_free) 3870 + call_rcu(&to_free->rcu, free_kick_pseqs_rcu); 3871 + } 3872 + } 3873 + 3891 3874 static void scx_disable_workfn(struct kthread_work *work) 3892 3875 { 3893 3876 struct scx_sched *sch = container_of(work, struct scx_sched, disable_work); ··· 4045 3986 free_percpu(scx_dsp_ctx); 4046 3987 scx_dsp_ctx = NULL; 4047 3988 scx_dsp_max_batch = 0; 3989 + free_kick_pseqs(); 4048 3990 4049 3991 mutex_unlock(&scx_enable_mutex); 4050 3992 ··· 4408 
4348 irq_work_queue(&sch->error_irq_work); 4409 4349 } 4410 4350 4351 + static int alloc_kick_pseqs(void) 4352 + { 4353 + int cpu; 4354 + 4355 + /* 4356 + * Allocate per-CPU arrays sized by nr_cpu_ids. Use kvzalloc as size 4357 + * can exceed percpu allocator limits on large machines. 4358 + */ 4359 + for_each_possible_cpu(cpu) { 4360 + struct scx_kick_pseqs **pseqs = per_cpu_ptr(&scx_kick_pseqs, cpu); 4361 + struct scx_kick_pseqs *new_pseqs; 4362 + 4363 + WARN_ON_ONCE(rcu_access_pointer(*pseqs)); 4364 + 4365 + new_pseqs = kvzalloc_node(struct_size(new_pseqs, seqs, nr_cpu_ids), 4366 + GFP_KERNEL, cpu_to_node(cpu)); 4367 + if (!new_pseqs) { 4368 + free_kick_pseqs(); 4369 + return -ENOMEM; 4370 + } 4371 + 4372 + rcu_assign_pointer(*pseqs, new_pseqs); 4373 + } 4374 + 4375 + return 0; 4376 + } 4377 + 4411 4378 static struct scx_sched *scx_alloc_and_add_sched(struct sched_ext_ops *ops) 4412 4379 { 4413 4380 struct scx_sched *sch; ··· 4582 4495 goto err_unlock; 4583 4496 } 4584 4497 4498 + ret = alloc_kick_pseqs(); 4499 + if (ret) 4500 + goto err_unlock; 4501 + 4585 4502 sch = scx_alloc_and_add_sched(ops); 4586 4503 if (IS_ERR(sch)) { 4587 4504 ret = PTR_ERR(sch); 4588 - goto err_unlock; 4505 + goto err_free_pseqs; 4589 4506 } 4590 4507 4591 4508 /* ··· 4792 4701 4793 4702 return 0; 4794 4703 4704 + err_free_pseqs: 4705 + free_kick_pseqs(); 4795 4706 err_unlock: 4796 4707 mutex_unlock(&scx_enable_mutex); 4797 4708 return ret; ··· 5175 5082 { 5176 5083 struct rq *this_rq = this_rq(); 5177 5084 struct scx_rq *this_scx = &this_rq->scx; 5178 - unsigned long *pseqs = this_cpu_ptr(scx_kick_cpus_pnt_seqs); 5085 + struct scx_kick_pseqs __rcu *pseqs_pcpu = __this_cpu_read(scx_kick_pseqs); 5179 5086 bool should_wait = false; 5087 + unsigned long *pseqs; 5180 5088 s32 cpu; 5089 + 5090 + if (unlikely(!pseqs_pcpu)) { 5091 + pr_warn_once("kick_cpus_irq_workfn() called with NULL scx_kick_pseqs"); 5092 + return; 5093 + } 5094 + 5095 + pseqs = rcu_dereference_bh(pseqs_pcpu)->seqs; 5181 5096 5182 5097 for_each_cpu(cpu, this_scx->cpus_to_kick) { 5183 5098 should_wait |= kick_one_cpu(cpu, this_rq, pseqs); ··· 5308 5207 SCX_TG_ONLINE); 5309 5208 5310 5209 scx_idle_init_masks(); 5311 - 5312 - scx_kick_cpus_pnt_seqs = 5313 - __alloc_percpu(sizeof(scx_kick_cpus_pnt_seqs[0]) * nr_cpu_ids, 5314 - __alignof__(scx_kick_cpus_pnt_seqs[0])); 5315 - BUG_ON(!scx_kick_cpus_pnt_seqs); 5316 5210 5317 5211 for_each_possible_cpu(cpu) { 5318 5212 struct rq *rq = cpu_rq(cpu); ··· 5784 5688 BTF_ID_FLAGS(func, scx_bpf_dispatch_nr_slots) 5785 5689 BTF_ID_FLAGS(func, scx_bpf_dispatch_cancel) 5786 5690 BTF_ID_FLAGS(func, scx_bpf_dsq_move_to_local) 5787 - BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_slice) 5788 - BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_vtime) 5691 + BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_slice, KF_RCU) 5692 + BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_vtime, KF_RCU) 5789 5693 BTF_ID_FLAGS(func, scx_bpf_dsq_move, KF_RCU) 5790 5694 BTF_ID_FLAGS(func, scx_bpf_dsq_move_vtime, KF_RCU) 5791 5695 BTF_KFUNCS_END(scx_kfunc_ids_dispatch) ··· 5916 5820 5917 5821 BTF_KFUNCS_START(scx_kfunc_ids_unlocked) 5918 5822 BTF_ID_FLAGS(func, scx_bpf_create_dsq, KF_SLEEPABLE) 5919 - BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_slice) 5920 - BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_vtime) 5823 + BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_slice, KF_RCU) 5824 + BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_vtime, KF_RCU) 5921 5825 BTF_ID_FLAGS(func, scx_bpf_dsq_move, KF_RCU) 5922 5826 BTF_ID_FLAGS(func, scx_bpf_dsq_move_vtime, KF_RCU) 5923 5827 
BTF_KFUNCS_END(scx_kfunc_ids_unlocked)
+6 -9
kernel/sched/fair.c
··· 6024 6024 struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq)]; 6025 6025 6026 6026 /* 6027 - * It's possible we are called with !runtime_remaining due to things 6028 - * like user changed quota setting(see tg_set_cfs_bandwidth()) or async 6029 - * unthrottled us with a positive runtime_remaining but other still 6030 - * running entities consumed those runtime before we reached here. 6027 + * It's possible we are called with runtime_remaining < 0 when, e.g., an 6028 + * async unthrottle left us with positive runtime_remaining but other 6029 + * still-running entities consumed that runtime before we reached here. 6031 6030 * 6032 - * Anyway, we can't unthrottle this cfs_rq without any runtime remaining 6033 - * because any enqueue in tg_unthrottle_up() will immediately trigger a 6034 - * throttle, which is not supposed to happen on unthrottle path. 6031 + * We can't unthrottle this cfs_rq without any runtime remaining because 6032 + * any enqueue in tg_unthrottle_up() would immediately trigger a throttle, 6033 + * which is not supposed to happen on the unthrottle path. 6035 6034 */ 6036 6035 if (cfs_rq->runtime_enabled && cfs_rq->runtime_remaining <= 0) 6037 6036 return; 6038 - 6039 - se = cfs_rq->tg->se[cpu_of(rq)]; 6040 6037 6041 6038 cfs_rq->throttled = 0; 6042 6039
+1
kernel/sched/sched.h
··· 784 784 SCX_RQ_BAL_KEEP = 1 << 3, /* balance decided to keep current */ 785 785 SCX_RQ_BYPASSING = 1 << 4, 786 786 SCX_RQ_CLK_VALID = 1 << 5, /* RQ clock is fresh and valid */ 787 + SCX_RQ_BAL_CB_PENDING = 1 << 6, /* must queue a cb after dispatching */ 787 788 788 789 SCX_RQ_IN_WAKEUP = 1 << 16, 789 790 SCX_RQ_IN_BALANCE = 1 << 17,
+4
kernel/trace/ring_buffer.c
··· 7344 7344 goto out; 7345 7345 } 7346 7346 7347 + /* Did the reader catch up with the writer? */ 7348 + if (cpu_buffer->reader_page == cpu_buffer->commit_page) 7349 + goto out; 7350 + 7347 7351 reader = rb_get_reader_page(cpu_buffer); 7348 7352 if (WARN_ON(!reader)) 7349 7353 goto out;
+4 -2
kernel/trace/trace_events_hist.c
··· 3272 3272 var = create_var(hist_data, file, field_name, val->size, val->type); 3273 3273 if (IS_ERR(var)) { 3274 3274 hist_err(tr, HIST_ERR_VAR_CREATE_FIND_FAIL, errpos(field_name)); 3275 - kfree(val); 3275 + destroy_hist_field(val, 0); 3276 3276 ret = PTR_ERR(var); 3277 3277 goto err; 3278 3278 } 3279 3279 3280 3280 field_var = kzalloc(sizeof(struct field_var), GFP_KERNEL); 3281 3281 if (!field_var) { 3282 - kfree(val); 3282 + destroy_hist_field(val, 0); 3283 + kfree_const(var->type); 3284 + kfree(var->var.name); 3283 3285 kfree(var); 3284 3286 ret = -ENOMEM; 3285 3287 goto err;
+6 -1
kernel/trace/trace_fprobe.c
··· 106 106 if (!tuser->name) 107 107 return NULL; 108 108 109 + /* Register tracepoint if it is loaded. */ 109 110 if (tpoint) { 111 + tuser->tpoint = tpoint; 110 112 ret = tracepoint_user_register(tuser); 111 113 if (ret) 112 114 return ERR_PTR(ret); 113 115 } 114 116 115 - tuser->tpoint = tpoint; 116 117 tuser->refcount = 1; 117 118 INIT_LIST_HEAD(&tuser->list); 118 119 list_add(&tuser->list, &tracepoint_user_list); ··· 1514 1513 if (!trace_probe_is_enabled(tp)) { 1515 1514 list_for_each_entry(tf, trace_probe_probe_list(tp), tp.list) { 1516 1515 unregister_fprobe(&tf->fp); 1516 + if (tf->tuser) { 1517 + tracepoint_user_put(tf->tuser); 1518 + tf->tuser = NULL; 1519 + } 1517 1520 } 1518 1521 } 1519 1522
+1 -1
lib/Kconfig.kmsan
··· 3 3 bool 4 4 5 5 config HAVE_KMSAN_COMPILER 6 - def_bool CC_IS_CLANG 6 + def_bool $(cc-option,-fsanitize=kernel-memory) 7 7 8 8 config KMSAN 9 9 bool "KMSAN: detector of uninitialized values use"
+1 -1
lib/crypto/Kconfig
··· 64 64 config CRYPTO_LIB_CURVE25519_ARCH 65 65 bool 66 66 depends on CRYPTO_LIB_CURVE25519 && !UML && !KMSAN 67 - default y if ARM && KERNEL_MODE_NEON 67 + default y if ARM && KERNEL_MODE_NEON && !CPU_BIG_ENDIAN 68 68 default y if PPC64 && CPU_LITTLE_ENDIAN 69 69 default y if X86_64 70 70
+1 -1
lib/crypto/Makefile
··· 90 90 libcurve25519-$(CONFIG_CRYPTO_LIB_CURVE25519_GENERIC) += curve25519-fiat32.o 91 91 endif 92 92 # clang versions prior to 18 may blow out the stack with KASAN 93 - ifeq ($(call clang-min-version, 180000),) 93 + ifeq ($(CONFIG_CC_IS_CLANG)_$(call clang-min-version, 180000),y_) 94 94 KASAN_SANITIZE_curve25519-hacl64.o := n 95 95 endif 96 96
+1 -1
lib/kunit/kunit-test.c
··· 739 739 740 740 static void test_dev_action(void *priv) 741 741 { 742 - *(void **)priv = (void *)1; 742 + *(long *)priv = 1; 743 743 } 744 744 745 745 static void kunit_device_test(struct kunit *test)
+2 -1
lib/kunit/test.c
··· 745 745 .param_index = ++test.param_index, 746 746 .parent = &test, 747 747 }; 748 - kunit_init_test(&param_test, test_case->name, test_case->log); 748 + kunit_init_test(&param_test, test_case->name, NULL); 749 + param_test.log = test_case->log; 749 750 kunit_run_case_catch_errors(suite, test_case, &param_test); 750 751 751 752 if (param_desc[0] == '\0') {
+5 -1
mm/slub.c
··· 4666 4666 if (kmem_cache_debug(s)) { 4667 4667 freelist = alloc_single_from_new_slab(s, slab, orig_size, gfpflags); 4668 4668 4669 - if (unlikely(!freelist)) 4669 + if (unlikely(!freelist)) { 4670 + /* This could cause an endless loop. Fail instead. */ 4671 + if (!allow_spin) 4672 + return NULL; 4670 4673 goto new_objects; 4674 + } 4671 4675 4672 4676 if (s->flags & SLAB_STORE_USER) 4673 4677 set_track(s, freelist, TRACK_ALLOC, addr,
+2
net/8021q/vlan.c
··· 193 193 vlan_group_set_device(grp, vlan->vlan_proto, vlan_id, dev); 194 194 grp->nr_vlan_devs++; 195 195 196 + netdev_update_features(dev); 197 + 196 198 return 0; 197 199 198 200 out_unregister_netdev:
+12 -2
net/batman-adv/originator.c
··· 763 763 bat_priv = netdev_priv(mesh_iface); 764 764 765 765 primary_if = batadv_primary_if_get_selected(bat_priv); 766 - if (!primary_if || primary_if->if_status != BATADV_IF_ACTIVE) { 766 + if (!primary_if) { 767 767 ret = -ENOENT; 768 768 goto out_put_mesh_iface; 769 + } 770 + 771 + if (primary_if->if_status != BATADV_IF_ACTIVE) { 772 + ret = -ENOENT; 773 + goto out_put_primary_if; 769 774 } 770 775 771 776 hard_iface = batadv_netlink_get_hardif(bat_priv, cb); ··· 1332 1327 bat_priv = netdev_priv(mesh_iface); 1333 1328 1334 1329 primary_if = batadv_primary_if_get_selected(bat_priv); 1335 - if (!primary_if || primary_if->if_status != BATADV_IF_ACTIVE) { 1330 + if (!primary_if) { 1336 1331 ret = -ENOENT; 1337 1332 goto out_put_mesh_iface; 1333 + } 1334 + 1335 + if (primary_if->if_status != BATADV_IF_ACTIVE) { 1336 + ret = -ENOENT; 1337 + goto out_put_primary_if; 1338 1338 } 1339 1339 1340 1340 hard_iface = batadv_netlink_get_hardif(bat_priv, cb);
+7
net/bluetooth/hci_conn.c
··· 843 843 if (bis) 844 844 return; 845 845 846 + bis = hci_conn_hash_lookup_big_state(hdev, 847 + conn->iso_qos.bcast.big, 848 + BT_OPEN, 849 + HCI_ROLE_MASTER); 850 + if (bis) 851 + return; 852 + 846 853 hci_le_terminate_big(hdev, conn); 847 854 } else { 848 855 hci_le_big_terminate(hdev, conn->iso_qos.bcast.big,
+16 -2
net/bluetooth/hci_event.c
··· 1607 1607 1608 1608 hci_dev_set_flag(hdev, HCI_LE_ADV); 1609 1609 1610 - if (adv && !adv->periodic) 1610 + if (adv) 1611 1611 adv->enabled = true; 1612 + else if (!set->handle) 1613 + hci_dev_set_flag(hdev, HCI_LE_ADV_0); 1612 1614 1613 1615 conn = hci_lookup_le_connect(hdev); 1614 1616 if (conn) ··· 1621 1619 if (cp->num_of_sets) { 1622 1620 if (adv) 1623 1621 adv->enabled = false; 1622 + else if (!set->handle) 1623 + hci_dev_clear_flag(hdev, HCI_LE_ADV_0); 1624 1624 1625 1625 /* If just one instance was disabled check if there are 1626 1626 * any other instance enabled before clearing HCI_LE_ADV ··· 3963 3959 hci_dev_set_flag(hdev, HCI_LE_PER_ADV); 3964 3960 3965 3961 if (adv) 3966 - adv->enabled = true; 3962 + adv->periodic_enabled = true; 3967 3963 } else { 3964 + if (adv) 3965 + adv->periodic_enabled = false; 3966 + 3968 3967 /* If just one instance was disabled check if there are 3969 3968 * any other instance enabled before clearing HCI_LE_PER_ADV. 3970 3969 * The current periodic adv instance will be marked as ··· 4218 4211 } 4219 4212 4220 4213 if (i == ARRAY_SIZE(hci_cc_table)) { 4214 + if (!skb->len) { 4215 + bt_dev_err(hdev, "Unexpected cc 0x%4.4x with no status", 4216 + *opcode); 4217 + *status = HCI_ERROR_UNSPECIFIED; 4218 + return; 4219 + } 4220 + 4221 4221 /* Unknown opcode, assume byte 0 contains the status, so 4222 4222 * that e.g. __hci_cmd_sync() properly returns errors 4223 4223 * for vendor specific commands send by HCI drivers.
+14 -9
net/bluetooth/hci_sync.c
··· 863 863 { 864 864 struct hci_cmd_sync_work_entry *entry; 865 865 866 - entry = hci_cmd_sync_lookup_entry(hdev, func, data, destroy); 867 - if (!entry) 868 - return false; 866 + mutex_lock(&hdev->cmd_sync_work_lock); 869 867 870 - hci_cmd_sync_cancel_entry(hdev, entry); 868 + entry = _hci_cmd_sync_lookup_entry(hdev, func, data, destroy); 869 + if (!entry) { 870 + mutex_unlock(&hdev->cmd_sync_work_lock); 871 + return false; 872 + } 873 + 874 + _hci_cmd_sync_cancel_entry(hdev, entry, -ECANCELED); 875 + 876 + mutex_unlock(&hdev->cmd_sync_work_lock); 871 877 872 878 return true; 873 879 } ··· 1607 1601 1608 1602 /* If periodic advertising already disabled there is nothing to do. */ 1609 1603 adv = hci_find_adv_instance(hdev, instance); 1610 - if (!adv || !adv->periodic || !adv->enabled) 1604 + if (!adv || !adv->periodic_enabled) 1611 1605 return 0; 1612 1606 1613 1607 memset(&cp, 0, sizeof(cp)); ··· 1672 1666 1673 1667 /* If periodic advertising already enabled there is nothing to do. */ 1674 1668 adv = hci_find_adv_instance(hdev, instance); 1675 - if (adv && adv->periodic && adv->enabled) 1669 + if (adv && adv->periodic_enabled) 1676 1670 return 0; 1677 1671 1678 1672 memset(&cp, 0, sizeof(cp)); ··· 2606 2600 /* If current advertising instance is set to instance 0x00 2607 2601 * then we need to re-enable it. 2608 2602 */ 2609 - if (!hdev->cur_adv_instance) 2610 - err = hci_enable_ext_advertising_sync(hdev, 2611 - hdev->cur_adv_instance); 2603 + if (hci_dev_test_and_clear_flag(hdev, HCI_LE_ADV_0)) 2604 + err = hci_enable_ext_advertising_sync(hdev, 0x00); 2612 2605 } else { 2613 2606 /* Schedule for most recent instance to be restarted and begin 2614 2607 * the software rotation loop
+8 -2
net/bluetooth/iso.c
··· 2032 2032 */ 2033 2033 if (!bacmp(&hcon->dst, BDADDR_ANY)) { 2034 2034 bacpy(&hcon->dst, &iso_pi(parent)->dst); 2035 - hcon->dst_type = iso_pi(parent)->dst_type; 2035 + hcon->dst_type = le_addr_type(iso_pi(parent)->dst_type); 2036 2036 } 2037 2037 2038 2038 if (test_bit(HCI_CONN_PA_SYNC, &hcon->flags)) { ··· 2046 2046 } 2047 2047 2048 2048 bacpy(&iso_pi(sk)->dst, &hcon->dst); 2049 - iso_pi(sk)->dst_type = hcon->dst_type; 2049 + 2050 + /* Convert from HCI to three-value type */ 2051 + if (hcon->dst_type == ADDR_LE_DEV_PUBLIC) 2052 + iso_pi(sk)->dst_type = BDADDR_LE_PUBLIC; 2053 + else 2054 + iso_pi(sk)->dst_type = BDADDR_LE_RANDOM; 2055 + 2050 2056 iso_pi(sk)->sync_handle = iso_pi(parent)->sync_handle; 2051 2057 memcpy(iso_pi(sk)->base, iso_pi(parent)->base, iso_pi(parent)->base_len); 2052 2058 iso_pi(sk)->base_len = iso_pi(parent)->base_len;
+2 -2
net/bluetooth/l2cap_core.c
··· 282 282 if (!delayed_work_pending(&chan->monitor_timer) && 283 283 chan->retrans_timeout) { 284 284 l2cap_set_timer(chan, &chan->retrans_timer, 285 - secs_to_jiffies(chan->retrans_timeout)); 285 + msecs_to_jiffies(chan->retrans_timeout)); 286 286 } 287 287 } 288 288 ··· 291 291 __clear_retrans_timer(chan); 292 292 if (chan->monitor_timeout) { 293 293 l2cap_set_timer(chan, &chan->monitor_timer, 294 - secs_to_jiffies(chan->monitor_timeout)); 294 + msecs_to_jiffies(chan->monitor_timeout)); 295 295 } 296 296 } 297 297
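The l2cap_core.c change is a pure units fix: retrans_timeout and monitor_timeout are stored in milliseconds, so converting them with a seconds-based helper armed the ERTM timers a thousand times too late. A small demo of the discrepancy (simplified converters, HZ assumed; the real helpers also round up):

#include <stdio.h>

#define HZ 250

/* simplified stand-ins for the kernel conversion helpers */
static unsigned long secs_to_jiffies(unsigned long s)   { return s * HZ; }
static unsigned long msecs_to_jiffies(unsigned long ms) { return ms * HZ / 1000; }

int main(void)
{
	unsigned long retrans_timeout = 2000;	/* milliseconds */

	printf("seconds-based: %lu jiffies (%lu s)\n",
	       secs_to_jiffies(retrans_timeout),
	       secs_to_jiffies(retrans_timeout) / HZ);
	printf("msec-based:    %lu jiffies (%lu s)\n",
	       msecs_to_jiffies(retrans_timeout),
	       msecs_to_jiffies(retrans_timeout) / HZ);
	return 0;
}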
+18 -14
net/bluetooth/mgmt.c
··· 2175 2175 sk = cmd->sk; 2176 2176 2177 2177 if (status) { 2178 + mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2179 + status); 2178 2180 mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true, 2179 2181 cmd_status_rsp, &status); 2180 - return; 2182 + goto done; 2181 2183 } 2182 2184 2183 - mgmt_pending_remove(cmd); 2184 2185 mgmt_cmd_complete(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 0, NULL, 0); 2186 + 2187 + done: 2188 + mgmt_pending_free(cmd); 2185 2189 } 2186 2190 2187 2191 static int set_mesh_sync(struct hci_dev *hdev, void *data) 2188 2192 { 2189 2193 struct mgmt_pending_cmd *cmd = data; 2190 - struct mgmt_cp_set_mesh cp; 2194 + DEFINE_FLEX(struct mgmt_cp_set_mesh, cp, ad_types, num_ad_types, 2195 + sizeof(hdev->mesh_ad_types)); 2191 2196 size_t len; 2192 2197 2193 2198 mutex_lock(&hdev->mgmt_pending_lock); ··· 2202 2197 return -ECANCELED; 2203 2198 } 2204 2199 2205 - memcpy(&cp, cmd->param, sizeof(cp)); 2200 + len = cmd->param_len; 2201 + memcpy(cp, cmd->param, min(__struct_size(cp), len)); 2206 2202 2207 2203 mutex_unlock(&hdev->mgmt_pending_lock); 2208 2204 2209 - len = cmd->param_len; 2210 - 2211 2205 memset(hdev->mesh_ad_types, 0, sizeof(hdev->mesh_ad_types)); 2212 2206 2213 - if (cp.enable) 2207 + if (cp->enable) 2214 2208 hci_dev_set_flag(hdev, HCI_MESH); 2215 2209 else 2216 2210 hci_dev_clear_flag(hdev, HCI_MESH); 2217 2211 2218 - hdev->le_scan_interval = __le16_to_cpu(cp.period); 2219 - hdev->le_scan_window = __le16_to_cpu(cp.window); 2212 + hdev->le_scan_interval = __le16_to_cpu(cp->period); 2213 + hdev->le_scan_window = __le16_to_cpu(cp->window); 2220 2214 2221 - len -= sizeof(cp); 2215 + len -= sizeof(struct mgmt_cp_set_mesh); 2222 2216 2223 2217 /* If filters don't fit, forward all adv pkts */ 2224 2218 if (len <= sizeof(hdev->mesh_ad_types)) 2225 - memcpy(hdev->mesh_ad_types, cp.ad_types, len); 2219 + memcpy(hdev->mesh_ad_types, cp->ad_types, len); 2226 2220 2227 2221 hci_update_passive_scan_sync(hdev); 2228 2222 return 0; ··· 5395 5391 for (i = 0; i < pattern_count; i++) { 5396 5392 offset = patterns[i].offset; 5397 5393 length = patterns[i].length; 5398 - if (offset >= HCI_MAX_EXT_AD_LENGTH || 5399 - length > HCI_MAX_EXT_AD_LENGTH || 5400 - (offset + length) > HCI_MAX_EXT_AD_LENGTH) 5394 + if (offset >= HCI_MAX_AD_LENGTH || 5395 + length > HCI_MAX_AD_LENGTH || 5396 + (offset + length) > HCI_MAX_AD_LENGTH) 5401 5397 return MGMT_STATUS_INVALID_PARAMS; 5402 5398 5403 5399 p = kmalloc(sizeof(*p), GFP_KERNEL);
+11 -15
net/bluetooth/rfcomm/tty.c
··· 643 643 tty_port_tty_hangup(&dev->port, true); 644 644 645 645 dev->modem_status = 646 - ((v24_sig & RFCOMM_V24_RTC) ? (TIOCM_DSR | TIOCM_DTR) : 0) | 647 - ((v24_sig & RFCOMM_V24_RTR) ? (TIOCM_RTS | TIOCM_CTS) : 0) | 646 + ((v24_sig & RFCOMM_V24_RTC) ? TIOCM_DSR : 0) | 647 + ((v24_sig & RFCOMM_V24_RTR) ? TIOCM_CTS : 0) | 648 648 ((v24_sig & RFCOMM_V24_IC) ? TIOCM_RI : 0) | 649 649 ((v24_sig & RFCOMM_V24_DV) ? TIOCM_CD : 0); 650 650 } ··· 1055 1055 static int rfcomm_tty_tiocmget(struct tty_struct *tty) 1056 1056 { 1057 1057 struct rfcomm_dev *dev = tty->driver_data; 1058 + struct rfcomm_dlc *dlc = dev->dlc; 1059 + u8 v24_sig; 1058 1060 1059 1061 BT_DBG("tty %p dev %p", tty, dev); 1060 1062 1061 - return dev->modem_status; 1063 + rfcomm_dlc_get_modem_status(dlc, &v24_sig); 1064 + 1065 + return (v24_sig & (TIOCM_DTR | TIOCM_RTS)) | dev->modem_status; 1062 1066 } 1063 1067 1064 1068 static int rfcomm_tty_tiocmset(struct tty_struct *tty, unsigned int set, unsigned int clear) ··· 1075 1071 1076 1072 rfcomm_dlc_get_modem_status(dlc, &v24_sig); 1077 1073 1078 - if (set & TIOCM_DSR || set & TIOCM_DTR) 1074 + if (set & TIOCM_DTR) 1079 1075 v24_sig |= RFCOMM_V24_RTC; 1080 - if (set & TIOCM_RTS || set & TIOCM_CTS) 1076 + if (set & TIOCM_RTS) 1081 1077 v24_sig |= RFCOMM_V24_RTR; 1082 - if (set & TIOCM_RI) 1083 - v24_sig |= RFCOMM_V24_IC; 1084 - if (set & TIOCM_CD) 1085 - v24_sig |= RFCOMM_V24_DV; 1086 1078 1087 - if (clear & TIOCM_DSR || clear & TIOCM_DTR) 1079 + if (clear & TIOCM_DTR) 1088 1080 v24_sig &= ~RFCOMM_V24_RTC; 1089 - if (clear & TIOCM_RTS || clear & TIOCM_CTS) 1081 + if (clear & TIOCM_RTS) 1090 1082 v24_sig &= ~RFCOMM_V24_RTR; 1091 - if (clear & TIOCM_RI) 1092 - v24_sig &= ~RFCOMM_V24_IC; 1093 - if (clear & TIOCM_CD) 1094 - v24_sig &= ~RFCOMM_V24_DV; 1095 1083 1096 1084 rfcomm_dlc_set_modem_status(dlc, v24_sig); 1097 1085
+1 -1
net/bridge/br_forward.c
··· 25 25 26 26 vg = nbp_vlan_group_rcu(p); 27 27 return ((p->flags & BR_HAIRPIN_MODE) || skb->dev != p->dev) && 28 - (br_mst_is_enabled(p->br) || p->state == BR_STATE_FORWARDING) && 28 + (br_mst_is_enabled(p) || p->state == BR_STATE_FORWARDING) && 29 29 br_allowed_egress(vg, skb) && nbp_switchdev_allowed_egress(p, skb) && 30 30 !br_skb_isolated(p, skb); 31 31 }
+1
net/bridge/br_if.c
··· 386 386 del_nbp(p); 387 387 } 388 388 389 + br_mst_uninit(br); 389 390 br_recalculate_neigh_suppress_enabled(br); 390 391 391 392 br_fdb_delete_by_port(br, NULL, 0, 1);
+2 -2
net/bridge/br_input.c
··· 94 94 95 95 br = p->br; 96 96 97 - if (br_mst_is_enabled(br)) { 97 + if (br_mst_is_enabled(p)) { 98 98 state = BR_STATE_FORWARDING; 99 99 } else { 100 100 if (p->state == BR_STATE_DISABLED) { ··· 429 429 return RX_HANDLER_PASS; 430 430 431 431 forward: 432 - if (br_mst_is_enabled(p->br)) 432 + if (br_mst_is_enabled(p)) 433 433 goto defer_stp_filtering; 434 434 435 435 switch (p->state) {
+8 -2
net/bridge/br_mst.c
··· 22 22 } 23 23 EXPORT_SYMBOL_GPL(br_mst_enabled); 24 24 25 + void br_mst_uninit(struct net_bridge *br) 26 + { 27 + if (br_opt_get(br, BROPT_MST_ENABLED)) 28 + static_branch_dec(&br_mst_used); 29 + } 30 + 25 31 int br_mst_get_info(const struct net_device *dev, u16 msti, unsigned long *vids) 26 32 { 27 33 const struct net_bridge_vlan_group *vg; ··· 231 225 return err; 232 226 233 227 if (on) 234 - static_branch_enable(&br_mst_used); 228 + static_branch_inc(&br_mst_used); 235 229 else 236 - static_branch_disable(&br_mst_used); 230 + static_branch_dec(&br_mst_used); 237 231 238 232 br_opt_toggle(br, BROPT_MST_ENABLED, on); 239 233 return 0;
+10 -3
net/bridge/br_private.h
··· 1935 1935 /* br_mst.c */ 1936 1936 #ifdef CONFIG_BRIDGE_VLAN_FILTERING 1937 1937 DECLARE_STATIC_KEY_FALSE(br_mst_used); 1938 - static inline bool br_mst_is_enabled(struct net_bridge *br) 1938 + static inline bool br_mst_is_enabled(const struct net_bridge_port *p) 1939 1939 { 1940 + /* check the port's vlan group to avoid racing with port deletion */ 1940 1941 return static_branch_unlikely(&br_mst_used) && 1941 - br_opt_get(br, BROPT_MST_ENABLED); 1942 + br_opt_get(p->br, BROPT_MST_ENABLED) && 1943 + rcu_access_pointer(p->vlgrp); 1942 1944 } 1943 1945 1944 1946 int br_mst_set_state(struct net_bridge_port *p, u16 msti, u8 state, ··· 1954 1952 const struct net_bridge_vlan_group *vg); 1955 1953 int br_mst_process(struct net_bridge_port *p, const struct nlattr *mst_attr, 1956 1954 struct netlink_ext_ack *extack); 1955 + void br_mst_uninit(struct net_bridge *br); 1957 1956 #else 1958 - static inline bool br_mst_is_enabled(struct net_bridge *br) 1957 + static inline bool br_mst_is_enabled(const struct net_bridge_port *p) 1959 1958 { 1960 1959 return false; 1961 1960 } ··· 1989 1986 struct netlink_ext_ack *extack) 1990 1987 { 1991 1988 return -EOPNOTSUPP; 1989 + } 1990 + 1991 + static inline void br_mst_uninit(struct net_bridge *br) 1992 + { 1992 1993 } 1993 1994 #endif 1994 1995
+24 -3
net/core/devmem.c
··· 17 17 #include <net/page_pool/helpers.h> 18 18 #include <net/page_pool/memory_provider.h> 19 19 #include <net/sock.h> 20 + #include <net/tcp.h> 20 21 #include <trace/events/page_pool.h> 21 22 22 23 #include "devmem.h" ··· 358 357 unsigned int dmabuf_id) 359 358 { 360 359 struct net_devmem_dmabuf_binding *binding; 361 - struct dst_entry *dst = __sk_dst_get(sk); 360 + struct net_device *dst_dev; 361 + struct dst_entry *dst; 362 362 int err = 0; 363 363 364 364 binding = net_devmem_lookup_dmabuf(dmabuf_id); ··· 368 366 goto out_err; 369 367 } 370 368 369 + rcu_read_lock(); 370 + dst = __sk_dst_get(sk); 371 + /* If dst is NULL (route expired), attempt to rebuild it. */ 372 + if (unlikely(!dst)) { 373 + if (inet_csk(sk)->icsk_af_ops->rebuild_header(sk)) { 374 + err = -EHOSTUNREACH; 375 + goto out_unlock; 376 + } 377 + dst = __sk_dst_get(sk); 378 + if (unlikely(!dst)) { 379 + err = -ENODEV; 380 + goto out_unlock; 381 + } 382 + } 383 + 371 384 /* The dma-addrs in this binding are only reachable to the corresponding 372 385 * net_device. 373 386 */ 374 - if (!dst || !dst->dev || dst->dev->ifindex != binding->dev->ifindex) { 387 + dst_dev = dst_dev_rcu(dst); 388 + if (unlikely(!dst_dev) || unlikely(dst_dev != binding->dev)) { 375 389 err = -ENODEV; 376 - goto out_err; 390 + goto out_unlock; 377 391 } 378 392 393 + rcu_read_unlock(); 379 394 return binding; 380 395 396 + out_unlock: 397 + rcu_read_unlock(); 381 398 out_err: 382 399 if (binding) 383 400 net_devmem_dmabuf_binding_put(binding);
+2 -1
net/core/filter.c
··· 3877 3877 u32 new_len = skb->len + head_room; 3878 3878 int ret; 3879 3879 3880 - if (unlikely(flags || (!skb_is_gso(skb) && new_len > max_len) || 3880 + if (unlikely(flags || (int)head_room < 0 || 3881 + (!skb_is_gso(skb) && new_len > max_len) || 3881 3882 new_len < skb->len)) 3882 3883 return -EINVAL; 3883 3884
+2 -2
net/core/gro_cells.c
··· 60 60 struct sk_buff *skb; 61 61 int work_done = 0; 62 62 63 - __local_lock_nested_bh(&cell->bh_lock); 64 63 while (work_done < budget) { 64 + __local_lock_nested_bh(&cell->bh_lock); 65 65 skb = __skb_dequeue(&cell->napi_skbs); 66 + __local_unlock_nested_bh(&cell->bh_lock); 66 67 if (!skb) 67 68 break; 68 69 napi_gro_receive(napi, skb); ··· 72 71 73 72 if (work_done < budget) 74 73 napi_complete_done(napi, work_done); 75 - __local_unlock_nested_bh(&cell->bh_lock); 76 74 return work_done; 77 75 } 78 76
+2 -5
net/core/netpoll.c
··· 228 228 { 229 229 struct sk_buff_head *skb_pool; 230 230 struct sk_buff *skb; 231 - unsigned long flags; 232 231 233 232 skb_pool = &np->skb_pool; 234 233 235 - spin_lock_irqsave(&skb_pool->lock, flags); 236 - while (skb_pool->qlen < MAX_SKBS) { 234 + while (READ_ONCE(skb_pool->qlen) < MAX_SKBS) { 237 235 skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC); 238 236 if (!skb) 239 237 break; 240 238 241 - __skb_queue_tail(skb_pool, skb); 239 + skb_queue_tail(skb_pool, skb); 242 240 } 243 - spin_unlock_irqrestore(&skb_pool->lock, flags); 244 241 } 245 242 246 243 static void zap_completion_queue(void)
+8 -2
net/dsa/tag_brcm.c
··· 224 224 { 225 225 int len = BRCM_LEG_TAG_LEN; 226 226 int source_port; 227 + __be16 *proto; 227 228 u8 *brcm_tag; 228 229 229 230 if (unlikely(!pskb_may_pull(skb, BRCM_LEG_TAG_LEN + VLAN_HLEN))) 230 231 return NULL; 231 232 232 233 brcm_tag = dsa_etype_header_pos_rx(skb); 234 + proto = (__be16 *)(brcm_tag + BRCM_LEG_TAG_LEN); 233 235 234 236 source_port = brcm_tag[5] & BRCM_LEG_PORT_ID; 235 237 ··· 239 237 if (!skb->dev) 240 238 return NULL; 241 239 242 - /* VLAN tag is added by BCM63xx internal switch */ 243 - if (netdev_uses_dsa(skb->dev)) 240 + /* The internal switch in BCM63XX SoCs always tags on egress on the CPU 241 + * port. We use VID 0 internally for untagged traffic, so strip the tag 242 + * if the TCI field is all 0, and keep it otherwise to also retain 243 + * e.g. 802.1p tagged packets. 244 + */ 245 + if (proto[0] == htons(ETH_P_8021Q) && proto[1] == 0) 244 246 len += VLAN_HLEN; 245 247 246 248 /* Remove Broadcom tag and update checksum */
+14 -7
net/ipv4/tcp_input.c
··· 891 891 } 892 892 } 893 893 894 - void tcp_rcvbuf_grow(struct sock *sk) 894 + void tcp_rcvbuf_grow(struct sock *sk, u32 newval) 895 895 { 896 896 const struct net *net = sock_net(sk); 897 897 struct tcp_sock *tp = tcp_sk(sk); 898 - int rcvwin, rcvbuf, cap; 898 + u32 rcvwin, rcvbuf, cap, oldval; 899 + u64 grow; 900 + 901 + oldval = tp->rcvq_space.space; 902 + tp->rcvq_space.space = newval; 899 903 900 904 if (!READ_ONCE(net->ipv4.sysctl_tcp_moderate_rcvbuf) || 901 905 (sk->sk_userlocks & SOCK_RCVBUF_LOCK)) 902 906 return; 903 907 908 + /* DRS is always one RTT late. */ 909 + rcvwin = newval << 1; 910 + 904 911 /* slow start: allow the sender to double its rate. */ 905 - rcvwin = tp->rcvq_space.space << 1; 912 + grow = (u64)rcvwin * (newval - oldval); 913 + do_div(grow, oldval); 914 + rcvwin += grow << 1; 906 915 907 916 if (!RB_EMPTY_ROOT(&tp->out_of_order_queue)) 908 917 rcvwin += TCP_SKB_CB(tp->ooo_last_skb)->end_seq - tp->rcv_nxt; ··· 952 943 953 944 trace_tcp_rcvbuf_grow(sk, time); 954 945 955 - tp->rcvq_space.space = copied; 956 - 957 - tcp_rcvbuf_grow(sk); 946 + tcp_rcvbuf_grow(sk, copied); 958 947 959 948 new_measure: 960 949 tp->rcvq_space.seq = tp->copied_seq; ··· 5277 5270 } 5278 5271 /* do not grow rcvbuf for not-yet-accepted or orphaned sockets. */ 5279 5272 if (sk->sk_socket) 5280 - tcp_rcvbuf_grow(sk); 5273 + tcp_rcvbuf_grow(sk, tp->rcvq_space.space); 5281 5274 } 5282 5275 5283 5276 static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
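The reworked tcp_rcvbuf_grow() keeps the slow-start doubling term but also scales the window by the measured growth between two DRS samples, since the measurement is one RTT behind the sender. A standalone sketch of just that arithmetic (out-of-order and ssthresh handling omitted):

#include <stdint.h>
#include <stdio.h>

static uint32_t rcvwin_for(uint32_t oldval, uint32_t newval)
{
	uint32_t rcvwin = newval << 1;	/* DRS is always one RTT late */
	uint64_t grow = (uint64_t)rcvwin * (newval - oldval);

	grow /= oldval;			/* the kernel uses do_div() here */
	return rcvwin + ((uint32_t)grow << 1);
}

int main(void)
{
	/* bytes copied per RTT doubled: 64 KiB -> 128 KiB */
	printf("%u\n", rcvwin_for(64 << 10, 128 << 10));	/* 786432 */
	return 0;
}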
+3
net/mac80211/cfg.c
··· 1876 1876 link_conf->nontransmitted = false; 1877 1877 link_conf->ema_ap = false; 1878 1878 link_conf->bssid_indicator = 0; 1879 + link_conf->fils_discovery.min_interval = 0; 1880 + link_conf->fils_discovery.max_interval = 0; 1881 + link_conf->unsol_bcast_probe_resp_interval = 0; 1879 1882 1880 1883 __sta_info_flush(sdata, true, link_id, NULL); 1881 1884
+1 -1
net/mac80211/chan.c
··· 1290 1290 &link->csa.finalize_work); 1291 1291 break; 1292 1292 case NL80211_IFTYPE_STATION: 1293 - wiphy_delayed_work_queue(sdata->local->hw.wiphy, 1293 + wiphy_hrtimer_work_queue(sdata->local->hw.wiphy, 1294 1294 &link->u.mgd.csa.switch_work, 0); 1295 1295 break; 1296 1296 case NL80211_IFTYPE_UNSPECIFIED:
+4 -4
net/mac80211/ieee80211_i.h
··· 612 612 u8 *assoc_req_ies; 613 613 size_t assoc_req_ies_len; 614 614 615 - struct wiphy_delayed_work ml_reconf_work; 615 + struct wiphy_hrtimer_work ml_reconf_work; 616 616 u16 removed_links; 617 617 618 618 /* TID-to-link mapping support */ 619 - struct wiphy_delayed_work ttlm_work; 619 + struct wiphy_hrtimer_work ttlm_work; 620 620 struct ieee80211_adv_ttlm_info ttlm_info; 621 621 struct wiphy_work teardown_ttlm_work; 622 622 ··· 1017 1017 bool operating_11g_mode; 1018 1018 1019 1019 struct { 1020 - struct wiphy_delayed_work switch_work; 1020 + struct wiphy_hrtimer_work switch_work; 1021 1021 struct cfg80211_chan_def ap_chandef; 1022 1022 struct ieee80211_parsed_tpe tpe; 1023 - unsigned long time; 1023 + ktime_t time; 1024 1024 bool waiting_bcn; 1025 1025 bool ignored_same_chan; 1026 1026 bool blocked_tx;
+8 -3
net/mac80211/key.c
··· 508 508 ret = ieee80211_key_enable_hw_accel(new); 509 509 } 510 510 } else { 511 - if (!new->local->wowlan) 511 + if (!new->local->wowlan) { 512 512 ret = ieee80211_key_enable_hw_accel(new); 513 - else if (link_id < 0 || !sdata->vif.active_links || 514 - BIT(link_id) & sdata->vif.active_links) 513 + } else if (link_id < 0 || !sdata->vif.active_links || 514 + BIT(link_id) & sdata->vif.active_links) { 515 515 new->flags |= KEY_FLAG_UPLOADED_TO_HARDWARE; 516 + if (!(new->conf.flags & (IEEE80211_KEY_FLAG_GENERATE_MMIC | 517 + IEEE80211_KEY_FLAG_PUT_MIC_SPACE | 518 + IEEE80211_KEY_FLAG_RESERVE_TAILROOM))) 519 + decrease_tailroom_need_count(sdata, 1); 520 + } 516 521 } 517 522 518 523 if (ret)
+2 -2
net/mac80211/link.c
··· 472 472 * from there. 473 473 */ 474 474 if (link->conf->csa_active) 475 - wiphy_delayed_work_queue(local->hw.wiphy, 475 + wiphy_hrtimer_work_queue(local->hw.wiphy, 476 476 &link->u.mgd.csa.switch_work, 477 477 link->u.mgd.csa.time - 478 - jiffies); 478 + ktime_get_boottime()); 479 479 } 480 480 481 481 for_each_set_bit(link_id, &add, IEEE80211_MLD_MAX_NUM_LINKS) {
+26 -26
net/mac80211/mlme.c
··· 45 45 #define IEEE80211_ASSOC_TIMEOUT_SHORT (HZ / 10) 46 46 #define IEEE80211_ASSOC_MAX_TRIES 3 47 47 48 - #define IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS msecs_to_jiffies(100) 48 + #define IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS (100 * USEC_PER_MSEC) 49 49 #define IEEE80211_ADV_TTLM_ST_UNDERFLOW 0xff00 50 50 51 51 #define IEEE80211_NEG_TTLM_REQ_TIMEOUT (HZ / 5) ··· 2594 2594 return; 2595 2595 } 2596 2596 2597 - wiphy_delayed_work_queue(sdata->local->hw.wiphy, 2597 + wiphy_hrtimer_work_queue(sdata->local->hw.wiphy, 2598 2598 &link->u.mgd.csa.switch_work, 0); 2599 2599 } 2600 2600 ··· 2753 2753 .timestamp = timestamp, 2754 2754 .device_timestamp = device_timestamp, 2755 2755 }; 2756 - unsigned long now; 2756 + u32 csa_time_tu; 2757 + ktime_t now; 2757 2758 int res; 2758 2759 2759 2760 lockdep_assert_wiphy(local->hw.wiphy); ··· 2984 2983 csa_ie.mode); 2985 2984 2986 2985 /* we may have to handle timeout for deactivated link in software */ 2987 - now = jiffies; 2988 - link->u.mgd.csa.time = now + 2989 - TU_TO_JIFFIES((max_t(int, csa_ie.count, 1) - 1) * 2990 - link->conf->beacon_int); 2986 + now = ktime_get_boottime(); 2987 + csa_time_tu = (max_t(int, csa_ie.count, 1) - 1) * link->conf->beacon_int; 2988 + link->u.mgd.csa.time = now + us_to_ktime(ieee80211_tu_to_usec(csa_time_tu)); 2991 2989 2992 2990 if (ieee80211_vif_link_active(&sdata->vif, link->link_id) && 2993 2991 local->ops->channel_switch) { ··· 3001 3001 } 3002 3002 3003 3003 /* channel switch handled in software */ 3004 - wiphy_delayed_work_queue(local->hw.wiphy, 3004 + wiphy_hrtimer_work_queue(local->hw.wiphy, 3005 3005 &link->u.mgd.csa.switch_work, 3006 3006 link->u.mgd.csa.time - now); 3007 3007 return; ··· 4242 4242 4243 4243 memset(&sdata->u.mgd.ttlm_info, 0, 4244 4244 sizeof(sdata->u.mgd.ttlm_info)); 4245 - wiphy_delayed_work_cancel(sdata->local->hw.wiphy, &ifmgd->ttlm_work); 4245 + wiphy_hrtimer_work_cancel(sdata->local->hw.wiphy, &ifmgd->ttlm_work); 4246 4246 4247 4247 memset(&sdata->vif.neg_ttlm, 0, sizeof(sdata->vif.neg_ttlm)); 4248 4248 wiphy_delayed_work_cancel(sdata->local->hw.wiphy, 4249 4249 &ifmgd->neg_ttlm_timeout_work); 4250 4250 4251 4251 sdata->u.mgd.removed_links = 0; 4252 - wiphy_delayed_work_cancel(sdata->local->hw.wiphy, 4252 + wiphy_hrtimer_work_cancel(sdata->local->hw.wiphy, 4253 4253 &sdata->u.mgd.ml_reconf_work); 4254 4254 4255 4255 wiphy_work_cancel(sdata->local->hw.wiphy, ··· 6876 6876 /* In case the removal was cancelled, abort it */ 6877 6877 if (sdata->u.mgd.removed_links) { 6878 6878 sdata->u.mgd.removed_links = 0; 6879 - wiphy_delayed_work_cancel(sdata->local->hw.wiphy, 6879 + wiphy_hrtimer_work_cancel(sdata->local->hw.wiphy, 6880 6880 &sdata->u.mgd.ml_reconf_work); 6881 6881 } 6882 6882 return; ··· 6906 6906 } 6907 6907 6908 6908 sdata->u.mgd.removed_links = removed_links; 6909 - wiphy_delayed_work_queue(sdata->local->hw.wiphy, 6909 + wiphy_hrtimer_work_queue(sdata->local->hw.wiphy, 6910 6910 &sdata->u.mgd.ml_reconf_work, 6911 - TU_TO_JIFFIES(delay)); 6911 + us_to_ktime(ieee80211_tu_to_usec(delay))); 6912 6912 } 6913 6913 6914 6914 static int ieee80211_ttlm_set_links(struct ieee80211_sub_if_data *sdata, ··· 7095 7095 /* if a planned TID-to-link mapping was cancelled - 7096 7096 * abort it 7097 7097 */ 7098 - wiphy_delayed_work_cancel(sdata->local->hw.wiphy, 7098 + wiphy_hrtimer_work_cancel(sdata->local->hw.wiphy, 7099 7099 &sdata->u.mgd.ttlm_work); 7100 7100 } else if (sdata->u.mgd.ttlm_info.active) { 7101 7101 /* if no TID-to-link element, set to default mapping in ··· 7130 7130 7131 7131 if 
(ttlm_info.switch_time) { 7132 7132 u16 beacon_ts_tu, st_tu, delay; 7133 - u32 delay_jiffies; 7133 + u64 delay_usec; 7134 7134 u64 mask; 7135 7135 7136 7136 /* The t2l map switch time is indicated with a partial ··· 7152 7152 if (delay > IEEE80211_ADV_TTLM_ST_UNDERFLOW) 7153 7153 return; 7154 7154 7155 - delay_jiffies = TU_TO_JIFFIES(delay); 7155 + delay_usec = ieee80211_tu_to_usec(delay); 7156 7156 7157 7157 /* Link switching can take time, so schedule it 7158 7158 * 100ms before to be ready on time 7159 7159 */ 7160 - if (delay_jiffies > IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS) 7161 - delay_jiffies -= 7160 + if (delay_usec > IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS) 7161 + delay_usec -= 7162 7162 IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS; 7163 7163 else 7164 - delay_jiffies = 0; 7164 + delay_usec = 0; 7165 7165 7166 7166 sdata->u.mgd.ttlm_info = ttlm_info; 7167 - wiphy_delayed_work_cancel(sdata->local->hw.wiphy, 7167 + wiphy_hrtimer_work_cancel(sdata->local->hw.wiphy, 7168 7168 &sdata->u.mgd.ttlm_work); 7169 - wiphy_delayed_work_queue(sdata->local->hw.wiphy, 7169 + wiphy_hrtimer_work_queue(sdata->local->hw.wiphy, 7170 7170 &sdata->u.mgd.ttlm_work, 7171 - delay_jiffies); 7171 + us_to_ktime(delay_usec)); 7172 7172 return; 7173 7173 } 7174 7174 } ··· 8793 8793 ieee80211_csa_connection_drop_work); 8794 8794 wiphy_delayed_work_init(&ifmgd->tdls_peer_del_work, 8795 8795 ieee80211_tdls_peer_del_work); 8796 - wiphy_delayed_work_init(&ifmgd->ml_reconf_work, 8796 + wiphy_hrtimer_work_init(&ifmgd->ml_reconf_work, 8797 8797 ieee80211_ml_reconf_work); 8798 8798 wiphy_delayed_work_init(&ifmgd->reconf.wk, 8799 8799 ieee80211_ml_sta_reconf_timeout); ··· 8802 8802 timer_setup(&ifmgd->conn_mon_timer, ieee80211_sta_conn_mon_timer, 0); 8803 8803 wiphy_delayed_work_init(&ifmgd->tx_tspec_wk, 8804 8804 ieee80211_sta_handle_tspec_ac_params_wk); 8805 - wiphy_delayed_work_init(&ifmgd->ttlm_work, 8805 + wiphy_hrtimer_work_init(&ifmgd->ttlm_work, 8806 8806 ieee80211_tid_to_link_map_work); 8807 8807 wiphy_delayed_work_init(&ifmgd->neg_ttlm_timeout_work, 8808 8808 ieee80211_neg_ttlm_timeout_work); ··· 8849 8849 else 8850 8850 link->u.mgd.req_smps = IEEE80211_SMPS_OFF; 8851 8851 8852 - wiphy_delayed_work_init(&link->u.mgd.csa.switch_work, 8852 + wiphy_hrtimer_work_init(&link->u.mgd.csa.switch_work, 8853 8853 ieee80211_csa_switch_work); 8854 8854 8855 8855 ieee80211_clear_tpe(&link->conf->tpe); ··· 10064 10064 &link->u.mgd.request_smps_work); 10065 10065 wiphy_work_cancel(link->sdata->local->hw.wiphy, 10066 10066 &link->u.mgd.recalc_smps); 10067 - wiphy_delayed_work_cancel(link->sdata->local->hw.wiphy, 10067 + wiphy_hrtimer_work_cancel(link->sdata->local->hw.wiphy, 10068 10068 &link->u.mgd.csa.switch_work); 10069 10069 } 10070 10070
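The mlme.c hunks move CSA and TTLM scheduling from jiffies-based delayed works to boottime hrtimer works. Beacon arithmetic is done in TUs (1 TU = 1024 us), and at low HZ the jiffies rounding alone can swallow a visible slice of a short countdown. A quick sketch of the granularity gap (HZ assumed):

#include <stdio.h>

#define HZ 100	/* assumed low-resolution tick */

static unsigned long ieee80211_tu_to_usec(unsigned long tu)
{
	return 1024 * tu;	/* 1 TU = 1024 microseconds */
}

int main(void)
{
	unsigned long count = 5, beacon_int = 100;	/* both in TUs */
	unsigned long us = ieee80211_tu_to_usec((count - 1) * beacon_int);
	unsigned long jif = us / (1000000 / HZ);	/* truncated to 10 ms */

	printf("target %lu us, jiffies timer %lu us, error %lu us\n",
	       us, jif * (1000000 / HZ), us - jif * (1000000 / HZ));
	return 0;
}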
+1
net/mptcp/mib.c
··· 85 85 SNMP_MIB_ITEM("DssFallback", MPTCP_MIB_DSSFALLBACK), 86 86 SNMP_MIB_ITEM("SimultConnectFallback", MPTCP_MIB_SIMULTCONNFALLBACK), 87 87 SNMP_MIB_ITEM("FallbackFailed", MPTCP_MIB_FALLBACKFAILED), 88 + SNMP_MIB_ITEM("WinProbe", MPTCP_MIB_WINPROBE), 88 89 }; 89 90 90 91 /* mptcp_mib_alloc - allocate percpu mib counters
+1
net/mptcp/mib.h
··· 88 88 MPTCP_MIB_DSSFALLBACK, /* Bad or missing DSS */ 89 89 MPTCP_MIB_SIMULTCONNFALLBACK, /* Simultaneous connect */ 90 90 MPTCP_MIB_FALLBACKFAILED, /* Can't fallback due to msk status */ 91 + MPTCP_MIB_WINPROBE, /* MPTCP-level zero window probe */ 91 92 __MPTCP_MIB_MAX 92 93 }; 93 94
+53 -30
net/mptcp/protocol.c
··· 194 194 * - mptcp does not maintain a msk-level window clamp 195 195 * - returns true when the receive buffer is actually updated 196 196 */ 197 - static bool mptcp_rcvbuf_grow(struct sock *sk) 197 + static bool mptcp_rcvbuf_grow(struct sock *sk, u32 newval) 198 198 { 199 199 struct mptcp_sock *msk = mptcp_sk(sk); 200 200 const struct net *net = sock_net(sk); 201 - int rcvwin, rcvbuf, cap; 201 + u32 rcvwin, rcvbuf, cap, oldval; 202 + u64 grow; 202 203 204 + oldval = msk->rcvq_space.space; 205 + msk->rcvq_space.space = newval; 203 206 if (!READ_ONCE(net->ipv4.sysctl_tcp_moderate_rcvbuf) || 204 207 (sk->sk_userlocks & SOCK_RCVBUF_LOCK)) 205 208 return false; 206 209 207 - rcvwin = msk->rcvq_space.space << 1; 210 + /* DRS is always one RTT late. */ 211 + rcvwin = newval << 1; 212 + 213 + /* slow start: allow the sender to double its rate. */ 214 + grow = (u64)rcvwin * (newval - oldval); 215 + do_div(grow, oldval); 216 + rcvwin += grow << 1; 208 217 209 218 if (!RB_EMPTY_ROOT(&msk->out_of_order_queue)) 210 219 rcvwin += MPTCP_SKB_CB(msk->ooo_last_skb)->end_seq - msk->ack_seq; ··· 343 334 skb_set_owner_r(skb, sk); 344 335 /* do not grow rcvbuf for not-yet-accepted or orphaned sockets. */ 345 336 if (sk->sk_socket) 346 - mptcp_rcvbuf_grow(sk); 337 + mptcp_rcvbuf_grow(sk, msk->rcvq_space.space); 347 338 } 348 339 349 340 static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset, ··· 1007 998 if (WARN_ON_ONCE(!msk->recovery)) 1008 999 break; 1009 1000 1010 - WRITE_ONCE(msk->first_pending, mptcp_send_next(sk)); 1001 + msk->first_pending = mptcp_send_next(sk); 1011 1002 } 1012 1003 1013 1004 dfrag_clear(sk, dfrag); ··· 1299 1290 if (copy == 0) { 1300 1291 u64 snd_una = READ_ONCE(msk->snd_una); 1301 1292 1302 - if (snd_una != msk->snd_nxt || tcp_write_queue_tail(ssk)) { 1293 + /* No need for zero probe if there are any data pending 1294 + * either at the msk or ssk level; skb is the current write 1295 + * queue tail and can be empty at this point. 
1296 + */ 1297 + if (snd_una != msk->snd_nxt || skb->len || 1298 + skb != tcp_send_head(ssk)) { 1303 1299 tcp_remove_empty_skb(ssk); 1304 1300 return 0; 1305 1301 } ··· 1355 1341 mpext->dsn64); 1356 1342 1357 1343 if (zero_window_probe) { 1344 + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_WINPROBE); 1358 1345 mptcp_subflow_ctx(ssk)->rel_write_seq += copy; 1359 1346 mpext->frozen = 1; 1360 1347 if (READ_ONCE(msk->csum_enabled)) ··· 1558 1543 1559 1544 mptcp_update_post_push(msk, dfrag, ret); 1560 1545 } 1561 - WRITE_ONCE(msk->first_pending, mptcp_send_next(sk)); 1546 + msk->first_pending = mptcp_send_next(sk); 1562 1547 1563 1548 if (msk->snd_burst <= 0 || 1564 1549 !sk_stream_memory_free(ssk) || ··· 1918 1903 get_page(dfrag->page); 1919 1904 list_add_tail(&dfrag->list, &msk->rtx_queue); 1920 1905 if (!msk->first_pending) 1921 - WRITE_ONCE(msk->first_pending, dfrag); 1906 + msk->first_pending = dfrag; 1922 1907 } 1923 1908 pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d\n", msk, 1924 1909 dfrag->data_seq, dfrag->data_len, dfrag->already_sent, ··· 1951 1936 1952 1937 static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied); 1953 1938 1954 - static int __mptcp_recvmsg_mskq(struct sock *sk, 1955 - struct msghdr *msg, 1956 - size_t len, int flags, 1939 + static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg, 1940 + size_t len, int flags, int copied_total, 1957 1941 struct scm_timestamping_internal *tss, 1958 1942 int *cmsg_flags) 1959 1943 { 1960 1944 struct mptcp_sock *msk = mptcp_sk(sk); 1961 1945 struct sk_buff *skb, *tmp; 1946 + int total_data_len = 0; 1962 1947 int copied = 0; 1963 1948 1964 1949 skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp) { 1965 - u32 offset = MPTCP_SKB_CB(skb)->offset; 1950 + u32 delta, offset = MPTCP_SKB_CB(skb)->offset; 1966 1951 u32 data_len = skb->len - offset; 1967 - u32 count = min_t(size_t, len - copied, data_len); 1952 + u32 count; 1968 1953 int err; 1969 1954 1955 + if (flags & MSG_PEEK) { 1956 + /* skip already peeked skbs */ 1957 + if (total_data_len + data_len <= copied_total) { 1958 + total_data_len += data_len; 1959 + continue; 1960 + } 1961 + 1962 + /* skip the already peeked data in the current skb */ 1963 + delta = copied_total - total_data_len; 1964 + offset += delta; 1965 + data_len -= delta; 1966 + } 1967 + 1968 + count = min_t(size_t, len - copied, data_len); 1970 1969 if (!(flags & MSG_TRUNC)) { 1971 1970 err = skb_copy_datagram_msg(skb, offset, msg, count); 1972 1971 if (unlikely(err < 0)) { ··· 1997 1968 1998 1969 copied += count; 1999 1970 2000 - if (count < data_len) { 2001 - if (!(flags & MSG_PEEK)) { 1971 + if (!(flags & MSG_PEEK)) { 1972 + msk->bytes_consumed += count; 1973 + if (count < data_len) { 2002 1974 MPTCP_SKB_CB(skb)->offset += count; 2003 1975 MPTCP_SKB_CB(skb)->map_seq += count; 2004 - msk->bytes_consumed += count; 1976 + break; 2005 1977 } 2006 - break; 2007 - } 2008 1978 2009 - if (!(flags & MSG_PEEK)) { 2010 1979 /* avoid the indirect call, we know the destructor is sock_rfree */ 2011 1980 skb->destructor = NULL; 2012 1981 skb->sk = NULL; ··· 2012 1985 sk_mem_uncharge(sk, skb->truesize); 2013 1986 __skb_unlink(skb, &sk->sk_receive_queue); 2014 1987 skb_attempt_defer_free(skb); 2015 - msk->bytes_consumed += count; 2016 1988 } 2017 1989 2018 1990 if (copied >= len) ··· 2075 2049 if (msk->rcvq_space.copied <= msk->rcvq_space.space) 2076 2050 goto new_measure; 2077 2051 2078 - msk->rcvq_space.space = msk->rcvq_space.copied; 2079 - if (mptcp_rcvbuf_grow(sk)) { 2080 - 2052 + if 
(mptcp_rcvbuf_grow(sk, msk->rcvq_space.copied)) { 2081 2053 /* Make subflows follow along. If we do not do this, we 2082 2054 * get drops at subflow level if skbs can't be moved to 2083 2055 * the mptcp rx queue fast enough (announced rcv_win can ··· 2087 2063 2088 2064 ssk = mptcp_subflow_tcp_sock(subflow); 2089 2065 slow = lock_sock_fast(ssk); 2090 - tcp_sk(ssk)->rcvq_space.space = msk->rcvq_space.copied; 2091 - tcp_rcvbuf_grow(ssk); 2066 + /* subflows can be added before tcp_init_transfer() */ 2067 + if (tcp_sk(ssk)->rcvq_space.space) 2068 + tcp_rcvbuf_grow(ssk, msk->rcvq_space.copied); 2092 2069 unlock_sock_fast(ssk, slow); 2093 2070 } 2094 2071 } ··· 2208 2183 while (copied < len) { 2209 2184 int err, bytes_read; 2210 2185 2211 - bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags, &tss, &cmsg_flags); 2186 + bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags, 2187 + copied, &tss, &cmsg_flags); 2212 2188 if (unlikely(bytes_read < 0)) { 2213 2189 if (!copied) 2214 2190 copied = bytes_read; ··· 2900 2874 struct mptcp_sock *msk = mptcp_sk(sk); 2901 2875 struct mptcp_data_frag *dtmp, *dfrag; 2902 2876 2903 - WRITE_ONCE(msk->first_pending, NULL); 2877 + msk->first_pending = NULL; 2904 2878 list_for_each_entry_safe(dfrag, dtmp, &msk->rtx_queue, list) 2905 2879 dfrag_clear(sk, dfrag); 2906 2880 } ··· 3440 3414 3441 3415 void __mptcp_check_push(struct sock *sk, struct sock *ssk) 3442 3416 { 3443 - if (!mptcp_send_head(sk)) 3444 - return; 3445 - 3446 3417 if (!sock_owned_by_user(sk)) 3447 3418 __mptcp_subflow_push_pending(sk, ssk, false); 3448 3419 else
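Note on the mptcp_rcvbuf_grow() change above: the window now mirrors TCP receiver autotuning. It starts at twice the newly measured rcvq space (DRS runs one RTT behind), then adds a slow-start term proportional to the relative growth since the previous measurement. A minimal userspace sketch of just that arithmetic, with plain 64-bit division standing in for do_div() (names are illustrative, not kernel API):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Sketch of the window sizing in mptcp_rcvbuf_grow(): oldval/newval
	 * are successive rcvq_space measurements in bytes per RTT. The
	 * caller only invokes this when newval > oldval. */
	static uint32_t drs_rcvwin(uint32_t oldval, uint32_t newval)
	{
		uint32_t rcvwin = newval << 1;	/* DRS is one RTT late */
		uint64_t grow;

		/* slow start: allow the sender to double its rate */
		grow = (uint64_t)rcvwin * (newval - oldval);
		grow /= oldval;			/* do_div() in the kernel */
		return rcvwin + (uint32_t)(grow << 1);
	}

	int main(void)
	{
		/* 100 KB measured last RTT, 150 KB this RTT:
		 * 2*150000 + 2*(300000*50000/100000) = 600000 */
		assert(drs_rcvwin(100000, 150000) == 600000);
		printf("rcvwin = %u\n", drs_rcvwin(100000, 150000));
		return 0;
	}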
+1 -1
net/mptcp/protocol.h
··· 414 414 { 415 415 const struct mptcp_sock *msk = mptcp_sk(sk); 416 416 417 - return READ_ONCE(msk->first_pending); 417 + return msk->first_pending; 418 418 } 419 419 420 420 static inline struct mptcp_data_frag *mptcp_send_next(struct sock *sk)
+1 -1
net/netfilter/nft_connlimit.c
··· 48 48 return; 49 49 } 50 50 51 - count = priv->list->count; 51 + count = READ_ONCE(priv->list->count); 52 52 53 53 if ((count > priv->limit) ^ priv->invert) { 54 54 regs->verdict.code = NFT_BREAK;
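The nft_connlimit hunk annotates a lockless read: priv->list->count is updated concurrently by other contexts, so READ_ONCE() forces a single untorn load and documents the intentional data race. A runnable sketch of the idiom, using a simplified one-line form of the kernel's volatile-cast definition:

	#include <pthread.h>
	#include <stdio.h>

	/* Simplified forms of the kernel's READ_ONCE()/WRITE_ONCE(): the
	 * volatile cast prevents the compiler from tearing, caching, or
	 * refetching the access. */
	#define READ_ONCE(x)	 (*(const volatile __typeof__(x) *)&(x))
	#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

	static unsigned int count;	/* written and read by different threads */

	static void *writer(void *arg)
	{
		for (int i = 0; i < 1000000; i++)
			WRITE_ONCE(count, i);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, writer, NULL);
		/* lockless reader side, as in nft_connlimit_eval() */
		printf("saw %u\n", READ_ONCE(count));
		pthread_join(t, NULL);
		return 0;	/* build with -pthread */
	}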
+27 -3
net/netfilter/nft_ct.c
··· 22 22 #include <net/netfilter/nf_conntrack_timeout.h> 23 23 #include <net/netfilter/nf_conntrack_l4proto.h> 24 24 #include <net/netfilter/nf_conntrack_expect.h> 25 + #include <net/netfilter/nf_conntrack_seqadj.h> 25 26 26 27 struct nft_ct_helper_obj { 27 28 struct nf_conntrack_helper *helper4; ··· 380 379 } 381 380 #endif 382 381 382 + static void __nft_ct_get_destroy(const struct nft_ctx *ctx, struct nft_ct *priv) 383 + { 384 + #ifdef CONFIG_NF_CONNTRACK_LABELS 385 + if (priv->key == NFT_CT_LABELS) 386 + nf_connlabels_put(ctx->net); 387 + #endif 388 + } 389 + 383 390 static int nft_ct_get_init(const struct nft_ctx *ctx, 384 391 const struct nft_expr *expr, 385 392 const struct nlattr * const tb[]) ··· 422 413 if (tb[NFTA_CT_DIRECTION] != NULL) 423 414 return -EINVAL; 424 415 len = NF_CT_LABELS_MAX_SIZE; 416 + 417 + err = nf_connlabels_get(ctx->net, (len * BITS_PER_BYTE) - 1); 418 + if (err) 419 + return err; 425 420 break; 426 421 #endif 427 422 case NFT_CT_HELPER: ··· 507 494 case IP_CT_DIR_REPLY: 508 495 break; 509 496 default: 510 - return -EINVAL; 497 + err = -EINVAL; 498 + goto err; 511 499 } 512 500 } 513 501 ··· 516 502 err = nft_parse_register_store(ctx, tb[NFTA_CT_DREG], &priv->dreg, NULL, 517 503 NFT_DATA_VALUE, len); 518 504 if (err < 0) 519 - return err; 505 + goto err; 520 506 521 507 err = nf_ct_netns_get(ctx->net, ctx->family); 522 508 if (err < 0) 523 - return err; 509 + goto err; 524 510 525 511 if (priv->key == NFT_CT_BYTES || 526 512 priv->key == NFT_CT_PKTS || ··· 528 514 nf_ct_set_acct(ctx->net, true); 529 515 530 516 return 0; 517 + err: 518 + __nft_ct_get_destroy(ctx, priv); 519 + return err; 531 520 } 532 521 533 522 static void __nft_ct_set_destroy(const struct nft_ctx *ctx, struct nft_ct *priv) ··· 643 626 static void nft_ct_get_destroy(const struct nft_ctx *ctx, 644 627 const struct nft_expr *expr) 645 628 { 629 + struct nft_ct *priv = nft_expr_priv(expr); 630 + 631 + __nft_ct_get_destroy(ctx, priv); 646 632 nf_ct_netns_put(ctx->net, ctx->family); 647 633 } 648 634 ··· 1193 1173 if (help) { 1194 1174 rcu_assign_pointer(help->helper, to_assign); 1195 1175 set_bit(IPS_HELPER_BIT, &ct->status); 1176 + 1177 + if ((ct->status & IPS_NAT_MASK) && !nfct_seqadj(ct)) 1178 + if (!nfct_seqadj_ext_add(ct)) 1179 + regs->verdict.code = NF_DROP; 1196 1180 } 1197 1181 } 1198 1182
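The nft_ct changes apply the usual kernel error-unwind discipline: once nf_connlabels_get() has succeeded early in init, every later failure must release it, so the function gains an err label funneling through __nft_ct_get_destroy(). A generic, runnable sketch of the pattern (the resource names here are stand-ins):

	#include <errno.h>
	#include <stdio.h>

	/* Stand-ins for a get/put resource pair; in nft_ct_get_init() these
	 * are nf_connlabels_get()/nf_connlabels_put(). */
	static int labels_get(void)  { puts("labels: get"); return 0; }
	static void labels_put(void) { puts("labels: put"); }

	static int setup(int fail_late)
	{
		int err;

		err = labels_get();
		if (err)
			return err;	/* nothing acquired yet */

		if (fail_late) {	/* e.g. register parsing fails */
			err = -EINVAL;
			goto err;	/* must undo labels_get() */
		}
		return 0;

	err:
		labels_put();
		return err;
	}

	int main(void)
	{
		printf("ok path:  %d\n", setup(0));
		printf("err path: %d\n", setup(1));
		return 0;
	}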
+17 -6
net/sctp/diag.c
··· 73 73 struct nlattr *attr; 74 74 void *info = NULL; 75 75 76 + rcu_read_lock(); 76 77 list_for_each_entry_rcu(laddr, address_list, list) 77 78 addrcnt++; 79 + rcu_read_unlock(); 78 80 79 81 attr = nla_reserve(skb, INET_DIAG_LOCALS, addrlen * addrcnt); 80 82 if (!attr) 81 83 return -EMSGSIZE; 82 84 83 85 info = nla_data(attr); 86 + rcu_read_lock(); 84 87 list_for_each_entry_rcu(laddr, address_list, list) { 85 88 memcpy(info, &laddr->a, sizeof(laddr->a)); 86 89 memset(info + sizeof(laddr->a), 0, addrlen - sizeof(laddr->a)); 87 90 info += addrlen; 91 + 92 + if (!--addrcnt) 93 + break; 88 94 } 95 + rcu_read_unlock(); 89 96 90 97 return 0; 91 98 } ··· 230 223 bool net_admin; 231 224 }; 232 225 233 - static size_t inet_assoc_attr_size(struct sctp_association *asoc) 226 + static size_t inet_assoc_attr_size(struct sock *sk, 227 + struct sctp_association *asoc) 234 228 { 235 229 int addrlen = sizeof(struct sockaddr_storage); 236 230 int addrcnt = 0; 237 231 struct sctp_sockaddr_entry *laddr; 238 232 239 233 list_for_each_entry_rcu(laddr, &asoc->base.bind_addr.address_list, 240 - list) 234 + list, lockdep_sock_is_held(sk)) 241 235 addrcnt++; 242 236 243 237 return nla_total_size(sizeof(struct sctp_info)) ··· 264 256 if (err) 265 257 return err; 266 258 267 - rep = nlmsg_new(inet_assoc_attr_size(assoc), GFP_KERNEL); 268 - if (!rep) 269 - return -ENOMEM; 270 - 271 259 lock_sock(sk); 260 + 261 + rep = nlmsg_new(inet_assoc_attr_size(sk, assoc), GFP_KERNEL); 262 + if (!rep) { 263 + release_sock(sk); 264 + return -ENOMEM; 265 + } 266 + 272 267 if (ep != assoc->ep) { 273 268 err = -EAGAIN; 274 269 goto out;
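In the sctp/diag.c hunk the address list is walked twice, and nothing prevents entries from being added between the sizing pass and the copy pass; bounding the copy with the previously counted addrcnt keeps all writes inside the nla_reserve()'d area. A self-contained sketch of the two-pass bound (plain list, no RCU):

	#include <stdio.h>

	struct entry { int val; struct entry *next; };

	/* Copy at most the number of entries counted in a first pass, even
	 * if the list grew in between -- mirrors "if (!--addrcnt) break;". */
	static int copy_bounded(const struct entry *head, int *out, int reserved)
	{
		int n = 0;

		for (const struct entry *e = head; e; e = e->next) {
			out[n++] = e->val;
			if (!--reserved)
				break;	/* never exceed the reservation */
		}
		return n;
	}

	int main(void)
	{
		/* first pass counted 2 entries; a third appeared before the copy */
		struct entry c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
		int buf[2];

		printf("copied %d entries (buffer holds 2)\n",
		       copy_bounded(&a, buf, 2));
		return 0;
	}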
+1 -1
net/sctp/input.c
··· 190 190 goto discard_release; 191 191 nf_reset_ct(skb); 192 192 193 - if (sk_filter(sk, skb)) 193 + if (sk_filter(sk, skb) || skb->len < sizeof(struct sctp_chunkhdr)) 194 194 goto discard_release; 195 195 196 196 /* Create an SCTP packet structure. */
+6 -15
net/sctp/transport.c
··· 37 37 /* 1st Level Abstractions. */ 38 38 39 39 /* Initialize a new transport from provided memory. */ 40 - static struct sctp_transport *sctp_transport_init(struct net *net, 41 - struct sctp_transport *peer, 42 - const union sctp_addr *addr, 43 - gfp_t gfp) 40 + static void sctp_transport_init(struct net *net, 41 + struct sctp_transport *peer, 42 + const union sctp_addr *addr, 43 + gfp_t gfp) 44 44 { 45 45 /* Copy in the address. */ 46 46 peer->af_specific = sctp_get_af_specific(addr->sa.sa_family); ··· 83 83 get_random_bytes(&peer->hb_nonce, sizeof(peer->hb_nonce)); 84 84 85 85 refcount_set(&peer->refcnt, 1); 86 - 87 - return peer; 88 86 } 89 87 90 88 /* Allocate and initialize a new transport. */ ··· 94 96 95 97 transport = kzalloc(sizeof(*transport), gfp); 96 98 if (!transport) 97 - goto fail; 99 + return NULL; 98 100 99 - if (!sctp_transport_init(net, transport, addr, gfp)) 100 - goto fail_init; 101 + sctp_transport_init(net, transport, addr, gfp); 101 102 102 103 SCTP_DBG_OBJCNT_INC(transport); 103 104 104 105 return transport; 105 - 106 - fail_init: 107 - kfree(transport); 108 - 109 - fail: 110 - return NULL; 111 106 } 112 107 113 108 /* This transport is no longer needed. Free up if possible, or
+3 -1
net/tls/tls_device.c
··· 723 723 /* shouldn't get to wraparound: 724 724 * too long in async stage, something bad happened 725 725 */ 726 - if (WARN_ON_ONCE(resync_async->rcd_delta == USHRT_MAX)) 726 + if (WARN_ON_ONCE(resync_async->rcd_delta == USHRT_MAX)) { 727 + tls_offload_rx_resync_async_request_cancel(resync_async); 727 728 return false; 729 + } 728 730 729 731 /* asynchronous stage: log all headers seq such that 730 732 * req_seq <= seq <= end_seq, and wait for real resync request
+56
net/wireless/core.c
··· 1787 1787 } 1788 1788 EXPORT_SYMBOL_GPL(wiphy_delayed_work_pending); 1789 1789 1790 + enum hrtimer_restart wiphy_hrtimer_work_timer(struct hrtimer *t) 1791 + { 1792 + struct wiphy_hrtimer_work *hrwork = 1793 + container_of(t, struct wiphy_hrtimer_work, timer); 1794 + 1795 + wiphy_work_queue(hrwork->wiphy, &hrwork->work); 1796 + 1797 + return HRTIMER_NORESTART; 1798 + } 1799 + EXPORT_SYMBOL_GPL(wiphy_hrtimer_work_timer); 1800 + 1801 + void wiphy_hrtimer_work_queue(struct wiphy *wiphy, 1802 + struct wiphy_hrtimer_work *hrwork, 1803 + ktime_t delay) 1804 + { 1805 + trace_wiphy_hrtimer_work_queue(wiphy, &hrwork->work, delay); 1806 + 1807 + if (!delay) { 1808 + hrtimer_cancel(&hrwork->timer); 1809 + wiphy_work_queue(wiphy, &hrwork->work); 1810 + return; 1811 + } 1812 + 1813 + hrwork->wiphy = wiphy; 1814 + hrtimer_start_range_ns(&hrwork->timer, delay, 1815 + 1000 * NSEC_PER_USEC, HRTIMER_MODE_REL); 1816 + } 1817 + EXPORT_SYMBOL_GPL(wiphy_hrtimer_work_queue); 1818 + 1819 + void wiphy_hrtimer_work_cancel(struct wiphy *wiphy, 1820 + struct wiphy_hrtimer_work *hrwork) 1821 + { 1822 + lockdep_assert_held(&wiphy->mtx); 1823 + 1824 + hrtimer_cancel(&hrwork->timer); 1825 + wiphy_work_cancel(wiphy, &hrwork->work); 1826 + } 1827 + EXPORT_SYMBOL_GPL(wiphy_hrtimer_work_cancel); 1828 + 1829 + void wiphy_hrtimer_work_flush(struct wiphy *wiphy, 1830 + struct wiphy_hrtimer_work *hrwork) 1831 + { 1832 + lockdep_assert_held(&wiphy->mtx); 1833 + 1834 + hrtimer_cancel(&hrwork->timer); 1835 + wiphy_work_flush(wiphy, &hrwork->work); 1836 + } 1837 + EXPORT_SYMBOL_GPL(wiphy_hrtimer_work_flush); 1838 + 1839 + bool wiphy_hrtimer_work_pending(struct wiphy *wiphy, 1840 + struct wiphy_hrtimer_work *hrwork) 1841 + { 1842 + return hrtimer_is_queued(&hrwork->timer); 1843 + } 1844 + EXPORT_SYMBOL_GPL(wiphy_hrtimer_work_pending); 1845 + 1790 1846 static int __init cfg80211_init(void) 1791 1847 { 1792 1848 int err;
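The new cfg80211 helpers pair an hrtimer with a wiphy_work: the timer callback, wiphy_hrtimer_work_timer(), only queues the work, so the handler still runs from the wiphy workqueue with wiphy->mtx held. A usage sketch follows; note this is kernel context only, the struct layout is inferred from the fields the functions above touch, and the explicit init step is an assumption (the real header may provide a dedicated init helper):

	/* Sketch only -- struct inferred from net/wireless/core.c above. */
	struct wiphy_hrtimer_work {
		struct wiphy_work work;
		struct wiphy *wiphy;
		struct hrtimer timer;
	};

	static void my_handler(struct wiphy *wiphy, struct wiphy_work *work)
	{
		/* runs from the wiphy workqueue, wiphy->mtx held */
	}

	static struct wiphy_hrtimer_work my_work;

	static void my_setup_and_arm(struct wiphy *wiphy)
	{
		/* assumed init: point the hrtimer at the exported callback */
		hrtimer_setup(&my_work.timer, wiphy_hrtimer_work_timer,
			      CLOCK_MONOTONIC, HRTIMER_MODE_REL);
		wiphy_work_init(&my_work.work, my_handler);

		/* fire in 20 ms; a zero delay queues the work immediately */
		wiphy_hrtimer_work_queue(wiphy, &my_work, ms_to_ktime(20));
	}

Cancel and flush must be called under the wiphy mutex, matching the lockdep assertions in the implementation above.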
+1 -2
net/wireless/nl80211.c
··· 4136 4136 rdev->wiphy.txq_quantum = old_txq_quantum; 4137 4137 } 4138 4138 4139 - if (old_rts_threshold) 4140 - kfree(old_radio_rts_threshold); 4139 + kfree(old_radio_rts_threshold); 4141 4140 return result; 4142 4141 } 4143 4142
+21
net/wireless/trace.h
··· 304 304 __entry->delay) 305 305 ); 306 306 307 + TRACE_EVENT(wiphy_hrtimer_work_queue, 308 + TP_PROTO(struct wiphy *wiphy, struct wiphy_work *work, 309 + ktime_t delay), 310 + TP_ARGS(wiphy, work, delay), 311 + TP_STRUCT__entry( 312 + WIPHY_ENTRY 313 + __field(void *, instance) 314 + __field(void *, func) 315 + __field(ktime_t, delay) 316 + ), 317 + TP_fast_assign( 318 + WIPHY_ASSIGN; 319 + __entry->instance = work; 320 + __entry->func = work->func; 321 + __entry->delay = delay; 322 + ), 323 + TP_printk(WIPHY_PR_FMT " instance=%p func=%pS delay=%llu", 324 + WIPHY_PR_ARG, __entry->instance, __entry->func, 325 + __entry->delay) 326 + ); 327 + 307 328 TRACE_EVENT(wiphy_work_worker_start, 308 329 TP_PROTO(struct wiphy *wiphy), 309 330 TP_ARGS(wiphy),
+14 -1
rust/Makefile
··· 69 69 # the time being (https://github.com/rust-lang/rust/issues/144521). 70 70 rustdoc_modifiers_workaround := $(if $(call rustc-min-version,108800),-Cunsafe-allow-abi-mismatch=fixed-x18) 71 71 72 + # Similarly, for doctests (https://github.com/rust-lang/rust/issues/146465). 73 + doctests_modifiers_workaround := $(rustdoc_modifiers_workaround)$(if $(call rustc-min-version,109100),$(comma)sanitizer) 74 + 72 75 # `rustc` recognizes `--remap-path-prefix` since 1.26.0, but `rustdoc` only 73 76 # since Rust 1.81.0. Moreover, `rustdoc` ICEs on out-of-tree builds since Rust 74 77 # 1.82.0 (https://github.com/rust-lang/rust/issues/138520). Thus workaround both ··· 130 127 rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs rustdoc-clean FORCE 131 128 +$(call if_changed,rustdoc) 132 129 130 + # Even if `rustdoc` targets are not kernel objects, they should still be 131 + # treated as such so that we pass the same flags. Otherwise, for instance, 132 + # `rustdoc` will complain about missing sanitizer flags causing an ABI mismatch. 133 + rustdoc-compiler_builtins: private is-kernel-object := y 133 134 rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE 134 135 +$(call if_changed,rustdoc) 135 136 137 + rustdoc-ffi: private is-kernel-object := y 136 138 rustdoc-ffi: $(src)/ffi.rs rustdoc-core FORCE 137 139 +$(call if_changed,rustdoc) 138 140 ··· 155 147 rustdoc-macros FORCE 156 148 +$(call if_changed,rustdoc) 157 149 150 + rustdoc-kernel: private is-kernel-object := y 158 151 rustdoc-kernel: private rustc_target_flags = --extern ffi --extern pin_init \ 159 152 --extern build_error --extern macros \ 160 153 --extern bindings --extern uapi ··· 239 230 --extern bindings --extern uapi \ 240 231 --no-run --crate-name kernel -Zunstable-options \ 241 232 --sysroot=/dev/null \ 242 - $(rustdoc_modifiers_workaround) \ 233 + $(doctests_modifiers_workaround) \ 243 234 --test-builder $(objtree)/scripts/rustdoc_test_builder \ 244 235 $< $(rustdoc_test_kernel_quiet); \ 245 236 $(objtree)/scripts/rustdoc_test_gen ··· 531 522 $(obj)/$(libpin_init_internal_name) $(obj)/$(libmacros_name) FORCE 532 523 +$(call if_changed_rule,rustc_library) 533 524 525 + # Even if normally `build_error` is not a kernel object, it should still be 526 + # treated as such so that we pass the same flags. Otherwise, for instance, 527 + # `rustc` will complain about missing sanitizer flags causing an ABI mismatch. 528 + $(obj)/build_error.o: private is-kernel-object := y 534 529 $(obj)/build_error.o: private skip_gendwarfksyms = 1 535 530 $(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE 536 531 +$(call if_changed_rule,rustc_library)
+1 -1
rust/kernel/devres.rs
··· 103 103 /// 104 104 /// # Invariants 105 105 /// 106 - /// [`Self::inner`] is guaranteed to be initialized and is always accessed read-only. 106 + /// `Self::inner` is guaranteed to be initialized and is always accessed read-only. 107 107 #[pin_data(PinnedDrop)] 108 108 pub struct Devres<T: Send> { 109 109 dev: ARef<Device>,
+1 -1
rust/kernel/sync/condvar.rs
··· 36 36 /// spuriously. 37 37 /// 38 38 /// Instances of [`CondVar`] need a lock class and to be pinned. The recommended way to create such 39 - /// instances is with the [`pin_init`](crate::pin_init!) and [`new_condvar`] macros. 39 + /// instances is with the [`pin_init`](pin_init::pin_init!) and [`new_condvar`] macros. 40 40 /// 41 41 /// # Examples 42 42 ///
+1 -1
scripts/Makefile.build
··· 167 167 endif 168 168 169 169 ifneq ($(KBUILD_EXTRA_WARN),) 170 - cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(KERNELDOC) -none $(KDOCFLAGS) \ 170 + cmd_checkdoc = PYTHONDONTWRITEBYTECODE=1 $(PYTHON3) $(KERNELDOC) -none $(KDOCFLAGS) \ 171 171 $(if $(findstring 2, $(KBUILD_EXTRA_WARN)), -Wall) \ 172 172 $< 173 173 endif
+14 -1
scripts/Makefile.vmlinux
··· 102 102 # modules.builtin.modinfo 103 103 # --------------------------------------------------------------------------- 104 104 105 + # .modinfo in vmlinux.unstripped is aligned to 8 bytes for compatibility with 106 + # tools that expect vmlinux to have sufficiently aligned sections but the 107 + # additional bytes used for padding .modinfo to satisfy this requirement break 108 + # certain versions of kmod with 109 + # 110 + # depmod: ERROR: kmod_builtin_iter_next: unexpected string without modname prefix 111 + # 112 + # Strip the trailing padding bytes after extracting .modinfo to comply with 113 + # what kmod expects to parse. 114 + quiet_cmd_modules_builtin_modinfo = GEN $@ 115 + cmd_modules_builtin_modinfo = $(cmd_objcopy); \ 116 + sed -i 's/\x00\+$$/\x00/g' $@ 117 + 105 118 OBJCOPYFLAGS_modules.builtin.modinfo := -j .modinfo -O binary 106 119 107 120 targets += modules.builtin.modinfo 108 121 modules.builtin.modinfo: vmlinux.unstripped FORCE 109 - $(call if_changed,objcopy) 122 + $(call if_changed,modules_builtin_modinfo) 110 123 111 124 # modules.builtin 112 125 # ---------------------------------------------------------------------------
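The sed expression in the new modules.builtin.modinfo rule collapses the run of trailing padding NUL bytes to a single terminator, so kmod's iterator never sees an empty record without a modname prefix. The same transformation expressed in C terms, runnable on a buffer (illustration of the byte-level effect, not of the build rule):

	#include <stdio.h>

	/* Equivalent of `sed 's/\x00\+$/\x00/g'` on the extracted .modinfo
	 * blob: trim trailing alignment padding down to one NUL. Interior
	 * NUL separators between key=value records are untouched. */
	static size_t trim_modinfo_padding(const char *buf, size_t len)
	{
		while (len > 1 && buf[len - 1] == '\0' && buf[len - 2] == '\0')
			len--;
		return len;
	}

	int main(void)
	{
		/* one record followed by eight NULs (7 explicit + implicit) */
		char blob[] = "vermagic=6.19.0\0\0\0\0\0\0\0";
		size_t padded = sizeof(blob);

		printf("padded=%zu trimmed=%zu\n",
		       padded, trim_modinfo_padding(blob, padded));
		return 0;
	}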
+3
scripts/kconfig/mconf.c
··· 12 12 #include <errno.h> 13 13 #include <fcntl.h> 14 14 #include <limits.h> 15 + #include <locale.h> 15 16 #include <stdarg.h> 16 17 #include <stdlib.h> 17 18 #include <string.h> ··· 931 930 int res; 932 931 933 932 signal(SIGINT, sig_handler); 933 + 934 + setlocale(LC_ALL, ""); 934 935 935 936 if (ac > 1 && strcmp(av[1], "-s") == 0) { 936 937 silent = 1;
+3
scripts/kconfig/nconf.c
··· 7 7 #ifndef _GNU_SOURCE 8 8 #define _GNU_SOURCE 9 9 #endif 10 + #include <locale.h> 10 11 #include <string.h> 11 12 #include <strings.h> 12 13 #include <stdlib.h> ··· 1478 1477 { 1479 1478 int lines, columns; 1480 1479 char *mode; 1480 + 1481 + setlocale(LC_ALL, ""); 1481 1482 1482 1483 if (ac > 1 && strcmp(av[1], "-s") == 0) { 1483 1484 /* Silence conf_read() until the real callback is set up */
+1 -1
scripts/package/install-extmod-build
··· 63 63 # Clear VPATH and srcroot because the source files reside in the output 64 64 # directory. 65 65 # shellcheck disable=SC2016 # $(MAKE) and $(build) will be expanded by Make 66 - "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"$(realpath --relative-base=. "${destdir}")"/scripts 66 + "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"$(realpath --relative-to=. "${destdir}")"/scripts 67 67 68 68 rm -f "${destdir}/scripts/Kbuild" 69 69 fi
+14
sound/hda/codecs/realtek/alc269.c
··· 3736 3736 ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1, 3737 3737 ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC, 3738 3738 ALC289_FIXUP_ASUS_ZEPHYRUS_DUAL_SPK, 3739 + ALC256_FIXUP_VAIO_RPL_MIC_NO_PRESENCE, 3739 3740 }; 3740 3741 3741 3742 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 6173 6172 { 0x1e, 0x90170150 }, /* Internal Speaker */ 6174 6173 { } 6175 6174 }, 6175 + }, 6176 + [ALC256_FIXUP_VAIO_RPL_MIC_NO_PRESENCE] = { 6177 + .type = HDA_FIXUP_PINS, 6178 + .v.pins = (const struct hda_pintbl[]) { 6179 + { 0x19, 0x03a1113c }, /* use as headset mic, without its own jack detect */ 6180 + { 0x1a, 0x22a190a0 }, /* dock mic */ 6181 + { } 6182 + }, 6183 + .chained = true, 6184 + .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST 6176 6185 } 6177 6186 }; 6178 6187 ··· 6589 6578 SND_PCI_QUIRK(0x103c, 0x8c16, "HP Spectre x360 2-in-1 Laptop 16-aa0xxx", ALC245_FIXUP_HP_SPECTRE_X360_16_AA0XXX), 6590 6579 SND_PCI_QUIRK(0x103c, 0x8c17, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2), 6591 6580 SND_PCI_QUIRK(0x103c, 0x8c21, "HP Pavilion Plus Laptop 14-ey0XXX", ALC245_FIXUP_HP_X360_MUTE_LEDS), 6581 + SND_PCI_QUIRK(0x103c, 0x8c2d, "HP Victus 15-fa1xxx (MB 8C2D)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 6592 6582 SND_PCI_QUIRK(0x103c, 0x8c30, "HP Victus 15-fb1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 6593 6583 SND_PCI_QUIRK(0x103c, 0x8c46, "HP EliteBook 830 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 6594 6584 SND_PCI_QUIRK(0x103c, 0x8c47, "HP EliteBook 840 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ··· 6971 6959 SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 6972 6960 SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL5[03]RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 6973 6961 SND_PCI_QUIRK(0x1558, 0xa554, "VAIO VJFH52", ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE), 6962 + SND_PCI_QUIRK(0x1558, 0xa559, "VAIO RPL", ALC256_FIXUP_VAIO_RPL_MIC_NO_PRESENCE), 6974 6963 SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL50NU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 6975 6964 SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 6976 6965 SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 7093 7080 SND_PCI_QUIRK(0x17aa, 0x38a9, "Thinkbook 16P", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), 7094 7081 SND_PCI_QUIRK(0x17aa, 0x38ab, "Thinkbook 16P", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), 7095 7082 SND_PCI_QUIRK(0x17aa, 0x38b4, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2), 7083 + HDA_CODEC_QUIRK(0x17aa, 0x391c, "Lenovo Yoga 7 2-in-1 14AKP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 7096 7084 SND_PCI_QUIRK(0x17aa, 0x38b5, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2), 7097 7085 SND_PCI_QUIRK(0x17aa, 0x38b6, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2), 7098 7086 SND_PCI_QUIRK(0x17aa, 0x38b7, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2),
+157
sound/soc/amd/acp/amd-acp70-acpi-match.c
··· 30 30 .group_id = 1 31 31 }; 32 32 33 + static const struct snd_soc_acpi_endpoint spk_2_endpoint = { 34 + .num = 0, 35 + .aggregated = 1, 36 + .group_position = 2, 37 + .group_id = 1 38 + }; 39 + 40 + static const struct snd_soc_acpi_endpoint spk_3_endpoint = { 41 + .num = 0, 42 + .aggregated = 1, 43 + .group_position = 3, 44 + .group_id = 1 45 + }; 46 + 33 47 static const struct snd_soc_acpi_adr_device rt711_rt1316_group_adr[] = { 34 48 { 35 49 .adr = 0x000030025D071101ull, ··· 126 112 } 127 113 }; 128 114 115 + static const struct snd_soc_acpi_endpoint cs42l43_endpoints[] = { 116 + { /* Jack Playback Endpoint */ 117 + .num = 0, 118 + .aggregated = 0, 119 + .group_position = 0, 120 + .group_id = 0, 121 + }, 122 + { /* DMIC Capture Endpoint */ 123 + .num = 1, 124 + .aggregated = 0, 125 + .group_position = 0, 126 + .group_id = 0, 127 + }, 128 + { /* Jack Capture Endpoint */ 129 + .num = 2, 130 + .aggregated = 0, 131 + .group_position = 0, 132 + .group_id = 0, 133 + }, 134 + { /* Speaker Playback Endpoint */ 135 + .num = 3, 136 + .aggregated = 0, 137 + .group_position = 0, 138 + .group_id = 0, 139 + }, 140 + }; 141 + 142 + static const struct snd_soc_acpi_adr_device cs42l43_0_adr[] = { 143 + { 144 + .adr = 0x00003001FA424301ull, 145 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 146 + .endpoints = cs42l43_endpoints, 147 + .name_prefix = "cs42l43" 148 + } 149 + }; 150 + 151 + static const struct snd_soc_acpi_adr_device cs42l43_1_cs35l56x4_1_adr[] = { 152 + { 153 + .adr = 0x00013001FA424301ull, 154 + .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 155 + .endpoints = cs42l43_endpoints, 156 + .name_prefix = "cs42l43" 157 + }, 158 + { 159 + .adr = 0x00013001FA355601ull, 160 + .num_endpoints = 1, 161 + .endpoints = &spk_l_endpoint, 162 + .name_prefix = "AMP1" 163 + }, 164 + { 165 + .adr = 0x00013101FA355601ull, 166 + .num_endpoints = 1, 167 + .endpoints = &spk_r_endpoint, 168 + .name_prefix = "AMP2" 169 + }, 170 + { 171 + .adr = 0x00013201FA355601ull, 172 + .num_endpoints = 1, 173 + .endpoints = &spk_2_endpoint, 174 + .name_prefix = "AMP3" 175 + }, 176 + { 177 + .adr = 0x00013301FA355601ull, 178 + .num_endpoints = 1, 179 + .endpoints = &spk_3_endpoint, 180 + .name_prefix = "AMP4" 181 + }, 182 + }; 183 + 184 + static const struct snd_soc_acpi_adr_device cs35l56x4_1_adr[] = { 185 + { 186 + .adr = 0x00013301FA355601ull, 187 + .num_endpoints = 1, 188 + .endpoints = &spk_l_endpoint, 189 + .name_prefix = "AMP1" 190 + }, 191 + { 192 + .adr = 0x00013201FA355601ull, 193 + .num_endpoints = 1, 194 + .endpoints = &spk_r_endpoint, 195 + .name_prefix = "AMP2" 196 + }, 197 + { 198 + .adr = 0x00013101FA355601ull, 199 + .num_endpoints = 1, 200 + .endpoints = &spk_2_endpoint, 201 + .name_prefix = "AMP3" 202 + }, 203 + { 204 + .adr = 0x00013001FA355601ull, 205 + .num_endpoints = 1, 206 + .endpoints = &spk_3_endpoint, 207 + .name_prefix = "AMP4" 208 + }, 209 + }; 210 + 211 + static const struct snd_soc_acpi_link_adr acp70_cs42l43_l1_cs35l56x4_l1[] = { 212 + { 213 + .mask = BIT(1), 214 + .num_adr = ARRAY_SIZE(cs42l43_1_cs35l56x4_1_adr), 215 + .adr_d = cs42l43_1_cs35l56x4_1_adr, 216 + }, 217 + {} 218 + }; 219 + 220 + static const struct snd_soc_acpi_link_adr acp70_cs42l43_l0_cs35l56x4_l1[] = { 221 + { 222 + .mask = BIT(0), 223 + .num_adr = ARRAY_SIZE(cs42l43_0_adr), 224 + .adr_d = cs42l43_0_adr, 225 + }, 226 + { 227 + .mask = BIT(1), 228 + .num_adr = ARRAY_SIZE(cs35l56x4_1_adr), 229 + .adr_d = cs35l56x4_1_adr, 230 + }, 231 + {} 232 + }; 233 + 234 + static const struct snd_soc_acpi_link_adr acp70_cs35l56x4_l1[] = 
{ 235 + { 236 + .mask = BIT(1), 237 + .num_adr = ARRAY_SIZE(cs35l56x4_1_adr), 238 + .adr_d = cs35l56x4_1_adr, 239 + }, 240 + {} 241 + }; 242 + 129 243 static const struct snd_soc_acpi_link_adr acp70_rt722_only[] = { 130 244 { 131 245 .mask = BIT(0), ··· 291 149 { 292 150 .link_mask = BIT(0) | BIT(1), 293 151 .links = acp70_4_in_1_sdca, 152 + .drv_name = "amd_sdw", 153 + }, 154 + { 155 + .link_mask = BIT(0) | BIT(1), 156 + .links = acp70_cs42l43_l0_cs35l56x4_l1, 157 + .drv_name = "amd_sdw", 158 + }, 159 + { 160 + .link_mask = BIT(1), 161 + .links = acp70_cs42l43_l1_cs35l56x4_l1, 162 + .drv_name = "amd_sdw", 163 + }, 164 + { 165 + .link_mask = BIT(1), 166 + .links = acp70_cs35l56x4_l1, 294 167 .drv_name = "amd_sdw", 295 168 }, 296 169 {},
+1
sound/soc/codecs/cs-amp-lib-test.c
··· 7 7 8 8 #include <kunit/resource.h> 9 9 #include <kunit/test.h> 10 + #include <kunit/test-bug.h> 10 11 #include <kunit/static_stub.h> 11 12 #include <linux/device/faux.h> 12 13 #include <linux/firmware/cirrus/cs_dsp.h>
+1 -1
sound/soc/codecs/cs530x.c
··· 793 793 case CS530X_SYSCLK_SRC_PLL: 794 794 break; 795 795 default: 796 - dev_err(component->dev, "Invalid clock id %d\n", clk_id); 796 + dev_err(component->dev, "Invalid sysclk source: %d\n", source); 797 797 return -EINVAL; 798 798 } 799 799
+4 -2
sound/soc/codecs/max98090.c
··· 1239 1239 SND_SOC_DAPM_SUPPLY("DMIC4_ENA", M98090_REG_DIGITAL_MIC_ENABLE, 1240 1240 M98090_DIGMIC4_SHIFT, 0, max98090_shdn_event, 1241 1241 SND_SOC_DAPM_POST_PMU), 1242 + SND_SOC_DAPM_SUPPLY("DMIC34_HPF", M98090_REG_FILTER_CONFIG, 1243 + M98090_FLT_DMIC34HPF_SHIFT, 0, NULL, 0), 1242 1244 }; 1243 1245 1244 1246 static const struct snd_soc_dapm_route max98090_dapm_routes[] = { ··· 1429 1427 /* DMIC inputs */ 1430 1428 {"DMIC3", NULL, "DMIC3_ENA"}, 1431 1429 {"DMIC4", NULL, "DMIC4_ENA"}, 1432 - {"DMIC3", NULL, "AHPF"}, 1433 - {"DMIC4", NULL, "AHPF"}, 1430 + {"DMIC3", NULL, "DMIC34_HPF"}, 1431 + {"DMIC4", NULL, "DMIC34_HPF"}, 1434 1432 }; 1435 1433 1436 1434 static int max98090_add_widgets(struct snd_soc_component *component)
+4
sound/soc/codecs/rt721-sdca.c
··· 281 281 rt_sdca_index_write(rt721->mbq_regmap, RT721_BOOST_CTRL, 282 282 RT721_BST_4CH_TOP_GATING_CTRL1, 0x002a); 283 283 regmap_write(rt721->regmap, 0x2f58, 0x07); 284 + 285 + regmap_write(rt721->regmap, 0x2f51, 0x00); 286 + rt_sdca_index_write(rt721->mbq_regmap, RT721_HDA_SDCA_FLOAT, 287 + RT721_MISC_CTL, 0x0004); 284 288 } 285 289 286 290 static void rt721_sdca_jack_init(struct rt721_sdca_priv *rt721)
+1
sound/soc/codecs/rt721-sdca.h
··· 137 137 #define RT721_HDA_LEGACY_UAJ_CTL 0x02 138 138 #define RT721_HDA_LEGACY_CTL1 0x05 139 139 #define RT721_HDA_LEGACY_RESET_CTL 0x06 140 + #define RT721_MISC_CTL 0x07 140 141 #define RT721_XU_REL_CTRL 0x0c 141 142 #define RT721_GE_REL_CTRL1 0x0d 142 143 #define RT721_HDA_LEGACY_GPIO_WAKE_EN_CTL 0x0e
+2 -2
sound/soc/fsl/fsl_micfil.c
··· 131 131 .fifos = 8, 132 132 .fifo_depth = 32, 133 133 .dataline = 0xf, 134 - .formats = SNDRV_PCM_FMTBIT_S32_LE | SNDRV_PCM_FMTBIT_DSD_U32_BE, 134 + .formats = SNDRV_PCM_FMTBIT_S32_LE | SNDRV_PCM_FMTBIT_DSD_U32_LE, 135 135 .use_edma = true, 136 136 .use_verid = true, 137 137 .volume_sx = false, ··· 823 823 break; 824 824 } 825 825 826 - if (format == SNDRV_PCM_FORMAT_DSD_U32_BE) { 826 + if (format == SNDRV_PCM_FORMAT_DSD_U32_LE) { 827 827 micfil->dec_bypass = true; 828 828 /* 829 829 * According to equation 29 in RM:
+5 -6
sound/soc/fsl/fsl_sai.c
··· 353 353 break; 354 354 case SND_SOC_DAIFMT_PDM: 355 355 val_cr2 |= FSL_SAI_CR2_BCP; 356 - val_cr4 &= ~FSL_SAI_CR4_MF; 357 356 sai->is_pdm_mode = true; 358 357 break; 359 358 case SND_SOC_DAIFMT_RIGHT_J: ··· 637 638 val_cr5 |= FSL_SAI_CR5_WNW(slot_width); 638 639 val_cr5 |= FSL_SAI_CR5_W0W(slot_width); 639 640 640 - if (sai->is_lsb_first || sai->is_pdm_mode) 641 + if (sai->is_lsb_first) 641 642 val_cr5 |= FSL_SAI_CR5_FBT(0); 642 643 else 643 644 val_cr5 |= FSL_SAI_CR5_FBT(word_width - 1); ··· 652 653 val_cr4 |= FSL_SAI_CR4_CHMOD; 653 654 654 655 /* 655 - * For SAI provider mode, when Tx(Rx) sync with Rx(Tx) clock, Rx(Tx) will 656 - * generate bclk and frame clock for Tx(Rx), we should set RCR4(TCR4), 657 - * RCR5(TCR5) for playback(capture), or there will be sync error. 656 + * When Tx(Rx) sync with Rx(Tx) clock, Rx(Tx) will provide bclk and 657 + * frame clock for Tx(Rx). We should set RCR4(TCR4), RCR5(TCR5) 658 + * for playback(capture), or there will be sync error. 658 659 */ 659 660 660 - if (!sai->is_consumer_mode[tx] && fsl_sai_dir_is_synced(sai, adir)) { 661 + if (fsl_sai_dir_is_synced(sai, adir)) { 661 662 regmap_update_bits(sai->regmap, FSL_SAI_xCR4(!tx, ofs), 662 663 FSL_SAI_CR4_SYWD_MASK | FSL_SAI_CR4_FRSZ_MASK | 663 664 FSL_SAI_CR4_CHMOD_MASK,
+3
sound/soc/intel/avs/pcm.c
··· 651 651 652 652 data = snd_soc_dai_get_dma_data(dai, substream); 653 653 654 + disable_work_sync(&data->period_elapsed_work); 654 655 snd_hdac_ext_stream_release(data->host_stream, HDAC_EXT_STREAM_TYPE_HOST); 655 656 avs_dai_shutdown(substream, dai); 656 657 } ··· 755 754 data = snd_soc_dai_get_dma_data(dai, substream); 756 755 host_stream = data->host_stream; 757 756 757 + if (runtime->state == SNDRV_PCM_STATE_XRUN) 758 + hdac_stream(host_stream)->prepared = false; 758 759 if (hdac_stream(host_stream)->prepared) 759 760 return 0; 760 761
+10 -8
sound/soc/intel/avs/probes.c
··· 14 14 #include "debug.h" 15 15 #include "messages.h" 16 16 17 - static int avs_dsp_init_probe(struct avs_dev *adev, union avs_connector_node_id node_id, 18 - size_t buffer_size) 17 + static int avs_dsp_init_probe(struct avs_dev *adev, struct snd_compr_params *params, int bps, 18 + union avs_connector_node_id node_id, size_t buffer_size) 19 19 { 20 20 struct avs_probe_cfg cfg = {{0}}; 21 21 struct avs_module_entry mentry; ··· 27 27 return ret; 28 28 29 29 /* 30 - * Probe module uses no cycles, audio data format and input and output 31 - * frame sizes are unused. It is also not owned by any pipeline. 30 + * Probe module uses no cycles, input and output frame sizes are unused. 31 + * It is also not owned by any pipeline. 32 32 */ 33 33 cfg.base.ibs = 1; 34 34 /* BSS module descriptor is always segment of index=2. */ 35 35 cfg.base.is_pages = mentry.segments[2].flags.length; 36 + cfg.base.audio_fmt.sampling_freq = params->codec.sample_rate; 37 + cfg.base.audio_fmt.bit_depth = bps; 38 + cfg.base.audio_fmt.num_channels = params->codec.ch_out; 39 + cfg.base.audio_fmt.valid_bit_depth = bps; 36 40 cfg.gtw_cfg.node_id = node_id; 37 41 cfg.gtw_cfg.dma_buffer_size = buffer_size; 38 42 ··· 132 128 struct hdac_ext_stream *host_stream = avs_compr_get_host_stream(cstream); 133 129 struct snd_compr_runtime *rtd = cstream->runtime; 134 130 struct avs_dev *adev = to_avs_dev(dai->dev); 135 - /* compr params do not store bit depth, default to S32_LE. */ 136 - snd_pcm_format_t format = SNDRV_PCM_FORMAT_S32_LE; 137 131 unsigned int format_val; 138 132 int bps, ret; 139 133 ··· 144 142 ret = snd_compr_malloc_pages(cstream, rtd->buffer_size); 145 143 if (ret < 0) 146 144 return ret; 147 - bps = snd_pcm_format_physical_width(format); 145 + bps = snd_pcm_format_physical_width(params->codec.format); 148 146 if (bps < 0) 149 147 return bps; 150 148 format_val = snd_hdac_stream_format(params->codec.ch_out, bps, params->codec.sample_rate); ··· 168 166 node_id.vindex = hdac_stream(host_stream)->stream_tag - 1; 169 167 node_id.dma_type = AVS_DMA_HDA_HOST_INPUT; 170 168 171 - ret = avs_dsp_init_probe(adev, node_id, rtd->dma_bytes); 169 + ret = avs_dsp_init_probe(adev, params, bps, node_id, rtd->dma_bytes); 172 170 if (ret < 0) { 173 171 dev_err(dai->dev, "probe init failed: %d\n", ret); 174 172 avs_dsp_enable_d0ix(adev);
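The avs/probes.c change stops hard-coding S32_LE for the probe DAI and derives bps from the negotiated codec format instead, which matters because a format's physical width (container size) can differ from its nominal depth. The userspace counterpart of the kernel helper makes the distinction easy to check; a small alsa-lib program (build with `cc demo.c -lasound`):

	#include <alsa/asoundlib.h>
	#include <stdio.h>

	int main(void)
	{
		/* physical width = bits actually occupied in the stream */
		printf("S16_LE: %d bits\n",
		       snd_pcm_format_physical_width(SND_PCM_FORMAT_S16_LE));
		printf("S24_LE: %d bits (24 valid bits in a 32-bit container)\n",
		       snd_pcm_format_physical_width(SND_PCM_FORMAT_S24_LE));
		printf("S32_LE: %d bits\n",
		       snd_pcm_format_physical_width(SND_PCM_FORMAT_S32_LE));
		return 0;
	}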
-52
sound/soc/intel/common/soc-acpi-intel-ptl-match.c
··· 227 227 }, 228 228 }; 229 229 230 - static const struct snd_soc_acpi_endpoint cs42l43_endpoints[] = { 231 - { /* Jack Playback Endpoint */ 232 - .num = 0, 233 - .aggregated = 0, 234 - .group_position = 0, 235 - .group_id = 0, 236 - }, 237 - { /* DMIC Capture Endpoint */ 238 - .num = 1, 239 - .aggregated = 0, 240 - .group_position = 0, 241 - .group_id = 0, 242 - }, 243 - { /* Jack Capture Endpoint */ 244 - .num = 2, 245 - .aggregated = 0, 246 - .group_position = 0, 247 - .group_id = 0, 248 - }, 249 - { /* Speaker Playback Endpoint */ 250 - .num = 3, 251 - .aggregated = 0, 252 - .group_position = 0, 253 - .group_id = 0, 254 - }, 255 - }; 256 - 257 230 static const struct snd_soc_acpi_adr_device cs42l43_2_adr[] = { 258 231 { 259 232 .adr = 0x00023001fa424301ull, ··· 275 302 .num_endpoints = 1, 276 303 .endpoints = &spk_6_endpoint, 277 304 .name_prefix = "AMP6" 278 - } 279 - }; 280 - 281 - static const struct snd_soc_acpi_adr_device cs42l43_3_adr[] = { 282 - { 283 - .adr = 0x00033001FA424301ull, 284 - .num_endpoints = ARRAY_SIZE(cs42l43_endpoints), 285 - .endpoints = cs42l43_endpoints, 286 - .name_prefix = "cs42l43" 287 305 } 288 306 }; 289 307 ··· 446 482 .mask = BIT(3), 447 483 .num_adr = ARRAY_SIZE(cs35l56_3_3amp_adr), 448 484 .adr_d = cs35l56_3_3amp_adr, 449 - }, 450 - {} 451 - }; 452 - 453 - static const struct snd_soc_acpi_link_adr ptl_cs42l43_l3[] = { 454 - { 455 - .mask = BIT(3), 456 - .num_adr = ARRAY_SIZE(cs42l43_3_adr), 457 - .adr_d = cs42l43_3_adr, 458 485 }, 459 486 {} 460 487 }; ··· 665 710 .links = ptl_rt722_l1, 666 711 .drv_name = "sof_sdw", 667 712 .sof_tplg_filename = "sof-ptl-rt722.tplg", 668 - .get_function_tplg_files = sof_sdw_get_tplg_files, 669 - }, 670 - { 671 - .link_mask = BIT(3), 672 - .links = ptl_cs42l43_l3, 673 - .drv_name = "sof_sdw", 674 - .sof_tplg_filename = "sof-ptl-cs42l43-l3.tplg", 675 713 .get_function_tplg_files = sof_sdw_get_tplg_files, 676 714 }, 677 715 {
-1
sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
··· 3176 3176 3177 3177 static void mt8195_afe_pcm_dev_remove(struct platform_device *pdev) 3178 3178 { 3179 - pm_runtime_disable(&pdev->dev); 3180 3179 if (!pm_runtime_status_suspended(&pdev->dev)) 3181 3180 mt8195_afe_runtime_suspend(&pdev->dev); 3182 3181 }
-1
sound/soc/mediatek/mt8365/mt8365-afe-pcm.c
··· 2238 2238 2239 2239 mt8365_afe_disable_top_cg(afe, MT8365_TOP_CG_AFE); 2240 2240 2241 - pm_runtime_disable(&pdev->dev); 2242 2241 if (!pm_runtime_status_suspended(&pdev->dev)) 2243 2242 mt8365_afe_runtime_suspend(&pdev->dev); 2244 2243 }
+1 -1
sound/soc/qcom/qdsp6/q6asm.c
··· 377 377 378 378 spin_lock_irqsave(&ac->lock, flags); 379 379 port->num_periods = 0; 380 + spin_unlock_irqrestore(&ac->lock, flags); 380 381 kfree(port->buf); 381 382 port->buf = NULL; 382 - spin_unlock_irqrestore(&ac->lock, flags); 383 383 } 384 384 385 385 /**
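The q6asm.c fix shrinks the irq-disabled critical section: only the shared state update needs the spinlock, and the buffer can be released after the lock is dropped. A userspace sketch of the pattern; this variant is slightly stricter than the diff in that it detaches the pointer under the lock before freeing:

	#include <pthread.h>
	#include <stdlib.h>

	struct port {
		pthread_mutex_t lock;
		int num_periods;
		void *buf;
	};

	/* Update shared state under the lock, but do the actual free after
	 * dropping it, keeping the critical section minimal. */
	static void port_reset(struct port *p)
	{
		void *old;

		pthread_mutex_lock(&p->lock);
		p->num_periods = 0;
		old = p->buf;		/* detach under the lock */
		p->buf = NULL;
		pthread_mutex_unlock(&p->lock);

		free(old);		/* free outside the critical section */
	}

	int main(void)
	{
		struct port p = { PTHREAD_MUTEX_INITIALIZER, 4, malloc(64) };

		port_reset(&p);
		return 0;
	}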
+12 -13
sound/soc/renesas/rz-ssi.c
··· 85 85 struct snd_pcm_substream *substream; 86 86 int fifo_sample_size; /* sample capacity of SSI FIFO */ 87 87 int dma_buffer_pos; /* The address for the next DMA descriptor */ 88 + int completed_dma_buf_pos; /* The address of the last completed DMA descriptor. */ 88 89 int period_counter; /* for keeping track of periods transferred */ 89 90 int sample_width; 90 91 int buffer_pos; /* current frame position in the buffer */ ··· 216 215 rz_ssi_set_substream(strm, substream); 217 216 strm->sample_width = samples_to_bytes(runtime, 1); 218 217 strm->dma_buffer_pos = 0; 218 + strm->completed_dma_buf_pos = 0; 219 219 strm->period_counter = 0; 220 220 strm->buffer_pos = 0; 221 221 ··· 439 437 snd_pcm_period_elapsed(strm->substream); 440 438 strm->period_counter = current_period; 441 439 } 440 + 441 + strm->completed_dma_buf_pos += runtime->period_size; 442 + if (strm->completed_dma_buf_pos >= runtime->buffer_size) 443 + strm->completed_dma_buf_pos = 0; 442 444 } 443 445 444 446 static int rz_ssi_pio_recv(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm) ··· 784 778 return -ENODEV; 785 779 } 786 780 787 - static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi) 781 + static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm) 788 782 { 783 + struct snd_pcm_substream *substream = strm->substream; 784 + struct snd_pcm_runtime *runtime = substream->runtime; 789 785 int ret; 786 + 787 + strm->dma_buffer_pos = strm->completed_dma_buf_pos + runtime->period_size; 790 788 791 789 if (rz_ssi_is_stream_running(&ssi->playback) || 792 790 rz_ssi_is_stream_running(&ssi->capture)) ··· 804 794 ssi->hw_params_cache.channels); 805 795 } 806 796 807 - static void rz_ssi_streams_suspend(struct rz_ssi_priv *ssi) 808 - { 809 - if (rz_ssi_is_stream_running(&ssi->playback) || 810 - rz_ssi_is_stream_running(&ssi->capture)) 811 - return; 812 - 813 - ssi->playback.dma_buffer_pos = 0; 814 - ssi->capture.dma_buffer_pos = 0; 815 - } 816 - 817 797 static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd, 818 798 struct snd_soc_dai *dai) 819 799 { ··· 813 813 814 814 switch (cmd) { 815 815 case SNDRV_PCM_TRIGGER_RESUME: 816 - ret = rz_ssi_trigger_resume(ssi); 816 + ret = rz_ssi_trigger_resume(ssi, strm); 817 817 if (ret) 818 818 return ret; 819 819 ··· 852 852 853 853 case SNDRV_PCM_TRIGGER_SUSPEND: 854 854 rz_ssi_stop(ssi, strm); 855 - rz_ssi_streams_suspend(ssi); 856 855 break; 857 856 858 857 case SNDRV_PCM_TRIGGER_STOP:
-1
sound/soc/sdw_utils/soc_sdw_utils.c
··· 638 638 { 639 639 .direction = {true, false}, 640 640 .dai_name = "cs42l43-dp6", 641 - .component_name = "cs42l43", 642 641 .dai_type = SOC_SDW_DAI_TYPE_AMP, 643 642 .dailink = {SOC_SDW_AMP_OUT_DAI_ID, SOC_SDW_UNUSED_DAI_ID}, 644 643 .init = asoc_sdw_cs42l43_spk_init,
+20 -17
sound/usb/mixer_s1810c.c
··· 178 178 179 179 pkt_out.fields[SC1810C_STATE_F1_IDX] = SC1810C_SET_STATE_F1; 180 180 pkt_out.fields[SC1810C_STATE_F2_IDX] = SC1810C_SET_STATE_F2; 181 - ret = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), 181 + ret = snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0), 182 182 SC1810C_SET_STATE_REQ, 183 183 SC1810C_SET_STATE_REQTYPE, 184 184 (*seqnum), 0, &pkt_out, sizeof(pkt_out)); ··· 597 597 if (!list_empty(&chip->mixer_list)) 598 598 return 0; 599 599 600 - dev_info(&dev->dev, 601 - "Presonus Studio 1810c, device_setup: %u\n", chip->setup); 602 - if (chip->setup == 1) 603 - dev_info(&dev->dev, "(8out/18in @ 48kHz)\n"); 604 - else if (chip->setup == 2) 605 - dev_info(&dev->dev, "(6out/8in @ 192kHz)\n"); 606 - else 607 - dev_info(&dev->dev, "(8out/14in @ 96kHz)\n"); 608 - 609 600 ret = snd_s1810c_init_mixer_maps(chip); 610 601 if (ret < 0) 611 602 return ret; ··· 625 634 if (ret < 0) 626 635 return ret; 627 636 628 - // The 1824c has a Mono Main switch instead of a 629 - // A/B select switch. 630 - if (mixer->chip->usb_id == USB_ID(0x194f, 0x010d)) { 631 - ret = snd_s1810c_switch_init(mixer, &snd_s1824c_mono_sw); 632 - if (ret < 0) 633 - return ret; 634 - } else if (mixer->chip->usb_id == USB_ID(0x194f, 0x010c)) { 637 + switch (chip->usb_id) { 638 + case USB_ID(0x194f, 0x010c): /* Presonus Studio 1810c */ 639 + dev_info(&dev->dev, 640 + "Presonus Studio 1810c, device_setup: %u\n", chip->setup); 641 + if (chip->setup == 1) 642 + dev_info(&dev->dev, "(8out/18in @ 48kHz)\n"); 643 + else if (chip->setup == 2) 644 + dev_info(&dev->dev, "(6out/8in @ 192kHz)\n"); 645 + else 646 + dev_info(&dev->dev, "(8out/14in @ 96kHz)\n"); 647 + 635 648 ret = snd_s1810c_switch_init(mixer, &snd_s1810c_ab_sw); 636 649 if (ret < 0) 637 650 return ret; 651 + 652 + break; 653 + case USB_ID(0x194f, 0x010d): /* Presonus Studio 1824c */ 654 + ret = snd_s1810c_switch_init(mixer, &snd_s1824c_mono_sw); 655 + if (ret < 0) 656 + return ret; 657 + 658 + break; 638 659 } 639 660 640 661 return ret;
+5
tools/arch/x86/include/asm/cpufeatures.h
··· 444 444 #define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* VM Page Flush MSR is supported */ 445 445 #define X86_FEATURE_SEV_ES (19*32+ 3) /* "sev_es" Secure Encrypted Virtualization - Encrypted State */ 446 446 #define X86_FEATURE_SEV_SNP (19*32+ 4) /* "sev_snp" Secure Encrypted Virtualization - Secure Nested Paging */ 447 + #define X86_FEATURE_SNP_SECURE_TSC (19*32+ 8) /* SEV-SNP Secure TSC */ 447 448 #define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */ 448 449 #define X86_FEATURE_SME_COHERENT (19*32+10) /* hardware-enforced cache coherency */ 449 450 #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" SEV-ES full debug state swap support */ ··· 496 495 #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ 497 496 #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ 498 497 #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */ 498 + #define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */ 499 + #define X86_FEATURE_ABMC (21*32+15) /* Assignable Bandwidth Monitoring Counters */ 500 + #define X86_FEATURE_MSR_IMM (21*32+16) /* MSR immediate form instructions */ 499 501 500 502 /* 501 503 * BUG word(s) ··· 555 551 #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 556 552 #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 557 553 #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ 554 + #define X86_BUG_VMSCAPE X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */ 558 555 #endif /* _ASM_X86_CPUFEATURES_H */
+19 -1
tools/arch/x86/include/asm/msr-index.h
··· 315 315 #define PERF_CAP_PT_IDX 16 316 316 317 317 #define MSR_PEBS_LD_LAT_THRESHOLD 0x000003f6 318 + 319 + #define PERF_CAP_LBR_FMT 0x3f 318 320 #define PERF_CAP_PEBS_TRAP BIT_ULL(6) 319 321 #define PERF_CAP_ARCH_REG BIT_ULL(7) 320 322 #define PERF_CAP_PEBS_FORMAT 0xf00 323 + #define PERF_CAP_FW_WRITES BIT_ULL(13) 321 324 #define PERF_CAP_PEBS_BASELINE BIT_ULL(14) 322 325 #define PERF_CAP_PEBS_TIMING_INFO BIT_ULL(17) 323 326 #define PERF_CAP_PEBS_MASK (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \ ··· 636 633 #define MSR_AMD_PPIN 0xc00102f1 637 634 #define MSR_AMD64_CPUID_FN_7 0xc0011002 638 635 #define MSR_AMD64_CPUID_FN_1 0xc0011004 636 + 637 + #define MSR_AMD64_CPUID_EXT_FEAT 0xc0011005 638 + #define MSR_AMD64_CPUID_EXT_FEAT_TOPOEXT_BIT 54 639 + #define MSR_AMD64_CPUID_EXT_FEAT_TOPOEXT BIT_ULL(MSR_AMD64_CPUID_EXT_FEAT_TOPOEXT_BIT) 640 + 639 641 #define MSR_AMD64_LS_CFG 0xc0011020 640 642 #define MSR_AMD64_DC_CFG 0xc0011022 641 643 #define MSR_AMD64_TW_CFG 0xc0011023 ··· 709 701 #define MSR_AMD64_SNP_VMSA_REG_PROT BIT_ULL(MSR_AMD64_SNP_VMSA_REG_PROT_BIT) 710 702 #define MSR_AMD64_SNP_SMT_PROT_BIT 17 711 703 #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) 712 - #define MSR_AMD64_SNP_RESV_BIT 18 704 + #define MSR_AMD64_SNP_SECURE_AVIC_BIT 18 705 + #define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) 706 + #define MSR_AMD64_SNP_RESV_BIT 19 713 707 #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) 708 + #define MSR_AMD64_SAVIC_CONTROL 0xc0010138 709 + #define MSR_AMD64_SAVIC_EN_BIT 0 710 + #define MSR_AMD64_SAVIC_EN BIT_ULL(MSR_AMD64_SAVIC_EN_BIT) 711 + #define MSR_AMD64_SAVIC_ALLOWEDNMI_BIT 1 712 + #define MSR_AMD64_SAVIC_ALLOWEDNMI BIT_ULL(MSR_AMD64_SAVIC_ALLOWEDNMI_BIT) 714 713 #define MSR_AMD64_RMP_BASE 0xc0010132 715 714 #define MSR_AMD64_RMP_END 0xc0010133 716 715 #define MSR_AMD64_RMP_CFG 0xc0010136 ··· 750 735 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300 751 736 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301 752 737 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302 738 + #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET 0xc0000303 753 739 754 740 /* AMD Hardware Feedback Support MSRs */ 755 741 #define MSR_AMD_WORKLOAD_CLASS_CONFIG 0xc0000500 ··· 1241 1225 /* - AMD: */ 1242 1226 #define MSR_IA32_MBA_BW_BASE 0xc0000200 1243 1227 #define MSR_IA32_SMBA_BW_BASE 0xc0000280 1228 + #define MSR_IA32_L3_QOS_ABMC_CFG 0xc00003fd 1229 + #define MSR_IA32_L3_QOS_EXT_CFG 0xc00003ff 1244 1230 #define MSR_IA32_EVT_CFG_BASE 0xc0000400 1245 1231 1246 1232 /* AMD-V MSRs */
+34
tools/arch/x86/include/uapi/asm/kvm.h
··· 35 35 #define MC_VECTOR 18 36 36 #define XM_VECTOR 19 37 37 #define VE_VECTOR 20 38 + #define CP_VECTOR 21 39 + 40 + #define HV_VECTOR 28 41 + #define VC_VECTOR 29 42 + #define SX_VECTOR 30 38 43 39 44 /* Select x86 specific features in <linux/kvm.h> */ 40 45 #define __KVM_HAVE_PIT ··· 415 410 struct kvm_xcr xcrs[KVM_MAX_XCRS]; 416 411 __u64 padding[16]; 417 412 }; 413 + 414 + #define KVM_X86_REG_TYPE_MSR 2 415 + #define KVM_X86_REG_TYPE_KVM 3 416 + 417 + #define KVM_X86_KVM_REG_SIZE(reg) \ 418 + ({ \ 419 + reg == KVM_REG_GUEST_SSP ? KVM_REG_SIZE_U64 : 0; \ 420 + }) 421 + 422 + #define KVM_X86_REG_TYPE_SIZE(type, reg) \ 423 + ({ \ 424 + __u64 type_size = (__u64)type << 32; \ 425 + \ 426 + type_size |= type == KVM_X86_REG_TYPE_MSR ? KVM_REG_SIZE_U64 : \ 427 + type == KVM_X86_REG_TYPE_KVM ? KVM_X86_KVM_REG_SIZE(reg) : \ 428 + 0; \ 429 + type_size; \ 430 + }) 431 + 432 + #define KVM_X86_REG_ID(type, index) \ 433 + (KVM_REG_X86 | KVM_X86_REG_TYPE_SIZE(type, index) | index) 434 + 435 + #define KVM_X86_REG_MSR(index) \ 436 + KVM_X86_REG_ID(KVM_X86_REG_TYPE_MSR, index) 437 + #define KVM_X86_REG_KVM(index) \ 438 + KVM_X86_REG_ID(KVM_X86_REG_TYPE_KVM, index) 439 + 440 + /* KVM-defined registers starting from 0 */ 441 + #define KVM_REG_GUEST_SSP 0 418 442 419 443 #define KVM_SYNC_X86_REGS (1UL << 0) 420 444 #define KVM_SYNC_X86_SREGS (1UL << 1)
+4
tools/arch/x86/include/uapi/asm/svm.h
··· 118 118 #define SVM_VMGEXIT_AP_CREATE 1 119 119 #define SVM_VMGEXIT_AP_DESTROY 2 120 120 #define SVM_VMGEXIT_SNP_RUN_VMPL 0x80000018 121 + #define SVM_VMGEXIT_SAVIC 0x8000001a 122 + #define SVM_VMGEXIT_SAVIC_REGISTER_GPA 0 123 + #define SVM_VMGEXIT_SAVIC_UNREGISTER_GPA 1 124 + #define SVM_VMGEXIT_SAVIC_SELF_GPA ~0ULL 121 125 #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd 122 126 #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe 123 127 #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \
+5 -1
tools/arch/x86/include/uapi/asm/vmx.h
··· 94 94 #define EXIT_REASON_BUS_LOCK 74 95 95 #define EXIT_REASON_NOTIFY 75 96 96 #define EXIT_REASON_TDCALL 77 97 + #define EXIT_REASON_MSR_READ_IMM 84 98 + #define EXIT_REASON_MSR_WRITE_IMM 85 97 99 98 100 #define VMX_EXIT_REASONS \ 99 101 { EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \ ··· 160 158 { EXIT_REASON_TPAUSE, "TPAUSE" }, \ 161 159 { EXIT_REASON_BUS_LOCK, "BUS_LOCK" }, \ 162 160 { EXIT_REASON_NOTIFY, "NOTIFY" }, \ 163 - { EXIT_REASON_TDCALL, "TDCALL" } 161 + { EXIT_REASON_TDCALL, "TDCALL" }, \ 162 + { EXIT_REASON_MSR_READ_IMM, "MSR_READ_IMM" }, \ 163 + { EXIT_REASON_MSR_WRITE_IMM, "MSR_WRITE_IMM" } 164 164 165 165 #define VMX_EXIT_REASON_FLAGS \ 166 166 { VMX_EXIT_REASONS_FAILED_VMENTRY, "FAILED_VMENTRY" }
+1 -1
tools/include/asm-generic/bitops/__fls.h
··· 10 10 * 11 11 * Undefined if no set bit exists, so code should check against 0 first. 12 12 */ 13 - static __always_inline unsigned int generic___fls(unsigned long word) 13 + static __always_inline __attribute_const__ unsigned int generic___fls(unsigned long word) 14 14 { 15 15 unsigned int num = BITS_PER_LONG - 1; 16 16
+1 -1
tools/include/asm-generic/bitops/fls.h
··· 10 10 * Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32. 11 11 */ 12 12 13 - static __always_inline int generic_fls(unsigned int x) 13 + static __always_inline __attribute_const__ int generic_fls(unsigned int x) 14 14 { 15 15 int r = 32; 16 16
+2 -2
tools/include/asm-generic/bitops/fls64.h
··· 16 16 * at position 64. 17 17 */ 18 18 #if BITS_PER_LONG == 32 19 - static __always_inline int fls64(__u64 x) 19 + static __always_inline __attribute_const__ int fls64(__u64 x) 20 20 { 21 21 __u32 h = x >> 32; 22 22 if (h) ··· 24 24 return fls(x); 25 25 } 26 26 #elif BITS_PER_LONG == 64 27 - static __always_inline int fls64(__u64 x) 27 + static __always_inline __attribute_const__ int fls64(__u64 x) 28 28 { 29 29 if (x == 0) 30 30 return 0;
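The annotations added across these headers mark fls()/__fls() as pure functions of their argument, letting the compiler fold or hoist calls. The documented contract is easy to verify in userspace; a runnable copy of generic_fls() with its edge cases asserted:

	#include <assert.h>
	#include <stdio.h>

	/* Userspace copy of generic_fls(): 1-based index of the most
	 * significant set bit; fls(0) == 0 by definition. */
	__attribute__((const))
	static int generic_fls(unsigned int x)
	{
		int r = 32;

		if (!x)
			return 0;
		if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
		if (!(x & 0xff000000u)) { x <<= 8;  r -= 8;  }
		if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4;  }
		if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2;  }
		if (!(x & 0x80000000u)) { x <<= 1;  r -= 1;  }
		return r;
	}

	int main(void)
	{
		assert(generic_fls(0) == 0);
		assert(generic_fls(1) == 1);
		assert(generic_fls(0x80000000u) == 32);
		printf("fls(0x00c0ffee) = %d\n", generic_fls(0x00c0ffee));
		return 0;
	}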
+51 -12
tools/include/uapi/drm/drm.h
··· 597 597 int drm_dd_minor; 598 598 }; 599 599 600 - /* DRM_IOCTL_GEM_CLOSE ioctl argument type */ 600 + /** 601 + * struct drm_gem_close - Argument for &DRM_IOCTL_GEM_CLOSE ioctl. 602 + * @handle: Handle of the object to be closed. 603 + * @pad: Padding. 604 + * 605 + * Releases the handle to an mm object. 606 + */ 601 607 struct drm_gem_close { 602 - /** Handle of the object to be closed. */ 603 608 __u32 handle; 604 609 __u32 pad; 605 610 }; 606 611 607 - /* DRM_IOCTL_GEM_FLINK ioctl argument type */ 612 + /** 613 + * struct drm_gem_flink - Argument for &DRM_IOCTL_GEM_FLINK ioctl. 614 + * @handle: Handle for the object being named. 615 + * @name: Returned global name. 616 + * 617 + * Create a global name for an object, returning the name. 618 + * 619 + * Note that the name does not hold a reference; when the object 620 + * is freed, the name goes away. 621 + */ 608 622 struct drm_gem_flink { 609 - /** Handle for the object being named */ 610 623 __u32 handle; 611 - 612 - /** Returned global name */ 613 624 __u32 name; 614 625 }; 615 626 616 - /* DRM_IOCTL_GEM_OPEN ioctl argument type */ 627 + /** 628 + * struct drm_gem_open - Argument for &DRM_IOCTL_GEM_OPEN ioctl. 629 + * @name: Name of object being opened. 630 + * @handle: Returned handle for the object. 631 + * @size: Returned size of the object 632 + * 633 + * Open an object using the global name, returning a handle and the size. 634 + * 635 + * This handle (of course) holds a reference to the object, so the object 636 + * will not go away until the handle is deleted. 637 + */ 617 638 struct drm_gem_open { 618 - /** Name of object being opened */ 619 639 __u32 name; 620 - 621 - /** Returned handle for the object */ 622 640 __u32 handle; 623 - 624 - /** Returned size of the object */ 625 641 __u64 size; 642 + }; 643 + 644 + /** 645 + * struct drm_gem_change_handle - Argument for &DRM_IOCTL_GEM_CHANGE_HANDLE ioctl. 646 + * @handle: The handle of a gem object. 647 + * @new_handle: An available gem handle. 648 + * 649 + * This ioctl changes the handle of a GEM object to the specified one. 650 + * The new handle must be unused. On success the old handle is closed 651 + * and all further IOCTL should refer to the new handle only. 652 + * Calls to DRM_IOCTL_PRIME_FD_TO_HANDLE will return the new handle. 653 + */ 654 + struct drm_gem_change_handle { 655 + __u32 handle; 656 + __u32 new_handle; 626 657 }; 627 658 628 659 /** ··· 1339 1308 * The call will fail if the name contains whitespaces or non-printable chars. 1340 1309 */ 1341 1310 #define DRM_IOCTL_SET_CLIENT_NAME DRM_IOWR(0xD1, struct drm_set_client_name) 1311 + 1312 + /** 1313 + * DRM_IOCTL_GEM_CHANGE_HANDLE - Move an object to a different handle 1314 + * 1315 + * Some applications (notably CRIU) need objects to have specific gem handles. 1316 + * This ioctl changes the object at one gem handle to use a new gem handle. 1317 + */ 1318 + #define DRM_IOCTL_GEM_CHANGE_HANDLE DRM_IOWR(0xD2, struct drm_gem_change_handle) 1342 1319 1343 1320 /* 1344 1321 * Device specific ioctls should only be in their respective headers
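A sketch of the new DRM_IOCTL_GEM_CHANGE_HANDLE from userspace, per the CRIU use case the header documents. Assumptions: fd is an open DRM device node, old_h is a valid GEM handle on it, and the installed drm.h already carries this patch (with libdrm, build with `cc $(pkg-config --cflags libdrm) demo.c`):

	#include <sys/ioctl.h>
	#include <drm.h>	/* from libdrm; path depends on your setup */

	/* Move an existing GEM object to a caller-chosen handle. Returns 0
	 * on success; fails (e.g. new_h already in use) per the kernel. */
	static int gem_change_handle(int fd, unsigned int old_h,
				     unsigned int new_h)
	{
		struct drm_gem_change_handle args = {
			.handle = old_h,
			.new_handle = new_h,
		};

		return ioctl(fd, DRM_IOCTL_GEM_CHANGE_HANDLE, &args);
	}

After a successful call the old handle is closed, so all further ioctls must use the new one, as the kerneldoc above states.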
+3
tools/include/uapi/linux/kvm.h
··· 962 962 #define KVM_CAP_ARM_EL2_E2H0 241 963 963 #define KVM_CAP_RISCV_MP_STATE_RESET 242 964 964 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 965 + #define KVM_CAP_GUEST_MEMFD_FLAGS 244 965 966 966 967 struct kvm_irq_routing_irqchip { 967 968 __u32 irqchip; ··· 1599 1598 #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) 1600 1599 1601 1600 #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) 1601 + #define GUEST_MEMFD_FLAG_MMAP (1ULL << 0) 1602 + #define GUEST_MEMFD_FLAG_INIT_SHARED (1ULL << 1) 1602 1603 1603 1604 struct kvm_create_guest_memfd { 1604 1605 __u64 size;
+1 -1
tools/lib/bpf/bpf_tracing.h
··· 311 311 #define __PT_RET_REG regs[31] 312 312 #define __PT_FP_REG __unsupported__ 313 313 #define __PT_RC_REG gpr[3] 314 - #define __PT_SP_REG sp 314 + #define __PT_SP_REG gpr[1] 315 315 #define __PT_IP_REG nip 316 316 317 317 #elif defined(bpf_target_sparc)
+2 -2
tools/net/ynl/lib/ynl-priv.h
··· 313 313 struct nlattr *attr; 314 314 size_t len; 315 315 316 - len = strlen(str); 316 + len = strlen(str) + 1; 317 317 if (__ynl_attr_put_overflow(nlh, len)) 318 318 return; 319 319 ··· 321 321 attr->nla_type = attr_type; 322 322 323 323 strcpy((char *)ynl_attr_data(attr), str); 324 - attr->nla_len = NLA_HDRLEN + NLA_ALIGN(len); 324 + attr->nla_len = NLA_HDRLEN + len; 325 325 326 326 nlh->nlmsg_len += NLMSG_ALIGN(attr->nla_len); 327 327 }
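The ynl-priv.h fix: a netlink string attribute's nla_len must count the terminating NUL, while the alignment padding belongs only to the message length. The difference is easy to see with the uapi macros; a runnable demonstration:

	#include <linux/netlink.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *str = "eth0";	/* 4 chars + NUL */
		int payload = strlen(str) + 1;	/* the fix: include the NUL */

		int nla_len  = NLA_HDRLEN + payload;	/* exact, unpadded */
		int consumed = NLMSG_ALIGN(nla_len);	/* msg growth */

		printf("nla_len = %d, message space consumed = %d\n",
		       nla_len, consumed);
		/* the old code set nla_len = NLA_HDRLEN + NLA_ALIGN(strlen()),
		 * declaring a 4-byte payload whose NUL sat in the padding,
		 * so strict readers could see an unterminated string: */
		printf("old nla_len = %d\n",
		       (int)(NLA_HDRLEN + NLA_ALIGN(strlen(str))));
		return 0;
	}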
+3
tools/net/ynl/pyynl/ethtool.py
··· 44 44 Pretty-print a set of fields from the reply. desc specifies the 45 45 fields and the optional type (bool/yn). 46 46 """ 47 + if not reply: 48 + return 49 + 47 50 if len(desc) == 0: 48 51 return print_field(reply, *zip(reply.keys(), reply.keys())) 49 52
+4 -1
tools/objtool/check.c
··· 3516 3516 { 3517 3517 struct instruction *alt_insn = insn->alts ? insn->alts->insn : NULL; 3518 3518 3519 + if (!insn->alt_group) 3520 + return false; 3521 + 3519 3522 /* ANNOTATE_IGNORE_ALTERNATIVE */ 3520 - if (insn->alt_group && insn->alt_group->ignore) 3523 + if (insn->alt_group->ignore) 3521 3524 return true; 3522 3525 3523 3526 /*
+1
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 345 345 333 common io_pgetevents sys_io_pgetevents 346 346 334 common rseq sys_rseq 347 347 335 common uretprobe sys_uretprobe 348 + 336 common uprobe sys_uprobe 348 349 # don't use numbers 387 through 423, add new calls after the last 349 350 # 'common' entry 350 351 424 common pidfd_send_signal sys_pidfd_send_signal
+1
tools/perf/trace/beauty/include/uapi/linux/fcntl.h
··· 111 111 #define PIDFD_SELF_THREAD_GROUP -10001 /* Current thread group leader. */ 112 112 113 113 #define FD_PIDFS_ROOT -10002 /* Root of the pidfs filesystem */ 114 + #define FD_NSFS_ROOT -10003 /* Root of the nsfs filesystem */ 114 115 #define FD_INVALID -10009 /* Invalid file descriptor: -10000 - EBADF = -10009 */ 115 116 116 117 /* Generic flags for the *at(2) family of syscalls. */
+4 -1
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 430 430 /* buffered IO that drops the cache after reading or writing data */ 431 431 #define RWF_DONTCACHE ((__force __kernel_rwf_t)0x00000080) 432 432 433 + /* prevent pipe and socket writes from raising SIGPIPE */ 434 + #define RWF_NOSIGNAL ((__force __kernel_rwf_t)0x00000100) 435 + 433 436 /* mask of flags supported by the kernel */ 434 437 #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\ 435 438 RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC |\ 436 - RWF_DONTCACHE) 439 + RWF_DONTCACHE | RWF_NOSIGNAL) 437 440 438 441 #define PROCFS_IOCTL_MAGIC 'f' 439 442
+10
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 177 177 178 178 #define PR_GET_TID_ADDRESS 40 179 179 180 + /* 181 + * Flags for PR_SET_THP_DISABLE are only applicable when disabling. Bit 0 182 + * is reserved, so PR_GET_THP_DISABLE can return "1 | flags", to effectively 183 + * return "1" when no flags were specified for PR_SET_THP_DISABLE. 184 + */ 180 185 #define PR_SET_THP_DISABLE 41 186 + /* 187 + * Don't disable THPs when explicitly advised (e.g., MADV_HUGEPAGE / 188 + * VM_HUGEPAGE, MADV_COLLAPSE). 189 + */ 190 + # define PR_THP_DISABLE_EXCEPT_ADVISED (1 << 1) 181 191 #define PR_GET_THP_DISABLE 42 182 192 183 193 /*
+5 -1
tools/perf/util/symbol.c
··· 112 112 // 'N' first seen in: 113 113 // ffffffff9b35d130 N __pfx__RNCINvNtNtNtCsbDUBuN8AbD4_4core4iter8adapters3map12map_try_foldjNtCs6vVzKs5jPr6_12drm_panic_qr7VersionuINtNtNtBa_3ops12control_flow11ControlFlowB10_ENcB10_0NCINvNvNtNtNtB8_6traits8iterator8Iterator4find5checkB10_NCNvMB12_B10_13from_segments0E0E0B12_ 114 114 // a seemingly Rust mangled name 115 + // Ditto for '1': 116 + // root@x1:~# grep ' 1 ' /proc/kallsyms 117 + // ffffffffb098bc00 1 __pfx__RNCINvNtNtNtCsfwaGRd4cjqE_4core4iter8adapters3map12map_try_foldjNtCskFudTml27HW_12drm_panic_qr7VersionuINtNtNtBa_3ops12control_flow11ControlFlowB10_ENcB10_0NCINvNvNtNtNtB8_6traits8iterator8Iterator4find5checkB10_NCNvMB12_B10_13from_segments0E0E0B12_ 118 + // ffffffffb098bc10 1 _RNCINvNtNtNtCsfwaGRd4cjqE_4core4iter8adapters3map12map_try_foldjNtCskFudTml27HW_12drm_panic_qr7VersionuINtNtNtBa_3ops12control_flow11ControlFlowB10_ENcB10_0NCINvNvNtNtNtB8_6traits8iterator8Iterator4find5checkB10_NCNvMB12_B10_13from_segments0E0E0B12_ 115 119 char symbol_type = toupper(__symbol_type); 116 120 return symbol_type == 'T' || symbol_type == 'W' || symbol_type == 'D' || symbol_type == 'B' || 117 - __symbol_type == 'u' || __symbol_type == 'l' || __symbol_type == 'N'; 121 + __symbol_type == 'u' || __symbol_type == 'l' || __symbol_type == 'N' || __symbol_type == '1'; 118 122 } 119 123 120 124 static int prefix_underscores_count(const char *str)
+1
tools/testing/selftests/cachestat/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 test_cachestat 3 + tmpshmcstat
+2 -2
tools/testing/selftests/cachestat/test_cachestat.c
··· 226 226 int syscall_ret; 227 227 size_t compute_len = PS * 512; 228 228 struct cachestat_range cs_range = { PS, compute_len }; 229 - char *filename = "tmpshmcstat"; 229 + char *filename = "tmpshmcstat", *map; 230 230 struct cachestat cs; 231 231 bool ret = true; 232 232 int fd; ··· 257 257 } 258 258 break; 259 259 case FILE_MMAP: 260 - char *map = mmap(NULL, filesize, PROT_READ | PROT_WRITE, 260 + map = mmap(NULL, filesize, PROT_READ | PROT_WRITE, 261 261 MAP_SHARED, fd, 0); 262 262 263 263 if (map == MAP_FAILED) {
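The test_cachestat.c change is the classic C rule: a declaration cannot immediately follow a case label (a label must precede a statement, and before C23 a declaration is not one), so `map` is hoisted to the top of the function. A minimal illustration:

	#include <stdio.h>

	int main(void)
	{
		int mode = 1;
		char *msg;	/* hoisted, as the fix does with `map` */

		switch (mode) {
		case 0:
			msg = "zero";
			break;
		case 1:
			/* `char *msg = "one";` right here is invalid before
			 * C23: a case label must be followed by a statement,
			 * not a declaration. Assign instead, or open a
			 * braced block after the label. */
			msg = "one";
			break;
		default:
			msg = "other";
			break;
		}

		printf("%s\n", msg);
		return 0;
	}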
+4
tools/testing/selftests/drivers/net/netdevsim/Makefile
··· 20 20 udp_tunnel_nic.sh \ 21 21 # end of TEST_PROGS 22 22 23 + TEST_FILES := \ 24 + ethtool-common.sh 25 + # end of TEST_FILES 26 + 23 27 include ../../../lib.mk
+2
tools/testing/selftests/iommu/iommufd.c
··· 2638 2638 ASSERT_EQ(0, ioctl(self->fd, VFIO_IOMMU_MAP_DMA, &map_cmd)); 2639 2639 ASSERT_EQ(0, ioctl(self->fd, VFIO_IOMMU_UNMAP_DMA, &unmap_cmd)); 2640 2640 ASSERT_EQ(BUFFER_SIZE, unmap_cmd.size); 2641 + /* Unmapping an already-empty range succeeds */ 2642 + ASSERT_EQ(0, ioctl(self->fd, VFIO_IOMMU_UNMAP_DMA, &unmap_cmd)); 2641 2643 2642 2644 /* UNMAP_FLAG_ALL requires 0 iova/size */ 2643 2645 ASSERT_EQ(0, ioctl(self->fd, VFIO_IOMMU_MAP_DMA, &map_cmd));
+2 -2
tools/testing/selftests/iommu/iommufd_utils.h
··· 1044 1044 }; 1045 1045 1046 1046 while (nvevents--) { 1047 - if (!ioctl(fd, _IOMMU_TEST_CMD(IOMMU_TEST_OP_TRIGGER_VEVENT), 1048 - &trigger_vevent_cmd)) 1047 + if (ioctl(fd, _IOMMU_TEST_CMD(IOMMU_TEST_OP_TRIGGER_VEVENT), 1048 + &trigger_vevent_cmd)) 1049 1049 return -1; 1050 1050 } 1051 1051 return 0;
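Editor's note: the flipped condition above is worth spelling out. ioctl() returns 0 on success and -1 (with errno set) on failure, so the old `if (!ioctl(...)) return -1;` reported failure on every successful trigger while swallowing real failures. The fix restores the usual idiom; sketch with a hypothetical request name:

/* ioctl() convention: nonzero return means failure. */
if (ioctl(fd, SOME_REQUEST, &arg))
	return -1;	/* or -errno, as the vfio helpers below do */
return 0;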
+1 -1
tools/testing/selftests/net/bareudp.sh
··· 1 - #!/bin/sh 1 + #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 4 # Test various bareudp tunnel configurations.
+10 -2
tools/testing/selftests/net/gro.c
··· 754 754 static char exthdr_pck[sizeof(buf) + MIN_EXTHDR_SIZE]; 755 755 756 756 create_packet(buf, 0, 0, PAYLOAD_LEN, 0); 757 - add_ipv6_exthdr(buf, exthdr_pck, IPPROTO_HOPOPTS, ext_data1); 757 + add_ipv6_exthdr(buf, exthdr_pck, IPPROTO_DSTOPTS, ext_data1); 758 758 write_packet(fd, exthdr_pck, total_hdr_len + PAYLOAD_LEN + MIN_EXTHDR_SIZE, daddr); 759 759 760 760 create_packet(buf, PAYLOAD_LEN * 1, 0, PAYLOAD_LEN, 0); 761 - add_ipv6_exthdr(buf, exthdr_pck, IPPROTO_HOPOPTS, ext_data2); 761 + add_ipv6_exthdr(buf, exthdr_pck, IPPROTO_DSTOPTS, ext_data2); 762 762 write_packet(fd, exthdr_pck, total_hdr_len + PAYLOAD_LEN + MIN_EXTHDR_SIZE, daddr); 763 763 } 764 764 ··· 989 989 990 990 static void gro_sender(void) 991 991 { 992 + const int fin_delay_us = 100 * 1000; 992 993 static char fin_pkt[MAX_HDR_LEN]; 993 994 struct sockaddr_ll daddr = {}; 994 995 int txfd = -1; ··· 1033 1032 write_packet(txfd, fin_pkt, total_hdr_len, &daddr); 1034 1033 } else if (strcmp(testname, "tcp") == 0) { 1035 1034 send_changed_checksum(txfd, &daddr); 1035 + /* Adding sleep before sending FIN so that it is not 1036 + * received prior to other packets. 1037 + */ 1038 + usleep(fin_delay_us); 1036 1039 write_packet(txfd, fin_pkt, total_hdr_len, &daddr); 1037 1040 1038 1041 send_changed_seq(txfd, &daddr); 1042 + usleep(fin_delay_us); 1039 1043 write_packet(txfd, fin_pkt, total_hdr_len, &daddr); 1040 1044 1041 1045 send_changed_ts(txfd, &daddr); 1046 + usleep(fin_delay_us); 1042 1047 write_packet(txfd, fin_pkt, total_hdr_len, &daddr); 1043 1048 1044 1049 send_diff_opt(txfd, &daddr); 1050 + usleep(fin_delay_us); 1045 1051 write_packet(txfd, fin_pkt, total_hdr_len, &daddr); 1046 1052 } else if (strcmp(testname, "ip") == 0) { 1047 1053 send_changed_ECN(txfd, &daddr);
+23 -4
tools/testing/selftests/vfio/lib/include/vfio_util.h
··· 206 206 void vfio_pci_device_cleanup(struct vfio_pci_device *device); 207 207 void vfio_pci_device_reset(struct vfio_pci_device *device); 208 208 209 - void vfio_pci_dma_map(struct vfio_pci_device *device, 210 - struct vfio_dma_region *region); 211 - void vfio_pci_dma_unmap(struct vfio_pci_device *device, 212 - struct vfio_dma_region *region); 209 + int __vfio_pci_dma_map(struct vfio_pci_device *device, 210 + struct vfio_dma_region *region); 211 + int __vfio_pci_dma_unmap(struct vfio_pci_device *device, 212 + struct vfio_dma_region *region, 213 + u64 *unmapped); 214 + int __vfio_pci_dma_unmap_all(struct vfio_pci_device *device, u64 *unmapped); 215 + 216 + static inline void vfio_pci_dma_map(struct vfio_pci_device *device, 217 + struct vfio_dma_region *region) 218 + { 219 + VFIO_ASSERT_EQ(__vfio_pci_dma_map(device, region), 0); 220 + } 221 + 222 + static inline void vfio_pci_dma_unmap(struct vfio_pci_device *device, 223 + struct vfio_dma_region *region) 224 + { 225 + VFIO_ASSERT_EQ(__vfio_pci_dma_unmap(device, region, NULL), 0); 226 + } 227 + 228 + static inline void vfio_pci_dma_unmap_all(struct vfio_pci_device *device) 229 + { 230 + VFIO_ASSERT_EQ(__vfio_pci_dma_unmap_all(device, NULL), 0); 231 + } 213 232 214 233 void vfio_pci_config_access(struct vfio_pci_device *device, bool write, 215 234 size_t config, size_t size, void *data);
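Editor's note: the header now follows a common selftest split — the double-underscore functions return 0 or -errno so tests can exercise failure paths, while the plain-named inline wrappers assert success for callers that cannot tolerate failure. A hedged usage sketch (the overflow test later in this diff uses the raw variants exactly this way):

	u64 unmapped;
	int rc;

	/* Happy path: assert success via the wrapper. */
	vfio_pci_dma_map(device, &region);

	/* Negative path: inspect the raw return code instead. */
	rc = __vfio_pci_dma_unmap(device, &region, &unmapped);
	/* a negative-path test asserts on the specific -errno, e.g. -EOVERFLOW */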
+83 -25
tools/testing/selftests/vfio/lib/vfio_pci_device.c
··· 2 2 #include <dirent.h> 3 3 #include <fcntl.h> 4 4 #include <libgen.h> 5 + #include <stdint.h> 5 6 #include <stdlib.h> 6 7 #include <string.h> 7 8 #include <unistd.h> ··· 142 141 ioctl_assert(device->fd, VFIO_DEVICE_GET_IRQ_INFO, irq_info); 143 142 } 144 143 145 - static void vfio_iommu_dma_map(struct vfio_pci_device *device, 144 + static int vfio_iommu_dma_map(struct vfio_pci_device *device, 146 145 struct vfio_dma_region *region) 147 146 { 148 147 struct vfio_iommu_type1_dma_map args = { ··· 153 152 .size = region->size, 154 153 }; 155 154 156 - ioctl_assert(device->container_fd, VFIO_IOMMU_MAP_DMA, &args); 155 + if (ioctl(device->container_fd, VFIO_IOMMU_MAP_DMA, &args)) 156 + return -errno; 157 + 158 + return 0; 157 159 } 158 160 159 - static void iommufd_dma_map(struct vfio_pci_device *device, 161 + static int iommufd_dma_map(struct vfio_pci_device *device, 160 162 struct vfio_dma_region *region) 161 163 { 162 164 struct iommu_ioas_map args = { ··· 173 169 .ioas_id = device->ioas_id, 174 170 }; 175 171 176 - ioctl_assert(device->iommufd, IOMMU_IOAS_MAP, &args); 172 + if (ioctl(device->iommufd, IOMMU_IOAS_MAP, &args)) 173 + return -errno; 174 + 175 + return 0; 177 176 } 178 177 179 - void vfio_pci_dma_map(struct vfio_pci_device *device, 178 + int __vfio_pci_dma_map(struct vfio_pci_device *device, 180 179 struct vfio_dma_region *region) 181 180 { 182 + int ret; 183 + 182 183 if (device->iommufd) 183 - iommufd_dma_map(device, region); 184 + ret = iommufd_dma_map(device, region); 184 185 else 185 - vfio_iommu_dma_map(device, region); 186 + ret = vfio_iommu_dma_map(device, region); 187 + 188 + if (ret) 189 + return ret; 186 190 187 191 list_add(&region->link, &device->dma_regions); 192 + 193 + return 0; 188 194 } 189 195 190 - static void vfio_iommu_dma_unmap(struct vfio_pci_device *device, 191 - struct vfio_dma_region *region) 196 + static int vfio_iommu_dma_unmap(int fd, u64 iova, u64 size, u32 flags, 197 + u64 *unmapped) 192 198 { 193 199 struct vfio_iommu_type1_dma_unmap args = { 194 200 .argsz = sizeof(args), 195 - .iova = region->iova, 196 - .size = region->size, 201 + .iova = iova, 202 + .size = size, 203 + .flags = flags, 197 204 }; 198 205 199 - ioctl_assert(device->container_fd, VFIO_IOMMU_UNMAP_DMA, &args); 206 + if (ioctl(fd, VFIO_IOMMU_UNMAP_DMA, &args)) 207 + return -errno; 208 + 209 + if (unmapped) 210 + *unmapped = args.size; 211 + 212 + return 0; 200 213 } 201 214 202 - static void iommufd_dma_unmap(struct vfio_pci_device *device, 203 - struct vfio_dma_region *region) 215 + static int iommufd_dma_unmap(int fd, u64 iova, u64 length, u32 ioas_id, 216 + u64 *unmapped) 204 217 { 205 218 struct iommu_ioas_unmap args = { 206 219 .size = sizeof(args), 207 - .iova = region->iova, 208 - .length = region->size, 209 - .ioas_id = device->ioas_id, 220 + .iova = iova, 221 + .length = length, 222 + .ioas_id = ioas_id, 210 223 }; 211 224 212 - ioctl_assert(device->iommufd, IOMMU_IOAS_UNMAP, &args); 225 + if (ioctl(fd, IOMMU_IOAS_UNMAP, &args)) 226 + return -errno; 227 + 228 + if (unmapped) 229 + *unmapped = args.length; 230 + 231 + return 0; 213 232 } 214 233 215 - void vfio_pci_dma_unmap(struct vfio_pci_device *device, 216 - struct vfio_dma_region *region) 234 + int __vfio_pci_dma_unmap(struct vfio_pci_device *device, 235 + struct vfio_dma_region *region, u64 *unmapped) 217 236 { 218 - if (device->iommufd) 219 - iommufd_dma_unmap(device, region); 220 - else 221 - vfio_iommu_dma_unmap(device, region); 237 + int ret; 222 238 223 - list_del(&region->link); 239 + if (device->iommufd) 240 + ret = iommufd_dma_unmap(device->iommufd, region->iova, 241 + region->size, device->ioas_id, 242 + unmapped); 243 + else 244 + ret = vfio_iommu_dma_unmap(device->container_fd, region->iova, 245 + region->size, 0, unmapped); 246 + 247 + if (ret) 248 + return ret; 249 + 250 + list_del_init(&region->link); 251 + 252 + return 0; 253 + } 254 + 255 + int __vfio_pci_dma_unmap_all(struct vfio_pci_device *device, u64 *unmapped) 256 + { 257 + int ret; 258 + struct vfio_dma_region *curr, *next; 259 + 260 + if (device->iommufd) 261 + ret = iommufd_dma_unmap(device->iommufd, 0, UINT64_MAX, 262 + device->ioas_id, unmapped); 263 + else 264 + ret = vfio_iommu_dma_unmap(device->container_fd, 0, 0, 265 + VFIO_DMA_UNMAP_FLAG_ALL, unmapped); 266 + 267 + if (ret) 268 + return ret; 269 + 270 + list_for_each_entry_safe(curr, next, &device->dma_regions, link) 271 + list_del_init(&curr->link); 272 + 273 + return 0; 224 274 } 225 275 226 276 static void vfio_pci_region_get(struct vfio_pci_device *device, int index,
+94 -1
tools/testing/selftests/vfio/vfio_dma_mapping_test.c
··· 112 112 FIXTURE_VARIANT_ADD_ALL_IOMMU_MODES(anonymous_hugetlb_2mb, SZ_2M, MAP_HUGETLB | MAP_HUGE_2MB); 113 113 FIXTURE_VARIANT_ADD_ALL_IOMMU_MODES(anonymous_hugetlb_1gb, SZ_1G, MAP_HUGETLB | MAP_HUGE_1GB); 114 114 115 + #undef FIXTURE_VARIANT_ADD_IOMMU_MODE 116 + 115 117 FIXTURE_SETUP(vfio_dma_mapping_test) 116 118 { 117 119 self->device = vfio_pci_device_init(device_bdf, variant->iommu_mode); ··· 131 129 struct vfio_dma_region region; 132 130 struct iommu_mapping mapping; 133 131 u64 mapping_size = size; 132 + u64 unmapped; 134 133 int rc; 135 134 136 135 region.vaddr = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0); ··· 187 184 } 188 185 189 186 unmap: 190 - vfio_pci_dma_unmap(self->device, &region); 187 + rc = __vfio_pci_dma_unmap(self->device, &region, &unmapped); 188 + ASSERT_EQ(rc, 0); 189 + ASSERT_EQ(unmapped, region.size); 191 190 printf("Unmapped IOVA 0x%lx\n", region.iova); 192 191 ASSERT_EQ(INVALID_IOVA, __to_iova(self->device, region.vaddr)); 193 192 ASSERT_NE(0, iommu_mapping_get(device_bdf, region.iova, &mapping)); 194 193 195 194 ASSERT_TRUE(!munmap(region.vaddr, size)); 195 + } 196 + 197 + FIXTURE(vfio_dma_map_limit_test) { 198 + struct vfio_pci_device *device; 199 + struct vfio_dma_region region; 200 + size_t mmap_size; 201 + }; 202 + 203 + FIXTURE_VARIANT(vfio_dma_map_limit_test) { 204 + const char *iommu_mode; 205 + }; 206 + 207 + #define FIXTURE_VARIANT_ADD_IOMMU_MODE(_iommu_mode) \ 208 + FIXTURE_VARIANT_ADD(vfio_dma_map_limit_test, _iommu_mode) { \ 209 + .iommu_mode = #_iommu_mode, \ 210 + } 211 + 212 + FIXTURE_VARIANT_ADD_ALL_IOMMU_MODES(); 213 + 214 + #undef FIXTURE_VARIANT_ADD_IOMMU_MODE 215 + 216 + FIXTURE_SETUP(vfio_dma_map_limit_test) 217 + { 218 + struct vfio_dma_region *region = &self->region; 219 + u64 region_size = getpagesize(); 220 + 221 + /* 222 + * Over-allocate mmap by double the size to provide enough backing vaddr 223 + * for overflow tests 224 + */ 225 + self->mmap_size = 2 * region_size; 226 + 227 + self->device = vfio_pci_device_init(device_bdf, variant->iommu_mode); 228 + region->vaddr = mmap(NULL, self->mmap_size, PROT_READ | PROT_WRITE, 229 + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 230 + ASSERT_NE(region->vaddr, MAP_FAILED); 231 + 232 + /* One page prior to the end of address space */ 233 + region->iova = ~(iova_t)0 & ~(region_size - 1); 234 + region->size = region_size; 235 + } 236 + 237 + FIXTURE_TEARDOWN(vfio_dma_map_limit_test) 238 + { 239 + vfio_pci_device_cleanup(self->device); 240 + ASSERT_EQ(munmap(self->region.vaddr, self->mmap_size), 0); 241 + } 242 + 243 + TEST_F(vfio_dma_map_limit_test, unmap_range) 244 + { 245 + struct vfio_dma_region *region = &self->region; 246 + u64 unmapped; 247 + int rc; 248 + 249 + vfio_pci_dma_map(self->device, region); 250 + ASSERT_EQ(region->iova, to_iova(self->device, region->vaddr)); 251 + 252 + rc = __vfio_pci_dma_unmap(self->device, region, &unmapped); 253 + ASSERT_EQ(rc, 0); 254 + ASSERT_EQ(unmapped, region->size); 255 + } 256 + 257 + TEST_F(vfio_dma_map_limit_test, unmap_all) 258 + { 259 + struct vfio_dma_region *region = &self->region; 260 + u64 unmapped; 261 + int rc; 262 + 263 + vfio_pci_dma_map(self->device, region); 264 + ASSERT_EQ(region->iova, to_iova(self->device, region->vaddr)); 265 + 266 + rc = __vfio_pci_dma_unmap_all(self->device, &unmapped); 267 + ASSERT_EQ(rc, 0); 268 + ASSERT_EQ(unmapped, region->size); 269 + } 270 + 271 + TEST_F(vfio_dma_map_limit_test, overflow) 272 + { 273 + struct vfio_dma_region *region = &self->region; 274 + int rc; 275 + 276 + region->size = self->mmap_size; 277 + 278 + rc = __vfio_pci_dma_map(self->device, region); 279 + ASSERT_EQ(rc, -EOVERFLOW); 280 + 281 + rc = __vfio_pci_dma_unmap(self->device, region, NULL); 282 + ASSERT_EQ(rc, -EOVERFLOW); 196 283 } 197 284 198 285 int main(int argc, char *argv[])
+4 -4
tools/testing/selftests/vsock/vmtest.sh
··· 389 389 local rc 390 390 391 391 host_oops_cnt_before=$(dmesg | grep -c -i 'Oops') 392 - host_warn_cnt_before=$(dmesg --level=warn | wc -l) 392 + host_warn_cnt_before=$(dmesg --level=warn | grep -c -i 'vsock') 393 393 vm_oops_cnt_before=$(vm_ssh -- dmesg | grep -c -i 'Oops') 394 - vm_warn_cnt_before=$(vm_ssh -- dmesg --level=warn | wc -l) 394 + vm_warn_cnt_before=$(vm_ssh -- dmesg --level=warn | grep -c -i 'vsock') 395 395 396 396 name=$(echo "${1}" | awk '{ print $1 }') 397 397 eval test_"${name}" ··· 403 403 rc=$KSFT_FAIL 404 404 fi 405 405 406 - host_warn_cnt_after=$(dmesg --level=warn | wc -l) 406 + host_warn_cnt_after=$(dmesg --level=warn | grep -c -i 'vsock') 407 407 if [[ ${host_warn_cnt_after} -gt ${host_warn_cnt_before} ]]; then 408 408 echo "FAIL: kernel warning detected on host" | log_host "${name}" 409 409 rc=$KSFT_FAIL ··· 415 415 rc=$KSFT_FAIL 416 416 fi 417 417 418 - vm_warn_cnt_after=$(vm_ssh -- dmesg --level=warn | wc -l) 418 + vm_warn_cnt_after=$(vm_ssh -- dmesg --level=warn | grep -c -i 'vsock') 419 419 if [[ ${vm_warn_cnt_after} -gt ${vm_warn_cnt_before} ]]; then 420 420 echo "FAIL: kernel warning detected on vm" | log_host "${name}" 421 421 rc=$KSFT_FAIL
+1 -1
tools/tracing/latency/latency-collector.c
··· 1725 1725 "-n, --notrace\t\tIf latency is detected, do not print out the content of\n" 1726 1726 "\t\t\tthe trace file to standard output\n\n" 1727 1727 1728 - "-t, --threads NRTHR\tRun NRTHR threads for printing. Default is %d.\n\n" 1728 + "-e, --threads NRTHR\tRun NRTHR threads for printing. Default is %d.\n\n" 1729 1729 1730 1730 "-r, --random\t\tArbitrarily sleep a certain amount of time, default\n" 1731 1731 "\t\t\t%ld ms, before reading the trace file. The\n"