Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: nau8821: Fixes and driver cleanup

Merge series from Cristian Ciocaltea <cristian.ciocaltea@collabora.com>:

This series provides several fixes and cleanup patches for the Nuvoton
NAU88L21 audio codec driver.

Testing and validation have been performed on a Valve Steam Deck.

+3177 -1907
+2 -2
.mailmap
···
 Barry Song <baohua@kernel.org> <barry.song@analog.com>
 Bart Van Assche <bvanassche@acm.org> <bart.vanassche@sandisk.com>
 Bart Van Assche <bvanassche@acm.org> <bart.vanassche@wdc.com>
-Bartosz Golaszewski <brgl@bgdev.pl> <bgolaszewski@baylibre.com>
+Bartosz Golaszewski <brgl@kernel.org> <bartosz.golaszewski@linaro.org>
+Bartosz Golaszewski <brgl@kernel.org> <bgolaszewski@baylibre.com>
 Ben Dooks <ben-linux@fluff.org> <ben.dooks@simtec.co.uk>
 Ben Dooks <ben-linux@fluff.org> <ben.dooks@sifive.com>
 Ben Gardner <bgardner@wabtec.com>
···
 Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
 WangYuli <wangyuli@aosc.io> <wangyl5933@chinaunicom.cn>
 WangYuli <wangyuli@aosc.io> <wangyuli@deepin.org>
-WangYuli <wangyuli@aosc.io> <wangyuli@uniontech.com>
 Weiwen Hu <huweiwen@linux.alibaba.com> <sehuww@mail.scut.edu.cn>
 WeiXiong Liao <gmpy.liaowx@gmail.com> <liaoweixiong@allwinnertech.com>
 Wen Gong <quic_wgong@quicinc.com> <wgong@codeaurora.org>
+8
Documentation/arch/riscv/hwprobe.rst
···
 * :c:macro:`RISCV_HWPROBE_EXT_ZICBOP`: The Zicbop extension is supported, as
   ratified in commit 3dd606f ("Create cmobase-v1.0.pdf") of riscv-CMOs.

+* :c:macro:`RISCV_HWPROBE_EXT_ZILSD`: The Zilsd extension is supported as
+  defined in the RISC-V ISA manual starting from commit f88abf1 ("Integrating
+  load/store pair for RV32 with the main manual") of the riscv-isa-manual.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZCLSD`: The Zclsd extension is supported as
+  defined in the RISC-V ISA manual starting from commit f88abf1 ("Integrating
+  load/store pair for RV32 with the main manual") of the riscv-isa-manual.
+
 * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: Deprecated. Returns similar values to
   :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`, but the key was
   mistakenly classified as a bitmask rather than a value.
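Userspace queries these bits through the riscv_hwprobe(2) syscall. As a rough illustration only (not part of this series; it assumes a riscv32 build and uapi headers that already carry the new ZILSD/ZCLSD defines), a probe could look like:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>

int main(void)
{
	struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

	/* pair_count = 1, no cpuset: query behavior common to all online CPUs */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
		return 1;

	printf("Zilsd: %s\n", (pair.value & RISCV_HWPROBE_EXT_ZILSD) ? "yes" : "no");
	printf("Zclsd: %s\n", (pair.value & RISCV_HWPROBE_EXT_ZCLSD) ? "yes" : "no");
	return 0;
}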
+36
Documentation/devicetree/bindings/riscv/extensions.yaml
···
       guarantee on LR/SC sequences, as ratified in commit b1d806605f87
       ("Updated to ratified state.") of the riscv profiles specification.

+  - const: zilsd
+    description:
+      The standard Zilsd extension which provides support for aligned
+      register-pair load and store operations in 32-bit instruction
+      encodings, as ratified in commit f88abf1 ("Integrating
+      load/store pair for RV32 with the main manual") of riscv-isa-manual.
+
+  - const: zclsd
+    description:
+      The Zclsd extension implements the compressed (16-bit) version of the
+      Load/Store Pair for RV32. As with Zilsd, this extension was ratified
+      in commit f88abf1 ("Integrating load/store pair for RV32 with the
+      main manual") of riscv-isa-manual.
+
   - const: zk
     description:
       The standard Zk Standard Scalar cryptography extension as ratified
···
             anyOf:
               - const: v
               - const: zve32x
+  # Zclsd depends on Zilsd and Zca
+  - if:
+      contains:
+        anyOf:
+          - const: zclsd
+    then:
+      contains:
+        allOf:
+          - const: zilsd
+          - const: zca

 allOf:
   # Zcf extension does not exist on rv64
···
         not:
           contains:
             const: zcf
+  # Zilsd extension does not exist on rv64
+  - if:
+      properties:
+        riscv,isa-base:
+          contains:
+            const: rv64i
+    then:
+      properties:
+        riscv,isa-extensions:
+          not:
+            contains:
+              const: zilsd

 additionalProperties: true
 ...
+4
Documentation/devicetree/bindings/spi/allwinner,sun6i-a31-spi.yaml
···
   compatible:
     oneOf:
       - const: allwinner,sun50i-r329-spi
+      - const: allwinner,sun55i-a523-spi
       - const: allwinner,sun6i-a31-spi
       - const: allwinner,sun8i-h3-spi
       - items:
···
           - const: allwinner,sun20i-d1-spi-dbi
           - const: allwinner,sun50i-r329-spi-dbi
           - const: allwinner,sun50i-r329-spi
+      - items:
+          - const: allwinner,sun55i-a523-spi-dbi
+          - const: allwinner,sun55i-a523-spi

   reg:
     maxItems: 1
+7 -2
MAINTAINERS
···
 F:	Documentation/admin-guide/mm/kho.rst
 F:	Documentation/core-api/kho/*
 F:	include/linux/kexec_handover.h
+F:	include/linux/kho/
 F:	kernel/liveupdate/kexec_handover*
 F:	lib/test_kho.c
 F:	tools/testing/selftests/kho/
···
 F:	Documentation/core-api/liveupdate.rst
 F:	Documentation/mm/memfd_preservation.rst
 F:	Documentation/userspace-api/liveupdate.rst
+F:	include/linux/kho/abi/
 F:	include/linux/liveupdate.h
 F:	include/linux/liveupdate/
 F:	include/uapi/linux/liveupdate.h
···
 M:	David Hildenbrand <david@kernel.org>
 M:	Oscar Salvador <osalvador@suse.de>
 L:	linux-mm@kvack.org
+L:	linux-cxl@vger.kernel.org
 S:	Maintained
 F:	Documentation/admin-guide/mm/memory-hotplug.rst
 F:	Documentation/core-api/memory-hotplug.rst
···

 MEMORY MANAGEMENT - USERFAULTFD
 M:	Andrew Morton <akpm@linux-foundation.org>
+M:	Mike Rapoport <rppt@kernel.org>
 R:	Peter Xu <peterx@redhat.com>
 L:	linux-mm@kvack.org
 S:	Maintained
···
 F:	drivers/net/wwan/qcom_bam_dmux.c

 QUALCOMM BLUETOOTH DRIVER
-M:	Bartosz Golaszewski <brgl@bgdev.pl>
+M:	Bartosz Golaszewski <brgl@kernel.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	drivers/bluetooth/btqca.[ch]
···
 F:	include/linux/sunserialcore.h

 SPARSE CHECKER
-M:	"Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
+M:	Chris Li <sparse@chrisli.org>
 L:	linux-sparse@vger.kernel.org
 S:	Maintained
 W:	https://sparse.docs.kernel.org/
···
 F:	rust/kernel/regulator.rs
 F:	include/dt-bindings/regulator/
 F:	include/linux/regulator/
+F:	include/uapi/regulator/
 K:	regulator_get_optional

 VOLTAGE AND CURRENT REGULATOR IRQ HELPERS
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc4
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
+7
arch/loongarch/include/asm/loongarch.h
···
 #define CPUCFG2_LSPW		BIT(21)
 #define CPUCFG2_LAM		BIT(22)
 #define CPUCFG2_PTW		BIT(24)
+#define CPUCFG2_FRECIPE	BIT(25)
+#define CPUCFG2_DIV32		BIT(26)
+#define CPUCFG2_LAM_BH		BIT(27)
+#define CPUCFG2_LAMCAS		BIT(28)
+#define CPUCFG2_LLACQ_SCREL	BIT(29)
+#define CPUCFG2_SCQ		BIT(30)

 #define LOONGARCH_CPUCFG3	0x3
 #define CPUCFG3_CCDMA		BIT(0)
···
 #define CPUCFG3_SPW_HG_HF	BIT(11)
 #define CPUCFG3_RVA		BIT(12)
 #define CPUCFG3_RVAMAX		GENMASK(16, 13)
+#define CPUCFG3_DBAR_HINTS	BIT(17)
 #define CPUCFG3_ALDORDER_CAP	BIT(18) /* All address load ordered, capability */
 #define CPUCFG3_ASTORDER_CAP	BIT(19) /* All address store ordered, capability */
 #define CPUCFG3_ALDORDER_STA	BIT(20) /* All address load ordered, status */
+2 -2
arch/loongarch/kernel/head.S
···
 	.align 12

 SYM_CODE_START(kernel_entry)		# kernel entry point
+	UNWIND_HINT_END_OF_STACK

 	SETUP_TWINS
 	SETUP_MODES t0
···
  * function after setting up the stack and tp registers.
  */
 SYM_CODE_START(smpboot_entry)
+	UNWIND_HINT_END_OF_STACK

 	SETUP_TWINS
 	SETUP_MODES t0
···
 SYM_CODE_END(smpboot_entry)

 #endif /* CONFIG_SMP */
-
-SYM_ENTRY(kernel_entry_end, SYM_L_GLOBAL, SYM_A_NONE)
+10 -4
arch/loongarch/kernel/mcount_dyn.S
···
  * at the callsite, so there is no need to restore the T series regs.
  */
 ftrace_common_return:
-	PTR_L	ra, sp, PT_R1
 	PTR_L	a0, sp, PT_R4
 	PTR_L	a1, sp, PT_R5
 	PTR_L	a2, sp, PT_R6
···
 	PTR_L	a6, sp, PT_R10
 	PTR_L	a7, sp, PT_R11
 	PTR_L	fp, sp, PT_R22
-	PTR_L	t0, sp, PT_ERA
 	PTR_L	t1, sp, PT_R13
-	PTR_ADDI	sp, sp, PT_SIZE
 	bnez	t1, .Ldirect
+
+	PTR_L	ra, sp, PT_R1
+	PTR_L	t0, sp, PT_ERA
+	PTR_ADDI	sp, sp, PT_SIZE
 	jr	t0
 .Ldirect:
+	PTR_L	t0, sp, PT_R1
+	PTR_L	ra, sp, PT_ERA
+	PTR_ADDI	sp, sp, PT_SIZE
 	jr	t1
 SYM_CODE_END(ftrace_common)
···
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 SYM_CODE_START(ftrace_stub_direct_tramp)
 	UNWIND_HINT_UNDEFINED
-	jr	t0
+	move	t1, ra
+	move	ra, t0
+	jr	t1
 SYM_CODE_END(ftrace_stub_direct_tramp)
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+5
arch/loongarch/kernel/traps.c
···
 asmlinkage void noinstr do_ade(struct pt_regs *regs)
 {
 	irqentry_state_t state = irqentry_enter(regs);
+	unsigned int esubcode = FIELD_GET(CSR_ESTAT_ESUBCODE, regs->csr_estat);
+
+	if ((esubcode == EXSUBCODE_ADEM) && fixup_exception(regs))
+		goto out;

 	die_if_kernel("Kernel ade access", regs);
 	force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)regs->csr_badvaddr);

+out:
 	irqentry_exit(regs, state);
 }
+5 -22
arch/loongarch/kernel/unwind_orc.c
···
 }
 EXPORT_SYMBOL_GPL(unwind_start);

-static bool is_entry_func(unsigned long addr)
-{
-	extern u32 kernel_entry;
-	extern u32 kernel_entry_end;
-
-	return addr >= (unsigned long)&kernel_entry && addr < (unsigned long)&kernel_entry_end;
-}
-
 static inline unsigned long bt_address(unsigned long ra)
 {
 	extern unsigned long eentry;
-
-	if (__kernel_text_address(ra))
-		return ra;
-
-	if (__module_text_address(ra))
-		return ra;

 	if (ra >= eentry && ra < eentry + EXCCODE_INT_END * VECSIZE) {
 		unsigned long func;
···
 			break;
 		}

-		return func + offset;
+		ra = func + offset;
 	}

-	return ra;
+	if (__kernel_text_address(ra))
+		return ra;
+
+	return 0;
 }

 bool unwind_next_frame(struct unwind_state *state)
···

 	/* Don't let modules unload while we're reading their ORC data. */
 	guard(rcu)();
-
-	if (is_entry_func(state->pc))
-		goto end;

 	orc = orc_find(state->pc);
 	if (!orc) {
···
 		pr_err("cannot find unwind pc at %p\n", (void *)pc);
 		goto err;
 	}
-
-	if (!__kernel_text_address(state->pc))
-		goto err;

 	return true;
+4 -4
arch/loongarch/mm/cache.c
···

 static const pgprot_t protection_map[16] = {
 	[VM_NONE]			= __pgprot(_CACHE_CC | _PAGE_USER |
-						   _PAGE_PROTNONE | _PAGE_NO_EXEC |
-						   _PAGE_NO_READ),
+						   _PAGE_NO_EXEC | _PAGE_NO_READ |
+						   (_PAGE_PROTNONE ? : _PAGE_PRESENT)),
 	[VM_READ]			= __pgprot(_CACHE_CC | _PAGE_VALID |
 						   _PAGE_USER | _PAGE_PRESENT |
 						   _PAGE_NO_EXEC),
···
 	[VM_EXEC | VM_WRITE | VM_READ]	= __pgprot(_CACHE_CC | _PAGE_VALID |
 						   _PAGE_USER | _PAGE_PRESENT),
 	[VM_SHARED]			= __pgprot(_CACHE_CC | _PAGE_USER |
-						   _PAGE_PROTNONE | _PAGE_NO_EXEC |
-						   _PAGE_NO_READ),
+						   _PAGE_NO_EXEC | _PAGE_NO_READ |
+						   (_PAGE_PROTNONE ? : _PAGE_PRESENT)),
 	[VM_SHARED | VM_READ]		= __pgprot(_CACHE_CC | _PAGE_VALID |
 						   _PAGE_USER | _PAGE_PRESENT |
 						   _PAGE_NO_EXEC),
+47 -11
arch/loongarch/net/bpf_jit.c
···
 	stack_adjust = round_up(stack_adjust, 16);
 	stack_adjust += bpf_stack_adjust;

+	move_reg(ctx, LOONGARCH_GPR_T0, LOONGARCH_GPR_RA);
 	/* Reserve space for the move_imm + jirl instruction */
 	for (i = 0; i < LOONGARCH_LONG_JUMP_NINSNS; i++)
 		emit_insn(ctx, nop);
···
 	 * Call the next bpf prog and skip the first instruction
 	 * of TCC initialization.
 	 */
-	emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T3, 6);
+	emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T3, 7);
 	}
 }
···
 	 * goto out;
 	 */
 	tc_ninsn = insn ? ctx->offset[insn+1] - ctx->offset[insn] : ctx->offset[0];
+	emit_zext_32(ctx, a2, true);
+
 	off = offsetof(struct bpf_array, map.max_entries);
 	emit_insn(ctx, ldwu, t1, a1, off);
 	/* bgeu $a2, $t1, jmp_offset */
···
 		emit_insn(ctx, ldd, REG_TCC, LOONGARCH_GPR_SP, tcc_ptr_off);
 	}

+	if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
+		const struct btf_func_model *m;
+		int i;
+
+		m = bpf_jit_find_kfunc_model(ctx->prog, insn);
+		if (!m)
+			return -EINVAL;
+
+		for (i = 0; i < m->nr_args; i++) {
+			u8 reg = regmap[BPF_REG_1 + i];
+			bool sign = m->arg_flags[i] & BTF_FMODEL_SIGNED_ARG;
+
+			emit_abi_ext(ctx, reg, m->arg_size[i], sign);
+		}
+	}
+
 	move_addr(ctx, t1, func_addr);
 	emit_insn(ctx, jirl, LOONGARCH_GPR_RA, t1, 0);
···
 		return 0;
 	}

-	return emit_jump_and_link(&ctx, is_call ? LOONGARCH_GPR_T0 : LOONGARCH_GPR_ZERO, (u64)target);
+	return emit_jump_and_link(&ctx, is_call ? LOONGARCH_GPR_RA : LOONGARCH_GPR_ZERO, (u64)target);
 }

 static int emit_call(struct jit_ctx *ctx, u64 addr)
···
 {
 	int ret;
 	bool is_call;
+	unsigned long size = 0;
+	unsigned long offset = 0;
+	void *image = NULL;
+	char namebuf[KSYM_NAME_LEN];
 	u32 old_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
 	u32 new_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};

 	/* Only poking bpf text is supported. Since kernel function entry
 	 * is set up by ftrace, we rely on ftrace to poke kernel functions.
 	 */
-	if (!is_bpf_text_address((unsigned long)ip))
+	if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf))
 		return -ENOTSUPP;
+
+	image = ip - offset;
+
+	/* zero offset means we're poking bpf prog entry */
+	if (offset == 0) {
+		/* skip to the nop instruction in bpf prog entry:
+		 * move t0, ra
+		 * nop
+		 */
+		ip = image + LOONGARCH_INSN_SIZE;
+	}

 	is_call = old_t == BPF_MOD_CALL;
 	ret = emit_jump_or_nops(old_addr, ip, old_insns, is_call);
···

 	/* To traced function */
 	/* Ftrace jump skips 2 NOP instructions */
-	if (is_kernel_text((unsigned long)orig_call))
+	if (is_kernel_text((unsigned long)orig_call) ||
+	    is_module_text_address((unsigned long)orig_call))
 		orig_call += LOONGARCH_FENTRY_NBYTES;
 	/* Direct jump skips 5 NOP instructions */
 	else if (is_bpf_text_address((unsigned long)orig_call))
 		orig_call += LOONGARCH_BPF_FENTRY_NBYTES;
-	/* Module tracing not supported - cause kernel lockups */
-	else if (is_module_text_address((unsigned long)orig_call))
-		return -ENOTSUPP;

 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		move_addr(ctx, LOONGARCH_GPR_A0, (const u64)im);
···
 	emit_insn(ctx, ldd, LOONGARCH_GPR_FP, LOONGARCH_GPR_SP, 0);
 	emit_insn(ctx, addid, LOONGARCH_GPR_SP, LOONGARCH_GPR_SP, 16);

-	if (flags & BPF_TRAMP_F_SKIP_FRAME)
+	if (flags & BPF_TRAMP_F_SKIP_FRAME) {
 		/* return to parent function */
-		emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_RA, 0);
-	else
-		/* return to traced function */
+		move_reg(ctx, LOONGARCH_GPR_RA, LOONGARCH_GPR_T0);
 		emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T0, 0);
+	} else {
+		/* return to traced function */
+		move_reg(ctx, LOONGARCH_GPR_T1, LOONGARCH_GPR_RA);
+		move_reg(ctx, LOONGARCH_GPR_RA, LOONGARCH_GPR_T0);
+		emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T1, 0);
+	}
 	}

 	ret = ctx->idx;
+26
arch/loongarch/net/bpf_jit.h
···
 	emit_insn(ctx, addiw, reg, reg, 0);
 }

+/* Emit proper extension according to ABI requirements.
+ * Note that it requires a value of size `size` already resides in register `reg`.
+ */
+static inline void emit_abi_ext(struct jit_ctx *ctx, int reg, u8 size, bool sign)
+{
+	/* ABI requires unsigned char/short to be zero-extended */
+	if (!sign && (size == 1 || size == 2))
+		return;
+
+	switch (size) {
+	case 1:
+		emit_insn(ctx, extwb, reg, reg);
+		break;
+	case 2:
+		emit_insn(ctx, extwh, reg, reg);
+		break;
+	case 4:
+		emit_insn(ctx, addiw, reg, reg, 0);
+		break;
+	case 8:
+		break;
+	default:
+		pr_warn("bpf_jit: invalid size %d for extension\n", size);
+	}
+}
+
 static inline void move_addr(struct jit_ctx *ctx, enum loongarch_gpr rd, u64 addr)
 {
 	u64 imm_11_0, imm_31_12, imm_51_32, imm_63_52;
+1 -1
arch/powerpc/include/asm/hw_irq.h
···
 	if (IS_ENABLED(CONFIG_BOOKE))
 		wrtee(0);
 	else if (IS_ENABLED(CONFIG_PPC_8xx))
-		wrtspr(SPRN_NRI);
+		wrtspr_sync(SPRN_NRI);
 	else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
 		__mtmsrd(0, 1);
 	else
+1
arch/powerpc/include/asm/reg.h
···
 				     : "r" ((unsigned long)(v)) \
 				     : "memory")
 #define wrtspr(rn)	asm volatile("mtspr " __stringify(rn) ",2" : : : "memory")
+#define wrtspr_sync(rn)	asm volatile("mtspr " __stringify(rn) ",2; sync" : : : "memory")

 static inline void wrtee(unsigned long val)
 {
+2 -1
arch/powerpc/kernel/btext.c
···
 #include <asm/io.h>
 #include <asm/processor.h>
 #include <asm/udbg.h>
+#include <asm/setup.h>

 #define NO_SCROLL
···
 {
 	unsigned char *base = calc_base(locX << 3, locY << 4);
 	unsigned int font_index = c * 16;
-	const unsigned char *font = font_sun_8x16.data + font_index;
+	const unsigned char *font = PTRRELOC(font_sun_8x16.data) + font_index;
 	int rb = dispDeviceRowBytes;

 	rmci_maybe_on();
-15
arch/powerpc/kernel/entry_32.S
···
 .endm
 #endif

-.macro clr_ri trash
-#ifndef CONFIG_BOOKE
-#ifdef CONFIG_PPC_8xx
-	mtspr	SPRN_NRI, \trash
-#else
-	li	\trash, MSR_KERNEL & ~MSR_RI
-	mtmsr	\trash
-#endif
-#endif
-.endm
-
 	.globl	transfer_to_syscall
 transfer_to_syscall:
 	stw	r3, ORIG_GPR3(r1)
···
 	cmpwi	r3,0
 	REST_GPR(3, r1)
 syscall_exit_finish:
-	clr_ri	r4
 	mtspr	SPRN_SRR0,r7
 	mtspr	SPRN_SRR1,r8
···
 	/* Clear the exception marker on the stack to avoid confusing stacktrace */
 	li	r10, 0
 	stw	r10, 8(r11)
-	clr_ri	r10
 	mtspr	SPRN_SRR1,r9
 	mtspr	SPRN_SRR0,r12
 	REST_GPR(9, r11)
···
 .Lfast_user_interrupt_return:
 	lwz	r11,_NIP(r1)
 	lwz	r12,_MSR(r1)
-	clr_ri	r4
 	mtspr	SPRN_SRR0,r11
 	mtspr	SPRN_SRR1,r12
···
 	cmpwi	cr1,r3,0
 	lwz	r11,_NIP(r1)
 	lwz	r12,_MSR(r1)
-	clr_ri	r4
 	mtspr	SPRN_SRR0,r11
 	mtspr	SPRN_SRR1,r12
+4 -1
arch/powerpc/kernel/interrupt.c
···
 #else
 static inline bool exit_must_hard_disable(void)
 {
-	return false;
+	return true;
 }
 #endif
···

 		if (unlikely(stack_store))
 			__hard_EE_RI_disable();
+#else
+	} else {
+		__hard_EE_RI_disable();
 #endif /* CONFIG_PPC64 */
 	}
+19
arch/powerpc/kexec/core_64.c
···
 	mb();
 }

+
+/*
+ * The add_cpu() call in wake_offline_cpus() can fail as cpu_bootable()
+ * returns false for CPUs that fail the cpu_smt_thread_allowed() check
+ * or non primary threads if SMT is disabled. Re-enable SMT and set the
+ * number of SMT threads to threads per core.
+ */
+static void kexec_smt_reenable(void)
+{
+#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT)
+	lock_device_hotplug();
+	cpu_smt_num_threads = threads_per_core;
+	cpu_smt_control = CPU_SMT_ENABLED;
+	unlock_device_hotplug();
+#endif
+}
+
 /*
  * We need to make sure each present CPU is online. The next kernel will scan
  * the device tree and assume primary threads are online and query secondary
···
 static void wake_offline_cpus(void)
 {
 	int cpu = 0;
+
+	kexec_smt_reenable();

 	for_each_present_cpu(cpu) {
 		if (!cpu_online(cpu)) {
+5 -4
arch/powerpc/platforms/powernv/idle.c
···
 	u64 max_residency_ns = 0;
 	int i;

-	/* stop is not really architected, we only have p9,p10 drivers */
-	if (!pvr_version_is(PVR_POWER10) && !pvr_version_is(PVR_POWER9))
+	/* stop is not really architected, we only have p9,p10 and p11 drivers */
+	if (!pvr_version_is(PVR_POWER9) && !pvr_version_is(PVR_POWER10) &&
+	    !pvr_version_is(PVR_POWER11))
 		return;

 	/*
···
 		struct pnv_idle_states_t *state = &pnv_idle_states[i];
 		u64 psscr_rl = state->psscr_val & PSSCR_RL_MASK;

-		/* No deep loss driver implemented for POWER10 yet */
-		if (pvr_version_is(PVR_POWER10) &&
+		/* No deep loss driver implemented for POWER10 and POWER11 yet */
+		if ((pvr_version_is(PVR_POWER10) || pvr_version_is(PVR_POWER11)) &&
 		    state->flags & (OPAL_PM_TIMEBASE_STOP|OPAL_PM_LOSE_FULL_CONTEXT))
 			continue;
-1
arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh
···
 # SPDX-License-Identifier: GPL-2.0

 set -e
-set -o pipefail

 # To debug, uncomment the following line
 # set -x
-1
arch/powerpc/tools/gcc-check-mprofile-kernel.sh
···
 # SPDX-License-Identifier: GPL-2.0

 set -e
-set -o pipefail

 # To debug, uncomment the following line
 # set -x
+4 -4
arch/riscv/include/asm/atomic.h
···
 		"	add %[rc], %[p], %[a]\n"			\
 		"	sc." sfx ".rl %[rc], %[rc], %[c]\n"		\
 		"	bnez %[rc], 0b\n"				\
-		"	fence rw, rw\n"					\
+		RISCV_FULL_BARRIER					\
 		"1:\n"							\
 		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter)	\
 		: [a]"r" (_a), [u]"r" (_u)				\
···
 		"	addi %[rc], %[p], 1\n"				\
 		"	sc." sfx ".rl %[rc], %[rc], %[c]\n"		\
 		"	bnez %[rc], 0b\n"				\
-		"	fence rw, rw\n"					\
+		RISCV_FULL_BARRIER					\
 		"1:\n"							\
 		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter)	\
 		:							\
···
 		"	addi %[rc], %[p], -1\n"				\
 		"	sc." sfx ".rl %[rc], %[rc], %[c]\n"		\
 		"	bnez %[rc], 0b\n"				\
-		"	fence rw, rw\n"					\
+		RISCV_FULL_BARRIER					\
 		"1:\n"							\
 		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter)	\
 		:							\
···
 		"	bltz %[rc], 1f\n"				\
 		"	sc." sfx ".rl %[rc], %[rc], %[c]\n"		\
 		"	bnez %[rc], 0b\n"				\
-		"	fence rw, rw\n"					\
+		RISCV_FULL_BARRIER					\
 		"1:\n"							\
 		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter)	\
 		:							\
+2
arch/riscv/include/asm/hwcap.h
···
 #define RISCV_ISA_EXT_ZICBOP		99
 #define RISCV_ISA_EXT_SVRSW60T59B	100
 #define RISCV_ISA_EXT_ZALASR		101
+#define RISCV_ISA_EXT_ZILSD		102
+#define RISCV_ISA_EXT_ZCLSD		103

 #define RISCV_ISA_EXT_XLINUXENVCFG	127
+14 -2
arch/riscv/include/asm/pgtable.h
···
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address, pte_t *ptep)
 {
-	pte_t pte = __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+#ifdef CONFIG_SMP
+	pte_t pte = __pte(xchg(&ptep->pte, 0));
+#else
+	pte_t pte = *ptep;
+
+	set_pte(ptep, __pte(0));
+#endif

 	page_table_check_pte_clear(mm, pte);
···
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long address, pmd_t *pmdp)
 {
-	pmd_t pmd = __pmd(atomic_long_xchg((atomic_long_t *)pmdp, 0));
+#ifdef CONFIG_SMP
+	pmd_t pmd = __pmd(xchg(&pmdp->pmd, 0));
+#else
+	pmd_t pmd = *pmdp;
+
+	pmd_clear(pmdp);
+#endif

 	page_table_check_pmd_clear(mm, pmd);
+29
arch/riscv/include/asm/sbi.h
···
 	SBI_EXT_NACL = 0x4E41434C,
 	SBI_EXT_FWFT = 0x46574654,
 	SBI_EXT_MPXY = 0x4D505859,
+	SBI_EXT_DBTR = 0x44425452,

 	/* Experimentals extensions must lie within this range */
 	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
···
 #define SBI_MPXY_CHAN_CAP_SEND_WITH_RESP	BIT(3)
 #define SBI_MPXY_CHAN_CAP_SEND_WITHOUT_RESP	BIT(4)
 #define SBI_MPXY_CHAN_CAP_GET_NOTIFICATIONS	BIT(5)
+
+/* SBI debug triggers function IDs */
+enum sbi_ext_dbtr_fid {
+	SBI_EXT_DBTR_NUM_TRIGGERS = 0,
+	SBI_EXT_DBTR_SETUP_SHMEM,
+	SBI_EXT_DBTR_TRIG_READ,
+	SBI_EXT_DBTR_TRIG_INSTALL,
+	SBI_EXT_DBTR_TRIG_UPDATE,
+	SBI_EXT_DBTR_TRIG_UNINSTALL,
+	SBI_EXT_DBTR_TRIG_ENABLE,
+	SBI_EXT_DBTR_TRIG_DISABLE,
+};
+
+struct sbi_dbtr_data_msg {
+	unsigned long tstate;
+	unsigned long tdata1;
+	unsigned long tdata2;
+	unsigned long tdata3;
+};
+
+struct sbi_dbtr_id_msg {
+	unsigned long idx;
+};
+
+union sbi_dbtr_shmem_entry {
+	struct sbi_dbtr_data_msg data;
+	struct sbi_dbtr_id_msg id;
+};

 /* SBI spec version fields */
 #define SBI_SPEC_VERSION_DEFAULT	0x1
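Kernel code would reach the new extension through the existing sbi_ecall() helper from <asm/sbi.h>. A minimal sketch, not taken from this series (whether SBI_EXT_DBTR_NUM_TRIGGERS takes a tdata1 filter argument is an assumption here):

#include <asm/sbi.h>

/* Hypothetical helper: returns the number of debug triggers, or 0 on error */
static unsigned long dbtr_num_triggers(unsigned long tdata1_filter)
{
	struct sbiret ret;

	ret = sbi_ecall(SBI_EXT_DBTR, SBI_EXT_DBTR_NUM_TRIGGERS,
			tdata1_filter, 0, 0, 0, 0, 0);

	return ret.error ? 0 : ret.value;
}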
+3
arch/riscv/include/asm/vector.h
···
 #define riscv_v_thread_free(tsk)		do {} while (0)
 #define riscv_v_setup_ctx_cache()		do {} while (0)
 #define riscv_v_thread_alloc(tsk)		do {} while (0)
+#define get_cpu_vector_context()		do {} while (0)
+#define put_cpu_vector_context()		do {} while (0)
+#define riscv_v_vstate_set_restore(task, regs)	do {} while (0)

 #endif /* CONFIG_RISCV_ISA_V */
+3
arch/riscv/include/uapi/asm/hwprobe.h
···
 #define		RISCV_HWPROBE_EXT_ZABHA		(1ULL << 58)
 #define		RISCV_HWPROBE_EXT_ZALASR	(1ULL << 59)
 #define		RISCV_HWPROBE_EXT_ZICBOP	(1ULL << 60)
+#define		RISCV_HWPROBE_EXT_ZILSD		(1ULL << 61)
+#define		RISCV_HWPROBE_EXT_ZCLSD		(1ULL << 62)
+
 #define RISCV_HWPROBE_KEY_CPUPERF_0	5
 #define		RISCV_HWPROBE_MISALIGNED_UNKNOWN	(0 << 0)
 #define		RISCV_HWPROBE_MISALIGNED_EMULATED	(1 << 0)
+24
arch/riscv/kernel/cpufeature.c
···
 	return -EPROBE_DEFER;
 }

+static int riscv_ext_zilsd_validate(const struct riscv_isa_ext_data *data,
+				    const unsigned long *isa_bitmap)
+{
+	if (IS_ENABLED(CONFIG_64BIT))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int riscv_ext_zclsd_validate(const struct riscv_isa_ext_data *data,
+				    const unsigned long *isa_bitmap)
+{
+	if (IS_ENABLED(CONFIG_64BIT))
+		return -EINVAL;
+
+	if (__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_ZILSD) &&
+	    __riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_ZCA))
+		return 0;
+
+	return -EPROBE_DEFER;
+}
+
 static int riscv_vector_f_validate(const struct riscv_isa_ext_data *data,
 				   const unsigned long *isa_bitmap)
 {
···
 	__RISCV_ISA_EXT_DATA_VALIDATE(zcd, RISCV_ISA_EXT_ZCD, riscv_ext_zcd_validate),
 	__RISCV_ISA_EXT_DATA_VALIDATE(zcf, RISCV_ISA_EXT_ZCF, riscv_ext_zcf_validate),
 	__RISCV_ISA_EXT_DATA_VALIDATE(zcmop, RISCV_ISA_EXT_ZCMOP, riscv_ext_zca_depends),
+	__RISCV_ISA_EXT_DATA_VALIDATE(zclsd, RISCV_ISA_EXT_ZCLSD, riscv_ext_zclsd_validate),
+	__RISCV_ISA_EXT_DATA_VALIDATE(zilsd, RISCV_ISA_EXT_ZILSD, riscv_ext_zilsd_validate),
 	__RISCV_ISA_EXT_DATA(zba, RISCV_ISA_EXT_ZBA),
 	__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
 	__RISCV_ISA_EXT_DATA(zbc, RISCV_ISA_EXT_ZBC),
+41 -21
arch/riscv/kernel/signal.c
···
 #define restore_fp_state(task, regs) (0)
 #endif

-#ifdef CONFIG_RISCV_ISA_V
-
-static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
+static long save_v_state(struct pt_regs *regs, void __user *sc_vec)
 {
-	struct __riscv_ctx_hdr __user *hdr;
 	struct __sc_riscv_v_state __user *state;
 	void __user *datap;
 	long err;

-	hdr = *sc_vec;
-	/* Place state to the user's signal context space after the hdr */
-	state = (struct __sc_riscv_v_state __user *)(hdr + 1);
+	if (!IS_ENABLED(CONFIG_RISCV_ISA_V) ||
+	    !((has_vector() || has_xtheadvector()) &&
+	      riscv_v_vstate_query(regs)))
+		return 0;
+
+	/* Place state to the user's signal context space */
+	state = (struct __sc_riscv_v_state __user *)sc_vec;
 	/* Point datap right after the end of __sc_riscv_v_state */
 	datap = state + 1;
···
 	err |= __put_user((__force void *)datap, &state->v_state.datap);
 	/* Copy the whole vector content to user space datap. */
 	err |= __copy_to_user(datap, current->thread.vstate.datap, riscv_v_vsize);
-	/* Copy magic to the user space after saving all vector conetext */
-	err |= __put_user(RISCV_V_MAGIC, &hdr->magic);
-	err |= __put_user(riscv_v_sc_size, &hdr->size);
 	if (unlikely(err))
-		return err;
+		return -EFAULT;

-	/* Only progress the sv_vec if everything has done successfully */
-	*sc_vec += riscv_v_sc_size;
-	return 0;
+	/* Only return the size if everything has done successfully */
+	return riscv_v_sc_size;
 }

 /*
···
 	 */
 	return copy_from_user(current->thread.vstate.datap, datap, riscv_v_vsize);
 }
-#else
-#define save_v_state(task, regs) (0)
-#define __restore_v_state(task, regs) (0)
-#endif
+
+struct arch_ext_priv {
+	__u32 magic;
+	long (*save)(struct pt_regs *regs, void __user *sc_vec);
+};
+
+struct arch_ext_priv arch_ext_list[] = {
+	{
+		.magic = RISCV_V_MAGIC,
+		.save = &save_v_state,
+	},
+};
+
+const size_t nr_arch_exts = ARRAY_SIZE(arch_ext_list);

 static long restore_sigcontext(struct pt_regs *regs,
 			       struct sigcontext __user *sc)
···
 {
 	struct sigcontext __user *sc = &frame->uc.uc_mcontext;
 	struct __riscv_ctx_hdr __user *sc_ext_ptr = &sc->sc_extdesc.hdr;
-	long err;
+	struct arch_ext_priv *arch_ext;
+	long err, i, ext_size;

 	/* sc_regs is structured the same as the start of pt_regs */
 	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));
···
 	if (has_fpu())
 		err |= save_fp_state(regs, &sc->sc_fpregs);
 	/* Save the vector state. */
-	if ((has_vector() || has_xtheadvector()) && riscv_v_vstate_query(regs))
-		err |= save_v_state(regs, (void __user **)&sc_ext_ptr);
+	for (i = 0; i < nr_arch_exts; i++) {
+		arch_ext = &arch_ext_list[i];
+		if (!arch_ext->save)
+			continue;
+
+		ext_size = arch_ext->save(regs, sc_ext_ptr + 1);
+		if (ext_size <= 0) {
+			err |= ext_size;
+		} else {
+			err |= __put_user(arch_ext->magic, &sc_ext_ptr->magic);
+			err |= __put_user(ext_size, &sc_ext_ptr->size);
+			sc_ext_ptr = (void *)sc_ext_ptr + ext_size;
+		}
+	}
 	/* Write zero to fp-reserved space and check it on restore_sigcontext */
 	err |= __put_user(0, &sc->sc_extdesc.reserved);
 	/* And put END __riscv_ctx_hdr at the end. */
+2
arch/riscv/kernel/sys_hwprobe.c
···
 		EXT_KEY(ZBS);
 		EXT_KEY(ZCA);
 		EXT_KEY(ZCB);
+		EXT_KEY(ZCLSD);
 		EXT_KEY(ZCMOP);
 		EXT_KEY(ZICBOM);
 		EXT_KEY(ZICBOP);
···
 		EXT_KEY(ZIHINTNTL);
 		EXT_KEY(ZIHINTPAUSE);
 		EXT_KEY(ZIHPM);
+		EXT_KEY(ZILSD);
 		EXT_KEY(ZIMOP);
 		EXT_KEY(ZKND);
 		EXT_KEY(ZKNE);
+1 -1
arch/x86/kernel/cpu/microcode/amd.c
···
 	if (fam == 0x1a) {
 		if (model <= 0x2f ||
 		    (0x40 <= model && model <= 0x4f) ||
-		    (0x60 <= model && model <= 0x6f))
+		    (0x60 <= model && model <= 0x7f))
 			return true;
 	}
+1 -1
block/bfq-cgroup.c
···
 	blkg_rwstat_add_aux(&to->merged, &from->merged);
 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
 	blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
-	bfq_stat_add_aux(&from->time, &from->time);
+	bfq_stat_add_aux(&to->time, &from->time);
 	bfq_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
 	bfq_stat_add_aux(&to->avg_queue_size_samples,
 			 &from->avg_queue_size_samples);
+1 -1
block/bfq-iosched.h
···
  *		unused for the root group. Used to know whether there
  *		are groups with more than one active @bfq_entity
  *		(see the comments to the function
- *		bfq_bfqq_may_idle()).
+ *		bfq_better_to_idle()).
  * @rq_pos_tree: rbtree sorted by next_request position, used when
  *		determining if two or more queues have interleaving
  *		requests (see bfq_find_close_cooperator()).
+1 -1
block/blk-mq.c
···
 			struct blk_mq_hw_ctx, cpuhp_online);
 	int ret = 0;

-	if (blk_mq_hctx_has_online_cpu(hctx, cpu))
+	if (!hctx->nr_ctx || blk_mq_hctx_has_online_cpu(hctx, cpu))
 		return 0;

 	/*
+5 -3
crypto/seqiv.c
···
 	struct aead_geniv_ctx *ctx = crypto_aead_ctx(geniv);
 	struct aead_request *subreq = aead_request_ctx(req);
 	crypto_completion_t compl;
+	bool unaligned_info;
 	void *data;
 	u8 *info;
 	unsigned int ivsize = 8;
···
 		memcpy_sglist(req->dst, req->src,
 			      req->assoclen + req->cryptlen);

-	if (unlikely(!IS_ALIGNED((unsigned long)info,
-				 crypto_aead_alignmask(geniv) + 1))) {
+	unaligned_info = !IS_ALIGNED((unsigned long)info,
+				     crypto_aead_alignmask(geniv) + 1);
+	if (unlikely(unaligned_info)) {
 		info = kmemdup(req->iv, ivsize, req->base.flags &
 			       CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
 			       GFP_ATOMIC);
···
 	scatterwalk_map_and_copy(info, req->dst, req->assoclen, ivsize, 1);

 	err = crypto_aead_encrypt(subreq);
-	if (unlikely(info != req->iv))
+	if (unlikely(unaligned_info))
 		seqiv_aead_encrypt_complete2(req, err);
 	return err;
 }
+1 -1
drivers/block/rnbd/rnbd-clt.h
···
 	struct rnbd_queue	*hw_queues;
 	u32			device_id;
 	/* local Idr index - used to track minor number allocations. */
-	u32			clt_device_id;
+	int			clt_device_id;
 	struct mutex		lock;
 	enum rnbd_clt_dev_state	dev_state;
 	refcount_t		refcount;
+33 -5
drivers/block/ublk_drv.c
···
 	bool			canceling;
 	pid_t			ublksrv_tgid;
 	struct delayed_work	exit_work;
+	struct work_struct	partition_scan_work;

 	struct ublk_queue	*queues[];
 };
···
 static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
 		u16 q_id, u16 tag, struct ublk_io *io, size_t offset);
 static inline unsigned int ublk_req_build_flags(struct request *req);
+
+static void ublk_partition_scan_work(struct work_struct *work)
+{
+	struct ublk_device *ub =
+		container_of(work, struct ublk_device, partition_scan_work);
+
+	if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN,
+					     &ub->ub_disk->state)))
+		return;
+
+	mutex_lock(&ub->ub_disk->open_mutex);
+	bdev_disk_changed(ub->ub_disk, false);
+	mutex_unlock(&ub->ub_disk->open_mutex);
+}

 static inline struct ublksrv_io_desc *
 ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
···
 {
 	int i, j;

-	if (!(ub->dev_info.flags & (UBLK_F_SUPPORT_ZERO_COPY |
-					UBLK_F_AUTO_BUF_REG)))
+	if (!ublk_dev_need_req_ref(ub))
 		return false;

 	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
···
 	mutex_lock(&ub->mutex);
 	ublk_stop_dev_unlocked(ub);
 	mutex_unlock(&ub->mutex);
+	flush_work(&ub->partition_scan_work);
 	ublk_cancel_dev(ub);
 }
···

 	ublk_apply_params(ub);

-	/* don't probe partitions if any daemon task is un-trusted */
-	if (ub->unprivileged_daemons)
-		set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
+	/*
+	 * Suppress partition scan to avoid potential IO hang.
+	 *
+	 * If ublk server error occurs during partition scan, the IO may
+	 * wait while holding ub->mutex, which can deadlock with other
+	 * operations that need the mutex. Defer partition scan to async
+	 * work.
+	 * For unprivileged daemons, keep GD_SUPPRESS_PART_SCAN set
+	 * permanently.
+	 */
+	set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);

 	ublk_get_device(ub);
 	ub->dev_info.state = UBLK_S_DEV_LIVE;
···
 		goto out_put_cdev;

 	set_bit(UB_STATE_USED, &ub->state);
+
+	/* Schedule async partition scan for trusted daemons */
+	if (!ub->unprivileged_daemons)
+		schedule_work(&ub->partition_scan_work);

 out_put_cdev:
 	if (ret) {
···
 	mutex_init(&ub->mutex);
 	spin_lock_init(&ub->lock);
 	mutex_init(&ub->cancel_mutex);
+	INIT_WORK(&ub->partition_scan_work, ublk_partition_scan_work);

 	ret = ublk_alloc_dev_number(ub, header->dev_id);
 	if (ret < 0)
+9 -3
drivers/bluetooth/btusb.c
···
 		return -ENODEV;
 	}

-	data = devm_kzalloc(&intf->dev, sizeof(*data), GFP_KERNEL);
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data)
 		return -ENOMEM;
···
 		}
 	}

-	if (!data->intr_ep || !data->bulk_tx_ep || !data->bulk_rx_ep)
+	if (!data->intr_ep || !data->bulk_tx_ep || !data->bulk_rx_ep) {
+		kfree(data);
 		return -ENODEV;
+	}

 	if (id->driver_info & BTUSB_AMP) {
 		data->cmdreq_type = USB_TYPE_CLASS | 0x01;
···
 	data->recv_acl = hci_recv_frame;

 	hdev = hci_alloc_dev_priv(priv_size);
-	if (!hdev)
+	if (!hdev) {
+		kfree(data);
 		return -ENOMEM;
+	}

 	hdev->bus = HCI_USB;
 	hci_set_drvdata(hdev, data);
···
 	if (data->reset_gpio)
 		gpiod_put(data->reset_gpio);
 	hci_free_dev(hdev);
+	kfree(data);
 	return err;
 }
···
 	}

 	hci_free_dev(hdev);
+	kfree(data);
 }

 #ifdef CONFIG_PM
+4 -5
drivers/crypto/hisilicon/qm.c
···
 		return;
 	poll_data = &qm->poll_data[cqn];

-	while (QM_EQE_PHASE(dw0) != qm->status.eqc_phase) {
+	do {
 		poll_data->qp_finish_id[eqe_num] = dw0 & QM_EQE_CQN_MASK;
 		eqe_num++;
···
 			qm->status.eq_head++;
 		}

-		if (eqe_num == (eq_depth >> 1) - 1)
-			break;
-
 		dw0 = le32_to_cpu(eqe->dw0);
-	}
+		if (QM_EQE_PHASE(dw0) != qm->status.eqc_phase)
+			break;
+	} while (eqe_num < (eq_depth >> 1) - 1);

 	poll_data->eqe_num = eqe_num;
 	queue_work(qm->wq, &poll_data->work);
+5 -5
drivers/firewire/nosy.c
···

 static char driver_name[] = KBUILD_MODNAME;

+#define RCV_BUFFER_SIZE (16 * 1024)
+
 /* this is the physical layout of a PCL, its size is 128 bytes */
 struct pcl {
 	__le32 next;
···
 			  lynx->rcv_start_pcl, lynx->rcv_start_pcl_bus);
 	dma_free_coherent(&lynx->pci_device->dev, sizeof(struct pcl),
 			  lynx->rcv_pcl, lynx->rcv_pcl_bus);
-	dma_free_coherent(&lynx->pci_device->dev, PAGE_SIZE, lynx->rcv_buffer,
-			  lynx->rcv_buffer_bus);
+	dma_free_coherent(&lynx->pci_device->dev, RCV_BUFFER_SIZE,
+			  lynx->rcv_buffer, lynx->rcv_buffer_bus);

 	iounmap(lynx->registers);
 	pci_disable_device(dev);
 	lynx_put(lynx);
 }
-
-#define RCV_BUFFER_SIZE (16 * 1024)

 static int
 add_card(struct pci_dev *dev, const struct pci_device_id *unused)
···
 		dma_free_coherent(&lynx->pci_device->dev, sizeof(struct pcl),
 				  lynx->rcv_pcl, lynx->rcv_pcl_bus);
 	if (lynx->rcv_buffer)
-		dma_free_coherent(&lynx->pci_device->dev, PAGE_SIZE,
+		dma_free_coherent(&lynx->pci_device->dev, RCV_BUFFER_SIZE,
 				  lynx->rcv_buffer, lynx->rcv_buffer_bus);
 	iounmap(lynx->registers);
+1
drivers/firmware/efi/efi.c
···
 	MMAP_LOCK_INITIALIZER(efi_mm)
 	.page_table_lock	= __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
 	.mmlist			= LIST_HEAD_INIT(efi_mm.mmlist),
+	.user_ns		= &init_user_ns,
 	.cpu_bitmap		= { [BITS_TO_LONGS(NR_CPUS)] = 0},
 #ifdef CONFIG_SCHED_MM_CID
 	.mm_cid.lock		= __RAW_SPIN_LOCK_UNLOCKED(efi_mm.mm_cid.lock),
+4 -4
drivers/firmware/efi/libstub/gop.c
···
 	status = efi_bs_call(handle_protocol, handle, &EFI_EDID_ACTIVE_PROTOCOL_GUID,
 			     (void **)&active_edid);
 	if (status == EFI_SUCCESS) {
-		gop_size_of_edid = active_edid->size_of_edid;
-		gop_edid = active_edid->edid;
+		gop_size_of_edid = efi_table_attr(active_edid, size_of_edid);
+		gop_edid = efi_table_attr(active_edid, edid);
 	} else {
 		status = efi_bs_call(handle_protocol, handle,
 				     &EFI_EDID_DISCOVERED_PROTOCOL_GUID,
 				     (void **)&discovered_edid);
 		if (status == EFI_SUCCESS) {
-			gop_size_of_edid = discovered_edid->size_of_edid;
-			gop_edid = discovered_edid->edid;
+			gop_size_of_edid = efi_table_attr(discovered_edid, size_of_edid);
+			gop_edid = efi_table_attr(discovered_edid, edid);
 		}
 	}
+3 -2
drivers/gpu/drm/drm_gem_shmem_helper.c
···
 /**
  * drm_gem_shmem_init - Initialize an allocated object.
  * @dev: DRM device
- * @obj: The allocated shmem GEM object.
+ * @shmem: The allocated shmem GEM object.
+ * @size: Buffer size in bytes
  *
  * Returns:
  * 0 on success, or a negative error code on failure.
···

 MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
 MODULE_IMPORT_NS("DMA_BUF");
-MODULE_LICENSE("GPL v2");
+MODULE_LICENSE("GPL");
+13 -4
drivers/gpu/drm/drm_pagemap.c
···
  * Copyright © 2024-2025 Intel Corporation
  */

+#include <linux/dma-fence.h>
 #include <linux/dma-mapping.h>
 #include <linux/migrate.h>
 #include <linux/pagemap.h>
···
 		drm_pagemap_get_devmem_page(page, zdd);
 	}

-	err = ops->copy_to_devmem(pages, pagemap_addr, npages);
+	err = ops->copy_to_devmem(pages, pagemap_addr, npages,
+				  devmem_allocation->pre_migrate_fence);
 	if (err)
 		goto err_finalize;
+
+	dma_fence_put(devmem_allocation->pre_migrate_fence);
+	devmem_allocation->pre_migrate_fence = NULL;

 	/* Upon success bind devmem allocation to range and zdd */
 	devmem_allocation->timeslice_expiration = get_jiffies_64() +
···
 	for (i = 0; i < npages; ++i)
 		pages[i] = migrate_pfn_to_page(src[i]);

-	err = ops->copy_to_ram(pages, pagemap_addr, npages);
+	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
 	if (err)
 		goto err_finalize;
···
 	for (i = 0; i < npages; ++i)
 		pages[i] = migrate_pfn_to_page(migrate.src[i]);

-	err = ops->copy_to_ram(pages, pagemap_addr, npages);
+	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
 	if (err)
 		goto err_finalize;
···
 * @ops: Pointer to the operations structure for GPU SVM device memory
 * @dpagemap: The struct drm_pagemap we're allocating from.
 * @size: Size of device memory allocation
+ * @pre_migrate_fence: Fence to wait for or pipeline behind before migration starts.
+ * (May be NULL).
 */
 void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation,
 			     struct device *dev, struct mm_struct *mm,
 			     const struct drm_pagemap_devmem_ops *ops,
-			     struct drm_pagemap *dpagemap, size_t size)
+			     struct drm_pagemap *dpagemap, size_t size,
+			     struct dma_fence *pre_migrate_fence)
 {
 	init_completion(&devmem_allocation->detached);
 	devmem_allocation->dev = dev;
···
 	devmem_allocation->ops = ops;
 	devmem_allocation->dpagemap = dpagemap;
 	devmem_allocation->size = size;
+	devmem_allocation->pre_migrate_fence = pre_migrate_fence;
 }
 EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
+17 -20
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···
 		vma = eb_lookup_vma(eb, eb->exec[i].handle);
 		if (IS_ERR(vma)) {
 			err = PTR_ERR(vma);
-			goto err;
+			return err;
 		}

 		err = eb_validate_vma(eb, &eb->exec[i], vma);
 		if (unlikely(err)) {
 			i915_vma_put(vma);
-			goto err;
+			return err;
 		}

 		err = eb_add_vma(eb, &current_batch, i, vma);
···

 		if (i915_gem_object_is_userptr(vma->obj)) {
 			err = i915_gem_object_userptr_submit_init(vma->obj);
-			if (err) {
-				if (i + 1 < eb->buffer_count) {
-					/*
-					 * Execbuffer code expects last vma entry to be NULL,
-					 * since we already initialized this entry,
-					 * set the next value to NULL or we mess up
-					 * cleanup handling.
-					 */
-					eb->vma[i + 1].vma = NULL;
-				}
-
+			if (err)
 				return err;
-			}

 			eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
 			eb->args->flags |= __EXEC_USERPTR_USED;
···
 	}

 	return 0;
-
-err:
-	eb->vma[i].vma = NULL;
-	return err;
 }

 static int eb_lock_vmas(struct i915_execbuffer *eb)
···

 	eb.exec = exec;
 	eb.vma = (struct eb_vma *)(exec + args->buffer_count + 1);
-	eb.vma[0].vma = NULL;
+	memset(eb.vma, 0, (args->buffer_count + 1) * sizeof(struct eb_vma));
+
 	eb.batch_pool = NULL;

 	eb.invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
···
 	if (err)
 		return err;

-	/* Allocate extra slots for use by the command parser */
+	/*
+	 * Allocate extra slots for use by the command parser.
+	 *
+	 * Note that this allocation handles two different arrays (the
+	 * exec2_list array, and the eventual eb.vma array introduced in
+	 * i915_gem_do_execbuffer()), that reside in virtually contiguous
+	 * memory. Also note that the allocation intentionally doesn't fill the
+	 * area with zeros, because the exec2_list part doesn't need to be, as
+	 * it's immediately overwritten by user data a few lines below.
+	 * However, the eb.vma part is explicitly zeroed later in
+	 * i915_gem_do_execbuffer().
+	 */
 	exec2_list = kvmalloc_array(count + 2, eb_element_size(),
 				    __GFP_NOWARN | GFP_KERNEL);
 	if (exec2_list == NULL) {
+11
drivers/gpu/drm/imagination/pvr_gem.c
···
 	drm_gem_shmem_object_free(obj);
 }

+static struct dma_buf *pvr_gem_export(struct drm_gem_object *obj, int flags)
+{
+	struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(obj);
+
+	if (pvr_obj->flags & DRM_PVR_BO_PM_FW_PROTECT)
+		return ERR_PTR(-EPERM);
+
+	return drm_gem_prime_export(obj, flags);
+}
+
 static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma)
 {
 	struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(gem_obj);
···
 static const struct drm_gem_object_funcs pvr_gem_object_funcs = {
 	.free = pvr_gem_object_free,
 	.print_info = drm_gem_shmem_object_print_info,
+	.export = pvr_gem_export,
 	.pin = drm_gem_shmem_object_pin,
 	.unpin = drm_gem_shmem_object_unpin,
 	.get_sg_table = drm_gem_shmem_object_get_sg_table,
+12 -1
drivers/gpu/drm/msm/adreno/a6xx_catalog.c
···
 	REG_A6XX_UCHE_MODE_CNTL,
 	REG_A6XX_RB_NC_MODE_CNTL,
 	REG_A6XX_RB_CMP_DBG_ECO_CNTL,
-	REG_A7XX_GRAS_NC_MODE_CNTL,
 	REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE,
 	REG_A6XX_UCHE_GBIF_GX_CONFIG,
 	REG_A6XX_UCHE_CLIENT_PF,
···
 	REG_A6XX_TPL1_BICUBIC_WEIGHTS_TABLE(2),
 	REG_A6XX_TPL1_BICUBIC_WEIGHTS_TABLE(3),
 	REG_A6XX_TPL1_BICUBIC_WEIGHTS_TABLE(4),
+	REG_A6XX_RBBM_PERFCTR_CNTL,
 	REG_A6XX_TPL1_NC_MODE_CNTL,
 	REG_A6XX_SP_NC_MODE_CNTL,
 	REG_A6XX_CP_DBG_ECO_CNTL,
···

 DECLARE_ADRENO_REGLIST_LIST(a750_ifpc_reglist);

+static const struct adreno_reglist_pipe a7xx_dyn_pwrup_reglist_regs[] = {
+	{ REG_A7XX_GRAS_NC_MODE_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+};
+
+DECLARE_ADRENO_REGLIST_PIPE_LIST(a7xx_dyn_pwrup_reglist);
+
 static const struct adreno_info a7xx_gpus[] = {
 	{
 		.chip_ids = ADRENO_CHIP_IDS(0x07000200),
···
 			.hwcg = a730_hwcg,
 			.protect = &a730_protect,
 			.pwrup_reglist = &a7xx_pwrup_reglist,
+			.dyn_pwrup_reglist = &a7xx_dyn_pwrup_reglist,
 			.gbif_cx = a640_gbif,
 			.gmu_cgc_mode = 0x00020000,
 		},
···
 			.hwcg = a740_hwcg,
 			.protect = &a730_protect,
 			.pwrup_reglist = &a7xx_pwrup_reglist,
+			.dyn_pwrup_reglist = &a7xx_dyn_pwrup_reglist,
 			.gbif_cx = a640_gbif,
 			.gmu_chipid = 0x7020100,
 			.gmu_cgc_mode = 0x00020202,
···
 			.hwcg = a740_hwcg,
 			.protect = &a730_protect,
 			.pwrup_reglist = &a7xx_pwrup_reglist,
+			.dyn_pwrup_reglist = &a7xx_dyn_pwrup_reglist,
 			.ifpc_reglist = &a750_ifpc_reglist,
 			.gbif_cx = a640_gbif,
 			.gmu_chipid = 0x7050001,
···
 		.a6xx = &(const struct a6xx_info) {
 			.protect = &a730_protect,
 			.pwrup_reglist = &a7xx_pwrup_reglist,
+			.dyn_pwrup_reglist = &a7xx_dyn_pwrup_reglist,
 			.ifpc_reglist = &a750_ifpc_reglist,
 			.gbif_cx = a640_gbif,
 			.gmu_chipid = 0x7090100,
···
 			.hwcg = a740_hwcg,
 			.protect = &a730_protect,
 			.pwrup_reglist = &a7xx_pwrup_reglist,
+			.dyn_pwrup_reglist = &a7xx_dyn_pwrup_reglist,
 			.gbif_cx = a640_gbif,
 			.gmu_chipid = 0x70f0000,
 			.gmu_cgc_mode = 0x00020222,
+40 -12
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
···
 			min_acc_len_64b << 3 |
 			hbb_lo << 1 | ubwc_mode);

-	if (adreno_is_a7xx(adreno_gpu))
-		gpu_write(gpu, REG_A7XX_GRAS_NC_MODE_CNTL,
-			  FIELD_PREP(GENMASK(8, 5), hbb_lo));
+	if (adreno_is_a7xx(adreno_gpu)) {
+		for (u32 pipe_id = PIPE_BR; pipe_id <= PIPE_BV; pipe_id++) {
+			gpu_write(gpu, REG_A7XX_CP_APERTURE_CNTL_HOST,
+				  A7XX_CP_APERTURE_CNTL_HOST_PIPE(pipe_id));
+			gpu_write(gpu, REG_A7XX_GRAS_NC_MODE_CNTL,
+				  FIELD_PREP(GENMASK(8, 5), hbb_lo));
+		}
+		gpu_write(gpu, REG_A7XX_CP_APERTURE_CNTL_HOST,
+			  A7XX_CP_APERTURE_CNTL_HOST_PIPE(PIPE_NONE));
+	}

 	gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL,
 		  min_acc_len_64b << 23 | hbb_lo << 21);
···
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 	const struct adreno_reglist_list *reglist;
+	const struct adreno_reglist_pipe_list *dyn_pwrup_reglist;
 	void *ptr = a6xx_gpu->pwrup_reglist_ptr;
 	struct cpu_gpu_lock *lock = ptr;
 	u32 *dest = (u32 *)&lock->regs[0];
+	u32 dyn_pwrup_reglist_count = 0;
 	int i;

 	lock->gpu_req = lock->cpu_req = lock->turn = 0;

 	reglist = adreno_gpu->info->a6xx->ifpc_reglist;
-	lock->ifpc_list_len = reglist->count;
+	if (reglist) {
+		lock->ifpc_list_len = reglist->count;

-	/*
-	 * For each entry in each of the lists, write the offset and the current
-	 * register value into the GPU buffer
-	 */
-	for (i = 0; i < reglist->count; i++) {
-		*dest++ = reglist->regs[i];
-		*dest++ = gpu_read(gpu, reglist->regs[i]);
+		/*
+		 * For each entry in each of the lists, write the offset and the current
+		 * register value into the GPU buffer
+		 */
+		for (i = 0; i < reglist->count; i++) {
+			*dest++ = reglist->regs[i];
+			*dest++ = gpu_read(gpu, reglist->regs[i]);
+		}
 	}

 	reglist = adreno_gpu->info->a6xx->pwrup_reglist;
···
 	 * (<aperture, shifted 12 bits> <address> <data>), and the length is
 	 * stored as number for triplets in dynamic_list_len.
 	 */
-	lock->dynamic_list_len = 0;
+	dyn_pwrup_reglist = adreno_gpu->info->a6xx->dyn_pwrup_reglist;
+	if (dyn_pwrup_reglist) {
+		for (u32 pipe_id = PIPE_BR; pipe_id <= PIPE_BV; pipe_id++) {
+			gpu_write(gpu, REG_A7XX_CP_APERTURE_CNTL_HOST,
+				  A7XX_CP_APERTURE_CNTL_HOST_PIPE(pipe_id));
+			for (i = 0; i < dyn_pwrup_reglist->count; i++) {
+				if ((dyn_pwrup_reglist->regs[i].pipe & BIT(pipe_id)) == 0)
+					continue;
+				*dest++ = A7XX_CP_APERTURE_CNTL_HOST_PIPE(pipe_id);
+				*dest++ = dyn_pwrup_reglist->regs[i].offset;
+				*dest++ = gpu_read(gpu, dyn_pwrup_reglist->regs[i].offset);
+				dyn_pwrup_reglist_count++;
+			}
+		}
+		gpu_write(gpu, REG_A7XX_CP_APERTURE_CNTL_HOST,
+			  A7XX_CP_APERTURE_CNTL_HOST_PIPE(PIPE_NONE));
+	}
+	lock->dynamic_list_len = dyn_pwrup_reglist_count;
 }

 static int a7xx_preempt_start(struct msm_gpu *gpu)
+1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
···
 	const struct adreno_reglist *hwcg;
 	const struct adreno_protect *protect;
 	const struct adreno_reglist_list *pwrup_reglist;
+	const struct adreno_reglist_pipe_list *dyn_pwrup_reglist;
 	const struct adreno_reglist_list *ifpc_reglist;
 	const struct adreno_reglist *gbif_cx;
 	const struct adreno_reglist_pipe *nonctxt_reglist;
+2 -2
drivers/gpu/drm/msm/adreno/a6xx_preempt.c
···
 		gpu->vm, &a6xx_gpu->preempt_postamble_bo,
 		&a6xx_gpu->preempt_postamble_iova);

-	preempt_prepare_postamble(a6xx_gpu);
-
 	if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
 		goto fail;
+
+	preempt_prepare_postamble(a6xx_gpu);

 	timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
+13
drivers/gpu/drm/msm/adreno/adreno_gpu.h
···
 	.count = ARRAY_SIZE(name ## _regs), \
 };

+struct adreno_reglist_pipe_list {
+	/** @reg: List of register **/
+	const struct adreno_reglist_pipe *regs;
+	/** @count: Number of registers in the list **/
+	u32 count;
+};
+
+#define DECLARE_ADRENO_REGLIST_PIPE_LIST(name) \
+static const struct adreno_reglist_pipe_list name = { \
+	.regs = name ## _regs, \
+	.count = ARRAY_SIZE(name ## _regs), \
+};
+
 struct adreno_gpu {
 	struct msm_gpu base;
 	const struct adreno_info *info;
+7 -31
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
···
 			    struct dpu_crtc_state *crtc_state)
 {
 	struct dpu_crtc_mixer *m;
-	u32 crcs[CRTC_QUAD_MIXERS];
+	u32 crcs[CRTC_DUAL_MIXERS];

 	int rc = 0;
 	int i;
···
 	struct drm_display_mode *mode = &crtc_state->adjusted_mode;
 	struct msm_display_topology topology = {0};
 	struct drm_encoder *drm_enc;
-	u32 num_rt_intf;

 	drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask)
 		dpu_encoder_update_topology(drm_enc, &topology, crtc_state->state,
···
 	 * Dual display
 	 * 2 LM, 2 INTF ( Split display using 2 interfaces)
 	 *
-	 * If DSC is enabled, try to use 4:4:2 topology if there is enough
-	 * resource. Otherwise, use 2:2:2 topology.
-	 *
 	 * Single display
 	 * 1 LM, 1 INTF
 	 * 2 LM, 1 INTF (stream merge to support high resolution interfaces)
 	 *
-	 * If DSC is enabled, use 2:2:1 topology
+	 * If DSC is enabled, use 2 LMs for 2:2:1 topology
 	 *
 	 * Add dspps to the reservation requirements if ctm is requested
 	 *
···
 	 * (mode->hdisplay > MAX_HDISPLAY_SPLIT) check.
 	 */

-	num_rt_intf = topology.num_intf;
-	if (topology.cwb_enabled)
-		num_rt_intf--;
-
-	if (topology.num_dsc) {
-		if (dpu_kms->catalog->dsc_count >= num_rt_intf * 2)
-			topology.num_dsc = num_rt_intf * 2;
-		else
-			topology.num_dsc = num_rt_intf;
-		topology.num_lm = topology.num_dsc;
-	} else if (num_rt_intf == 2) {
+	if (topology.num_intf == 2 && !topology.cwb_enabled)
 		topology.num_lm = 2;
-	} else if (dpu_kms->catalog->caps->has_3d_merge) {
+	else if (topology.num_dsc == 2)
+		topology.num_lm = 2;
+	else if (dpu_kms->catalog->caps->has_3d_merge)
 		topology.num_lm = (mode->hdisplay > MAX_HDISPLAY_SPLIT) ? 2 : 1;
-	} else {
+	else
 		topology.num_lm = 1;
-	}

 	if (crtc_state->ctm)
 		topology.num_dspp = topology.num_lm;
···
 	}

 	return 0;
-}
-
-/**
- * dpu_crtc_get_num_lm - Get mixer number in this CRTC pipeline
- * @state: Pointer to drm crtc state object
- */
-unsigned int dpu_crtc_get_num_lm(const struct drm_crtc_state *state)
-{
-	struct dpu_crtc_state *cstate = to_dpu_crtc_state(state);
-
-	return cstate->num_mixers;
 }

 #ifdef CONFIG_DEBUG_FS
+3 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h
··· 210 210 211 211 bool bw_control; 212 212 bool bw_split_vote; 213 - struct drm_rect lm_bounds[CRTC_QUAD_MIXERS]; 213 + struct drm_rect lm_bounds[CRTC_DUAL_MIXERS]; 214 214 215 215 uint64_t input_fence_timeout_ns; 216 216 ··· 218 218 219 219 /* HW Resources reserved for the crtc */ 220 220 u32 num_mixers; 221 - struct dpu_crtc_mixer mixers[CRTC_QUAD_MIXERS]; 221 + struct dpu_crtc_mixer mixers[CRTC_DUAL_MIXERS]; 222 222 223 223 u32 num_ctls; 224 - struct dpu_hw_ctl *hw_ctls[CRTC_QUAD_MIXERS]; 224 + struct dpu_hw_ctl *hw_ctls[CRTC_DUAL_MIXERS]; 225 225 226 226 enum dpu_crtc_crc_source crc_source; 227 227 int crc_frame_skip_count; ··· 266 266 } 267 267 268 268 void dpu_crtc_frame_event_cb(struct drm_crtc *crtc, u32 event); 269 - 270 - unsigned int dpu_crtc_get_num_lm(const struct drm_crtc_state *state); 271 269 272 270 #endif /* _DPU_CRTC_H_ */
+20 -9
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 55 55 #define MAX_PHYS_ENCODERS_PER_VIRTUAL \ 56 56 (MAX_H_TILES_PER_DISPLAY * NUM_PHYS_ENCODER_TYPES) 57 57 58 - #define MAX_CHANNELS_PER_ENC 4 58 + #define MAX_CHANNELS_PER_ENC 2 59 59 #define MAX_CWB_PER_ENC 2 60 60 61 61 #define IDLE_SHORT_TIMEOUT 1 ··· 661 661 struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); 662 662 struct msm_drm_private *priv = dpu_enc->base.dev->dev_private; 663 663 struct msm_display_info *disp_info = &dpu_enc->disp_info; 664 + struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms); 664 665 struct drm_connector *connector; 665 666 struct drm_connector_state *conn_state; 666 667 struct drm_framebuffer *fb; ··· 675 674 676 675 dsc = dpu_encoder_get_dsc_config(drm_enc); 677 676 678 - /* 679 - * Set DSC number as 1 to mark the enabled status, will be adjusted 680 - * in dpu_crtc_get_topology() 681 - */ 682 - if (dsc) 683 - topology->num_dsc = 1; 677 + /* We only support 2 DSC mode (with 2 LM and 1 INTF) */ 678 + if (dsc) { 679 + /* 680 + * Use 2 DSC encoders, 2 layer mixers and 1 or 2 interfaces 681 + * when Display Stream Compression (DSC) is enabled, 682 + * and when enough DSC blocks are available. 683 + * This is power-optimal and can drive up to (including) 4k 684 + * screens. 685 + */ 686 + WARN(topology->num_intf > 2, 687 + "DSC topology cannot support more than 2 interfaces\n"); 688 + if (topology->num_intf >= 2 || dpu_kms->catalog->dsc_count >= 2) 689 + topology->num_dsc = 2; 690 + else 691 + topology->num_dsc = 1; 692 + } 684 693 685 694 connector = drm_atomic_get_new_connector_for_encoder(state, drm_enc); 686 695 if (!connector) ··· 2180 2169 { 2181 2170 int i, num_lm; 2182 2171 struct dpu_global_state *global_state; 2183 - struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC]; 2184 - struct dpu_hw_mixer *hw_mixer[MAX_CHANNELS_PER_ENC]; 2172 + struct dpu_hw_blk *hw_lm[2]; 2173 + struct dpu_hw_mixer *hw_mixer[2]; 2185 2174 struct dpu_hw_ctl *ctl = phys_enc->hw_ctl; 2186 2175 2187 2176 /* reset all mixers for this encoder */
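The DSC-count choice above follows the same shape: two DSC encoders whenever two interfaces drive the display or the catalog provides at least two DSC blocks, otherwise one. A minimal sketch under those assumptions (the function and parameter names are illustrative, not the driver's):

#include <stdbool.h>
#include <stdio.h>

static int pick_num_dsc(bool dsc_enabled, int num_intf, int catalog_dsc_count)
{
	if (!dsc_enabled)
		return 0;
	/* prefer the power-optimal 2-DSC arrangement when possible */
	if (num_intf >= 2 || catalog_dsc_count >= 2)
		return 2;
	return 1;
}

int main(void)
{
	printf("%d\n", pick_num_dsc(true, 1, 4));	/* prints 2 */
	printf("%d\n", pick_num_dsc(true, 1, 1));	/* prints 1 */
	return 0;
}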
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
··· 302 302 303 303 /* Use merge_3d unless DSC MERGE topology is used */ 304 304 if (phys_enc->split_role == ENC_ROLE_SOLO && 305 - (dpu_cstate->num_mixers != 1) && 305 + dpu_cstate->num_mixers == CRTC_DUAL_MIXERS && 306 306 !dpu_encoder_use_dsc_merge(phys_enc->parent)) 307 307 return BLEND_3D_H_ROW_INT; 308 308
+4 -6
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
··· 247 247 if (hw_cdm) 248 248 intf_cfg.cdm = hw_cdm->idx; 249 249 250 - if (phys_enc->hw_pp->merge_3d && phys_enc->hw_pp->merge_3d->ops.setup_3d_mode) 251 - phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d, 252 - mode_3d); 250 + if (hw_pp && hw_pp->merge_3d && hw_pp->merge_3d->ops.setup_3d_mode) 251 + hw_pp->merge_3d->ops.setup_3d_mode(hw_pp->merge_3d, mode_3d); 253 252 254 253 /* setup which pp blk will connect to this wb */ 255 - if (hw_pp && phys_enc->hw_wb->ops.bind_pingpong_blk) 256 - phys_enc->hw_wb->ops.bind_pingpong_blk(phys_enc->hw_wb, 257 - phys_enc->hw_pp->idx); 254 + if (hw_pp && hw_wb->ops.bind_pingpong_blk) 255 + hw_wb->ops.bind_pingpong_blk(hw_wb, hw_pp->idx); 258 256 259 257 phys_enc->hw_ctl->ops.setup_intf_cfg(phys_enc->hw_ctl, &intf_cfg); 260 258 } else if (phys_enc->hw_ctl && phys_enc->hw_ctl->ops.setup_intf_cfg) {
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
··· 24 24 #define DPU_MAX_IMG_WIDTH 0x3fff 25 25 #define DPU_MAX_IMG_HEIGHT 0x3fff 26 26 27 - #define CRTC_QUAD_MIXERS 4 27 + #define CRTC_DUAL_MIXERS 2 28 28 29 29 #define MAX_XIN_COUNT 16 30 30
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_cdm.h
··· 89 89 */ 90 90 struct dpu_hw_cdm_ops { 91 91 /** 92 - * Enable the CDM module 92 + * @enable: Enable the CDM module 93 93 * @cdm Pointer to chroma down context 94 94 */ 95 95 int (*enable)(struct dpu_hw_cdm *cdm, struct dpu_hw_cdm_cfg *cfg); 96 96 97 97 /** 98 - * Enable/disable the connection with pingpong 98 + * @bind_pingpong_blk: Enable/disable the connection with pingpong 99 99 * @cdm Pointer to chroma down context 100 100 * @pp pingpong block id. 101 101 */
+53 -31
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
··· 12 12 #include "dpu_hw_sspp.h" 13 13 14 14 /** 15 - * dpu_ctl_mode_sel: Interface mode selection 16 - * DPU_CTL_MODE_SEL_VID: Video mode interface 17 - * DPU_CTL_MODE_SEL_CMD: Command mode interface 15 + * enum dpu_ctl_mode_sel: Interface mode selection 16 + * @DPU_CTL_MODE_SEL_VID: Video mode interface 17 + * @DPU_CTL_MODE_SEL_CMD: Command mode interface 18 18 */ 19 19 enum dpu_ctl_mode_sel { 20 20 DPU_CTL_MODE_SEL_VID = 0, ··· 37 37 * struct dpu_hw_intf_cfg :Describes how the DPU writes data to output interface 38 38 * @intf : Interface id 39 39 * @intf_master: Master interface id in the dual pipe topology 40 + * @wb: Writeback mode 40 41 * @mode_3d: 3d mux configuration 41 42 * @merge_3d: 3d merge block used 42 43 * @intf_mode_sel: Interface mode, cmd / vid ··· 65 64 */ 66 65 struct dpu_hw_ctl_ops { 67 66 /** 68 - * kickoff hw operation for Sw controlled interfaces 67 + * @trigger_start: kickoff hw operation for Sw controlled interfaces 69 68 * DSI cmd mode and WB interface are SW controlled 70 69 * @ctx : ctl path ctx pointer 71 70 */ 72 71 void (*trigger_start)(struct dpu_hw_ctl *ctx); 73 72 74 73 /** 75 - * check if the ctl is started 74 + * @is_started: check if the ctl is started 76 75 * @ctx : ctl path ctx pointer 77 76 * @Return: true if started, false if stopped 78 77 */ 79 78 bool (*is_started)(struct dpu_hw_ctl *ctx); 80 79 81 80 /** 82 - * kickoff prepare is in progress hw operation for sw 81 + * @trigger_pending: kickoff prepare is in progress hw operation for sw 83 82 * controlled interfaces: DSI cmd mode and WB interface 84 83 * are SW controlled 85 84 * @ctx : ctl path ctx pointer ··· 87 86 void (*trigger_pending)(struct dpu_hw_ctl *ctx); 88 87 89 88 /** 90 - * Clear the value of the cached pending_flush_mask 89 + * @clear_pending_flush: Clear the value of the cached pending_flush_mask 91 90 * No effect on hardware. 92 91 * Required to be implemented. 93 92 * @ctx : ctl path ctx pointer ··· 95 94 void (*clear_pending_flush)(struct dpu_hw_ctl *ctx); 96 95 97 96 /** 98 - * Query the value of the cached pending_flush_mask 97 + * @get_pending_flush: Query the value of the cached pending_flush_mask 99 98 * No effect on hardware 100 99 * @ctx : ctl path ctx pointer 101 100 */ 102 101 u32 (*get_pending_flush)(struct dpu_hw_ctl *ctx); 103 102 104 103 /** 105 - * OR in the given flushbits to the cached pending_flush_mask 104 + * @update_pending_flush: OR in the given flushbits to the cached 105 + * pending_flush_mask. 106 106 * No effect on hardware 107 107 * @ctx : ctl path ctx pointer 108 108 * @flushbits : module flushmask ··· 112 110 u32 flushbits); 113 111 114 112 /** 115 - * OR in the given flushbits to the cached pending_(wb_)flush_mask 113 + * @update_pending_flush_wb: OR in the given flushbits to the 114 + * cached pending_(wb_)flush_mask. 116 115 * No effect on hardware 117 116 * @ctx : ctl path ctx pointer 118 117 * @blk : writeback block index ··· 122 119 enum dpu_wb blk); 123 120 124 121 /** 125 - * OR in the given flushbits to the cached pending_(cwb_)flush_mask 122 + * @update_pending_flush_cwb: OR in the given flushbits to the 123 + * cached pending_(cwb_)flush_mask. 126 124 * No effect on hardware 127 125 * @ctx : ctl path ctx pointer 128 126 * @blk : concurrent writeback block index ··· 132 128 enum dpu_cwb blk); 133 129 134 130 /** 135 - * OR in the given flushbits to the cached pending_(intf_)flush_mask 131 + * @update_pending_flush_intf: OR in the given flushbits to the 132 + * cached pending_(intf_)flush_mask.
136 133 * No effect on hardware 137 134 * @ctx : ctl path ctx pointer 138 135 * @blk : interface block index ··· 142 137 enum dpu_intf blk); 143 138 144 139 /** 145 - * OR in the given flushbits to the cached pending_(periph_)flush_mask 140 + * @update_pending_flush_periph: OR in the given flushbits to the 141 + * cached pending_(periph_)flush_mask. 146 142 * No effect on hardware 147 143 * @ctx : ctl path ctx pointer 148 144 * @blk : interface block index ··· 152 146 enum dpu_intf blk); 153 147 154 148 /** 155 - * OR in the given flushbits to the cached pending_(merge_3d_)flush_mask 149 + * @update_pending_flush_merge_3d: OR in the given flushbits to the 150 + * cached pending_(merge_3d_)flush_mask. 156 151 * No effect on hardware 157 152 * @ctx : ctl path ctx pointer 158 153 * @blk : interface block index ··· 162 155 enum dpu_merge_3d blk); 163 156 164 157 /** 165 - * OR in the given flushbits to the cached pending_flush_mask 158 + * @update_pending_flush_sspp: OR in the given flushbits to the 159 + * cached pending_flush_mask. 166 160 * No effect on hardware 167 161 * @ctx : ctl path ctx pointer 168 162 * @blk : SSPP block index ··· 172 164 enum dpu_sspp blk); 173 165 174 166 /** 175 - * OR in the given flushbits to the cached pending_flush_mask 167 + * @update_pending_flush_mixer: OR in the given flushbits to the 168 + * cached pending_flush_mask. 176 169 * No effect on hardware 177 170 * @ctx : ctl path ctx pointer 178 171 * @blk : LM block index ··· 182 173 enum dpu_lm blk); 183 174 184 175 /** 185 - * OR in the given flushbits to the cached pending_flush_mask 176 + * @update_pending_flush_dspp: OR in the given flushbits to the 177 + * cached pending_flush_mask. 186 178 * No effect on hardware 187 179 * @ctx : ctl path ctx pointer 188 180 * @blk : DSPP block index ··· 193 183 enum dpu_dspp blk, u32 dspp_sub_blk); 194 184 195 185 /** 196 - * OR in the given flushbits to the cached pending_(dsc_)flush_mask 186 + * @update_pending_flush_dsc: OR in the given flushbits to the 187 + * cached pending_(dsc_)flush_mask. 197 188 * No effect on hardware 198 189 * @ctx: ctl path ctx pointer 199 190 * @blk: interface block index ··· 203 192 enum dpu_dsc blk); 204 193 205 194 /** 206 - * OR in the given flushbits to the cached pending_(cdm_)flush_mask 195 + * @update_pending_flush_cdm: OR in the given flushbits to the 196 + * cached pending_(cdm_)flush_mask. 207 197 * No effect on hardware 208 198 * @ctx: ctl path ctx pointer 209 199 * @cdm_num: idx of cdm to be flushed ··· 212 200 void (*update_pending_flush_cdm)(struct dpu_hw_ctl *ctx, enum dpu_cdm cdm_num); 213 201 214 202 /** 215 - * Write the value of the pending_flush_mask to hardware 203 + * @trigger_flush: Write the value of the pending_flush_mask to hardware 216 204 * @ctx : ctl path ctx pointer 217 205 */ 218 206 void (*trigger_flush)(struct dpu_hw_ctl *ctx); 219 207 220 208 /** 221 - * Read the value of the flush register 209 + * @get_flush_register: Read the value of the flush register 222 210 * @ctx : ctl path ctx pointer 223 211 * @Return: value of the ctl flush register.
224 212 */ 225 213 u32 (*get_flush_register)(struct dpu_hw_ctl *ctx); 226 214 227 215 /** 228 - * Setup ctl_path interface config 216 + * @setup_intf_cfg: Setup ctl_path interface config 229 217 * @ctx 230 218 * @cfg : interface config structure pointer 231 219 */ ··· 233 221 struct dpu_hw_intf_cfg *cfg); 234 222 235 223 /** 236 - * reset ctl_path interface config 224 + * @reset_intf_cfg: reset ctl_path interface config 237 225 * @ctx : ctl path ctx pointer 238 226 * @cfg : interface config structure pointer 239 227 */ 240 228 void (*reset_intf_cfg)(struct dpu_hw_ctl *ctx, 241 229 struct dpu_hw_intf_cfg *cfg); 242 230 231 + /** 232 + * @reset: reset function for this ctl type 233 + */ 243 234 int (*reset)(struct dpu_hw_ctl *c); 244 235 245 - /* 246 - * wait_reset_status - checks ctl reset status 236 + /** 237 + * @wait_reset_status: checks ctl reset status 247 238 * @ctx : ctl path ctx pointer 248 239 * 249 240 * This function checks the ctl reset status bit. ··· 257 242 int (*wait_reset_status)(struct dpu_hw_ctl *ctx); 258 243 259 244 /** 260 - * Set all blend stages to disabled 245 + * @clear_all_blendstages: Set all blend stages to disabled 261 246 * @ctx : ctl path ctx pointer 262 247 */ 263 248 void (*clear_all_blendstages)(struct dpu_hw_ctl *ctx); 264 249 265 250 /** 266 - * Configure layer mixer to pipe configuration 251 + * @setup_blendstage: Configure layer mixer to pipe configuration 267 252 * @ctx : ctl path ctx pointer 268 253 * @lm : layer mixer enumeration 269 254 * @cfg : blend stage configuration ··· 271 256 void (*setup_blendstage)(struct dpu_hw_ctl *ctx, 272 257 enum dpu_lm lm, struct dpu_hw_stage_cfg *cfg); 273 258 259 + /** 260 + * @set_active_fetch_pipes: Set active pipes attached to this CTL 261 + * @ctx: ctl path ctx pointer 262 + * @active_pipes: bitmap of enum dpu_sspp 263 + */ 274 264 void (*set_active_fetch_pipes)(struct dpu_hw_ctl *ctx, 275 265 unsigned long *fetch_active); 276 266 277 267 /** 278 - * Set active pipes attached to this CTL 268 + * @set_active_pipes: Set active pipes attached to this CTL 279 269 * @ctx: ctl path ctx pointer 280 270 * @active_pipes: bitmap of enum dpu_sspp 281 271 */ ··· 288 268 unsigned long *active_pipes); 289 269 290 270 /** 291 - * Set active layer mixers attached to this CTL 271 + * @set_active_lms: Set active layer mixers attached to this CTL 292 272 * @ctx: ctl path ctx pointer 293 273 * @active_lms: bitmap of enum dpu_lm 294 274 */ 295 275 void (*set_active_lms)(struct dpu_hw_ctl *ctx, 296 276 unsigned long *active_lms); 297 - 298 277 }; 299 278 300 279 /** ··· 308 289 * @pending_intf_flush_mask: pending INTF flush 309 290 * @pending_wb_flush_mask: pending WB flush 310 291 * @pending_cwb_flush_mask: pending CWB flush 292 + * @pending_periph_flush_mask: pending PERIPH flush 293 + * @pending_merge_3d_flush_mask: pending MERGE 3D flush 294 + * @pending_dspp_flush_mask: pending DSPP flush 311 295 * @pending_dsc_flush_mask: pending DSC flush 312 296 * @pending_cdm_flush_mask: pending CDM flush 313 297 * @mdss_ver: MDSS revision information ··· 342 320 }; 343 321 344 322 /** 345 - * dpu_hw_ctl - convert base object dpu_hw_base to container 323 + * to_dpu_hw_ctl - convert base object dpu_hw_base to container 346 324 * @hw: Pointer to base hardware block 347 325 * return: Pointer to hardware block container 348 326 */
+1 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_cwb.h
··· 28 28 }; 29 29 30 30 /** 31 - * 32 31 * struct dpu_hw_cwb_ops : Interface to the cwb hw driver functions 33 32 * @config_cwb: configure CWB mux 34 33 */ ··· 53 54 }; 54 55 55 56 /** 56 - * dpu_hw_cwb - convert base object dpu_hw_base to container 57 + * to_dpu_hw_cwb - convert base object dpu_hw_base to container 57 58 * @hw: Pointer to base hardware block 58 59 * return: Pointer to hardware block container 59 60 */
+7 -3
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.h
··· 21 21 */ 22 22 struct dpu_hw_dsc_ops { 23 23 /** 24 - * dsc_disable - disable dsc 24 + * @dsc_disable: disable dsc 25 25 * @hw_dsc: Pointer to dsc context 26 26 */ 27 27 void (*dsc_disable)(struct dpu_hw_dsc *hw_dsc); 28 28 29 29 /** 30 - * dsc_config - configures dsc encoder 30 + * @dsc_config: configures dsc encoder 31 31 * @hw_dsc: Pointer to dsc context 32 32 * @dsc: panel dsc parameters 33 33 * @mode: dsc topology mode to be set ··· 39 39 u32 initial_lines); 40 40 41 41 /** 42 - * dsc_config_thresh - programs panel thresholds 42 + * @dsc_config_thresh: programs panel thresholds 43 43 * @hw_dsc: Pointer to dsc context 44 44 * @dsc: panel dsc parameters 45 45 */ 46 46 void (*dsc_config_thresh)(struct dpu_hw_dsc *hw_dsc, 47 47 struct drm_dsc_config *dsc); 48 48 49 + /** 50 + * @dsc_bind_pingpong_blk: binds pixel output from a DSC block 51 + * to a pingpong block 52 + */ 49 53 void (*dsc_bind_pingpong_blk)(struct dpu_hw_dsc *hw_dsc, 50 54 enum dpu_pingpong pp); 51 55 };
+3 -3
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.h
··· 22 22 }; 23 23 24 24 /** 25 - * struct dpu_hw_pcc - pcc feature structure 25 + * struct dpu_hw_pcc_cfg - pcc feature structure 26 26 * @r: red coefficients. 27 27 * @g: green coefficients. 28 28 * @b: blue coefficients. ··· 40 40 */ 41 41 struct dpu_hw_dspp_ops { 42 42 /** 43 - * setup_pcc - setup dspp pcc 43 + * @setup_pcc: setup_pcc - setup dspp pcc 44 44 * @ctx: Pointer to dspp context 45 45 * @cfg: Pointer to configuration 46 46 */ ··· 69 69 }; 70 70 71 71 /** 72 - * dpu_hw_dspp - convert base object dpu_hw_base to container 72 + * to_dpu_hw_dspp - convert base object dpu_hw_base to container 73 73 * @hw: Pointer to base hardware block 74 74 * return: Pointer to hardware block container 75 75 */
+7 -13
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
··· 57 57 /** 58 58 * struct dpu_hw_intf_ops : Interface to the interface Hw driver functions 59 59 * Assumption is these functions will be called after clocks are enabled 60 - * @ setup_timing_gen : programs the timing engine 61 - * @ setup_prog_fetch : enables/disables the programmable fetch logic 62 - * @ enable_timing: enable/disable timing engine 63 - * @ get_status: returns if timing engine is enabled or not 64 - * @ get_line_count: reads current vertical line counter 60 + * @setup_timing_gen : programs the timing engine 61 + * @setup_prg_fetch : enables/disables the programmable fetch logic 62 + * @enable_timing: enable/disable timing engine 63 + * @get_status: returns if timing engine is enabled or not 64 + * @get_line_count: reads current vertical line counter 65 65 * @bind_pingpong_blk: enable/disable the connection with pingpong which will 66 66 * feed pixels to this interface 67 67 * @setup_misr: enable/disable MISR ··· 70 70 * pointer and programs the tear check configuration 71 71 * @disable_tearcheck: Disables tearcheck block 72 72 * @connect_external_te: Read, modify, write to either set or clear listening to external TE 73 - * Return: 1 if TE was originally connected, 0 if not, or -ERROR 74 - * @get_vsync_info: Provides the programmed and current line_count 75 - * @setup_autorefresh: Configure and enable the autorefresh config 76 - * @get_autorefresh: Retrieve autorefresh config from hardware 77 - * Return: 0 on success, -ETIMEDOUT on timeout 73 + * Returns 1 if TE was originally connected, 0 if not, or -ERROR 78 74 * @vsync_sel: Select vsync signal for tear-effect configuration 75 + * @disable_autorefresh: Disable autorefresh if enabled 79 76 * @program_intf_cmd_cfg: Program the DPU to interface datapath for command mode 80 77 */ 81 78 struct dpu_hw_intf_ops { ··· 106 109 107 110 void (*vsync_sel)(struct dpu_hw_intf *intf, enum dpu_vsync_source vsync_source); 108 111 109 - /** 110 - * Disable autorefresh if enabled 111 - */ 112 112 void (*disable_autorefresh)(struct dpu_hw_intf *intf, uint32_t encoder_id, u16 vdisplay); 113 113 114 114 void (*program_intf_cmd_cfg)(struct dpu_hw_intf *intf,
+11 -12
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
··· 25 25 }; 26 26 27 27 /** 28 - * 29 28 * struct dpu_hw_lm_ops : Interface to the mixer Hw driver functions 30 29 * Assumption is these functions will be called after clocks are enabled 31 30 */ 32 31 struct dpu_hw_lm_ops { 33 - /* 34 - * Sets up mixer output width and height 32 + /** 33 + * @setup_mixer_out: Sets up mixer output width and height 35 34 * and border color if enabled 36 35 */ 37 36 void (*setup_mixer_out)(struct dpu_hw_mixer *ctx, 38 37 struct dpu_hw_mixer_cfg *cfg); 39 38 40 - /* 41 - * Alpha blending configuration 39 + /** 40 + * @setup_blend_config: Alpha blending configuration 42 41 * for the specified stage 43 42 */ 44 43 void (*setup_blend_config)(struct dpu_hw_mixer *ctx, uint32_t stage, 45 44 uint32_t fg_alpha, uint32_t bg_alpha, uint32_t blend_op); 46 45 47 - /* 48 - * Alpha color component selection from either fg or bg 46 + /** 47 + * @setup_alpha_out: Alpha color component selection from either fg or bg 49 48 */ 50 49 void (*setup_alpha_out)(struct dpu_hw_mixer *ctx, uint32_t mixer_op); 51 50 52 51 /** 53 - * Clear layer mixer to pipe configuration 52 + * @clear_all_blendstages: Clear layer mixer to pipe configuration 54 53 * @ctx : mixer ctx pointer 55 54 * Returns: 0 on success or -error 56 55 */ 57 56 int (*clear_all_blendstages)(struct dpu_hw_mixer *ctx); 58 57 59 58 /** 60 - * Configure layer mixer to pipe configuration 59 + * @setup_blendstage: Configure layer mixer to pipe configuration 61 60 * @ctx : mixer ctx pointer 62 61 * @lm : layer mixer enumeration 63 62 * @stage_cfg : blend stage configuration ··· 66 67 struct dpu_hw_stage_cfg *stage_cfg); 67 68 68 69 /** 69 - * setup_border_color : enable/disable border color 70 + * @setup_border_color : enable/disable border color 70 71 */ 71 72 void (*setup_border_color)(struct dpu_hw_mixer *ctx, 72 73 struct dpu_mdss_color *color, 73 74 u8 border_en); 74 75 75 76 /** 76 - * setup_misr: Enable/disable MISR 77 + * @setup_misr: Enable/disable MISR 77 78 */ 78 79 void (*setup_misr)(struct dpu_hw_mixer *ctx); 79 80 80 81 /** 81 - * collect_misr: Read MISR signature 82 + * @collect_misr: Read MISR signature 82 83 */ 83 84 int (*collect_misr)(struct dpu_hw_mixer *ctx, u32 *misr_value); 84 85 };
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h
··· 34 34 #define DPU_MAX_PLANES 4 35 35 #endif 36 36 37 - #define STAGES_PER_PLANE 2 37 + #define STAGES_PER_PLANE 1 38 38 #define PIPES_PER_STAGE 2 39 39 #define PIPES_PER_PLANE (PIPES_PER_STAGE * STAGES_PER_PLANE) 40 40 #ifndef DPU_MAX_DE_CURVES
-1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.h
··· 12 12 struct dpu_hw_merge_3d; 13 13 14 14 /** 15 - * 16 15 * struct dpu_hw_merge_3d_ops : Interface to the merge_3d Hw driver functions 17 16 * Assumption is these functions will be called after clocks are enabled 18 17 * @setup_3d_mode : enable 3D merge
+10 -10
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
··· 34 34 }; 35 35 36 36 /** 37 - * 38 37 * struct dpu_hw_pingpong_ops : Interface to the pingpong Hw driver functions 39 38 * Assumption is these functions will be called after clocks are enabled 40 39 * @enable_tearcheck: program and enable tear check block ··· 43 44 */ 44 45 struct dpu_hw_pingpong_ops { 45 46 /** 46 - * enables vysnc generation and sets up init value of 47 + * @enable_tearcheck: enables vsync generation and sets up init value of 47 48 * read pointer and programs the tear check configuration 48 49 */ 49 50 int (*enable_tearcheck)(struct dpu_hw_pingpong *pp, 50 51 struct dpu_hw_tear_check *cfg); 51 52 52 53 /** 53 - * disables tear check block 54 + * @disable_tearcheck: disables tear check block 54 55 */ 55 56 int (*disable_tearcheck)(struct dpu_hw_pingpong *pp); 56 57 57 58 /** 58 - * read, modify, write to either set or clear listening to external TE 59 + * @connect_external_te: read, modify, write to either set or clear 60 + * listening to external TE 59 61 * @Return: 1 if TE was originally connected, 0 if not, or -ERROR 60 62 */ 61 63 int (*connect_external_te)(struct dpu_hw_pingpong *pp, 62 64 bool enable_external_te); 63 65 64 66 /** 65 - * Obtain current vertical line counter 67 + * @get_line_count: Obtain current vertical line counter 66 68 */ 67 69 u32 (*get_line_count)(struct dpu_hw_pingpong *pp); 68 70 69 71 /** 70 - * Disable autorefresh if enabled 72 + * @disable_autorefresh: Disable autorefresh if enabled 71 73 */ 72 74 void (*disable_autorefresh)(struct dpu_hw_pingpong *pp, uint32_t encoder_id, u16 vdisplay); 73 75 74 76 /** 75 - * Setup dither matix for pingpong block 77 + * @setup_dither: Setup dither matrix for pingpong block 76 78 */ 77 79 void (*setup_dither)(struct dpu_hw_pingpong *pp, 78 80 struct dpu_hw_dither_cfg *cfg); 79 81 /** 80 - * Enable DSC 82 + * @enable_dsc: Enable DSC 81 83 */ 82 84 int (*enable_dsc)(struct dpu_hw_pingpong *pp); 83 85 84 86 /** 85 - * Disable DSC 87 + * @disable_dsc: Disable DSC 86 88 */ 87 89 void (*disable_dsc)(struct dpu_hw_pingpong *pp); 88 90 89 91 /** 90 - * Setup DSC 92 + * @setup_dsc: Setup DSC 91 93 */ 92 94 int (*setup_dsc)(struct dpu_hw_pingpong *pp); 93 95 };
+24 -23
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
··· 14 14 15 15 #define DPU_SSPP_MAX_PITCH_SIZE 0xffff 16 16 17 - /** 17 + /* 18 18 * Flags 19 19 */ 20 20 #define DPU_SSPP_FLIP_LR BIT(0) ··· 23 23 #define DPU_SSPP_ROT_90 BIT(3) 24 24 #define DPU_SSPP_SOLID_FILL BIT(4) 25 25 26 - /** 26 + /* 27 27 * Component indices 28 28 */ 29 29 enum { ··· 36 36 }; 37 37 38 38 /** 39 - * DPU_SSPP_RECT_SOLO - multirect disabled 40 - * DPU_SSPP_RECT_0 - rect0 of a multirect pipe 41 - * DPU_SSPP_RECT_1 - rect1 of a multirect pipe 39 + * enum dpu_sspp_multirect_index - multirect mode 40 + * @DPU_SSPP_RECT_SOLO: multirect disabled 41 + * @DPU_SSPP_RECT_0: rect0 of a multirect pipe 42 + * @DPU_SSPP_RECT_1: rect1 of a multirect pipe 42 43 * 43 44 * Note: HW supports multirect with either RECT0 or 44 45 * RECT1. Considering no benefit of such configs over ··· 144 143 * struct dpu_sw_pipe_cfg : software pipe configuration 145 144 * @src_rect: src ROI, caller takes into account the different operations 146 145 * such as decimation, flip etc to program this field 147 - * @dest_rect: destination ROI. 146 + * @dst_rect: destination ROI. 148 147 * @rotation: simplified drm rotation hint 149 148 */ 150 149 struct dpu_sw_pipe_cfg { ··· 166 165 /** 167 166 * struct dpu_sw_pipe - software pipe description 168 167 * @sspp: backing SSPP pipe 169 - * @index: index of the rectangle of SSPP 170 - * @mode: parallel or time multiplex multirect mode 168 + * @multirect_index: index of the rectangle of SSPP 169 + * @multirect_mode: parallel or time multiplex multirect mode 171 170 */ 172 171 struct dpu_sw_pipe { 173 172 struct dpu_hw_sspp *sspp; ··· 182 181 */ 183 182 struct dpu_hw_sspp_ops { 184 183 /** 185 - * setup_format - setup pixel format cropping rectangle, flip 184 + * @setup_format: setup pixel format cropping rectangle, flip 186 185 * @pipe: Pointer to software pipe context 187 186 * @cfg: Pointer to pipe config structure 188 187 * @flags: Extra flags for format config ··· 191 190 const struct msm_format *fmt, u32 flags); 192 191 193 192 /** 194 - * setup_rects - setup pipe ROI rectangles 193 + * @setup_rects: setup pipe ROI rectangles 195 194 * @pipe: Pointer to software pipe context 196 195 * @cfg: Pointer to pipe config structure 197 196 */ ··· 199 198 struct dpu_sw_pipe_cfg *cfg); 200 199 201 200 /** 202 - * setup_pe - setup pipe pixel extension 201 + * @setup_pe: setup pipe pixel extension 203 202 * @ctx: Pointer to pipe context 204 203 * @pe_ext: Pointer to pixel ext settings 205 204 */ ··· 207 206 struct dpu_hw_pixel_ext *pe_ext); 208 207 209 208 /** 210 - * setup_sourceaddress - setup pipe source addresses 209 + * @setup_sourceaddress: setup pipe source addresses 211 210 * @pipe: Pointer to software pipe context 212 211 * @layout: format layout information for programming buffer to hardware 213 212 */ ··· 215 214 struct dpu_hw_fmt_layout *layout); 216 215 217 216 /** 218 - * setup_csc - setup color space coversion 217 + * @setup_csc: setup color space conversion 219 218 * @ctx: Pointer to pipe context 220 219 * @data: Pointer to config structure 221 220 */ 222 221 void (*setup_csc)(struct dpu_hw_sspp *ctx, const struct dpu_csc_cfg *data); 223 222 224 223 /** 225 - * setup_solidfill - enable/disable colorfill 224 + * @setup_solidfill: enable/disable colorfill 226 225 * @pipe: Pointer to software pipe context 227 226 * @const_color: Fill color value 228 227 * @flags: Pipe flags ··· 230 229 void (*setup_solidfill)(struct dpu_sw_pipe *pipe, u32 color); 231 230 232 231 /** 233 - * setup_multirect - setup multirect configuration 232 + * @setup_multirect: setup
multirect configuration 234 233 * @pipe: Pointer to software pipe context 235 234 */ 236 235 237 236 void (*setup_multirect)(struct dpu_sw_pipe *pipe); 238 237 239 238 /** 240 - * setup_sharpening - setup sharpening 239 + * @setup_sharpening: setup sharpening 241 240 * @ctx: Pointer to pipe context 242 241 * @cfg: Pointer to config structure 243 242 */ 244 243 void (*setup_sharpening)(struct dpu_hw_sspp *ctx, 245 244 struct dpu_hw_sharp_cfg *cfg); 246 245 247 - 248 246 /** 249 - * setup_qos_lut - setup QoS LUTs 247 + * @setup_qos_lut: setup QoS LUTs 250 248 * @ctx: Pointer to pipe context 251 249 * @cfg: LUT configuration 252 250 */ ··· 253 253 struct dpu_hw_qos_cfg *cfg); 254 254 255 255 /** 256 - * setup_qos_ctrl - setup QoS control 256 + * @setup_qos_ctrl: setup QoS control 257 257 * @ctx: Pointer to pipe context 258 258 * @danger_safe_en: flags controlling enabling of danger/safe QoS/LUT 259 259 */ ··· 261 261 bool danger_safe_en); 262 262 263 263 /** 264 - * setup_clk_force_ctrl - setup clock force control 264 + * @setup_clk_force_ctrl: setup clock force control 265 265 * @ctx: Pointer to pipe context 266 266 * @enable: enable clock force if true 267 267 */ ··· 269 269 bool enable); 270 270 271 271 /** 272 - * setup_histogram - setup histograms 272 + * @setup_histogram: setup histograms 273 273 * @ctx: Pointer to pipe context 274 274 * @cfg: Pointer to histogram configuration 275 275 */ ··· 277 277 void *cfg); 278 278 279 279 /** 280 - * setup_scaler - setup scaler 280 + * @setup_scaler: setup scaler 281 281 * @scaler3_cfg: Pointer to scaler configuration 282 282 * @format: pixel format parameters 283 283 */ ··· 286 286 const struct msm_format *format); 287 287 288 288 /** 289 - * setup_cdp - setup client driven prefetch 289 + * @setup_cdp: setup client driven prefetch 290 290 * @pipe: Pointer to software pipe context 291 291 * @fmt: format used by the sw pipe 292 292 * @enable: whether the CDP should be enabled for this pipe ··· 303 303 * @ubwc: UBWC configuration data 304 304 * @idx: pipe index 305 305 * @cap: pointer to layer_cfg 306 + * @mdss_ver: MDSS version info to use for feature checks 306 307 * @ops: pointer to operations possible for this pipe 307 308 */ 308 309 struct dpu_hw_sspp {
+10 -11
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h
··· 77 77 /** 78 78 * struct dpu_hw_mdp_ops - interface to the MDP TOP Hw driver functions 79 79 * Assumption is these functions will be called after clocks are enabled. 80 - * @setup_split_pipe : Programs the pipe control registers 81 - * @setup_pp_split : Programs the pp split control registers 82 - * @setup_traffic_shaper : programs traffic shaper control 83 80 */ 84 81 struct dpu_hw_mdp_ops { 85 - /** setup_split_pipe() : Registers are not double buffered, thisk 82 + /** 83 + * @setup_split_pipe : Programs the pipe control registers. 84 + * Registers are not double buffered, this 86 85 * function should be called before timing control enable 87 86 * @mdp : mdp top context driver 88 87 * @cfg : upper and lower part of pipe configuration ··· 90 91 struct split_pipe_cfg *p); 91 92 92 93 /** 93 - * setup_traffic_shaper() : Setup traffic shaper control 94 + * @setup_traffic_shaper : programs traffic shaper control. 94 95 * @mdp : mdp top context driver 95 96 * @cfg : traffic shaper configuration 96 97 */ ··· 98 99 struct traffic_shaper_cfg *cfg); 99 100 100 101 /** 101 - * setup_clk_force_ctrl - set clock force control 102 + * @setup_clk_force_ctrl: set clock force control 102 103 * @mdp: mdp top context driver 103 104 * @clk_ctrl: clock to be controlled 104 105 * @enable: force on enable ··· 108 109 enum dpu_clk_ctrl_type clk_ctrl, bool enable); 109 110 110 111 /** 111 - * get_danger_status - get danger status 112 + * @get_danger_status: get danger status 112 113 * @mdp: mdp top context driver 113 114 * @status: Pointer to danger safe status 114 115 */ ··· 116 117 struct dpu_danger_safe_status *status); 117 118 118 119 /** 119 - * setup_vsync_source - setup vsync source configuration details 120 + * @setup_vsync_source: setup vsync source configuration details 120 121 * @mdp: mdp top context driver 121 122 * @cfg: vsync source selection configuration 122 123 */ ··· 124 125 struct dpu_vsync_source_cfg *cfg); 125 126 126 127 /** 127 - * get_safe_status - get safe status 128 + * @get_safe_status: get safe status 128 129 * @mdp: mdp top context driver 129 130 * @status: Pointer to danger safe status 130 131 */ ··· 132 133 struct dpu_danger_safe_status *status); 133 134 134 135 /** 135 - * dp_phy_intf_sel - configure intf to phy mapping 136 + * @dp_phy_intf_sel: configure intf to phy mapping 136 137 * @mdp: mdp top context driver 137 138 * @phys: list of phys the DP interfaces should be connected to. 0 disables the INTF. 138 139 */ 139 140 void (*dp_phy_intf_sel)(struct dpu_hw_mdp *mdp, enum dpu_dp_phy_sel phys[2]); 140 141 141 142 /** 142 - * intf_audio_select - select the external interface for audio 143 + * @intf_audio_select: select the external interface for audio 143 144 * @mdp: mdp top context driver 144 145 */ 145 146 void (*intf_audio_select)(struct dpu_hw_mdp *mdp);
+8 -8
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_vbif.h
··· 17 17 */ 18 18 struct dpu_hw_vbif_ops { 19 19 /** 20 - * set_limit_conf - set transaction limit config 20 + * @set_limit_conf: set transaction limit config 21 21 * @vbif: vbif context driver 22 22 * @xin_id: client interface identifier 23 23 * @rd: true for read limit; false for write limit ··· 27 27 u32 xin_id, bool rd, u32 limit); 28 28 29 29 /** 30 - * get_limit_conf - get transaction limit config 30 + * @get_limit_conf: get transaction limit config 31 31 * @vbif: vbif context driver 32 32 * @xin_id: client interface identifier 33 33 * @rd: true for read limit; false for write limit ··· 37 37 u32 xin_id, bool rd); 38 38 39 39 /** 40 - * set_halt_ctrl - set halt control 40 + * @set_halt_ctrl: set halt control 41 41 * @vbif: vbif context driver 42 42 * @xin_id: client interface identifier 43 43 * @enable: halt control enable ··· 46 46 u32 xin_id, bool enable); 47 47 48 48 /** 49 - * get_halt_ctrl - get halt control 49 + * @get_halt_ctrl: get halt control 50 50 * @vbif: vbif context driver 51 51 * @xin_id: client interface identifier 52 52 * @return: halt control enable ··· 55 55 u32 xin_id); 56 56 57 57 /** 58 - * set_qos_remap - set QoS priority remap 58 + * @set_qos_remap: set QoS priority remap 59 59 * @vbif: vbif context driver 60 60 * @xin_id: client interface identifier 61 61 * @level: priority level ··· 65 65 u32 xin_id, u32 level, u32 remap_level); 66 66 67 67 /** 68 - * set_mem_type - set memory type 68 + * @set_mem_type: set memory type 69 69 * @vbif: vbif context driver 70 70 * @xin_id: client interface identifier 71 71 * @value: memory type value ··· 74 74 u32 xin_id, u32 value); 75 75 76 76 /** 77 - * clear_errors - clear any vbif errors 77 + * @clear_errors: clear any vbif errors 78 78 * This function clears any detected pending/source errors 79 79 * on the VBIF interface, and optionally returns the detected 80 80 * error mask(s). ··· 86 86 u32 *pnd_errors, u32 *src_errors); 87 87 88 88 /** 89 - * set_write_gather_en - set write_gather enable 89 + * @set_write_gather_en: set write_gather enable 90 90 * @vbif: vbif context driver 91 91 * @xin_id: client interface identifier 92 92 */
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h
··· 22 22 }; 23 23 24 24 /** 25 - * 26 25 * struct dpu_hw_wb_ops : Interface to the wb hw driver functions 27 26 * Assumption is these functions will be called after clocks are enabled 28 27 * @setup_outaddress: setup output address from the writeback job 29 28 * @setup_outformat: setup output format of writeback block from writeback job 29 + * @setup_roi: setup ROI (Region of Interest) parameters 30 30 * @setup_qos_lut: setup qos LUT for writeback block based on input 31 31 * @setup_cdp: setup chroma down prefetch block for writeback block 32 32 * @setup_clk_force_ctrl: setup clock force control ··· 61 61 * struct dpu_hw_wb : WB driver object 62 62 * @hw: block hardware details 63 63 * @idx: hardware index number within type 64 - * @wb_hw_caps: hardware capabilities 64 + * @caps: hardware capabilities 65 65 * @ops: function pointers 66 66 */ 67 67 struct dpu_hw_wb {
+41 -98
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 826 826 struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 827 827 struct dpu_sw_pipe_cfg *pipe_cfg; 828 828 struct dpu_sw_pipe_cfg *r_pipe_cfg; 829 - struct dpu_sw_pipe_cfg init_pipe_cfg; 830 829 struct drm_rect fb_rect = { 0 }; 831 - const struct drm_display_mode *mode = &crtc_state->adjusted_mode; 832 830 uint32_t max_linewidth; 833 - u32 num_lm; 834 - int stage_id, num_stages; 835 831 836 832 min_scale = FRAC_16_16(1, MAX_UPSCALE_RATIO); 837 833 max_scale = MAX_DOWNSCALE_RATIO << 16; ··· 850 854 return -EINVAL; 851 855 } 852 856 853 - num_lm = dpu_crtc_get_num_lm(crtc_state); 854 - 857 + /* move the assignment here, to ease handling of additional pairs later */ 858 + pipe_cfg = &pstate->pipe_cfg[0]; 859 + r_pipe_cfg = &pstate->pipe_cfg[1]; 855 860 /* state->src is 16.16, src_rect is not */ 856 861 drm_rect_fp_to_int(&pipe_cfg->src_rect, &new_plane_state->src); 862 + 863 + pipe_cfg->dst_rect = new_plane_state->dst; 857 864 858 865 fb_rect.x2 = new_plane_state->fb->width; 859 866 fb_rect.y2 = new_plane_state->fb->height; ··· 881 882 882 883 max_linewidth = pdpu->catalog->caps->max_linewidth; 883 884 884 - drm_rect_rotate(&init_pipe_cfg.src_rect, 885 + drm_rect_rotate(&pipe_cfg->src_rect, 885 886 new_plane_state->fb->width, new_plane_state->fb->height, 886 887 new_plane_state->rotation); 887 888 888 - /* 889 - * We have 1 mixer pair cfg for 1:1:1 and 2:2:1 topology, 2 mixer pair 890 - * configs for left and right half screen in case of 4:4:2 topology. 891 - * But we may have 2 rect to split wide plane that exceeds limit with 1 892 - * config for 2:2:1. So need to handle both wide plane splitting, and 893 - * two halves of screen splitting for quad-pipe case. Check dest 894 - * rectangle left/right clipping first, then check wide rectangle 895 - * splitting in every half next. 896 - */ 897 - num_stages = (num_lm + 1) / 2; 898 - /* iterate mixer configs for this plane, to separate left/right with the id */ 899 - for (stage_id = 0; stage_id < num_stages; stage_id++) { 900 - struct drm_rect mixer_rect = { 901 - .x1 = stage_id * mode->hdisplay / num_stages, 902 - .y1 = 0, 903 - .x2 = (stage_id + 1) * mode->hdisplay / num_stages, 904 - .y2 = mode->vdisplay 905 - }; 906 - int cfg_idx = stage_id * PIPES_PER_STAGE; 907 - 908 - pipe_cfg = &pstate->pipe_cfg[cfg_idx]; 909 - r_pipe_cfg = &pstate->pipe_cfg[cfg_idx + 1]; 910 - 911 - drm_rect_fp_to_int(&pipe_cfg->src_rect, &new_plane_state->src); 912 - pipe_cfg->dst_rect = new_plane_state->dst; 913 - 914 - DPU_DEBUG_PLANE(pdpu, "checking src " DRM_RECT_FMT 915 - " vs clip window " DRM_RECT_FMT "\n", 916 - DRM_RECT_ARG(&pipe_cfg->src_rect), 917 - DRM_RECT_ARG(&mixer_rect)); 918 - 919 - /* 920 - * If this plane does not fall into mixer rect, check next 921 - * mixer rect.
922 - */ 923 - if (!drm_rect_clip_scaled(&pipe_cfg->src_rect, 924 - &pipe_cfg->dst_rect, 925 - &mixer_rect)) { 926 - memset(pipe_cfg, 0, 2 * sizeof(struct dpu_sw_pipe_cfg)); 927 - 928 - continue; 889 + if ((drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) || 890 + _dpu_plane_calc_clk(&crtc_state->adjusted_mode, pipe_cfg) > max_mdp_clk_rate) { 891 + if (drm_rect_width(&pipe_cfg->src_rect) > 2 * max_linewidth) { 892 + DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u\n", 893 + DRM_RECT_ARG(&pipe_cfg->src_rect), max_linewidth); 894 + return -E2BIG; 929 895 } 930 896 931 - pipe_cfg->dst_rect.x1 -= mixer_rect.x1; 932 - pipe_cfg->dst_rect.x2 -= mixer_rect.x1; 933 - 934 - DPU_DEBUG_PLANE(pdpu, "Got clip src:" DRM_RECT_FMT " dst: " DRM_RECT_FMT "\n", 935 - DRM_RECT_ARG(&pipe_cfg->src_rect), DRM_RECT_ARG(&pipe_cfg->dst_rect)); 936 - 937 - /* Split wide rect into 2 rect */ 938 - if ((drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) || 939 - _dpu_plane_calc_clk(mode, pipe_cfg) > max_mdp_clk_rate) { 940 - 941 - if (drm_rect_width(&pipe_cfg->src_rect) > 2 * max_linewidth) { 942 - DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u\n", 943 - DRM_RECT_ARG(&pipe_cfg->src_rect), max_linewidth); 944 - return -E2BIG; 945 - } 946 - 947 - memcpy(r_pipe_cfg, pipe_cfg, sizeof(struct dpu_sw_pipe_cfg)); 948 - pipe_cfg->src_rect.x2 = (pipe_cfg->src_rect.x1 + pipe_cfg->src_rect.x2) >> 1; 949 - pipe_cfg->dst_rect.x2 = (pipe_cfg->dst_rect.x1 + pipe_cfg->dst_rect.x2) >> 1; 950 - r_pipe_cfg->src_rect.x1 = pipe_cfg->src_rect.x2; 951 - r_pipe_cfg->dst_rect.x1 = pipe_cfg->dst_rect.x2; 952 - DPU_DEBUG_PLANE(pdpu, "Split wide plane into:" 953 - DRM_RECT_FMT " and " DRM_RECT_FMT "\n", 954 - DRM_RECT_ARG(&pipe_cfg->src_rect), 955 - DRM_RECT_ARG(&r_pipe_cfg->src_rect)); 956 - } else { 957 - memset(r_pipe_cfg, 0, sizeof(struct dpu_sw_pipe_cfg)); 958 - } 959 - 960 - drm_rect_rotate_inv(&pipe_cfg->src_rect, 961 - new_plane_state->fb->width, 962 - new_plane_state->fb->height, 963 - new_plane_state->rotation); 964 - 965 - if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) 966 - drm_rect_rotate_inv(&r_pipe_cfg->src_rect, 967 - new_plane_state->fb->width, 968 - new_plane_state->fb->height, 969 - new_plane_state->rotation); 897 + *r_pipe_cfg = *pipe_cfg; 898 + pipe_cfg->src_rect.x2 = (pipe_cfg->src_rect.x1 + pipe_cfg->src_rect.x2) >> 1; 899 + pipe_cfg->dst_rect.x2 = (pipe_cfg->dst_rect.x1 + pipe_cfg->dst_rect.x2) >> 1; 900 + r_pipe_cfg->src_rect.x1 = pipe_cfg->src_rect.x2; 901 + r_pipe_cfg->dst_rect.x1 = pipe_cfg->dst_rect.x2; 902 + } else { 903 + memset(r_pipe_cfg, 0, sizeof(*r_pipe_cfg)); 904 + } 905 + 906 + drm_rect_rotate_inv(&pipe_cfg->src_rect, 907 + new_plane_state->fb->width, new_plane_state->fb->height, 908 + new_plane_state->rotation); 909 + if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) 910 + drm_rect_rotate_inv(&r_pipe_cfg->src_rect, 911 + new_plane_state->fb->width, new_plane_state->fb->height, 912 + new_plane_state->rotation); 971 913 972 914 pstate->needs_qos_remap = drm_atomic_crtc_needs_modeset(crtc_state); 973 915 ··· 985 1045 drm_atomic_get_new_plane_state(state, plane); 986 1046 struct dpu_plane *pdpu = to_dpu_plane(plane); 987 1047 struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 988 - struct dpu_sw_pipe *pipe; 989 - struct dpu_sw_pipe_cfg *pipe_cfg; 990 - int ret = 0, i; 1048 + struct dpu_sw_pipe *pipe = &pstate->pipe[0]; 1049 + struct dpu_sw_pipe *r_pipe = &pstate->pipe[1]; 1050 + struct dpu_sw_pipe_cfg *pipe_cfg = &pstate->pipe_cfg[0]; 1051 + struct dpu_sw_pipe_cfg
*r_pipe_cfg = &pstate->pipe_cfg[1]; 1052 + int ret = 0; 991 1053 992 - for (i = 0; i < PIPES_PER_PLANE; i++) { 993 - pipe = &pstate->pipe[i]; 994 - pipe_cfg = &pstate->pipe_cfg[i]; 995 - if (!drm_rect_width(&pipe_cfg->src_rect)) 996 - continue; 997 - DPU_DEBUG_PLANE(pdpu, "pipe %d is in use, validate it\n", i); 998 - ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, 1054 + ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, 1055 + &crtc_state->adjusted_mode, 1056 + new_plane_state); 1057 + if (ret) 1058 + return ret; 1059 + 1060 + if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) { 1061 + ret = dpu_plane_atomic_check_pipe(pdpu, r_pipe, r_pipe_cfg, 999 1062 &crtc_state->adjusted_mode, 1000 1063 new_plane_state); 1001 1064 if (ret)
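The split that remains above cuts an over-wide source rectangle at its midpoint into a left and a right pipe configuration. A self-contained sketch of just that arithmetic, with struct rect standing in for drm_rect:

#include <stdio.h>

struct rect {
	int x1, y1, x2, y2;
};

static void split_wide(struct rect *src, struct rect *dst,
		       struct rect *r_src, struct rect *r_dst)
{
	*r_src = *src;				/* right half starts as a copy */
	*r_dst = *dst;
	src->x2 = (src->x1 + src->x2) >> 1;	/* left half keeps [x1, mid) */
	dst->x2 = (dst->x1 + dst->x2) >> 1;
	r_src->x1 = src->x2;			/* right half begins at mid */
	r_dst->x1 = dst->x2;
}

int main(void)
{
	struct rect src = { 0, 0, 5120, 1440 }, dst = src, r_src, r_dst;

	split_wide(&src, &dst, &r_src, &r_dst);
	printf("left %d..%d, right %d..%d\n", src.x1, src.x2, r_src.x1, r_src.x2);
	return 0;
}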
+5 -1
drivers/gpu/drm/msm/disp/mdp_format.h
··· 24 24 #define MSM_FORMAT_FLAG_UNPACK_TIGHT BIT(MSM_FORMAT_FLAG_UNPACK_TIGHT_BIT) 25 25 #define MSM_FORMAT_FLAG_UNPACK_ALIGN_MSB BIT(MSM_FORMAT_FLAG_UNPACK_ALIGN_MSB_BIT) 26 26 27 - /** 27 + /* 28 28 * DPU HW,Component order color map 29 29 */ 30 30 enum { ··· 37 37 /** 38 38 * struct msm_format: defines the format configuration 39 39 * @pixel_format: format fourcc 40 + * @bpc_g_y: element bit widths: BPC for G or Y 41 + * @bpc_b_cb: element bit widths: BPC for B or Cb 42 + * @bpc_r_cr: element bit widths: BPC for R or Cr 43 + * @bpc_a: element bit widths: BPC for the alpha channel 40 44 * @element: element color ordering 41 45 * @fetch_type: how the color components are packed in pixel format 42 46 * @chroma_sample: chroma sub-sampling type
+1 -1
drivers/gpu/drm/msm/dp/dp_debug.h
··· 12 12 #if defined(CONFIG_DEBUG_FS) 13 13 14 14 /** 15 - * msm_dp_debug_get() - configure and get the DisplayPlot debug module data 15 + * msm_dp_debug_init() - configure and get the DisplayPlot debug module data 16 16 * 17 17 * @dev: device instance of the caller 18 18 * @panel: instance of panel module
+1
drivers/gpu/drm/msm/dp/dp_drm.c
··· 18 18 /** 19 19 * msm_dp_bridge_detect - callback to determine if connector is connected 20 20 * @bridge: Pointer to drm bridge structure 21 + * @connector: Pointer to drm connector structure 21 22 * Returns: Bridge's 'is connected' status 22 23 */ 23 24 static enum drm_connector_status
+5 -4
drivers/gpu/drm/msm/dp/dp_link.h
··· 80 80 }; 81 81 82 82 /** 83 - * mdss_dp_test_bit_depth_to_bpp() - convert test bit depth to bpp 83 + * msm_dp_link_bit_depth_to_bpp() - convert test bit depth to bpp 84 84 * @tbd: test bit depth 85 85 * 86 - * Returns the bits per pixel (bpp) to be used corresponding to the 87 - * git bit depth value. This function assumes that bit depth has 86 + * Returns: the bits per pixel (bpp) to be used corresponding to the 87 + * bit depth value. This function assumes that bit depth has 88 88 * already been validated. 89 89 */ 90 90 static inline u32 msm_dp_link_bit_depth_to_bpp(u32 tbd) ··· 120 120 121 121 /** 122 122 * msm_dp_link_get() - get the functionalities of dp test module 123 - * 123 + * @dev: kernel device structure 124 + * @aux: DisplayPort AUX channel 124 125 * 125 126 * return: a pointer to msm_dp_link struct 126 127 */
+4 -4
drivers/gpu/drm/msm/dp/dp_panel.h
··· 63 63 64 64 /** 65 65 * is_link_rate_valid() - validates the link rate 66 - * @lane_rate: link rate requested by the sink 66 + * @bw_code: link rate requested by the sink 67 67 * 68 - * Returns true if the requested link rate is supported. 68 + * Returns: true if the requested link rate is supported. 69 69 */ 70 70 static inline bool is_link_rate_valid(u32 bw_code) 71 71 { ··· 76 76 } 77 77 78 78 /** 79 - * msm_dp_link_is_lane_count_valid() - validates the lane count 79 + * is_lane_count_valid() - validates the lane count 80 80 * @lane_count: lane count requested by the sink 81 81 * 82 - * Returns true if the requested lane count is supported. 82 + * Returns: true if the requested lane count is supported. 83 83 */ 84 84 static inline bool is_lane_count_valid(u32 lane_count) 85 85 {
+19 -17
drivers/gpu/drm/msm/msm_fence.h
··· 16 16 * incrementing fence seqno at the end of each submit 17 17 */ 18 18 struct msm_fence_context { 19 + /** @dev: the drm device */ 19 20 struct drm_device *dev; 20 - /** name: human readable name for fence timeline */ 21 + /** @name: human readable name for fence timeline */ 21 22 char name[32]; 22 - /** context: see dma_fence_context_alloc() */ 23 + /** @context: see dma_fence_context_alloc() */ 23 24 unsigned context; 24 - /** index: similar to context, but local to msm_fence_context's */ 25 + /** @index: similar to context, but local to msm_fence_context's */ 25 26 unsigned index; 26 - 27 27 /** 28 - * last_fence: 29 - * 28 + * @last_fence: 30 29 * Last assigned fence, incremented each time a fence is created 31 30 * on this fence context. If last_fence == completed_fence, 32 31 * there is no remaining pending work 33 32 */ 34 33 uint32_t last_fence; 35 - 36 34 /** 37 - * completed_fence: 38 - * 35 + * @completed_fence: 39 36 * The last completed fence, updated from the CPU after interrupt 40 37 * from GPU 41 38 */ 42 39 uint32_t completed_fence; 43 - 44 40 /** 45 - * fenceptr: 46 - * 41 + * @fenceptr: 47 42 * The address that the GPU directly writes with completed fence 48 43 * seqno. This can be ahead of completed_fence. We can peek at 49 44 * this to see if a fence has already signaled but the CPU hasn't ··· 46 51 */ 47 52 volatile uint32_t *fenceptr; 48 53 54 + /** 55 + * @spinlock: fence context spinlock 56 + */ 49 57 spinlock_t spinlock; 50 58 51 59 /* ··· 57 59 * don't queue, so maybe that is ok 58 60 */ 59 61 60 - /** next_deadline: Time of next deadline */ 62 + /** @next_deadline: Time of next deadline */ 61 63 ktime_t next_deadline; 62 - 63 64 /** 64 - * next_deadline_fence: 65 - * 65 + * @next_deadline_fence: 66 66 * Fence value for next pending deadline. The deadline timer is 67 67 * canceled when this fence is signaled. 68 68 */ 69 69 uint32_t next_deadline_fence; 70 - 70 + /** 71 + * @deadline_timer: tracks nearest deadline of a fence timeline and 72 + * expires just before it. 73 + */ 71 74 struct hrtimer deadline_timer; 75 + /** 76 + * @deadline_work: work to do after deadline_timer expires 77 + */ 72 78 struct kthread_work deadline_work; 73 79 }; 74 80
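A note on how these fields cooperate: completed_fence is the CPU-updated shadow, while fenceptr is written directly by the GPU and may run ahead of it. A simplified userspace model of the wraparound-safe completion check (assumed semantics sketched from the field comments above, not the driver's exact helpers):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool seqno_passed(uint32_t completed, uint32_t fence)
{
	/* signed distance survives the 32-bit seqno wrapping around */
	return (int32_t)(completed - fence) >= 0;
}

static bool fence_signaled(uint32_t completed_fence,
			   const volatile uint32_t *fenceptr, uint32_t fence)
{
	/* the CPU shadow may lag the GPU-written seqno, so peek at both */
	return seqno_passed(completed_fence, fence) ||
	       seqno_passed(*fenceptr, fence);
}

int main(void)
{
	volatile uint32_t hw_seqno = 42;

	printf("%d\n", fence_signaled(40, &hw_seqno, 41));	/* 1: GPU is ahead */
	return 0;
}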
+4 -1
drivers/gpu/drm/msm/msm_gem_vma.c
··· 65 65 }; 66 66 67 67 /** 68 - * struct msm_vma_op - A MAP or UNMAP operation 68 + * struct msm_vm_op - A MAP or UNMAP operation 69 69 */ 70 70 struct msm_vm_op { 71 71 /** @op: The operation type */ ··· 798 798 * synchronous operations are supported. In a user managed VM, userspace 799 799 * handles virtual address allocation, and both async and sync operations 800 800 * are supported. 801 + * 802 + * Returns: pointer to the created &struct drm_gpuvm on success 803 + * or an ERR_PTR(-errno) on failure. 801 804 */ 802 805 struct drm_gpuvm * 803 806 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
+18 -50
drivers/gpu/drm/msm/msm_gpu.h
··· 116 116 * struct msm_gpu_devfreq - devfreq related state 117 117 */ 118 118 struct msm_gpu_devfreq { 119 - /** devfreq: devfreq instance */ 119 + /** @devfreq: devfreq instance */ 120 120 struct devfreq *devfreq; 121 - 122 - /** lock: lock for "suspended", "busy_cycles", and "time" */ 121 + /** @lock: lock for "suspended", "busy_cycles", and "time" */ 123 122 struct mutex lock; 124 - 125 123 /** 126 - * idle_freq: 127 - * 124 + * @idle_freq: 128 125 * Shadow frequency used while the GPU is idle. From the PoV of 129 126 * the devfreq governor, we are continuing to sample busyness and 130 127 * adjust frequency while the GPU is idle, but we use this shadow ··· 129 132 * it is inactive. 130 133 */ 131 134 unsigned long idle_freq; 132 - 133 135 /** 134 - * boost_constraint: 135 - * 136 + * @boost_freq: 136 137 * A PM QoS constraint to boost min freq for a period of time 137 138 * until the boost expires. 138 139 */ 139 140 struct dev_pm_qos_request boost_freq; 140 - 141 141 /** 142 - * busy_cycles: Last busy counter value, for calculating elapsed busy 142 + * @busy_cycles: Last busy counter value, for calculating elapsed busy 143 143 * cycles since last sampling period. 144 144 */ 145 145 u64 busy_cycles; 146 - 147 - /** time: Time of last sampling period. */ 146 + /** @time: Time of last sampling period. */ 148 147 ktime_t time; 149 - 150 - /** idle_time: Time of last transition to idle: */ 148 + /** @idle_time: Time of last transition to idle. */ 151 149 ktime_t idle_time; 152 - 153 150 /** 154 - * idle_work: 155 - * 151 + * @idle_work: 156 152 * Used to delay clamping to idle freq on active->idle transition. 157 153 */ 158 154 struct msm_hrtimer_work idle_work; 159 - 160 155 /** 161 - * boost_work: 162 - * 156 + * @boost_work: 163 157 * Used to reset the boost_constraint after the boost period has 164 158 * elapsed 165 159 */ 166 160 struct msm_hrtimer_work boost_work; 167 161 168 - /** suspended: tracks if we're suspended */ 162 + /** @suspended: tracks if we're suspended */ 169 163 bool suspended; 170 164 }; 171 165 ··· 346 358 struct msm_context { 347 359 /** @queuelock: synchronizes access to submitqueues list */ 348 360 rwlock_t queuelock; 349 - 350 361 /** @submitqueues: list of &msm_gpu_submitqueue created by userspace */ 351 362 struct list_head submitqueues; 352 - 353 363 /** 354 364 * @queueid: 355 - * 356 365 * Counter incremented each time a submitqueue is created, used to 357 366 * assign &msm_gpu_submitqueue.id 358 367 */ 359 368 int queueid; 360 - 361 369 /** 362 370 * @closed: The device file associated with this context has been closed. 363 - * 364 371 * Once the device is closed, any submits that have not been written 365 372 * to the ring buffer are no-op'd. 366 373 */ 367 374 bool closed; 368 - 369 375 /** 370 376 * @userspace_managed_vm: 371 - * 372 377 * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via 373 378 * MSM_PARAM_EN_VM_BIND? 374 379 */ 375 380 bool userspace_managed_vm; 376 - 377 381 /** 378 382 * @vm: 379 - * 380 383 * The per-process GPU address-space. Do not access directly, use 381 384 * msm_context_vm(). 382 385 */ 383 386 struct drm_gpuvm *vm; 384 - 385 - /** @kref: the reference count */ 387 + /** @ref: the reference count */ 386 388 struct kref ref; 387 - 388 389 /** 389 390 * @seqno: 390 - * 391 391 * A unique per-process sequence number. Used to detect context 392 392 * switches, without relying on keeping a, potentially dangling, 393 393 * pointer to the previous context.
394 394 */ 395 395 int seqno; 396 - 397 396 /** 398 397 * @sysprof: 399 - * 400 398 * The value of MSM_PARAM_SYSPROF set by userspace. This is 401 399 * intended to be used by system profiling tools like Mesa's 402 400 * pps-producer (perfetto), and restricted to CAP_SYS_ADMIN. ··· 397 423 * file is closed. 398 424 */ 399 425 int sysprof; 400 - 401 426 /** 402 427 * @comm: Overridden task comm, see MSM_PARAM_COMM 403 428 * 404 429 * Accessed under msm_gpu::lock 405 430 */ 406 431 char *comm; 407 - 408 432 /** 409 433 * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE 410 434 * 411 435 * Accessed under msm_gpu::lock 412 436 */ 413 437 char *cmdline; 414 - 415 438 /** 416 - * @elapsed: 417 - * 439 + * @elapsed_ns: 418 440 * The total (cumulative) elapsed time GPU was busy with rendering 419 441 * from this context in ns. 420 442 */ 421 443 uint64_t elapsed_ns; 422 - 423 444 /** 424 445 * @cycles: 425 - * 426 446 * The total (cumulative) GPU cycles elapsed attributed to this 427 447 * context. 428 448 */ 429 449 uint64_t cycles; 430 - 431 450 /** 432 451 * @entities: 433 - * 434 452 * Table of per-priority-level sched entities used by submitqueues 435 453 * associated with this &drm_file. Because some userspace apps 436 454 * make assumptions about rendering from multiple gl contexts ··· 432 466 * level. 433 467 */ 434 468 struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS]; 435 - 436 469 /** 437 470 * @ctx_mem: 438 - * 439 471 * Total amount of memory of GEM buffers with handles attached for 440 472 * this context. 441 473 */ ··· 443 479 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx); 444 480 445 481 /** 446 - * msm_context_is_vm_bind() - has userspace opted in to VM_BIND? 482 + * msm_context_is_vmbind() - has userspace opted in to VM_BIND? 447 483 * 448 484 * @ctx: the drm_file context 449 485 * ··· 451 487 * do sparse binding including having multiple, potentially partial, 452 488 * mappings in the VM. Therefore certain legacy uabi (ie. GET_IOVA, 453 489 * SET_IOVA) are rejected because they don't have a sensible meaning. 490 + * 491 + * Returns: %true if userspace is managing the VM, %false otherwise. 454 492 */ 455 493 static inline bool 456 494 msm_context_is_vmbind(struct msm_context *ctx) ··· 484 518 * This allows generations without preemption (nr_rings==1) to have some 485 519 * amount of prioritization, and provides more priority levels for gens 486 520 * that do have preemption. 521 + * 522 + * Returns: %0 on success, %-errno on error. 487 523 */ 488 524 static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio, 489 525 unsigned *ring_nr, enum drm_sched_priority *sched_prio) ··· 509 541 } 510 542 511 543 /** 512 - * struct msm_gpu_submitqueues - Userspace created context. 544 + * struct msm_gpu_submitqueue - Userspace created context. 513 545 * 514 546 * A submitqueue is associated with a gl context or vk queue (or equiv) 515 547 * in userspace.
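The priority-conversion helper documented above flattens a (ring, scheduler-level) pair into the single value userspace passes in. A userspace sketch of that decomposition, assuming prio = ring_nr * NR_SCHED_PRIORITIES + offset and an illustrative NR_SCHED_PRIORITIES (both are stand-ins, not the driver's exact definitions):

#include <errno.h>
#include <stdio.h>

#define NR_SCHED_PRIORITIES 3	/* illustrative */

static int convert_priority(unsigned int nr_rings, unsigned int prio,
			    unsigned int *ring_nr, unsigned int *sched_prio)
{
	unsigned int rn = prio / NR_SCHED_PRIORITIES;

	if (rn >= nr_rings)
		return -EINVAL;	/* asks for a ring the GPU doesn't have */

	*ring_nr = rn;
	*sched_prio = prio % NR_SCHED_PRIORITIES;
	return 0;
}

int main(void)
{
	unsigned int ring, sp;

	/* nr_rings == 1: prios 0..2 still map to three levels on ring 0 */
	if (!convert_priority(1, 2, &ring, &sp))
		printf("ring %u, sched_prio %u\n", ring, sp);
	return 0;
}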
+2 -2
drivers/gpu/drm/msm/msm_iommu.c
··· 364 364 } 365 365 366 366 /** 367 - * alloc_pt() - Custom page table allocator 367 + * msm_iommu_pagetable_alloc_pt() - Custom page table allocator 368 368 * @cookie: Cookie passed at page table allocation time. 369 369 * @size: Size of the page table. This size should be fixed, 370 370 * and determined at creation time based on the granule size. ··· 416 416 417 417 418 418 /** 419 - * free_pt() - Custom page table free function 419 + * msm_iommu_pagetable_free_pt() - Custom page table free function 420 420 * @cookie: Cookie passed at page table allocation time. 421 421 * @data: Page table to free. 422 422 * @size: Size of the page table. This size should be fixed,
+5 -5
drivers/gpu/drm/msm/msm_perf.c
··· 65 65 66 66 if ((perf->cnt++ % 32) == 0) { 67 67 /* Header line: */ 68 - n = snprintf(ptr, rem, "%%BUSY"); 68 + n = scnprintf(ptr, rem, "%%BUSY"); 69 69 ptr += n; 70 70 rem -= n; 71 71 72 72 for (i = 0; i < gpu->num_perfcntrs; i++) { 73 73 const struct msm_gpu_perfcntr *perfcntr = &gpu->perfcntrs[i]; 74 - n = snprintf(ptr, rem, "\t%s", perfcntr->name); 74 + n = scnprintf(ptr, rem, "\t%s", perfcntr->name); 75 75 ptr += n; 76 76 rem -= n; 77 77 } ··· 93 93 return ret; 94 94 95 95 val = totaltime ? 1000 * activetime / totaltime : 0; 96 - n = snprintf(ptr, rem, "%3d.%d%%", val / 10, val % 10); 96 + n = scnprintf(ptr, rem, "%3d.%d%%", val / 10, val % 10); 97 97 ptr += n; 98 98 rem -= n; 99 99 100 100 for (i = 0; i < ret; i++) { 101 101 /* cycle counters (I think).. convert to MHz.. */ 102 102 val = cntrs[i] / 10000; 103 - n = snprintf(ptr, rem, "\t%5d.%02d", 103 + n = scnprintf(ptr, rem, "\t%5d.%02d", 104 104 val / 100, val % 100); 105 105 ptr += n; 106 106 rem -= n; 107 107 } 108 108 } 109 109 110 - n = snprintf(ptr, rem, "\n"); 110 + n = scnprintf(ptr, rem, "\n"); 111 111 ptr += n; 112 112 rem -= n; 113 113
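The msm_perf change swaps snprintf() for scnprintf() because snprintf() returns the length the output would have had, so after truncation the "ptr += n; rem -= n" bookkeeping walks past the end of the buffer and the remaining-space counter can underflow. scnprintf() is a kernel helper that returns only what was actually stored. A minimal userspace sketch of the difference; my_scnprintf() is an illustrative stand-in, not the kernel implementation:

#include <stdarg.h>
#include <stdio.h>

/* Userspace approximation of the kernel's scnprintf() semantics:
 * return the number of characters actually written, never more
 * than size - 1, instead of the would-be length. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int n;

	if (size == 0)
		return 0;
	va_start(args, fmt);
	n = vsnprintf(buf, size, fmt, args);
	va_end(args);
	if (n < 0)
		return 0;
	return (size_t)n >= size ? (int)(size - 1) : n;
}

int main(void)
{
	char buf[8];

	/* snprintf() reports 11 even though only 7 chars + NUL fit,
	 * so "rem -= n" arithmetic would go negative. */
	int n1 = snprintf(buf, sizeof(buf), "0123456789A");
	/* The scnprintf-style return is 7: safe for pointer math. */
	int n2 = my_scnprintf(buf, sizeof(buf), "0123456789A");

	printf("snprintf=%d scnprintf=%d\n", n1, n2);
	return 0;
}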
+13
drivers/gpu/drm/nouveau/dispnv50/atom.h
··· 152 152 nv50_head_atom_get(struct drm_atomic_state *state, struct drm_crtc *crtc) 153 153 { 154 154 struct drm_crtc_state *statec = drm_atomic_get_crtc_state(state, crtc); 155 + 155 156 if (IS_ERR(statec)) 156 157 return (void *)statec; 158 + 159 + return nv50_head_atom(statec); 160 + } 161 + 162 + static inline struct nv50_head_atom * 163 + nv50_head_atom_get_new(struct drm_atomic_state *state, struct drm_crtc *crtc) 164 + { 165 + struct drm_crtc_state *statec = drm_atomic_get_new_crtc_state(state, crtc); 166 + 167 + if (!statec) 168 + return NULL; 169 + 157 170 return nv50_head_atom(statec); 158 171 } 159 172
+1 -1
drivers/gpu/drm/nouveau/dispnv50/wndw.c
··· 583 583 asyw->image.offset[0] = nvbo->offset; 584 584 585 585 if (wndw->func->prepare) { 586 - asyh = nv50_head_atom_get(asyw->state.state, asyw->state.crtc); 586 + asyh = nv50_head_atom_get_new(asyw->state.state, asyw->state.crtc); 587 587 if (IS_ERR(asyh)) 588 588 return PTR_ERR(asyh); 589 589
+10 -4
drivers/gpu/drm/xe/xe_guc_ct.c
··· 104 104 { 105 105 g2h_fence->cancel = true; 106 106 g2h_fence->fail = true; 107 - g2h_fence->done = true; 107 + 108 + /* WRITE_ONCE pairs with READ_ONCEs in guc_ct_send_recv. */ 109 + WRITE_ONCE(g2h_fence->done, true); 108 110 } 109 111 110 112 static bool g2h_fence_needs_alloc(struct g2h_fence *g2h_fence) ··· 1205 1203 return ret; 1206 1204 } 1207 1205 1208 - ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ); 1206 + /* READ_ONCEs pairs with WRITE_ONCEs in parse_g2h_response 1207 + * and g2h_fence_cancel. 1208 + */ 1209 + ret = wait_event_timeout(ct->g2h_fence_wq, READ_ONCE(g2h_fence.done), HZ); 1209 1210 if (!ret) { 1210 1211 LNL_FLUSH_WORK(&ct->g2h_worker); 1211 - if (g2h_fence.done) { 1212 + if (READ_ONCE(g2h_fence.done)) { 1212 1213 xe_gt_warn(gt, "G2H fence %u, action %04x, done\n", 1213 1214 g2h_fence.seqno, action[0]); 1214 1215 ret = 1; ··· 1459 1454 1460 1455 g2h_release_space(ct, GUC_CTB_HXG_MSG_MAX_LEN); 1461 1456 1462 - g2h_fence->done = true; 1457 + /* WRITE_ONCE pairs with READ_ONCEs in guc_ct_send_recv. */ 1458 + WRITE_ONCE(g2h_fence->done, true); 1463 1459 smp_mb(); 1464 1460 1465 1461 wake_up_all(&ct->g2h_fence_wq);
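The xe_guc_ct fix wraps every access to g2h_fence->done in WRITE_ONCE()/READ_ONCE() so the flag is shared safely between the responder and the wait_event_timeout() waiter: without them the compiler may cache the value in a register or tear the store. A simplified sketch of the idea; the real kernel macros are more elaborate, and ordering against other fields still needs the explicit barriers shown in the diff:

#include <stdio.h>

/* Simplified READ_ONCE/WRITE_ONCE: force a single volatile access
 * so the compiler can neither cache the value nor tear the store. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

static int done;

int main(void)
{
	WRITE_ONCE(done, 1);        /* e.g. the parse_g2h_response() side */
	while (!READ_ONCE(done))    /* e.g. the waiter: must re-load */
		;
	printf("done=%d\n", READ_ONCE(done));
	return 0;
}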
+20 -5
drivers/gpu/drm/xe/xe_migrate.c
··· 2062 2062 unsigned long sram_offset, 2063 2063 struct drm_pagemap_addr *sram_addr, 2064 2064 u64 vram_addr, 2065 + struct dma_fence *deps, 2065 2066 const enum xe_migrate_copy_dir dir) 2066 2067 { 2067 2068 struct xe_gt *gt = m->tile->primary_gt; ··· 2151 2150 2152 2151 xe_sched_job_add_migrate_flush(job, MI_INVALIDATE_TLB); 2153 2152 2153 + if (deps && !dma_fence_is_signaled(deps)) { 2154 + dma_fence_get(deps); 2155 + err = drm_sched_job_add_dependency(&job->drm, deps); 2156 + if (err) 2157 + dma_fence_wait(deps, false); 2158 + err = 0; 2159 + } 2160 + 2154 2161 mutex_lock(&m->job_mutex); 2155 2162 xe_sched_job_arm(job); 2156 2163 fence = dma_fence_get(&job->drm.s_fence->finished); ··· 2184 2175 * @npages: Number of pages to migrate. 2185 2176 * @src_addr: Array of DMA information (source of migrate) 2186 2177 * @dst_addr: Device physical address of VRAM (destination of migrate) 2178 + * @deps: struct dma_fence representing the dependencies that need 2179 + * to be signaled before migration. 2187 2180 * 2188 2181 * Copy from an array dma addresses to a VRAM device physical address 2189 2182 * ··· 2195 2184 struct dma_fence *xe_migrate_to_vram(struct xe_migrate *m, 2196 2185 unsigned long npages, 2197 2186 struct drm_pagemap_addr *src_addr, 2198 - u64 dst_addr) 2187 + u64 dst_addr, 2188 + struct dma_fence *deps) 2199 2189 { 2200 2190 return xe_migrate_vram(m, npages * PAGE_SIZE, 0, src_addr, dst_addr, 2201 - XE_MIGRATE_COPY_TO_VRAM); 2191 + deps, XE_MIGRATE_COPY_TO_VRAM); 2202 2192 } 2203 2193 2204 2194 /** ··· 2208 2196 * @npages: Number of pages to migrate. 2209 2197 * @src_addr: Device physical address of VRAM (source of migrate) 2210 2198 * @dst_addr: Array of DMA information (destination of migrate) 2199 + * @deps: struct dma_fence representing the dependencies that need 2200 + * to be signaled before migration. 2211 2201 * 2212 2202 * Copy from a VRAM device physical address to an array dma addresses 2213 2203 * ··· 2219 2205 struct dma_fence *xe_migrate_from_vram(struct xe_migrate *m, 2220 2206 unsigned long npages, 2221 2207 u64 src_addr, 2222 - struct drm_pagemap_addr *dst_addr) 2208 + struct drm_pagemap_addr *dst_addr, 2209 + struct dma_fence *deps) 2223 2210 { 2224 2211 return xe_migrate_vram(m, npages * PAGE_SIZE, 0, dst_addr, src_addr, 2225 - XE_MIGRATE_COPY_TO_SRAM); 2212 + deps, XE_MIGRATE_COPY_TO_SRAM); 2226 2213 } 2227 2214 2228 2215 static void xe_migrate_dma_unmap(struct xe_device *xe, ··· 2399 2384 __fence = xe_migrate_vram(m, current_bytes, 2400 2385 (unsigned long)buf & ~PAGE_MASK, 2401 2386 &pagemap_addr[current_page], 2402 - vram_addr, write ? 2387 + vram_addr, NULL, write ? 2403 2388 XE_MIGRATE_COPY_TO_VRAM : 2404 2389 XE_MIGRATE_COPY_TO_SRAM); 2405 2390 if (IS_ERR(__fence)) {
+4 -2
drivers/gpu/drm/xe/xe_migrate.h
··· 116 116 struct dma_fence *xe_migrate_to_vram(struct xe_migrate *m, 117 117 unsigned long npages, 118 118 struct drm_pagemap_addr *src_addr, 119 - u64 dst_addr); 119 + u64 dst_addr, 120 + struct dma_fence *deps); 120 121 121 122 struct dma_fence *xe_migrate_from_vram(struct xe_migrate *m, 122 123 unsigned long npages, 123 124 u64 src_addr, 124 - struct drm_pagemap_addr *dst_addr); 125 + struct drm_pagemap_addr *dst_addr, 126 + struct dma_fence *deps); 125 127 126 128 struct dma_fence *xe_migrate_copy(struct xe_migrate *m, 127 129 struct xe_bo *src_bo,
+38 -13
drivers/gpu/drm/xe/xe_svm.c
··· 476 476 477 477 static int xe_svm_copy(struct page **pages, 478 478 struct drm_pagemap_addr *pagemap_addr, 479 - unsigned long npages, const enum xe_svm_copy_dir dir) 479 + unsigned long npages, const enum xe_svm_copy_dir dir, 480 + struct dma_fence *pre_migrate_fence) 480 481 { 481 482 struct xe_vram_region *vr = NULL; 482 483 struct xe_gt *gt = NULL; ··· 566 565 __fence = xe_migrate_from_vram(vr->migrate, 567 566 i - pos + incr, 568 567 vram_addr, 569 - &pagemap_addr[pos]); 568 + &pagemap_addr[pos], 569 + pre_migrate_fence); 570 570 } else { 571 571 vm_dbg(&xe->drm, 572 572 "COPY TO VRAM - 0x%016llx -> 0x%016llx, NPAGES=%ld", ··· 576 574 __fence = xe_migrate_to_vram(vr->migrate, 577 575 i - pos + incr, 578 576 &pagemap_addr[pos], 579 - vram_addr); 577 + vram_addr, 578 + pre_migrate_fence); 580 579 } 581 580 if (IS_ERR(__fence)) { 582 581 err = PTR_ERR(__fence); 583 582 goto err_out; 584 583 } 585 - 584 + pre_migrate_fence = NULL; 586 585 dma_fence_put(fence); 587 586 fence = __fence; 588 587 } ··· 606 603 vram_addr, (u64)pagemap_addr[pos].addr, 1); 607 604 __fence = xe_migrate_from_vram(vr->migrate, 1, 608 605 vram_addr, 609 - &pagemap_addr[pos]); 606 + &pagemap_addr[pos], 607 + pre_migrate_fence); 610 608 } else { 611 609 vm_dbg(&xe->drm, 612 610 "COPY TO VRAM - 0x%016llx -> 0x%016llx, NPAGES=%d", 613 611 (u64)pagemap_addr[pos].addr, vram_addr, 1); 614 612 __fence = xe_migrate_to_vram(vr->migrate, 1, 615 613 &pagemap_addr[pos], 616 - vram_addr); 614 + vram_addr, 615 + pre_migrate_fence); 617 616 } 618 617 if (IS_ERR(__fence)) { 619 618 err = PTR_ERR(__fence); 620 619 goto err_out; 621 620 } 622 - 621 + pre_migrate_fence = NULL; 623 622 dma_fence_put(fence); 624 623 fence = __fence; 625 624 } ··· 634 629 dma_fence_wait(fence, false); 635 630 dma_fence_put(fence); 636 631 } 632 + if (pre_migrate_fence) 633 + dma_fence_wait(pre_migrate_fence, false); 637 634 638 635 /* 639 636 * XXX: We can't derive the GT here (or anywhere in this functions, but ··· 652 645 653 646 static int xe_svm_copy_to_devmem(struct page **pages, 654 647 struct drm_pagemap_addr *pagemap_addr, 655 - unsigned long npages) 648 + unsigned long npages, 649 + struct dma_fence *pre_migrate_fence) 656 650 { 657 - return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_VRAM); 651 + return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_VRAM, 652 + pre_migrate_fence); 658 653 } 659 654 660 655 static int xe_svm_copy_to_ram(struct page **pages, 661 656 struct drm_pagemap_addr *pagemap_addr, 662 - unsigned long npages) 657 + unsigned long npages, 658 + struct dma_fence *pre_migrate_fence) 663 659 { 664 - return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_SRAM); 660 + return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_SRAM, 661 + pre_migrate_fence); 665 662 } 666 663 667 664 static struct xe_bo *to_xe_bo(struct drm_pagemap_devmem *devmem_allocation) ··· 678 667 struct xe_bo *bo = to_xe_bo(devmem_allocation); 679 668 struct xe_device *xe = xe_bo_device(bo); 680 669 670 + dma_fence_put(devmem_allocation->pre_migrate_fence); 681 671 xe_bo_put_async(bo); 682 672 xe_pm_runtime_put(xe); 683 673 } ··· 873 861 unsigned long timeslice_ms) 874 862 { 875 863 struct xe_vram_region *vr = container_of(dpagemap, typeof(*vr), dpagemap); 864 + struct dma_fence *pre_migrate_fence = NULL; 876 865 struct xe_device *xe = vr->xe; 877 866 struct device *dev = xe->drm.dev; 878 867 struct drm_buddy_block *block; ··· 900 887 break; 901 888 } 902 889 890 + /* Ensure that any clearing or async eviction will 
complete before migration. */ 891 + if (!dma_resv_test_signaled(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL)) { 892 + err = dma_resv_get_singleton(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL, 893 + &pre_migrate_fence); 894 + if (err) 895 + dma_resv_wait_timeout(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL, 896 + false, MAX_SCHEDULE_TIMEOUT); 897 + else if (pre_migrate_fence) 898 + dma_fence_enable_sw_signaling(pre_migrate_fence); 899 + } 900 + 903 901 drm_pagemap_devmem_init(&bo->devmem_allocation, dev, mm, 904 - &dpagemap_devmem_ops, dpagemap, end - start); 902 + &dpagemap_devmem_ops, dpagemap, end - start, 903 + pre_migrate_fence); 905 904 906 905 blocks = &to_xe_ttm_vram_mgr_resource(bo->ttm.resource)->blocks; 907 906 list_for_each_entry(block, blocks, link) ··· 966 941 xe_assert(vm->xe, IS_DGFX(vm->xe)); 967 942 968 943 if (xe_svm_range_in_vram(range)) { 969 - drm_info(&vm->xe->drm, "Range is already in VRAM\n"); 944 + drm_dbg(&vm->xe->drm, "Range is already in VRAM\n"); 970 945 return false; 971 946 } 972 947
+10 -23
drivers/infiniband/core/addr.c
··· 80 80 .min = sizeof(struct rdma_nla_ls_gid)}, 81 81 }; 82 82 83 - static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh) 83 + static void ib_nl_process_ip_rsep(const struct nlmsghdr *nlh) 84 84 { 85 85 struct nlattr *tb[LS_NLA_TYPE_MAX] = {}; 86 + union ib_gid gid; 87 + struct addr_req *req; 88 + int found = 0; 86 89 int ret; 87 90 88 91 if (nlh->nlmsg_flags & RDMA_NL_LS_F_ERR) 89 - return false; 92 + return; 90 93 91 94 ret = nla_parse_deprecated(tb, LS_NLA_TYPE_MAX - 1, nlmsg_data(nlh), 92 95 nlmsg_len(nlh), ib_nl_addr_policy, NULL); 93 96 if (ret) 94 - return false; 97 + return; 95 98 96 - return true; 97 - } 98 - 99 - static void ib_nl_process_good_ip_rsep(const struct nlmsghdr *nlh) 100 - { 101 - const struct nlattr *head, *curr; 102 - union ib_gid gid; 103 - struct addr_req *req; 104 - int len, rem; 105 - int found = 0; 106 - 107 - head = (const struct nlattr *)nlmsg_data(nlh); 108 - len = nlmsg_len(nlh); 109 - 110 - nla_for_each_attr(curr, head, len, rem) { 111 - if (curr->nla_type == LS_NLA_TYPE_DGID) 112 - memcpy(&gid, nla_data(curr), nla_len(curr)); 113 - } 99 + if (!tb[LS_NLA_TYPE_DGID]) 100 + return; 101 + memcpy(&gid, nla_data(tb[LS_NLA_TYPE_DGID]), sizeof(gid)); 114 102 115 103 spin_lock_bh(&lock); 116 104 list_for_each_entry(req, &req_list, list) { ··· 125 137 !(NETLINK_CB(skb).sk)) 126 138 return -EPERM; 127 139 128 - if (ib_nl_is_good_ip_resp(nlh)) 129 - ib_nl_process_good_ip_rsep(nlh); 140 + ib_nl_process_ip_rsep(nlh); 130 141 131 142 return 0; 132 143 }
+3
drivers/infiniband/core/cma.c
··· 2009 2009 ib_sa_free_multicast(mc->sa_mc); 2010 2010 2011 2011 if (rdma_protocol_roce(id_priv->id.device, id_priv->id.port_num)) { 2012 + struct rdma_cm_event *event = &mc->iboe_join.event; 2012 2013 struct rdma_dev_addr *dev_addr = 2013 2014 &id_priv->id.route.addr.dev_addr; 2014 2015 struct net_device *ndev = NULL; ··· 2032 2031 dev_put(ndev); 2033 2032 2034 2033 cancel_work_sync(&mc->iboe_join.work); 2034 + if (event->event == RDMA_CM_EVENT_MULTICAST_JOIN) 2035 + rdma_destroy_ah_attr(&event->param.ud.ah_attr); 2035 2036 } 2036 2037 kfree(mc); 2037 2038 }
+3 -1
drivers/infiniband/core/device.c
··· 2881 2881 { 2882 2882 struct ib_device *parent = sub->parent; 2883 2883 2884 - if (!parent) 2884 + if (!parent) { 2885 + ib_device_put(sub); 2885 2886 return -EOPNOTSUPP; 2887 + } 2886 2888 2887 2889 mutex_lock(&parent->subdev_lock); 2888 2890 list_del(&sub->subdev_list);
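The core/device.c fix illustrates a recurring rule: a function holding a reference must drop it on every exit path, including the early -EOPNOTSUPP return. A toy refcount sketch of the balanced pattern; obj_get()/obj_put() are illustrative, not the ib_device API:

#include <stdio.h>
#include <stdlib.h>

struct obj { int refcount; };

static void obj_get(struct obj *o) { o->refcount++; }

static void obj_put(struct obj *o)
{
	if (--o->refcount == 0) {
		printf("freeing object\n");
		free(o);
	}
}

static int do_op(struct obj *o, int supported)
{
	obj_get(o);
	if (!supported) {
		obj_put(o);     /* the fix: don't leak the ref on error */
		return -1;      /* stand-in for -EOPNOTSUPP */
	}
	/* ... real work ... */
	obj_put(o);
	return 0;
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	o->refcount = 1;
	do_op(o, 0);
	obj_put(o);             /* last ref dropped, object freed */
	return 0;
}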
+1 -1
drivers/infiniband/core/verbs.c
··· 738 738 (struct in6_addr *)dgid); 739 739 return 0; 740 740 } else if (net_type == RDMA_NETWORK_IPV6 || 741 - net_type == RDMA_NETWORK_IB || RDMA_NETWORK_ROCE_V1) { 741 + net_type == RDMA_NETWORK_IB || net_type == RDMA_NETWORK_ROCE_V1) { 742 742 *dgid = hdr->ibgrh.dgid; 743 743 *sgid = hdr->ibgrh.sgid; 744 744 return 0;
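The verbs.c one-liner fixes a classic always-true condition: in "net_type == RDMA_NETWORK_IB || RDMA_NETWORK_ROCE_V1" the second operand is a bare nonzero enum constant, so the whole test succeeds for every net_type. A standalone demonstration with illustrative enum values:

#include <stdio.h>

enum net_type { NET_IPV4, NET_IPV6, NET_IB, NET_ROCE_V1 };

int main(void)
{
	enum net_type t = NET_IPV4;

	/* Bug: NET_ROCE_V1 is the constant 3, so this reads as
	 * (t == NET_IB) || 3, which is always true. */
	if (t == NET_IB || NET_ROCE_V1)
		printf("buggy check matched (it always does)\n");

	/* Fix: compare the variable against each value. */
	if (t == NET_IB || t == NET_ROCE_V1)
		printf("fixed check matched\n");
	else
		printf("fixed check correctly rejected NET_IPV4\n");
	return 0;
}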
+3 -3
drivers/infiniband/hw/bnxt_re/hw_counters.h
··· 89 89 BNXT_RE_RES_SRQ_LOAD_ERR, 90 90 BNXT_RE_RES_TX_PCI_ERR, 91 91 BNXT_RE_RES_RX_PCI_ERR, 92 + BNXT_RE_REQ_CQE_ERROR, 93 + BNXT_RE_RESP_CQE_ERROR, 94 + BNXT_RE_RESP_REMOTE_ACCESS_ERRS, 92 95 BNXT_RE_OUT_OF_SEQ_ERR, 93 96 BNXT_RE_TX_ATOMIC_REQ, 94 97 BNXT_RE_TX_READ_REQ, ··· 113 110 BNXT_RE_TX_CNP, 114 111 BNXT_RE_RX_CNP, 115 112 BNXT_RE_RX_ECN, 116 - BNXT_RE_REQ_CQE_ERROR, 117 - BNXT_RE_RESP_CQE_ERROR, 118 - BNXT_RE_RESP_REMOTE_ACCESS_ERRS, 119 113 BNXT_RE_NUM_EXT_COUNTERS 120 114 }; 121 115
+1 -6
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 2919 2919 wqe.rawqp1.lflags |= 2920 2920 SQ_SEND_RAWETH_QP1_LFLAGS_ROCE_CRC; 2921 2921 } 2922 - switch (wr->send_flags) { 2923 - case IB_SEND_IP_CSUM: 2922 + if (wr->send_flags & IB_SEND_IP_CSUM) 2924 2923 wqe.rawqp1.lflags |= 2925 2924 SQ_SEND_RAWETH_QP1_LFLAGS_IP_CHKSUM; 2926 - break; 2927 - default: 2928 - break; 2929 - } 2930 2925 fallthrough; 2931 2926 case IB_WR_SEND_WITH_INV: 2932 2927 rc = bnxt_re_build_send_wqe(qp, wr, &wqe);
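The bnxt_re hunk replaces a switch on wr->send_flags with a bitwise test because send_flags is a bitmask: "case IB_SEND_IP_CSUM" only matches when that is the sole flag set, silently skipping checksum offload whenever another flag rides along. A self-contained illustration with made-up flag values:

#include <stdio.h>

#define SEND_SIGNALED (1u << 0)
#define SEND_IP_CSUM  (1u << 2)

int main(void)
{
	unsigned int flags = SEND_SIGNALED | SEND_IP_CSUM;

	/* Buggy: only matches when SEND_IP_CSUM is the only bit. */
	switch (flags) {
	case SEND_IP_CSUM:
		printf("switch saw IP_CSUM\n");
		break;
	default:
		printf("switch missed IP_CSUM (flags=0x%x)\n", flags);
		break;
	}

	/* Fixed: tests the bit regardless of what else is set. */
	if (flags & SEND_IP_CSUM)
		printf("bit test saw IP_CSUM\n");
	return 0;
}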
+1 -1
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
··· 1112 1112 creq_db->dbinfo.flags = 0; 1113 1113 creq_db->reg.bar_id = RCFW_COMM_CONS_PCI_BAR_REGION; 1114 1114 creq_db->reg.bar_base = pci_resource_start(pdev, creq_db->reg.bar_id); 1115 - if (!creq_db->reg.bar_id) 1115 + if (!creq_db->reg.bar_base) 1116 1116 dev_err(&pdev->dev, 1117 1117 "QPLIB: CREQ BAR region %d resc start is 0!", 1118 1118 creq_db->reg.bar_id);
+3 -5
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 64 64 for (i = 0; i < pbl->pg_count; i++) { 65 65 if (pbl->pg_arr[i]) 66 66 dma_free_coherent(&pdev->dev, pbl->pg_size, 67 - (void *)((unsigned long) 68 - pbl->pg_arr[i] & 69 - PAGE_MASK), 67 + pbl->pg_arr[i], 70 68 pbl->pg_map_arr[i]); 71 69 else 72 70 dev_warn(&pdev->dev, ··· 235 237 if (npbl % BIT(MAX_PDL_LVL_SHIFT)) 236 238 npde++; 237 239 /* Alloc PDE pages */ 238 - sginfo.pgsize = npde * pg_size; 240 + sginfo.pgsize = npde * ROCE_PG_SIZE_4K; 239 241 sginfo.npages = 1; 240 242 rc = __alloc_pbl(res, &hwq->pbl[PBL_LVL_0], &sginfo); 241 243 if (rc) ··· 243 245 244 246 /* Alloc PBL pages */ 245 247 sginfo.npages = npbl; 246 - sginfo.pgsize = PAGE_SIZE; 248 + sginfo.pgsize = ROCE_PG_SIZE_4K; 247 249 rc = __alloc_pbl(res, &hwq->pbl[PBL_LVL_1], &sginfo); 248 250 if (rc) 249 251 goto fail;
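The qplib_res hunk stops masking the coherent-buffer pointer with PAGE_MASK before dma_free_coherent(): the free must receive exactly the cpu address the allocation returned, and masking rewrites any address that is not page-aligned (the hunk also sizes PBLs in the device's fixed 4K pages rather than the host PAGE_SIZE). A small sketch of what the mask does to an unaligned address; the address value is hypothetical:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	uintptr_t addr = 0x12345678;        /* not 4K-aligned */
	uintptr_t masked = addr & PAGE_MASK;

	printf("addr=0x%lx masked=0x%lx %s\n",
	       (unsigned long)addr, (unsigned long)masked,
	       addr == masked ? "(same)" : "(changed!)");
	return 0;
}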
-4
drivers/infiniband/hw/efa/efa_verbs.c
··· 1320 1320 u32 hp_cnt, 1321 1321 u8 hp_shift) 1322 1322 { 1323 - u32 pages_in_hp = BIT(hp_shift - PAGE_SHIFT); 1324 1323 struct ib_block_iter biter; 1325 1324 unsigned int hp_idx = 0; 1326 - 1327 - ibdev_dbg(&dev->ibdev, "hp_cnt[%u], pages_in_hp[%u]\n", 1328 - hp_cnt, pages_in_hp); 1329 1325 1330 1326 rdma_umem_for_each_dma_block(umem, &biter, BIT(hp_shift)) 1331 1327 page_list[hp_idx++] = rdma_block_iter_dma_address(&biter);
+2 -1
drivers/infiniband/hw/irdma/utils.c
··· 251 251 void *ptr) 252 252 { 253 253 struct neighbour *neigh = ptr; 254 - struct net_device *real_dev, *netdev = (struct net_device *)neigh->dev; 254 + struct net_device *real_dev, *netdev; 255 255 struct irdma_device *iwdev; 256 256 struct ib_device *ibdev; 257 257 __be32 *p; ··· 260 260 261 261 switch (event) { 262 262 case NETEVENT_NEIGH_UPDATE: 263 + netdev = neigh->dev; 263 264 real_dev = rdma_vlan_dev_real_dev(netdev); 264 265 if (!real_dev) 265 266 real_dev = netdev;
+4
drivers/infiniband/hw/mana/cq.c
··· 56 56 doorbell = mana_ucontext->doorbell; 57 57 } else { 58 58 is_rnic_cq = true; 59 + if (attr->cqe > U32_MAX / COMP_ENTRY_SIZE / 2 + 1) { 60 + ibdev_dbg(ibdev, "CQE %d exceeding limit\n", attr->cqe); 61 + return -EINVAL; 62 + } 59 63 buf_size = MANA_PAGE_ALIGN(roundup_pow_of_two(attr->cqe * COMP_ENTRY_SIZE)); 60 64 cq->cqe = buf_size / COMP_ENTRY_SIZE; 61 65 err = mana_ib_create_kernel_queue(mdev, buf_size, GDMA_CQ, &cq->queue);
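The mana/cq.c guard bounds attr->cqe before the byte-size math so that cqe * COMP_ENTRY_SIZE, rounded up to a power of two, cannot wrap 32 bits; dividing the limit rather than multiplying the input keeps the comparison itself overflow-free, and the extra /2 leaves headroom for the round-up. A generic sketch of that guard shape; the entry size here is an assumption for illustration:

#include <stdint.h>
#include <stdio.h>

#define COMP_ENTRY_SIZE 64u   /* illustrative value */

static int check_cqe(uint32_t cqe)
{
	/* Reject counts whose rounded-up byte size would wrap u32. */
	if (cqe > UINT32_MAX / COMP_ENTRY_SIZE / 2 + 1)
		return -1;    /* stand-in for -EINVAL */
	return 0;
}

int main(void)
{
	printf("cqe=1024       -> %d\n", check_cqe(1024));        /* ok */
	printf("cqe=0x40000000 -> %d\n", check_cqe(0x40000000u)); /* rejected */
	return 0;
}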
+32
drivers/infiniband/sw/rxe/rxe_net.c
··· 64 64 break; 65 65 default: 66 66 WARN_ON_ONCE(1); 67 + return; 67 68 } 69 + /* 70 + * sock_lock_init_class_and_name() calls 71 + * sk_owner_set(sk, THIS_MODULE); in order 72 + * to make sure the referenced global 73 + * variables rxe_recv_slock_key and 74 + * rxe_recv_sk_key are not removed 75 + * before the socket is closed. 76 + * 77 + * However this prevents rxe_net_exit() 78 + * from being called and 'rmmod rdma_rxe' 79 + * is refused because of the references. 80 + * 81 + * For the global sockets in recv_sockets, 82 + * we are sure that rxe_net_exit() will call 83 + * rxe_release_udp_tunnel -> udp_tunnel_sock_release. 84 + * 85 + * So we don't need the additional reference to 86 + * our own (THIS_MODULE). 87 + */ 88 + sk_owner_put(sk); 89 + /* 90 + * We also call sk_owner_clear() otherwise 91 + * sk_owner_put(sk) in sk_prot_free will 92 + * fail, which is called via 93 + * sk_free -> __sk_free -> sk_destruct 94 + * and sk_destruct calls __sk_destruct 95 + * directly or via call_rcu() 96 + * so sk_prot_free() might be called 97 + * after rxe_net_exit(). 98 + */ 99 + sk_owner_clear(sk); 68 100 #endif /* CONFIG_DEBUG_LOCK_ALLOC */ 69 101 } 70 102
+3 -1
drivers/infiniband/sw/rxe/rxe_odp.c
··· 179 179 return err; 180 180 181 181 need_fault = rxe_check_pagefault(umem_odp, iova, length); 182 - if (need_fault) 182 + if (need_fault) { 183 + mutex_unlock(&umem_odp->umem_mutex); 183 184 return -EFAULT; 185 + } 184 186 } 185 187 186 188 return 0;
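The rxe_odp fix releases umem_mutex before the early -EFAULT return; the prior code left the mutex held, deadlocking the next taker. The same rule in a runnable pthread sketch (compile with -pthread); names are illustrative:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int check_pages(int need_fault)
{
	pthread_mutex_lock(&lock);
	if (need_fault) {
		pthread_mutex_unlock(&lock);   /* the fix */
		return -14;                    /* stand-in for -EFAULT */
	}
	/* ... inspect page state under the lock ... */
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	printf("fault path  -> %d\n", check_pages(1));
	/* Without the unlock above, this call would block forever
	 * on the still-held mutex. */
	printf("normal path -> %d\n", check_pages(0));
	return 0;
}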
+1
drivers/infiniband/ulp/rtrs/rtrs-clt.c
··· 1464 1464 mr_page_shift = max(12, ffs(ib_dev->attrs.page_size_cap) - 1); 1465 1465 max_pages_per_mr = ib_dev->attrs.max_mr_size; 1466 1466 do_div(max_pages_per_mr, (1ull << mr_page_shift)); 1467 + max_pages_per_mr = min_not_zero((u32)max_pages_per_mr, U32_MAX); 1467 1468 clt_path->max_pages_per_mr = 1468 1469 min3(clt_path->max_pages_per_mr, (u32)max_pages_per_mr, 1469 1470 ib_dev->attrs.max_fast_reg_page_list_len);
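The rtrs-clt line clamps the 64-bit max_pages_per_mr before it is narrowed to u32: max_mr_size divided by the page size can still exceed U32_MAX, and a bare cast would silently keep only the low 32 bits, shrinking the limit to a tiny or zero value. The hazard in isolation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t max_pages = 0x100000001ULL;   /* > UINT32_MAX */

	uint32_t wrapped = (uint32_t)max_pages;        /* becomes 1 */
	uint32_t clamped = max_pages > UINT32_MAX
			   ? UINT32_MAX : (uint32_t)max_pages;

	printf("wrapped=%u clamped=%u\n", wrapped, clamped);
	return 0;
}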
+21 -11
drivers/infiniband/ulp/rtrs/rtrs-pri.h
··· 150 150 151 151 /** 152 152 * enum rtrs_msg_flags - RTRS message flags. 153 - * @RTRS_NEED_INVAL: Send invalidation in response. 153 + * @RTRS_MSG_NEED_INVAL_F: Send invalidation in response. 154 154 * @RTRS_MSG_NEW_RKEY_F: Send refreshed rkey in response. 155 155 */ 156 156 enum rtrs_msg_flags { ··· 179 179 * @recon_cnt: Reconnections counter 180 180 * @sess_uuid: UUID of a session (path) 181 181 * @paths_uuid: UUID of a group of sessions (paths) 182 - * 182 + * @first_conn: %1 if the connection request is the first for that session, 183 + * otherwise %0 183 184 * NOTE: max size 56 bytes, see man rdma_connect(). 184 185 */ 185 186 struct rtrs_msg_conn_req { 186 - /* Is set to 0 by cma.c in case of AF_IB, do not touch that. 187 - * see https://www.spinics.net/lists/linux-rdma/msg22397.html 187 + /** 188 + * @__cma_version: Is set to 0 by cma.c in case of AF_IB, do not touch 189 + * that. See https://www.spinics.net/lists/linux-rdma/msg22397.html 188 190 */ 189 191 u8 __cma_version; 190 - /* On sender side that should be set to 0, or cma_save_ip_info() 191 - * extract garbage and will fail. 192 + /** 193 + * @__ip_version: On sender side that should be set to 0, or 194 + * cma_save_ip_info() extract garbage and will fail. 192 195 */ 193 196 u8 __ip_version; 194 197 __le16 magic; ··· 202 199 uuid_t sess_uuid; 203 200 uuid_t paths_uuid; 204 201 u8 first_conn : 1; 202 + /* private: */ 205 203 u8 reserved_bits : 7; 206 204 u8 reserved[11]; 207 205 }; ··· 215 211 * @queue_depth: max inflight messages (queue-depth) in this session 216 212 * @max_io_size: max io size server supports 217 213 * @max_hdr_size: max msg header size server supports 214 + * @flags: RTRS message flags for this message 218 215 * 219 216 * NOTE: size is 56 bytes, max possible is 136 bytes, see man rdma_accept(). 
220 217 */ ··· 227 222 __le32 max_io_size; 228 223 __le32 max_hdr_size; 229 224 __le32 flags; 225 + /* private: */ 230 226 u8 reserved[36]; 231 227 }; 232 228 233 229 /** 234 - * struct rtrs_msg_info_req 230 + * struct rtrs_msg_info_req - client additional info request 235 231 * @type: @RTRS_MSG_INFO_REQ 236 232 * @pathname: Path name chosen by client 237 233 */ 238 234 struct rtrs_msg_info_req { 239 235 __le16 type; 240 236 u8 pathname[NAME_MAX]; 237 + /* private: */ 241 238 u8 reserved[15]; 242 239 }; 243 240 244 241 /** 245 - * struct rtrs_msg_info_rsp 242 + * struct rtrs_msg_info_rsp - server additional info response 246 243 * @type: @RTRS_MSG_INFO_RSP 247 244 * @sg_cnt: Number of @desc entries 248 245 * @desc: RDMA buffers where the client can write to server ··· 252 245 struct rtrs_msg_info_rsp { 253 246 __le16 type; 254 247 __le16 sg_cnt; 248 + /* private: */ 255 249 u8 reserved[4]; 250 + /* public: */ 256 251 struct rtrs_sg_desc desc[]; 257 252 }; 258 253 259 254 /** 260 - * struct rtrs_msg_rkey_rsp 255 + * struct rtrs_msg_rkey_rsp - server refreshed rkey response 261 256 * @type: @RTRS_MSG_RKEY_RSP 262 257 * @buf_id: RDMA buf_id of the new rkey 263 258 * @rkey: new remote key for RDMA buffers id from server ··· 273 264 /** 274 265 * struct rtrs_msg_rdma_read - RDMA data transfer request from client 275 266 * @type: always @RTRS_MSG_READ 267 + * @flags: RTRS message flags (enum rtrs_msg_flags) 276 268 * @usr_len: length of user payload 277 269 * @sg_cnt: number of @desc entries 278 270 * @desc: RDMA buffers where the server can write the result to ··· 287 277 }; 288 278 289 279 /** 290 - * struct_msg_rdma_write - Message transferred to server with RDMA-Write 280 + * struct rtrs_msg_rdma_write - Message transferred to server with RDMA-Write 291 281 * @type: always @RTRS_MSG_WRITE 292 282 * @usr_len: length of user payload 293 283 */ ··· 297 287 }; 298 288 299 289 /** 300 - * struct_msg_rdma_hdr - header for read or write request 290 + * struct rtrs_msg_rdma_hdr - header for read or write request 301 291 * @type: @RTRS_MSG_WRITE | @RTRS_MSG_READ 302 292 */ 303 293 struct rtrs_msg_rdma_hdr {
+15 -9
drivers/infiniband/ulp/rtrs/rtrs.h
··· 24 24 25 25 /** 26 26 * enum rtrs_clt_link_ev - Events about connectivity state of a client 27 - * @RTRS_CLT_LINK_EV_RECONNECTED Client was reconnected. 28 - * @RTRS_CLT_LINK_EV_DISCONNECTED Client was disconnected. 27 + * @RTRS_CLT_LINK_EV_RECONNECTED: Client was reconnected. 28 + * @RTRS_CLT_LINK_EV_DISCONNECTED: Client was disconnected. 29 29 */ 30 30 enum rtrs_clt_link_ev { 31 31 RTRS_CLT_LINK_EV_RECONNECTED, ··· 33 33 }; 34 34 35 35 /** 36 - * Source and destination address of a path to be established 36 + * struct rtrs_addr - Source and destination address of a path to be established 37 + * @src: source address 38 + * @dst: destination address 37 39 */ 38 40 struct rtrs_addr { 39 41 struct sockaddr_storage *src; ··· 43 41 }; 44 42 45 43 /** 46 - * rtrs_clt_ops - it holds the link event callback and private pointer. 44 + * struct rtrs_clt_ops - it holds the link event callback and private pointer. 47 45 * @priv: User supplied private data. 48 46 * @link_ev: Event notification callback function for connection state changes 49 47 * @priv: User supplied data that was passed to rtrs_clt_open() ··· 69 67 }; 70 68 71 69 /** 72 - * enum rtrs_clt_con_type() type of ib connection to use with a given 70 + * enum rtrs_clt_con_type - type of ib connection to use with a given 73 71 * rtrs_permit 74 - * @ADMIN_CON - use connection reserved for "service" messages 75 - * @IO_CON - use a connection reserved for IO 72 + * @RTRS_ADMIN_CON: use connection reserved for "service" messages 73 + * @RTRS_IO_CON: use a connection reserved for IO 76 74 */ 77 75 enum rtrs_clt_con_type { 78 76 RTRS_ADMIN_CON, ··· 87 85 struct rtrs_permit *permit); 88 86 89 87 /** 90 - * rtrs_clt_req_ops - it holds the request confirmation callback 88 + * struct rtrs_clt_req_ops - it holds the request confirmation callback 91 89 * and a private pointer. 92 90 * @priv: User supplied private data. 93 91 * @conf_fn: callback function to be called as confirmation ··· 107 105 int rtrs_clt_rdma_cq_direct(struct rtrs_clt_sess *clt, unsigned int index); 108 106 109 107 /** 110 - * rtrs_attrs - RTRS session attributes 108 + * struct rtrs_attrs - RTRS session attributes 109 + * @queue_depth: queue_depth saved from rtrs_clt_sess message 110 + * @max_io_size: max_io_size from rtrs_clt_sess message, capped to 111 + * @max_segments * %SZ_4K 112 + * @max_segments: max_segments saved from rtrs_clt_sess message 111 113 */ 112 114 struct rtrs_attrs { 113 115 u32 queue_depth;
+50 -11
drivers/md/md.c
··· 1999 1999 mddev->layout = le32_to_cpu(sb->layout); 2000 2000 mddev->raid_disks = le32_to_cpu(sb->raid_disks); 2001 2001 mddev->dev_sectors = le64_to_cpu(sb->size); 2002 - mddev->logical_block_size = le32_to_cpu(sb->logical_block_size); 2003 2002 mddev->events = ev1; 2004 2003 mddev->bitmap_info.offset = 0; 2005 2004 mddev->bitmap_info.space = 0; ··· 2013 2014 memcpy(mddev->uuid, sb->set_uuid, 16); 2014 2015 2015 2016 mddev->max_disks = (4096-256)/2; 2017 + 2018 + if (!mddev->logical_block_size) 2019 + mddev->logical_block_size = le32_to_cpu(sb->logical_block_size); 2016 2020 2017 2021 if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BITMAP_OFFSET) && 2018 2022 mddev->bitmap_info.file == NULL) { ··· 3884 3882 3885 3883 static int analyze_sbs(struct mddev *mddev) 3886 3884 { 3887 - int i; 3888 3885 struct md_rdev *rdev, *freshest, *tmp; 3889 3886 3890 3887 freshest = NULL; ··· 3910 3909 super_types[mddev->major_version]. 3911 3910 validate_super(mddev, NULL/*freshest*/, freshest); 3912 3911 3913 - i = 0; 3914 3912 rdev_for_each_safe(rdev, tmp, mddev) { 3915 3913 if (mddev->max_disks && 3916 - (rdev->desc_nr >= mddev->max_disks || 3917 - i > mddev->max_disks)) { 3914 + rdev->desc_nr >= mddev->max_disks) { 3918 3915 pr_warn("md: %s: %pg: only %d devices permitted\n", 3919 3916 mdname(mddev), rdev->bdev, 3920 3917 mddev->max_disks); ··· 4406 4407 if (err < 0) 4407 4408 return err; 4408 4409 4409 - err = mddev_lock(mddev); 4410 + err = mddev_suspend_and_lock(mddev); 4410 4411 if (err) 4411 4412 return err; 4412 4413 if (mddev->pers) ··· 4431 4432 } else 4432 4433 mddev->raid_disks = n; 4433 4434 out_unlock: 4434 - mddev_unlock(mddev); 4435 + mddev_unlock_and_resume(mddev); 4435 4436 return err ? err : len; 4436 4437 } 4437 4438 static struct md_sysfs_entry md_raid_disks = ··· 5980 5981 if (mddev->major_version == 0) 5981 5982 return -EINVAL; 5982 5983 5983 - if (mddev->pers) 5984 - return -EBUSY; 5985 - 5986 5984 err = kstrtouint(buf, 10, &lbs); 5987 5985 if (err < 0) 5988 5986 return -EINVAL; 5987 + 5988 + if (mddev->pers) { 5989 + unsigned int curr_lbs; 5990 + 5991 + if (mddev->logical_block_size) 5992 + return -EBUSY; 5993 + /* 5994 + * To fix forward compatibility issues, LBS is not 5995 + * configured for arrays from old kernels (<=6.18) by default. 5996 + * If the user confirms no rollback to old kernels, 5997 + * enable LBS by writing current LBS — to prevent data 5998 + * loss from LBS changes. 5999 + */ 6000 + curr_lbs = queue_logical_block_size(mddev->gendisk->queue); 6001 + if (lbs != curr_lbs) 6002 + return -EINVAL; 6003 + 6004 + mddev->logical_block_size = curr_lbs; 6005 + set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags); 6006 + pr_info("%s: logical block size configured successfully, array will not be assembled in old kernels (<= 6.18)\n", 6007 + mdname(mddev)); 6008 + return len; 6009 + } 5989 6010 5990 6011 err = mddev_lock(mddev); 5991 6012 if (err) ··· 6182 6163 mdname(mddev)); 6183 6164 return -EINVAL; 6184 6165 } 6185 - mddev->logical_block_size = lim->logical_block_size; 6166 + 6167 + /* Only 1.x meta needs to set logical block size */ 6168 + if (mddev->major_version == 0) 6169 + return 0; 6170 + 6171 + /* 6172 + * Fix forward compatibility issue. Only set LBS by default for 6173 + * new arrays, mddev->events == 0 indicates the array was just 6174 + * created. When assembling an array, read LBS from the superblock 6175 + * instead — LBS is 0 in superblocks created by old kernels. 
6176 + */ 6177 + if (!mddev->events) { 6178 + pr_info("%s: array will not be assembled in old kernels that lack configurable LBS support (<= 6.18)\n", 6179 + mdname(mddev)); 6180 + mddev->logical_block_size = lim->logical_block_size; 6181 + } 6182 + 6183 + if (!mddev->logical_block_size) 6184 + pr_warn("%s: echo current LBS to md/logical_block_size to prevent data loss issues from LBS changes.\n" 6185 + "\tNote: After setting, array will not be assembled in old kernels (<= 6.18)\n", 6186 + mdname(mddev)); 6186 6187 6187 6188 return 0; 6188 6189 }
+6 -4
drivers/md/raid5.c
··· 7187 7187 err = mddev_suspend_and_lock(mddev); 7188 7188 if (err) 7189 7189 return err; 7190 + conf = mddev->private; 7191 + if (!conf) { 7192 + mddev_unlock_and_resume(mddev); 7193 + return -ENODEV; 7194 + } 7190 7195 raid5_quiesce(mddev, true); 7191 7196 7192 - conf = mddev->private; 7193 - if (!conf) 7194 - err = -ENODEV; 7195 - else if (new != conf->worker_cnt_per_group) { 7197 + if (new != conf->worker_cnt_per_group) { 7196 7198 old_groups = conf->worker_groups; 7197 7199 if (old_groups) 7198 7200 flush_workqueue(raid5_wq);
+3
drivers/net/dsa/b53/b53_common.c
··· 2169 2169 if (!ent->is_valid) 2170 2170 return 0; 2171 2171 2172 + if (is_multicast_ether_addr(ent->mac)) 2173 + return 0; 2174 + 2172 2175 if (port != ent->port) 2173 2176 return 0; 2174 2177
+26 -13
drivers/net/ethernet/airoha/airoha_eth.c
··· 2924 2924 port->id = id; 2925 2925 eth->ports[p] = port; 2926 2926 2927 - err = airoha_metadata_dst_alloc(port); 2928 - if (err) 2929 - return err; 2927 + return airoha_metadata_dst_alloc(port); 2928 + } 2930 2929 2931 - err = register_netdev(dev); 2932 - if (err) 2933 - goto free_metadata_dst; 2930 + static int airoha_register_gdm_devices(struct airoha_eth *eth) 2931 + { 2932 + int i; 2933 + 2934 + for (i = 0; i < ARRAY_SIZE(eth->ports); i++) { 2935 + struct airoha_gdm_port *port = eth->ports[i]; 2936 + int err; 2937 + 2938 + if (!port) 2939 + continue; 2940 + 2941 + err = register_netdev(port->dev); 2942 + if (err) 2943 + return err; 2944 + } 2934 2945 2935 2946 return 0; 2936 - 2937 - free_metadata_dst: 2938 - airoha_metadata_dst_free(port); 2939 - return err; 2940 2947 } 2941 2948 2942 2949 static int airoha_probe(struct platform_device *pdev) ··· 3034 3027 } 3035 3028 } 3036 3029 3030 + err = airoha_register_gdm_devices(eth); 3031 + if (err) 3032 + goto error_napi_stop; 3033 + 3037 3034 return 0; 3038 3035 3039 3036 error_napi_stop: ··· 3051 3040 for (i = 0; i < ARRAY_SIZE(eth->ports); i++) { 3052 3041 struct airoha_gdm_port *port = eth->ports[i]; 3053 3042 3054 - if (port && port->dev->reg_state == NETREG_REGISTERED) { 3043 + if (!port) 3044 + continue; 3045 + 3046 + if (port->dev->reg_state == NETREG_REGISTERED) 3055 3047 unregister_netdev(port->dev); 3056 - airoha_metadata_dst_free(port); 3057 - } 3048 + airoha_metadata_dst_free(port); 3058 3049 } 3059 3050 free_netdev(eth->napi_dev); 3060 3051 platform_set_drvdata(pdev, NULL);
+2
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
··· 1928 1928 { 1929 1929 if (pdata->rx_adapt_retries++ >= MAX_RX_ADAPT_RETRIES) { 1930 1930 pdata->rx_adapt_retries = 0; 1931 + pdata->mode_set = false; 1931 1932 return; 1932 1933 } 1933 1934 ··· 1975 1974 */ 1976 1975 netif_dbg(pdata, link, pdata->netdev, "Block_lock done"); 1977 1976 pdata->rx_adapt_done = true; 1977 + pdata->rx_adapt_retries = 0; 1978 1978 pdata->mode_set = false; 1979 1979 return; 1980 1980 }
+4 -4
drivers/net/ethernet/broadcom/Kconfig
··· 255 255 devices, via the hwmon sysfs interface. 256 256 257 257 config BNGE 258 - tristate "Broadcom Ethernet device support" 258 + tristate "Broadcom ThorUltra Ethernet device support" 259 259 depends on PCI 260 260 select NET_DEVLINK 261 261 select PAGE_POOL 262 262 help 263 - This driver supports Broadcom 50/100/200/400/800 gigabit Ethernet cards. 264 - The module will be called bng_en. To compile this driver as a module, 265 - choose M here. 263 + This driver supports Broadcom ThorUltra 50/100/200/400/800 gigabit 264 + Ethernet cards. The module will be called bng_en. To compile this 265 + driver as a module, choose M here. 266 266 267 267 config BCMASP 268 268 tristate "Broadcom ASP 2.0 Ethernet support"
+1 -1
drivers/net/ethernet/broadcom/bnge/bnge.h
··· 5 5 #define _BNGE_H_ 6 6 7 7 #define DRV_NAME "bng_en" 8 - #define DRV_SUMMARY "Broadcom 800G Ethernet Linux Driver" 8 + #define DRV_SUMMARY "Broadcom ThorUltra NIC Ethernet Driver" 9 9 10 10 #include <linux/etherdevice.h> 11 11 #include <linux/bnxt/hsi.h>
+1 -1
drivers/net/ethernet/broadcom/bnge/bnge_core.c
··· 19 19 static const struct { 20 20 char *name; 21 21 } board_info[] = { 22 - [BCM57708] = { "Broadcom BCM57708 50Gb/100Gb/200Gb/400Gb/800Gb Ethernet" }, 22 + [BCM57708] = { "Broadcom BCM57708 ThorUltra 50Gb/100Gb/200Gb/400Gb/800Gb Ethernet" }, 23 23 }; 24 24 25 25 static const struct pci_device_id bnge_pci_tbl[] = {
+2 -1
drivers/net/ethernet/cadence/macb_main.c
··· 708 708 /* Initialize rings & buffers as clearing MACB_BIT(TE) in link down 709 709 * cleared the pipeline and control registers. 710 710 */ 711 - bp->macbgem_ops.mog_init_rings(bp); 712 711 macb_init_buffers(bp); 713 712 714 713 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) ··· 2952 2953 err); 2953 2954 goto pm_exit; 2954 2955 } 2956 + 2957 + bp->macbgem_ops.mog_init_rings(bp); 2955 2958 2956 2959 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) { 2957 2960 napi_enable(&queue->napi_rx);
+7 -1
drivers/net/ethernet/freescale/enetc/netc_blk_ctrl.c
··· 577 577 } 578 578 579 579 addr = netc_get_phy_addr(np); 580 - if (addr <= 0) { 580 + if (addr < 0) { 581 581 dev_err(dev, "Failed to get PHY address\n"); 582 582 return addr; 583 583 } 584 + 585 + /* The default value of LaBCR[MDIO_PHYAD_PRTAD] is 0, 586 + * so no need to set the register. 587 + */ 588 + if (!addr) 589 + return 0; 584 590 585 591 if (phy_mask & BIT(addr)) { 586 592 dev_err(dev,
+1 -1
drivers/net/ethernet/google/gve/gve_main.c
··· 558 558 block->priv = priv; 559 559 err = request_irq(priv->msix_vectors[msix_idx].vector, 560 560 gve_is_gqi(priv) ? gve_intr : gve_intr_dqo, 561 - 0, block->name, block); 561 + IRQF_NO_AUTOEN, block->name, block); 562 562 if (err) { 563 563 dev_err(&priv->pdev->dev, 564 564 "Failed to receive msix vector %d\n", i);
+2
drivers/net/ethernet/google/gve/gve_utils.c
··· 112 112 113 113 netif_napi_add_locked(priv->dev, &block->napi, gve_poll); 114 114 netif_napi_set_irq_locked(&block->napi, block->irq); 115 + enable_irq(block->irq); 115 116 } 116 117 117 118 void gve_remove_napi(struct gve_priv *priv, int ntfy_idx) 118 119 { 119 120 struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx]; 120 121 122 + disable_irq(block->irq); 121 123 netif_napi_del_locked(&block->napi); 122 124 }
+9 -1
drivers/net/ethernet/intel/e1000/e1000_main.c
··· 4094 4094 u32 length, const u8 *data) 4095 4095 { 4096 4096 struct e1000_hw *hw = &adapter->hw; 4097 - u8 last_byte = *(data + length - 1); 4097 + u8 last_byte; 4098 + 4099 + /* Guard against OOB on data[length - 1] */ 4100 + if (unlikely(!length)) 4101 + return false; 4102 + /* Upper bound: length must not exceed rx_buffer_len */ 4103 + if (unlikely(length > adapter->rx_buffer_len)) 4104 + return false; 4105 + last_byte = *(data + length - 1); 4098 4106 4099 4107 if (TBI_ACCEPT(hw, status, errors, length, last_byte)) { 4100 4108 unsigned long irq_flags;
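The e1000 hunk guards data[length - 1] against a zero length (which would index data[-1]) and against lengths larger than the receive buffer. The shape of the guard, self-contained; RX_BUF_LEN and the 0x55 marker are illustrative, not the driver's values:

#include <stdio.h>

#define RX_BUF_LEN 8

static int last_byte_ok(const unsigned char *data, unsigned int length)
{
	if (length == 0 || length > RX_BUF_LEN)
		return 0;              /* reject rather than read OOB */
	return data[length - 1] == 0x55;
}

int main(void)
{
	unsigned char frame[RX_BUF_LEN] = { 0, 0, 0, 0x55 };

	printf("len=0  -> %d\n", last_byte_ok(frame, 0));
	printf("len=4  -> %d\n", last_byte_ok(frame, 4));
	printf("len=99 -> %d\n", last_byte_ok(frame, 99));
	return 0;
}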
+11
drivers/net/ethernet/intel/i40e/i40e.h
··· 1422 1422 return (pf->lan_veb != I40E_NO_VEB) ? pf->veb[pf->lan_veb] : NULL; 1423 1423 } 1424 1424 1425 + static inline u32 i40e_get_max_num_descriptors(const struct i40e_pf *pf) 1426 + { 1427 + const struct i40e_hw *hw = &pf->hw; 1428 + 1429 + switch (hw->mac.type) { 1430 + case I40E_MAC_XL710: 1431 + return I40E_MAX_NUM_DESCRIPTORS_XL710; 1432 + default: 1433 + return I40E_MAX_NUM_DESCRIPTORS; 1434 + } 1435 + } 1425 1436 #endif /* _I40E_H_ */
-12
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
··· 2013 2013 drvinfo->n_priv_flags += I40E_GL_PRIV_FLAGS_STR_LEN; 2014 2014 } 2015 2015 2016 - static u32 i40e_get_max_num_descriptors(struct i40e_pf *pf) 2017 - { 2018 - struct i40e_hw *hw = &pf->hw; 2019 - 2020 - switch (hw->mac.type) { 2021 - case I40E_MAC_XL710: 2022 - return I40E_MAX_NUM_DESCRIPTORS_XL710; 2023 - default: 2024 - return I40E_MAX_NUM_DESCRIPTORS; 2025 - } 2026 - } 2027 - 2028 2016 static void i40e_get_ringparam(struct net_device *netdev, 2029 2017 struct ethtool_ringparam *ring, 2030 2018 struct kernel_ethtool_ringparam *kernel_ring,
+1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 2234 2234 vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED; 2235 2235 set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->back->state); 2236 2236 } 2237 + i40e_service_event_schedule(vsi->back); 2237 2238 } 2238 2239 2239 2240 /**
+2 -2
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 656 656 657 657 /* ring_len has to be multiple of 8 */ 658 658 if (!IS_ALIGNED(info->ring_len, 8) || 659 - info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) { 659 + info->ring_len > i40e_get_max_num_descriptors(pf)) { 660 660 ret = -EINVAL; 661 661 goto error_context; 662 662 } ··· 726 726 727 727 /* ring_len has to be multiple of 32 */ 728 728 if (!IS_ALIGNED(info->ring_len, 32) || 729 - info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) { 729 + info->ring_len > i40e_get_max_num_descriptors(pf)) { 730 730 ret = -EINVAL; 731 731 goto error_param; 732 732 }
+2 -2
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 1726 1726 u16 i; 1727 1727 1728 1728 dw = (u32 *)adapter->rss_key; 1729 - for (i = 0; i <= adapter->rss_key_size / 4; i++) 1729 + for (i = 0; i < adapter->rss_key_size / 4; i++) 1730 1730 wr32(hw, IAVF_VFQF_HKEY(i), dw[i]); 1731 1731 1732 1732 dw = (u32 *)adapter->rss_lut; 1733 - for (i = 0; i <= adapter->rss_lut_size / 4; i++) 1733 + for (i = 0; i < adapter->rss_lut_size / 4; i++) 1734 1734 wr32(hw, IAVF_VFQF_HLUT(i), dw[i]); 1735 1735 1736 1736 iavf_flush(hw);
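The iavf fix is a textbook off-by-one: "i <= size / 4" runs one extra iteration, reading past the end of the key/LUT arrays and writing one register too many; "<" stops at the last valid word. Demonstrated with a sentinel slot so the stray access is visible without undefined behavior:

#include <stdio.h>

#define KEY_SIZE 16u   /* bytes, i.e. 4 u32 words */

int main(void)
{
	unsigned int dw[KEY_SIZE / 4 + 1] = { 1, 2, 3, 4, 0xdead };
	unsigned int i;

	/* Buggy bound: i runs 0..4 and touches the sentinel. */
	for (i = 0; i <= KEY_SIZE / 4; i++)
		printf("buggy write reg[%u] = 0x%x\n", i, dw[i]);

	/* Fixed bound: i runs 0..3 only. */
	for (i = 0; i < KEY_SIZE / 4; i++)
		printf("fixed write reg[%u] = 0x%x\n", i, dw[i]);
	return 0;
}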
+1 -1
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 1271 1271 idpf_mb_irq_enable(adapter); 1272 1272 else 1273 1273 queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, 1274 - msecs_to_jiffies(300)); 1274 + usecs_to_jiffies(300)); 1275 1275 1276 1276 idpf_recv_mb_msg(adapter); 1277 1277 }
+5
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
··· 1016 1016 struct idpf_vc_xn_params xn_params = { 1017 1017 .vc_op = VIRTCHNL2_OP_GET_LAN_MEMORY_REGIONS, 1018 1018 .recv_buf.iov_len = IDPF_CTLQ_MAX_BUF_LEN, 1019 + .send_buf.iov_len = 1020 + sizeof(struct virtchnl2_get_lan_memory_regions) + 1021 + sizeof(struct virtchnl2_mem_region), 1019 1022 .timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC, 1020 1023 }; 1021 1024 int num_regions, size; ··· 1031 1028 return -ENOMEM; 1032 1029 1033 1030 xn_params.recv_buf.iov_base = rcvd_regions; 1031 + rcvd_regions->num_memory_regions = cpu_to_le16(1); 1032 + xn_params.send_buf.iov_base = rcvd_regions; 1034 1033 reply_sz = idpf_vc_xn_exec(adapter, &xn_params); 1035 1034 if (reply_sz < 0) 1036 1035 return reply_sz;
+8
drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
··· 418 418 */ 419 419 if (rx_count < pfvf->hw.rq_skid) 420 420 rx_count = pfvf->hw.rq_skid; 421 + 422 + if (ring->rx_pending < 16) { 423 + netdev_err(netdev, 424 + "rx ring size %u invalid, min is 16\n", 425 + ring->rx_pending); 426 + return -EINVAL; 427 + } 428 + 421 429 rx_count = Q_COUNT(Q_SIZE(rx_count, 3)); 422 430 423 431 /* Due pipelining impact minimum 2000 unused SQ CQE's
+1 -1
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 481 481 /* Perform PCI rescan on device if we failed on HWC */ 482 482 dev_err(&pdev->dev, "MANA service: resume failed, rescanning\n"); 483 483 mana_serv_rescan(pdev); 484 - goto out; 484 + return; 485 485 } 486 486 487 487 if (ret)
+1 -9
drivers/net/ethernet/smsc/smc91x.c
··· 516 516 * any other concurrent access and C would always interrupt B. But life 517 517 * isn't that easy in a SMP world... 518 518 */ 519 - #define smc_special_trylock(lock, flags) \ 520 - ({ \ 521 - int __ret; \ 522 - local_irq_save(flags); \ 523 - __ret = spin_trylock(lock); \ 524 - if (!__ret) \ 525 - local_irq_restore(flags); \ 526 - __ret; \ 527 - }) 519 + #define smc_special_trylock(lock, flags) spin_trylock_irqsave(lock, flags) 528 520 #define smc_special_lock(lock, flags) spin_lock_irqsave(lock, flags) 529 521 #define smc_special_unlock(lock, flags) spin_unlock_irqrestore(lock, flags) 530 522 #else
+15 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 89 89 #define STMMAC_XDP_CONSUMED BIT(0) 90 90 #define STMMAC_XDP_TX BIT(1) 91 91 #define STMMAC_XDP_REDIRECT BIT(2) 92 + #define STMMAC_XSK_CONSUMED BIT(3) 92 93 93 94 static int flow_ctrl = 0xdead; 94 95 module_param(flow_ctrl, int, 0644); ··· 5127 5126 static int stmmac_xdp_xmit_back(struct stmmac_priv *priv, 5128 5127 struct xdp_buff *xdp) 5129 5128 { 5129 + bool zc = !!(xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL); 5130 5130 struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp); 5131 5131 int cpu = smp_processor_id(); 5132 5132 struct netdev_queue *nq; ··· 5144 5142 /* Avoids TX time-out as we are sharing with slow path */ 5145 5143 txq_trans_cond_update(nq); 5146 5144 5147 - res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, false); 5148 - if (res == STMMAC_XDP_TX) 5145 + /* For zero copy XDP_TX action, dma_map is true */ 5146 + res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, zc); 5147 + if (res == STMMAC_XDP_TX) { 5149 5148 stmmac_flush_tx_descriptors(priv, queue); 5149 + } else if (res == STMMAC_XDP_CONSUMED && zc) { 5150 + /* xdp has been freed by xdp_convert_buff_to_frame(), 5151 + * no need to call xsk_buff_free() again, so return 5152 + * STMMAC_XSK_CONSUMED. 5153 + */ 5154 + res = STMMAC_XSK_CONSUMED; 5155 + xdp_return_frame(xdpf); 5156 + } 5150 5157 5151 5158 __netif_tx_unlock(nq); 5152 5159 ··· 5505 5494 break; 5506 5495 case STMMAC_XDP_CONSUMED: 5507 5496 xsk_buff_free(buf->xdp); 5497 + fallthrough; 5498 + case STMMAC_XSK_CONSUMED: 5508 5499 rx_dropped++; 5509 5500 break; 5510 5501 case STMMAC_XDP_TX:
+1 -3
drivers/net/ethernet/wangxun/Kconfig
··· 21 21 depends on PTP_1588_CLOCK_OPTIONAL 22 22 select PAGE_POOL 23 23 select DIMLIB 24 + select PHYLINK 24 25 help 25 26 Common library for Wangxun(R) Ethernet drivers. 26 27 ··· 30 29 depends on PCI 31 30 depends on PTP_1588_CLOCK_OPTIONAL 32 31 select LIBWX 33 - select PHYLINK 34 32 help 35 33 This driver supports Wangxun(R) GbE PCI Express family of 36 34 adapters. ··· 48 48 depends on PTP_1588_CLOCK_OPTIONAL 49 49 select MARVELL_10G_PHY 50 50 select REGMAP 51 - select PHYLINK 52 51 select HWMON if TXGBE=y 53 52 select SFP 54 53 select GPIOLIB ··· 70 71 depends on PCI_MSI 71 72 depends on PTP_1588_CLOCK_OPTIONAL 72 73 select LIBWX 73 - select PHYLINK 74 74 help 75 75 This driver supports virtual functions for SP1000A, WX1820AL, 76 76 WX5XXX, WX5XXXAL.
+9 -3
drivers/net/fjes/fjes_hw.c
··· 334 334 335 335 ret = fjes_hw_reset(hw); 336 336 if (ret) 337 - return ret; 337 + goto err_iounmap; 338 338 339 339 fjes_hw_set_irqmask(hw, REG_ICTL_MASK_ALL, true); 340 340 ··· 347 347 hw->max_epid = fjes_hw_get_max_epid(hw); 348 348 hw->my_epid = fjes_hw_get_my_epid(hw); 349 349 350 - if ((hw->max_epid == 0) || (hw->my_epid >= hw->max_epid)) 351 - return -ENXIO; 350 + if ((hw->max_epid == 0) || (hw->my_epid >= hw->max_epid)) { 351 + ret = -ENXIO; 352 + goto err_iounmap; 353 + } 352 354 353 355 ret = fjes_hw_setup(hw); 354 356 355 357 hw->hw_info.trace = vzalloc(FJES_DEBUG_BUFFER_SIZE); 356 358 hw->hw_info.trace_size = FJES_DEBUG_BUFFER_SIZE; 357 359 360 + return ret; 361 + 362 + err_iounmap: 363 + fjes_hw_iounmap(hw); 358 364 return ret; 359 365 } 360 366
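The fjes_hw fix routes every post-ioremap failure through a common unwind label so the mapping is not leaked. A userspace sketch of the goto-unwind pattern; malloc()/free() stand in for ioremap()/iounmap(), and on success the real driver of course keeps the mapping:

#include <stdio.h>
#include <stdlib.h>

static int hw_init(int fail_reset)
{
	char *base = malloc(64);     /* stand-in for ioremap() */
	int ret = 0;

	if (!base)
		return -12;          /* stand-in for -ENOMEM */

	if (fail_reset) {            /* stand-in for a failed reset */
		ret = -5;            /* stand-in for -EIO */
		goto err_unmap;
	}

	/* ... bring-up continues; the sketch frees to stay leak-free ... */
	free(base);
	return 0;

err_unmap:
	free(base);                  /* the fix: unwind on error */
	return ret;
}

int main(void)
{
	printf("failing init -> %d\n", hw_init(1));
	printf("good init    -> %d\n", hw_init(0));
	return 0;
}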
+7
drivers/net/mdio/mdio-aspeed.c
··· 63 63 64 64 iowrite32(ctrl, ctx->base + ASPEED_MDIO_CTRL); 65 65 66 + /* Workaround for read-after-write issue. 67 + * The controller may return stale data if a read follows immediately 68 + * after a write. A dummy read forces the hardware to update its 69 + * internal state, ensuring that the next real read returns correct data. 70 + */ 71 + ioread32(ctx->base + ASPEED_MDIO_CTRL); 72 + 66 73 return readl_poll_timeout(ctx->base + ASPEED_MDIO_CTRL, ctrl, 67 74 !(ctrl & ASPEED_MDIO_CTRL_FIRE), 68 75 ASPEED_MDIO_INTERVAL_US,
+2 -4
drivers/net/mdio/mdio-realtek-rtl9300.c
··· 354 354 struct fwnode_handle *node) 355 355 { 356 356 struct rtl9300_mdio_chan *chan; 357 - struct fwnode_handle *child; 358 357 struct mii_bus *bus; 359 358 u32 mdio_bus; 360 359 int err; ··· 370 371 * compatible = "ethernet-phy-ieee802.3-c45". This does mean we can't 371 372 * support both c45 and c22 on the same MDIO bus. 372 373 */ 373 - fwnode_for_each_child_node(node, child) 374 + fwnode_for_each_child_node_scoped(node, child) 374 375 if (fwnode_device_is_compatible(child, "ethernet-phy-ieee802.3-c45")) 375 376 priv->smi_bus_is_c45[mdio_bus] = true; 376 377 ··· 408 409 { 409 410 struct rtl9300_mdio_priv *priv = dev_get_drvdata(dev); 410 411 struct device *parent = dev->parent; 411 - struct fwnode_handle *port; 412 412 int err; 413 413 414 414 struct fwnode_handle *ports __free(fwnode_handle) = ··· 416 418 return dev_err_probe(dev, -EINVAL, "%pfwP missing ethernet-ports\n", 417 419 dev_fwnode(parent)); 418 420 419 - fwnode_for_each_child_node(ports, port) { 421 + fwnode_for_each_child_node_scoped(ports, port) { 420 422 struct device_node *mdio_dn; 421 423 u32 addr; 422 424 u32 bus;
+1 -1
drivers/net/phy/mediatek/mtk-ge-soc.c
··· 1167 1167 } 1168 1168 1169 1169 buf = (u32 *)nvmem_cell_read(cell, &len); 1170 + nvmem_cell_put(cell); 1170 1171 if (IS_ERR(buf)) 1171 1172 return PTR_ERR(buf); 1172 - nvmem_cell_put(cell); 1173 1173 1174 1174 if (!buf[0] || !buf[1] || !buf[2] || !buf[3] || len < 4 * sizeof(u32)) { 1175 1175 phydev_err(phydev, "invalid efuse data\n");
+1 -1
drivers/net/team/team_core.c
··· 878 878 static void team_queue_override_port_prio_changed(struct team *team, 879 879 struct team_port *port) 880 880 { 881 - if (!port->queue_id || team_port_enabled(port)) 881 + if (!port->queue_id || !team_port_enabled(port)) 882 882 return; 883 883 __team_queue_override_port_del(team, port); 884 884 __team_queue_override_port_add(team, port);
+5
drivers/net/usb/asix_common.c
··· 335 335 offset = (internal ? 1 : 0); 336 336 ret = buf[offset]; 337 337 338 + if (ret >= PHY_MAX_ADDR) { 339 + netdev_err(dev->net, "invalid PHY address: %d\n", ret); 340 + return -ENODEV; 341 + } 342 + 338 343 netdev_dbg(dev->net, "%s PHY address 0x%x\n", 339 344 internal ? "internal" : "external", ret); 340 345
+1 -5
drivers/net/usb/ax88172a.c
··· 210 210 ret = asix_read_phy_addr(dev, priv->use_embdphy); 211 211 if (ret < 0) 212 212 goto free; 213 - if (ret >= PHY_MAX_ADDR) { 214 - netdev_err(dev->net, "Invalid PHY address %#x\n", ret); 215 - ret = -ENODEV; 216 - goto free; 217 - } 213 + 218 214 priv->phy_addr = ret; 219 215 220 216 ax88172a_reset_phy(dev, priv->use_embdphy);
+2
drivers/net/usb/rtl8150.c
··· 211 211 if (res == -ENODEV) 212 212 netif_device_detach(dev->netdev); 213 213 dev_err(&dev->udev->dev, "%s failed with %d\n", __func__, res); 214 + kfree(req); 215 + usb_free_urb(async_urb); 214 216 } 215 217 return res; 216 218 }
+7 -2
drivers/net/usb/sr9700.c
··· 52 52 53 53 static int sr_write_reg(struct usbnet *dev, u8 reg, u8 value) 54 54 { 55 - return usbnet_write_cmd(dev, SR_WR_REGS, SR_REQ_WR_REG, 55 + return usbnet_write_cmd(dev, SR_WR_REG, SR_REQ_WR_REG, 56 56 value, reg, NULL, 0); 57 57 } 58 58 ··· 65 65 66 66 static void sr_write_reg_async(struct usbnet *dev, u8 reg, u8 value) 67 67 { 68 - usbnet_write_cmd_async(dev, SR_WR_REGS, SR_REQ_WR_REG, 68 + usbnet_write_cmd_async(dev, SR_WR_REG, SR_REQ_WR_REG, 69 69 value, reg, NULL, 0); 70 70 } 71 71 ··· 537 537 static const struct usb_device_id products[] = { 538 538 { 539 539 USB_DEVICE(0x0fe6, 0x9700), /* SR9700 device */ 540 + .driver_info = (unsigned long)&sr9700_driver_info, 541 + }, 542 + { 543 + /* SR9700 with virtual driver CD-ROM - interface 0 is the CD-ROM device */ 544 + USB_DEVICE_INTERFACE_NUMBER(0x0fe6, 0x9702, 1), 540 545 .driver_info = (unsigned long)&sr9700_driver_info, 541 546 }, 542 547 {}, /* END */
+2 -1
drivers/net/usb/usbnet.c
··· 831 831 832 832 clear_bit(EVENT_DEV_OPEN, &dev->flags); 833 833 netif_stop_queue(net); 834 - netdev_reset_queue(net); 835 834 836 835 netif_info(dev, ifdown, dev->net, 837 836 "stop stats: rx/tx %lu/%lu, errs %lu/%lu\n", ··· 873 874 cancel_work_sync(&dev->bh_work); 874 875 timer_delete_sync(&dev->delay); 875 876 cancel_work_sync(&dev->kevent); 877 + 878 + netdev_reset_queue(net); 876 879 877 880 if (!pm) 878 881 usb_autopm_put_interface(dev->intf);
+2 -2
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1597 1597 */ 1598 1598 static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context) 1599 1599 { 1600 - unsigned int min_core, max_core, loaded_core; 1600 + int min_core, max_core, loaded_core; 1601 1601 struct iwl_drv *drv = context; 1602 1602 struct iwl_fw *fw = &drv->fw; 1603 1603 const struct iwl_ucode_header *ucode; ··· 1676 1676 if (loaded_core < min_core || loaded_core > max_core) { 1677 1677 IWL_ERR(drv, 1678 1678 "Driver unable to support your firmware API. " 1679 - "Driver supports FW core %u..%u, firmware is %u.\n", 1679 + "Driver supports FW core %d..%d, firmware is %d.\n", 1680 1680 min_core, max_core, loaded_core); 1681 1681 goto try_again; 1682 1682 }
+7
drivers/net/wireless/intel/iwlwifi/mld/ptp.c
··· 121 121 return 0; 122 122 } 123 123 124 + static int iwl_mld_ptp_settime(struct ptp_clock_info *ptp, 125 + const struct timespec64 *ts) 126 + { 127 + return -EOPNOTSUPP; 128 + } 129 + 124 130 static int iwl_mld_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) 125 131 { 126 132 struct iwl_mld *mld = container_of(ptp, struct iwl_mld, ··· 285 279 286 280 mld->ptp_data.ptp_clock_info.owner = THIS_MODULE; 287 281 mld->ptp_data.ptp_clock_info.gettime64 = iwl_mld_ptp_gettime; 282 + mld->ptp_data.ptp_clock_info.settime64 = iwl_mld_ptp_settime; 288 283 mld->ptp_data.ptp_clock_info.max_adj = 0x7fffffff; 289 284 mld->ptp_data.ptp_clock_info.adjtime = iwl_mld_ptp_adjtime; 290 285 mld->ptp_data.ptp_clock_info.adjfine = iwl_mld_ptp_adjfine;
+7
drivers/net/wireless/intel/iwlwifi/mvm/ptp.c
··· 220 220 return 0; 221 221 } 222 222 223 + static int iwl_mvm_ptp_settime(struct ptp_clock_info *ptp, 224 + const struct timespec64 *ts) 225 + { 226 + return -EOPNOTSUPP; 227 + } 228 + 223 229 static int iwl_mvm_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta) 224 230 { 225 231 struct iwl_mvm *mvm = container_of(ptp, struct iwl_mvm, ··· 287 281 mvm->ptp_data.ptp_clock_info.adjfine = iwl_mvm_ptp_adjfine; 288 282 mvm->ptp_data.ptp_clock_info.adjtime = iwl_mvm_ptp_adjtime; 289 283 mvm->ptp_data.ptp_clock_info.gettime64 = iwl_mvm_ptp_gettime; 284 + mvm->ptp_data.ptp_clock_info.settime64 = iwl_mvm_ptp_settime; 290 285 mvm->ptp_data.scaled_freq = SCALE_FACTOR; 291 286 292 287 /* Give a short 'friendly name' to identify the PHC clock */
+4 -8
drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
··· 3019 3019 } 3020 3020 3021 3021 hdr = (const void *)(fw->data + fw->size - sizeof(*hdr)); 3022 - dev_info(dev->dev, "WM Firmware Version: %.10s, Build Time: %.15s\n", 3022 + dev_info(dev->dev, "WM Firmware Version: %.10s, Build Time: %.15s", 3023 3023 hdr->fw_ver, hdr->build_date); 3024 3024 3025 3025 ret = mt76_connac_mcu_send_ram_firmware(dev, hdr, fw->data, false); ··· 3048 3048 } 3049 3049 3050 3050 hdr = (const void *)(fw->data + fw->size - sizeof(*hdr)); 3051 - dev_info(dev->dev, "WA Firmware Version: %.10s, Build Time: %.15s\n", 3051 + dev_info(dev->dev, "WA Firmware Version: %.10s, Build Time: %.15s", 3052 3052 hdr->fw_ver, hdr->build_date); 3053 3053 3054 3054 ret = mt76_connac_mcu_send_ram_firmware(dev, hdr, fw->data, true); ··· 3101 3101 int i, ret, sem, max_len = mt76_is_sdio(dev) ? 2048 : 4096; 3102 3102 const struct mt76_connac2_patch_hdr *hdr; 3103 3103 const struct firmware *fw = NULL; 3104 - char build_date[17]; 3105 3104 3106 3105 sem = mt76_connac_mcu_patch_sem_ctrl(dev, true); 3107 3106 switch (sem) { ··· 3124 3125 } 3125 3126 3126 3127 hdr = (const void *)fw->data; 3127 - strscpy(build_date, hdr->build_date, sizeof(build_date)); 3128 - build_date[16] = '\0'; 3129 - strim(build_date); 3130 - dev_info(dev->dev, "HW/SW Version: 0x%x, Build Time: %.16s\n", 3131 - be32_to_cpu(hdr->hw_sw_ver), build_date); 3128 + dev_info(dev->dev, "HW/SW Version: 0x%x, Build Time: %.16s", 3129 + be32_to_cpu(hdr->hw_sw_ver), hdr->build_date); 3132 3130 3133 3131 for (i = 0; i < be32_to_cpu(hdr->desc.n_region); i++) { 3134 3132 struct mt76_connac2_patch_sec *sec;
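The mt76 cleanup drops the strscpy()/strim() staging buffer because a printf precision already bounds the read: "%.16s" consumes at most 16 bytes, so a fixed-width firmware-header field that is not NUL-terminated can be printed directly. Standalone demonstration:

#include <stdio.h>

int main(void)
{
	/* 16-byte field with no terminating NUL, as in a fw header. */
	char build_date[16] = { '2','0','2','5','0','1','0','2',
				'1','2','3','4','5','6','7','8' };

	printf("Build Time: %.16s\n", build_date);  /* reads <= 16 bytes */
	return 0;
}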
+2 -1
drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.c
··· 511 511 if (sta) { 512 512 sta_entry = (struct rtl_sta_info *)sta->drv_priv; 513 513 tid = ieee80211_get_tid(hdr); 514 - agg_state = sta_entry->tids[tid].agg.agg_state; 514 + if (tid < MAX_TID_COUNT) 515 + agg_state = sta_entry->tids[tid].agg.agg_state; 515 516 ampdu_density = sta->deflink.ht_cap.ampdu_density; 516 517 } 517 518
+3 -1
drivers/net/wireless/realtek/rtw88/sdio.c
··· 144 144 145 145 static bool rtw_sdio_use_direct_io(struct rtw_dev *rtwdev, u32 addr) 146 146 { 147 + bool might_indirect_under_power_off = rtwdev->chip->id == RTW_CHIP_TYPE_8822C; 148 + 147 149 if (!test_bit(RTW_FLAG_POWERON, rtwdev->flags) && 148 - !rtw_sdio_is_bus_addr(addr)) 150 + !rtw_sdio_is_bus_addr(addr) && might_indirect_under_power_off) 149 151 return false; 150 152 151 153 return !rtw_sdio_is_sdio30_supported(rtwdev) ||
+1 -2
drivers/net/wireless/realtek/rtw88/usb.c
··· 965 965 struct sk_buff *rx_skb; 966 966 int i; 967 967 968 - rtwusb->rxwq = alloc_workqueue("rtw88_usb: rx wq", WQ_BH | WQ_UNBOUND, 969 - 0); 968 + rtwusb->rxwq = alloc_workqueue("rtw88_usb: rx wq", WQ_BH, 0); 970 969 if (!rtwusb->rxwq) { 971 970 rtw_err(rtwdev, "failed to create RX work queue\n"); 972 971 return -ENOMEM;
+5
drivers/net/wireless/ti/wlcore/tx.c
··· 207 207 total_blocks = wlcore_hw_calc_tx_blocks(wl, total_len, spare_blocks); 208 208 209 209 if (total_blocks <= wl->tx_blocks_available) { 210 + if (skb_headroom(skb) < (total_len - skb->len) && 211 + pskb_expand_head(skb, (total_len - skb->len), 0, GFP_ATOMIC)) { 212 + wl1271_free_tx_id(wl, id); 213 + return -EAGAIN; 214 + } 210 215 desc = skb_push(skb, total_len - skb->len); 211 216 212 217 wlcore_hw_set_tx_desc_blocks(wl, desc, total_blocks,
+2 -2
drivers/parisc/sba_iommu.c
··· 578 578 pba &= IOVP_MASK; 579 579 pba |= (ci >> PAGE_SHIFT) & 0xff; /* move CI (8 bits) into lowest byte */ 580 580 581 - pba |= SBA_PDIR_VALID_BIT; /* set "valid" bit */ 582 - *pdir_ptr = cpu_to_le64(pba); /* swap and store into I/O Pdir */ 581 + /* set "valid" bit, swap and store into I/O Pdir */ 582 + *pdir_ptr = cpu_to_le64((unsigned long)pba | SBA_PDIR_VALID_BIT); 583 583 584 584 /* 585 585 * If the PDC_MODEL capabilities has Non-coherent IO-PDIR bit set
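The sba_iommu rewrite ORs the valid bit in as part of the 64-bit store expression, with a widening cast on pba, instead of accumulating into pba first. The general hazard that motivates widening before the OR — a flag bit above the accumulator's width is silently dropped — in a standalone sketch (the types and values here are illustrative, not sba_iommu's):

#include <stdint.h>
#include <stdio.h>

#define VALID_BIT (1ULL << 63)

int main(void)
{
	uint32_t pba32 = 0x1234;

	pba32 |= VALID_BIT;          /* flag truncated away: bit lost */
	uint64_t lost = pba32;

	uint64_t kept = (uint64_t)0x1234 | VALID_BIT;  /* widen first */

	printf("lost=0x%016llx kept=0x%016llx\n",
	       (unsigned long long)lost, (unsigned long long)kept);
	return 0;
}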
+7 -7
drivers/platform/mellanox/mlxbf-pmc.c
··· 801 801 {11, "GDC_MISS_MACHINE_CHI_TXDAT"}, 802 802 {12, "GDC_MISS_MACHINE_CHI_RXDAT"}, 803 803 {13, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_0"}, 804 - {14, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_1 "}, 804 + {14, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_1"}, 805 805 {15, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_2"}, 806 - {16, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_3 "}, 807 - {17, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_0 "}, 808 - {18, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_1 "}, 809 - {19, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_2 "}, 810 - {20, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_3 "}, 806 + {16, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_3"}, 807 + {17, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_0"}, 808 + {18, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_1"}, 809 + {19, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_2"}, 810 + {20, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_3"}, 811 811 {21, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_0"}, 812 812 {22, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_1"}, 813 813 {23, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_2"}, 814 814 {24, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_3"}, 815 - {25, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_0 "}, 815 + {25, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_0"}, 816 816 {26, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_1"}, 817 817 {27, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_2"}, 818 818 {28, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_3"},
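The mlxbf-pmc hunk strips stray trailing spaces from event-name strings; assuming the driver resolves names with exact strcmp()-style matching, an entry with a hidden trailing space can never be selected by the name a user actually types. The failure in isolation:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *table_entry = "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_1 ";
	const char *user_name   = "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_1";

	printf("strcmp -> %d (nonzero means no match)\n",
	       strcmp(table_entry, user_name));
	return 0;
}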
+173 -3
drivers/platform/x86/asus-armoury.h
··· 449 449 .ac_data = &(struct power_limits) { 450 450 .ppt_pl1_spl_min = 15, 451 451 .ppt_pl1_spl_max = 80, 452 - .ppt_pl2_sppt_min = 25, 452 + .ppt_pl2_sppt_min = 35, 453 453 .ppt_pl2_sppt_max = 80, 454 454 .ppt_pl3_fppt_min = 35, 455 - .ppt_pl3_fppt_max = 80 455 + .ppt_pl3_fppt_max = 80, 456 + .nv_dynamic_boost_min = 5, 457 + .nv_dynamic_boost_max = 25, 458 + .nv_temp_target_min = 75, 459 + .nv_temp_target_max = 87, 456 460 }, 457 - .dc_data = NULL, 461 + .dc_data = &(struct power_limits) { 462 + .ppt_pl1_spl_min = 15, 463 + .ppt_pl1_spl_def = 45, 464 + .ppt_pl1_spl_max = 65, 465 + .ppt_pl2_sppt_min = 35, 466 + .ppt_pl2_sppt_def = 54, 467 + .ppt_pl2_sppt_max = 65, 468 + .ppt_pl3_fppt_min = 35, 469 + .ppt_pl3_fppt_max = 65, 470 + .nv_temp_target_min = 75, 471 + .nv_temp_target_max = 87, 472 + }, 458 473 }, 459 474 }, 460 475 { ··· 562 547 .ppt_pl2_sppt_max = 80, 563 548 .ppt_pl3_fppt_min = 25, 564 549 .ppt_pl3_fppt_max = 80, 550 + .nv_temp_target_min = 75, 551 + .nv_temp_target_max = 87, 552 + }, 553 + }, 554 + }, 555 + { 556 + .matches = { 557 + DMI_MATCH(DMI_BOARD_NAME, "FA608UM"), 558 + }, 559 + .driver_data = &(struct power_data) { 560 + .ac_data = &(struct power_limits) { 561 + .ppt_pl1_spl_min = 15, 562 + .ppt_pl1_spl_def = 45, 563 + .ppt_pl1_spl_max = 90, 564 + .ppt_pl2_sppt_min = 35, 565 + .ppt_pl2_sppt_def = 54, 566 + .ppt_pl2_sppt_max = 90, 567 + .ppt_pl3_fppt_min = 35, 568 + .ppt_pl3_fppt_def = 90, 569 + .ppt_pl3_fppt_max = 65, 570 + .nv_dynamic_boost_min = 10, 571 + .nv_dynamic_boost_max = 15, 572 + .nv_temp_target_min = 75, 573 + .nv_temp_target_max = 87, 574 + .nv_tgp_min = 55, 575 + .nv_tgp_max = 100, 576 + }, 577 + .dc_data = &(struct power_limits) { 578 + .ppt_pl1_spl_min = 15, 579 + .ppt_pl1_spl_def = 45, 580 + .ppt_pl1_spl_max = 65, 581 + .ppt_pl2_sppt_min = 35, 582 + .ppt_pl2_sppt_def = 54, 583 + .ppt_pl2_sppt_max = 65, 584 + .ppt_pl3_fppt_min = 35, 585 + .ppt_pl3_fppt_max = 65, 565 586 .nv_temp_target_min = 75, 566 587 .nv_temp_target_max = 87, 567 588 }, ··· 875 824 }, 876 825 { 877 826 .matches = { 827 + DMI_MATCH(DMI_BOARD_NAME, "GA403WR"), 828 + }, 829 + .driver_data = &(struct power_data) { 830 + .ac_data = &(struct power_limits) { 831 + .ppt_pl1_spl_min = 15, 832 + .ppt_pl1_spl_max = 80, 833 + .ppt_pl2_sppt_min = 25, 834 + .ppt_pl2_sppt_max = 80, 835 + .ppt_pl3_fppt_min = 35, 836 + .ppt_pl3_fppt_max = 80, 837 + .nv_dynamic_boost_min = 0, 838 + .nv_dynamic_boost_max = 25, 839 + .nv_temp_target_min = 75, 840 + .nv_temp_target_max = 87, 841 + .nv_tgp_min = 80, 842 + .nv_tgp_max = 95, 843 + }, 844 + .dc_data = &(struct power_limits) { 845 + .ppt_pl1_spl_min = 15, 846 + .ppt_pl1_spl_max = 35, 847 + .ppt_pl2_sppt_min = 25, 848 + .ppt_pl2_sppt_max = 35, 849 + .ppt_pl3_fppt_min = 35, 850 + .ppt_pl3_fppt_max = 65, 851 + .nv_temp_target_min = 75, 852 + .nv_temp_target_max = 87, 853 + }, 854 + .requires_fan_curve = true, 855 + }, 856 + }, 857 + { 858 + .matches = { 878 859 DMI_MATCH(DMI_BOARD_NAME, "GA503QR"), 879 860 }, 880 861 .driver_data = &(struct power_data) { ··· 1031 948 .nv_temp_target_min = 75, 1032 949 .nv_temp_target_max = 87, 1033 950 }, 951 + }, 952 + }, 953 + { 954 + .matches = { 955 + DMI_MATCH(DMI_BOARD_NAME, "GU605CR"), 956 + }, 957 + .driver_data = &(struct power_data) { 958 + .ac_data = &(struct power_limits) { 959 + .ppt_pl1_spl_min = 30, 960 + .ppt_pl1_spl_max = 85, 961 + .ppt_pl2_sppt_min = 38, 962 + .ppt_pl2_sppt_max = 110, 963 + .nv_dynamic_boost_min = 5, 964 + .nv_dynamic_boost_max = 20, 965 + .nv_temp_target_min = 75, 966 + .nv_temp_target_max = 87, 967 + .nv_tgp_min = 80, 968 + .nv_tgp_def = 90, 969 + .nv_tgp_max = 105, 970 + }, 971 + .dc_data = &(struct power_limits) { 972 + .ppt_pl1_spl_min = 30, 973 + .ppt_pl1_spl_max = 85, 974 + .ppt_pl2_sppt_min = 38, 975 + .ppt_pl2_sppt_max = 110, 976 + .nv_temp_target_min = 75, 977 + .nv_temp_target_max = 87, 978 + }, 979 + .requires_fan_curve = true, 1034 980 }, 1035 981 }, 1036 982 { ··· 1374 1262 }, 1375 1263 { 1376 1264 .matches = { 1265 + DMI_MATCH(DMI_BOARD_NAME, "G615LR"), 1266 + }, 1267 + .driver_data = &(struct power_data) { 1268 + .ac_data = &(struct power_limits) { 1269 + .ppt_pl1_spl_min = 28, 1270 + .ppt_pl1_spl_def = 140, 1271 + .ppt_pl1_spl_max = 175, 1272 + .ppt_pl2_sppt_min = 28, 1273 + .ppt_pl2_sppt_max = 175, 1274 + .nv_temp_target_min = 75, 1275 + .nv_temp_target_max = 87, 1276 + .nv_dynamic_boost_min = 5, 1277 + .nv_dynamic_boost_max = 25, 1278 + .nv_tgp_min = 65, 1279 + .nv_tgp_max = 115, 1280 + }, 1281 + .dc_data = &(struct power_limits) { 1282 + .ppt_pl1_spl_min = 25, 1283 + .ppt_pl1_spl_max = 55, 1284 + .ppt_pl2_sppt_min = 25, 1285 + .ppt_pl2_sppt_max = 70, 1286 + .nv_temp_target_min = 75, 1287 + .nv_temp_target_max = 87, 1288 + }, 1289 + .requires_fan_curve = true, 1290 + }, 1291 + }, 1292 + { 1293 + .matches = { 1377 1294 DMI_MATCH(DMI_BOARD_NAME, "G634J"), 1378 1295 }, 1379 1296 .driver_data = &(struct power_data) { ··· 1555 1414 .nv_dynamic_boost_max = 25, 1556 1415 .nv_temp_target_min = 75, 1557 1416 .nv_temp_target_max = 87, 1417 + }, 1418 + .dc_data = &(struct power_limits) { 1419 + .ppt_pl1_spl_min = 25, 1420 + .ppt_pl1_spl_max = 55, 1421 + .ppt_pl2_sppt_min = 25, 1422 + .ppt_pl2_sppt_max = 70, 1423 + .nv_temp_target_min = 75, 1424 + .nv_temp_target_max = 87, 1425 + }, 1426 + .requires_fan_curve = true, 1427 + }, 1428 + }, 1429 + { 1430 + .matches = { 1431 + DMI_MATCH(DMI_BOARD_NAME, "G835LW"), 1432 + }, 1433 + .driver_data = &(struct power_data) { 1434 + .ac_data = &(struct power_limits) { 1435 + .ppt_pl1_spl_min = 28, 1436 + .ppt_pl1_spl_def = 140, 1437 + .ppt_pl1_spl_max = 175, 1438 + .ppt_pl2_sppt_min = 28, 1439 + .ppt_pl2_sppt_max = 175, 1440 + .nv_dynamic_boost_min = 5, 1441 + .nv_dynamic_boost_max = 25, 1442 + .nv_temp_target_min = 75, 1443 + .nv_temp_target_max = 87, 1444 + .nv_tgp_min = 80, 1445 + .nv_tgp_max = 150, 1558 1446 }, 1559 1447 .dc_data = &(struct power_limits) { 1560 1448 .ppt_pl1_spl_min = 25,
+1
drivers/platform/x86/asus-nb-wmi.c
··· 580 580 { KE_KEY, 0x2a, { KEY_SELECTIVE_SCREENSHOT } }, 581 581 { KE_IGNORE, 0x2b, }, /* PrintScreen (also send via PS/2) on newer models */ 582 582 { KE_IGNORE, 0x2c, }, /* CapsLock (also send via PS/2) on newer models */ 583 + { KE_KEY, 0x2d, { KEY_DISPLAYTOGGLE } }, 583 584 { KE_KEY, 0x30, { KEY_VOLUMEUP } }, 584 585 { KE_KEY, 0x31, { KEY_VOLUMEDOWN } }, 585 586 { KE_KEY, 0x32, { KEY_MUTE } },
+32
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 90 90 91 91 static const struct dmi_system_id awcc_dmi_table[] __initconst = { 92 92 { 93 + .ident = "Alienware 16 Area-51", 94 + .matches = { 95 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 96 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware 16 Area-51"), 97 + }, 98 + .driver_data = &g_series_quirks, 99 + }, 100 + { 101 + .ident = "Alienware 16X Aurora", 102 + .matches = { 103 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 104 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware 16X Aurora"), 105 + }, 106 + .driver_data = &g_series_quirks, 107 + }, 108 + { 109 + .ident = "Alienware 18 Area-51", 110 + .matches = { 111 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 112 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware 18 Area-51"), 113 + }, 114 + .driver_data = &g_series_quirks, 115 + }, 116 + { 93 117 .ident = "Alienware 16 Aurora", 94 118 .matches = { 95 119 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), ··· 184 160 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware x15"), 185 161 }, 186 162 .driver_data = &generic_quirks, 163 + }, 164 + { 165 + .ident = "Alienware x16", 166 + .matches = { 167 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 168 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware x16"), 169 + }, 170 + .driver_data = &g_series_quirks, 187 171 }, 188 172 { 189 173 .ident = "Alienware x17",
+1
drivers/platform/x86/dell/dell-lis3lv02d.c
··· 44 44 /* 45 45 * Additional individual entries were added after verification. 46 46 */ 47 + DELL_LIS3LV02D_DMI_ENTRY("Latitude 5400", 0x29), 47 48 DELL_LIS3LV02D_DMI_ENTRY("Latitude 5480", 0x29), 48 49 DELL_LIS3LV02D_DMI_ENTRY("Latitude 5500", 0x29), 49 50 DELL_LIS3LV02D_DMI_ENTRY("Latitude E6330", 0x29),
+2 -2
drivers/platform/x86/hp/hp-bioscfg/enum-attributes.c
··· 207 207 case PREREQUISITES: 208 208 size = min_t(u32, enum_data->common.prerequisites_size, MAX_PREREQUISITES_SIZE); 209 209 for (reqs = 0; reqs < size; reqs++) { 210 - if (elem >= enum_obj_count) { 210 + if (elem + reqs >= enum_obj_count) { 211 211 pr_err("Error enum-objects package is too small\n"); 212 212 return -EINVAL; 213 213 } ··· 255 255 256 256 for (pos_values = 0; pos_values < size && pos_values < MAX_VALUES_SIZE; 257 257 pos_values++) { 258 - if (elem >= enum_obj_count) { 258 + if (elem + pos_values >= enum_obj_count) { 259 259 pr_err("Error enum-objects package is too small\n"); 260 260 return -EINVAL; 261 261 }
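The hp-bioscfg fixes above, and the matching ones in the int-, order-list-, passwdobj- and string-attributes files that follow, all correct the same loop guard: the element actually dereferenced inside the loop is obj[elem + reqs], so the bound must be checked against that sum, not against elem alone. A minimal sketch with illustrative names:

```c
#include <stdio.h>

static int read_prereqs(const int *obj, int obj_count, int elem, int nreqs)
{
	for (int i = 0; i < nreqs; i++) {
		if (elem + i >= obj_count)	/* checking only elem would overrun */
			return -1;
		printf("prereq %d = %d\n", i, obj[elem + i]);
	}
	return 0;
}

int main(void)
{
	int pkg[5] = { 10, 11, 12, 13, 14 };

	read_prereqs(pkg, 5, 3, 2);		/* ok: touches pkg[3], pkg[4] */
	if (read_prereqs(pkg, 5, 3, 4) < 0)
		printf("package too small\n");	/* pkg[5] would be out of bounds */
	return 0;
}
```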
+1 -1
drivers/platform/x86/hp/hp-bioscfg/int-attributes.c
··· 227 227 size = min_t(u32, integer_data->common.prerequisites_size, MAX_PREREQUISITES_SIZE); 228 228 229 229 for (reqs = 0; reqs < size; reqs++) { 230 - if (elem >= integer_obj_count) { 230 + if (elem + reqs >= integer_obj_count) { 231 231 pr_err("Error elem-objects package is too small\n"); 232 232 return -EINVAL; 233 233 }
+5
drivers/platform/x86/hp/hp-bioscfg/order-list-attributes.c
··· 216 216 size = min_t(u32, ordered_list_data->common.prerequisites_size, 217 217 MAX_PREREQUISITES_SIZE); 218 218 for (reqs = 0; reqs < size; reqs++) { 219 + if (elem + reqs >= order_obj_count) { 220 + pr_err("Error elem-objects package is too small\n"); 221 + return -EINVAL; 222 + } 223 + 219 224 ret = hp_convert_hexstr_to_str(order_obj[elem + reqs].string.pointer, 220 225 order_obj[elem + reqs].string.length, 221 226 &str_value, &value_len);
+5
drivers/platform/x86/hp/hp-bioscfg/passwdobj-attributes.c
··· 303 303 MAX_PREREQUISITES_SIZE); 304 304 305 305 for (reqs = 0; reqs < size; reqs++) { 306 + if (elem + reqs >= password_obj_count) { 307 + pr_err("Error elem-objects package is too small\n"); 308 + return -EINVAL; 309 + } 310 + 306 311 ret = hp_convert_hexstr_to_str(password_obj[elem + reqs].string.pointer, 307 312 password_obj[elem + reqs].string.length, 308 313 &str_value, &value_len);
+1 -1
drivers/platform/x86/hp/hp-bioscfg/string-attributes.c
··· 217 217 MAX_PREREQUISITES_SIZE); 218 218 219 219 for (reqs = 0; reqs < size; reqs++) { 220 - if (elem >= string_obj_count) { 220 + if (elem + reqs >= string_obj_count) { 221 221 pr_err("Error elem-objects package is too small\n"); 222 222 return -EINVAL; 223 223 }
+1 -1
drivers/platform/x86/ibm_rtl.c
··· 273 273 /* search for the _RTL_ signature at the start of the table */ 274 274 for (i = 0 ; i < ebda_size/sizeof(unsigned int); i++) { 275 275 struct ibm_rtl_table __iomem * tmp; 276 - tmp = (struct ibm_rtl_table __iomem *) (ebda_map+i); 276 + tmp = (struct ibm_rtl_table __iomem *) (ebda_map + i*sizeof(unsigned int)); 277 277 if ((readq(&tmp->signature) & RTL_MASK) == RTL_SIGNATURE) { 278 278 phys_addr_t addr; 279 279 unsigned int plen;
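The ibm_rtl fix above corrects classic pointer-arithmetic scaling: adding i to a struct pointer advances by i * sizeof(struct), so a scan at word granularity must scale the byte offset explicitly. A userspace sketch of the corrected scan, assuming a little-endian host and illustrative signature values:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct table {
	uint64_t signature;
};

int main(void)
{
	unsigned int mem[8] = { 0, 0, 0xdead, 0xbeef, 0, 0, 0, 0 };
	uint64_t want = ((uint64_t)0xbeef << 32) | 0xdead; /* little-endian view */

	/* Wrong: (struct table *)mem + i would stride sizeof(struct table)
	 * (8 bytes) per step and skip half the candidate offsets.  To scan
	 * at unsigned-int granularity, scale the byte offset explicitly. */
	for (size_t i = 0; i + 1 < 8; i++) {	/* each probe spans two words */
		uint64_t sig;

		memcpy(&sig, (const char *)mem + i * sizeof(unsigned int),
		       sizeof(sig));
		if (sig == want)
			printf("signature found at word %zu\n", i);
	}
	return 0;
}
```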
+5 -3
drivers/platform/x86/intel/pmt/discovery.c
··· 503 503 504 504 ret = kobject_init_and_add(&feature->kobj, ktype, &priv->dev->kobj, 505 505 "%s", pmt_feature_names[feature->id]); 506 - if (ret) 506 + if (ret) { 507 + kobject_put(&feature->kobj); 507 508 return ret; 509 + } 508 510 509 511 kobject_uevent(&feature->kobj, KOBJ_ADD); 510 512 pmt_features_add_feat(feature); ··· 548 546 priv->dev = device_create(&intel_pmt_class, &auxdev->dev, MKDEV(0, 0), priv, 549 547 "%s-%s", "features", dev_name(priv->parent)); 550 548 if (IS_ERR(priv->dev)) 551 - return dev_err_probe(priv->dev, PTR_ERR(priv->dev), 549 + return dev_err_probe(&auxdev->dev, PTR_ERR(priv->dev), 552 550 "Could not create %s-%s device node\n", 553 - "features", dev_name(priv->dev)); 551 + "features", dev_name(priv->parent)); 554 552 555 553 /* Initialize each feature */ 556 554 for (i = 0; i < ivdev->num_resources; i++) {
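The intel/pmt fix above adds the kobject_put() that kobject_init_and_add() requires on failure: once initialized, the object holds a reference and must be freed through its release callback, never with a bare kfree(). A userspace sketch of that put-on-failure rule; the obj type and helpers below are illustrative stand-ins for the kobject machinery:

```c
#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refs;
	void (*release)(struct obj *);
};

static void obj_put(struct obj *o)
{
	if (--o->refs == 0)
		o->release(o);		/* single, well-defined free point */
}

static void obj_release(struct obj *o)
{
	printf("released\n");
	free(o);
}

static int obj_init_and_add(struct obj *o, int fail)
{
	o->refs = 1;			/* the object now owns one reference */
	o->release = obj_release;
	return fail ? -1 : 0;
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	if (!o)
		return 1;
	if (obj_init_and_add(o, 1)) {
		obj_put(o);		/* not free(o): release() must run */
		return 1;
	}
	obj_put(o);
	return 0;
}
```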
+1 -1
drivers/platform/x86/lenovo/ideapad-laptop.c
··· 1367 1367 /* Performance toggle also Fn+Q, handled inside ideapad_wmi_notify() */ 1368 1368 { KE_KEY, 0x3d | IDEAPAD_WMI_KEY, { KEY_PROG4 } }, 1369 1369 /* shift + prtsc */ 1370 - { KE_KEY, 0x2d | IDEAPAD_WMI_KEY, { KEY_CUT } }, 1370 + { KE_KEY, 0x2d | IDEAPAD_WMI_KEY, { KEY_SELECTIVE_SCREENSHOT } }, 1371 1371 { KE_KEY, 0x29 | IDEAPAD_WMI_KEY, { KEY_TOUCHPAD_TOGGLE } }, 1372 1372 { KE_KEY, 0x2a | IDEAPAD_WMI_KEY, { KEY_ROOT_MENU } }, 1373 1373
+5 -1
drivers/platform/x86/lenovo/think-lmi.c
··· 195 195 }; 196 196 197 197 static const struct tlmi_cert_guids thinkcenter_cert_guid = { 198 - .thumbprint = NULL, 198 + .thumbprint = LENOVO_CERT_THUMBPRINT_GUID, /* Same GUID as TP */ 199 199 .set_bios_setting = LENOVO_TC_SET_BIOS_SETTING_CERT_GUID, 200 200 .save_bios_setting = LENOVO_TC_SAVE_BIOS_SETTING_CERT_GUID, 201 201 .cert_to_password = LENOVO_TC_CERT_TO_PASSWORD_GUID, ··· 707 707 acpi_status status; 708 708 709 709 if (!tlmi_priv.cert_guid->thumbprint) 710 + return -EOPNOTSUPP; 711 + 712 + /* Older ThinkCenter BIOS may not have support */ 713 + if (!wmi_has_guid(tlmi_priv.cert_guid->thumbprint)) 710 714 return -EOPNOTSUPP; 711 715 712 716 status = wmi_evaluate_method(tlmi_priv.cert_guid->thumbprint, 0, 0, &input, &output);
+3
drivers/platform/x86/msi-laptop.c
··· 1130 1130 sysfs_remove_group(&msipf_device->dev.kobj, &msipf_attribute_group); 1131 1131 if (!quirks->old_ec_model && threeg_exists) 1132 1132 device_remove_file(&msipf_device->dev, &dev_attr_threeg); 1133 + if (quirks->old_ec_model) 1134 + sysfs_remove_group(&msipf_device->dev.kobj, 1135 + &msipf_old_attribute_group); 1133 1136 platform_device_unregister(msipf_device); 1134 1137 platform_driver_unregister(&msipf_driver); 1135 1138 backlight_device_unregister(msibl_device);
+6 -3
drivers/platform/x86/samsung-galaxybook.c
··· 442 442 union power_supply_propval *val) 443 443 { 444 444 struct samsung_galaxybook *galaxybook = ext_data; 445 + u8 value; 445 446 int err; 446 447 447 448 if (psp != POWER_SUPPLY_PROP_CHARGE_CONTROL_END_THRESHOLD) 448 449 return -EINVAL; 449 450 450 - err = charge_control_end_threshold_acpi_get(galaxybook, (u8 *)&val->intval); 451 + err = charge_control_end_threshold_acpi_get(galaxybook, &value); 451 452 if (err) 452 453 return err; 453 454 ··· 456 455 * device stores "no end threshold" as 0 instead of 100; 457 456 * if device has 0, report 100 458 457 */ 459 - if (val->intval == 0) 460 - val->intval = 100; 458 + if (value == 0) 459 + value = 100; 460 + 461 + val->intval = value; 461 462 462 463 return 0; 463 464 }
+7
drivers/platform/x86/uniwill/uniwill-acpi.c
··· 1845 1845 }, 1846 1846 }, 1847 1847 { 1848 + .ident = "TUXEDO Book BA15 Gen10 AMD", 1849 + .matches = { 1850 + DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"), 1851 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "PF5PU1G"), 1852 + }, 1853 + }, 1854 + { 1848 1855 .ident = "TUXEDO Pulse 14 Gen1 AMD", 1849 1856 .matches = { 1850 1857 DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+2 -3
drivers/pmdomain/imx/gpc.c
··· 402 402 static int imx_gpc_probe(struct platform_device *pdev) 403 403 { 404 404 const struct imx_gpc_dt_data *of_id_data = device_get_match_data(&pdev->dev); 405 - struct device_node *pgc_node; 405 + struct device_node *pgc_node __free(device_node) 406 + = of_get_child_by_name(pdev->dev.of_node, "pgc"); 406 407 struct regmap *regmap; 407 408 void __iomem *base; 408 409 int ret; 409 - 410 - pgc_node = of_get_child_by_name(pdev->dev.of_node, "pgc"); 411 410 412 411 /* bail out if DT too old and doesn't provide the necessary info */ 413 412 if (!of_property_present(pdev->dev.of_node, "#power-domain-cells") &&
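The imx gpc change above converts pgc_node to scope-based cleanup with __free(device_node), so every return path drops the node reference automatically. A userspace sketch of the underlying compiler mechanism, the GCC/Clang cleanup attribute; the AUTO_FREE macro and free_buf() helper are illustrative, not the kernel's actual __free() plumbing:

```c
#include <stdio.h>
#include <stdlib.h>

static void free_buf(char **p)
{
	free(*p);			/* runs automatically at scope exit */
	printf("buffer released\n");
}

#define AUTO_FREE __attribute__((cleanup(free_buf)))

int main(void)
{
	AUTO_FREE char *buf = malloc(64);

	if (!buf)
		return 1;
	/* any early return from here on still frees buf */
	snprintf(buf, 64, "hello");
	printf("%s\n", buf);
	return 0;
}
```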
+6 -15
drivers/pmdomain/mediatek/mtk-pm-domains.c
··· 984 984 } 985 985 } 986 986 987 - static struct device_node *scpsys_get_legacy_regmap(struct device_node *np, const char *pn) 988 - { 989 - struct device_node *local_node; 990 - 991 - for_each_child_of_node(np, local_node) { 992 - if (of_property_present(local_node, pn)) 993 - return local_node; 994 - } 995 - 996 - return NULL; 997 - } 998 - 999 987 static int scpsys_get_bus_protection_legacy(struct device *dev, struct scpsys *scpsys) 1000 988 { 1001 989 const u8 bp_blocks[3] = { ··· 1005 1017 * this makes it then possible to allocate the array of bus_prot 1006 1018 * regmaps and convert all to the new style handling. 1007 1019 */ 1008 - node = scpsys_get_legacy_regmap(np, "mediatek,infracfg"); 1020 + of_node_get(np); 1021 + node = of_find_node_with_property(np, "mediatek,infracfg"); 1009 1022 if (node) { 1010 1023 regmap[0] = syscon_regmap_lookup_by_phandle(node, "mediatek,infracfg"); 1011 1024 of_node_put(node); ··· 1019 1030 regmap[0] = NULL; 1020 1031 } 1021 1032 1022 - node = scpsys_get_legacy_regmap(np, "mediatek,smi"); 1033 + of_node_get(np); 1034 + node = of_find_node_with_property(np, "mediatek,smi"); 1023 1035 if (node) { 1024 1036 smi_np = of_parse_phandle(node, "mediatek,smi", 0); 1025 1037 of_node_put(node); ··· 1038 1048 regmap[1] = NULL; 1039 1049 } 1040 1050 1041 - node = scpsys_get_legacy_regmap(np, "mediatek,infracfg-nao"); 1051 + of_node_get(np); 1052 + node = of_find_node_with_property(np, "mediatek,infracfg-nao"); 1042 1053 if (node) { 1043 1054 regmap[2] = syscon_regmap_lookup_by_phandle(node, "mediatek,infracfg-nao"); 1044 1055 num_regmaps++;
+3
drivers/regulator/fp9931.c
··· 391 391 { 392 392 .name = "v3p3", 393 393 .of_match = of_match_ptr("v3p3"), 394 + .regulators_node = of_match_ptr("regulators"), 394 395 .id = 0, 395 396 .ops = &fp9931_v3p3ops, 396 397 .type = REGULATOR_VOLTAGE, ··· 404 403 { 405 404 .name = "vposneg", 406 405 .of_match = of_match_ptr("vposneg"), 406 + .regulators_node = of_match_ptr("regulators"), 407 407 .id = 1, 408 408 .ops = &fp9931_vposneg_ops, 409 409 .type = REGULATOR_VOLTAGE, ··· 417 415 { 418 416 .name = "vcom", 419 417 .of_match = of_match_ptr("vcom"), 418 + .regulators_node = of_match_ptr("regulators"), 420 419 .id = 2, 421 420 .ops = &fp9931_vcom_ops, 422 421 .type = REGULATOR_VOLTAGE,
+1
drivers/scsi/mpi3mr/mpi/mpi30_ioc.h
··· 166 166 #define MPI3_IOCFACTS_FLAGS_SIGNED_NVDATA_REQUIRED (0x00010000) 167 167 #define MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK (0x0000ff00) 168 168 #define MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT (8) 169 + #define MPI3_IOCFACTS_FLAGS_MAX_REQ_PER_REPLY_QUEUE_LIMIT (0x00000040) 169 170 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_MASK (0x00000030) 170 171 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_SHIFT (4) 171 172 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_NOT_STARTED (0x00000000)
+2
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 3158 3158 mrioc->facts.dma_mask = (facts_flags & 3159 3159 MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >> 3160 3160 MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT; 3161 + mrioc->facts.max_req_limit = (facts_flags & 3162 + MPI3_IOCFACTS_FLAGS_MAX_REQ_PER_REPLY_QUEUE_LIMIT); 3161 3163 mrioc->facts.protocol_flags = facts_data->protocol_flags; 3162 3164 mrioc->facts.mpi_version = le32_to_cpu(facts_data->mpi_version.word); 3163 3165 mrioc->facts.max_reqs = le16_to_cpu(facts_data->max_outstanding_requests);
+1 -1
drivers/scsi/scsi_debug.c
··· 7459 7459 MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)"); 7460 7460 MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)"); 7461 7461 MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)"); 7462 - MODULE_PARM_DESC(atomic_write, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)"); 7462 + MODULE_PARM_DESC(atomic_wr, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)"); 7463 7463 MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)"); 7464 7464 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method"); 7465 7465 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
+13 -7
drivers/scsi/sg.c
··· 731 731 sg_remove_request(sfp, srp); 732 732 return -EFAULT; 733 733 } 734 + hp->duration = jiffies_to_msecs(jiffies); 735 + 734 736 if (hp->interface_id != 'S') { 735 737 sg_remove_request(sfp, srp); 736 738 return -ENOSYS; ··· 817 815 return -ENODEV; 818 816 } 819 817 820 - hp->duration = jiffies_to_msecs(jiffies); 821 818 if (hp->interface_id != '\0' && /* v3 (or later) interface */ 822 819 (SG_FLAG_Q_AT_TAIL & hp->flags)) 823 820 at_head = 0; ··· 1339 1338 "sg_cmd_done: pack_id=%d, res=0x%x\n", 1340 1339 srp->header.pack_id, result)); 1341 1340 srp->header.resid = resid; 1342 - ms = jiffies_to_msecs(jiffies); 1343 - srp->header.duration = (ms > srp->header.duration) ? 1344 - (ms - srp->header.duration) : 0; 1345 1341 if (0 != result) { 1346 1342 struct scsi_sense_hdr sshdr; 1347 1343 ··· 1387 1389 done = 0; 1388 1390 } 1389 1391 srp->done = done; 1392 + ms = jiffies_to_msecs(jiffies); 1393 + srp->header.duration = (ms > srp->header.duration) ? 1394 + (ms - srp->header.duration) : 0; 1390 1395 write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 1391 1396 1392 1397 if (likely(done)) { ··· 2534 2533 const sg_io_hdr_t *hp; 2535 2534 const char * cp; 2536 2535 unsigned int ms; 2536 + unsigned int duration; 2537 2537 2538 2538 k = 0; 2539 2539 list_for_each_entry(fp, &sdp->sfds, sfd_siblings) { ··· 2572 2570 seq_printf(s, " id=%d blen=%d", 2573 2571 srp->header.pack_id, blen); 2574 2572 if (srp->done) 2575 - seq_printf(s, " dur=%d", hp->duration); 2573 + seq_printf(s, " dur=%u", hp->duration); 2576 2574 else { 2577 2575 ms = jiffies_to_msecs(jiffies); 2578 - seq_printf(s, " t_o/elap=%d/%d", 2576 + duration = READ_ONCE(hp->duration); 2577 + if (duration) 2578 + duration = (ms > duration ? 2579 + ms - duration : 0); 2580 + seq_printf(s, " t_o/elap=%u/%u", 2579 2581 (new_interface ? hp->timeout : 2580 2582 jiffies_to_msecs(fp->timeout)), 2581 - (ms > hp->duration ? ms - hp->duration : 0)); 2583 + duration); 2582 2584 } 2583 2585 seq_printf(s, "ms sgat=%d op=0x%02x\n", usg, 2584 2586 (int) srp->data.cmd_opcode);
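Part of the sg.c change above snapshots hp->duration once with READ_ONCE() before the procfs code both tests and subtracts it, so a concurrent update by the completion path cannot land between the two uses. A sketch of the read-once idea, using a C11 atomic in place of the kernel's READ_ONCE(); the field name and zero-means-unset convention are illustrative:

```c
#include <stdio.h>
#include <stdatomic.h>

static _Atomic unsigned int duration;	/* 0 means "not started yet" */

static unsigned int elapsed(unsigned int now)
{
	unsigned int d = atomic_load_explicit(&duration, memory_order_relaxed);

	/* A second read here could observe a different value between the
	 * zero test and the subtraction; the single load prevents that. */
	if (d == 0)
		return 0;
	return now > d ? now - d : 0;
}

int main(void)
{
	atomic_store_explicit(&duration, 100, memory_order_relaxed);
	printf("elapsed=%u\n", elapsed(250));	/* 150 */
	return 0;
}
```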
+11 -12
drivers/spi/spi-cadence-quadspi.c
··· 300 300 CQSPI_REG_IRQ_IND_SRAM_FULL | \ 301 301 CQSPI_REG_IRQ_IND_COMP) 302 302 303 + #define CQSPI_IRQ_MASK_RD_SLOW_SRAM (CQSPI_REG_IRQ_WATERMARK | \ 304 + CQSPI_REG_IRQ_IND_COMP) 305 + 303 306 #define CQSPI_IRQ_MASK_WR (CQSPI_REG_IRQ_IND_COMP | \ 304 307 CQSPI_REG_IRQ_WATERMARK | \ 305 308 CQSPI_REG_IRQ_UNDERFLOW) ··· 384 381 else if (!cqspi->slow_sram) 385 382 irq_status &= CQSPI_IRQ_MASK_RD | CQSPI_IRQ_MASK_WR; 386 383 else 387 - irq_status &= CQSPI_REG_IRQ_WATERMARK | CQSPI_IRQ_MASK_WR; 384 + irq_status &= CQSPI_IRQ_MASK_RD_SLOW_SRAM | CQSPI_IRQ_MASK_WR; 388 385 389 386 if (irq_status) 390 387 complete(&cqspi->transfer_complete); ··· 760 757 */ 761 758 762 759 if (use_irq && cqspi->slow_sram) 763 - writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK); 760 + writel(CQSPI_IRQ_MASK_RD_SLOW_SRAM, reg_base + CQSPI_REG_IRQMASK); 764 761 else if (use_irq) 765 762 writel(CQSPI_IRQ_MASK_RD, reg_base + CQSPI_REG_IRQMASK); 766 763 else ··· 772 769 readl(reg_base + CQSPI_REG_INDIRECTRD); /* Flush posted write. */ 773 770 774 771 while (remaining > 0) { 772 + ret = 0; 775 773 if (use_irq && 776 774 !wait_for_completion_timeout(&cqspi->transfer_complete, 777 775 msecs_to_jiffies(CQSPI_READ_TIMEOUT_MS))) 778 776 ret = -ETIMEDOUT; 779 777 780 778 /* 781 - * Disable all read interrupts until 782 - * we are out of "bytes to read" 779 + * Prevent lost interrupt and race condition by reinitializing early. 780 + * A spurious wakeup and another wait cycle can occur here, 781 + * which is preferable to waiting until timeout if interrupt is lost. 783 782 */ 784 - if (cqspi->slow_sram) 785 - writel(0x0, reg_base + CQSPI_REG_IRQMASK); 783 + if (use_irq) 784 + reinit_completion(&cqspi->transfer_complete); 786 785 787 786 bytes_to_read = cqspi_get_rd_sram_level(cqspi); 788 787 ··· 815 810 rxbuf += bytes_to_read; 816 811 remaining -= bytes_to_read; 817 812 bytes_to_read = cqspi_get_rd_sram_level(cqspi); 818 - } 819 - 820 - if (use_irq && remaining > 0) { 821 - reinit_completion(&cqspi->transfer_complete); 822 - if (cqspi->slow_sram) 823 - writel(CQSPI_REG_IRQ_WATERMARK, reg_base + CQSPI_REG_IRQMASK); 824 813 } 825 814 } 826 815
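The cadence-quadspi change above re-arms the completion before sampling the SRAM fill level, trading a possible spurious wakeup for immunity against a lost interrupt, as the new comment in the patch explains. A toy single-threaded sketch of why the ordering matters; event_pending, irq_handler() and sample_fifo_level() are illustrative stand-ins for the completion and hardware registers:

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>

static atomic_bool event_pending;

static void irq_handler(void)		/* imagine this preempting main() */
{
	atomic_store(&event_pending, true);
}

static int sample_fifo_level(void)	/* pretend the FIFO reads empty */
{
	return 0;
}

int main(void)
{
	/* Re-arm FIRST: any interrupt after this point stays visible.
	 * Re-arming after the sample would let an event that fires in
	 * between be cleared and lost, stalling until the timeout. */
	atomic_store(&event_pending, false);

	int level = sample_fifo_level();
	irq_handler();			/* event arrives right after sampling */

	if (level == 0 && atomic_load(&event_pending))
		printf("event caught, no lost wakeup\n");
	return 0;
}
```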
+7 -4
drivers/spi/spi-sun6i.c
··· 795 795 static const struct of_device_id sun6i_spi_match[] = { 796 796 { .compatible = "allwinner,sun6i-a31-spi", .data = &sun6i_a31_spi_cfg }, 797 797 { .compatible = "allwinner,sun8i-h3-spi", .data = &sun8i_h3_spi_cfg }, 798 - { 799 - .compatible = "allwinner,sun50i-r329-spi", 800 - .data = &sun50i_r329_spi_cfg 801 - }, 798 + { .compatible = "allwinner,sun50i-r329-spi", .data = &sun50i_r329_spi_cfg }, 799 + /* 800 + * A523's SPI controller has a combined RX buffer + FIFO counter 801 + * at offset 0x400, instead of split buffer count in FIFO status 802 + * register. But in practice we only care about the FIFO level. 803 + */ 804 + { .compatible = "allwinner,sun55i-a523-spi", .data = &sun50i_r329_spi_cfg }, 802 805 {} 803 806 }; 804 807 MODULE_DEVICE_TABLE(of, sun6i_spi_match);
+2 -2
drivers/tty/serial/8250/8250_loongson.c
··· 128 128 port->private_data = priv; 129 129 130 130 port->membase = devm_platform_get_and_ioremap_resource(pdev, 0, &priv->res); 131 - if (!port->membase) 132 - return -ENOMEM; 131 + if (IS_ERR(port->membase)) 132 + return PTR_ERR(port->membase); 133 133 134 134 port->mapbase = priv->res->start; 135 135 port->mapsize = resource_size(priv->res);
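The 8250_loongson fix above matters because devm_platform_get_and_ioremap_resource() reports failure through an encoded error pointer, which a NULL check never catches. A self-contained sketch of the ERR_PTR/IS_ERR convention, simplified from the kernel's version; map_resource() is an illustrative stand-in:

```c
#include <stdio.h>
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* errors live in the top MAX_ERRNO addresses, never valid memory */
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static void *map_resource(int fail)
{
	static char region[16];

	return fail ? ERR_PTR(-ENOMEM) : region;
}

int main(void)
{
	void *base = map_resource(1);

	if (IS_ERR(base)) {		/* a NULL check would miss this */
		printf("mapping failed: %ld\n", PTR_ERR(base));
		return 1;
	}
	return 0;
}
```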
+7 -4
drivers/tty/serial/serial_base_bus.c
··· 13 13 #include <linux/device.h> 14 14 #include <linux/idr.h> 15 15 #include <linux/module.h> 16 - #include <linux/of.h> 16 + #include <linux/property.h> 17 17 #include <linux/serial_core.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/spinlock.h> ··· 60 60 driver_unregister(driver); 61 61 } 62 62 63 + /* On failure the caller must put device @dev with put_device() */ 63 64 static int serial_base_device_init(struct uart_port *port, 64 65 struct device *dev, 65 66 struct device *parent_dev, ··· 74 73 dev->parent = parent_dev; 75 74 dev->bus = &serial_base_bus_type; 76 75 dev->release = release; 77 - device_set_of_node_from_dev(dev, parent_dev); 76 + dev->of_node_reused = true; 77 + 78 + device_set_node(dev, fwnode_handle_get(dev_fwnode(parent_dev))); 78 79 79 80 if (!serial_base_initialized) { 80 81 dev_dbg(port->dev, "uart_add_one_port() called before arch_initcall()?\n"); ··· 97 94 { 98 95 struct serial_ctrl_device *ctrl_dev = to_serial_base_ctrl_device(dev); 99 96 100 - of_node_put(dev->of_node); 97 + fwnode_handle_put(dev_fwnode(dev)); 101 98 kfree(ctrl_dev); 102 99 } 103 100 ··· 145 142 { 146 143 struct serial_port_device *port_dev = to_serial_base_port_device(dev); 147 144 148 - of_node_put(dev->of_node); 145 + fwnode_handle_put(dev_fwnode(dev)); 149 146 kfree(port_dev); 150 147 } 151 148
+1 -1
drivers/tty/serial/sh-sci.c
··· 1914 1914 struct dma_tx_state state; 1915 1915 enum dma_status status; 1916 1916 1917 - if (!s->chan_tx) 1917 + if (!s->chan_tx || s->cookie_tx <= 0) 1918 1918 return; 1919 1919 1920 1920 status = dmaengine_tx_status(s->chan_tx, s->cookie_tx, &state);
+7 -7
drivers/tty/serial/xilinx_uartps.c
··· 428 428 struct tty_port *tport = &port->state->port; 429 429 unsigned int numbytes; 430 430 unsigned char ch; 431 + ktime_t rts_delay; 431 432 432 433 if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 433 434 /* Disable the TX Empty interrupt */ 434 435 writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IDR); 436 + /* Set RTS line after delay */ 437 + if (cdns_uart->port->rs485.flags & SER_RS485_ENABLED) { 438 + cdns_uart->tx_timer.function = &cdns_rs485_rx_callback; 439 + rts_delay = ns_to_ktime(cdns_calc_after_tx_delay(cdns_uart)); 440 + hrtimer_start(&cdns_uart->tx_timer, rts_delay, HRTIMER_MODE_REL); 441 + } 435 442 return; 436 443 } 437 444 ··· 455 448 456 449 /* Enable the TX Empty interrupt */ 457 450 writel(CDNS_UART_IXR_TXEMPTY, cdns_uart->port->membase + CDNS_UART_IER); 458 - 459 - if (cdns_uart->port->rs485.flags & SER_RS485_ENABLED && 460 - (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port))) { 461 - hrtimer_update_function(&cdns_uart->tx_timer, cdns_rs485_rx_callback); 462 - hrtimer_start(&cdns_uart->tx_timer, 463 - ns_to_ktime(cdns_calc_after_tx_delay(cdns_uart)), HRTIMER_MODE_REL); 464 - } 465 451 } 466 452 467 453 /**
+4 -1
drivers/ufs/core/ufshcd.c
··· 10359 10359 ret = ufshcd_setup_clocks(hba, false); 10360 10360 if (ret) { 10361 10361 ufshcd_enable_irq(hba); 10362 - return ret; 10362 + goto out; 10363 10363 } 10364 10364 if (ufshcd_is_clkgating_allowed(hba)) { 10365 10365 hba->clk_gating.state = CLKS_OFF; ··· 10371 10371 /* Put the host controller in low power mode if possible */ 10372 10372 ufshcd_hba_vreg_set_lpm(hba); 10373 10373 ufshcd_pm_qos_update(hba, false); 10374 + out: 10375 + if (ret) 10376 + ufshcd_update_evt_hist(hba, UFS_EVT_SUSPEND_ERR, (u32)ret); 10374 10377 return ret; 10375 10378 } 10376 10379
+4 -3
drivers/usb/dwc3/dwc3-of-simple.c
··· 70 70 simple->num_clocks = ret; 71 71 ret = clk_bulk_prepare_enable(simple->num_clocks, simple->clks); 72 72 if (ret) 73 - goto err_resetc_assert; 73 + goto err_clk_put_all; 74 74 75 75 ret = of_platform_populate(np, NULL, NULL, dev); 76 76 if (ret) 77 - goto err_clk_put; 77 + goto err_clk_disable; 78 78 79 79 pm_runtime_set_active(dev); 80 80 pm_runtime_enable(dev); ··· 82 82 83 83 return 0; 84 84 85 - err_clk_put: 85 + err_clk_disable: 86 86 clk_bulk_disable_unprepare(simple->num_clocks, simple->clks); 87 + err_clk_put_all: 87 88 clk_bulk_put_all(simple->num_clocks, simple->clks); 88 89 89 90 err_resetc_assert:
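The dwc3-of-simple relabeling above restores the usual probe unwind discipline, also visible in the lpc32xx_udc and ohci-nxp hunks further down: each goto target releases exactly what was acquired before the failing step, in reverse order, and the label name says what it undoes. A minimal sketch with illustrative acquire/release helpers:

```c
#include <stdio.h>

static int acquire(const char *what) { printf("get %s\n", what); return 0; }
static void release(const char *what) { printf("put %s\n", what); }

static int probe(int fail_at)
{
	int ret;

	ret = acquire("clocks");
	if (ret)
		return ret;		/* nothing to unwind yet */

	ret = fail_at == 1 ? -1 : acquire("resets");
	if (ret)
		goto err_clk_put;	/* only clocks are held here */

	ret = fail_at == 2 ? -1 : acquire("children");
	if (ret)
		goto err_reset_release;	/* clocks + resets are held here */

	return 0;

err_reset_release:
	release("resets");
err_clk_put:
	release("clocks");
	return ret;
}

int main(void)
{
	probe(2);			/* fails after resets: puts both */
	return 0;
}
```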
+1 -1
drivers/usb/dwc3/gadget.c
··· 4826 4826 if (!dwc->gadget) 4827 4827 return; 4828 4828 4829 - dwc3_enable_susphy(dwc, false); 4829 + dwc3_enable_susphy(dwc, true); 4830 4830 usb_del_gadget(dwc->gadget); 4831 4831 dwc3_gadget_free_endpoints(dwc); 4832 4832 usb_put_gadget(dwc->gadget);
+1 -1
drivers/usb/dwc3/host.c
··· 227 227 if (dwc->sys_wakeup) 228 228 device_init_wakeup(&dwc->xhci->dev, false); 229 229 230 - dwc3_enable_susphy(dwc, false); 230 + dwc3_enable_susphy(dwc, true); 231 231 platform_device_unregister(dwc->xhci); 232 232 dwc->xhci = NULL; 233 233 }
+25 -17
drivers/usb/gadget/udc/lpc32xx_udc.c
··· 3020 3020 pdev->dev.dma_mask = &lpc32xx_usbd_dmamask; 3021 3021 retval = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 3022 3022 if (retval) 3023 - return retval; 3023 + goto err_put_client; 3024 3024 3025 3025 udc->board = &lpc32xx_usbddata; 3026 3026 ··· 3038 3038 /* Get IRQs */ 3039 3039 for (i = 0; i < 4; i++) { 3040 3040 udc->udp_irq[i] = platform_get_irq(pdev, i); 3041 - if (udc->udp_irq[i] < 0) 3042 - return udc->udp_irq[i]; 3041 + if (udc->udp_irq[i] < 0) { 3042 + retval = udc->udp_irq[i]; 3043 + goto err_put_client; 3044 + } 3043 3045 } 3044 3046 3045 3047 udc->udp_baseaddr = devm_platform_ioremap_resource(pdev, 0); 3046 3048 if (IS_ERR(udc->udp_baseaddr)) { 3047 3049 dev_err(udc->dev, "IO map failure\n"); 3048 - return PTR_ERR(udc->udp_baseaddr); 3050 + retval = PTR_ERR(udc->udp_baseaddr); 3051 + goto err_put_client; 3049 3052 } 3050 3053 3051 3054 /* Get USB device clock */ 3052 3055 udc->usb_slv_clk = devm_clk_get(&pdev->dev, NULL); 3053 3056 if (IS_ERR(udc->usb_slv_clk)) { 3054 3057 dev_err(udc->dev, "failed to acquire USB device clock\n"); 3055 - return PTR_ERR(udc->usb_slv_clk); 3058 + retval = PTR_ERR(udc->usb_slv_clk); 3059 + goto err_put_client; 3056 3060 } 3057 3061 3058 3062 /* Enable USB device clock */ 3059 3063 retval = clk_prepare_enable(udc->usb_slv_clk); 3060 3064 if (retval < 0) { 3061 3065 dev_err(udc->dev, "failed to start USB device clock\n"); 3062 3066 return retval; 3066 + goto err_put_client; 3063 3067 } 3064 3068 3065 3069 /* Setup deferred workqueue data */ ··· 3084 3080 if (!udc->udca_v_base) { 3085 3081 dev_err(udc->dev, "error getting UDCA region\n"); 3086 3082 retval = -ENOMEM; 3087 - goto i2c_fail; 3083 + goto err_disable_clk; 3088 3084 } 3089 3085 udc->udca_p_base = dma_handle; 3090 3086 dev_dbg(udc->dev, "DMA buffer(0x%x bytes), P:0x%08x, V:0x%p\n", ··· 3097 3093 if (!udc->dd_cache) { 3098 3094 dev_err(udc->dev, "error getting DD DMA region\n"); 3099 3095 retval = -ENOMEM; 3100 - goto dma_alloc_fail; 3096 + goto err_free_dma; 3101 3097 } 3102 3098 3103 3099 /* Clear USB peripheral and initialize gadget endpoints */ ··· 3111 3107 if (retval < 0) { 3112 3108 dev_err(udc->dev, "LP request irq %d failed\n", 3113 3109 udc->udp_irq[IRQ_USB_LP]); 3114 - goto irq_req_fail; 3110 + goto err_destroy_pool; 3115 3111 } 3116 3112 retval = devm_request_irq(dev, udc->udp_irq[IRQ_USB_HP], 3117 3113 lpc32xx_usb_hp_irq, 0, "udc_hp", udc); 3118 3114 if (retval < 0) { 3119 3115 dev_err(udc->dev, "HP request irq %d failed\n", 3120 3116 udc->udp_irq[IRQ_USB_HP]); 3121 - goto irq_req_fail; 3117 + goto err_destroy_pool; 3122 3118 } 3123 3119 3124 3120 retval = devm_request_irq(dev, udc->udp_irq[IRQ_USB_DEVDMA], ··· 3126 3122 if (retval < 0) { 3127 3123 dev_err(udc->dev, "DEV request irq %d failed\n", 3128 3124 udc->udp_irq[IRQ_USB_DEVDMA]); 3129 - goto irq_req_fail; 3125 + goto err_destroy_pool; 3130 3126 } 3131 3127 3132 3128 /* The transceiver interrupt is used for VBUS detection and will ··· 3137 3133 if (retval < 0) { 3138 3134 dev_err(udc->dev, "VBUS request irq %d failed\n", 3139 3135 udc->udp_irq[IRQ_USB_ATX]); 3140 - goto irq_req_fail; 3136 + goto err_destroy_pool; 3141 3137 } 3142 3138 3143 3139 /* Initialize wait queue */ ··· 3146 3142 3147 3143 retval = usb_add_gadget_udc(dev, &udc->gadget); 3148 3144 if (retval < 0) 3149 - goto add_gadget_fail; 3145 + goto err_destroy_pool; 3150 3146 3151 3147 dev_set_drvdata(dev, udc); 3152 3148 device_init_wakeup(dev, 1); ··· 3158 3154 dev_info(udc->dev, "%s version %s\n", driver_name, DRIVER_VERSION); 3159 3155 return 0; 3160 3156 3161 - add_gadget_fail: 3162 - irq_req_fail: 3157 + err_destroy_pool: 3163 3158 dma_pool_destroy(udc->dd_cache); 3164 - dma_alloc_fail: 3159 + err_free_dma: 3165 3160 dma_free_coherent(&pdev->dev, UDCA_BUFF_SIZE, 3166 3161 udc->udca_v_base, udc->udca_p_base); 3167 - i2c_fail: 3162 + err_disable_clk: 3168 3163 clk_disable_unprepare(udc->usb_slv_clk); 3164 + err_put_client: 3165 + put_device(&udc->isp1301_i2c_client->dev); 3166 + 3169 3167 dev_err(udc->dev, "%s probe failed, %d\n", driver_name, retval); 3170 3168 3171 3169 return retval; ··· 3196 3190 udc->udca_v_base, udc->udca_p_base); 3197 3191 3198 3192 clk_disable_unprepare(udc->usb_slv_clk); 3193 + 3194 + put_device(&udc->isp1301_i2c_client->dev); 3199 3195 3200 3196 } 3201 3197 #ifdef CONFIG_PM
+10 -8
drivers/usb/host/ohci-nxp.c
··· 169 169 170 170 ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 171 171 if (ret) 172 - goto fail_disable; 172 + goto err_put_client; 173 173 174 174 dev_dbg(&pdev->dev, "%s: " DRIVER_DESC " (nxp)\n", hcd_name); 175 175 if (usb_disabled()) { 176 176 dev_err(&pdev->dev, "USB is disabled\n"); 177 177 ret = -ENODEV; 178 - goto fail_disable; 178 + goto err_put_client; 179 179 } 180 180 181 181 /* Enable USB host clock */ ··· 183 183 if (IS_ERR(usb_host_clk)) { 184 184 dev_err(&pdev->dev, "failed to acquire and start USB OHCI clock\n"); 185 185 ret = PTR_ERR(usb_host_clk); 186 - goto fail_disable; 186 + goto err_put_client; 187 187 } 188 188 189 189 isp1301_configure(); ··· 192 192 if (!hcd) { 193 193 dev_err(&pdev->dev, "Failed to allocate HC buffer\n"); 194 194 ret = -ENOMEM; 195 - goto fail_disable; 195 + goto err_put_client; 196 196 } 197 197 198 198 hcd->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 199 199 if (IS_ERR(hcd->regs)) { 200 200 ret = PTR_ERR(hcd->regs); 201 - goto fail_resource; 201 + goto err_put_hcd; 202 202 } 203 203 hcd->rsrc_start = res->start; 204 204 hcd->rsrc_len = resource_size(res); ··· 206 206 irq = platform_get_irq(pdev, 0); 207 207 if (irq < 0) { 208 208 ret = -ENXIO; 209 - goto fail_resource; 209 + goto err_put_hcd; 210 210 } 211 211 212 212 ohci_nxp_start_hc(); ··· 220 220 } 221 221 222 222 ohci_nxp_stop_hc(); 223 - fail_resource: 223 + err_put_hcd: 224 224 usb_put_hcd(hcd); 225 - fail_disable: 225 + err_put_client: 226 + put_device(&isp1301_i2c_client->dev); 226 227 isp1301_i2c_client = NULL; 227 228 return ret; 228 229 } ··· 235 234 usb_remove_hcd(hcd); 236 235 ohci_nxp_stop_hc(); 237 236 usb_put_hcd(hcd); 237 + put_device(&isp1301_i2c_client->dev); 238 238 isp1301_i2c_client = NULL; 239 239 } 240 240
+1 -1
drivers/usb/host/xhci-dbgtty.c
··· 554 554 * Hang up the TTY. This wakes up any blocked 555 555 * writers and causes subsequent writes to fail. 556 556 */ 557 - tty_vhangup(port->port.tty); 557 + tty_port_tty_vhangup(&port->port); 558 558 559 559 tty_unregister_device(dbc_tty_driver, port->minor); 560 560 xhci_dbc_tty_exit_port(port);
+1
drivers/usb/phy/phy-fsl-usb.c
··· 988 988 { 989 989 struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev); 990 990 991 + disable_delayed_work_sync(&fsl_otg_dev->otg_event); 991 992 usb_remove_phy(&fsl_otg_dev->phy); 992 993 free_irq(fsl_otg_dev->irq, fsl_otg_dev); 993 994
+6 -1
drivers/usb/phy/phy-isp1301.c
··· 149 149 return client; 150 150 151 151 /* non-DT: only one ISP1301 chip supported */ 152 - return isp1301_i2c_client; 152 + if (isp1301_i2c_client) { 153 + get_device(&isp1301_i2c_client->dev); 154 + return isp1301_i2c_client; 155 + } 156 + 157 + return NULL; 153 158 } 154 159 EXPORT_SYMBOL_GPL(isp1301_get_client); 155 160
+2
drivers/usb/renesas_usbhs/pipe.c
··· 713 713 /* make sure pipe is not busy */ 714 714 ret = usbhsp_pipe_barrier(pipe); 715 715 if (ret < 0) { 716 + usbhsp_put_pipe(pipe); 716 717 dev_err(dev, "pipe setup failed %d\n", usbhs_pipe_number(pipe)); 717 718 return NULL; 718 719 } 719 720 720 721 if (usbhsp_setup_pipecfg(pipe, is_host, dir_in, &pipecfg)) { 722 + usbhsp_put_pipe(pipe); 721 723 dev_err(dev, "can't setup pipe\n"); 722 724 return NULL; 723 725 }
+1 -1
drivers/usb/storage/unusual_uas.h
··· 98 98 US_FL_NO_ATA_1X), 99 99 100 100 /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */ 101 - UNUSUAL_DEV(0x13fd, 0x3940, 0x0309, 0x0309, 101 + UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x0309, 102 102 "Initio Corporation", 103 103 "INIC-3069", 104 104 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+6 -2
drivers/usb/typec/altmodes/displayport.c
··· 766 766 if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) & 767 767 DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) && 768 768 !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) & 769 - DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo))) 769 + DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo))) { 770 + typec_altmode_put_plug(plug); 770 771 return -ENODEV; 772 + } 771 773 772 774 dp = devm_kzalloc(&alt->dev, sizeof(*dp), GFP_KERNEL); 773 - if (!dp) 775 + if (!dp) { 776 + typec_altmode_put_plug(plug); 774 777 return -ENOMEM; 778 + } 775 779 776 780 INIT_WORK(&dp->work, dp_altmode_work); 777 781 mutex_init(&dp->lock);
+1
drivers/usb/typec/ucsi/Kconfig
··· 96 96 config UCSI_HUAWEI_GAOKUN 97 97 tristate "UCSI Interface Driver for Huawei Matebook E Go" 98 98 depends on EC_HUAWEI_GAOKUN 99 + depends on DRM || !DRM 99 100 select DRM_AUX_HPD_BRIDGE if DRM_BRIDGE && OF 100 101 help 101 102 This driver enables UCSI support on the Huawei Matebook E Go tablet,
+3 -2
drivers/usb/typec/ucsi/cros_ec_ucsi.c
··· 105 105 return 0; 106 106 } 107 107 108 - static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd, u32 *cci) 108 + static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd, u32 *cci, 109 + void *data, size_t size) 109 110 { 110 111 struct cros_ucsi_data *udata = ucsi_get_drvdata(ucsi); 111 112 int ret; 112 113 113 - ret = ucsi_sync_control_common(ucsi, cmd, cci); 114 + ret = ucsi_sync_control_common(ucsi, cmd, cci, data, size); 114 115 switch (ret) { 115 116 case -EBUSY: 116 117 /* EC may return -EBUSY if CCI.busy is set.
+4 -32
drivers/usb/typec/ucsi/debugfs.c
··· 37 37 case UCSI_SET_USB: 38 38 case UCSI_SET_POWER_LEVEL: 39 39 case UCSI_READ_POWER_LEVEL: 40 - case UCSI_SET_PDOS: 41 - ucsi->message_in_size = 0; 42 - ret = ucsi_send_command(ucsi, val); 40 + ret = ucsi_send_command(ucsi, val, NULL, 0); 43 41 break; 44 42 case UCSI_GET_CAPABILITY: 45 43 case UCSI_GET_CONNECTOR_CAPABILITY: ··· 52 54 case UCSI_GET_ATTENTION_VDO: 53 55 case UCSI_GET_CAM_CS: 54 56 case UCSI_GET_LPM_PPM_INFO: 55 - ucsi->message_in_size = sizeof(ucsi->debugfs->response); 56 - ret = ucsi_send_command(ucsi, val); 57 - memcpy(&ucsi->debugfs->response, ucsi->message_in, sizeof(ucsi->debugfs->response)); 57 + ret = ucsi_send_command(ucsi, val, 58 + &ucsi->debugfs->response, 59 + sizeof(ucsi->debugfs->response)); 58 60 break; 59 61 default: 60 62 ret = -EOPNOTSUPP; ··· 109 111 } 110 112 DEFINE_SHOW_ATTRIBUTE(ucsi_vbus_volt); 111 113 112 - static ssize_t ucsi_message_out_write(struct file *file, 113 - const char __user *data, size_t count, loff_t *ppos) 114 - { 115 - struct ucsi *ucsi = file->private_data; 116 - int ret; 117 - 118 - char *buf __free(kfree) = memdup_user_nul(data, count); 119 - if (IS_ERR(buf)) 120 - return PTR_ERR(buf); 121 - 122 - ucsi->message_out_size = min(count / 2, UCSI_MAX_MESSAGE_OUT_LENGTH); 123 - ret = hex2bin(ucsi->message_out, buf, ucsi->message_out_size); 124 - if (ret) 125 - return ret; 126 - 127 - return count; 128 - } 129 - 130 - static const struct file_operations ucsi_message_out_fops = { 131 - .open = simple_open, 132 - .write = ucsi_message_out_write, 133 - .llseek = generic_file_llseek, 134 - }; 135 - 136 114 void ucsi_debugfs_register(struct ucsi *ucsi) 137 115 { 138 116 ucsi->debugfs = kzalloc(sizeof(*ucsi->debugfs), GFP_KERNEL); ··· 121 147 debugfs_create_file("peak_current", 0400, ucsi->debugfs->dentry, ucsi, &ucsi_peak_curr_fops); 122 148 debugfs_create_file("avg_current", 0400, ucsi->debugfs->dentry, ucsi, &ucsi_avg_curr_fops); 123 149 debugfs_create_file("vbus_voltage", 0400, ucsi->debugfs->dentry, ucsi, &ucsi_vbus_volt_fops); 124 - debugfs_create_file("message_out", 0200, ucsi->debugfs->dentry, ucsi, 125 - &ucsi_message_out_fops); 126 150 } 127 151 128 152 void ucsi_debugfs_unregister(struct ucsi *ucsi)
+3 -8
drivers/usb/typec/ucsi/displayport.c
··· 67 67 } 68 68 69 69 command = UCSI_GET_CURRENT_CAM | UCSI_CONNECTOR_NUMBER(dp->con->num); 70 - ucsi->message_in_size = sizeof(cur); 71 - ret = ucsi_send_command(ucsi, command); 70 + ret = ucsi_send_command(ucsi, command, &cur, sizeof(cur)); 72 71 if (ret < 0) { 73 72 if (ucsi->version > 0x0100) 74 73 goto err_unlock; 75 74 cur = 0xff; 76 - } else { 77 - memcpy(&cur, ucsi->message_in, ucsi->message_in_size); 78 75 } 79 76 80 77 if (cur != 0xff) { ··· 126 129 } 127 130 128 131 command = UCSI_CMD_SET_NEW_CAM(dp->con->num, 0, dp->offset, 0); 129 - dp->con->ucsi->message_in_size = 0; 130 - ret = ucsi_send_command(dp->con->ucsi, command); 132 + ret = ucsi_send_command(dp->con->ucsi, command, NULL, 0); 131 133 if (ret < 0) 132 134 goto out_unlock; 133 135 ··· 193 197 194 198 command = UCSI_CMD_SET_NEW_CAM(dp->con->num, 1, dp->offset, pins); 195 199 196 - dp->con->ucsi->message_in_size = 0; 197 - return ucsi_send_command(dp->con->ucsi, command); 200 + return ucsi_send_command(dp->con->ucsi, command, NULL, 0); 198 201 } 199 202 200 203 static int ucsi_displayport_vdm(struct typec_altmode *alt,
+36 -82
drivers/usb/typec/ucsi/ucsi.c
··· 55 55 } 56 56 EXPORT_SYMBOL_GPL(ucsi_notify_common); 57 57 58 - int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci) 58 + int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci, 59 + void *data, size_t size) 59 60 { 60 61 bool ack = UCSI_COMMAND(command) == UCSI_ACK_CC_CI; 61 62 int ret; ··· 67 66 set_bit(COMMAND_PENDING, &ucsi->flags); 68 67 69 68 reinit_completion(&ucsi->complete); 70 - 71 - if (ucsi->message_out_size > 0) { 72 - if (!ucsi->ops->write_message_out) { 73 - ucsi->message_out_size = 0; 74 - ret = -EOPNOTSUPP; 75 - goto out_clear_bit; 76 - } 77 - 78 - ret = ucsi->ops->write_message_out(ucsi, ucsi->message_out, 79 - ucsi->message_out_size); 80 - ucsi->message_out_size = 0; 81 - if (ret) 82 - goto out_clear_bit; 83 - } 84 69 85 70 ret = ucsi->ops->async_control(ucsi, command); 86 71 if (ret) ··· 84 97 if (!ret && cci) 85 98 ret = ucsi->ops->read_cci(ucsi, cci); 86 99 87 - if (!ret && ucsi->message_in_size > 0 && 100 + if (!ret && data && 88 101 (*cci & UCSI_CCI_COMMAND_COMPLETE)) 89 - ret = ucsi->ops->read_message_in(ucsi, ucsi->message_in, 90 - ucsi->message_in_size); 102 + ret = ucsi->ops->read_message_in(ucsi, data, size); 91 103 92 104 return ret; 93 105 } ··· 103 117 ctrl |= UCSI_ACK_CONNECTOR_CHANGE; 104 118 } 105 119 106 - ucsi->message_in_size = 0; 107 - return ucsi->ops->sync_control(ucsi, ctrl, NULL); 120 + return ucsi->ops->sync_control(ucsi, ctrl, NULL, NULL, 0); 108 121 } 109 122 110 - static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci, bool conn_ack) 123 + static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci, 124 + void *data, size_t size, bool conn_ack) 111 125 { 112 126 int ret, err; 113 127 114 128 *cci = 0; 115 129 116 - if (ucsi->message_in_size > UCSI_MAX_DATA_LENGTH(ucsi)) 130 + if (size > UCSI_MAX_DATA_LENGTH(ucsi)) 117 131 return -EINVAL; 118 132 119 - ret = ucsi->ops->sync_control(ucsi, command, cci); 133 + ret = ucsi->ops->sync_control(ucsi, command, cci, data, size); 120 134 121 - if (*cci & UCSI_CCI_BUSY) { 122 - ucsi->message_in_size = 0; 123 - return ucsi_run_command(ucsi, UCSI_CANCEL, cci, false) ?: -EBUSY; 124 - } 135 + if (*cci & UCSI_CCI_BUSY) 136 + return ucsi_run_command(ucsi, UCSI_CANCEL, cci, NULL, 0, false) ?: -EBUSY; 125 137 if (ret) 126 138 return ret; 127 139 ··· 151 167 int ret; 152 168 153 169 command = UCSI_GET_ERROR_STATUS | UCSI_CONNECTOR_NUMBER(connector_num); 154 - ucsi->message_in_size = sizeof(error); 155 - ret = ucsi_run_command(ucsi, command, &cci, false); 170 + ret = ucsi_run_command(ucsi, command, &cci, &error, sizeof(error), false); 156 171 if (ret < 0) 157 172 return ret; 158 - 159 - memcpy(&error, ucsi->message_in, sizeof(error)); 160 173 161 174 switch (error) { 162 175 case UCSI_ERROR_INCOMPATIBLE_PARTNER: ··· 200 219 return -EIO; 201 220 } 202 221 203 - static int ucsi_send_command_common(struct ucsi *ucsi, u64 cmd, bool conn_ack) 222 + static int ucsi_send_command_common(struct ucsi *ucsi, u64 cmd, 223 + void *data, size_t size, bool conn_ack) 204 224 { 205 225 u8 connector_num; 206 226 u32 cci; ··· 229 247 230 248 mutex_lock(&ucsi->ppm_lock); 231 249 232 - ret = ucsi_run_command(ucsi, cmd, &cci, conn_ack); 250 + ret = ucsi_run_command(ucsi, cmd, &cci, data, size, conn_ack); 233 251 234 252 if (cci & UCSI_CCI_ERROR) 235 253 ret = ucsi_read_error(ucsi, connector_num); ··· 238 256 return ret; 239 257 } 240 258 241 - int ucsi_send_command(struct ucsi *ucsi, u64 command) 259 + int ucsi_send_command(struct ucsi *ucsi, u64 command, 260 + void *data, size_t size) 242 261 { 243 - return ucsi_send_command_common(ucsi, command, false); 262 + return ucsi_send_command_common(ucsi, command, data, size, false); 244 263 } 245 264 EXPORT_SYMBOL_GPL(ucsi_send_command); 246 265 ··· 319 336 int i; 320 337 321 338 command = UCSI_GET_CURRENT_CAM | UCSI_CONNECTOR_NUMBER(con->num); 322 - con->ucsi->message_in_size = sizeof(cur); 323 - ret = ucsi_send_command(con->ucsi, command); 339 + ret = ucsi_send_command(con->ucsi, command, &cur, sizeof(cur)); 324 340 if (ret < 0) { 325 341 if (con->ucsi->version > 0x0100) { 326 342 dev_err(con->ucsi->dev, ··· 327 345 return; 328 346 } 329 347 cur = 0xff; 330 - } else { 331 - memcpy(&cur, con->ucsi->message_in, sizeof(cur)); 332 348 } 333 349 334 350 if (cur < UCSI_MAX_ALTMODES) ··· 510 530 command |= UCSI_GET_ALTMODE_RECIPIENT(recipient); 511 531 command |= UCSI_GET_ALTMODE_CONNECTOR_NUMBER(con->num); 512 532 command |= UCSI_GET_ALTMODE_OFFSET(i); 513 - ucsi->message_in_size = sizeof(alt); 514 - len = ucsi_send_command(con->ucsi, command); 533 + len = ucsi_send_command(con->ucsi, command, &alt, sizeof(alt)); 515 534 /* 516 535 * We are collecting all altmodes first and then registering. 517 536 * Some type-C device will return zero length data beyond last ··· 518 539 */ 519 540 if (len < 0) 520 541 return len; 521 - 522 - memcpy(&alt, ucsi->message_in, sizeof(alt)); 523 542 524 543 /* We got all altmodes, now break out and register them */ 525 544 if (!len || !alt.svid) ··· 586 609 command |= UCSI_GET_ALTMODE_RECIPIENT(recipient); 587 610 command |= UCSI_GET_ALTMODE_CONNECTOR_NUMBER(con->num); 588 611 command |= UCSI_GET_ALTMODE_OFFSET(i); 589 - con->ucsi->message_in_size = sizeof(alt); 590 - len = ucsi_send_command(con->ucsi, command); 612 + len = ucsi_send_command(con->ucsi, command, alt, sizeof(alt)); 591 613 if (len == -EBUSY) 592 614 continue; 593 615 if (len <= 0) 594 616 return len; 595 - 596 - memcpy(&alt, con->ucsi->message_in, sizeof(alt)); 597 617 598 618 /* 599 619 * This code is requesting one alt mode at a time, but some PPMs ··· 659 685 UCSI_MAX_DATA_LENGTH(con->ucsi)); 660 686 int ret; 661 687 662 - con->ucsi->message_in_size = size; 663 - ret = ucsi_send_command_common(con->ucsi, command, conn_ack); 664 - memcpy(&con->status, con->ucsi->message_in, size); 688 + ret = ucsi_send_command_common(con->ucsi, command, &con->status, size, conn_ack); 665 689 666 690 return ret < 0 ? ret : 0; 667 691 } ··· 682 710 command |= UCSI_GET_PDOS_PDO_OFFSET(offset); 683 711 command |= UCSI_GET_PDOS_NUM_PDOS(num_pdos - 1); 684 712 command |= is_source(role) ? UCSI_GET_PDOS_SRC_PDOS : 0; 685 - ucsi->message_in_size = num_pdos * sizeof(u32); 686 - ret = ucsi_send_command(ucsi, command); 687 - memcpy(pdos + offset, ucsi->message_in, num_pdos * sizeof(u32)); 713 + ret = ucsi_send_command(ucsi, command, pdos + offset, 714 + num_pdos * sizeof(u32)); 688 715 if (ret < 0 && ret != -ETIMEDOUT) 689 716 dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret); 690 717 ··· 770 799 command |= UCSI_GET_PD_MESSAGE_BYTES(len); 771 800 command |= UCSI_GET_PD_MESSAGE_TYPE(type); 772 801 773 - con->ucsi->message_in_size = len; 774 - ret = ucsi_send_command(con->ucsi, command); 775 - memcpy(data + offset, con->ucsi->message_in, len); 802 + ret = ucsi_send_command(con->ucsi, command, data + offset, len); 776 803 if (ret < 0) 777 804 return ret; 778 805 } ··· 935 966 int ret; 936 967 937 968 command = UCSI_GET_CABLE_PROPERTY | UCSI_CONNECTOR_NUMBER(con->num); 938 - con->ucsi->message_in_size = sizeof(cable_prop); 939 - ret = ucsi_send_command(con->ucsi, command); 940 - memcpy(&cable_prop, con->ucsi->message_in, sizeof(cable_prop)); 969 + ret = ucsi_send_command(con->ucsi, command, &cable_prop, sizeof(cable_prop)); 941 970 if (ret < 0) { 942 971 dev_err(con->ucsi->dev, "GET_CABLE_PROPERTY failed (%d)\n", ret); 943 972 return ret; ··· 996 1029 return 0; 997 1030 998 1031 command = UCSI_GET_CONNECTOR_CAPABILITY | UCSI_CONNECTOR_NUMBER(con->num); 999 - con->ucsi->message_in_size = sizeof(con->cap); 1000 - ret = ucsi_send_command(con->ucsi, command); 1001 - memcpy(&con->cap, con->ucsi->message_in, sizeof(con->cap)); 1032 + ret = ucsi_send_command(con->ucsi, command, &con->cap, sizeof(con->cap)); 1002 1033 if (ret < 0) { 1003 1034 dev_err(con->ucsi->dev, "GET_CONNECTOR_CAPABILITY failed (%d)\n", ret); 1004 1035 return ret; ··· 1380 1415 else if (con->ucsi->version >= UCSI_VERSION_2_0) 1381 1416 command |= hard ? 0 : UCSI_CONNECTOR_RESET_DATA_VER_2_0; 1382 1417 1383 - con->ucsi->message_in_size = 0; 1384 - return ucsi_send_command(con->ucsi, command); 1418 + return ucsi_send_command(con->ucsi, command, NULL, 0); 1385 1419 } 1386 1420 1387 1421 static int ucsi_reset_ppm(struct ucsi *ucsi) ··· 1461 1497 { 1462 1498 int ret; 1463 1499 1464 - con->ucsi->message_in_size = 0; 1465 - ret = ucsi_send_command(con->ucsi, command); 1500 + ret = ucsi_send_command(con->ucsi, command, NULL, 0); 1466 1501 if (ret == -ETIMEDOUT) { 1467 1502 u64 c; 1468 1503 ··· 1469 1506 ucsi_reset_ppm(con->ucsi); 1470 1507 1471 1508 c = UCSI_SET_NOTIFICATION_ENABLE | con->ucsi->ntfy; 1472 - con->ucsi->message_in_size = 0; 1473 - ucsi_send_command(con->ucsi, c); 1509 + ucsi_send_command(con->ucsi, c, NULL, 0); 1474 1510 1475 1511 ucsi_reset_connector(con, true); 1476 1512 } ··· 1622 1660 /* Get connector capability */ 1623 1661 command = UCSI_GET_CONNECTOR_CAPABILITY; 1624 1662 command |= UCSI_CONNECTOR_NUMBER(con->num); 1625 - ucsi->message_in_size = sizeof(con->cap); 1626 - ret = ucsi_send_command(ucsi, command); 1663 + ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap)); 1627 1664 if (ret < 0) 1628 1665 goto out_unlock; 1629 - 1630 - memcpy(&con->cap, ucsi->message_in, sizeof(con->cap)); 1631 1666 1632 1667 if (UCSI_CONCAP(con, OPMODE_DRP)) 1633 1668 cap->data = TYPEC_PORT_DRD; ··· 1822 1863 /* Enable basic notifications */ 1823 1864 ntfy = UCSI_ENABLE_NTFY_CMD_COMPLETE | UCSI_ENABLE_NTFY_ERROR; 1824 1865 command = UCSI_SET_NOTIFICATION_ENABLE | ntfy; 1825 - ucsi->message_in_size = 0; 1826 - ret = ucsi_send_command(ucsi, command); 1866 + ret = ucsi_send_command(ucsi, command, NULL, 0); 1827 1867 if (ret < 0) 1828 1868 goto err_reset; 1829 1869 1830 1870 /* Get PPM capabilities */ 1831 1871 command = UCSI_GET_CAPABILITY; 1832 - ucsi->message_in_size = BITS_TO_BYTES(UCSI_GET_CAPABILITY_SIZE); 1833 - ret = ucsi_send_command(ucsi, command); 1872 + ret = ucsi_send_command(ucsi, command, &ucsi->cap, 1873 + BITS_TO_BYTES(UCSI_GET_CAPABILITY_SIZE)); 1834 1874 if (ret < 0) 1835 1875 goto err_reset; 1836 - 1837 - memcpy(&ucsi->cap, ucsi->message_in, BITS_TO_BYTES(UCSI_GET_CAPABILITY_SIZE)); 1838 1876 1839 1877 if (!ucsi->cap.num_connectors) { 1840 1878 ret = -ENODEV; ··· 1862 1906 /* Enable all supported notifications */ 1863 1907 ntfy = ucsi_get_supported_notifications(ucsi); 1864 1908 command = UCSI_SET_NOTIFICATION_ENABLE | ntfy; 1865 - ucsi->message_in_size = 0; 1866 - ret = ucsi_send_command(ucsi, command); 1909 + ret = ucsi_send_command(ucsi, command, NULL, 0); 1867 1910 if (ret < 0) 1868 1911 goto err_unregister; 1869 1912 ··· 1913 1958 1914 1959 /* Restore UCSI notification enable mask after system resume */ 1915 1960 command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy; 1916 - ucsi->message_in_size = 0; 1917 - ret = ucsi_send_command(ucsi, command); 1961 + ret = ucsi_send_command(ucsi, command, NULL, 0); 1918 1962 if (ret < 0) { 1919 1963 dev_err(ucsi->dev, "failed to re-enable notifications (%d)\n", ret); 1920 1964 return;
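The ucsi.c rework above threads a caller-supplied buffer and length through ucsi_send_command() instead of staging replies in the shared message_in array sized by a side channel, which removes an intermediate copy and the per-command size bookkeeping. A small sketch of the shape of that API change; the device struct, send_command() helper and reply field below are illustrative, not the UCSI core's actual types:

```c
#include <stdio.h>
#include <string.h>

struct device {
	char reply[16];
};

/* New-style: the transport writes straight into the caller's buffer,
 * so there is no shared staging area to size, fill and copy out of. */
static int send_command(struct device *dev, unsigned int cmd,
			void *data, size_t size)
{
	snprintf(dev->reply, sizeof(dev->reply), "cap:%u", cmd);
	if (data)
		memcpy(data, dev->reply,
		       size < sizeof(dev->reply) ? size : sizeof(dev->reply));
	return 0;
}

int main(void)
{
	struct device dev;
	char cap[8];

	send_command(&dev, 1, NULL, 0);		/* commands with no reply pass NULL/0 */
	send_command(&dev, 7, cap, sizeof(cap));
	printf("%s\n", cap);			/* cap:7 */
	return 0;
}
```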
+6 -16
drivers/usb/typec/ucsi/ucsi.h
··· 29 29 #define UCSI_MESSAGE_OUT 32 30 30 #define UCSIv2_MESSAGE_OUT 272 31 31 32 - /* Define maximum lengths for message buffers */ 33 - #define UCSI_MAX_MESSAGE_IN_LENGTH 256 34 - #define UCSI_MAX_MESSAGE_OUT_LENGTH 256 35 - 36 32 /* UCSI versions */ 37 33 #define UCSI_VERSION_1_0 0x0100 38 34 #define UCSI_VERSION_1_1 0x0110 ··· 65 69 * @read_cci: Read CCI register 66 70 * @poll_cci: Read CCI register while polling with notifications disabled 67 71 * @read_message_in: Read message data from UCSI 68 - * @write_message_out: Write message data to UCSI 69 72 * @sync_control: Blocking control operation 70 73 * @async_control: Non-blocking control operation 71 74 * @update_altmodes: Squashes duplicate DP altmodes ··· 80 85 int (*read_cci)(struct ucsi *ucsi, u32 *cci); 81 86 int (*poll_cci)(struct ucsi *ucsi, u32 *cci); 82 87 int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len); 83 - int (*write_message_out)(struct ucsi *ucsi, void *data, size_t data_len); 84 - int (*sync_control)(struct ucsi *ucsi, u64 command, u32 *cci); 88 + int (*sync_control)(struct ucsi *ucsi, u64 command, u32 *cci, 89 + void *data, size_t size); 85 90 int (*async_control)(struct ucsi *ucsi, u64 command); 86 91 bool (*update_altmodes)(struct ucsi *ucsi, u8 recipient, 87 92 struct ucsi_altmode *orig, ··· 132 137 #define UCSI_GET_PD_MESSAGE 0x15 133 138 #define UCSI_GET_CAM_CS 0x18 134 139 #define UCSI_SET_SINK_PATH 0x1c 135 - #define UCSI_SET_PDOS 0x1d 136 140 #define UCSI_READ_POWER_LEVEL 0x1e 137 141 #define UCSI_SET_USB 0x21 138 142 #define UCSI_GET_LPM_PPM_INFO 0x22 ··· 493 499 unsigned long quirks; 494 500 #define UCSI_NO_PARTNER_PDOS BIT(0) /* Don't read partner's PDOs */ 495 501 #define UCSI_DELAY_DEVICE_PDOS BIT(1) /* Reading PDOs fails until the parter is in PD mode */ 496 - 497 - /* Fixed-size buffers for incoming and outgoing messages */ 498 - u8 message_in[UCSI_MAX_MESSAGE_IN_LENGTH]; 499 - size_t message_in_size; 500 - u8 message_out[UCSI_MAX_MESSAGE_OUT_LENGTH]; 501 - size_t message_out_size; 502 502 }; 503 503 504 504 #define UCSI_MAX_DATA_LENGTH(u) (((u)->version < UCSI_VERSION_2_0) ? 0x10 : 0xff) ··· 555 567 struct usb_pd_identity cable_identity; 556 568 }; 557 569 558 - int ucsi_send_command(struct ucsi *ucsi, u64 command); 570 + int ucsi_send_command(struct ucsi *ucsi, u64 command, 571 + void *retval, size_t size); 559 572 560 573 void ucsi_altmode_update_active(struct ucsi_connector *con); 561 574 int ucsi_resume(struct ucsi *ucsi); 562 575 563 576 void ucsi_notify_common(struct ucsi *ucsi, u32 cci); 564 - int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci); 577 + int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci, 578 + void *data, size_t size); 565 579 566 580 #if IS_ENABLED(CONFIG_POWER_SUPPLY) 567 581 int ucsi_register_port_psy(struct ucsi_connector *con);
+5 -20
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 86 86 return 0; 87 87 } 88 88 89 - static int ucsi_acpi_write_message_out(struct ucsi *ucsi, void *data, size_t data_len) 90 - { 91 - struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 92 - 93 - if (!data || !data_len) 94 - return -EINVAL; 95 - 96 - if (ucsi->version <= UCSI_VERSION_1_2) 97 - memcpy(ua->base + UCSI_MESSAGE_OUT, data, data_len); 98 - else 99 - memcpy(ua->base + UCSIv2_MESSAGE_OUT, data, data_len); 100 - 101 - return 0; 102 - } 103 - 104 89 static int ucsi_acpi_async_control(struct ucsi *ucsi, u64 command) 105 90 { 106 91 struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); ··· 101 116 .read_cci = ucsi_acpi_read_cci, 102 117 .poll_cci = ucsi_acpi_poll_cci, 103 118 .read_message_in = ucsi_acpi_read_message_in, 104 - .write_message_out = ucsi_acpi_write_message_out, 105 119 .sync_control = ucsi_sync_control_common, 106 120 .async_control = ucsi_acpi_async_control 107 121 }; 108 122 109 - static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command, u32 *cci) 123 + static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command, u32 *cci, 124 + void *val, size_t len) 110 125 { 111 126 u16 bogus_change = UCSI_CONSTAT_POWER_LEVEL_CHANGE | 112 127 UCSI_CONSTAT_PDOS_CHANGE; 113 128 struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 114 129 int ret; 115 130 116 - ret = ucsi_sync_control_common(ucsi, command, cci); 131 + ret = ucsi_sync_control_common(ucsi, command, cci, val, len); 117 132 if (ret < 0) 118 133 return ret; 119 134 ··· 125 140 if (UCSI_COMMAND(ua->cmd) == UCSI_GET_CONNECTOR_STATUS && 126 141 ua->check_bogus_event) { 127 142 /* Clear the bogus change */ 128 - if (*(u16 *)ucsi->message_in == bogus_change) 129 - *(u16 *)ucsi->message_in = 0; 143 + if (*(u16 *)val == bogus_change) 144 + *(u16 *)val = 0; 130 145 131 146 ua->check_bogus_event = false; 132 147 }
+6 -5
drivers/usb/typec/ucsi/ucsi_ccg.c
··· 606 606 return ccg_write(uc, reg, (u8 *)&command, sizeof(command)); 607 607 } 608 608 609 - static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command, u32 *cci) 609 + static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command, u32 *cci, 610 + void *data, size_t size) 610 611 { 611 612 struct ucsi_ccg *uc = ucsi_get_drvdata(ucsi); 612 613 struct ucsi_connector *con; ··· 629 628 ucsi_ccg_update_set_new_cam_cmd(uc, con, &command); 630 629 } 631 630 632 - ret = ucsi_sync_control_common(ucsi, command, cci); 631 + ret = ucsi_sync_control_common(ucsi, command, cci, data, size); 633 632 634 633 switch (UCSI_COMMAND(command)) { 635 634 case UCSI_GET_CURRENT_CAM: 636 635 if (uc->has_multiple_dp) 637 - ucsi_ccg_update_get_current_cam_cmd(uc, (u8 *)ucsi->message_in); 636 + ucsi_ccg_update_get_current_cam_cmd(uc, (u8 *)data); 638 637 break; 639 638 case UCSI_GET_ALTERNATE_MODES: 640 639 if (UCSI_ALTMODE_RECIPIENT(command) == UCSI_RECIPIENT_SOP) { 641 - struct ucsi_altmode *alt = (struct ucsi_altmode *)ucsi->message_in; 640 + struct ucsi_altmode *alt = data; 642 641 643 642 if (alt[0].svid == USB_TYPEC_NVIDIA_VLINK_SID) 644 643 ucsi_ccg_nvidia_altmode(uc, alt, command); ··· 646 645 break; 647 646 case UCSI_GET_CAPABILITY: 648 647 if (uc->fw_build == CCG_FW_BUILD_NVIDIA_TEGRA) { 649 - struct ucsi_capability *cap = (struct ucsi_capability *)ucsi->message_in; 648 + struct ucsi_capability *cap = data; 650 649 651 650 cap->features &= ~UCSI_CAP_ALT_MODE_DETAILS; 652 651 }
+8 -7
drivers/usb/typec/ucsi/ucsi_yoga_c630.c
··· 88 88 89 89 static int yoga_c630_ucsi_sync_control(struct ucsi *ucsi, 90 90 u64 command, 91 - u32 *cci) 91 + u32 *cci, 92 + void *data, size_t size) 92 93 { 93 94 int ret; 94 95 ··· 107 106 }; 108 107 109 108 dev_dbg(ucsi->dev, "faking DP altmode for con1\n"); 110 - memset(ucsi->message_in, 0, ucsi->message_in_size); 111 - memcpy(ucsi->message_in, &alt, min(sizeof(alt), ucsi->message_in_size)); 109 + memset(data, 0, size); 110 + memcpy(data, &alt, min(sizeof(alt), size)); 112 111 *cci = UCSI_CCI_COMMAND_COMPLETE | UCSI_SET_CCI_LENGTH(sizeof(alt)); 113 112 return 0; 114 113 } ··· 121 120 if (UCSI_COMMAND(command) == UCSI_GET_ALTERNATE_MODES && 122 121 UCSI_GET_ALTMODE_GET_CONNECTOR_NUMBER(command) == 2) { 123 122 dev_dbg(ucsi->dev, "ignoring altmodes for con2\n"); 124 - memset(ucsi->message_in, 0, ucsi->message_in_size); 123 + memset(data, 0, size); 125 124 *cci = UCSI_CCI_COMMAND_COMPLETE; 126 125 return 0; 127 126 } 128 127 129 - ret = ucsi_sync_control_common(ucsi, command, cci); 128 + ret = ucsi_sync_control_common(ucsi, command, cci, data, size); 130 129 if (ret < 0) 131 130 return ret; 132 131 133 132 /* UCSI_GET_CURRENT_CAM is off-by-one on all ports */ 134 - if (UCSI_COMMAND(command) == UCSI_GET_CURRENT_CAM && ucsi->message_in_size > 0) 135 - ucsi->message_in[0]--; 133 + if (UCSI_COMMAND(command) == UCSI_GET_CURRENT_CAM && data) 134 + ((u8 *)data)[0]--; 136 135 137 136 return ret; 138 137 }
+2 -2
drivers/vfio/pci/nvgrace-gpu/main.c
··· 561 561 ret = vfio_pci_core_do_io_rw(&nvdev->core_device, false, 562 562 nvdev->resmem.ioaddr, 563 563 buf, offset, mem_count, 564 - 0, 0, false); 564 + 0, 0, false, VFIO_PCI_IO_WIDTH_8); 565 565 } 566 566 567 567 return ret; ··· 693 693 ret = vfio_pci_core_do_io_rw(&nvdev->core_device, false, 694 694 nvdev->resmem.ioaddr, 695 695 (char __user *)buf, pos, mem_count, 696 - 0, 0, true); 696 + 0, 0, true, VFIO_PCI_IO_WIDTH_8); 697 697 } 698 698 699 699 return ret;
+5 -2
drivers/vfio/pci/pds/dirty.c
··· 292 292 len = num_ranges * sizeof(*region_info); 293 293 294 294 node = interval_tree_iter_first(ranges, 0, ULONG_MAX); 295 - if (!node) 296 - return -EINVAL; 295 + if (!node) { 296 + err = -EINVAL; 297 + goto out_free_region_info; 298 + } 299 + 297 300 for (int i = 0; i < num_ranges; i++) { 298 301 struct pds_lm_dirty_region_info *ri = &region_info[i]; 299 302 u64 region_size = node->last - node->start + 1;
+18 -7
drivers/vfio/pci/vfio_pci_rdwr.c
··· 135 135 ssize_t vfio_pci_core_do_io_rw(struct vfio_pci_core_device *vdev, bool test_mem, 136 136 void __iomem *io, char __user *buf, 137 137 loff_t off, size_t count, size_t x_start, 138 - size_t x_end, bool iswrite) 138 + size_t x_end, bool iswrite, 139 + enum vfio_pci_io_width max_width) 139 140 { 140 141 ssize_t done = 0; 141 142 int ret; ··· 151 150 else 152 151 fillable = 0; 153 152 154 - if (fillable >= 8 && !(off % 8)) { 153 + if (fillable >= 8 && !(off % 8) && max_width >= 8) { 155 154 ret = vfio_pci_iordwr64(vdev, iswrite, test_mem, 156 155 io, buf, off, &filled); 157 156 if (ret) 158 157 return ret; 159 158 160 - } else 161 - if (fillable >= 4 && !(off % 4)) { 159 + } else if (fillable >= 4 && !(off % 4) && max_width >= 4) { 162 160 ret = vfio_pci_iordwr32(vdev, iswrite, test_mem, 163 161 io, buf, off, &filled); 164 162 if (ret) 165 163 return ret; 166 164 167 - } else if (fillable >= 2 && !(off % 2)) { 165 + } else if (fillable >= 2 && !(off % 2) && max_width >= 2) { 168 166 ret = vfio_pci_iordwr16(vdev, iswrite, test_mem, 169 167 io, buf, off, &filled); 170 168 if (ret) ··· 234 234 void __iomem *io; 235 235 struct resource *res = &vdev->pdev->resource[bar]; 236 236 ssize_t done; 237 + enum vfio_pci_io_width max_width = VFIO_PCI_IO_WIDTH_8; 237 238 238 239 if (pci_resource_start(pdev, bar)) 239 240 end = pci_resource_len(pdev, bar); ··· 263 262 if (!io) 264 263 return -ENOMEM; 265 264 x_end = end; 265 + 266 + /* 267 + * Certain devices (e.g. Intel X710) don't support qword 268 + * access to the ROM bar. Otherwise PCI AER errors might be 269 + * triggered. 270 + * 271 + * Disable qword access to the ROM bar universally, which 272 + * worked reliably for years before qword access is enabled. 273 + */ 274 + max_width = VFIO_PCI_IO_WIDTH_4; 266 275 } else { 267 276 int ret = vfio_pci_core_setup_barmap(vdev, bar); 268 277 if (ret) { ··· 289 278 } 290 279 291 280 done = vfio_pci_core_do_io_rw(vdev, res->flags & IORESOURCE_MEM, io, buf, pos, 292 - count, x_start, x_end, iswrite); 281 + count, x_start, x_end, iswrite, max_width); 293 282 294 283 if (done >= 0) 295 284 *ppos += done; ··· 363 352 * to the memory enable bit in the command register. 364 353 */ 365 354 done = vfio_pci_core_do_io_rw(vdev, false, iomem, buf, off, count, 366 - 0, 0, iswrite); 355 + 0, 0, iswrite, VFIO_PCI_IO_WIDTH_4); 367 356 368 357 vga_put(vdev->pdev, rsrc); 369 358
+4 -1
drivers/vfio/pci/xe/main.c
··· 250 250 struct xe_vfio_pci_migration_file *migf; 251 251 const struct file_operations *fops; 252 252 int flags; 253 + int ret; 253 254 254 255 migf = kzalloc(sizeof(*migf), GFP_KERNEL_ACCOUNT); 255 256 if (!migf) ··· 260 259 flags = type == XE_VFIO_FILE_SAVE ? O_RDONLY : O_WRONLY; 261 260 migf->filp = anon_inode_getfile("xe_vfio_mig", fops, migf, flags); 262 261 if (IS_ERR(migf->filp)) { 262 + ret = PTR_ERR(migf->filp); 263 263 kfree(migf); 264 - return ERR_CAST(migf->filp); 264 + return ERR_PTR(ret); 265 265 } 266 266 267 267 mutex_init(&migf->lock); ··· 506 504 .open_device = xe_vfio_pci_open_device, 507 505 .close_device = xe_vfio_pci_close_device, 508 506 .ioctl = vfio_pci_core_ioctl, 507 + .get_region_info_caps = vfio_pci_ioctl_get_region_info, 509 508 .device_feature = vfio_pci_core_ioctl_feature, 510 509 .read = vfio_pci_core_read, 511 510 .write = vfio_pci_core_write,
+11 -4
drivers/vhost/vsock.c
··· 66 66 return VHOST_VSOCK_DEFAULT_HOST_CID; 67 67 } 68 68 69 - /* Callers that dereference the return value must hold vhost_vsock_mutex or the 70 - * RCU read lock. 69 + /* Callers must be in an RCU read section or hold the vhost_vsock_mutex. 70 + * The return value can only be dereferenced while within the section. 71 71 */ 72 72 static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) 73 73 { 74 74 struct vhost_vsock *vsock; 75 75 76 - hash_for_each_possible_rcu(vhost_vsock_hash, vsock, hash, guest_cid) { 76 + hash_for_each_possible_rcu(vhost_vsock_hash, vsock, hash, guest_cid, 77 + lockdep_is_held(&vhost_vsock_mutex)) { 77 78 u32 other_cid = vsock->guest_cid; 78 79 79 80 /* Skip instances that have no CID yet */ ··· 710 709 * executing. 711 710 */ 712 711 712 + rcu_read_lock(); 713 + 713 714 /* If the peer is still valid, no need to reset connection */ 714 - if (vhost_vsock_get(vsk->remote_addr.svm_cid)) 715 + if (vhost_vsock_get(vsk->remote_addr.svm_cid)) { 716 + rcu_read_unlock(); 715 717 return; 718 + } 719 + 720 + rcu_read_unlock(); 716 721 717 722 /* If the close timeout is pending, let it expire. This avoids races 718 723 * with the timeout callback.
+5 -2
fs/debugfs/inode.c
··· 841 841 rd.new_parent = rd.old_parent; 842 842 rd.flags = RENAME_NOREPLACE; 843 843 target = lookup_noperm_unlocked(&QSTR(new_name), rd.new_parent); 844 - if (IS_ERR(target)) 845 - return PTR_ERR(target); 844 + if (IS_ERR(target)) { 845 + error = PTR_ERR(target); 846 + goto out_free; 847 + } 846 848 847 849 error = start_renaming_two_dentries(&rd, dentry, target); 848 850 if (error) { ··· 864 862 out: 865 863 dput(rd.old_parent); 866 864 dput(target); 865 + out_free: 867 866 kfree_const(new_name); 868 867 return error; 869 868 }
+4 -4
fs/erofs/zdata.c
··· 1262 1262 return err; 1263 1263 } 1264 1264 1265 - static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err) 1265 + static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, bool eio) 1266 1266 { 1267 1267 struct erofs_sb_info *const sbi = EROFS_SB(be->sb); 1268 1268 struct z_erofs_pcluster *pcl = be->pcl; ··· 1270 1270 const struct z_erofs_decompressor *alg = 1271 1271 z_erofs_decomp[pcl->algorithmformat]; 1272 1272 bool try_free = true; 1273 - int i, j, jtop, err2; 1273 + int i, j, jtop, err2, err = eio ? -EIO : 0; 1274 1274 struct page *page; 1275 1275 bool overlapped; 1276 1276 const char *reason; ··· 1413 1413 .pcl = io->head, 1414 1414 }; 1415 1415 struct z_erofs_pcluster *next; 1416 - int err = io->eio ? -EIO : 0; 1416 + int err = 0; 1417 1417 1418 1418 for (; be.pcl != Z_EROFS_PCLUSTER_TAIL; be.pcl = next) { 1419 1419 DBG_BUGON(!be.pcl); 1420 1420 next = READ_ONCE(be.pcl->next); 1421 - err = z_erofs_decompress_pcluster(&be, err) ?: err; 1421 + err = z_erofs_decompress_pcluster(&be, io->eio) ?: err; 1422 1422 } 1423 1423 return err; 1424 1424 }
+4 -2
fs/kernfs/dir.c
··· 681 681 return kn; 682 682 683 683 err_out4: 684 - simple_xattrs_free(&kn->iattr->xattrs, NULL); 685 - kmem_cache_free(kernfs_iattrs_cache, kn->iattr); 684 + if (kn->iattr) { 685 + simple_xattrs_free(&kn->iattr->xattrs, NULL); 686 + kmem_cache_free(kernfs_iattrs_cache, kn->iattr); 687 + } 686 688 err_out3: 687 689 spin_lock(&root->kernfs_idr_lock); 688 690 idr_remove(&root->ino_idr, (u32)kernfs_ino(kn));
+1 -3
fs/lockd/svc4proc.c
··· 97 97 struct nlm_args *argp = rqstp->rq_argp; 98 98 struct nlm_host *host; 99 99 struct nlm_file *file; 100 - struct nlm_lockowner *test_owner; 101 100 __be32 rc = rpc_success; 102 101 103 102 dprintk("lockd: TEST4 called\n"); ··· 106 107 if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file))) 107 108 return resp->status == nlm_drop_reply ? rpc_drop_reply :rpc_success; 108 109 109 - test_owner = argp->lock.fl.c.flc_owner; 110 110 /* Now check for conflicting locks */ 111 111 resp->status = nlmsvc_testlock(rqstp, file, host, &argp->lock, 112 112 &resp->lock); ··· 114 116 else 115 117 dprintk("lockd: TEST4 status %d\n", ntohl(resp->status)); 116 118 117 - nlmsvc_put_lockowner(test_owner); 119 + nlmsvc_release_lockowner(&argp->lock); 118 120 nlmsvc_release_host(host); 119 121 nlm_release_file(file); 120 122 return rc;
+12 -9
fs/lockd/svclock.c
··· 633 633 } 634 634 635 635 mode = lock_to_openmode(&lock->fl); 636 - error = vfs_test_lock(file->f_file[mode], &lock->fl); 636 + locks_init_lock(&conflock->fl); 637 + /* vfs_test_lock only uses start, end, and owner, but tests flc_file */ 638 + conflock->fl.c.flc_file = lock->fl.c.flc_file; 639 + conflock->fl.fl_start = lock->fl.fl_start; 640 + conflock->fl.fl_end = lock->fl.fl_end; 641 + conflock->fl.c.flc_owner = lock->fl.c.flc_owner; 642 + error = vfs_test_lock(file->f_file[mode], &conflock->fl); 637 643 if (error) { 638 644 /* We can't currently deal with deferred test requests */ 639 645 if (error == FILE_LOCK_DEFERRED) ··· 649 643 goto out; 650 644 } 651 645 652 - if (lock->fl.c.flc_type == F_UNLCK) { 646 + if (conflock->fl.c.flc_type == F_UNLCK) { 653 647 ret = nlm_granted; 654 648 goto out; 655 649 } 656 650 657 651 dprintk("lockd: conflicting lock(ty=%d, %Ld-%Ld)\n", 658 - lock->fl.c.flc_type, (long long)lock->fl.fl_start, 659 - (long long)lock->fl.fl_end); 652 + conflock->fl.c.flc_type, (long long)conflock->fl.fl_start, 653 + (long long)conflock->fl.fl_end); 660 654 conflock->caller = "somehost"; /* FIXME */ 661 655 conflock->len = strlen(conflock->caller); 662 656 conflock->oh.len = 0; /* don't return OH info */ 663 - conflock->svid = lock->fl.c.flc_pid; 664 - conflock->fl.c.flc_type = lock->fl.c.flc_type; 665 - conflock->fl.fl_start = lock->fl.fl_start; 666 - conflock->fl.fl_end = lock->fl.fl_end; 667 - locks_release_private(&lock->fl); 657 + conflock->svid = conflock->fl.c.flc_pid; 658 + locks_release_private(&conflock->fl); 668 659 669 660 ret = nlm_lck_denied; 670 661 out:
+1 -4
fs/lockd/svcproc.c
··· 117 117 struct nlm_args *argp = rqstp->rq_argp; 118 118 struct nlm_host *host; 119 119 struct nlm_file *file; 120 - struct nlm_lockowner *test_owner; 121 120 __be32 rc = rpc_success; 122 121 123 122 dprintk("lockd: TEST called\n"); ··· 125 126 /* Obtain client and file */ 126 127 if ((resp->status = nlmsvc_retrieve_args(rqstp, argp, &host, &file))) 127 128 return resp->status == nlm_drop_reply ? rpc_drop_reply :rpc_success; 128 - 129 - test_owner = argp->lock.fl.c.flc_owner; 130 129 131 130 /* Now check for conflicting locks */ 132 131 resp->status = cast_status(nlmsvc_testlock(rqstp, file, host, ··· 135 138 dprintk("lockd: TEST status %d vers %d\n", 136 139 ntohl(resp->status), rqstp->rq_vers); 137 140 138 - nlmsvc_put_lockowner(test_owner); 141 + nlmsvc_release_lockowner(&argp->lock); 139 142 nlmsvc_release_host(host); 140 143 nlm_release_file(file); 141 144 return rc;
+10 -2
fs/locks.c
··· 2236 2236 /** 2237 2237 * vfs_test_lock - test file byte range lock 2238 2238 * @filp: The file to test lock for 2239 - * @fl: The lock to test; also used to hold result 2239 + * @fl: The byte-range in the file to test; also used to hold result 2240 2240 * 2241 + * On entry, @fl does not contain a lock, but identifies a range (fl_start, fl_end) 2242 + * in the file (c.flc_file), and an owner (c.flc_owner) for whom existing locks 2243 + * should be ignored. c.flc_type and c.flc_flags are ignored. 2244 + * Both fl_lmops and fl_ops in @fl must be NULL. 2241 2245 * Returns -ERRNO on failure. Indicates presence of conflicting lock by 2242 - * setting conf->fl_type to something other than F_UNLCK. 2246 + * setting fl->fl_type to something other than F_UNLCK. 2247 + * 2248 + * If vfs_test_lock() does find a lock and return it, the caller must 2249 + * use locks_free_lock() or locks_release_private() on the returned lock. 2243 2250 */ 2244 2251 int vfs_test_lock(struct file *filp, struct file_lock *fl) 2245 2252 { 2253 + WARN_ON_ONCE(fl->fl_ops || fl->fl_lmops); 2246 2254 WARN_ON_ONCE(filp != fl->c.flc_file); 2247 2255 if (filp->f_op->lock) 2248 2256 return filp->f_op->lock(filp, F_GETLK, fl);
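The tightened kernel-doc pins down a calling convention that the fs/lockd/svclock.c hunk above now follows. A caller-side sketch under that contract (illustrative only; filp, owner, start and end stand in for real values):

struct file_lock fl;
int error;

locks_init_lock(&fl);
fl.c.flc_file  = filp;	/* file being tested */
fl.c.flc_owner = owner;	/* existing locks of this owner are ignored */
fl.fl_start    = start;
fl.fl_end      = end;

error = vfs_test_lock(filp, &fl);
if (!error && fl.c.flc_type != F_UNLCK) {
	/* fl now describes the conflicting lock */
}
locks_release_private(&fl);	/* required once a lock may have been returned */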
+1 -1
fs/nfsd/export.c
··· 1024 1024 { 1025 1025 struct svc_export *exp; 1026 1026 struct path path; 1027 - struct inode *inode; 1027 + struct inode *inode __maybe_unused; 1028 1028 struct svc_fh fh; 1029 1029 int err; 1030 1030 struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+14 -6
fs/nfsd/nfs4state.c
··· 1218 1218 1219 1219 if (nf) 1220 1220 nfsd_file_put(nf); 1221 - if (rnf) 1221 + if (rnf) { 1222 + nfsd_file_put(rnf); 1222 1223 nfs4_file_put_access(fp, NFS4_SHARE_ACCESS_READ); 1224 + } 1223 1225 } 1224 1226 1225 1227 static void nfsd4_finalize_deleg_timestamps(struct nfs4_delegation *dp, struct file *f) 1226 1228 { 1227 - struct iattr ia = { .ia_valid = ATTR_ATIME | ATTR_CTIME | ATTR_MTIME }; 1229 + struct iattr ia = { .ia_valid = ATTR_ATIME | ATTR_CTIME | ATTR_MTIME | ATTR_DELEG }; 1228 1230 struct inode *inode = file_inode(f); 1229 1231 int ret; 1230 1232 ··· 3099 3097 return -ENXIO; 3100 3098 3101 3099 ret = seq_open(file, &states_seq_ops); 3102 - if (ret) 3100 + if (ret) { 3101 + drop_client(clp); 3103 3102 return ret; 3103 + } 3104 3104 s = file->private_data; 3105 3105 s->private = clp; 3106 3106 return 0; ··· 6235 6231 fp = stp->st_stid.sc_file; 6236 6232 spin_lock(&fp->fi_lock); 6237 6233 __nfs4_file_get_access(fp, NFS4_SHARE_ACCESS_READ); 6238 - fp = stp->st_stid.sc_file; 6239 - fp->fi_fds[O_RDONLY] = nf; 6240 - fp->fi_rdeleg_file = nf; 6234 + if (!fp->fi_fds[O_RDONLY]) { 6235 + fp->fi_fds[O_RDONLY] = nf; 6236 + nf = NULL; 6237 + } 6238 + fp->fi_rdeleg_file = nfsd_file_get(fp->fi_fds[O_RDONLY]); 6241 6239 spin_unlock(&fp->fi_lock); 6240 + if (nf) 6241 + nfsd_file_put(nf); 6242 6242 } 6243 6243 return true; 6244 6244 }
+5
fs/nfsd/nfs4xdr.c
··· 3375 3375 u32 supp[3]; 3376 3376 3377 3377 memcpy(supp, nfsd_suppattrs[resp->cstate.minorversion], sizeof(supp)); 3378 + if (!IS_POSIXACL(d_inode(args->dentry))) 3379 + supp[0] &= ~FATTR4_WORD0_ACL; 3380 + if (!args->contextsupport) 3381 + supp[2] &= ~FATTR4_WORD2_SECURITY_LABEL; 3382 + 3378 3383 supp[0] &= NFSD_SUPPATTR_EXCLCREAT_WORD0; 3379 3384 supp[1] &= NFSD_SUPPATTR_EXCLCREAT_WORD1; 3380 3385 supp[2] &= NFSD_SUPPATTR_EXCLCREAT_WORD2;
+7 -1
fs/nfsd/nfsd.h
··· 547 547 #define NFSD_SUPPATTR_EXCLCREAT_WORD1 \ 548 548 (NFSD_WRITEABLE_ATTRS_WORD1 & \ 549 549 ~(FATTR4_WORD1_TIME_ACCESS_SET | FATTR4_WORD1_TIME_MODIFY_SET)) 550 + /* 551 + * The FATTR4_WORD2_TIME_DELEG attributes are not to be allowed for 552 + * OPEN(create) with EXCLUSIVE4_1. It doesn't make sense to set a 553 + * delegated timestamp on a new file. 554 + */ 550 555 #define NFSD_SUPPATTR_EXCLCREAT_WORD2 \ 551 - NFSD_WRITEABLE_ATTRS_WORD2 556 + (NFSD_WRITEABLE_ATTRS_WORD2 & \ 557 + ~(FATTR4_WORD2_TIME_DELEG_ACCESS | FATTR4_WORD2_TIME_DELEG_MODIFY)) 552 558 553 559 extern int nfsd4_is_junction(struct dentry *dentry); 554 560 extern int register_cld_notifier(void);
+4 -1
fs/nfsd/nfssvc.c
··· 615 615 serv = svc_create_pooled(nfsd_programs, ARRAY_SIZE(nfsd_programs), 616 616 &nn->nfsd_svcstats, 617 617 nfsd_max_blksize, nfsd); 618 - if (serv == NULL) 618 + if (serv == NULL) { 619 + percpu_ref_exit(&nn->nfsd_net_ref); 619 620 return -ENOMEM; 621 + } 620 622 621 623 error = svc_bind(serv, net); 622 624 if (error < 0) { 623 625 svc_destroy(&serv); 626 + percpu_ref_exit(&nn->nfsd_net_ref); 624 627 return error; 625 628 } 626 629 spin_lock(&nfsd_notifier_lock);
+2 -1
fs/nfsd/vfs.h
··· 67 67 struct iattr *iap = attrs->na_iattr; 68 68 69 69 return (iap->ia_valid || (attrs->na_seclabel && 70 - attrs->na_seclabel->len)); 70 + attrs->na_seclabel->len) || 71 + attrs->na_pacl || attrs->na_dpacl); 71 72 } 72 73 73 74 __be32 nfserrno (int errno);
+2
fs/smb/client/fs_context.c
··· 1139 1139 rc = smb3_sync_session_ctx_passwords(cifs_sb, ses); 1140 1140 if (rc) { 1141 1141 mutex_unlock(&ses->session_mutex); 1142 + kfree_sensitive(new_password); 1143 + kfree_sensitive(new_password2); 1142 1144 return rc; 1143 1145 } 1144 1146
+3
fs/smb/client/ioctl.c
··· 588 588 break; 589 589 default: 590 590 cifs_dbg(FYI, "unsupported ioctl\n"); 591 + trace_smb3_unsupported_ioctl(xid, 592 + pSMBFile ? pSMBFile->fid.persistent_fid : 0, 593 + command); 591 594 break; 592 595 } 593 596 cifs_ioc_exit:
+6
fs/smb/client/smb2ops.c
··· 1905 1905 src_off_prev = src_off; 1906 1906 dst_off_prev = dst_off; 1907 1907 1908 + /* 1909 + * __counted_by_le(ChunkCount): set to allocated chunks before 1910 + * populating Chunks[] 1911 + */ 1912 + cc_req->ChunkCount = cpu_to_le32(chunk_count); 1913 + 1908 1914 chunks = 0; 1909 1915 copy_bytes = 0; 1910 1916 copy_bytes_left = umin(total_bytes_left, tcon->max_bytes_copy);
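The ordering matters because __counted_by_le() lets FORTIFY/UBSAN bounds-check flexible-array accesses against the current counter value, so the count must be valid before Chunks[] is indexed. The general pattern, shown with a made-up struct (demo_req, demo_chunk and fill_chunk() are hypothetical):

struct demo_req {
	__le32 count;
	struct demo_chunk chunks[] __counted_by_le(count);
};

struct demo_req *req;
int i;

req = kzalloc(struct_size(req, chunks, n), GFP_KERNEL);
if (!req)
	return -ENOMEM;
req->count = cpu_to_le32(n);		/* set the annotated counter first */
for (i = 0; i < n; i++)
	fill_chunk(&req->chunks[i]);	/* only then index the array */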
+1
fs/smb/client/trace.h
··· 1579 1579 TP_ARGS(xid, fid, command)) 1580 1580 1581 1581 DEFINE_SMB3_IOCTL_EVENT(ioctl); 1582 + DEFINE_SMB3_IOCTL_EVENT(unsupported_ioctl); 1582 1583 1583 1584 DECLARE_EVENT_CLASS(smb3_shutdown_class, 1584 1585 TP_PROTO(__u32 flags,
+2 -2
fs/smb/server/auth.c
··· 714 714 int ksmbd_gen_preauth_integrity_hash(struct ksmbd_conn *conn, char *buf, 715 715 __u8 *pi_hash) 716 716 { 717 - struct smb2_hdr *rcv_hdr = smb2_get_msg(buf); 717 + struct smb2_hdr *rcv_hdr = smb_get_msg(buf); 718 718 char *all_bytes_msg = (char *)&rcv_hdr->ProtocolId; 719 719 int msg_size = get_rfc1002_len(buf); 720 720 struct sha512_ctx sha_ctx; ··· 841 841 unsigned int nvec, int enc) 842 842 { 843 843 struct ksmbd_conn *conn = work->conn; 844 - struct smb2_transform_hdr *tr_hdr = smb2_get_msg(iov[0].iov_base); 844 + struct smb2_transform_hdr *tr_hdr = smb_get_msg(iov[0].iov_base); 845 845 unsigned int assoc_data_len = sizeof(struct smb2_transform_hdr) - 20; 846 846 int rc; 847 847 struct scatterlist *sg;
+6 -5
fs/smb/server/connection.c
··· 295 295 return true; 296 296 } 297 297 298 - #define SMB1_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb_hdr)) 299 - #define SMB2_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb2_hdr) + 4) 298 + /* "+2" for BCC field (ByteCount, 2 bytes) */ 299 + #define SMB1_MIN_SUPPORTED_PDU_SIZE (sizeof(struct smb_hdr) + 2) 300 + #define SMB2_MIN_SUPPORTED_PDU_SIZE (sizeof(struct smb2_pdu)) 300 301 301 302 /** 302 303 * ksmbd_conn_handler_loop() - session thread to listen on new smb requests ··· 364 363 if (pdu_size > MAX_STREAM_PROT_LEN) 365 364 break; 366 365 367 - if (pdu_size < SMB1_MIN_SUPPORTED_HEADER_SIZE) 366 + if (pdu_size < SMB1_MIN_SUPPORTED_PDU_SIZE) 368 367 break; 369 368 370 369 /* 4 for rfc1002 length field */ ··· 395 394 if (!ksmbd_smb_request(conn)) 396 395 break; 397 396 398 - if (((struct smb2_hdr *)smb2_get_msg(conn->request_buf))->ProtocolId == 397 + if (((struct smb2_hdr *)smb_get_msg(conn->request_buf))->ProtocolId == 399 398 SMB2_PROTO_NUMBER) { 400 - if (pdu_size < SMB2_MIN_SUPPORTED_HEADER_SIZE) 399 + if (pdu_size < SMB2_MIN_SUPPORTED_PDU_SIZE) 401 400 break; 402 401 } 403 402
+4 -4
fs/smb/server/oplock.c
··· 637 637 goto out; 638 638 } 639 639 640 - rsp_hdr = smb2_get_msg(work->response_buf); 640 + rsp_hdr = smb_get_msg(work->response_buf); 641 641 memset(rsp_hdr, 0, sizeof(struct smb2_hdr) + 2); 642 642 rsp_hdr->ProtocolId = SMB2_PROTO_NUMBER; 643 643 rsp_hdr->StructureSize = SMB2_HEADER_STRUCTURE_SIZE; ··· 651 651 rsp_hdr->SessionId = 0; 652 652 memset(rsp_hdr->Signature, 0, 16); 653 653 654 - rsp = smb2_get_msg(work->response_buf); 654 + rsp = smb_get_msg(work->response_buf); 655 655 656 656 rsp->StructureSize = cpu_to_le16(24); 657 657 if (!br_info->open_trunc && ··· 744 744 goto out; 745 745 } 746 746 747 - rsp_hdr = smb2_get_msg(work->response_buf); 747 + rsp_hdr = smb_get_msg(work->response_buf); 748 748 memset(rsp_hdr, 0, sizeof(struct smb2_hdr) + 2); 749 749 rsp_hdr->ProtocolId = SMB2_PROTO_NUMBER; 750 750 rsp_hdr->StructureSize = SMB2_HEADER_STRUCTURE_SIZE; ··· 758 758 rsp_hdr->SessionId = 0; 759 759 memset(rsp_hdr->Signature, 0, 16); 760 760 761 - rsp = smb2_get_msg(work->response_buf); 761 + rsp = smb_get_msg(work->response_buf); 762 762 rsp->StructureSize = cpu_to_le16(44); 763 763 rsp->Epoch = br_info->epoch; 764 764 rsp->Flags = 0;
+1 -1
fs/smb/server/server.c
··· 95 95 96 96 if (ksmbd_conn_exiting(work->conn) || 97 97 ksmbd_conn_need_reconnect(work->conn)) { 98 - rsp_hdr = work->response_buf; 98 + rsp_hdr = smb_get_msg(work->response_buf); 99 99 rsp_hdr->Status.CifsError = STATUS_CONNECTION_DISCONNECTED; 100 100 return 1; 101 101 }
+44 -38
fs/smb/server/smb2pdu.c
··· 47 47 *req = ksmbd_req_buf_next(work); 48 48 *rsp = ksmbd_resp_buf_next(work); 49 49 } else { 50 - *req = smb2_get_msg(work->request_buf); 51 - *rsp = smb2_get_msg(work->response_buf); 50 + *req = smb_get_msg(work->request_buf); 51 + *rsp = smb_get_msg(work->response_buf); 52 52 } 53 53 } 54 54 ··· 146 146 if (work->next_smb2_rcv_hdr_off) 147 147 err_rsp = ksmbd_resp_buf_next(work); 148 148 else 149 - err_rsp = smb2_get_msg(work->response_buf); 149 + err_rsp = smb_get_msg(work->response_buf); 150 150 151 151 if (err_rsp->hdr.Status != STATUS_STOPPED_ON_SYMLINK) { 152 152 int err; ··· 172 172 */ 173 173 bool is_smb2_neg_cmd(struct ksmbd_work *work) 174 174 { 175 - struct smb2_hdr *hdr = smb2_get_msg(work->request_buf); 175 + struct smb2_hdr *hdr = smb_get_msg(work->request_buf); 176 176 177 177 /* is it SMB2 header ? */ 178 178 if (hdr->ProtocolId != SMB2_PROTO_NUMBER) ··· 196 196 */ 197 197 bool is_smb2_rsp(struct ksmbd_work *work) 198 198 { 199 - struct smb2_hdr *hdr = smb2_get_msg(work->response_buf); 199 + struct smb2_hdr *hdr = smb_get_msg(work->response_buf); 200 200 201 201 /* is it SMB2 header ? */ 202 202 if (hdr->ProtocolId != SMB2_PROTO_NUMBER) ··· 222 222 if (work->next_smb2_rcv_hdr_off) 223 223 rcv_hdr = ksmbd_req_buf_next(work); 224 224 else 225 - rcv_hdr = smb2_get_msg(work->request_buf); 225 + rcv_hdr = smb_get_msg(work->request_buf); 226 226 return le16_to_cpu(rcv_hdr->Command); 227 227 } 228 228 ··· 235 235 { 236 236 struct smb2_hdr *rsp_hdr; 237 237 238 - rsp_hdr = smb2_get_msg(work->response_buf); 238 + rsp_hdr = smb_get_msg(work->response_buf); 239 239 rsp_hdr->Status = err; 240 240 241 241 work->iov_idx = 0; ··· 258 258 struct ksmbd_conn *conn = work->conn; 259 259 int err; 260 260 261 - rsp_hdr = smb2_get_msg(work->response_buf); 261 + rsp_hdr = smb_get_msg(work->response_buf); 262 262 memset(rsp_hdr, 0, sizeof(struct smb2_hdr) + 2); 263 263 rsp_hdr->ProtocolId = SMB2_PROTO_NUMBER; 264 264 rsp_hdr->StructureSize = SMB2_HEADER_STRUCTURE_SIZE; ··· 272 272 rsp_hdr->SessionId = 0; 273 273 memset(rsp_hdr->Signature, 0, 16); 274 274 275 - rsp = smb2_get_msg(work->response_buf); 275 + rsp = smb_get_msg(work->response_buf); 276 276 277 277 WARN_ON(ksmbd_conn_good(conn)); 278 278 ··· 446 446 */ 447 447 bool is_chained_smb2_message(struct ksmbd_work *work) 448 448 { 449 - struct smb2_hdr *hdr = smb2_get_msg(work->request_buf); 449 + struct smb2_hdr *hdr = smb_get_msg(work->request_buf); 450 450 unsigned int len, next_cmd; 451 451 452 452 if (hdr->ProtocolId != SMB2_PROTO_NUMBER) ··· 497 497 */ 498 498 int init_smb2_rsp_hdr(struct ksmbd_work *work) 499 499 { 500 - struct smb2_hdr *rsp_hdr = smb2_get_msg(work->response_buf); 501 - struct smb2_hdr *rcv_hdr = smb2_get_msg(work->request_buf); 500 + struct smb2_hdr *rsp_hdr = smb_get_msg(work->response_buf); 501 + struct smb2_hdr *rcv_hdr = smb_get_msg(work->request_buf); 502 502 503 503 memset(rsp_hdr, 0, sizeof(struct smb2_hdr) + 2); 504 504 rsp_hdr->ProtocolId = rcv_hdr->ProtocolId; ··· 527 527 */ 528 528 int smb2_allocate_rsp_buf(struct ksmbd_work *work) 529 529 { 530 - struct smb2_hdr *hdr = smb2_get_msg(work->request_buf); 530 + struct smb2_hdr *hdr = smb_get_msg(work->request_buf); 531 531 size_t small_sz = MAX_CIFS_SMALL_BUFFER_SIZE; 532 532 size_t large_sz = small_sz + work->conn->vals->max_trans_size; 533 533 size_t sz = small_sz; ··· 543 543 offsetof(struct smb2_query_info_req, OutputBufferLength)) 544 544 return -EINVAL; 545 545 546 - req = smb2_get_msg(work->request_buf); 546 + req = smb_get_msg(work->request_buf); 
547 547 if ((req->InfoType == SMB2_O_INFO_FILE && 548 548 (req->FileInfoClass == FILE_FULL_EA_INFORMATION || 549 549 req->FileInfoClass == FILE_ALL_INFORMATION)) || ··· 712 712 } 713 713 714 714 in_work->conn = work->conn; 715 - memcpy(smb2_get_msg(in_work->response_buf), ksmbd_resp_buf_next(work), 715 + memcpy(smb_get_msg(in_work->response_buf), ksmbd_resp_buf_next(work), 716 716 __SMB2_HEADER_STRUCTURE_SIZE); 717 717 718 - rsp_hdr = smb2_get_msg(in_work->response_buf); 718 + rsp_hdr = smb_get_msg(in_work->response_buf); 719 719 rsp_hdr->Flags |= SMB2_FLAGS_ASYNC_COMMAND; 720 720 rsp_hdr->Id.AsyncId = cpu_to_le64(work->async_id); 721 721 smb2_set_err_rsp(in_work); ··· 1093 1093 int smb2_handle_negotiate(struct ksmbd_work *work) 1094 1094 { 1095 1095 struct ksmbd_conn *conn = work->conn; 1096 - struct smb2_negotiate_req *req = smb2_get_msg(work->request_buf); 1097 - struct smb2_negotiate_rsp *rsp = smb2_get_msg(work->response_buf); 1096 + struct smb2_negotiate_req *req = smb_get_msg(work->request_buf); 1097 + struct smb2_negotiate_rsp *rsp = smb_get_msg(work->response_buf); 1098 1098 int rc = 0; 1099 1099 unsigned int smb2_buf_len, smb2_neg_size, neg_ctxt_len = 0; 1100 1100 __le32 status; ··· 2281 2281 { 2282 2282 struct smb2_create_rsp *rsp; 2283 2283 struct smb2_create_req *req; 2284 - int id; 2284 + int id = -1; 2285 2285 int err; 2286 2286 char *name; 2287 2287 ··· 2337 2337 rsp->hdr.Status = STATUS_NO_MEMORY; 2338 2338 break; 2339 2339 } 2340 + 2341 + if (id >= 0) 2342 + ksmbd_session_rpc_close(work->sess, id); 2340 2343 2341 2344 if (!IS_ERR(name)) 2342 2345 kfree(name); ··· 2812 2809 SMB2_CLIENT_GUID_SIZE)) { 2813 2810 if (!(req->hdr.Flags & SMB2_FLAGS_REPLAY_OPERATION)) { 2814 2811 err = -ENOEXEC; 2812 + ksmbd_put_durable_fd(dh_info->fp); 2815 2813 goto out; 2816 2814 } 2817 2815 ··· 3010 3006 file_info = FILE_OPENED; 3011 3007 3012 3008 rc = ksmbd_vfs_getattr(&fp->filp->f_path, &stat); 3009 + ksmbd_put_durable_fd(fp); 3013 3010 if (rc) 3014 3011 goto err_out2; 3015 3012 3016 - ksmbd_put_durable_fd(fp); 3017 3013 goto reconnected_fp; 3018 3014 } 3019 3015 } else if (req_op_level == SMB2_OPLOCK_LEVEL_LEASE) ··· 4927 4923 4928 4924 ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS, 4929 4925 AT_STATX_SYNC_AS_STAT); 4930 - if (ret) 4926 + if (ret) { 4927 + kfree(filename); 4931 4928 return ret; 4929 + } 4932 4930 4933 4931 ksmbd_debug(SMB, "filename = %s\n", filename); 4934 4932 delete_pending = ksmbd_inode_pending_delete(fp); ··· 5973 5967 */ 5974 5968 int smb2_echo(struct ksmbd_work *work) 5975 5969 { 5976 - struct smb2_echo_rsp *rsp = smb2_get_msg(work->response_buf); 5970 + struct smb2_echo_rsp *rsp = smb_get_msg(work->response_buf); 5977 5971 5978 5972 ksmbd_debug(SMB, "Received smb2 echo request\n"); 5979 5973 ··· 6526 6520 pid = work->compound_pfid; 6527 6521 } 6528 6522 } else { 6529 - req = smb2_get_msg(work->request_buf); 6530 - rsp = smb2_get_msg(work->response_buf); 6523 + req = smb_get_msg(work->request_buf); 6524 + rsp = smb_get_msg(work->response_buf); 6531 6525 } 6532 6526 6533 6527 if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) { ··· 6760 6754 pid = work->compound_pfid; 6761 6755 } 6762 6756 } else { 6763 - req = smb2_get_msg(work->request_buf); 6764 - rsp = smb2_get_msg(work->response_buf); 6757 + req = smb_get_msg(work->request_buf); 6758 + rsp = smb_get_msg(work->response_buf); 6765 6759 } 6766 6760 6767 6761 if (!has_file_id(id)) { ··· 7189 7183 int smb2_cancel(struct ksmbd_work *work) 7190 7184 { 7191 7185 struct ksmbd_conn *conn 
= work->conn; 7192 - struct smb2_hdr *hdr = smb2_get_msg(work->request_buf); 7186 + struct smb2_hdr *hdr = smb_get_msg(work->request_buf); 7193 7187 struct smb2_hdr *chdr; 7194 7188 struct ksmbd_work *iter; 7195 7189 struct list_head *command_list; ··· 7206 7200 spin_lock(&conn->request_lock); 7207 7201 list_for_each_entry(iter, command_list, 7208 7202 async_request_entry) { 7209 - chdr = smb2_get_msg(iter->request_buf); 7203 + chdr = smb_get_msg(iter->request_buf); 7210 7204 7211 7205 if (iter->async_id != 7212 7206 le64_to_cpu(hdr->Id.AsyncId)) ··· 7227 7221 7228 7222 spin_lock(&conn->request_lock); 7229 7223 list_for_each_entry(iter, command_list, request_entry) { 7230 - chdr = smb2_get_msg(iter->request_buf); 7224 + chdr = smb_get_msg(iter->request_buf); 7231 7225 7232 7226 if (chdr->MessageId != hdr->MessageId || 7233 7227 iter == work) ··· 8157 8151 id = work->compound_fid; 8158 8152 } 8159 8153 } else { 8160 - req = smb2_get_msg(work->request_buf); 8161 - rsp = smb2_get_msg(work->response_buf); 8154 + req = smb_get_msg(work->request_buf); 8155 + rsp = smb_get_msg(work->response_buf); 8162 8156 } 8163 8157 8164 8158 if (!has_file_id(id)) ··· 8823 8817 */ 8824 8818 bool smb2_is_sign_req(struct ksmbd_work *work, unsigned int command) 8825 8819 { 8826 - struct smb2_hdr *rcv_hdr2 = smb2_get_msg(work->request_buf); 8820 + struct smb2_hdr *rcv_hdr2 = smb_get_msg(work->request_buf); 8827 8821 8828 8822 if ((rcv_hdr2->Flags & SMB2_FLAGS_SIGNED) && 8829 8823 command != SMB2_NEGOTIATE_HE && ··· 8848 8842 struct kvec iov[1]; 8849 8843 size_t len; 8850 8844 8851 - hdr = smb2_get_msg(work->request_buf); 8845 + hdr = smb_get_msg(work->request_buf); 8852 8846 if (work->next_smb2_rcv_hdr_off) 8853 8847 hdr = ksmbd_req_buf_next(work); 8854 8848 ··· 8922 8916 struct kvec iov[1]; 8923 8917 size_t len; 8924 8918 8925 - hdr = smb2_get_msg(work->request_buf); 8919 + hdr = smb_get_msg(work->request_buf); 8926 8920 if (work->next_smb2_rcv_hdr_off) 8927 8921 hdr = ksmbd_req_buf_next(work); 8928 8922 ··· 9055 9049 static void fill_transform_hdr(void *tr_buf, char *old_buf, __le16 cipher_type) 9056 9050 { 9057 9051 struct smb2_transform_hdr *tr_hdr = tr_buf + 4; 9058 - struct smb2_hdr *hdr = smb2_get_msg(old_buf); 9052 + struct smb2_hdr *hdr = smb_get_msg(old_buf); 9059 9053 unsigned int orig_len = get_rfc1002_len(old_buf); 9060 9054 9061 9055 /* tr_buf must be cleared by the caller */ ··· 9094 9088 9095 9089 bool smb3_is_transform_hdr(void *buf) 9096 9090 { 9097 - struct smb2_transform_hdr *trhdr = smb2_get_msg(buf); 9091 + struct smb2_transform_hdr *trhdr = smb_get_msg(buf); 9098 9092 9099 9093 return trhdr->ProtocolId == SMB2_TRANSFORM_PROTO_NUM; 9100 9094 } ··· 9106 9100 unsigned int pdu_length = get_rfc1002_len(buf); 9107 9101 struct kvec iov[2]; 9108 9102 int buf_data_size = pdu_length - sizeof(struct smb2_transform_hdr); 9109 - struct smb2_transform_hdr *tr_hdr = smb2_get_msg(buf); 9103 + struct smb2_transform_hdr *tr_hdr = smb_get_msg(buf); 9110 9104 int rc = 0; 9111 9105 9112 9106 if (pdu_length < sizeof(struct smb2_transform_hdr) || ··· 9147 9141 { 9148 9142 struct ksmbd_conn *conn = work->conn; 9149 9143 struct ksmbd_session *sess = work->sess; 9150 - struct smb2_hdr *rsp = smb2_get_msg(work->response_buf); 9144 + struct smb2_hdr *rsp = smb_get_msg(work->response_buf); 9151 9145 9152 9146 if (conn->dialect < SMB30_PROT_ID) 9153 9147 return false;
-9
fs/smb/server/smb2pdu.h
··· 383 383 int smb2_oplock_break(struct ksmbd_work *work); 384 384 int smb2_notify(struct ksmbd_work *ksmbd_work); 385 385 386 - /* 387 - * Get the body of the smb2 message excluding the 4 byte rfc1002 headers 388 - * from request/response buffer. 389 - */ 390 - static inline void *smb2_get_msg(void *buf) 391 - { 392 - return buf + 4; 393 - } 394 - 395 386 #define POSIX_TYPE_FILE 0 396 387 #define POSIX_TYPE_DIR 1 397 388 #define POSIX_TYPE_SYMLINK 2
+13 -13
fs/smb/server/smb_common.c
··· 140 140 if (smb2_hdr->ProtocolId == SMB2_PROTO_NUMBER) 141 141 return ksmbd_smb2_check_message(work); 142 142 143 - hdr = work->request_buf; 143 + hdr = smb_get_msg(work->request_buf); 144 144 if (*(__le32 *)hdr->Protocol == SMB1_PROTO_NUMBER && 145 145 hdr->Command == SMB_COM_NEGOTIATE) { 146 146 work->conn->outstanding_credits++; ··· 163 163 if (conn->request_buf[0] != 0) 164 164 return false; 165 165 166 - proto = (__le32 *)smb2_get_msg(conn->request_buf); 166 + proto = (__le32 *)smb_get_msg(conn->request_buf); 167 167 if (*proto == SMB2_COMPRESSION_TRANSFORM_ID) { 168 168 pr_err_ratelimited("smb2 compression not support yet"); 169 169 return false; ··· 259 259 static int ksmbd_negotiate_smb_dialect(void *buf) 260 260 { 261 261 int smb_buf_length = get_rfc1002_len(buf); 262 - __le32 proto = ((struct smb2_hdr *)smb2_get_msg(buf))->ProtocolId; 262 + __le32 proto = ((struct smb2_hdr *)smb_get_msg(buf))->ProtocolId; 263 263 264 264 if (proto == SMB2_PROTO_NUMBER) { 265 265 struct smb2_negotiate_req *req; 266 266 int smb2_neg_size = 267 267 offsetof(struct smb2_negotiate_req, Dialects); 268 268 269 - req = (struct smb2_negotiate_req *)smb2_get_msg(buf); 269 + req = (struct smb2_negotiate_req *)smb_get_msg(buf); 270 270 if (smb2_neg_size > smb_buf_length) 271 271 goto err_out; 272 272 ··· 278 278 req->DialectCount); 279 279 } 280 280 281 - proto = *(__le32 *)((struct smb_hdr *)buf)->Protocol; 282 281 if (proto == SMB1_PROTO_NUMBER) { 283 282 struct smb_negotiate_req *req; 284 283 285 - req = (struct smb_negotiate_req *)buf; 284 + req = (struct smb_negotiate_req *)smb_get_msg(buf); 286 285 if (le16_to_cpu(req->ByteCount) < 2) 287 286 goto err_out; 288 287 289 - if (offsetof(struct smb_negotiate_req, DialectsArray) - 4 + 288 + if (offsetof(struct smb_negotiate_req, DialectsArray) + 290 289 le16_to_cpu(req->ByteCount) > smb_buf_length) { 291 290 goto err_out; 292 291 } ··· 319 320 */ 320 321 static int init_smb1_rsp_hdr(struct ksmbd_work *work) 321 322 { 322 - struct smb_hdr *rsp_hdr = (struct smb_hdr *)work->response_buf; 323 - struct smb_hdr *rcv_hdr = (struct smb_hdr *)work->request_buf; 323 + struct smb_hdr *rsp_hdr = (struct smb_hdr *)smb_get_msg(work->response_buf); 324 + struct smb_hdr *rcv_hdr = (struct smb_hdr *)smb_get_msg(work->request_buf); 324 325 325 326 rsp_hdr->Command = SMB_COM_NEGOTIATE; 326 327 *(__le32 *)rsp_hdr->Protocol = SMB1_PROTO_NUMBER; ··· 411 412 412 413 int ksmbd_init_smb_server(struct ksmbd_conn *conn) 413 414 { 415 + struct smb_hdr *rcv_hdr = (struct smb_hdr *)smb_get_msg(conn->request_buf); 414 416 __le32 proto; 415 417 416 - proto = *(__le32 *)((struct smb_hdr *)conn->request_buf)->Protocol; 418 + proto = *(__le32 *)rcv_hdr->Protocol; 417 419 if (conn->need_neg == false) { 418 420 if (proto == SMB1_PROTO_NUMBER) 419 421 return -EINVAL; ··· 572 572 573 573 static int smb_handle_negotiate(struct ksmbd_work *work) 574 574 { 575 - struct smb_negotiate_rsp *neg_rsp = work->response_buf; 575 + struct smb_negotiate_rsp *neg_rsp = smb_get_msg(work->response_buf); 576 576 577 577 ksmbd_debug(SMB, "Unsupported SMB1 protocol\n"); 578 578 579 - if (ksmbd_iov_pin_rsp(work, (void *)neg_rsp + 4, 580 - sizeof(struct smb_negotiate_rsp) - 4)) 579 + if (ksmbd_iov_pin_rsp(work, (void *)neg_rsp, 580 + sizeof(struct smb_negotiate_rsp))) 581 581 return -ENOMEM; 582 582 583 583 neg_rsp->hdr.Status.CifsError = STATUS_SUCCESS;
+9
fs/smb/server/smb_common.h
··· 203 203 unsigned int ksmbd_server_side_copy_max_total_size(void); 204 204 bool is_asterisk(char *p); 205 205 __le32 smb_map_generic_desired_access(__le32 daccess); 206 + 207 + /* 208 + * Get the body of the smb message excluding the 4 byte rfc1002 headers 209 + * from request/response buffer. 210 + */ 211 + static inline void *smb_get_msg(void *buf) 212 + { 213 + return buf + 4; 214 + } 206 215 #endif /* __SMB_SERVER_COMMON_H__ */
+14 -3
include/drm/drm_pagemap.h
··· 8 8 9 9 #define NR_PAGES(order) (1U << (order)) 10 10 11 + struct dma_fence; 11 12 struct drm_pagemap; 12 13 struct drm_pagemap_zdd; 13 14 struct device; ··· 175 174 * @pages: Pointer to array of device memory pages (destination) 176 175 * @pagemap_addr: Pointer to array of DMA information (source) 177 176 * @npages: Number of pages to copy 177 + * @pre_migrate_fence: dma-fence to wait for before migration start. 178 + * May be NULL. 178 179 * 179 180 * Copy pages to device memory. If the order of a @pagemap_addr entry 180 181 * is greater than 0, the entry is populated but subsequent entries ··· 186 183 */ 187 184 int (*copy_to_devmem)(struct page **pages, 188 185 struct drm_pagemap_addr *pagemap_addr, 189 - unsigned long npages); 186 + unsigned long npages, 187 + struct dma_fence *pre_migrate_fence); 190 188 191 189 /** 192 190 * @copy_to_ram: Copy to system RAM (required for migration) 193 191 * @pages: Pointer to array of device memory pages (source) 194 192 * @pagemap_addr: Pointer to array of DMA information (destination) 195 193 * @npages: Number of pages to copy 194 + * @pre_migrate_fence: dma-fence to wait for before migration start. 195 + * May be NULL. 196 196 * 197 197 * Copy pages to system RAM. If the order of a @pagemap_addr entry 198 198 * is greater than 0, the entry is populated but subsequent entries ··· 205 199 */ 206 200 int (*copy_to_ram)(struct page **pages, 207 201 struct drm_pagemap_addr *pagemap_addr, 208 - unsigned long npages); 202 + unsigned long npages, 203 + struct dma_fence *pre_migrate_fence); 209 204 }; 210 205 211 206 /** ··· 219 212 * @dpagemap: The struct drm_pagemap of the pages this allocation belongs to. 220 213 * @size: Size of device memory allocation 221 214 * @timeslice_expiration: Timeslice expiration in jiffies 215 + * @pre_migrate_fence: Fence to wait for or pipeline behind before migration starts. 216 + * (May be NULL). 222 217 */ 223 218 struct drm_pagemap_devmem { 224 219 struct device *dev; ··· 230 221 struct drm_pagemap *dpagemap; 231 222 size_t size; 232 223 u64 timeslice_expiration; 224 + struct dma_fence *pre_migrate_fence; 233 225 }; 234 226 235 227 int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation, ··· 248 238 void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation, 249 239 struct device *dev, struct mm_struct *mm, 250 240 const struct drm_pagemap_devmem_ops *ops, 251 - struct drm_pagemap *dpagemap, size_t size); 241 + struct drm_pagemap *dpagemap, size_t size, 242 + struct dma_fence *pre_migrate_fence); 252 243 253 244 int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap, 254 245 unsigned long start, unsigned long end,
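A sketch of how a pagemap provider might honor the new argument (the foo_* implementation is assumed, not part of this patch; a provider could equally pipeline the copy behind the fence instead of blocking):

static int foo_copy_to_devmem(struct page **pages,
			      struct drm_pagemap_addr *pagemap_addr,
			      unsigned long npages,
			      struct dma_fence *pre_migrate_fence)
{
	if (pre_migrate_fence) {
		long err = dma_fence_wait(pre_migrate_fence, false);

		if (err)
			return err;
	}
	/* ... issue the actual device copies for npages pages ... */
	return 0;
}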
+33 -20
include/kunit/run-in-irq-context.h
··· 20 20 bool task_func_reported_failure; 21 21 bool hardirq_func_reported_failure; 22 22 bool softirq_func_reported_failure; 23 - unsigned long hardirq_func_calls; 24 - unsigned long softirq_func_calls; 23 + atomic_t hardirq_func_calls; 24 + atomic_t softirq_func_calls; 25 25 struct hrtimer timer; 26 26 struct work_struct bh_work; 27 27 }; ··· 32 32 container_of(timer, typeof(*state), timer); 33 33 34 34 WARN_ON_ONCE(!in_hardirq()); 35 - state->hardirq_func_calls++; 35 + atomic_inc(&state->hardirq_func_calls); 36 36 37 37 if (!state->func(state->test_specific_state)) 38 38 state->hardirq_func_reported_failure = true; ··· 48 48 container_of(work, typeof(*state), bh_work); 49 49 50 50 WARN_ON_ONCE(!in_serving_softirq()); 51 - state->softirq_func_calls++; 51 + atomic_inc(&state->softirq_func_calls); 52 52 53 53 if (!state->func(state->test_specific_state)) 54 54 state->softirq_func_reported_failure = true; ··· 59 59 * hardirq context concurrently, and reports a failure to KUnit if any 60 60 * invocation of @func in any context returns false. @func is passed 61 61 * @test_specific_state as its argument. At most 3 invocations of @func will 62 - * run concurrently: one in each of task, softirq, and hardirq context. 62 + * run concurrently: one in each of task, softirq, and hardirq context. @func 63 + * will continue running until either @max_iterations calls have been made (so 64 + * long as at least one each runs in task, softirq, and hardirq contexts), or 65 + * one second has passed. 63 66 * 64 67 * The main purpose of this interrupt context testing is to validate fallback 65 68 * code paths that run in contexts where the normal code path cannot be used, ··· 88 85 .test_specific_state = test_specific_state, 89 86 }; 90 87 unsigned long end_jiffies; 88 + int hardirq_calls, softirq_calls; 89 + bool allctx = false; 91 90 92 91 /* 93 92 * Set up a hrtimer (the way we access hardirq context) and a work ··· 99 94 CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); 100 95 INIT_WORK_ONSTACK(&state.bh_work, kunit_irq_test_bh_work_func); 101 96 102 - /* Run for up to max_iterations or 1 second, whichever comes first. */ 97 + /* 98 + * Run for up to max_iterations (including at least one task, softirq, 99 + * and hardirq), or 1 second, whichever comes first. 100 + */ 103 101 end_jiffies = jiffies + HZ; 104 102 hrtimer_start(&state.timer, KUNIT_IRQ_TEST_HRTIMER_INTERVAL, 105 103 HRTIMER_MODE_REL_HARD); 106 - for (int i = 0; i < max_iterations && !time_after(jiffies, end_jiffies); 107 - i++) { 104 + for (int task_calls = 0, calls = 0; 105 + ((calls < max_iterations) || !allctx) && 106 + !time_after(jiffies, end_jiffies); 107 + task_calls++) { 108 108 if (!func(test_specific_state)) 109 109 state.task_func_reported_failure = true; 110 + 111 + hardirq_calls = atomic_read(&state.hardirq_func_calls); 112 + softirq_calls = atomic_read(&state.softirq_func_calls); 113 + calls = task_calls + hardirq_calls + softirq_calls; 114 + allctx = (task_calls > 0) && (hardirq_calls > 0) && 115 + (softirq_calls > 0); 110 116 } 111 117 112 118 /* Cancel the timer and work. */ ··· 125 109 flush_work(&state.bh_work); 126 110 127 111 /* Sanity check: the timer and BH functions should have been run. */
128 - KUNIT_EXPECT_GT_MSG(test, state.hardirq_func_calls, 0, 112 + KUNIT_EXPECT_GT_MSG(test, atomic_read(&state.hardirq_func_calls), 0, 129 113 "Timer function was not called"); 130 - KUNIT_EXPECT_GT_MSG(test, state.softirq_func_calls, 0, 114 + KUNIT_EXPECT_GT_MSG(test, atomic_read(&state.softirq_func_calls), 0, 131 115 "BH work function was not called"); 132 116 133 - /* Check for incorrect hash values reported from any context. */ 134 - KUNIT_EXPECT_FALSE_MSG( 135 - test, state.task_func_reported_failure, 136 - "Incorrect hash values reported from task context"); 137 - KUNIT_EXPECT_FALSE_MSG( 138 - test, state.hardirq_func_reported_failure, 139 - "Incorrect hash values reported from hardirq context"); 140 - KUNIT_EXPECT_FALSE_MSG( 141 - test, state.softirq_func_reported_failure, 142 - "Incorrect hash values reported from softirq context"); 117 + /* Check for failure reported from any context. */ 118 + KUNIT_EXPECT_FALSE_MSG(test, state.task_func_reported_failure, 119 + "Failure reported from task context"); 120 + KUNIT_EXPECT_FALSE_MSG(test, state.hardirq_func_reported_failure, 121 + "Failure reported from hardirq context"); 122 + KUNIT_EXPECT_FALSE_MSG(test, state.softirq_func_reported_failure, 123 + "Failure reported from softirq context"); 143 124 } 144 125 145 126 #endif /* _KUNIT_RUN_IN_IRQ_CONTEXT_H */
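Callers are unaffected by the accounting rework; usage still looks like the sketch below (assuming the helper defined in this header is kunit_run_irq_test(), with the my_* names made up for illustration):

static bool my_fallback_path_ok(void *state)
{
	/* must be safe to call from task, softirq and hardirq context */
	return my_run_one_iteration(state);
}

static void my_irq_context_test(struct kunit *test)
{
	struct my_state state = {};

	kunit_run_irq_test(test, my_fallback_path_ok, 10000, &state);
}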
+1
include/linux/genalloc.h
··· 44 44 * @nr: The number of zeroed bits we're looking for 45 45 * @data: optional additional data used by the callback 46 46 * @pool: the pool being allocated from 47 + * @start_addr: start address of memory chunk 47 48 */ 48 49 typedef unsigned long (*genpool_algo_t)(unsigned long *map, 49 50 unsigned long size,
+9 -8
include/linux/intel_vsec.h
··· 80 80 81 81 /** 82 82 * struct pmt_callbacks - Callback infrastructure for PMT devices 83 - * ->read_telem() when specified, called by client driver to access PMT data (instead 84 - * of direct copy). 85 - * @pdev: PCI device reference for the callback's use 86 - * @guid: ID of data to acccss 87 - * @data: buffer for the data to be copied 88 - * @off: offset into the requested buffer 89 - * @count: size of buffer 83 + * @read_telem: when specified, called by client driver to access PMT 84 + * data (instead of direct copy). 85 + * * pdev: PCI device reference for the callback's use 86 + * * guid: ID of data to acccss 87 + * * data: buffer for the data to be copied 88 + * * off: offset into the requested buffer 89 + * * count: size of buffer 90 90 */ 91 91 struct pmt_callbacks { 92 92 int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, loff_t off, u32 count); ··· 120 120 }; 121 121 122 122 /** 123 - * struct intel_sec_device - Auxbus specific device information 123 + * struct intel_vsec_device - Auxbus specific device information 124 124 * @auxdev: auxbus device struct for auxbus access 125 125 * @pcidev: pci device associated with the device 126 126 * @resource: any resources shared by the parent ··· 128 128 * @num_resources: number of resources 129 129 * @id: xarray id 130 130 * @priv_data: any private data needed 131 + * @priv_data_size: size of private data area 131 132 * @quirks: specified quirks 132 133 * @base_addr: base address of entries (if specified) 133 134 * @cap_id: the enumerated id of the vsec feature
+7 -1
include/linux/io_uring_types.h
··· 424 424 struct user_struct *user; 425 425 struct mm_struct *mm_account; 426 426 427 + /* 428 + * List of tctx nodes for this ctx, protected by tctx_lock. For 429 + * cancelation purposes, nests under uring_lock. 430 + */ 431 + struct list_head tctx_list; 432 + struct mutex tctx_lock; 433 + 427 434 /* ctx exit and cancelation */ 428 435 struct llist_head fallback_llist; 429 436 struct delayed_work fallback_work; 430 437 struct work_struct exit_work; 431 - struct list_head tctx_list; 432 438 struct completion ref_comp; 433 439 434 440 /* io-wq management, e.g. thread count */
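The nesting rule in the new comment translates to the acquisition order used by the io_uring/cancel.c and io_uring/io_uring.c hunks later in this listing:

mutex_lock(&ctx->uring_lock);
mutex_lock(&ctx->tctx_lock);
list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
	/* the walk is stable against concurrent register/unregister */
}
mutex_unlock(&ctx->tctx_lock);
mutex_unlock(&ctx->uring_lock);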
+2 -2
include/linux/irq-entry-common.h
··· 110 110 static inline void local_irq_enable_exit_to_user(unsigned long ti_work); 111 111 112 112 #ifndef local_irq_enable_exit_to_user 113 - static inline void local_irq_enable_exit_to_user(unsigned long ti_work) 113 + static __always_inline void local_irq_enable_exit_to_user(unsigned long ti_work) 114 114 { 115 115 local_irq_enable(); 116 116 } ··· 125 125 static inline void local_irq_disable_exit_to_user(void); 126 126 127 127 #ifndef local_irq_disable_exit_to_user 128 - static inline void local_irq_disable_exit_to_user(void) 128 + static __always_inline void local_irq_disable_exit_to_user(void) 129 129 { 130 130 local_irq_disable(); 131 131 }
+16
include/linux/kasan.h
··· 28 28 #define KASAN_VMALLOC_INIT ((__force kasan_vmalloc_flags_t)0x01u) 29 29 #define KASAN_VMALLOC_VM_ALLOC ((__force kasan_vmalloc_flags_t)0x02u) 30 30 #define KASAN_VMALLOC_PROT_NORMAL ((__force kasan_vmalloc_flags_t)0x04u) 31 + #define KASAN_VMALLOC_KEEP_TAG ((__force kasan_vmalloc_flags_t)0x08u) 31 32 32 33 #define KASAN_VMALLOC_PAGE_RANGE 0x1 /* Apply exsiting page range */ 33 34 #define KASAN_VMALLOC_TLB_FLUSH 0x2 /* TLB flush */ ··· 631 630 __kasan_poison_vmalloc(start, size); 632 631 } 633 632 633 + void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms, 634 + kasan_vmalloc_flags_t flags); 635 + static __always_inline void 636 + kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms, 637 + kasan_vmalloc_flags_t flags) 638 + { 639 + if (kasan_enabled()) 640 + __kasan_unpoison_vmap_areas(vms, nr_vms, flags); 641 + } 642 + 634 643 #else /* CONFIG_KASAN_VMALLOC */ 635 644 636 645 static inline void kasan_populate_early_vm_area_shadow(void *start, ··· 663 652 return (void *)start; 664 653 } 665 654 static inline void kasan_poison_vmalloc(const void *start, unsigned long size) 655 + { } 656 + 657 + static __always_inline void 658 + kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms, 659 + kasan_vmalloc_flags_t flags) 666 660 { } 667 661 668 662 #endif /* CONFIG_KASAN_VMALLOC */
+2 -2
include/linux/kexec.h
··· 530 530 #define kexec_dprintk(fmt, arg...) \ 531 531 do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0) 532 532 533 - extern void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size); 533 + extern void *kimage_map_segment(struct kimage *image, int idx); 534 534 extern void kimage_unmap_segment(void *buffer); 535 535 #else /* !CONFIG_KEXEC_CORE */ 536 536 struct pt_regs; ··· 540 540 static inline void crash_kexec(struct pt_regs *regs) { } 541 541 static inline int kexec_should_crash(struct task_struct *p) { return 0; } 542 542 static inline int kexec_crash_loaded(void) { return 0; } 543 - static inline void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size) 543 + static inline void *kimage_map_segment(struct kimage *image, int idx) 544 544 { return NULL; } 545 545 static inline void kimage_unmap_segment(void *buffer) { } 546 546 #define kexec_in_progress false
+2 -2
include/linux/leafops.h
··· 133 133 134 134 /** 135 135 * softleaf_type() - Identify the type of leaf entry. 136 - * @enntry: Leaf entry. 136 + * @entry: Leaf entry. 137 137 * 138 138 * Returns: the leaf entry type associated with @entry. 139 139 */ ··· 534 534 /** 535 535 * pte_is_uffd_marker() - Does this PTE entry encode a userfault-specific marker 536 536 * leaf entry? 537 - * @entry: Leaf entry. 537 + * @pte: PTE entry. 538 538 * 539 539 * It's useful to be able to determine which leaf entries encode UFFD-specific 540 540 * markers so we can handle these correctly.
+2
include/linux/memory-failure.h
··· 9 9 struct pfn_address_space { 10 10 struct interval_tree_node node; 11 11 struct address_space *mapping; 12 + int (*pfn_to_vma_pgoff)(struct vm_area_struct *vma, 13 + unsigned long pfn, pgoff_t *pgoff); 12 14 }; 13 15 14 16 int register_pfn_address_space(struct pfn_address_space *pfn_space);
+4 -4
include/linux/mm.h
··· 2459 2459 if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio))) 2460 2460 return 0; 2461 2461 2462 - if (folio_test_anon(folio)) { 2463 - /* One reference per page from the swapcache. */ 2464 - ref_count += folio_test_swapcache(folio) << order; 2465 - } else { 2462 + /* One reference per page from the swapcache. */ 2463 + ref_count += folio_test_swapcache(folio) << order; 2464 + 2465 + if (!folio_test_anon(folio)) { 2466 2466 /* One reference per page from the pagecache. */ 2467 2467 ref_count += !!folio->mapping << order; 2468 2468 /* One reference from PG_private. */
+1
include/linux/property.h
··· 371 371 (const struct software_node_ref_args) { \ 372 372 .swnode = _Generic(_ref_, \ 373 373 const struct software_node *: _ref_, \ 374 + struct software_node *: _ref_, \ 374 375 default: NULL), \ 375 376 .fwnode = _Generic(_ref_, \ 376 377 struct fwnode_handle *: _ref_, \
+9 -1
include/linux/vfio_pci_core.h
··· 145 145 struct list_head dmabufs; 146 146 }; 147 147 148 + enum vfio_pci_io_width { 149 + VFIO_PCI_IO_WIDTH_1 = 1, 150 + VFIO_PCI_IO_WIDTH_2 = 2, 151 + VFIO_PCI_IO_WIDTH_4 = 4, 152 + VFIO_PCI_IO_WIDTH_8 = 8, 153 + }; 154 + 148 155 /* Will be exported for vfio pci drivers usage */ 149 156 int vfio_pci_core_register_dev_region(struct vfio_pci_core_device *vdev, 150 157 unsigned int type, unsigned int subtype, ··· 195 188 ssize_t vfio_pci_core_do_io_rw(struct vfio_pci_core_device *vdev, bool test_mem, 196 189 void __iomem *io, char __user *buf, 197 190 loff_t off, size_t count, size_t x_start, 198 - size_t x_end, bool iswrite); 191 + size_t x_end, bool iswrite, 192 + enum vfio_pci_io_width max_width); 199 193 bool __vfio_pci_memory_enabled(struct vfio_pci_core_device *vdev); 200 194 bool vfio_pci_core_range_intersect_range(loff_t buf_start, size_t buf_cnt, 201 195 loff_t reg_start, size_t reg_cnt,
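A hedged caller sketch for the widened helper: a variant driver whose device cannot tolerate qword MMIO would clamp accesses the same way the ROM path in vfio_pci_rdwr.c above now does (the foo_* names are illustrative):

static ssize_t foo_bar_rw(struct vfio_pci_core_device *vdev,
			  void __iomem *io, char __user *buf,
			  loff_t pos, size_t count, bool iswrite)
{
	/* split user accesses into at most 4-byte MMIO operations */
	return vfio_pci_core_do_io_rw(vdev, true, io, buf, pos, count,
				      0, 0, iswrite, VFIO_PCI_IO_WIDTH_4);
}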
+2
include/linux/virtio.h
··· 13 13 #include <linux/completion.h> 14 14 #include <linux/virtio_features.h> 15 15 16 + struct module; 17 + 16 18 /** 17 19 * struct virtqueue - a queue to register buffers for sending or receiving. 18 20 * @list: the chain of virtqueues for this device
+2
include/linux/virtio_features.h
··· 3 3 #define _LINUX_VIRTIO_FEATURES_H 4 4 5 5 #include <linux/bits.h> 6 + #include <linux/bug.h> 7 + #include <linux/string.h> 6 8 7 9 #define VIRTIO_FEATURES_U64S 2 8 10 #define VIRTIO_FEATURES_BITS (VIRTIO_FEATURES_U64S * 64)
+1
include/net/dsa.h
··· 302 302 struct devlink_port devlink_port; 303 303 struct phylink *pl; 304 304 struct phylink_config pl_config; 305 + netdevice_tracker conduit_tracker; 305 306 struct dsa_lag *lag; 306 307 struct net_device *hsr_dev; 307 308
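The new field gives the port's reference on its conduit a ref-tracker cookie, so a leaked reference can be attributed to its holder. Assuming dp->conduit is the pointer being tracked (the caller side is not part of this hunk), the expected pairing is:

netdev_hold(conduit, &dp->conduit_tracker, GFP_KERNEL);
/* ... dp may use the conduit net_device ... */
netdev_put(conduit, &dp->conduit_tracker);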
+4 -1
include/sound/soc-acpi.h
··· 203 203 * @mach: the pointer of the machine driver 204 204 * @prefix: the prefix of the topology file name. Typically, it is the path. 205 205 * @tplg_files: the pointer of the array of the topology file names. 206 + * @best_effort: ignore non supported links and try to build the card in best effort 207 + * with supported links 206 208 */ 207 209 /* Descriptor for SST ASoC machine driver */ 208 210 struct snd_soc_acpi_mach { ··· 226 224 const u32 tplg_quirk_mask; 227 225 int (*get_function_tplg_files)(struct snd_soc_card *card, 228 226 const struct snd_soc_acpi_mach *mach, 229 - const char *prefix, const char ***tplg_files); 227 + const char *prefix, const char ***tplg_files, 228 + bool best_effort); 230 229 }; 231 230 232 231 #define SND_SOC_ACPI_MAX_CODECS 3
+1 -1
include/uapi/rdma/irdma-abi.h
··· 57 57 __u8 rsvd2; 58 58 __aligned_u64 comp_mask; 59 59 __u16 min_hw_wq_size; 60 + __u8 revd3[2]; 60 61 __u32 max_hw_srq_quanta; 61 - __u8 rsvd3[2]; 62 62 }; 63 63 64 64 struct irdma_alloc_pd_resp {
+3 -1
include/uapi/rdma/rdma_user_cm.h
··· 192 192 193 193 struct rdma_ucm_query_ib_service_resp { 194 194 __u32 num_service_recs; 195 + __u32 reserved; 195 196 struct ib_user_service_rec recs[]; 196 197 }; 197 198 ··· 355 354 356 355 #define RDMA_USER_CM_IB_SERVICE_NAME_SIZE 64 357 356 struct rdma_ucm_ib_service { 358 - __u64 service_id; 357 + __aligned_u64 service_id; 359 358 __u8 service_name[RDMA_USER_CM_IB_SERVICE_NAME_SIZE]; 360 359 __u32 flags; 361 360 __u32 reserved; ··· 363 362 364 363 struct rdma_ucm_resolve_ib_service { 365 364 __u32 id; 365 + __u32 reserved; 366 366 struct rdma_ucm_ib_service ibs; 367 367 }; 368 368
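Both changes keep the uapi layout identical for 32-bit and 64-bit userspace: a plain __u64 is only 4-byte aligned on i386, so __aligned_u64 plus the explicit reserved words pin every offset. A userspace-side illustration (the check itself is hypothetical):

#include <stddef.h>
#include <rdma/rdma_user_cm.h>

_Static_assert(offsetof(struct rdma_ucm_resolve_ib_service, ibs) == 8,
	       "explicit padding keeps 32- and 64-bit layouts identical");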
+1 -5
include/uapi/regulator/regulator.h
··· 8 8 #ifndef _UAPI_REGULATOR_H 9 9 #define _UAPI_REGULATOR_H 10 10 11 - #ifdef __KERNEL__ 12 11 #include <linux/types.h> 13 - #else 14 - #include <stdint.h> 15 - #endif 16 12 17 13 /* 18 14 * Regulator notifier events. ··· 58 62 59 63 struct reg_genl_event { 60 64 char reg_name[32]; 61 - uint64_t event; 65 + __u64 event; 62 66 }; 63 67 64 68 /* attributes of reg_genl_family */
+5
io_uring/cancel.c
··· 184 184 } while (1); 185 185 186 186 /* slow path, try all io-wq's */ 187 + __set_current_state(TASK_RUNNING); 187 188 io_ring_submit_lock(ctx, issue_flags); 189 + mutex_lock(&ctx->tctx_lock); 188 190 ret = -ENOENT; 189 191 list_for_each_entry(node, &ctx->tctx_list, ctx_node) { 190 192 ret = io_async_cancel_one(node->task->io_uring, cd); ··· 196 194 nr++; 197 195 } 198 196 } 197 + mutex_unlock(&ctx->tctx_lock); 199 198 io_ring_submit_unlock(ctx, issue_flags); 200 199 return all ? nr : ret; 201 200 } ··· 487 484 bool ret = false; 488 485 489 486 mutex_lock(&ctx->uring_lock); 487 + mutex_lock(&ctx->tctx_lock); 490 488 list_for_each_entry(node, &ctx->tctx_list, ctx_node) { 491 489 struct io_uring_task *tctx = node->task->io_uring; 492 490 ··· 500 496 cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true); 501 497 ret |= (cret != IO_WQ_CANCEL_NOTFOUND); 502 498 } 499 + mutex_unlock(&ctx->tctx_lock); 503 500 mutex_unlock(&ctx->uring_lock); 504 501 505 502 return ret;
+6 -1
io_uring/io_uring.c
··· 340 340 INIT_LIST_HEAD(&ctx->ltimeout_list); 341 341 init_llist_head(&ctx->work_llist); 342 342 INIT_LIST_HEAD(&ctx->tctx_list); 343 + mutex_init(&ctx->tctx_lock); 343 344 ctx->submit_state.free_list.next = NULL; 344 345 INIT_HLIST_HEAD(&ctx->waitid_list); 345 346 xa_init_flags(&ctx->zcrx_ctxs, XA_FLAGS_ALLOC); ··· 865 864 { 866 865 struct io_overflow_cqe *ocqe; 867 866 868 - ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_ATOMIC); 867 + ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_NOWAIT); 869 868 return io_cqring_add_overflow(ctx, ocqe); 870 869 } 871 870 ··· 3046 3045 exit.ctx = ctx; 3047 3046 3048 3047 mutex_lock(&ctx->uring_lock); 3048 + mutex_lock(&ctx->tctx_lock); 3049 3049 while (!list_empty(&ctx->tctx_list)) { 3050 3050 WARN_ON_ONCE(time_after(jiffies, timeout)); 3051 3051 ··· 3058 3056 if (WARN_ON_ONCE(ret)) 3059 3057 continue; 3060 3058 3059 + mutex_unlock(&ctx->tctx_lock); 3061 3060 mutex_unlock(&ctx->uring_lock); 3062 3061 /* 3063 3062 * See comment above for ··· 3067 3064 */ 3068 3065 wait_for_completion_interruptible(&exit.completion); 3069 3066 mutex_lock(&ctx->uring_lock); 3067 + mutex_lock(&ctx->tctx_lock); 3070 3068 } 3069 + mutex_unlock(&ctx->tctx_lock); 3071 3070 mutex_unlock(&ctx->uring_lock); 3072 3071 spin_lock(&ctx->completion_lock); 3073 3072 spin_unlock(&ctx->completion_lock);
+4 -5
io_uring/memmap.c
··· 268 268 return io_region_get_ptr(mr); 269 269 } 270 270 271 - static void *io_uring_validate_mmap_request(struct file *file, loff_t pgoff, 272 - size_t sz) 271 + static void *io_uring_validate_mmap_request(struct file *file, loff_t pgoff) 273 272 { 274 273 struct io_ring_ctx *ctx = file->private_data; 275 274 struct io_mapped_region *region; ··· 303 304 304 305 guard(mutex)(&ctx->mmap_lock); 305 306 306 - ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz); 307 + ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff); 307 308 if (IS_ERR(ptr)) 308 309 return PTR_ERR(ptr); 309 310 ··· 335 336 336 337 guard(mutex)(&ctx->mmap_lock); 337 338 338 - ptr = io_uring_validate_mmap_request(filp, pgoff, len); 339 + ptr = io_uring_validate_mmap_request(filp, pgoff); 339 340 if (IS_ERR(ptr)) 340 341 return -ENOMEM; 341 342 ··· 385 386 386 387 guard(mutex)(&ctx->mmap_lock); 387 388 388 - ptr = io_uring_validate_mmap_request(file, pgoff, len); 389 + ptr = io_uring_validate_mmap_request(file, pgoff); 389 390 if (IS_ERR(ptr)) 390 391 return PTR_ERR(ptr); 391 392
+1 -1
io_uring/openclose.c
··· 73 73 open->filename = NULL; 74 74 return ret; 75 75 } 76 + req->flags |= REQ_F_NEED_CLEANUP; 76 77 77 78 open->file_slot = READ_ONCE(sqe->file_index); 78 79 if (open->file_slot && (open->how.flags & O_CLOEXEC)) 79 80 return -EINVAL; 80 81 81 82 open->nofile = rlimit(RLIMIT_NOFILE); 82 - req->flags |= REQ_F_NEED_CLEANUP; 83 83 if (io_openat_force_async(open)) 84 84 req->flags |= REQ_F_FORCE_ASYNC; 85 85 return 0;
+2
io_uring/register.c
··· 320 320 return 0; 321 321 322 322 /* now propagate the restriction to all registered users */ 323 + mutex_lock(&ctx->tctx_lock); 323 324 list_for_each_entry(node, &ctx->tctx_list, ctx_node) { 324 325 tctx = node->task->io_uring; 325 326 if (WARN_ON_ONCE(!tctx->io_wq)) ··· 331 330 /* ignore errors, it always returns zero anyway */ 332 331 (void)io_wq_max_workers(tctx->io_wq, new_count); 333 332 } 333 + mutex_unlock(&ctx->tctx_lock); 334 334 return 0; 335 335 err: 336 336 if (sqd) {
+4 -4
io_uring/tctx.c
··· 136 136 return ret; 137 137 } 138 138 139 - mutex_lock(&ctx->uring_lock); 139 + mutex_lock(&ctx->tctx_lock); 140 140 list_add(&node->ctx_node, &ctx->tctx_list); 141 - mutex_unlock(&ctx->uring_lock); 141 + mutex_unlock(&ctx->tctx_lock); 142 142 } 143 143 return 0; 144 144 } ··· 176 176 WARN_ON_ONCE(current != node->task); 177 177 WARN_ON_ONCE(list_empty(&node->ctx_node)); 178 178 179 - mutex_lock(&node->ctx->uring_lock); 179 + mutex_lock(&node->ctx->tctx_lock); 180 180 list_del(&node->ctx_node); 181 - mutex_unlock(&node->ctx->uring_lock); 181 + mutex_unlock(&node->ctx->tctx_lock); 182 182 183 183 if (tctx->last == node->ctx) 184 184 tctx->last = NULL;
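Across cancel.c, io_uring.c, register.c and tctx.c above, protection of tctx_list moves from uring_lock to a new dedicated tctx_lock, and wherever both locks are held they nest the same way: uring_lock outer, tctx_lock inner. A minimal pthread sketch of that invariant, with a counter standing in for the list and all names invented:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t tctx_lock = PTHREAD_MUTEX_INITIALIZER;
    static int tctx_count;  /* stands in for ctx->tctx_list */

    /* Always take uring_lock before tctx_lock; a consistent order is
     * what makes the finer-grained lock safe to introduce. */
    static void walk_tctx_list(void)
    {
        pthread_mutex_lock(&uring_lock);
        pthread_mutex_lock(&tctx_lock);
        printf("walking %d nodes\n", tctx_count);
        pthread_mutex_unlock(&tctx_lock);
        pthread_mutex_unlock(&uring_lock);
    }

    /* Plain list add/remove (as in tctx.c) needs only the inner lock. */
    static void add_tctx_node(void)
    {
        pthread_mutex_lock(&tctx_lock);
        tctx_count++;
        pthread_mutex_unlock(&tctx_lock);
    }

    int main(void)
    {
        add_tctx_node();
        walk_tctx_list();
        return 0;
    }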
+16 -5
kernel/cgroup/cpuset.c
··· 1668 1668 static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp) 1669 1669 { 1670 1670 WARN_ON_ONCE(!is_remote_partition(cs)); 1671 - WARN_ON_ONCE(!cpumask_subset(cs->effective_xcpus, subpartitions_cpus)); 1671 + /* 1672 + * When a CPU is offlined, top_cpuset may end up with no available CPUs, 1673 + * which should clear subpartitions_cpus. We should not emit a warning for this 1674 + * scenario: the hierarchy is updated from top to bottom, so subpartitions_cpus 1675 + * may already be cleared when disabling the partition. 1676 + */ 1677 + WARN_ON_ONCE(!cpumask_subset(cs->effective_xcpus, subpartitions_cpus) && 1678 + !cpumask_empty(subpartitions_cpus)); 1672 1679 1673 1680 spin_lock_irq(&callback_lock); 1674 1681 cs->remote_partition = false; ··· 3983 3976 if (remote || (is_partition_valid(cs) && is_partition_valid(parent))) 3984 3977 compute_partition_effective_cpumask(cs, &new_cpus); 3985 3978 3986 - if (remote && cpumask_empty(&new_cpus) && 3987 - partition_is_populated(cs, NULL)) { 3979 + if (remote && (cpumask_empty(subpartitions_cpus) || 3980 + (cpumask_empty(&new_cpus) && 3981 + partition_is_populated(cs, NULL)))) { 3988 3982 cs->prs_err = PERR_HOTPLUG; 3989 3983 remote_partition_disable(cs, tmp); 3990 3984 compute_effective_cpumask(&new_cpus, cs, parent); ··· 3998 3990 * 1) empty effective cpus but not valid empty partition. 3999 3991 * 2) parent is invalid or doesn't grant any cpus to child 4000 3992 * partitions. 3993 + * 3) subpartitions_cpus is empty. 4001 3994 */ 4002 - if (is_local_partition(cs) && (!is_partition_valid(parent) || 4003 - tasks_nocpu_error(parent, cs, &new_cpus))) 3995 + if (is_local_partition(cs) && 3996 + (!is_partition_valid(parent) || 3997 + tasks_nocpu_error(parent, cs, &new_cpus) || 3998 + cpumask_empty(subpartitions_cpus))) 4004 3999 partcmd = partcmd_invalidate; 4005 4000 /* 4006 4001 * On the other hand, an invalid partition root may be transitioned
+12 -4
kernel/kexec_core.c
··· 953 953 return result; 954 954 } 955 955 956 - void *kimage_map_segment(struct kimage *image, 957 - unsigned long addr, unsigned long size) 956 + void *kimage_map_segment(struct kimage *image, int idx) 958 957 { 958 + unsigned long addr, size, eaddr; 959 959 unsigned long src_page_addr, dest_page_addr = 0; 960 - unsigned long eaddr = addr + size; 961 960 kimage_entry_t *ptr, entry; 962 961 struct page **src_pages; 963 962 unsigned int npages; 963 + struct page *cma; 964 964 void *vaddr = NULL; 965 965 int i; 966 966 967 + cma = image->segment_cma[idx]; 968 + if (cma) 969 + return page_address(cma); 970 + 971 + addr = image->segment[idx].mem; 972 + size = image->segment[idx].memsz; 973 + eaddr = addr + size; 967 974 /* 968 975 * Collect the source pages and map them in a contiguous VA range. 969 976 */ ··· 1011 1004 1012 1005 void kimage_unmap_segment(void *segment_buffer) 1013 1006 { 1014 - vunmap(segment_buffer); 1007 + if (is_vmalloc_addr(segment_buffer)) 1008 + vunmap(segment_buffer); 1015 1009 } 1016 1010 1017 1011 struct kexec_load_limit {
+1
kernel/kthread.c
··· 1599 1599 1600 1600 WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD)); 1601 1601 WARN_ON_ONCE(tsk->mm); 1602 + WARN_ON_ONCE(!mm->user_ns); 1602 1603 1603 1604 /* 1604 1605 * It is possible for mm to be the same as tsk->active_mm, but
+6 -3
kernel/power/suspend.c
··· 349 349 if (pm_test_level == level) { 350 350 pr_info("suspend debug: Waiting for %d second(s).\n", 351 351 pm_test_delay); 352 - for (i = 0; i < pm_test_delay && !pm_wakeup_pending(); i++) 353 - msleep(1000); 354 - 352 + for (i = 0; i < pm_test_delay && !pm_wakeup_pending(); i++) { 353 + if (level > TEST_CORE) 354 + msleep(1000); 355 + else 356 + mdelay(1000); 357 + } 355 358 return 1; 356 359 } 357 360 #endif /* !CONFIG_PM_DEBUG */
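At TEST_CORE and below, the suspend test point runs after timekeeping has been suspended, so a sleeping msleep() could never be woken; the hunk busy-waits with mdelay() there instead. A userspace model of a sleep-free delay, using only POSIX clock_gettime() (the helper name is invented):

    #include <stdio.h>
    #include <time.h>

    /* mdelay()-style busy wait: makes progress even when nothing can put
     * the caller to sleep, at the cost of burning CPU. */
    static void busy_wait_ms(long ms)
    {
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000L +
                 (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
    }

    int main(void)
    {
        busy_wait_ms(100);  /* stands in for mdelay(100) */
        puts("done");
        return 0;
    }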
+10 -13
kernel/sched/ext.c
··· 1577 1577 * 1578 1578 * @p may go through multiple stopping <-> running transitions between 1579 1579 * here and put_prev_task_scx() if task attribute changes occur while 1580 - * balance_scx() leaves @rq unlocked. However, they don't contain any 1580 + * balance_one() leaves @rq unlocked. However, they don't contain any 1581 1581 * information meaningful to the BPF scheduler and can be suppressed by 1582 1582 * skipping the callbacks if the task is !QUEUED. 1583 1583 */ ··· 2372 2372 * preempted, and it regaining control of the CPU. 2373 2373 * 2374 2374 * ->cpu_release() complements ->cpu_acquire(), which is emitted the 2375 - * next time that balance_scx() is invoked. 2375 + * next time that balance_one() is invoked. 2376 2376 */ 2377 2377 if (!rq->scx.cpu_released) { 2378 2378 if (SCX_HAS_OP(sch, cpu_release)) { ··· 2478 2478 } 2479 2479 2480 2480 /* 2481 - * If balance_scx() is telling us to keep running @prev, replenish slice 2481 + * If balance_one() is telling us to keep running @prev, replenish slice 2482 2482 * if necessary and keep running @prev. Otherwise, pop the first one 2483 2483 * from the local DSQ. 2484 2484 */ ··· 3956 3956 nr_donor_target, nr_target); 3957 3957 } 3958 3958 3959 - for_each_cpu(cpu, resched_mask) { 3960 - struct rq *rq = cpu_rq(cpu); 3961 - 3962 - raw_spin_rq_lock_irq(rq); 3963 - resched_curr(rq); 3964 - raw_spin_rq_unlock_irq(rq); 3965 - } 3959 + for_each_cpu(cpu, resched_mask) 3960 + resched_cpu(cpu); 3966 3961 3967 3962 for_each_cpu_and(cpu, cpu_online_mask, node_mask) { 3968 3963 u32 nr = READ_ONCE(cpu_rq(cpu)->scx.bypass_dsq.nr); ··· 4020 4025 * 4021 4026 * - ops.dispatch() is ignored. 4022 4027 * 4023 - * - balance_scx() does not set %SCX_RQ_BAL_KEEP on non-zero slice as slice 4028 + * - balance_one() does not set %SCX_RQ_BAL_KEEP on non-zero slice as slice 4024 4029 * can't be trusted. Whenever a tick triggers, the running task is rotated to 4025 4030 * the tail of the queue with core_sched_at touched. 4026 4031 * ··· 4778 4783 } 4779 4784 4780 4785 sch->pcpu = alloc_percpu(struct scx_sched_pcpu); 4781 - if (!sch->pcpu) 4786 + if (!sch->pcpu) { 4787 + ret = -ENOMEM; 4782 4788 goto err_free_gdsqs; 4789 + } 4783 4790 4784 4791 sch->helper = kthread_run_worker(0, "sched_ext_helper"); 4785 4792 if (IS_ERR(sch->helper)) { ··· 6064 6067 /* 6065 6068 * A successfully consumed task can be dequeued before it starts 6066 6069 * running while the CPU is trying to migrate other dispatched 6067 - * tasks. Bump nr_tasks to tell balance_scx() to retry on empty 6070 + * tasks. Bump nr_tasks to tell balance_one() to retry on empty 6068 6071 * local DSQ. 6069 6072 */ 6070 6073 dspc->nr_tasks++;
+2
lib/idr.c
··· 40 40 41 41 if (WARN_ON_ONCE(!(idr->idr_rt.xa_flags & ROOT_IS_IDR))) 42 42 idr->idr_rt.xa_flags |= IDR_RT_MARKER; 43 + if (max < base) 44 + return -ENOSPC; 43 45 44 46 id = (id < base) ? 0 : id - base; 45 47 radix_tree_iter_init(&iter, id);
+1 -1
mm/damon/vaddr.c
··· 743 743 if (!folio) 744 744 continue; 745 745 if (damos_va_filter_out(s, folio, walk->vma, addr, pte, NULL)) 746 - return 0; 746 + continue; 747 747 damos_va_migrate_dests_add(folio, walk->vma, addr, dests, 748 748 migration_lists); 749 749 nr = folio_nr_pages(folio);
+32
mm/kasan/common.c
··· 28 28 #include <linux/string.h> 29 29 #include <linux/types.h> 30 30 #include <linux/bug.h> 31 + #include <linux/vmalloc.h> 31 32 32 33 #include "kasan.h" 33 34 #include "../slab.h" ··· 576 575 } 577 576 return true; 578 577 } 578 + 579 + #ifdef CONFIG_KASAN_VMALLOC 580 + void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms, 581 + kasan_vmalloc_flags_t flags) 582 + { 583 + unsigned long size; 584 + void *addr; 585 + int area; 586 + u8 tag; 587 + 588 + /* 589 + * If KASAN_VMALLOC_KEEP_TAG was set at this point, all vms[] pointers 590 + * would be unpoisoned with the KASAN_TAG_KERNEL which would disable 591 + * KASAN checks down the line. 592 + */ 593 + if (WARN_ON_ONCE(flags & KASAN_VMALLOC_KEEP_TAG)) 594 + return; 595 + 596 + size = vms[0]->size; 597 + addr = vms[0]->addr; 598 + vms[0]->addr = __kasan_unpoison_vmalloc(addr, size, flags); 599 + tag = get_tag(vms[0]->addr); 600 + 601 + for (area = 1 ; area < nr_vms ; area++) { 602 + size = vms[area]->size; 603 + addr = set_tag(vms[area]->addr, tag); 604 + vms[area]->addr = 605 + __kasan_unpoison_vmalloc(addr, size, flags | KASAN_VMALLOC_KEEP_TAG); 606 + } 607 + } 608 + #endif
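The new __kasan_unpoison_vmap_areas() gives vms[0] a fresh tag and then stamps that same tag onto the remaining areas via KASAN_VMALLOC_KEEP_TAG, so all areas of one allocation share a tag. A toy model of top-byte tagging using plain integers for addresses; the shift and masks mirror the idea, not the kernel's exact encoding:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_SHIFT 56

    static uint64_t set_tag(uint64_t addr, uint8_t tag)
    {
        return (addr & ~(0xffULL << TAG_SHIFT)) |
               ((uint64_t)tag << TAG_SHIFT);
    }

    static uint8_t get_tag(uint64_t addr)
    {
        return addr >> TAG_SHIFT;
    }

    int main(void)
    {
        uint64_t areas[3] = { 0x1000, 0x2000, 0x3000 };
        uint8_t tag = rand() & 0xff;  /* "random" tag for area 0 */
        int i;

        areas[0] = set_tag(areas[0], tag);
        /* remaining areas reuse (keep) area 0's tag */
        for (i = 1; i < 3; i++)
            areas[i] = set_tag(areas[i], get_tag(areas[0]));

        for (i = 0; i < 3; i++)
            printf("area %d: %#llx (tag %#x)\n", i,
                   (unsigned long long)areas[i], get_tag(areas[i]));
        return 0;
    }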
+1 -1
mm/kasan/hw_tags.c
··· 361 361 return (void *)start; 362 362 } 363 363 364 - tag = kasan_random_tag(); 364 + tag = (flags & KASAN_VMALLOC_KEEP_TAG) ? get_tag(start) : kasan_random_tag(); 365 365 start = set_tag(start, tag); 366 366 367 367 /* Unpoison and initialize memory up to size. */
+3 -1
mm/kasan/shadow.c
··· 631 631 !(flags & KASAN_VMALLOC_PROT_NORMAL)) 632 632 return (void *)start; 633 633 634 - start = set_tag(start, kasan_random_tag()); 634 + if (unlikely(!(flags & KASAN_VMALLOC_KEEP_TAG))) 635 + start = set_tag(start, kasan_random_tag()); 636 + 635 637 kasan_unpoison(start, size, false); 636 638 return (void *)start; 637 639 }
+1 -1
mm/ksm.c
··· 650 650 } 651 651 } 652 652 out_unlock: 653 - pte_unmap_unlock(ptep, ptl); 653 + pte_unmap_unlock(start_ptep, ptl); 654 654 return found; 655 655 } 656 656
+2 -2
mm/memcontrol.c
··· 5638 5638 memcg = root_mem_cgroup; 5639 5639 5640 5640 pr_warn("Memory cgroup min protection %lukB -- low protection %lukB", 5641 - K(atomic_long_read(&memcg->memory.children_min_usage)*PAGE_SIZE), 5642 - K(atomic_long_read(&memcg->memory.children_low_usage)*PAGE_SIZE)); 5641 + K(atomic_long_read(&memcg->memory.children_min_usage)), 5642 + K(atomic_long_read(&memcg->memory.children_low_usage))); 5643 5643 }
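children_min_usage and children_low_usage are page counts, and K() already scales pages to kB, so the removed *PAGE_SIZE overstated the printed values by a factor of PAGE_SIZE. A compilable demonstration of the double scaling, assuming a PAGE_SHIFT of 12:

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE (1UL << PAGE_SHIFT)
    #define K(x) ((x) << (PAGE_SHIFT - 10))  /* pages -> kB */

    int main(void)
    {
        unsigned long pages = 256;  /* 1 MiB worth of 4 KiB pages */

        printf("buggy: %lukB\n", K(pages * PAGE_SIZE)); /* 4194304kB */
        printf("fixed: %lukB\n", K(pages));             /* 1024kB */
        return 0;
    }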
+18 -11
mm/memory-failure.c
··· 2161 2161 { 2162 2162 guard(mutex)(&pfn_space_lock); 2163 2163 2164 + if (!pfn_space->pfn_to_vma_pgoff) 2165 + return -EINVAL; 2166 + 2164 2167 if (interval_tree_iter_first(&pfn_space_itree, 2165 2168 pfn_space->node.start, 2166 2169 pfn_space->node.last)) ··· 2186 2183 } 2187 2184 EXPORT_SYMBOL_GPL(unregister_pfn_address_space); 2188 2185 2189 - static void add_to_kill_pfn(struct task_struct *tsk, 2190 - struct vm_area_struct *vma, 2191 - struct list_head *to_kill, 2192 - unsigned long pfn) 2186 + static void add_to_kill_pgoff(struct task_struct *tsk, 2187 + struct vm_area_struct *vma, 2188 + struct list_head *to_kill, 2189 + pgoff_t pgoff) 2193 2190 { 2194 2191 struct to_kill *tk; 2195 2192 ··· 2200 2197 } 2201 2198 2202 2199 /* Check for pgoff not backed by struct page */ 2203 - tk->addr = vma_address(vma, pfn, 1); 2200 + tk->addr = vma_address(vma, pgoff, 1); 2204 2201 tk->size_shift = PAGE_SHIFT; 2205 2202 2206 2203 if (tk->addr == -EFAULT) 2207 2204 pr_info("Unable to find address %lx in %s\n", 2208 - pfn, tsk->comm); 2205 + pgoff, tsk->comm); 2209 2206 2210 2207 get_task_struct(tsk); 2211 2208 tk->tsk = tsk; ··· 2215 2212 /* 2216 2213 * Collect processes when the error hit a PFN not backed by struct page. 2217 2214 */ 2218 - static void collect_procs_pfn(struct address_space *mapping, 2215 + static void collect_procs_pfn(struct pfn_address_space *pfn_space, 2219 2216 unsigned long pfn, struct list_head *to_kill) 2220 2217 { 2221 2218 struct vm_area_struct *vma; 2222 2219 struct task_struct *tsk; 2220 + struct address_space *mapping = pfn_space->mapping; 2223 2221 2224 2222 i_mmap_lock_read(mapping); 2225 2223 rcu_read_lock(); ··· 2230 2226 t = task_early_kill(tsk, true); 2231 2227 if (!t) 2232 2228 continue; 2233 - vma_interval_tree_foreach(vma, &mapping->i_mmap, pfn, pfn) { 2234 - if (vma->vm_mm == t->mm) 2235 - add_to_kill_pfn(t, vma, to_kill, pfn); 2229 + vma_interval_tree_foreach(vma, &mapping->i_mmap, 0, ULONG_MAX) { 2230 + pgoff_t pgoff; 2231 + 2232 + if (vma->vm_mm == t->mm && 2233 + !pfn_space->pfn_to_vma_pgoff(vma, pfn, &pgoff)) 2234 + add_to_kill_pgoff(t, vma, to_kill, pgoff); 2236 2235 } 2237 2236 } 2238 2237 rcu_read_unlock(); ··· 2271 2264 struct pfn_address_space *pfn_space = 2272 2265 container_of(node, struct pfn_address_space, node); 2273 2266 2274 - collect_procs_pfn(pfn_space->mapping, pfn, &tokill); 2267 + collect_procs_pfn(pfn_space, pfn, &tokill); 2275 2268 2276 2269 mf_handled = true; 2277 2270 }
-2
mm/memremap.c
··· 427 427 if (folio_test_anon(folio)) { 428 428 for (i = 0; i < nr; i++) 429 429 __ClearPageAnonExclusive(folio_page(folio, i)); 430 - } else { 431 - VM_WARN_ON_ONCE(folio_test_large(folio)); 432 430 } 433 431 434 432 /*
+13 -13
mm/page_alloc.c
··· 914 914 NULL) != NULL; 915 915 } 916 916 917 + static void change_pageblock_range(struct page *pageblock_page, 918 + int start_order, int migratetype) 919 + { 920 + int nr_pageblocks = 1 << (start_order - pageblock_order); 921 + 922 + while (nr_pageblocks--) { 923 + set_pageblock_migratetype(pageblock_page, migratetype); 924 + pageblock_page += pageblock_nr_pages; 925 + } 926 + } 927 + 917 928 /* 918 929 * Freeing function for a buddy system allocator. 919 930 * ··· 1011 1000 * expand() down the line puts the sub-blocks 1012 1001 * on the right freelists. 1013 1002 */ 1014 - set_pageblock_migratetype(buddy, migratetype); 1003 + change_pageblock_range(buddy, order, migratetype); 1015 1004 } 1016 1005 1017 1006 combined_pfn = buddy_pfn & pfn; ··· 2157 2146 } 2158 2147 2159 2148 #endif /* CONFIG_MEMORY_ISOLATION */ 2160 - 2161 - static void change_pageblock_range(struct page *pageblock_page, 2162 - int start_order, int migratetype) 2163 - { 2164 - int nr_pageblocks = 1 << (start_order - pageblock_order); 2165 - 2166 - while (nr_pageblocks--) { 2167 - set_pageblock_migratetype(pageblock_page, migratetype); 2168 - pageblock_page += pageblock_nr_pages; 2169 - } 2170 - } 2171 2149 2172 2150 static inline bool boost_watermark(struct zone *zone) 2173 2151 { ··· 5924 5924 * recycled, this leads to the once large chunks of space being 5925 5925 * fragmented and becoming unavailable for high-order allocations. 5926 5926 */ 5927 - return 0; 5927 + return 1; 5928 5928 #endif 5929 5929 } 5930 5930
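The page_alloc hunk moves change_pageblock_range() above the buddy-merging code so that a merged buddy larger than one pageblock gets every pageblock it spans retyped, not just the first. A toy model of the pageblock arithmetic; the zone array and the chosen orders are invented:

    #include <stdio.h>

    #define PAGEBLOCK_ORDER 9
    #define PAGEBLOCK_NR_PAGES (1UL << PAGEBLOCK_ORDER)

    static int block_mt[8];  /* migratetype per pageblock, toy zone */

    static void change_pageblock_range(unsigned long pfn, int start_order,
                                       int mt)
    {
        int nr_pageblocks = 1 << (start_order - PAGEBLOCK_ORDER);

        while (nr_pageblocks--) {
            block_mt[pfn / PAGEBLOCK_NR_PAGES] = mt;
            pfn += PAGEBLOCK_NR_PAGES;
        }
    }

    int main(void)
    {
        int i;

        /* an order-11 buddy covers 1 << (11 - 9) = 4 pageblocks */
        change_pageblock_range(0, 11, 1);
        for (i = 0; i < 8; i++)
            printf("%d", block_mt[i]);
        printf("\n");  /* 11110000 */
        return 0;
    }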
+1 -1
mm/page_owner.c
··· 952 952 .open = page_owner_stack_open, 953 953 .read = seq_read, 954 954 .llseek = seq_lseek, 955 - .release = seq_release, 955 + .release = seq_release_private, 956 956 }; 957 957 958 958 static int page_owner_threshold_get(void *data, u64 *val)
+4 -4
mm/vmalloc.c
··· 4331 4331 */ 4332 4332 if (size <= alloced_size) { 4333 4333 kasan_unpoison_vmalloc(p + old_size, size - old_size, 4334 - KASAN_VMALLOC_PROT_NORMAL); 4334 + KASAN_VMALLOC_PROT_NORMAL | 4335 + KASAN_VMALLOC_VM_ALLOC | 4336 + KASAN_VMALLOC_KEEP_TAG); 4335 4337 /* 4336 4338 * No need to zero memory here, as unused memory will have 4337 4339 * already been zeroed at initial allocation time or during ··· 5027 5025 * With hardware tag-based KASAN, marking is skipped for 5028 5026 * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc(). 5029 5027 */ 5030 - for (area = 0; area < nr_vms; area++) 5031 - vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr, 5032 - vms[area]->size, KASAN_VMALLOC_PROT_NORMAL); 5028 + kasan_unpoison_vmap_areas(vms, nr_vms, KASAN_VMALLOC_PROT_NORMAL); 5033 5029 5034 5030 kfree(vas); 5035 5031 return vms;
+6
net/bluetooth/mgmt.c
··· 849 849 if (cis_peripheral_capable(hdev)) 850 850 settings |= MGMT_SETTING_CIS_PERIPHERAL; 851 851 852 + if (bis_capable(hdev)) 853 + settings |= MGMT_SETTING_ISO_BROADCASTER; 854 + 855 + if (sync_recv_capable(hdev)) 856 + settings |= MGMT_SETTING_ISO_SYNC_RECEIVER; 857 + 852 858 if (ll_privacy_capable(hdev)) 853 859 settings |= MGMT_SETTING_LL_PRIVACY; 854 860
+1
net/bridge/br_private.h
··· 247 247 * struct net_bridge_vlan_group 248 248 * 249 249 * @vlan_hash: VLAN entry rhashtable 250 + * @tunnel_hash: Hash table to map from tunnel key ID (e.g. VXLAN VNI) to VLAN 250 251 * @vlan_list: sorted VLAN entry list 251 252 * @num_vlans: number of total VLAN entries 252 253 * @pvid: PVID VLAN id
+5 -3
net/core/dev.c
··· 4241 4241 int count = 0; 4242 4242 4243 4243 llist_for_each_entry_safe(skb, next, ll_list, ll_node) { 4244 - prefetch(next); 4245 - prefetch(&next->priority); 4246 - skb_mark_not_on_list(skb); 4244 + if (next) { 4245 + prefetch(next); 4246 + prefetch(&next->priority); 4247 + skb_mark_not_on_list(skb); 4248 + } 4247 4249 rc = dev_qdisc_enqueue(skb, q, &to_free, txq); 4248 4250 count++; 4249 4251 }
+35 -32
net/dsa/dsa.c
··· 367 367 368 368 struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst) 369 369 { 370 - struct device_node *ethernet; 371 - struct net_device *conduit; 372 370 struct dsa_port *cpu_dp; 373 371 374 372 cpu_dp = dsa_tree_find_first_cpu(dst); 375 - ethernet = of_parse_phandle(cpu_dp->dn, "ethernet", 0); 376 - conduit = of_find_net_device_by_node(ethernet); 377 - of_node_put(ethernet); 378 - 379 - return conduit; 373 + return cpu_dp->conduit; 380 374 } 381 375 382 376 /* Assign the default CPU port (the first one in the tree) to all ports of the ··· 1247 1253 if (ethernet) { 1248 1254 struct net_device *conduit; 1249 1255 const char *user_protocol; 1256 + int err; 1250 1257 1258 + rtnl_lock(); 1251 1259 conduit = of_find_net_device_by_node(ethernet); 1252 1260 of_node_put(ethernet); 1253 - if (!conduit) 1261 + if (!conduit) { 1262 + rtnl_unlock(); 1254 1263 return -EPROBE_DEFER; 1264 + } 1265 + 1266 + netdev_hold(conduit, &dp->conduit_tracker, GFP_KERNEL); 1267 + put_device(&conduit->dev); 1268 + rtnl_unlock(); 1255 1269 1256 1270 user_protocol = of_get_property(dn, "dsa-tag-protocol", NULL); 1257 - return dsa_port_parse_cpu(dp, conduit, user_protocol); 1271 + err = dsa_port_parse_cpu(dp, conduit, user_protocol); 1272 + if (err) 1273 + netdev_put(conduit, &dp->conduit_tracker); 1274 + return err; 1258 1275 } 1259 1276 1260 1277 if (link) ··· 1398 1393 return device_find_child(parent, class, dev_is_class); 1399 1394 } 1400 1395 1401 - static struct net_device *dsa_dev_to_net_device(struct device *dev) 1402 - { 1403 - struct device *d; 1404 - 1405 - d = dev_find_class(dev, "net"); 1406 - if (d != NULL) { 1407 - struct net_device *nd; 1408 - 1409 - nd = to_net_dev(d); 1410 - dev_hold(nd); 1411 - put_device(d); 1412 - 1413 - return nd; 1414 - } 1415 - 1416 - return NULL; 1417 - } 1418 - 1419 1396 static int dsa_port_parse(struct dsa_port *dp, const char *name, 1420 1397 struct device *dev) 1421 1398 { 1422 1399 if (!strcmp(name, "cpu")) { 1423 1400 struct net_device *conduit; 1401 + struct device *d; 1402 + int err; 1424 1403 1425 - conduit = dsa_dev_to_net_device(dev); 1426 - if (!conduit) 1404 + rtnl_lock(); 1405 + d = dev_find_class(dev, "net"); 1406 + if (!d) { 1407 + rtnl_unlock(); 1427 1408 return -EPROBE_DEFER; 1409 + } 1428 1410 1429 - dev_put(conduit); 1411 + conduit = to_net_dev(d); 1412 + netdev_hold(conduit, &dp->conduit_tracker, GFP_KERNEL); 1413 + put_device(d); 1414 + rtnl_unlock(); 1430 1415 1431 - return dsa_port_parse_cpu(dp, conduit, NULL); 1416 + err = dsa_port_parse_cpu(dp, conduit, NULL); 1417 + if (err) 1418 + netdev_put(conduit, &dp->conduit_tracker); 1419 + return err; 1432 1420 } 1433 1421 1434 1422 if (!strcmp(name, "dsa")) ··· 1489 1491 struct dsa_vlan *v, *n; 1490 1492 1491 1493 dsa_switch_for_each_port_safe(dp, next, ds) { 1494 + if (dsa_port_is_cpu(dp) && dp->conduit) 1495 + netdev_put(dp->conduit, &dp->conduit_tracker); 1496 + 1492 1497 /* These are either entries that upper layers lost track of 1493 1498 * (probably due to bugs), or installed through interfaces 1494 1499 * where one does not necessarily have to remove them, like ··· 1636 1635 /* Disconnect from further netdevice notifiers on the conduit, 1637 1636 * since netdev_uses_dsa() will now return false. 1638 1637 */ 1639 - dsa_switch_for_each_cpu_port(dp, ds) 1638 + dsa_switch_for_each_cpu_port(dp, ds) { 1640 1639 dp->conduit->dsa_ptr = NULL; 1640 + netdev_put(dp->conduit, &dp->conduit_tracker); 1641 + } 1641 1642 1642 1643 rtnl_unlock(); 1643 1644 out:
+2 -1
net/handshake/netlink.c
··· 126 126 } 127 127 128 128 out_complete: 129 - handshake_complete(req, -EIO, NULL); 129 + if (req) 130 + handshake_complete(req, -EIO, NULL); 130 131 out_status: 131 132 trace_handshake_cmd_accept_err(net, req, NULL, err); 132 133 return err;
+10 -16
net/ipv4/fib_semantics.c
··· 2167 2167 { 2168 2168 struct fib_info *fi = res->fi; 2169 2169 struct net *net = fi->fib_net; 2170 - bool found = false; 2171 2170 bool use_neigh; 2171 + int score = -1; 2172 2172 __be32 saddr; 2173 2173 2174 2174 if (unlikely(res->fi->nh)) { ··· 2180 2180 saddr = fl4 ? fl4->saddr : 0; 2181 2181 2182 2182 change_nexthops(fi) { 2183 - int nh_upper_bound; 2183 + int nh_upper_bound, nh_score = 0; 2184 2184 2185 2185 /* Nexthops without a carrier are assigned an upper bound of 2186 2186 * minus one when "ignore_routes_with_linkdown" is set. ··· 2190 2190 (use_neigh && !fib_good_nh(nexthop_nh))) 2191 2191 continue; 2192 2192 2193 - if (!found) { 2193 + if (saddr && nexthop_nh->nh_saddr == saddr) 2194 + nh_score += 2; 2195 + if (hash <= nh_upper_bound) 2196 + nh_score++; 2197 + if (score < nh_score) { 2194 2198 res->nh_sel = nhsel; 2195 2199 res->nhc = &nexthop_nh->nh_common; 2196 - found = !saddr || nexthop_nh->nh_saddr == saddr; 2200 + if (nh_score == 3 || (!saddr && nh_score == 1)) 2201 + return; 2202 + score = nh_score; 2197 2203 } 2198 - 2199 - if (hash > nh_upper_bound) 2200 - continue; 2201 - 2202 - if (!saddr || nexthop_nh->nh_saddr == saddr) { 2203 - res->nh_sel = nhsel; 2204 - res->nhc = &nexthop_nh->nh_common; 2205 - return; 2206 - } 2207 - 2208 - if (found) 2209 - return; 2210 2204 2211 2205 } endfor_nexthops(fi); 2212 2206 }
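The rewritten loop above replaces the old found/early-exit logic with an explicit score: +2 for a preferred-source match, +1 for the flow hash landing within a nexthop's upper bound, first best score wins, with an early exit once no better score is possible. A standalone sketch of that selection; struct nh and the sample addresses are invented:

    #include <stdint.h>
    #include <stdio.h>

    struct nh { uint32_t saddr; int upper_bound; };

    static int select_nh(const struct nh *nhs, int n, uint32_t saddr,
                         int hash)
    {
        int best = -1, sel = -1, i;

        for (i = 0; i < n; i++) {
            int score = 0;

            if (saddr && nhs[i].saddr == saddr)
                score += 2;  /* preferred source matches */
            if (hash <= nhs[i].upper_bound)
                score += 1;  /* hash lands in this nexthop's band */
            if (score > best) {
                sel = i;
                best = score;
                if (score == 3 || (!saddr && score == 1))
                    break;   /* nothing can score higher, stop */
            }
        }
        return sel;
    }

    int main(void)
    {
        struct nh nhs[] = { { 0x0a000001, 10 }, { 0x0a000002, 20 } };

        /* saddr matches nh 0 but the hash points at nh 1: the higher
         * combined score (2 vs 1) keeps nh 0 */
        printf("selected nexthop: %d\n",
               select_nh(nhs, 2, 0x0a000001, 15));
        return 0;
    }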
+4 -3
net/ipv4/fib_trie.c
··· 2053 2053 continue; 2054 2054 } 2055 2055 2056 - /* Do not flush error routes if network namespace is 2057 - * not being dismantled 2056 + /* When not flushing the entire table, skip error 2057 + * routes that are not marked for deletion. 2058 2058 */ 2059 - if (!flush_all && fib_props[fa->fa_type].error) { 2059 + if (!flush_all && fib_props[fa->fa_type].error && 2060 + !(fi->fib_flags & RTNH_F_DEAD)) { 2060 2061 slen = fa->fa_slen; 2061 2062 continue; 2062 2063 }
+4 -2
net/ipv4/ip_gre.c
··· 330 330 if (!tun_dst) 331 331 return PACKET_REJECT; 332 332 333 + /* MUST set options_len before referencing options */ 334 + info = &tun_dst->u.tun_info; 335 + info->options_len = sizeof(*md); 336 + 333 337 /* skb can be uncloned in __iptunnel_pull_header, so 334 338 * old pkt_md is no longer valid and we need to reset 335 339 * it ··· 348 344 memcpy(md2, pkt_md, ver == 1 ? ERSPAN_V1_MDSIZE : 349 345 ERSPAN_V2_MDSIZE); 350 346 351 - info = &tun_dst->u.tun_info; 352 347 __set_bit(IP_TUNNEL_ERSPAN_OPT_BIT, 353 348 info->key.tun_flags); 354 - info->options_len = sizeof(*md); 355 349 } 356 350 357 351 skb_reset_mac_header(skb);
+2 -1
net/ipv6/calipso.c
··· 1342 1342 /* At this point new_end aligns to 4n, so (new_end & 4) pads to 8n */ 1343 1343 pad = ((new_end & 4) + (end & 7)) & 7; 1344 1344 len_delta = new_end - (int)end + pad; 1345 - ret_val = skb_cow(skb, skb_headroom(skb) + len_delta); 1345 + ret_val = skb_cow(skb, 1346 + skb_headroom(skb) + (len_delta > 0 ? len_delta : 0)); 1346 1347 if (ret_val < 0) 1347 1348 return ret_val; 1348 1349
+12 -3
net/ipv6/ip6_gre.c
··· 535 535 if (!tun_dst) 536 536 return PACKET_REJECT; 537 537 538 + /* MUST set options_len before referencing options */ 539 + info = &tun_dst->u.tun_info; 540 + info->options_len = sizeof(*md); 541 + 538 542 /* skb can be uncloned in __iptunnel_pull_header, so 539 543 * old pkt_md is no longer valid and we need to reset 540 544 * it ··· 547 543 skb_network_header_len(skb); 548 544 pkt_md = (struct erspan_metadata *)(gh + gre_hdr_len + 549 545 sizeof(*ershdr)); 550 - info = &tun_dst->u.tun_info; 551 546 md = ip_tunnel_info_opts(info); 552 547 md->version = ver; 553 548 md2 = &md->u.md2; ··· 554 551 ERSPAN_V2_MDSIZE); 555 552 __set_bit(IP_TUNNEL_ERSPAN_OPT_BIT, 556 553 info->key.tun_flags); 557 - info->options_len = sizeof(*md); 558 554 559 555 ip6_tnl_rcv(tunnel, skb, tpi, tun_dst, log_ecn_error); 560 556 ··· 1368 1366 { 1369 1367 struct ip6_tnl *t = netdev_priv(dev); 1370 1368 struct ipv6hdr *ipv6h; 1369 + int needed; 1371 1370 __be16 *p; 1372 1371 1373 - ipv6h = skb_push(skb, t->hlen + sizeof(*ipv6h)); 1372 + needed = t->hlen + sizeof(*ipv6h); 1373 + if (skb_headroom(skb) < needed && 1374 + pskb_expand_head(skb, HH_DATA_ALIGN(needed - skb_headroom(skb)), 1375 + 0, GFP_ATOMIC)) 1376 + return -needed; 1377 + 1378 + ipv6h = skb_push(skb, needed); 1374 1379 ip6_flow_hdr(ipv6h, 0, ip6_make_flowlabel(dev_net(dev), skb, 1375 1380 t->fl.u.ip6.flowlabel, 1376 1381 true, &t->fl.u.ip6));
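The header_ops fix above stops assuming the skb has room for the tunnel headers: it measures headroom first and expands the head before pushing would underflow. A userspace model of the push-with-reserve pattern; the toy struct and sizes are illustrative only:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy "skb": payload sits at data+off, bytes before it are headroom. */
    struct buf {
        unsigned char *data;
        size_t off, len, cap;
    };

    /* Grow headroom so that at least `needed` bytes can be pushed. */
    static int expand_head(struct buf *b, size_t needed)
    {
        size_t extra = needed - b->off;
        unsigned char *n = malloc(b->cap + extra);

        if (!n)
            return -1;
        memcpy(n + b->off + extra, b->data + b->off, b->len);
        free(b->data);
        b->data = n;
        b->off += extra;
        b->cap += extra;
        return 0;
    }

    static unsigned char *push(struct buf *b, size_t needed)
    {
        if (b->off < needed && expand_head(b, needed))
            return NULL;
        b->off -= needed;
        b->len += needed;
        return b->data + b->off;
    }

    int main(void)
    {
        struct buf b = { calloc(1, 64), 8, 56, 64 };

        /* pushing a 40-byte IPv6 header with only 8 bytes of headroom
         * takes the expand path instead of underflowing */
        printf("%s\n", push(&b, 40) ? "pushed" : "failed");
        return 0;
    }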
+12 -1
net/ipv6/route.c
··· 1470 1470 1471 1471 p = this_cpu_ptr(res->nh->rt6i_pcpu); 1472 1472 prev = cmpxchg(p, NULL, pcpu_rt); 1473 - BUG_ON(prev); 1473 + if (unlikely(prev)) { 1474 + /* 1475 + * Another task on this CPU already installed a pcpu_rt. 1476 + * This can happen on PREEMPT_RT where preemption is possible. 1477 + * Free our allocation and return the existing one. 1478 + */ 1479 + WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT_RT)); 1480 + 1481 + dst_dev_put(&pcpu_rt->dst); 1482 + dst_release(&pcpu_rt->dst); 1483 + return prev; 1484 + } 1474 1485 1475 1486 if (res->f6i->fib6_destroying) { 1476 1487 struct fib6_info *from;
-10
net/mac80211/cfg.c
··· 1345 1345 1346 1346 size = sizeof(*new) + new_head_len + new_tail_len; 1347 1347 1348 - /* new or old multiple BSSID elements? */ 1349 1348 if (params->mbssid_ies) { 1350 1349 mbssid = params->mbssid_ies; 1351 1350 size += struct_size(new->mbssid_ies, elem, mbssid->cnt); 1352 1351 if (params->rnr_ies) { 1353 1352 rnr = params->rnr_ies; 1354 - size += struct_size(new->rnr_ies, elem, rnr->cnt); 1355 - } 1356 - size += ieee80211_get_mbssid_beacon_len(mbssid, rnr, 1357 - mbssid->cnt); 1358 - } else if (old && old->mbssid_ies) { 1359 - mbssid = old->mbssid_ies; 1360 - size += struct_size(new->mbssid_ies, elem, mbssid->cnt); 1361 - if (old && old->rnr_ies) { 1362 - rnr = old->rnr_ies; 1363 1353 size += struct_size(new->rnr_ies, elem, rnr->cnt); 1364 1354 } 1365 1355 size += ieee80211_get_mbssid_beacon_len(mbssid, rnr,
+1 -1
net/mac80211/iface.c
··· 1251 1251 if (!creator_sdata) { 1252 1252 struct ieee80211_sub_if_data *other; 1253 1253 1254 - list_for_each_entry(other, &local->mon_list, list) { 1254 + list_for_each_entry_rcu(other, &local->mon_list, u.mntr.list) { 1255 1255 if (!other->vif.bss_conf.mu_mimo_owner) 1256 1256 continue; 1257 1257
+4 -1
net/mac80211/mlme.c
··· 1126 1126 1127 1127 while (!ieee80211_chandef_usable(sdata, &chanreq->oper, 1128 1128 IEEE80211_CHAN_DISABLED)) { 1129 - if (WARN_ON(chanreq->oper.width == NL80211_CHAN_WIDTH_20_NOHT)) { 1129 + if (chanreq->oper.width == NL80211_CHAN_WIDTH_20_NOHT) { 1130 + link_id_info(sdata, link_id, 1131 + "unusable channel (%d MHz) for connection\n", 1132 + chanreq->oper.chan->center_freq); 1130 1133 ret = -EINVAL; 1131 1134 goto free; 1132 1135 }
+3
net/mac80211/ocb.c
··· 47 47 struct sta_info *sta; 48 48 int band; 49 49 50 + if (!ifocb->joined) 51 + return; 52 + 50 53 /* XXX: Consider removing the least recently used entry and 51 54 * allow new one to be added. 52 55 */
+5
net/mac80211/rx.c
··· 3511 3511 rx->skb->len < IEEE80211_MIN_ACTION_SIZE) 3512 3512 return RX_DROP_U_RUNT_ACTION; 3513 3513 3514 + /* Drop non-broadcast Beacon frames */ 3515 + if (ieee80211_is_beacon(mgmt->frame_control) && 3516 + !is_broadcast_ether_addr(mgmt->da)) 3517 + return RX_DROP; 3518 + 3514 3519 if (rx->sdata->vif.type == NL80211_IFTYPE_AP && 3515 3520 ieee80211_is_beacon(mgmt->frame_control) && 3516 3521 !(rx->flags & IEEE80211_RX_BEACON_REPORTED)) {
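The added check drops beacons whose destination address is not the broadcast address, as 802.11 requires beacons to be broadcast. The kernel's is_broadcast_ether_addr() boils down to AND-ing the six octets together; a standalone equivalent:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool is_broadcast_ether_addr(const uint8_t a[6])
    {
        return (a[0] & a[1] & a[2] & a[3] & a[4] & a[5]) == 0xff;
    }

    int main(void)
    {
        uint8_t bcast[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
        uint8_t ucast[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

        printf("%d %d\n", is_broadcast_ether_addr(bcast),
               is_broadcast_ether_addr(ucast));  /* 1 0 */
        return 0;
    }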
+10
net/mptcp/options.c
··· 408 408 */ 409 409 subflow->snd_isn = TCP_SKB_CB(skb)->end_seq; 410 410 if (subflow->request_mptcp) { 411 + if (unlikely(subflow_simultaneous_connect(sk))) { 412 + WARN_ON_ONCE(!mptcp_try_fallback(sk, MPTCP_MIB_SIMULTCONNFALLBACK)); 413 + 414 + /* Ensure mptcp_finish_connect() will not process the 415 + * MPC handshake. 416 + */ 417 + subflow->request_mptcp = 0; 418 + return false; 419 + } 420 + 411 421 opts->suboptions = OPTION_MPTCP_MPC_SYN; 412 422 opts->csum_reqd = mptcp_is_checksum_enabled(sock_net(sk)); 413 423 opts->allow_join_id0 = mptcp_allow_join_id0(sock_net(sk));
+5 -3
net/mptcp/protocol.c
··· 2467 2467 */ 2468 2468 static void __mptcp_subflow_disconnect(struct sock *ssk, 2469 2469 struct mptcp_subflow_context *subflow, 2470 - unsigned int flags) 2470 + bool fastclosing) 2471 2471 { 2472 2472 if (((1 << ssk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) || 2473 - subflow->send_fastclose) { 2473 + fastclosing) { 2474 2474 /* The MPTCP code never wait on the subflow sockets, TCP-level 2475 2475 * disconnect should never fail 2476 2476 */ ··· 2538 2538 2539 2539 need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk); 2540 2540 if (!dispose_it) { 2541 - __mptcp_subflow_disconnect(ssk, subflow, flags); 2541 + __mptcp_subflow_disconnect(ssk, subflow, msk->fastclosing); 2542 2542 release_sock(ssk); 2543 2543 2544 2544 goto out; ··· 2884 2884 2885 2885 mptcp_set_state(sk, TCP_CLOSE); 2886 2886 mptcp_backlog_purge(sk); 2887 + msk->fastclosing = 1; 2887 2888 2888 2889 /* Explicitly send the fastclose reset as need */ 2889 2890 if (__mptcp_check_fallback(msk)) ··· 3419 3418 msk->bytes_sent = 0; 3420 3419 msk->bytes_retrans = 0; 3421 3420 msk->rcvspace_init = 0; 3421 + msk->fastclosing = 0; 3422 3422 3423 3423 /* for fallback's sake */ 3424 3424 WRITE_ONCE(msk->ack_seq, 0);
+4 -5
net/mptcp/protocol.h
··· 320 320 fastopening:1, 321 321 in_accept_queue:1, 322 322 free_first:1, 323 - rcvspace_init:1; 323 + rcvspace_init:1, 324 + fastclosing:1; 324 325 u32 notsent_lowat; 325 326 int keepalive_cnt; 326 327 int keepalive_idle; ··· 1338 1337 { 1339 1338 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 1340 1339 1341 - return (1 << sk->sk_state) & 1342 - (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | TCPF_CLOSING) && 1343 - is_active_ssk(subflow) && 1344 - !subflow->conn_finished; 1340 + /* Note that the sk state implies !subflow->conn_finished. */ 1341 + return sk->sk_state == TCP_SYN_RECV && is_active_ssk(subflow); 1345 1342 } 1346 1343 1347 1344 #ifdef CONFIG_SYN_COOKIES
-6
net/mptcp/subflow.c
··· 1878 1878 1879 1879 __subflow_state_change(sk); 1880 1880 1881 - if (subflow_simultaneous_connect(sk)) { 1882 - WARN_ON_ONCE(!mptcp_try_fallback(sk, MPTCP_MIB_SIMULTCONNFALLBACK)); 1883 - subflow->conn_finished = 1; 1884 - mptcp_propagate_state(parent, sk, subflow, NULL); 1885 - } 1886 - 1887 1881 /* as recvmsg() does not acquire the subflow socket for ssk selection 1888 1882 * a fin packet carrying a DSS can be unnoticed if we don't trigger 1889 1883 * the data available machinery here.
+7 -2
net/nfc/core.c
··· 1154 1154 void nfc_unregister_device(struct nfc_dev *dev) 1155 1155 { 1156 1156 int rc; 1157 + struct rfkill *rfk = NULL; 1157 1158 1158 1159 pr_debug("dev_name=%s\n", dev_name(&dev->dev)); 1159 1160 ··· 1165 1164 1166 1165 device_lock(&dev->dev); 1167 1166 if (dev->rfkill) { 1168 - rfkill_unregister(dev->rfkill); 1169 - rfkill_destroy(dev->rfkill); 1167 + rfk = dev->rfkill; 1170 1168 dev->rfkill = NULL; 1171 1169 } 1172 1170 dev->shutting_down = true; 1173 1171 device_unlock(&dev->dev); 1172 + 1173 + if (rfk) { 1174 + rfkill_unregister(rfk); 1175 + rfkill_destroy(rfk); 1176 + } 1174 1177 1175 1178 if (dev->ops->check_presence) { 1176 1179 timer_delete_sync(&dev->check_pres_timer);
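The nfc fix snapshots dev->rfkill and clears the field under device_lock(), then performs the rfkill teardown only after dropping the lock, presumably because rfkill_unregister() may block or take locks that nest badly with device_lock. A pthread model of the steal-then-destroy pattern, with all names invented:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct dev {
        pthread_mutex_t lock;
        char *rfkill;  /* stands in for the rfkill object */
    };

    static void destroy_rfkill(char *rfk)
    {
        /* may block or take other locks, so it must run unlocked */
        printf("destroying %s\n", rfk);
        free(rfk);
    }

    static void unregister_dev(struct dev *d)
    {
        char *rfk = NULL;

        pthread_mutex_lock(&d->lock);
        if (d->rfkill) {
            rfk = d->rfkill;  /* steal the pointer... */
            d->rfkill = NULL; /* ...so nobody else can reach it */
        }
        pthread_mutex_unlock(&d->lock);

        if (rfk)
            destroy_rfkill(rfk);  /* tear down outside the lock */
    }

    int main(void)
    {
        struct dev d = { PTHREAD_MUTEX_INITIALIZER, strdup("rfkill0") };

        unregister_dev(&d);
        return 0;
    }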
+13 -4
net/openvswitch/vport-netdev.c
··· 160 160 161 161 static void netdev_destroy(struct vport *vport) 162 162 { 163 - rtnl_lock(); 164 - if (netif_is_ovs_port(vport->dev)) 165 - ovs_netdev_detach_dev(vport); 166 - rtnl_unlock(); 163 + /* When called from ovs_dp_notify_wq() after a dp_device_event(), the 164 + * port has already been detached, so we can avoid taking the RTNL by 165 + * checking this first. 166 + */ 167 + if (netif_is_ovs_port(vport->dev)) { 168 + rtnl_lock(); 169 + /* Check again while holding the lock to ensure we don't race 170 + * with the netdev notifier and detach twice. 171 + */ 172 + if (netif_is_ovs_port(vport->dev)) 173 + ovs_netdev_detach_dev(vport); 174 + rtnl_unlock(); 175 + } 167 176 168 177 call_rcu(&vport->rcu, vport_netdev_free); 169 178 }
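The comments in the hunk spell out the double-checked pattern: a cheap unlocked test skips the RTNL when the port is already detached, and the test is repeated under the lock to close the race. A minimal model with a pthread mutex, ignoring the memory-ordering caveats a real unlocked read needs:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t rtnl = PTHREAD_MUTEX_INITIALIZER; /* "RTNL" */
    static bool attached = true;

    static void detach_port(void)
    {
        printf("detached\n");
        attached = false;
    }

    static void destroy_port(void)
    {
        /* unlocked fast path: skip the lock when already detached */
        if (attached) {
            pthread_mutex_lock(&rtnl);
            /* recheck under the lock: another path may have detached
             * the port between the first check and lock acquisition */
            if (attached)
                detach_port();
            pthread_mutex_unlock(&rtnl);
        }
    }

    int main(void)
    {
        destroy_port();
        destroy_port();  /* second call takes the fast path */
        return 0;
    }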
+1 -1
net/rose/af_rose.c
··· 205 205 spin_unlock_bh(&rose_list_lock); 206 206 207 207 for (i = 0; i < cnt; i++) { 208 - sk = array[cnt]; 208 + sk = array[i]; 209 209 rose = rose_sk(sk); 210 210 lock_sock(sk); 211 211 spin_lock_bh(&rose_list_lock);
+2 -1
net/sunrpc/auth_gss/svcauth_gss.c
··· 1083 1083 } 1084 1084 1085 1085 length = min_t(unsigned int, inlen, (char *)xdr->end - (char *)xdr->p); 1086 - memcpy(page_address(in_token->pages[0]), xdr->p, length); 1086 + if (length) 1087 + memcpy(page_address(in_token->pages[0]), xdr->p, length); 1087 1088 inlen -= length; 1088 1089 1089 1090 to_offs = length;
+5 -2
net/sunrpc/xprtrdma/svc_rdma_rw.c
··· 841 841 for (page_no = 0; page_no < numpages; page_no++) { 842 842 unsigned int page_len; 843 843 844 + if (head->rc_curpage >= rqstp->rq_maxpages) 845 + return -EINVAL; 846 + 844 847 page_len = min_t(unsigned int, remaining, 845 848 PAGE_SIZE - head->rc_pageoff); 846 849 ··· 851 848 head->rc_page_count++; 852 849 853 850 dst = page_address(rqstp->rq_pages[head->rc_curpage]); 854 - memcpy(dst + head->rc_curpage, src + offset, page_len); 851 + memcpy((unsigned char *)dst + head->rc_pageoff, src + offset, page_len); 855 852 856 853 head->rc_readbytes += page_len; 857 854 head->rc_pageoff += page_len; ··· 863 860 offset += page_len; 864 861 } 865 862 866 - return -EINVAL; 863 + return 0; 867 864 } 868 865 869 866 /**
+8 -3
net/unix/af_unix.c
··· 2904 2904 unsigned int last_len; 2905 2905 struct unix_sock *u; 2906 2906 int copied = 0; 2907 + bool do_cmsg; 2907 2908 int err = 0; 2908 2909 long timeo; 2909 2910 int target; ··· 2930 2929 2931 2930 u = unix_sk(sk); 2932 2931 2932 + do_cmsg = READ_ONCE(u->recvmsg_inq); 2933 + if (do_cmsg) 2934 + msg->msg_get_inq = 1; 2933 2935 redo: 2934 2936 /* Lock the socket to prevent queue disordering 2935 2937 * while sleeps in memcpy_tomsg ··· 3092 3088 if (msg) { 3093 3089 scm_recv_unix(sock, msg, &scm, flags); 3094 3090 3095 - if (READ_ONCE(u->recvmsg_inq) || msg->msg_get_inq) { 3091 + if (msg->msg_get_inq && (copied ?: err) >= 0) { 3096 3092 msg->msg_inq = READ_ONCE(u->inq_len); 3097 - put_cmsg(msg, SOL_SOCKET, SCM_INQ, 3098 - sizeof(msg->msg_inq), &msg->msg_inq); 3093 + if (do_cmsg) 3094 + put_cmsg(msg, SOL_SOCKET, SCM_INQ, 3095 + sizeof(msg->msg_inq), &msg->msg_inq); 3099 3096 } 3100 3097 } else { 3101 3098 scm_destroy(&scm);
+1 -1
net/wireless/sme.c
··· 910 910 911 911 ssid_len = min(ssid->datalen, IEEE80211_MAX_SSID_LEN); 912 912 memcpy(wdev->u.client.ssid, ssid->data, ssid_len); 913 - wdev->u.client.ssid_len = ssid->datalen; 913 + wdev->u.client.ssid_len = ssid_len; 914 914 break; 915 915 } 916 916 rcu_read_unlock();
+21
rust/helpers/dma.c
··· 19 19 { 20 20 return dma_set_mask_and_coherent(dev, mask); 21 21 } 22 + 23 + int rust_helper_dma_set_mask(struct device *dev, u64 mask) 24 + { 25 + return dma_set_mask(dev, mask); 26 + } 27 + 28 + int rust_helper_dma_set_coherent_mask(struct device *dev, u64 mask) 29 + { 30 + return dma_set_coherent_mask(dev, mask); 31 + } 32 + 33 + int rust_helper_dma_map_sgtable(struct device *dev, struct sg_table *sgt, 34 + enum dma_data_direction dir, unsigned long attrs) 35 + { 36 + return dma_map_sgtable(dev, sgt, dir, attrs); 37 + } 38 + 39 + size_t rust_helper_dma_max_mapping_size(struct device *dev) 40 + { 41 + return dma_max_mapping_size(dev); 42 + }
+10 -1
rust/kernel/maple_tree.rs
··· 265 265 loop { 266 266 // This uses the raw accessor because we're destroying pointers without removing them 267 267 // from the maple tree, which is only valid because this is the destructor. 268 - let ptr = ma_state.mas_find_raw(usize::MAX); 268 + // 269 + // Take the rcu lock because mas_find_raw() requires that you hold either the spinlock 270 + // or the rcu read lock. This is only really required if memory reclaim might 271 + // reallocate entries in the tree, as we otherwise have exclusive access. That feature 272 + // doesn't exist yet, so for now, taking the rcu lock only serves the purpose of 273 + // silencing lockdep. 274 + let ptr = { 275 + let _rcu = kernel::sync::rcu::Guard::new(); 276 + ma_state.mas_find_raw(usize::MAX) 277 + }; 269 278 if ptr.is_null() { 270 279 break; 271 280 }
+4 -4
samples/ftrace/ftrace-direct-modify.c
··· 176 176 " st.d $t0, $sp, 0\n" 177 177 " st.d $ra, $sp, 8\n" 178 178 " bl my_direct_func1\n" 179 - " ld.d $t0, $sp, 0\n" 180 - " ld.d $ra, $sp, 8\n" 179 + " ld.d $ra, $sp, 0\n" 180 + " ld.d $t0, $sp, 8\n" 181 181 " addi.d $sp, $sp, 16\n" 182 182 " jr $t0\n" 183 183 " .size my_tramp1, .-my_tramp1\n" ··· 189 189 " st.d $t0, $sp, 0\n" 190 190 " st.d $ra, $sp, 8\n" 191 191 " bl my_direct_func2\n" 192 - " ld.d $t0, $sp, 0\n" 193 - " ld.d $ra, $sp, 8\n" 192 + " ld.d $ra, $sp, 0\n" 193 + " ld.d $t0, $sp, 8\n" 194 194 " addi.d $sp, $sp, 16\n" 195 195 " jr $t0\n" 196 196 " .size my_tramp2, .-my_tramp2\n"
+4 -4
samples/ftrace/ftrace-direct-multi-modify.c
··· 199 199 " move $a0, $t0\n" 200 200 " bl my_direct_func1\n" 201 201 " ld.d $a0, $sp, 0\n" 202 - " ld.d $t0, $sp, 8\n" 203 - " ld.d $ra, $sp, 16\n" 202 + " ld.d $ra, $sp, 8\n" 203 + " ld.d $t0, $sp, 16\n" 204 204 " addi.d $sp, $sp, 32\n" 205 205 " jr $t0\n" 206 206 " .size my_tramp1, .-my_tramp1\n" ··· 215 215 " move $a0, $t0\n" 216 216 " bl my_direct_func2\n" 217 217 " ld.d $a0, $sp, 0\n" 218 - " ld.d $t0, $sp, 8\n" 219 - " ld.d $ra, $sp, 16\n" 218 + " ld.d $ra, $sp, 8\n" 219 + " ld.d $t0, $sp, 16\n" 220 220 " addi.d $sp, $sp, 32\n" 221 221 " jr $t0\n" 222 222 " .size my_tramp2, .-my_tramp2\n"
+2 -2
samples/ftrace/ftrace-direct-multi.c
··· 131 131 " move $a0, $t0\n" 132 132 " bl my_direct_func\n" 133 133 " ld.d $a0, $sp, 0\n" 134 - " ld.d $t0, $sp, 8\n" 135 - " ld.d $ra, $sp, 16\n" 134 + " ld.d $ra, $sp, 8\n" 135 + " ld.d $t0, $sp, 16\n" 136 136 " addi.d $sp, $sp, 32\n" 137 137 " jr $t0\n" 138 138 " .size my_tramp, .-my_tramp\n"
+2 -2
samples/ftrace/ftrace-direct-too.c
··· 143 143 " ld.d $a0, $sp, 0\n" 144 144 " ld.d $a1, $sp, 8\n" 145 145 " ld.d $a2, $sp, 16\n" 146 - " ld.d $t0, $sp, 24\n" 147 - " ld.d $ra, $sp, 32\n" 146 + " ld.d $ra, $sp, 24\n" 147 + " ld.d $t0, $sp, 32\n" 148 148 " addi.d $sp, $sp, 48\n" 149 149 " jr $t0\n" 150 150 " .size my_tramp, .-my_tramp\n"
+2 -2
samples/ftrace/ftrace-direct.c
··· 124 124 " st.d $ra, $sp, 16\n" 125 125 " bl my_direct_func\n" 126 126 " ld.d $a0, $sp, 0\n" 127 - " ld.d $t0, $sp, 8\n" 128 - " ld.d $ra, $sp, 16\n" 127 + " ld.d $ra, $sp, 8\n" 128 + " ld.d $t0, $sp, 16\n" 129 129 " addi.d $sp, $sp, 32\n" 130 130 " jr $t0\n" 131 131 " .size my_tramp, .-my_tramp\n"
+1 -1
samples/rust/rust_driver_pci.rs
··· 48 48 // Select the test. 49 49 bar.write8(index.0, Regs::TEST); 50 50 51 - let offset = u32::from_le(bar.read32(Regs::OFFSET)) as usize; 51 + let offset = bar.read32(Regs::OFFSET) as usize; 52 52 let data = bar.read8(Regs::DATA); 53 53 54 54 // Write `data` to `offset` to increase `count` by one.
+14 -12
scripts/Makefile.build
··· 527 527 include $(srctree)/scripts/Makefile.userprogs 528 528 endif 529 529 530 - ifneq ($(need-dtbslist)$(dtb-y)$(dtb-)$(filter %.dtb %.dtb.o %.dtbo.o,$(targets)),) 531 - include $(srctree)/scripts/Makefile.dtbs 532 - endif 533 - 534 - # Build 535 - # --------------------------------------------------------------------------- 536 - 537 - $(obj)/: $(if $(KBUILD_BUILTIN), $(targets-for-builtin)) \ 538 - $(if $(KBUILD_MODULES), $(targets-for-modules)) \ 539 - $(subdir-ym) $(always-y) 540 - @: 541 - 542 530 # Single targets 543 531 # --------------------------------------------------------------------------- 544 532 ··· 555 567 556 568 targets += $(filter-out $(single-subdir-goals), $(MAKECMDGOALS)) 557 569 targets := $(filter-out $(PHONY), $(targets)) 570 + 571 + # Now that targets is fully known, include dtb rules if needed 572 + ifneq ($(need-dtbslist)$(dtb-y)$(dtb-)$(filter %.dtb %.dtb.o %.dtbo.o,$(targets)),) 573 + include $(srctree)/scripts/Makefile.dtbs 574 + endif 575 + 576 + # Build 577 + # Needs to be after the include of Makefile.dtbs, which updates always-y 578 + # --------------------------------------------------------------------------- 579 + 580 + $(obj)/: $(if $(KBUILD_BUILTIN), $(targets-for-builtin)) \ 581 + $(if $(KBUILD_MODULES), $(targets-for-modules)) \ 582 + $(subdir-ym) $(always-y) 583 + @: 558 584 559 585 # Read all saved command lines and dependencies for the $(targets) we 560 586 # may be building above, using $(if_changed{,_dep}). As an
+7 -128
scripts/clang-tools/gen_compile_commands.py
··· 21 21 _FILENAME_PATTERN = r'^\..*\.cmd$' 22 22 _LINE_PATTERN = r'^(saved)?cmd_[^ ]*\.o := (?P<command_prefix>.* )(?P<file_path>[^ ]*\.[cS]) *(;|$)' 23 23 _VALID_LOG_LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] 24 - 25 - # Pre-compiled regexes for better performance 26 - _INCLUDE_PATTERN = re.compile(r'^\s*#\s*include\s*[<"]([^>"]*)[>"]') 27 - _C_INCLUDE_PATTERN = re.compile(r'^\s*#\s*include\s*"([^"]*\.c)"\s*$') 28 - _FILENAME_MATCHER = re.compile(_FILENAME_PATTERN) 29 - 30 24 # The tools/ directory adopts a different build system, and produces .cmd 31 25 # files in a different format. Do not support it. 32 26 _EXCLUDE_DIRS = ['.git', 'Documentation', 'include', 'tools'] ··· 82 88 The path to a .cmd file. 83 89 """ 84 90 91 + filename_matcher = re.compile(_FILENAME_PATTERN) 85 92 exclude_dirs = [ os.path.join(directory, d) for d in _EXCLUDE_DIRS ] 86 93 87 94 for dirpath, dirnames, filenames in os.walk(directory, topdown=True): ··· 92 97 continue 93 98 94 99 for filename in filenames: 95 - if _FILENAME_MATCHER.match(filename): 100 + if filename_matcher.match(filename): 96 101 yield os.path.join(dirpath, filename) 97 102 98 103 ··· 149 154 yield to_cmdfile(mod_line.rstrip()) 150 155 151 156 152 - def extract_includes_from_file(source_file, root_directory): 153 - """Extract #include statements from a C file. 154 - 155 - Args: 156 - source_file: Path to the source .c file to analyze 157 - root_directory: Root directory for resolving relative paths 158 - 159 - Returns: 160 - List of header files that should be included (without quotes/brackets) 161 - """ 162 - includes = [] 163 - if not os.path.exists(source_file): 164 - return includes 165 - 166 - try: 167 - with open(source_file, 'r') as f: 168 - for line in f: 169 - line = line.strip() 170 - # Look for #include statements. 171 - # Match both #include "header.h" and #include <header.h>. 172 - match = _INCLUDE_PATTERN.match(line) 173 - if match: 174 - header = match.group(1) 175 - # Skip including other .c files to avoid circular includes. 176 - if not header.endswith('.c'): 177 - # For relative includes (quoted), resolve path relative to source file. 178 - if '"' in line: 179 - src_dir = os.path.dirname(source_file) 180 - header_path = os.path.join(src_dir, header) 181 - if os.path.exists(header_path): 182 - rel_header = os.path.relpath(header_path, root_directory) 183 - includes.append(rel_header) 184 - else: 185 - includes.append(header) 186 - else: 187 - # System include like <linux/sched.h>. 188 - includes.append(header) 189 - except IOError: 190 - pass 191 - 192 - return includes 193 - 194 - 195 - def find_included_c_files(source_file, root_directory): 196 - """Find .c files that are included by the given source file. 197 - 198 - Args: 199 - source_file: Path to the source .c file 200 - root_directory: Root directory for resolving relative paths 201 - 202 - Yields: 203 - Full paths to included .c files 204 - """ 205 - if not os.path.exists(source_file): 206 - return 207 - 208 - try: 209 - with open(source_file, 'r') as f: 210 - for line in f: 211 - line = line.strip() 212 - # Look for #include "*.c" patterns. 213 - match = _C_INCLUDE_PATTERN.match(line) 214 - if match: 215 - included_file = match.group(1) 216 - # Handle relative paths. 217 - if not os.path.isabs(included_file): 218 - src_dir = os.path.dirname(source_file) 219 - included_file = os.path.join(src_dir, included_file) 220 - 221 - # Normalize the path. 222 - included_file = os.path.normpath(included_file) 223 - 224 - # Check if the file exists. 
225 - if os.path.exists(included_file): 226 - yield included_file 227 - except IOError: 228 - pass 229 - 230 - 231 157 def process_line(root_directory, command_prefix, file_path): 232 - """Extracts information from a .cmd line and creates entries from it. 158 + """Extracts information from a .cmd line and creates an entry from it. 233 159 234 160 Args: 235 161 root_directory: The directory that was searched for .cmd files. Usually ··· 160 244 Usually relative to root_directory, but sometimes absolute. 161 245 162 246 Returns: 163 - A list of entries to append to compile_commands (may include multiple 164 - entries if the source file includes other .c files). 247 + An entry to append to compile_commands. 165 248 166 249 Raises: 167 250 ValueError: Could not find the extracted file based on file_path and ··· 176 261 abs_path = os.path.realpath(os.path.join(root_directory, file_path)) 177 262 if not os.path.exists(abs_path): 178 263 raise ValueError('File %s not found' % abs_path) 179 - 180 - entries = [] 181 - 182 - # Create entry for the main source file. 183 - main_entry = { 264 + return { 184 265 'directory': root_directory, 185 266 'file': abs_path, 186 267 'command': prefix + file_path, 187 268 } 188 - entries.append(main_entry) 189 - 190 - # Find and create entries for included .c files. 191 - for included_c_file in find_included_c_files(abs_path, root_directory): 192 - # For included .c files, create a compilation command that: 193 - # 1. Uses the same compilation flags as the parent file 194 - # 2. But compiles the included file directly (not the parent) 195 - # 3. Includes necessary headers from the parent file for proper macro resolution 196 - 197 - # Convert absolute path to relative for the command. 198 - rel_path = os.path.relpath(included_c_file, root_directory) 199 - 200 - # Extract includes from the parent file to provide proper compilation context. 201 - extra_includes = '' 202 - try: 203 - parent_includes = extract_includes_from_file(abs_path, root_directory) 204 - if parent_includes: 205 - extra_includes = ' ' + ' '.join('-include ' + inc for inc in parent_includes) 206 - except IOError: 207 - pass 208 - 209 - included_entry = { 210 - 'directory': root_directory, 211 - 'file': included_c_file, 212 - # Use the same compilation prefix but target the included file directly. 213 - # Add extra headers for proper macro resolution. 214 - 'command': prefix + extra_includes + ' ' + rel_path, 215 - } 216 - entries.append(included_entry) 217 - logging.debug('Added entry for included file: %s', included_c_file) 218 - 219 - return entries 220 269 221 270 222 271 def main(): ··· 213 334 result = line_matcher.match(f.readline()) 214 335 if result: 215 336 try: 216 - entries = process_line(directory, result.group('command_prefix'), 337 + entry = process_line(directory, result.group('command_prefix'), 217 338 result.group('file_path')) 218 - compile_commands.extend(entries) 339 + compile_commands.append(entry) 219 340 except ValueError as err: 220 341 logging.info('Could not add line from %s: %s', 221 342 cmdfile, err)
+3
scripts/mod/devicetable-offsets.c
··· 199 199 DEVID(cpu_feature); 200 200 DEVID_FIELD(cpu_feature, feature); 201 201 202 + DEVID(mcb_device_id); 203 + DEVID_FIELD(mcb_device_id, device); 204 + 202 205 DEVID(mei_cl_device_id); 203 206 DEVID_FIELD(mei_cl_device_id, name); 204 207 DEVID_FIELD(mei_cl_device_id, uuid);
+9
scripts/mod/file2alias.c
··· 1110 1110 module_alias_printf(mod, false, "cpu:type:*:feature:*%04X*", feature); 1111 1111 } 1112 1112 1113 + /* Looks like: mcb:16zN */ 1114 + static void do_mcb_entry(struct module *mod, void *symval) 1115 + { 1116 + DEF_FIELD(symval, mcb_device_id, device); 1117 + 1118 + module_alias_printf(mod, false, "mcb:16z%03d", device); 1119 + } 1120 + 1113 1121 /* Looks like: mei:S:uuid:N:* */ 1114 1122 static void do_mei_entry(struct module *mod, void *symval) 1115 1123 { ··· 1452 1444 {"mipscdmm", SIZE_mips_cdmm_device_id, do_mips_cdmm_entry}, 1453 1445 {"x86cpu", SIZE_x86_cpu_id, do_x86cpu_entry}, 1454 1446 {"cpu", SIZE_cpu_feature, do_cpu_entry}, 1447 + {"mcb", SIZE_mcb_device_id, do_mcb_entry}, 1455 1448 {"mei", SIZE_mei_cl_device_id, do_mei_entry}, 1456 1449 {"rapidio", SIZE_rio_device_id, do_rio_entry}, 1457 1450 {"ulpi", SIZE_ulpi_device_id, do_ulpi_entry},
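Per the "Looks like: mcb:16zN" comment, do_mcb_entry() emits aliases with the device number zero-padded to three digits. A quick standalone check of the format string; device 34 is an arbitrary example:

    #include <stdio.h>

    int main(void)
    {
        int device = 34;

        printf("mcb:16z%03d\n", device);  /* -> mcb:16z034 */
        return 0;
    }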
+1 -3
security/integrity/ima/ima_kexec.c
··· 250 250 if (!image->ima_buffer_addr) 251 251 return; 252 252 253 - ima_kexec_buffer = kimage_map_segment(image, 254 - image->ima_buffer_addr, 255 - image->ima_buffer_size); 253 + ima_kexec_buffer = kimage_map_segment(image, image->ima_segment_index); 256 254 if (!ima_kexec_buffer) { 257 255 pr_err("Could not map measurements buffer.\n"); 258 256 return;
+26 -6
sound/hda/codecs/realtek/alc269.c
··· 1656 1656 alc236_fixup_hp_micmute_led_vref(codec, fix, action); 1657 1657 } 1658 1658 1659 + static void alc236_fixup_hp_mute_led_micmute_gpio(struct hda_codec *codec, 1660 + const struct hda_fixup *fix, int action) 1661 + { 1662 + struct alc_spec *spec = codec->spec; 1663 + 1664 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 1665 + spec->micmute_led_polarity = 1; 1666 + 1667 + alc236_fixup_hp_mute_led_coefbit2(codec, fix, action); 1668 + alc_fixup_hp_gpio_led(codec, action, 0x00, 0x01); 1669 + } 1670 + 1659 1671 static inline void alc298_samsung_write_coef_pack(struct hda_codec *codec, 1660 1672 const unsigned short coefs[2]) 1661 1673 { ··· 3765 3753 ALC295_FIXUP_DELL_TAS2781_I2C, 3766 3754 ALC245_FIXUP_TAS2781_SPI_2, 3767 3755 ALC287_FIXUP_TXNW2781_I2C, 3756 + ALC287_FIXUP_TXNW2781_I2C_ASUS, 3768 3757 ALC287_FIXUP_YOGA7_14ARB7_I2C, 3769 3758 ALC245_FIXUP_HP_MUTE_LED_COEFBIT, 3770 3759 ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT, ··· 5339 5326 }, 5340 5327 [ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO] = { 5341 5328 .type = HDA_FIXUP_FUNC, 5342 - .v.func = alc236_fixup_hp_mute_led_coefbit2, 5343 - .chained = true, 5344 - .chain_id = ALC236_FIXUP_HP_GPIO_LED, 5329 + .v.func = alc236_fixup_hp_mute_led_micmute_gpio, 5345 5330 }, 5346 5331 [ALC236_FIXUP_LENOVO_INV_DMIC] = { 5347 5332 .type = HDA_FIXUP_FUNC, ··· 6064 6053 .chained = true, 6065 6054 .chain_id = ALC285_FIXUP_THINKPAD_HEADSET_JACK, 6066 6055 }, 6056 + [ALC287_FIXUP_TXNW2781_I2C_ASUS] = { 6057 + .type = HDA_FIXUP_FUNC, 6058 + .v.func = tas2781_fixup_txnw_i2c, 6059 + .chained = true, 6060 + .chain_id = ALC294_FIXUP_ASUS_SPK, 6061 + }, 6067 6062 [ALC287_FIXUP_YOGA7_14ARB7_I2C] = { 6068 6063 .type = HDA_FIXUP_FUNC, 6069 6064 .v.func = yoga7_14arb7_fixup_i2c, ··· 6788 6771 SND_PCI_QUIRK(0x103c, 0x8e61, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 6789 6772 SND_PCI_QUIRK(0x103c, 0x8e62, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 6790 6773 SND_PCI_QUIRK(0x103c, 0x8e8a, "HP NexusX", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6774 + SND_PCI_QUIRK(0x103c, 0x8e9c, "HP 16 Clipper OmniBook X X360", ALC287_FIXUP_CS35L41_I2C_2), 6791 6775 SND_PCI_QUIRK(0x103c, 0x8e9d, "HP 17 Turbine OmniBook X UMA", ALC287_FIXUP_CS35L41_I2C_2), 6792 6776 SND_PCI_QUIRK(0x103c, 0x8e9e, "HP 17 Turbine OmniBook X UMA", ALC287_FIXUP_CS35L41_I2C_2), 6793 6777 SND_PCI_QUIRK(0x103c, 0x8eb6, "HP Abe A6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6794 - SND_PCI_QUIRK(0x103c, 0x8eb7, "HP Abe A6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6795 6778 SND_PCI_QUIRK(0x103c, 0x8eb8, "HP Abe A6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6796 6779 SND_PCI_QUIRK(0x103c, 0x8ec1, "HP 200 G2i", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6797 6780 SND_PCI_QUIRK(0x103c, 0x8ec4, "HP Bantie I6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), ··· 6807 6790 SND_PCI_QUIRK(0x103c, 0x8eda, "HP ZBook Firefly 16W", ALC245_FIXUP_HP_TAS2781_SPI_MUTE_LED), 6808 6791 SND_PCI_QUIRK(0x103c, 0x8ee4, "HP Bantie A6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6809 6792 SND_PCI_QUIRK(0x103c, 0x8ee5, "HP Bantie A6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6793 + SND_PCI_QUIRK(0x103c, 0x8ee7, "HP Abe A6U", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_GPIO), 6810 6794 SND_PCI_QUIRK(0x103c, 0x8f0c, "HP ZBook X G2i 16W", ALC236_FIXUP_HP_GPIO_LED), 6811 6795 SND_PCI_QUIRK(0x103c, 0x8f0e, "HP ZBook X G2i 16W", ALC236_FIXUP_HP_GPIO_LED), 6812 6796 SND_PCI_QUIRK(0x103c, 0x8f40, "HP ZBook 8 G2a 14", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6813 6797 SND_PCI_QUIRK(0x103c, 0x8f41, "HP ZBook 8 G2a 16", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6814 6798 
SND_PCI_QUIRK(0x103c, 0x8f42, "HP ZBook 8 G2a 14W", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6799 + SND_PCI_QUIRK(0x103c, 0x8f57, "HP Trekker G7JC", ALC287_FIXUP_CS35L41_I2C_2), 6815 6800 SND_PCI_QUIRK(0x103c, 0x8f62, "HP ZBook 8 G2a 16W", ALC245_FIXUP_HP_TAS2781_I2C_MUTE_LED), 6816 6801 SND_PCI_QUIRK(0x1043, 0x1032, "ASUS VivoBook X513EA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 6817 6802 SND_PCI_QUIRK(0x1043, 0x1034, "ASUS GU605C", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), ··· 6846 6827 SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 6847 6828 SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE), 6848 6829 SND_PCI_QUIRK(0x1043, 0x1314, "ASUS GA605K", ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC), 6849 - SND_PCI_QUIRK(0x1043, 0x1384, "ASUS RC73XA", ALC287_FIXUP_TXNW2781_I2C), 6850 - SND_PCI_QUIRK(0x1043, 0x1394, "ASUS RC73YA", ALC287_FIXUP_TXNW2781_I2C), 6830 + SND_PCI_QUIRK(0x1043, 0x1384, "ASUS RC73XA", ALC287_FIXUP_TXNW2781_I2C_ASUS), 6831 + SND_PCI_QUIRK(0x1043, 0x1394, "ASUS RC73YA", ALC287_FIXUP_TXNW2781_I2C_ASUS), 6851 6832 SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 6852 6833 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK), 6853 6834 SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC), ··· 7315 7296 SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC), 7316 7297 SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC), 7317 7298 SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 7299 + SND_PCI_QUIRK(0x1e39, 0xca14, "MEDION NM14LNL", ALC233_FIXUP_MEDION_MTL_SPK), 7318 7300 SND_PCI_QUIRK(0x1ee7, 0x2078, "HONOR BRB-X M1010", ALC2XX_FIXUP_HEADSET_MIC), 7319 7301 SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2), 7320 7302 SND_PCI_QUIRK(0x2014, 0x800a, "Positivo ARN50", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+1 -3
sound/hda/controllers/cix-ipbloq.c
··· 115 115 bus->addr = res->start; 116 116 117 117 irq_id = platform_get_irq(pdev, 0); 118 - if (irq_id < 0) { 119 - dev_err(hda->dev, "failed to get the irq, err = %d\n", irq_id); 118 + if (irq_id < 0) 120 119 return irq_id; 121 - } 122 120 123 121 err = devm_request_irq(hda->dev, irq_id, azx_interrupt, 124 122 0, KBUILD_MODNAME, chip);
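The cix-ipbloq change relies on the driver core: platform_get_irq() has printed its own diagnostic on failure for several releases now, so a caller-side dev_err() only duplicates the message. A minimal probe fragment showing the resulting idiom (hypothetical driver and handler names, not from this patch):

        static int foo_probe(struct platform_device *pdev)
        {
                int irq = platform_get_irq(pdev, 0);

                if (irq < 0)
                        return irq;     /* the core already logged the failure */

                return devm_request_irq(&pdev->dev, irq, foo_irq_handler, 0,
                                        KBUILD_MODNAME, pdev);
        }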
+7 -1
sound/pcmcia/pdaudiocf/pdaudiocf.c
··· 131 131 link->config_index = 1; 132 132 link->config_regs = PRESENT_OPTION; 133 133 134 - return pdacf_config(link); 134 + err = pdacf_config(link); 135 + if (err < 0) { 136 + card_list[i] = NULL; 137 + snd_card_free(card); 138 + return err; 139 + } 140 + return 0; 135 141 } 136 142 137 143
+7 -1
sound/pcmcia/vx/vxpocket.c
··· 284 284 285 285 vxp->p_dev = p_dev; 286 286 287 - return vxpocket_config(p_dev); 287 + err = vxpocket_config(p_dev); 288 + if (err < 0) { 289 + card_alloc &= ~(1 << i); 290 + snd_card_free(card); 291 + return err; 292 + } 293 + return 0; 288 294 } 289 295 290 296 static void vxpocket_detach(struct pcmcia_device *link)
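Both PCMCIA fixes (pdaudiocf above and vxpocket here) plug the same probe-error leak: when the config step fails, the snd_card allocated earlier in probe and the slot reserved in the driver's bookkeeping (card_list[] in one driver, the card_alloc bitmask in the other) must be released before propagating the error. A sketch of the unwind pattern, with hypothetical helper names standing in for the driver specifics:

        err = hw_config(link);          /* stand-in for pdacf_config()/vxpocket_config() */
        if (err < 0) {
                release_slot(i);        /* undo the card_list[]/card_alloc reservation */
                snd_card_free(card);    /* free the card allocated earlier in probe */
                return err;
        }
        return 0;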
+7
sound/soc/amd/yc/acp6x-mach.c
··· 661 661 DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 C7UCX"), 662 662 } 663 663 }, 664 + { 665 + .driver_data = &acp6x_card, 666 + .matches = { 667 + DMI_MATCH(DMI_BOARD_VENDOR, "HONOR"), 668 + DMI_MATCH(DMI_PRODUCT_NAME, "GOH-X"), 669 + } 670 + }, 664 671 {} 665 672 }; 666 673
-4
sound/soc/codecs/ak4458.c
··· 783 783 784 784 pm_runtime_enable(&i2c->dev); 785 785 regcache_cache_only(ak4458->regmap, true); 786 - ak4458_reset(ak4458, false); 787 786 788 787 return 0; 789 788 } 790 789 791 790 static void ak4458_i2c_remove(struct i2c_client *i2c) 792 791 { 793 - struct ak4458_priv *ak4458 = i2c_get_clientdata(i2c); 794 - 795 - ak4458_reset(ak4458, true); 796 792 pm_runtime_disable(&i2c->dev); 797 793 } 798 794
+35 -13
sound/soc/codecs/nau8821.c
··· 11 11 #include <linux/clk.h> 12 12 #include <linux/delay.h> 13 13 #include <linux/dmi.h> 14 - #include <linux/init.h> 15 14 #include <linux/i2c.h> 16 - #include <linux/module.h> 15 + #include <linux/init.h> 17 16 #include <linux/math64.h> 17 + #include <linux/module.h> 18 18 #include <linux/regmap.h> 19 19 #include <linux/slab.h> 20 20 #include <sound/core.h> ··· 24 24 #include <sound/pcm_params.h> 25 25 #include <sound/soc.h> 26 26 #include <sound/tlv.h> 27 + 27 28 #include "nau8821.h" 28 29 29 30 #define NAU8821_QUIRK_JD_ACTIVE_HIGH BIT(0) ··· 807 806 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 808 807 regmap_read(nau8821->regmap, NAU8821_R2C_DAC_CTRL1, &osr); 809 808 osr &= NAU8821_DAC_OVERSAMPLE_MASK; 809 + 810 810 if (osr >= ARRAY_SIZE(osr_dac_sel)) 811 811 return NULL; 812 + 812 813 return &osr_dac_sel[osr]; 813 - } else { 814 - regmap_read(nau8821->regmap, NAU8821_R2B_ADC_RATE, &osr); 815 - osr &= NAU8821_ADC_SYNC_DOWN_MASK; 816 - if (osr >= ARRAY_SIZE(osr_adc_sel)) 817 - return NULL; 818 - return &osr_adc_sel[osr]; 819 814 } 815 + 816 + regmap_read(nau8821->regmap, NAU8821_R2B_ADC_RATE, &osr); 817 + osr &= NAU8821_ADC_SYNC_DOWN_MASK; 818 + 819 + if (osr >= ARRAY_SIZE(osr_adc_sel)) 820 + return NULL; 821 + 822 + return &osr_adc_sel[osr]; 820 823 } 821 824 822 825 static int nau8821_dai_startup(struct snd_pcm_substream *substream, ··· 873 868 if (ctrl_val & NAU8821_I2S_MS_MASTER) { 874 869 /* get the bclk and fs ratio */ 875 870 bclk_fs = snd_soc_params_to_bclk(params) / nau8821->fs; 871 + 876 872 if (bclk_fs <= 32) 877 873 clk_div = 3; 878 874 else if (bclk_fs <= 64) 879 875 clk_div = 2; 880 876 else if (bclk_fs <= 128) 881 877 clk_div = 1; 882 - else { 878 + else 883 879 return -EINVAL; 884 - } 880 + 885 881 regmap_update_bits(nau8821->regmap, NAU8821_R1D_I2S_PCM_CTRL2, 886 882 NAU8821_I2S_LRC_DIV_MASK | NAU8821_I2S_BLK_DIV_MASK, 887 883 (clk_div << NAU8821_I2S_LRC_DIV_SFT) | clk_div); ··· 1270 1264 return 0; 1271 1265 } 1272 1266 1267 + static void nau8821_component_remove(struct snd_soc_component *component) 1268 + { 1269 + struct nau8821 *nau8821 = snd_soc_component_get_drvdata(component); 1270 + 1271 + if (nau8821->jdet_active) 1272 + cancel_delayed_work_sync(&nau8821->jdet_work); 1273 + }; 1274 + 1273 1275 /** 1274 1276 * nau8821_calc_fll_param - Calculate FLL parameters. 1275 1277 * @fll_in: external clock provided to codec. 
··· 1611 1597 1612 1598 if (nau8821->irq) 1613 1599 disable_irq(nau8821->irq); 1600 + 1601 + if (nau8821->jdet_active) 1602 + cancel_delayed_work_sync(&nau8821->jdet_work); 1603 + 1614 1604 snd_soc_dapm_force_bias_level(nau8821->dapm, SND_SOC_BIAS_OFF); 1615 1605 /* Power down codec power; don't support button wakeup */ 1616 1606 snd_soc_dapm_disable_pin(nau8821->dapm, "MICBIAS"); ··· 1639 1621 1640 1622 static const struct snd_soc_component_driver nau8821_component_driver = { 1641 1623 .probe = nau8821_component_probe, 1624 + .remove = nau8821_component_remove, 1642 1625 .set_sysclk = nau8821_set_sysclk, 1643 1626 .set_pll = nau8821_set_fll, 1644 1627 .set_bias_level = nau8821_set_bias_level, ··· 1674 1655 int ret; 1675 1656 1676 1657 nau8821->jack = jack; 1658 + 1659 + if (nau8821->jdet_active) 1660 + return 0; 1661 + 1677 1662 /* Initiate jack detection work queue */ 1678 1663 INIT_DELAYED_WORK(&nau8821->jdet_work, nau8821_jdet_work); 1664 + nau8821->jdet_active = true; 1679 1665 1680 1666 ret = devm_request_threaded_irq(nau8821->dev, nau8821->irq, NULL, 1681 1667 nau8821_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT, 1682 1668 "nau8821", nau8821); 1683 - if (ret) { 1669 + if (ret) 1684 1670 dev_err(nau8821->dev, "Cannot request irq %d (%d)\n", 1685 1671 nau8821->irq, ret); 1686 - return ret; 1687 - } 1688 1672 1689 1673 return ret; 1690 1674 }
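Two details in the nau8821 series are worth spelling out. The new jdet_active flag exists because the delayed work is only initialized in nau8821_set_jack(): component remove and suspend must not call cancel_delayed_work_sync() on a work struct that may never have passed through INIT_DELAYED_WORK(), and the flag also keeps a second set_jack() call from re-initializing a live work item. Separately, the clk_div selection in hw_params is a pure ratio lookup; as a worked example with illustrative numbers (not from the patch): at 48 kHz, 2 channels, 16-bit, snd_soc_params_to_bclk() returns 48000 * 2 * 16 = 1536000, so bclk_fs = 1536000 / 48000 = 32, the first branch matches and clk_div = 3, while a 4-channel 32-bit stream at the same rate gives bclk_fs = 128 and clk_div = 1.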
+1
sound/soc/codecs/nau8821.h
··· 562 562 struct snd_soc_dapm_context *dapm; 563 563 struct snd_soc_jack *jack; 564 564 struct delayed_work jdet_work; 565 + bool jdet_active; 565 566 int irq; 566 567 int clk_id; 567 568 int micbias_voltage;
+7 -9
sound/soc/codecs/rt1320-sdw.c
··· 115 115 static const struct reg_sequence rt1320_vc_blind_write[] = { 116 116 { 0xc003, 0xe0 }, 117 117 { 0xe80a, 0x01 }, 118 - { 0xc5c3, 0xf3 }, 118 + { 0xc5c3, 0xf2 }, 119 + { 0xc5c8, 0x03 }, 119 120 { 0xc057, 0x51 }, 120 121 { 0xc054, 0x35 }, 121 122 { 0xca05, 0xd6 }, ··· 127 126 { 0xc609, 0x40 }, 128 127 { 0xc046, 0xff }, 129 128 { 0xc045, 0xff }, 130 - { 0xda81, 0x14 }, 131 - { 0xda8d, 0x14 }, 132 129 { 0xc044, 0xff }, 133 130 { 0xc043, 0xff }, 134 131 { 0xc042, 0xff }, ··· 135 136 { 0xcc10, 0x01 }, 136 137 { 0xc700, 0xf0 }, 137 138 { 0xc701, 0x13 }, 138 - { 0xc901, 0x09 }, 139 - { 0xc900, 0xd0 }, 139 + { 0xc901, 0x04 }, 140 + { 0xc900, 0x73 }, 140 141 { 0xde03, 0x05 }, 141 142 { 0xdd0b, 0x0d }, 142 143 { 0xdd0a, 0xff }, ··· 152 153 { 0xf082, 0xff }, 153 154 { 0xf081, 0xff }, 154 155 { 0xf080, 0xff }, 156 + { 0xe801, 0x01 }, 155 157 { 0xe802, 0xf8 }, 156 158 { 0xe803, 0xbe }, 157 159 { 0xc003, 0xc0 }, ··· 202 202 { 0x3fc2bfc3, 0x00 }, 203 203 { 0x3fc2bfc2, 0x00 }, 204 204 { 0x3fc2bfc1, 0x00 }, 205 - { 0x3fc2bfc0, 0x03 }, 205 + { 0x3fc2bfc0, 0x07 }, 206 206 { 0x0000d486, 0x43 }, 207 207 { SDW_SDCA_CTL(FUNC_NUM_AMP, RT1320_SDCA_ENT_PDE23, RT1320_SDCA_CTL_REQ_POWER_STATE, 0), 0x00 }, 208 208 { 0x1000db00, 0x07 }, ··· 241 241 { 0x1000db21, 0x00 }, 242 242 { 0x1000db22, 0x00 }, 243 243 { 0x1000db23, 0x00 }, 244 - { 0x0000d540, 0x01 }, 245 - { 0x0000c081, 0xfc }, 246 - { 0x0000f01e, 0x80 }, 244 + { 0x0000d540, 0x21 }, 247 245 { 0xc01b, 0xfc }, 248 246 { 0xc5d1, 0x89 }, 249 247 { 0xc5d8, 0x0a },
+4 -4
sound/soc/fsl/fsl-asoc-card.c
··· 1045 1045 * The notifier is initialized in snd_soc_card_jack_new(), then 1046 1046 * snd_soc_jack_notifier_register can be called. 1047 1047 */ 1048 - if (of_property_read_bool(np, "hp-det-gpios") || 1049 - of_property_read_bool(np, "hp-det-gpio") /* deprecated */) { 1048 + if (of_property_present(np, "hp-det-gpios") || 1049 + of_property_present(np, "hp-det-gpio") /* deprecated */) { 1050 1050 ret = simple_util_init_jack(&priv->card, &priv->hp_jack, 1051 1051 1, NULL, "Headphone Jack"); 1052 1052 if (ret) ··· 1055 1055 snd_soc_jack_notifier_register(&priv->hp_jack.jack, &hp_jack_nb); 1056 1056 } 1057 1057 1058 - if (of_property_read_bool(np, "mic-det-gpios") || 1059 - of_property_read_bool(np, "mic-det-gpio") /* deprecated */) { 1058 + if (of_property_present(np, "mic-det-gpios") || 1059 + of_property_present(np, "mic-det-gpio") /* deprecated */) { 1060 1060 ret = simple_util_init_jack(&priv->card, &priv->mic_jack, 1061 1061 0, NULL, "Mic Jack"); 1062 1062 if (ret)
+3
sound/soc/fsl/fsl_asrc_dma.c
··· 473 473 .pointer = fsl_asrc_dma_pcm_pointer, 474 474 .pcm_construct = fsl_asrc_dma_pcm_new, 475 475 .legacy_dai_naming = 1, 476 + #ifdef CONFIG_DEBUG_FS 477 + .debugfs_prefix = "asrc", 478 + #endif 476 479 }; 477 480 EXPORT_SYMBOL_GPL(fsl_asrc_component);
+3
sound/soc/fsl/fsl_easrc.c
··· 1577 1577 .controls = fsl_easrc_snd_controls, 1578 1578 .num_controls = ARRAY_SIZE(fsl_easrc_snd_controls), 1579 1579 .legacy_dai_naming = 1, 1580 + #ifdef CONFIG_DEBUG_FS 1581 + .debugfs_prefix = "easrc", 1582 + #endif 1580 1583 }; 1581 1584 1582 1585 static const struct reg_default fsl_easrc_reg_defaults[] = {
+11 -2
sound/soc/fsl/fsl_sai.c
··· 917 917 tx ? sai->dma_params_tx.maxburst : 918 918 sai->dma_params_rx.maxburst); 919 919 920 - ret = snd_pcm_hw_constraint_list(substream->runtime, 0, 921 - SNDRV_PCM_HW_PARAM_RATE, &sai->constraint_rates); 920 + if (sai->is_consumer_mode[tx]) 921 + ret = snd_pcm_hw_constraint_list(substream->runtime, 0, 922 + SNDRV_PCM_HW_PARAM_RATE, 923 + &fsl_sai_rate_constraints); 924 + else 925 + ret = snd_pcm_hw_constraint_list(substream->runtime, 0, 926 + SNDRV_PCM_HW_PARAM_RATE, 927 + &sai->constraint_rates); 922 928 923 929 return ret; 924 930 } ··· 1081 1075 {FSL_SAI_TDR6, 0}, 1082 1076 {FSL_SAI_TDR7, 0}, 1083 1077 {FSL_SAI_TMR, 0}, 1078 + {FSL_SAI_TTCTL, 0}, 1084 1079 {FSL_SAI_RCR1(0), 0}, 1085 1080 {FSL_SAI_RCR2(0), 0}, 1086 1081 {FSL_SAI_RCR3(0), 0}, ··· 1105 1098 {FSL_SAI_TDR6, 0}, 1106 1099 {FSL_SAI_TDR7, 0}, 1107 1100 {FSL_SAI_TMR, 0}, 1101 + {FSL_SAI_TTCTL, 0}, 1108 1102 {FSL_SAI_RCR1(8), 0}, 1109 1103 {FSL_SAI_RCR2(8), 0}, 1110 1104 {FSL_SAI_RCR3(8), 0}, 1111 1105 {FSL_SAI_RCR4(8), 0}, 1112 1106 {FSL_SAI_RCR5(8), 0}, 1113 1107 {FSL_SAI_RMR, 0}, 1108 + {FSL_SAI_RTCTL, 0}, 1114 1109 {FSL_SAI_MCTL, 0}, 1115 1110 {FSL_SAI_MDIV, 0}, 1116 1111 };
+3
sound/soc/fsl/fsl_xcvr.c
··· 1323 1323 }; 1324 1324 1325 1325 static const struct regmap_config fsl_xcvr_regmap_phy_cfg = { 1326 + .name = "phy", 1326 1327 .reg_bits = 8, 1327 1328 .reg_stride = 4, 1328 1329 .val_bits = 32, ··· 1336 1335 }; 1337 1336 1338 1337 static const struct regmap_config fsl_xcvr_regmap_pllv0_cfg = { 1338 + .name = "pllv0", 1339 1339 .reg_bits = 8, 1340 1340 .reg_stride = 4, 1341 1341 .val_bits = 32, ··· 1347 1345 }; 1348 1346 1349 1347 static const struct regmap_config fsl_xcvr_regmap_pllv1_cfg = { 1348 + .name = "pllv1", 1350 1349 .reg_bits = 8, 1351 1350 .reg_stride = 4, 1352 1351 .val_bits = 32,
+104
sound/soc/intel/common/soc-acpi-intel-mtl-match.c
··· 699 699 }, 700 700 }; 701 701 702 + static const struct snd_soc_acpi_adr_device cs35l56_6amp_1_fb_adr[] = { 703 + { 704 + .adr = 0x00013701FA355601ull, 705 + .num_endpoints = ARRAY_SIZE(cs35l56_r_fb_endpoints), 706 + .endpoints = cs35l56_r_fb_endpoints, 707 + .name_prefix = "AMP6" 708 + }, 709 + { 710 + .adr = 0x00013601FA355601ull, 711 + .num_endpoints = ARRAY_SIZE(cs35l56_3_fb_endpoints), 712 + .endpoints = cs35l56_3_fb_endpoints, 713 + .name_prefix = "AMP5" 714 + }, 715 + { 716 + .adr = 0x00013501FA355601ull, 717 + .num_endpoints = ARRAY_SIZE(cs35l56_5_fb_endpoints), 718 + .endpoints = cs35l56_5_fb_endpoints, 719 + .name_prefix = "AMP4" 720 + }, 721 + }; 722 + 723 + static const struct snd_soc_acpi_adr_device cs35l63_6amp_3_fb_adr[] = { 724 + { 725 + .adr = 0x00033001FA356301ull, 726 + .num_endpoints = ARRAY_SIZE(cs35l56_l_fb_endpoints), 727 + .endpoints = cs35l56_l_fb_endpoints, 728 + .name_prefix = "AMP1" 729 + }, 730 + { 731 + .adr = 0x00033201FA356301ull, 732 + .num_endpoints = ARRAY_SIZE(cs35l56_2_fb_endpoints), 733 + .endpoints = cs35l56_2_fb_endpoints, 734 + .name_prefix = "AMP3" 735 + }, 736 + { 737 + .adr = 0x00033401FA356301ull, 738 + .num_endpoints = ARRAY_SIZE(cs35l56_4_fb_endpoints), 739 + .endpoints = cs35l56_4_fb_endpoints, 740 + .name_prefix = "AMP5" 741 + }, 742 + }; 743 + 744 + static const struct snd_soc_acpi_adr_device cs35l63_6amp_2_fb_adr[] = { 745 + { 746 + .adr = 0x00023101FA356301ull, 747 + .num_endpoints = ARRAY_SIZE(cs35l56_r_fb_endpoints), 748 + .endpoints = cs35l56_r_fb_endpoints, 749 + .name_prefix = "AMP2" 750 + }, 751 + { 752 + .adr = 0x00023301FA356301ull, 753 + .num_endpoints = ARRAY_SIZE(cs35l56_3_fb_endpoints), 754 + .endpoints = cs35l56_3_fb_endpoints, 755 + .name_prefix = "AMP4" 756 + }, 757 + { 758 + .adr = 0x00023501FA356301ull, 759 + .num_endpoints = ARRAY_SIZE(cs35l56_5_fb_endpoints), 760 + .endpoints = cs35l56_5_fb_endpoints, 761 + .name_prefix = "AMP6" 762 + }, 763 + }; 764 + 702 765 static const struct snd_soc_acpi_adr_device cs35l56_2_r_adr[] = { 703 766 { 704 767 .adr = 0x00023201FA355601ull, ··· 1144 1081 {} 1145 1082 }; 1146 1083 1084 + static const struct snd_soc_acpi_link_adr mtl_cs35l56_x6_link0_link1_fb[] = { 1085 + { 1086 + .mask = BIT(1), 1087 + .num_adr = ARRAY_SIZE(cs35l56_6amp_1_fb_adr), 1088 + .adr_d = cs35l56_6amp_1_fb_adr, 1089 + }, 1090 + { 1091 + .mask = BIT(0), 1092 + /* First 3 amps in cs35l56_0_fb_adr */ 1093 + .num_adr = 3, 1094 + .adr_d = cs35l56_0_fb_adr, 1095 + }, 1096 + {} 1097 + }; 1098 + 1099 + static const struct snd_soc_acpi_link_adr mtl_cs35l63_x6_link2_link3_fb[] = { 1100 + { 1101 + .mask = BIT(3), 1102 + .num_adr = ARRAY_SIZE(cs35l63_6amp_3_fb_adr), 1103 + .adr_d = cs35l63_6amp_3_fb_adr, 1104 + }, 1105 + { 1106 + .mask = BIT(2), 1107 + .num_adr = ARRAY_SIZE(cs35l63_6amp_2_fb_adr), 1108 + .adr_d = cs35l63_6amp_2_fb_adr, 1109 + }, 1110 + {} 1111 + }; 1112 + 1147 1113 static const struct snd_soc_acpi_link_adr mtl_cs35l63_x2_link1_link3_fb[] = { 1148 1114 { 1149 1115 .mask = BIT(3), ··· 1294 1202 .get_function_tplg_files = sof_sdw_get_tplg_files, 1295 1203 }, 1296 1204 { 1205 + .link_mask = BIT(0) | BIT(1), 1206 + .links = mtl_cs35l56_x6_link0_link1_fb, 1207 + .drv_name = "sof_sdw", 1208 + .sof_tplg_filename = "sof-mtl-cs35l56-l01-fb6.tplg" 1209 + }, 1210 + { 1297 1211 .link_mask = BIT(0), 1298 1212 .links = mtl_cs42l43_l0, 1299 1213 .drv_name = "sof_sdw", ··· 1311 1213 .links = mtl_cs35l63_x2_link1_link3_fb, 1312 1214 .drv_name = "sof_sdw", 1313 1215 .sof_tplg_filename = "sof-mtl-cs35l56-l01-fb8.tplg", 1216 + 
}, 1217 + { 1218 + .link_mask = BIT(2) | BIT(3), 1219 + .links = mtl_cs35l63_x6_link2_link3_fb, 1220 + .drv_name = "sof_sdw", 1221 + .sof_tplg_filename = "sof-mtl-cs35l56-l01-fb6.tplg", 1314 1222 }, 1315 1223 { 1316 1224 .link_mask = GENMASK(3, 0),
-49
sound/soc/intel/common/soc-acpi-intel-nvl-match.c
··· 15 15 }; 16 16 EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_nvl_machines); 17 17 18 - /* 19 - * Multi-function codecs with three endpoints created for 20 - * headset, amp and dmic functions. 21 - */ 22 - static const struct snd_soc_acpi_endpoint rt_mf_endpoints[] = { 23 - { 24 - .num = 0, 25 - .aggregated = 0, 26 - .group_position = 0, 27 - .group_id = 0, 28 - }, 29 - { 30 - .num = 1, 31 - .aggregated = 0, 32 - .group_position = 0, 33 - .group_id = 0, 34 - }, 35 - { 36 - .num = 2, 37 - .aggregated = 0, 38 - .group_position = 0, 39 - .group_id = 0, 40 - }, 41 - }; 42 - 43 - static const struct snd_soc_acpi_adr_device rt722_3_single_adr[] = { 44 - { 45 - .adr = 0x000330025d072201ull, 46 - .num_endpoints = ARRAY_SIZE(rt_mf_endpoints), 47 - .endpoints = rt_mf_endpoints, 48 - .name_prefix = "rt722" 49 - } 50 - }; 51 - 52 - static const struct snd_soc_acpi_link_adr nvl_rt722_l3[] = { 53 - { 54 - .mask = BIT(3), 55 - .num_adr = ARRAY_SIZE(rt722_3_single_adr), 56 - .adr_d = rt722_3_single_adr, 57 - }, 58 - {} 59 - }; 60 - 61 18 /* this table is used when there is no I2S codec present */ 62 19 struct snd_soc_acpi_mach snd_soc_acpi_intel_nvl_sdw_machines[] = { 63 20 /* mockup tests need to be first */ ··· 35 78 .links = sdw_mockup_mic_headset_1amp, 36 79 .drv_name = "sof_sdw", 37 80 .sof_tplg_filename = "sof-nvl-rt715-rt711-rt1308-mono.tplg", 38 - }, 39 - { 40 - .link_mask = BIT(3), 41 - .links = nvl_rt722_l3, 42 - .drv_name = "sof_sdw", 43 - .sof_tplg_filename = "sof-nvl-rt722.tplg", 44 81 }, 45 82 {}, 46 83 };
+4 -1
sound/soc/intel/common/sof-function-topology-lib.c
··· 28 28 #define SOF_INTEL_PLATFORM_NAME_MAX 4 29 29 30 30 int sof_sdw_get_tplg_files(struct snd_soc_card *card, const struct snd_soc_acpi_mach *mach, 31 - const char *prefix, const char ***tplg_files) 31 + const char *prefix, const char ***tplg_files, bool best_effort) 32 32 { 33 33 struct snd_soc_acpi_mach_params mach_params = mach->mach_params; 34 34 struct snd_soc_dai_link *dai_link; ··· 87 87 dev_dbg(card->dev, 88 88 "dai_link %s is not supported by separated tplg yet\n", 89 89 dai_link->name); 90 + if (best_effort) 91 + continue; 92 + 90 93 return 0; 91 94 } 92 95 if (tplg_mask & BIT(tplg_dev))
+1 -1
sound/soc/intel/common/sof-function-topology-lib.h
··· 10 10 #define _SND_SOC_ACPI_INTEL_GET_TPLG_H 11 11 12 12 int sof_sdw_get_tplg_files(struct snd_soc_card *card, const struct snd_soc_acpi_mach *mach, 13 - const char *prefix, const char ***tplg_files); 13 + const char *prefix, const char ***tplg_files, bool best_effort); 14 14 15 15 #endif
+2
sound/soc/qcom/sdm845.c
··· 365 365 snd_soc_dai_set_fmt(codec_dai, codec_dai_fmt); 366 366 break; 367 367 case QUATERNARY_MI2S_RX: 368 + codec_dai_fmt |= SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_I2S; 368 369 snd_soc_dai_set_sysclk(cpu_dai, 369 370 Q6AFE_LPASS_CLK_ID_QUAD_MI2S_IBIT, 370 371 MI2S_BCLK_RATE, SNDRV_PCM_STREAM_PLAYBACK); 371 372 snd_soc_dai_set_fmt(cpu_dai, fmt); 373 + snd_soc_dai_set_fmt(codec_dai, codec_dai_fmt); 372 374 break; 373 375 374 376 case QUATERNARY_TDM_RX_0:
+6 -2
sound/soc/sdw_utils/soc_sdw_utils.c
··· 1548 1548 * endpoint check is not necessary 1549 1549 */ 1550 1550 if (dai_info->quirk && 1551 - !(dai_info->quirk_exclude ^ !!(dai_info->quirk & ctx->mc_quirk))) 1551 + !(dai_info->quirk_exclude ^ !!(dai_info->quirk & ctx->mc_quirk))) { 1552 + (*num_devs)--; 1552 1553 continue; 1554 + } 1553 1555 } else { 1554 1556 /* Check SDCA codec endpoint if there is no matching quirk */ 1555 1557 ret = is_sdca_endpoint_present(dev, codec_info, adr_link, i, j); ··· 1559 1557 return ret; 1560 1558 1561 1559 /* The endpoint is not present, skip */ 1562 - if (!ret) 1560 + if (!ret) { 1561 + (*num_devs)--; 1563 1562 continue; 1563 + } 1564 1564 } 1565 1565 1566 1566 dev_dbg(dev,
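The soc_sdw_utils change is an accounting fix: both skip paths (a DAI excluded by quirk, and an SDCA endpoint that is not actually present) now decrement *num_devs, so the count handed back to the caller matches the entries actually filled in. With, say, three devices declared on a link and one endpoint absent, the old code reported three devices while populating two. The invariant, sketched with hypothetical helpers:

        /* Count only the devices actually kept. */
        int kept = 0;

        for (j = 0; j < total; j++) {
                if (should_skip(j))
                        continue;
                fill_entry(kept++, j);
        }
        *num_devs = kept;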
+20 -12
sound/soc/soc-ops.c
··· 111 111 EXPORT_SYMBOL_GPL(snd_soc_put_enum_double); 112 112 113 113 static int sdca_soc_q78_reg_to_ctl(struct soc_mixer_control *mc, unsigned int reg_val, 114 - unsigned int mask, unsigned int shift, int max) 114 + unsigned int mask, unsigned int shift, int max, 115 + bool sx) 115 116 { 116 117 int val = reg_val; 117 118 ··· 142 141 } 143 142 144 143 static int soc_mixer_reg_to_ctl(struct soc_mixer_control *mc, unsigned int reg_val, 145 - unsigned int mask, unsigned int shift, int max) 144 + unsigned int mask, unsigned int shift, int max, 145 + bool sx) 146 146 { 147 147 int val = (reg_val >> shift) & mask; 148 148 149 149 if (mc->sign_bit) 150 150 val = sign_extend32(val, mc->sign_bit); 151 151 152 - val = clamp(val, mc->min, mc->max); 153 - val -= mc->min; 152 + if (sx) { 153 + val -= mc->min; // SX controls intentionally can overflow here 154 + val = min_t(unsigned int, val & mask, max); 155 + } else { 156 + val = clamp(val, mc->min, mc->max); 157 + val -= mc->min; 158 + } 154 159 155 160 if (mc->invert) 156 161 val = max - val; 157 162 158 - return val & mask; 163 + return val; 159 164 } 160 165 161 166 static unsigned int soc_mixer_ctl_to_reg(struct soc_mixer_control *mc, int val, ··· 287 280 288 281 static int soc_get_volsw(struct snd_kcontrol *kcontrol, 289 282 struct snd_ctl_elem_value *ucontrol, 290 - struct soc_mixer_control *mc, int mask, int max) 283 + struct soc_mixer_control *mc, int mask, int max, bool sx) 291 284 { 292 - int (*reg_to_ctl)(struct soc_mixer_control *, unsigned int, unsigned int, unsigned int, int); 285 + int (*reg_to_ctl)(struct soc_mixer_control *, unsigned int, unsigned int, 286 + unsigned int, int, bool); 293 287 struct snd_soc_component *component = snd_kcontrol_chip(kcontrol); 294 288 unsigned int reg_val; 295 289 int val; ··· 301 293 reg_to_ctl = soc_mixer_reg_to_ctl; 302 294 303 295 reg_val = snd_soc_component_read(component, mc->reg); 304 - val = reg_to_ctl(mc, reg_val, mask, mc->shift, max); 296 + val = reg_to_ctl(mc, reg_val, mask, mc->shift, max, sx); 305 297 306 298 ucontrol->value.integer.value[0] = val; 307 299 308 300 if (snd_soc_volsw_is_stereo(mc)) { 309 301 if (mc->reg == mc->rreg) { 310 - val = reg_to_ctl(mc, reg_val, mask, mc->rshift, max); 302 + val = reg_to_ctl(mc, reg_val, mask, mc->rshift, max, sx); 311 303 } else { 312 304 reg_val = snd_soc_component_read(component, mc->rreg); 313 - val = reg_to_ctl(mc, reg_val, mask, mc->shift, max); 305 + val = reg_to_ctl(mc, reg_val, mask, mc->shift, max, sx); 314 306 } 315 307 316 308 ucontrol->value.integer.value[1] = val; ··· 379 371 (struct soc_mixer_control *)kcontrol->private_value; 380 372 unsigned int mask = soc_mixer_mask(mc); 381 373 382 - return soc_get_volsw(kcontrol, ucontrol, mc, mask, mc->max - mc->min); 374 + return soc_get_volsw(kcontrol, ucontrol, mc, mask, mc->max - mc->min, false); 383 375 } 384 376 EXPORT_SYMBOL_GPL(snd_soc_get_volsw); 385 377 ··· 421 413 (struct soc_mixer_control *)kcontrol->private_value; 422 414 unsigned int mask = soc_mixer_sx_mask(mc); 423 415 424 - return soc_get_volsw(kcontrol, ucontrol, mc, mask, mc->max); 416 + return soc_get_volsw(kcontrol, ucontrol, mc, mask, mc->max, true); 425 417 } 426 418 EXPORT_SYMBOL_GPL(snd_soc_get_volsw_sx); 427 419
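The soc-ops rework routes SX controls through the shared reg-to-control conversion with an sx flag rather than giving them the ordinary clamp. A worked example of why the unsigned wrap is intentional, using hypothetical values (mask = 0xff, mc->min = 0xc0, max = 0x7f): the hardware scale runs 0xc0..0xff and then wraps through 0x00..0x3f, so computing val -= mc->min in unsigned arithmetic maps register 0xc0 to control 0, register 0x00 to (0 - 0xc0) & 0xff = 0x40, and register 0x3f to 0x7f, exactly the 0..127 control range. The clamp(val, mc->min, mc->max) applied to non-sx controls would destroy these wrapped values, which is why the sx path only bounds the result with min_t(unsigned int, val & mask, max).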
+3 -3
sound/soc/sof/intel/pci-mtl.c
··· 47 47 [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/mtl", 48 48 }, 49 49 .default_tplg_path = { 50 - [SOF_IPC_TYPE_4] = "intel/sof-ace-tplg", 50 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 51 51 }, 52 52 .default_fw_filename = { 53 53 [SOF_IPC_TYPE_4] = "sof-mtl.ri", ··· 77 77 [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/arl", 78 78 }, 79 79 .default_tplg_path = { 80 - [SOF_IPC_TYPE_4] = "intel/sof-ace-tplg", 80 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 81 81 }, 82 82 .default_fw_filename = { 83 83 [SOF_IPC_TYPE_4] = "sof-arl.ri", ··· 107 107 [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/arl-s", 108 108 }, 109 109 .default_tplg_path = { 110 - [SOF_IPC_TYPE_4] = "intel/sof-ace-tplg", 110 + [SOF_IPC_TYPE_4] = "intel/sof-ipc4-tplg", 111 111 }, 112 112 .default_fw_filename = { 113 113 [SOF_IPC_TYPE_4] = "sof-arl-s.ri",
+32 -17
sound/soc/sof/ipc4-topology.c
··· 1752 1752 channel_count = params_channels(params); 1753 1753 sample_rate = params_rate(params); 1754 1754 bit_depth = params_width(params); 1755 - /* 1756 - * Look for 32-bit blob first instead of 16-bit if copier 1757 - * supports multiple formats 1758 - */ 1759 - if (bit_depth == 16 && !single_bitdepth) { 1755 + 1756 + /* Prefer 32-bit blob if copier supports multiple formats */ 1757 + if (bit_depth <= 16 && !single_bitdepth) { 1760 1758 dev_dbg(sdev->dev, "Looking for 32-bit blob first for DMIC\n"); 1761 1759 format_change = true; 1762 1760 bit_depth = 32; ··· 1797 1799 if (format_change) { 1798 1800 /* 1799 1801 * The 32-bit blob was not found in NHLT table, try to 1800 - * look for one based on the params 1802 + * look for 16-bit for DMIC or based on the params for 1803 + * SSP 1801 1804 */ 1802 - bit_depth = params_width(params); 1803 - format_change = false; 1805 + if (linktype == SOF_DAI_INTEL_DMIC) { 1806 + bit_depth = 16; 1807 + if (params_width(params) == 16) 1808 + format_change = false; 1809 + } else { 1810 + bit_depth = params_width(params); 1811 + format_change = false; 1812 + } 1813 + 1804 1814 get_new_blob = true; 1805 1815 } else if (linktype == SOF_DAI_INTEL_DMIC && !single_bitdepth) { 1806 1816 /* ··· 1843 1837 *len = cfg->size >> 2; 1844 1838 *dst = (u32 *)cfg->caps; 1845 1839 1846 - if (format_change) { 1840 + if (format_change || params_format(params) == SNDRV_PCM_FORMAT_FLOAT_LE) { 1847 1841 /* 1848 1842 * Update the params to reflect that different blob was loaded 1849 1843 * instead of the requested bit depth (16 -> 32 or 32 -> 16). ··· 2286 2280 ch_map >>= 4; 2287 2281 } 2288 2282 2289 - step = ch_count / blob->alh_cfg.device_count; 2290 - mask = GENMASK(step - 1, 0); 2283 + if (swidget->id == snd_soc_dapm_dai_in && ch_count == out_ref_channels) { 2284 + /* 2285 + * For playback DAI widgets where the channel number is equal to 2286 + * the output reference channels, set the step = 0 to ensure all 2287 + * the ch_mask is applied to all alh mappings. 2288 + */ 2289 + mask = ch_mask; 2290 + step = 0; 2291 + } else { 2292 + step = ch_count / blob->alh_cfg.device_count; 2293 + mask = GENMASK(step - 1, 0); 2294 + } 2295 + 2291 2296 /* 2292 2297 * Set each gtw_cfg.node_id to blob->alh_cfg.mapping[] 2293 2298 * for all widgets with the same stream name ··· 2333 2316 } 2334 2317 2335 2318 /* 2336 - * Set the same channel mask for playback as the audio data is 2337 - * duplicated for all speakers. For capture, split the channels 2319 + * Set the same channel mask if the widget channel count is the same 2320 + * as the FE channels for playback as the audio data is duplicated 2321 + * for all speakers in this case. Otherwise, split the channels 2338 2322 * among the aggregated DAIs. For example, with 4 channels on 2 2339 2323 * aggregated DAIs, the channel_mask should be 0x3 and 0xc for the 2340 2324 * two DAI's. ··· 2344 2326 * the tables in soc_acpi files depending on the _ADR and devID 2345 2327 * registers for each codec. 2346 2328 */ 2347 - if (w->id == snd_soc_dapm_dai_in) 2348 - blob->alh_cfg.mapping[i].channel_mask = ch_mask; 2349 - else 2350 - blob->alh_cfg.mapping[i].channel_mask = mask << (step * i); 2329 + blob->alh_cfg.mapping[i].channel_mask = mask << (step * i); 2351 2330 2352 2331 i++; 2353 2332 }
+21 -5
sound/soc/sof/topology.c
··· 2106 2106 /* source component */ 2107 2107 source_swidget = snd_sof_find_swidget(scomp, (char *)route->source); 2108 2108 if (!source_swidget) { 2109 - dev_err(scomp->dev, "error: source %s not found\n", 2110 - route->source); 2109 + dev_err(scomp->dev, "source %s for sink %s is not found\n", 2110 + route->source, route->sink); 2111 2111 ret = -EINVAL; 2112 2112 goto err; 2113 2113 } ··· 2125 2125 /* sink component */ 2126 2126 sink_swidget = snd_sof_find_swidget(scomp, (char *)route->sink); 2127 2127 if (!sink_swidget) { 2128 - dev_err(scomp->dev, "error: sink %s not found\n", 2129 - route->sink); 2128 + dev_err(scomp->dev, "sink %s for source %s is not found\n", 2129 + route->sink, route->source); 2130 2130 ret = -EINVAL; 2131 2131 goto err; 2132 2132 } ··· 2506 2506 if (!tplg_files) 2507 2507 return -ENOMEM; 2508 2508 2509 + /* Try to use function topologies if possible */ 2509 2510 if (!sof_pdata->disable_function_topology && !disable_function_topology && 2510 2511 sof_pdata->machine && sof_pdata->machine->get_function_tplg_files) { 2512 + /* 2513 + * When the topology name contains 'dummy' word, it means that 2514 + * there is no fallback option to monolithic topology in case 2515 + * any of the function topologies might be missing. 2516 + * In this case we should use best effort to form the card, 2517 + * ignoring functionalities that we are missing a fragment for. 2518 + * 2519 + * Note: monolithic topologies also ignore these possibly 2520 + * missing functions, so the functionality of the card would be 2521 + * identical to the case if there would be a fallback monolithic 2522 + * topology created for the configuration. 2523 + */ 2524 + bool no_fallback = strstr(file, "dummy"); 2525 + 2511 2526 tplg_cnt = sof_pdata->machine->get_function_tplg_files(scomp->card, 2512 2527 sof_pdata->machine, 2513 2528 tplg_filename_prefix, 2514 - &tplg_files); 2529 + &tplg_files, 2530 + no_fallback); 2515 2531 if (tplg_cnt < 0) { 2516 2532 kfree(tplg_files); 2517 2533 return tplg_cnt;
+3 -3
sound/soc/tegra/tegra210_ahub.c
··· 2077 2077 .val_bits = 32, 2078 2078 .reg_stride = 4, 2079 2079 .max_register = TEGRA210_MAX_REGISTER_ADDR, 2080 - .cache_type = REGCACHE_FLAT, 2080 + .cache_type = REGCACHE_FLAT_S, 2081 2081 }; 2082 2082 2083 2083 static const struct regmap_config tegra186_ahub_regmap_config = { ··· 2085 2085 .val_bits = 32, 2086 2086 .reg_stride = 4, 2087 2087 .max_register = TEGRA186_MAX_REGISTER_ADDR, 2088 - .cache_type = REGCACHE_FLAT, 2088 + .cache_type = REGCACHE_FLAT_S, 2089 2089 }; 2090 2090 2091 2091 static const struct regmap_config tegra264_ahub_regmap_config = { ··· 2094 2094 .reg_stride = 4, 2095 2095 .writeable_reg = tegra264_ahub_wr_reg, 2096 2096 .max_register = TEGRA264_MAX_REGISTER_ADDR, 2097 - .cache_type = REGCACHE_FLAT, 2097 + .cache_type = REGCACHE_FLAT_S, 2098 2098 }; 2099 2099 2100 2100 static const struct tegra_ahub_soc_data soc_data_tegra210 = {
+4 -4
sound/usb/endpoint.c
··· 1481 1481 return err; 1482 1482 } 1483 1483 1484 + err = snd_usb_select_mode_quirk(chip, ep->cur_audiofmt); 1485 + if (err < 0) 1486 + return err; 1487 + 1484 1488 err = snd_usb_init_pitch(chip, ep->cur_audiofmt); 1485 1489 if (err < 0) 1486 1490 return err; 1487 1491 1488 1492 err = init_sample_rate(chip, ep); 1489 - if (err < 0) 1490 - return err; 1491 - 1492 - err = snd_usb_select_mode_quirk(chip, ep->cur_audiofmt); 1493 1493 if (err < 0) 1494 1494 return err; 1495 1495
+4 -1
sound/usb/format.c
··· 34 34 { 35 35 int sample_width, sample_bytes; 36 36 u64 pcm_formats = 0; 37 + u64 dsd_formats = 0; 37 38 38 39 switch (fp->protocol) { 39 40 case UAC_VERSION_1: ··· 155 154 fp->iface, fp->altsetting, format); 156 155 } 157 156 158 - pcm_formats |= snd_usb_interface_dsd_format_quirks(chip, fp, sample_bytes); 157 + dsd_formats |= snd_usb_interface_dsd_format_quirks(chip, fp, sample_bytes); 158 + if (dsd_formats && !fp->dsd_dop) 159 + pcm_formats = dsd_formats; 159 160 160 161 return pcm_formats; 161 162 }
+14 -6
sound/usb/mixer_us16x08.c
··· 655 655 u8 *meter_urb) 656 656 { 657 657 int val = MUC2(meter_urb, s) + (MUC3(meter_urb, s) << 8); 658 + int ch = MUB2(meter_urb, s) - 1; 659 + 660 + if (ch < 0) 661 + return; 658 662 659 663 if (MUA0(meter_urb, s) == 0x61 && MUA1(meter_urb, s) == 0x02 && 660 664 MUA2(meter_urb, s) == 0x04 && MUB0(meter_urb, s) == 0x62) { 661 - if (MUC0(meter_urb, s) == 0x72) 662 - store->meter_level[MUB2(meter_urb, s) - 1] = val; 663 - if (MUC0(meter_urb, s) == 0xb2) 664 - store->comp_level[MUB2(meter_urb, s) - 1] = val; 665 + if (ch < SND_US16X08_MAX_CHANNELS) { 666 + if (MUC0(meter_urb, s) == 0x72) 667 + store->meter_level[ch] = val; 668 + if (MUC0(meter_urb, s) == 0xb2) 669 + store->comp_level[ch] = val; 670 + } 665 671 } 666 672 if (MUA0(meter_urb, s) == 0x61 && MUA1(meter_urb, s) == 0x02 && 667 - MUA2(meter_urb, s) == 0x02 && MUB0(meter_urb, s) == 0x62) 668 - store->master_level[MUB2(meter_urb, s) - 1] = val; 673 + MUA2(meter_urb, s) == 0x02 && MUB0(meter_urb, s) == 0x62) { 674 + if (ch < ARRAY_SIZE(store->master_level)) 675 + store->master_level[ch] = val; 676 + } 669 677 } 670 678 671 679 /* Function to retrieve current meter values from the device.
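The us16x08 meter fix treats the URB payload as device-controlled and therefore untrusted: the stores index fixed arrays with MUB2(meter_urb, s) - 1, so a channel byte of 0 produced a negative index, and an oversized byte wrote past meter_level[]/comp_level[] (bounded by SND_US16X08_MAX_CHANNELS) or master_level[]. The guard shape, sketched against a hypothetical store:

        /* Validate a device-supplied, 1-based channel byte before use. */
        int ch = urb_byte - 1;

        if (ch < 0 || ch >= ARRAY_SIZE(store->meter_level))
                return;         /* ignore malformed packets */
        store->meter_level[ch] = val;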
+12 -2
sound/usb/quirks.c
··· 2221 2221 QUIRK_FLAG_IFACE_DELAY), 2222 2222 DEVICE_FLG(0x0644, 0x8044, /* Esoteric D-05X */ 2223 2223 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2224 - QUIRK_FLAG_IFACE_DELAY), 2224 + QUIRK_FLAG_IFACE_DELAY | QUIRK_FLAG_FORCE_IFACE_RESET), 2225 2225 DEVICE_FLG(0x0644, 0x804a, /* TEAC UD-301 */ 2226 2226 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2227 2227 QUIRK_FLAG_IFACE_DELAY), ··· 2229 2229 QUIRK_FLAG_FORCE_IFACE_RESET), 2230 2230 DEVICE_FLG(0x0644, 0x806b, /* TEAC UD-701 */ 2231 2231 QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2232 - QUIRK_FLAG_IFACE_DELAY), 2232 + QUIRK_FLAG_IFACE_DELAY | QUIRK_FLAG_FORCE_IFACE_RESET), 2233 + DEVICE_FLG(0x0644, 0x807d, /* TEAC UD-507 */ 2234 + QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2235 + QUIRK_FLAG_IFACE_DELAY | QUIRK_FLAG_FORCE_IFACE_RESET), 2236 + DEVICE_FLG(0x0644, 0x806c, /* Esoteric XD */ 2237 + QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | 2238 + QUIRK_FLAG_IFACE_DELAY | QUIRK_FLAG_FORCE_IFACE_RESET), 2233 2239 DEVICE_FLG(0x06f8, 0xb000, /* Hercules DJ Console (Windows Edition) */ 2234 2240 QUIRK_FLAG_IGNORE_CTL_ERROR), 2235 2241 DEVICE_FLG(0x06f8, 0xd002, /* Hercules DJ Console (Macintosh Edition) */ ··· 2394 2388 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2395 2389 DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */ 2396 2390 QUIRK_FLAG_IGNORE_CTL_ERROR), 2391 + DEVICE_FLG(0x3255, 0x0000, /* Luxman D-10X */ 2392 + QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY), 2397 2393 DEVICE_FLG(0x339b, 0x3a07, /* Synaptics HONOR USB-C HEADSET */ 2398 2394 QUIRK_FLAG_MIXER_PLAYBACK_MIN_MUTE), 2399 2395 DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */ ··· 2438 2430 VENDOR_FLG(0x25ce, /* Mytek devices */ 2439 2431 QUIRK_FLAG_DSD_RAW), 2440 2432 VENDOR_FLG(0x2622, /* IAG Limited devices */ 2433 + QUIRK_FLAG_DSD_RAW), 2434 + VENDOR_FLG(0x2772, /* Musical Fidelity devices */ 2441 2435 QUIRK_FLAG_DSD_RAW), 2442 2436 VENDOR_FLG(0x278b, /* Rotel? */ 2443 2437 QUIRK_FLAG_DSD_RAW),
+5 -3
tools/arch/arm64/include/asm/cputype.h
··· 81 81 #define ARM_CPU_PART_CORTEX_A78AE 0xD42 82 82 #define ARM_CPU_PART_CORTEX_X1 0xD44 83 83 #define ARM_CPU_PART_CORTEX_A510 0xD46 84 - #define ARM_CPU_PART_CORTEX_X1C 0xD4C 85 84 #define ARM_CPU_PART_CORTEX_A520 0xD80 86 85 #define ARM_CPU_PART_CORTEX_A710 0xD47 87 86 #define ARM_CPU_PART_CORTEX_A715 0xD4D ··· 92 93 #define ARM_CPU_PART_NEOVERSE_V2 0xD4F 93 94 #define ARM_CPU_PART_CORTEX_A720 0xD81 94 95 #define ARM_CPU_PART_CORTEX_X4 0xD82 96 + #define ARM_CPU_PART_NEOVERSE_V3AE 0xD83 95 97 #define ARM_CPU_PART_NEOVERSE_V3 0xD84 96 98 #define ARM_CPU_PART_CORTEX_X925 0xD85 97 99 #define ARM_CPU_PART_CORTEX_A725 0xD87 ··· 130 130 131 131 #define NVIDIA_CPU_PART_DENVER 0x003 132 132 #define NVIDIA_CPU_PART_CARMEL 0x004 133 + #define NVIDIA_CPU_PART_OLYMPUS 0x010 133 134 134 135 #define FUJITSU_CPU_PART_A64FX 0x001 135 136 ··· 172 171 #define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE) 173 172 #define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1) 174 173 #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510) 175 - #define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C) 176 174 #define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520) 177 175 #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) 178 176 #define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715) ··· 183 183 #define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2) 184 184 #define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720) 185 185 #define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4) 186 + #define MIDR_NEOVERSE_V3AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3AE) 186 187 #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3) 187 188 #define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925) 188 189 #define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725) ··· 223 222 224 223 #define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER) 225 224 #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL) 225 + #define MIDR_NVIDIA_OLYMPUS MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_OLYMPUS) 226 226 #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX) 227 227 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110) 228 228 #define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09) ··· 247 245 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */ 248 246 #define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX 249 247 #define MIDR_FUJITSU_ERRATUM_010001_MASK (~MIDR_CPU_VAR_REV(1, 0)) 250 - #define TCR_CLEAR_FUJITSU_ERRATUM_010001 (TCR_NFD1 | TCR_NFD0) 248 + #define TCR_CLEAR_FUJITSU_ERRATUM_010001 (TCR_EL1_NFD1 | TCR_EL1_NFD0) 251 249 252 250 #ifndef __ASSEMBLER__ 253 251
+11
tools/arch/x86/include/asm/cpufeatures.h
··· 314 314 #define X86_FEATURE_SM4 (12*32+ 2) /* SM4 instructions */ 315 315 #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* "avx_vnni" AVX VNNI instructions */ 316 316 #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* "avx512_bf16" AVX512 BFLOAT16 instructions */ 317 + #define X86_FEATURE_LASS (12*32+ 6) /* "lass" Linear Address Space Separation */ 317 318 #define X86_FEATURE_CMPCCXADD (12*32+ 7) /* CMPccXADD instructions */ 318 319 #define X86_FEATURE_ARCH_PERFMON_EXT (12*32+ 8) /* Intel Architectural PerfMon Extension */ 319 320 #define X86_FEATURE_FZRM (12*32+10) /* Fast zero-length REP MOVSB */ ··· 339 338 #define X86_FEATURE_AMD_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */ 340 339 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* Single Thread Indirect Branch Predictors always-on preferred */ 341 340 #define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/ 341 + #define X86_FEATURE_EFER_LMSLE_MBZ (13*32+20) /* EFER.LMSLE must be zero */ 342 342 #define X86_FEATURE_AMD_PPIN (13*32+23) /* "amd_ppin" Protected Processor Inventory Number */ 343 343 #define X86_FEATURE_AMD_SSBD (13*32+24) /* Speculative Store Bypass Disable */ 344 344 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* "virt_ssbd" Virtualized Speculative Store Bypass Disable */ ··· 504 502 #define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */ 505 503 #define X86_FEATURE_ABMC (21*32+15) /* Assignable Bandwidth Monitoring Counters */ 506 504 #define X86_FEATURE_MSR_IMM (21*32+16) /* MSR immediate form instructions */ 505 + #define X86_FEATURE_SGX_EUPDATESVN (21*32+17) /* Support for ENCLS[EUPDATESVN] instruction */ 506 + 507 + #define X86_FEATURE_SDCIAE (21*32+18) /* L3 Smart Data Cache Injection Allocation Enforcement */ 508 + #define X86_FEATURE_CLEAR_CPU_BUF_VM_MMIO (21*32+19) /* 509 + * Clear CPU buffers before VM-Enter if the vCPU 510 + * can access host MMIO (ignored for all intents 511 + * and purposes if CLEAR_CPU_BUF_VM is set). 512 + */ 513 + #define X86_FEATURE_X2AVIC_EXT (21*32+20) /* AMD SVM x2AVIC support for 4k vCPUs */ 507 514 508 515 /* 509 516 * BUG word(s)
+30
tools/arch/x86/include/asm/msr-index.h
··· 166 166 * Processor MMIO stale data 167 167 * vulnerabilities. 168 168 */ 169 + #define ARCH_CAP_MCU_ENUM BIT(16) /* 170 + * Indicates the presence of microcode update 171 + * feature enumeration and status information. 172 + */ 169 173 #define ARCH_CAP_FB_CLEAR BIT(17) /* 170 174 * VERW clears CPU fill buffer 171 175 * even on MDS_NO CPUs. ··· 330 326 #define PERF_CAP_PEBS_MASK (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \ 331 327 PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE | \ 332 328 PERF_CAP_PEBS_TIMING_INFO) 329 + 330 + /* Arch PEBS */ 331 + #define MSR_IA32_PEBS_BASE 0x000003f4 332 + #define MSR_IA32_PEBS_INDEX 0x000003f5 333 + #define ARCH_PEBS_OFFSET_MASK 0x7fffff 334 + #define ARCH_PEBS_INDEX_WR_SHIFT 4 335 + 336 + #define ARCH_PEBS_RELOAD 0xffffffff 337 + #define ARCH_PEBS_CNTR_ALLOW BIT_ULL(35) 338 + #define ARCH_PEBS_CNTR_GP BIT_ULL(36) 339 + #define ARCH_PEBS_CNTR_FIXED BIT_ULL(37) 340 + #define ARCH_PEBS_CNTR_METRICS BIT_ULL(38) 341 + #define ARCH_PEBS_LBR_SHIFT 40 342 + #define ARCH_PEBS_LBR (0x3ull << ARCH_PEBS_LBR_SHIFT) 343 + #define ARCH_PEBS_VECR_XMM BIT_ULL(49) 344 + #define ARCH_PEBS_GPR BIT_ULL(61) 345 + #define ARCH_PEBS_AUX BIT_ULL(62) 346 + #define ARCH_PEBS_EN BIT_ULL(63) 347 + #define ARCH_PEBS_CNTR_MASK (ARCH_PEBS_CNTR_GP | ARCH_PEBS_CNTR_FIXED | \ 348 + ARCH_PEBS_CNTR_METRICS) 333 349 334 350 #define MSR_IA32_RTIT_CTL 0x00000570 335 351 #define RTIT_CTL_TRACEEN BIT(0) ··· 953 929 #define MSR_IA32_APICBASE_BASE (0xfffff<<12) 954 930 955 931 #define MSR_IA32_UCODE_WRITE 0x00000079 932 + 933 + #define MSR_IA32_MCU_ENUMERATION 0x0000007b 934 + #define MCU_STAGING BIT(4) 935 + 956 936 #define MSR_IA32_UCODE_REV 0x0000008b 957 937 958 938 /* Intel SGX Launch Enclave Public Key Hash MSRs */ ··· 1253 1225 #define MSR_IA32_VMX_TRUE_ENTRY_CTLS 0x00000490 1254 1226 #define MSR_IA32_VMX_VMFUNC 0x00000491 1255 1227 #define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492 1228 + 1229 + #define MSR_IA32_MCU_STAGING_MBOX_ADDR 0x000007a5 1256 1230 1257 1231 /* Resctrl MSRs: */ 1258 1232 /* - Intel: */
+1
tools/arch/x86/include/uapi/asm/kvm.h
··· 502 502 /* vendor-specific groups and attributes for system fd */ 503 503 #define KVM_X86_GRP_SEV 1 504 504 # define KVM_X86_SEV_VMSA_FEATURES 0 505 + # define KVM_X86_SNP_POLICY_BITS 1 505 506 506 507 struct kvm_vmx_nested_state_data { 507 508 __u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
+4 -2
tools/build/Makefile.feature
··· 99 99 libzstd \ 100 100 disassembler-four-args \ 101 101 disassembler-init-styled \ 102 - file-handle 102 + file-handle \ 103 + libopenssl 103 104 104 105 # FEATURE_TESTS_BASIC + FEATURE_TESTS_EXTRA is the complete list 105 106 # of all feature tests ··· 148 147 lzma \ 149 148 bpf \ 150 149 libaio \ 151 - libzstd 150 + libzstd \ 151 + libopenssl 152 152 153 153 # 154 154 # Declare group members of a feature to display the logical OR of the detection
+7 -3
tools/build/feature/Makefile
··· 67 67 test-libopencsd.bin \ 68 68 test-clang.bin \ 69 69 test-llvm.bin \ 70 - test-llvm-perf.bin \ 70 + test-llvm-perf.bin \ 71 71 test-libaio.bin \ 72 72 test-libzstd.bin \ 73 73 test-clang-bpf-co-re.bin \ 74 74 test-file-handle.bin \ 75 - test-libpfm4.bin 75 + test-libpfm4.bin \ 76 + test-libopenssl.bin 76 77 77 78 FILES := $(addprefix $(OUTPUT),$(FILES)) 78 79 ··· 107 106 __BUILD = $(CC) $(CFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.c,$(@F)) $(LDFLAGS) 108 107 BUILD = $(__BUILD) > $(@:.bin=.make.output) 2>&1 109 108 BUILD_BFD = $(BUILD) -DPACKAGE='"perf"' -lbfd -ldl 110 - BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -ldl -lz -llzma -lzstd 109 + BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -ldl -lz -llzma -lzstd -lssl 111 110 112 111 __BUILDXX = $(CXX) $(CXXFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.cpp,$(@F)) $(LDFLAGS) 113 112 BUILDXX = $(__BUILDXX) > $(@:.bin=.make.output) 2>&1 ··· 381 380 382 381 $(OUTPUT)test-libpfm4.bin: 383 382 $(BUILD) -lpfm 383 + 384 + $(OUTPUT)test-libopenssl.bin: 385 + $(BUILD) -lssl 384 386 385 387 $(OUTPUT)test-bpftool-skeletons.bin: 386 388 $(SYSTEM_BPFTOOL) version | grep '^features:.*skeletons' \
+5
tools/build/feature/test-all.c
··· 142 142 # include "test-libtraceevent.c" 143 143 #undef main 144 144 145 + #define main main_test_libopenssl 146 + # include "test-libopenssl.c" 147 + #undef main 148 + 145 149 int main(int argc, char *argv[]) 146 150 { 147 151 main_test_libpython(); ··· 177 173 main_test_reallocarray(); 178 174 main_test_libzstd(); 179 175 main_test_libtraceevent(); 176 + main_test_libopenssl(); 180 177 181 178 return 0; 182 179 }
+7
tools/build/feature/test-libopenssl.c
··· 1 + #include <openssl/ssl.h> 2 + #include <openssl/opensslv.h> 3 + 4 + int main(void) 5 + { 6 + return SSL_library_init(); 7 + }
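A note on the probe source: SSL_library_init() is the pre-1.1.0 initialization entry point, and OpenSSL 1.1.0+ keeps it as a compatibility macro expanding to OPENSSL_init_ssl(0, NULL), so this one-liner should compile and link against both generations of libssl. The modern spelling, for comparison (illustrative only, not part of the feature test):

        #include <openssl/ssl.h>

        int main(void)
        {
                /* OPENSSL_init_ssl() returns 1 on success. */
                return !OPENSSL_init_ssl(0, NULL);
        }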
-6
tools/include/linux/gfp_types.h
··· 55 55 #ifdef CONFIG_LOCKDEP 56 56 ___GFP_NOLOCKDEP_BIT, 57 57 #endif 58 - #ifdef CONFIG_SLAB_OBJ_EXT 59 58 ___GFP_NO_OBJ_EXT_BIT, 60 - #endif 61 59 ___GFP_LAST_BIT 62 60 }; 63 61 ··· 96 98 #else 97 99 #define ___GFP_NOLOCKDEP 0 98 100 #endif 99 - #ifdef CONFIG_SLAB_OBJ_EXT 100 101 #define ___GFP_NO_OBJ_EXT BIT(___GFP_NO_OBJ_EXT_BIT) 101 - #else 102 - #define ___GFP_NO_OBJ_EXT 0 103 - #endif 104 102 105 103 /* 106 104 * Physical address zone modifiers (see linux/mmzone.h - low four bits)
+8
tools/include/linux/types.h
··· 88 88 # define __aligned_u64 __u64 __attribute__((aligned(8))) 89 89 #endif 90 90 91 + #ifndef __aligned_be64 92 + # define __aligned_be64 __be64 __attribute__((aligned(8))) 93 + #endif 94 + 95 + #ifndef __aligned_le64 96 + # define __aligned_le64 __le64 __attribute__((aligned(8))) 97 + #endif 98 + 91 99 struct list_head { 92 100 struct list_head *next, *prev; 93 101 };
+3 -1
tools/include/uapi/asm-generic/unistd.h
··· 857 857 __SYSCALL(__NR_file_getattr, sys_file_getattr) 858 858 #define __NR_file_setattr 469 859 859 __SYSCALL(__NR_file_setattr, sys_file_setattr) 860 + #define __NR_listns 470 861 + __SYSCALL(__NR_listns, sys_listns) 860 862 861 863 #undef __NR_syscalls 862 - #define __NR_syscalls 470 864 + #define __NR_syscalls 471 863 865 864 866 /* 865 867 * 32 bit systems traditionally used different
+15
tools/include/uapi/drm/drm.h
··· 906 906 */ 907 907 #define DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT 6 908 908 909 + /** 910 + * DRM_CLIENT_CAP_PLANE_COLOR_PIPELINE 911 + * 912 + * If set to 1 the DRM core will allow setting the COLOR_PIPELINE 913 + * property on a &drm_plane, as well as drm_colorop properties. 914 + * 915 + * Setting of these plane properties will be rejected when this client 916 + * cap is set: 917 + * - COLOR_ENCODING 918 + * - COLOR_RANGE 919 + * 920 + * The client must enable &DRM_CLIENT_CAP_ATOMIC first. 921 + */ 922 + #define DRM_CLIENT_CAP_PLANE_COLOR_PIPELINE 7 923 + 909 924 /* DRM_IOCTL_SET_CLIENT_CAP ioctl argument type */ 910 925 struct drm_set_client_cap { 911 926 __u64 capability;
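Client-side, the new cap is presumably negotiated like the existing ones, and the doc comment requires atomic modesetting to be enabled first. A hypothetical setup with libdrm, whose drmSetClientCap() wraps DRM_IOCTL_SET_CLIENT_CAP (error handling elided):

        #include <xf86drm.h>

        void enable_color_pipeline(int fd)
        {
                drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);
                drmSetClientCap(fd, DRM_CLIENT_CAP_PLANE_COLOR_PIPELINE, 1);
        }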
+11
tools/include/uapi/linux/kvm.h
··· 179 179 #define KVM_EXIT_LOONGARCH_IOCSR 38 180 180 #define KVM_EXIT_MEMORY_FAULT 39 181 181 #define KVM_EXIT_TDX 40 182 + #define KVM_EXIT_ARM_SEA 41 182 183 183 184 /* For KVM_EXIT_INTERNAL_ERROR */ 184 185 /* Emulate instruction failed. */ ··· 474 473 } setup_event_notify; 475 474 }; 476 475 } tdx; 476 + /* KVM_EXIT_ARM_SEA */ 477 + struct { 478 + #define KVM_EXIT_ARM_SEA_FLAG_GPA_VALID (1ULL << 0) 479 + __u64 flags; 480 + __u64 esr; 481 + __u64 gva; 482 + __u64 gpa; 483 + } arm_sea; 477 484 /* Fix the size of the union. */ 478 485 char padding[256]; 479 486 }; ··· 972 963 #define KVM_CAP_RISCV_MP_STATE_RESET 242 973 964 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 974 965 #define KVM_CAP_GUEST_MEMFD_FLAGS 244 966 + #define KVM_CAP_ARM_SEA_TO_USER 245 967 + #define KVM_CAP_S390_USER_OPEREXEC 246 975 968 976 969 struct kvm_irq_routing_irqchip { 977 970 __u32 irqchip;
+5 -1
tools/mm/page_owner_sort.c
··· 181 181 { 182 182 const struct block_list *l1 = p1, *l2 = p2; 183 183 184 - return l1->ts_nsec < l2->ts_nsec ? -1 : 1; 184 + if (l1->ts_nsec < l2->ts_nsec) 185 + return -1; 186 + if (l1->ts_nsec > l2->ts_nsec) 187 + return 1; 188 + return 0; 185 189 } 186 190 187 191 static int compare_cull_condition(const void *p1, const void *p2)
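The page_owner_sort change is a correctness fix for qsort(3)'s contract, not a style tweak: the old comparator never returned 0, so for equal timestamps both compare(a, b) and compare(b, a) returned 1, an inconsistent ordering that is undefined behavior and can scramble the sorted output. A common branch-light alternative for a three-way compare (illustrative, not what the patch uses):

        #include <stdint.h>

        /* Returns -1, 0, or 1; avoids the overflow risk of returning a - b. */
        static int cmp_ts(uint64_t a, uint64_t b)
        {
                return (a > b) - (a < b);
        }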
+8
tools/perf/Makefile.config
··· 701 701 endif 702 702 endif 703 703 704 + ifeq ($(feature-libopenssl), 1) 705 + $(call detected,CONFIG_LIBOPENSSL) 706 + CFLAGS += -DHAVE_LIBOPENSSL_SUPPORT 707 + endif 708 + 704 709 ifndef BUILD_BPF_SKEL 705 710 # BPF skeletons control a large number of perf features, by default 706 711 # they are enabled. ··· 721 716 BUILD_BPF_SKEL := 0 722 717 else ifeq ($(filter -DHAVE_LIBBPF_SUPPORT, $(CFLAGS)),) 723 718 $(warning Warning: Disabled BPF skeletons as libbpf is required) 719 + BUILD_BPF_SKEL := 0 720 + else ifeq ($(filter -DHAVE_LIBOPENSSL_SUPPORT, $(CFLAGS)),) 721 + $(warning Warning: Disabled BPF skeletons as libopenssl is required) 724 722 BUILD_BPF_SKEL := 0 725 723 else ifeq ($(call get-executable,$(CLANG)),) 726 724 $(warning Warning: Disabled BPF skeletons as clang ($(CLANG)) is missing)
+1
tools/perf/arch/arm/entry/syscalls/syscall.tbl
··· 484 484 467 common open_tree_attr sys_open_tree_attr 485 485 468 common file_getattr sys_file_getattr 486 486 469 common file_setattr sys_file_setattr 487 + 470 common listns sys_listns
+1
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 384 384 467 n64 open_tree_attr sys_open_tree_attr 385 385 468 n64 file_getattr sys_file_getattr 386 386 469 n64 file_setattr sys_file_setattr 387 + 470 n64 listns sys_listns
+1
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 560 560 467 common open_tree_attr sys_open_tree_attr 561 561 468 common file_getattr sys_file_getattr 562 562 469 common file_setattr sys_file_setattr 563 + 470 common listns sys_listns
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 472 472 467 common open_tree_attr sys_open_tree_attr sys_open_tree_attr 473 473 468 common file_getattr sys_file_getattr sys_file_getattr 474 474 469 common file_setattr sys_file_setattr sys_file_setattr 475 + 470 common listns sys_listns sys_listns
+1
tools/perf/arch/sh/entry/syscalls/syscall.tbl
··· 473 473 467 common open_tree_attr sys_open_tree_attr 474 474 468 common file_getattr sys_file_getattr 475 475 469 common file_setattr sys_file_setattr 476 + 470 common listns sys_listns
+1
tools/perf/arch/sparc/entry/syscalls/syscall.tbl
··· 515 515 467 common open_tree_attr sys_open_tree_attr 516 516 468 common file_getattr sys_file_getattr 517 517 469 common file_setattr sys_file_setattr 518 + 470 common listns sys_listns
+1
tools/perf/arch/x86/entry/syscalls/syscall_32.tbl
··· 475 475 467 i386 open_tree_attr sys_open_tree_attr 476 476 468 i386 file_getattr sys_file_getattr 477 477 469 i386 file_setattr sys_file_setattr 478 + 470 i386 listns sys_listns
+1
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 394 394 467 common open_tree_attr sys_open_tree_attr 395 395 468 common file_getattr sys_file_getattr 396 396 469 common file_setattr sys_file_setattr 397 + 470 common listns sys_listns 397 398 398 399 # 399 400 # Due to a historical design error, certain syscalls are numbered differently
+1
tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
··· 440 440 467 common open_tree_attr sys_open_tree_attr 441 441 468 common file_getattr sys_file_getattr 442 442 469 common file_setattr sys_file_setattr 443 + 470 common listns sys_listns
+4 -2
tools/perf/builtin-buildid-cache.c
··· 276 276 { 277 277 char filename[PATH_MAX]; 278 278 struct build_id bid = { .size = 0, }; 279 + int err; 279 280 280 281 if (!dso__build_id_filename(dso, filename, sizeof(filename), false)) 281 282 return true; 282 283 283 - if (filename__read_build_id(filename, &bid) == -1) { 284 - if (errno == ENOENT) 284 + err = filename__read_build_id(filename, &bid); 285 + if (err < 0) { 286 + if (err == -ENOENT) 285 287 return false; 286 288 287 289 pr_warning("Problems with %s file, consider removing it from the cache\n",
+1 -1
tools/perf/tests/shell/kvm.sh
··· 118 118 skip "/dev/kvm not accessible" 119 119 fi 120 120 121 - if ! perf kvm stat record -a sleep 0.01 >/dev/null 2>&1; then 121 + if ! perf kvm stat record -o /dev/null -a sleep 0.01 >/dev/null 2>&1; then 122 122 skip "No permission to record kvm events" 123 123 fi 124 124
+1 -1
tools/perf/tests/shell/top.sh
··· 1 1 #!/bin/bash 2 - # perf top tests 2 + # perf top tests (exclusive) 3 3 # SPDX-License-Identifier: GPL-2.0 4 4 5 5 set -e
+21 -3
tools/perf/trace/beauty/include/linux/socket.h
··· 32 32 * 1003.1g requires sa_family_t and that sa_data is char. 33 33 */ 34 34 35 + /* Deprecated for in-kernel use. Use struct sockaddr_unsized instead. */ 35 36 struct sockaddr { 36 37 sa_family_t sa_family; /* address family, AF_xxx */ 37 38 char sa_data[14]; /* 14 bytes of protocol address */ 39 + }; 40 + 41 + /** 42 + * struct sockaddr_unsized - Unspecified size sockaddr for callbacks 43 + * @sa_family: Address family (AF_UNIX, AF_INET, AF_INET6, etc.) 44 + * @sa_data: Flexible array for address data 45 + * 46 + * This structure is designed for callback interfaces where the 47 + * total size is known via the sockaddr_len parameter. Unlike struct 48 + * sockaddr which has a fixed 14-byte sa_data limit or struct 49 + * sockaddr_storage which has a fixed 128-byte sa_data limit, this 50 + * structure can accommodate addresses of any size, but must be used 51 + * carefully. 52 + */ 53 + struct sockaddr_unsized { 54 + __kernel_sa_family_t sa_family; /* address family, AF_xxx */ 55 + char sa_data[]; /* flexible address data */ 38 56 }; 39 57 40 58 struct linger { ··· 468 450 int addrlen); 469 451 extern int __sys_listen(int fd, int backlog); 470 452 extern int __sys_listen_socket(struct socket *sock, int backlog); 453 + extern int do_getsockname(struct socket *sock, int peer, 454 + struct sockaddr __user *usockaddr, int __user *usockaddr_len); 471 455 extern int __sys_getsockname(int fd, struct sockaddr __user *usockaddr, 472 - int __user *usockaddr_len); 473 - extern int __sys_getpeername(int fd, struct sockaddr __user *usockaddr, 474 - int __user *usockaddr_len); 456 + int __user *usockaddr_len, int peer); 475 457 extern int __sys_socketpair(int family, int type, int protocol, 476 458 int __user *usockvec); 477 459 extern int __sys_shutdown_sock(struct socket *sock, int how);
+12
tools/perf/trace/beauty/include/uapi/linux/fcntl.h
··· 4 4 5 5 #include <asm/fcntl.h> 6 6 #include <linux/openat2.h> 7 + #include <linux/types.h> 7 8 8 9 #define F_SETLEASE (F_LINUX_SPECIFIC_BASE + 0) 9 10 #define F_GETLEASE (F_LINUX_SPECIFIC_BASE + 1) ··· 79 78 * v4.13-rc1~212^2~51. 80 79 */ 81 80 #define RWF_WRITE_LIFE_NOT_SET RWH_WRITE_LIFE_NOT_SET 81 + 82 + /* Set/Get delegations */ 83 + #define F_GETDELEG (F_LINUX_SPECIFIC_BASE + 15) 84 + #define F_SETDELEG (F_LINUX_SPECIFIC_BASE + 16) 85 + 86 + /* Argument structure for F_GETDELEG and F_SETDELEG */ 87 + struct delegation { 88 + __u32 d_flags; /* Must be 0 */ 89 + __u16 d_type; /* F_RDLCK, F_WRLCK, F_UNLCK */ 90 + __u16 __pad; /* Must be 0 */ 91 + }; 82 92 83 93 /* 84 94 * Types of directory notifications that may be requested.
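The delegation commands look like the record-lock family of fcntl(2) interfaces. Assuming the third argument is a pointer to struct delegation (the F_GETLK-style convention; an assumption here, since the calling convention is not part of this header diff) and that the uapi definitions are visible, a caller might read:

        #include <fcntl.h>
        #include <stdio.h>

        /* Hypothetical request for a read delegation on fd. */
        int request_read_deleg(int fd)
        {
                struct delegation d = { .d_flags = 0, .d_type = F_RDLCK };

                if (fcntl(fd, F_SETDELEG, &d) == -1) {
                        perror("F_SETDELEG");
                        return -1;
                }
                return 0;
        }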
+2 -1
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 298 298 #define BLKROTATIONAL _IO(0x12,126) 299 299 #define BLKZEROOUT _IO(0x12,127) 300 300 #define BLKGETDISKSEQ _IOR(0x12,128,__u64) 301 - /* 130-136 are used by zoned block device ioctls (uapi/linux/blkzoned.h) */ 301 + /* 130-136 and 142 are used by zoned block device ioctls (uapi/linux/blkzoned.h) */ 302 302 /* 137-141 are used by blk-crypto ioctls (uapi/linux/blk-crypto.h) */ 303 + #define BLKTRACESETUP2 _IOWR(0x12, 142, struct blk_user_trace_setup2) 303 304 304 305 #define BMAP_IOCTL 1 /* obsolete - kept for compatibility */ 305 306 #define FIBMAP _IO(0x00,1) /* bmap access */
+1 -1
tools/perf/trace/beauty/include/uapi/linux/mount.h
··· 197 197 */ 198 198 struct mnt_id_req { 199 199 __u32 size; 200 - __u32 spare; 200 + __u32 mnt_ns_fd; 201 201 __u64 mnt_id; 202 202 __u64 param; 203 203 __u64 mnt_ns_id;
+1 -1
tools/perf/trace/beauty/include/uapi/sound/asound.h
··· 60 60 unsigned char db2_sf_ss; /* sample frequency and size */ 61 61 unsigned char db3; /* not used, all zeros */ 62 62 unsigned char db4_ca; /* channel allocation code */ 63 - unsigned char db5_dminh_lsv; /* downmix inhibit & level-shit values */ 63 + unsigned char db5_dminh_lsv; /* downmix inhibit & level-shift values */ 64 64 }; 65 65 66 66 /****************************************************************************
+1
tools/perf/util/arm-spe.c
··· 587 587 MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1), 588 588 MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2), 589 589 MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3), 590 + MIDR_ALL_VERSIONS(MIDR_NVIDIA_OLYMPUS), 590 591 {}, 591 592 }; 592 593
+3 -1
tools/perf/util/libbfd.c
··· 426 426 427 427 if (!filename) 428 428 return -EFAULT; 429 + 430 + errno = 0; 429 431 if (!is_regular_file(filename)) 430 - return -EWOULDBLOCK; 432 + return errno == 0 ? -EWOULDBLOCK : -errno; 431 433 432 434 fd = open(filename, O_RDONLY); 433 435 if (fd < 0)
+3 -1
tools/perf/util/symbol-elf.c
··· 902 902 903 903 if (!filename) 904 904 return -EFAULT; 905 + 906 + errno = 0; 905 907 if (!is_regular_file(filename)) 906 - return -EWOULDBLOCK; 908 + return errno == 0 ? -EWOULDBLOCK : -errno; 907 909 908 910 err = kmod_path__parse(&m, filename); 909 911 if (err)
+3 -1
tools/perf/util/symbol-minimal.c
··· 104 104 105 105 if (!filename) 106 106 return -EFAULT; 107 + 108 + errno = 0; 107 109 if (!is_regular_file(filename)) 108 - return -EWOULDBLOCK; 110 + return errno == 0 ? -EWOULDBLOCK : -errno; 109 111 110 112 fd = open(filename, O_RDONLY); 111 113 if (fd < 0)
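All three loaders now share the same idiom: clear errno before the regular-file check so that a lookup failure (e.g. ENOENT) is reported to the caller instead of being folded into -EWOULDBLOCK. A standalone sketch of the pattern, with a local stand-in for perf's is_regular_file() helper:

#include <errno.h>
#include <stdbool.h>
#include <sys/stat.h>

/* Stand-in for perf's helper: stat() sets errno on failure but leaves
 * it untouched (still 0 here) when the path is simply not a regular file. */
static bool is_regular_file(const char *path)
{
	struct stat st;

	if (stat(path, &st) < 0)
		return false;		/* errno set by stat() */
	return S_ISREG(st.st_mode);	/* errno remains 0 */
}

static int check_file(const char *path)
{
	errno = 0;
	if (!is_regular_file(path))
		return errno == 0 ? -EWOULDBLOCK : -errno;
	return 0;
}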
+6 -4
tools/sched_ext/scx_show_state.py
··· 27 27 def state_str(state): 28 28 return prog['scx_enable_state_str'][state].string_().decode() 29 29 30 - ops = prog['scx_ops'] 30 + root = prog['scx_root'] 31 31 enable_state = read_atomic("scx_enable_state_var") 32 32 33 - print(f'ops : {ops.name.string_().decode()}') 33 + if root: 34 + print(f'ops : {root.ops.name.string_().decode()}') 35 + else: 36 + print('ops : ') 34 37 print(f'enabled : {read_static_key("__scx_enabled")}') 35 38 print(f'switching_all : {read_int("scx_switching_all")}') 36 39 print(f'switched_all : {read_static_key("__scx_switched_all")}') 37 40 print(f'enable_state : {state_str(enable_state)} ({enable_state})') 38 - print(f'in_softlockup : {prog["scx_in_softlockup"].value_()}') 39 - print(f'breather_depth: {read_atomic("scx_breather_depth")}') 41 + print(f'aborting : {prog["scx_aborting"].value_()}') 40 42 print(f'bypass_depth : {prog["scx_bypass_depth"].value_()}') 41 43 print(f'nr_rejected : {read_atomic("scx_nr_rejected")}') 42 44 print(f'enable_seq : {read_atomic("scx_enable_seq")}')
+1
tools/scripts/syscall.tbl
··· 410 410 467 common open_tree_attr sys_open_tree_attr 411 411 468 common file_getattr sys_file_getattr 412 412 469 common file_setattr sys_file_setattr 413 + 470 common listns sys_listns
+21
tools/testing/radix-tree/idr-test.c
··· 57 57 idr_destroy(&idr); 58 58 } 59 59 60 + void idr_alloc2_test(void) 61 + { 62 + int id; 63 + struct idr idr = IDR_INIT_BASE(idr, 1); 64 + 65 + id = idr_alloc(&idr, idr_alloc2_test, 0, 1, GFP_KERNEL); 66 + assert(id == -ENOSPC); 67 + 68 + id = idr_alloc(&idr, idr_alloc2_test, 1, 2, GFP_KERNEL); 69 + assert(id == 1); 70 + 71 + id = idr_alloc(&idr, idr_alloc2_test, 0, 1, GFP_KERNEL); 72 + assert(id == -ENOSPC); 73 + 74 + id = idr_alloc(&idr, idr_alloc2_test, 0, 2, GFP_KERNEL); 75 + assert(id == -ENOSPC); 76 + 77 + idr_destroy(&idr); 78 + } 79 + 60 80 void idr_replace_test(void) 61 81 { 62 82 DEFINE_IDR(idr); ··· 429 409 430 410 idr_replace_test(); 431 411 idr_alloc_test(); 412 + idr_alloc2_test(); 432 413 idr_null_test(); 433 414 idr_nowait_test(); 434 415 idr_get_next_test(0);
+4 -2
tools/testing/selftests/drivers/net/psp.py
··· 573 573 """Build test cases for each combo of PSP version and IP version""" 574 574 def test_case(cfg): 575 575 cfg.require_ipver(ipver) 576 - test_case.__name__ = f"{name}_v{psp_ver}_ip{ipver}" 577 576 test_func(cfg, psp_ver, ipver) 577 + 578 + test_case.__name__ = f"{name}_v{psp_ver}_ip{ipver}" 578 579 return test_case 579 580 580 581 ··· 583 582 """Build test cases for each IP version""" 584 583 def test_case(cfg): 585 584 cfg.require_ipver(ipver) 586 - test_case.__name__ = f"{name}_ip{ipver}" 587 585 test_func(cfg, ipver) 586 + 587 + test_case.__name__ = f"{name}_ip{ipver}" 588 588 return test_case 589 589 590 590
+2 -1
tools/testing/selftests/ftrace/test.d/event/toplevel-enable.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: event tracing - enable/disable with top level files 4 - # requires: available_events set_event events/enable 4 + # requires: set_event events/enable 5 + # flags: instance 5 6 6 7 do_reset() { 7 8 echo > set_event
+3 -2
tools/testing/selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc
··· 90 90 fail "Tracing is not off" 91 91 fi 92 92 93 - csum1=`md5sum trace` 93 + # Cannot rely on names being around as they are only cached, strip them 94 + csum1=`cat trace | sed -e 's/^ *[^ ]*\(-[0-9][0-9]*\)/\1/' | md5sum` 94 95 sleep $SLEEP_TIME 95 - csum2=`md5sum trace` 96 + csum2=`cat trace | sed -e 's/^ *[^ ]*\(-[0-9][0-9]*\)/\1/' | md5sum` 96 97 97 98 if [ "$csum1" != "$csum2" ]; then 98 99 fail "Tracing file is still changing"
+7 -1
tools/testing/selftests/kselftest_harness.h
··· 70 70 71 71 #include "kselftest.h" 72 72 73 + static inline void __kselftest_memset_safe(void *s, int c, size_t n) 74 + { 75 + if (n > 0) 76 + memset(s, c, n); 77 + } 78 + 73 79 #define TEST_TIMEOUT_DEFAULT 30 74 80 75 81 /* Utilities exposed to the test definitions */ ··· 422 416 self = mmap(NULL, sizeof(*self), PROT_READ | PROT_WRITE, \ 423 417 MAP_SHARED | MAP_ANONYMOUS, -1, 0); \ 424 418 } else { \ 425 - memset(&self_private, 0, sizeof(self_private)); \ 419 + __kselftest_memset_safe(&self_private, 0, sizeof(self_private)); \ 426 420 self = &self_private; \ 427 421 } \ 428 422 } \
+1 -1
tools/testing/selftests/mm/uffd-unit-tests.c
··· 1317 1317 p = strstr(tmp, header); 1318 1318 if (p) { 1319 1319 /* For example, "State:\tD (disk sleep)" */ 1320 - c = *(p + sizeof(header) - 1); 1320 + c = *(p + strlen(header)); 1321 1321 return c == 'D' ? 1322 1322 THR_STATE_UNINTERRUPTIBLE : THR_STATE_UNKNOWN; 1323 1323 }
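The underlying bug is the classic sizeof-vs-strlen confusion: sizeof equals string length plus one only for a true array, while for a pointer it is the pointer size. A self-contained illustration (the declarations are examples, not the selftest's actual code):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *hp = "State:\t";	/* pointer: sizeof(hp) is the pointer size */
	const char ha[] = "State:\t";	/* array: sizeof(ha) == strlen + 1 == 8 */

	/* strlen() is 7 for both and is always the right offset to the
	 * state letter; sizeof(hp) - 1 is ABI-dependent (7 on LP64, 3 on
	 * ILP32), so the old expression only worked by coincidence. */
	printf("%zu %zu %zu\n", sizeof(hp), sizeof(ha), strlen(hp));
	return 0;
}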
+15
tools/testing/selftests/net/fib_nexthops.sh
··· 800 800 set +e 801 801 check_nexthop "dev veth1" "" 802 802 log_test $? 0 "Nexthops removed on admin down" 803 + 804 + # error routes should be deleted when their nexthop is deleted 805 + run_cmd "$IP li set dev veth1 up" 806 + run_cmd "$IP -6 nexthop add id 58 dev veth1" 807 + run_cmd "$IP ro add blackhole 2001:db8:101::1/128 nhid 58" 808 + run_cmd "$IP nexthop del id 58" 809 + check_route6 "2001:db8:101::1" "" 810 + log_test $? 0 "Error route removed on nexthop deletion" 803 811 } 804 812 805 813 ipv6_grp_refs() ··· 1467 1459 1468 1460 run_cmd "$IP ro del 172.16.102.0/24" 1469 1461 log_test $? 0 "Delete route when not specifying nexthop attributes" 1462 + 1463 + # error routes should be deleted when their nexthop is deleted 1464 + run_cmd "$IP nexthop add id 23 dev veth1" 1465 + run_cmd "$IP ro add blackhole 172.16.102.100/32 nhid 23" 1466 + run_cmd "$IP nexthop del id 23" 1467 + check_route "172.16.102.100" "" 1468 + log_test $? 0 "Error route removed on nexthop deletion" 1470 1469 } 1471 1470 1472 1471 ipv4_grp_fcnal()
+69 -1
tools/testing/selftests/net/fib_tests.sh
··· 12 12 ipv4_route_metrics ipv4_route_v6_gw rp_filter ipv4_del_addr \ 13 13 ipv6_del_addr ipv4_mangle ipv6_mangle ipv4_bcast_neigh fib6_gc_test \ 14 14 ipv4_mpath_list ipv6_mpath_list ipv4_mpath_balance ipv6_mpath_balance \ 15 - fib6_ra_to_static" 15 + ipv4_mpath_balance_preferred fib6_ra_to_static" 16 16 17 17 VERBOSE=0 18 18 PAUSE_ON_FAIL=no ··· 2751 2751 forwarding_cleanup 2752 2752 } 2753 2753 2754 + get_route_dev_src() 2755 + { 2756 + local pfx="$1" 2757 + local src="$2" 2758 + local out 2759 + 2760 + if out=$($IP -j route get "$pfx" from "$src" | jq -re ".[0].dev"); then 2761 + echo "$out" 2762 + fi 2763 + } 2764 + 2765 + ipv4_mpath_preferred() 2766 + { 2767 + local src_ip=$1 2768 + local pref_dev=$2 2769 + local dev routes 2770 + local route0=0 2771 + local route1=0 2772 + local pref_route=0 2773 + num_routes=254 2774 + 2775 + for i in $(seq 1 $num_routes) ; do 2776 + dev=$(get_route_dev_src 172.16.105.$i $src_ip) 2777 + if [ "$dev" = "$pref_dev" ]; then 2778 + pref_route=$((pref_route+1)) 2779 + elif [ "$dev" = "veth1" ]; then 2780 + route0=$((route0+1)) 2781 + elif [ "$dev" = "veth3" ]; then 2782 + route1=$((route1+1)) 2783 + fi 2784 + done 2785 + 2786 + routes=$((route0+route1)) 2787 + 2788 + [ "$VERBOSE" = "1" ] && echo "multipath: routes seen: ($route0,$route1,$pref_route)" 2789 + 2790 + if [ x"$pref_dev" = x"" ]; then 2791 + [[ $routes -ge $num_routes ]] && [[ $route0 -gt 0 ]] && [[ $route1 -gt 0 ]] 2792 + else 2793 + [[ $pref_route -ge $num_routes ]] 2794 + fi 2795 + 2796 + } 2797 + 2798 + ipv4_mpath_balance_preferred_test() 2799 + { 2800 + echo 2801 + echo "IPv4 multipath load balance preferred route" 2802 + 2803 + forwarding_setup 2804 + 2805 + $IP route add 172.16.105.0/24 \ 2806 + nexthop via 172.16.101.2 \ 2807 + nexthop via 172.16.103.2 2808 + 2809 + ipv4_mpath_preferred 172.16.101.1 veth1 2810 + log_test $? 0 "IPv4 multipath loadbalance from veth1" 2811 + 2812 + ipv4_mpath_preferred 172.16.103.1 veth3 2813 + log_test $? 0 "IPv4 multipath loadbalance from veth3" 2814 + 2815 + ipv4_mpath_preferred 198.51.100.1 2816 + log_test $? 0 "IPv4 multipath loadbalance from dummy" 2817 + 2818 + forwarding_cleanup 2819 + } 2820 + 2754 2821 ipv6_mpath_balance_test() 2755 2822 { 2756 2823 echo ··· 2928 2861 ipv6_mpath_list) ipv6_mpath_list_test;; 2929 2862 ipv4_mpath_balance) ipv4_mpath_balance_test;; 2930 2863 ipv6_mpath_balance) ipv6_mpath_balance_test;; 2864 + ipv4_mpath_balance_preferred) ipv4_mpath_balance_preferred_test;; 2931 2865 fib6_ra_to_static) fib6_ra_to_static;; 2932 2866 2933 2867 help) echo "Test names: $TESTS"; exit 0;;
+5 -11
tools/testing/selftests/net/tap.c
··· 56 56 static struct rtattr *rtattr_add_str(struct nlmsghdr *nh, unsigned short type, 57 57 const char *s) 58 58 { 59 - struct rtattr *rta = rtattr_add(nh, type, strlen(s)); 59 + unsigned int strsz = strlen(s) + 1; 60 + struct rtattr *rta; 60 61 61 - memcpy(RTA_DATA(rta), s, strlen(s)); 62 - return rta; 63 - } 62 + rta = rtattr_add(nh, type, strsz); 64 63 65 - static struct rtattr *rtattr_add_strsz(struct nlmsghdr *nh, unsigned short type, 66 - const char *s) 67 - { 68 - struct rtattr *rta = rtattr_add(nh, type, strlen(s) + 1); 69 - 70 - strcpy(RTA_DATA(rta), s); 64 + memcpy(RTA_DATA(rta), s, strsz); 71 65 return rta; 72 66 } 73 67 ··· 113 119 114 120 link_info = rtattr_begin(&req.nh, IFLA_LINKINFO); 115 121 116 - rtattr_add_strsz(&req.nh, IFLA_INFO_KIND, link_type); 122 + rtattr_add_str(&req.nh, IFLA_INFO_KIND, link_type); 117 123 118 124 if (fill_info_data) { 119 125 info_data = rtattr_begin(&req.nh, IFLA_INFO_DATA);
+1
tools/testing/selftests/powerpc/pmu/sampling_tests/.gitignore
··· 1 1 bhrb_filter_map_test 2 2 bhrb_no_crash_wo_pmu_test 3 + check_extended_reg_test 3 4 intr_regs_no_crash_wo_pmu_test 4 5 mmcr0_cc56run_test 5 6 mmcr0_exceptionbits_test
+3 -2
tools/testing/selftests/ublk/Makefile
··· 22 22 TEST_PROGS += test_generic_12.sh 23 23 TEST_PROGS += test_generic_13.sh 24 24 TEST_PROGS += test_generic_14.sh 25 + TEST_PROGS += test_generic_15.sh 25 26 26 27 TEST_PROGS += test_null_01.sh 27 28 TEST_PROGS += test_null_02.sh ··· 51 50 52 51 TEST_GEN_PROGS_EXTENDED = kublk 53 52 53 + LOCAL_HDRS += $(wildcard *.h) 54 54 include ../lib.mk 55 55 56 - $(TEST_GEN_PROGS_EXTENDED): kublk.c null.c file_backed.c common.c stripe.c \ 57 - fault_inject.c 56 + $(TEST_GEN_PROGS_EXTENDED): $(wildcard *.c) 58 57 59 58 check: 60 59 shellcheck -x -f gcc *.sh
+12 -4
tools/testing/selftests/ublk/test_common.sh
··· 178 178 _create_ublk_dev() { 179 179 local dev_id; 180 180 local cmd=$1 181 + local settle=$2 181 182 182 - shift 1 183 + shift 2 183 184 184 185 if [ ! -c /dev/ublk-control ]; then 185 186 return ${UBLK_SKIP_CODE} ··· 195 194 echo "fail to add ublk dev $*" 196 195 return 255 197 196 fi 198 - udevadm settle 197 + 198 + if [ "$settle" = "yes" ]; then 199 + udevadm settle 200 + fi 199 201 200 202 if [[ "$dev_id" =~ ^[0-9]+$ ]]; then 201 203 echo "${dev_id}" ··· 208 204 } 209 205 210 206 _add_ublk_dev() { 211 - _create_ublk_dev "add" "$@" 207 + _create_ublk_dev "add" "yes" "$@" 208 + } 209 + 210 + _add_ublk_dev_no_settle() { 211 + _create_ublk_dev "add" "no" "$@" 212 212 } 213 213 214 214 _recover_ublk_dev() { 215 215 local dev_id 216 216 local state 217 217 218 - dev_id=$(_create_ublk_dev "recover" "$@") 218 + dev_id=$(_create_ublk_dev "recover" "yes" "$@") 219 219 for ((j=0;j<20;j++)); do 220 220 state=$(_get_ublk_dev_state "${dev_id}") 221 221 [ "$state" == "LIVE" ] && break
+68
tools/testing/selftests/ublk/test_generic_15.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + . "$(cd "$(dirname "$0")" && pwd)"/test_common.sh 5 + 6 + TID="generic_15" 7 + ERR_CODE=0 8 + 9 + _test_partition_scan_no_hang() 10 + { 11 + local recovery_flag=$1 12 + local expected_state=$2 13 + local dev_id 14 + local state 15 + local daemon_pid 16 + local start_time 17 + local elapsed 18 + 19 + # Create ublk device with fault_inject target and very large delay 20 + # to simulate hang during partition table read 21 + # --delay_us 60000000 = 60 seconds delay 22 + # Use _add_ublk_dev_no_settle to avoid udevadm settle hang waiting 23 + # for partition scan events to complete 24 + if [ "$recovery_flag" = "yes" ]; then 25 + echo "Testing partition scan with recovery support..." 26 + dev_id=$(_add_ublk_dev_no_settle -t fault_inject -q 1 -d 1 --delay_us 60000000 -r 1) 27 + else 28 + echo "Testing partition scan without recovery..." 29 + dev_id=$(_add_ublk_dev_no_settle -t fault_inject -q 1 -d 1 --delay_us 60000000) 30 + fi 31 + 32 + _check_add_dev "$TID" $? 33 + 34 + # The add command should return quickly because partition scan is async. 35 + # Now sleep briefly to let the async partition scan work start and hit 36 + # the delay in the fault_inject handler. 37 + sleep 1 38 + 39 + # Kill the ublk daemon while partition scan is potentially blocked 40 + # And check state transitions properly 41 + start_time=${SECONDS} 42 + daemon_pid=$(_get_ublk_daemon_pid "${dev_id}") 43 + state=$(__ublk_kill_daemon "${dev_id}" "${expected_state}") 44 + elapsed=$((SECONDS - start_time)) 45 + 46 + # Verify the device transitioned to expected state 47 + if [ "$state" != "${expected_state}" ]; then 48 + echo "FAIL: Device state is $state, expected ${expected_state}" 49 + ERR_CODE=255 50 + ${UBLK_PROG} del -n "${dev_id}" > /dev/null 2>&1 51 + return 52 + fi 53 + echo "PASS: Device transitioned to ${expected_state} in ${elapsed}s without hanging" 54 + 55 + # Clean up the device 56 + ${UBLK_PROG} del -n "${dev_id}" > /dev/null 2>&1 57 + } 58 + 59 + _prep_test "partition_scan" "verify async partition scan prevents IO hang" 60 + 61 + # Test 1: Without recovery support - should transition to DEAD 62 + _test_partition_scan_no_hang "no" "DEAD" 63 + 64 + # Test 2: With recovery support - should transition to QUIESCED 65 + _test_partition_scan_no_hang "yes" "QUIESCED" 66 + 67 + _cleanup_test "partition_scan" 68 + _show_result $TID $ERR_CODE
-1
tools/testing/selftests/vfio/lib/include/libvfio/iova_allocator.h
··· 2 2 #ifndef SELFTESTS_VFIO_LIB_INCLUDE_LIBVFIO_IOVA_ALLOCATOR_H 3 3 #define SELFTESTS_VFIO_LIB_INCLUDE_LIBVFIO_IOVA_ALLOCATOR_H 4 4 5 - #include <uapi/linux/types.h> 6 5 #include <linux/list.h> 7 6 #include <linux/types.h> 8 7 #include <linux/iommufd.h>
-1
tools/testing/selftests/vfio/lib/iommu.c
··· 11 11 #include <sys/ioctl.h> 12 12 #include <sys/mman.h> 13 13 14 - #include <uapi/linux/types.h> 15 14 #include <linux/limits.h> 16 15 #include <linux/mman.h> 17 16 #include <linux/types.h>
-1
tools/testing/selftests/vfio/lib/iova_allocator.c
··· 11 11 #include <sys/ioctl.h> 12 12 #include <sys/mman.h> 13 13 14 - #include <uapi/linux/types.h> 15 14 #include <linux/iommufd.h> 16 15 #include <linux/limits.h> 17 16 #include <linux/mman.h>
-1
tools/testing/selftests/vfio/lib/vfio_pci_device.c
··· 11 11 #include <sys/ioctl.h> 12 12 #include <sys/mman.h> 13 13 14 - #include <uapi/linux/types.h> 15 14 #include <linux/iommufd.h> 16 15 #include <linux/limits.h> 17 16 #include <linux/mman.h>
-1
tools/testing/selftests/vfio/vfio_dma_mapping_test.c
··· 3 3 #include <sys/mman.h> 4 4 #include <unistd.h> 5 5 6 - #include <uapi/linux/types.h> 7 6 #include <linux/iommufd.h> 8 7 #include <linux/limits.h> 9 8 #include <linux/mman.h>
-1
tools/testing/selftests/vfio/vfio_iommufd_setup_test.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include <uapi/linux/types.h> 3 2 #include <linux/limits.h> 4 3 #include <linux/sizes.h> 5 4 #include <linux/vfio.h>
+5 -3
tools/virtio/Makefile
··· 20 20 CFLAGS += -pthread 21 21 LDFLAGS += -pthread 22 22 vpath %.c ../../drivers/virtio ../../drivers/vhost 23 + BUILD=KCFLAGS="-I "`pwd`/../../drivers/vhost ${MAKE} -C `pwd`/../.. V=${V} 23 24 mod: 24 - ${MAKE} -C `pwd`/../.. M=`pwd`/vhost_test V=${V} 25 + ${BUILD} M=`pwd`/vhost_test 25 26 26 27 #oot: build vhost as an out of tree module for a distro kernel 27 28 #no effort is taken to make it actually build or work, but tends to mostly work ··· 38 37 CONFIG_VHOST_NET=n \ 39 38 CONFIG_VHOST_SCSI=n \ 40 39 CONFIG_VHOST_VSOCK=n \ 41 - CONFIG_VHOST_RING=n 42 - OOT_BUILD=KCFLAGS="-I "${OOT_VHOST} ${MAKE} -C ${OOT_KSRC} V=${V} 40 + CONFIG_VHOST_RING=n \ 41 + CONFIG_VHOST_VDPA=n 42 + OOT_BUILD=KCFLAGS="-include "`pwd`"/oot-stubs.h -I "${OOT_VHOST} ${MAKE} -C ${OOT_KSRC} V=${V} 43 43 oot-build: 44 44 echo "UNSUPPORTED! Don't use the resulting modules in production!" 45 45 ${OOT_BUILD} M=`pwd`/vhost_test
+6
tools/virtio/linux/compiler.h
··· 2 2 #ifndef LINUX_COMPILER_H 3 3 #define LINUX_COMPILER_H 4 4 5 + /* Avoid redefinition warnings */ 6 + #undef __user 5 7 #include "../../../include/linux/compiler_types.h" 8 + #undef __user 9 + #define __user 6 10 7 11 #define WRITE_ONCE(var, val) \ 8 12 (*((volatile typeof(val) *)(&(var))) = (val)) ··· 38 34 auto __v = (expr); \ 39 35 __v; \ 40 36 }) 37 + 38 + #define __must_check 41 39 42 40 #endif
+4
tools/virtio/linux/cpumask.h
··· 4 4 5 5 #include <linux/kernel.h> 6 6 7 + struct cpumask { 8 + unsigned long bits[1]; 9 + }; 10 + 7 11 #endif /* _LINUX_CPUMASK_H */
+9
tools/virtio/linux/device.h
··· 1 1 #ifndef LINUX_DEVICE_H 2 + #define LINUX_DEVICE_H 3 + 4 + struct device { 5 + void *parent; 6 + }; 7 + 8 + struct device_driver { 9 + const char *name; 10 + }; 2 11 #endif
+4
tools/virtio/linux/dma-mapping.h
··· 22 22 #define dma_free_coherent(d, s, p, h) kfree(p) 23 23 24 24 #define dma_map_page(d, p, o, s, dir) (page_to_phys(p) + (o)) 25 + #define dma_map_page_attrs(d, p, o, s, dir, a) (page_to_phys(p) + (o)) 25 26 26 27 #define dma_map_single(d, p, s, dir) (virt_to_phys(p)) 27 28 #define dma_map_single_attrs(d, p, s, dir, a) (virt_to_phys(p)) ··· 30 29 31 30 #define dma_unmap_single(d, a, s, r) do { (void)(d); (void)(a); (void)(s); (void)(r); } while (0) 32 31 #define dma_unmap_page(d, a, s, r) do { (void)(d); (void)(a); (void)(s); (void)(r); } while (0) 32 + #define dma_unmap_page_attrs(d, a, s, r, t) do { \ 33 + (void)(d); (void)(a); (void)(s); (void)(r); (void)(t); \ 34 + } while (0) 33 35 34 36 #define sg_dma_address(sg) (0) 35 37 #define sg_dma_len(sg) (0)
+16
tools/virtio/linux/kernel.h
··· 14 14 #include <linux/log2.h> 15 15 #include <linux/types.h> 16 16 #include <linux/overflow.h> 17 + #include <linux/limits.h> 17 18 #include <linux/list.h> 18 19 #include <linux/printk.h> 19 20 #include <linux/bug.h> ··· 135 134 #define dev_err(dev, format, ...) fprintf (stderr, format, ## __VA_ARGS__) 136 135 #define dev_warn(dev, format, ...) fprintf (stderr, format, ## __VA_ARGS__) 137 136 #define dev_warn_once(dev, format, ...) fprintf (stderr, format, ## __VA_ARGS__) 137 + 138 + #define dev_WARN_ONCE(dev, condition, format...) \ 139 + WARN_ONCE(condition, format) 140 + 141 + static inline bool is_vmalloc_addr(const void *x) 142 + { 143 + return false; 144 + } 145 + 146 + #define might_sleep() do { } while (0) 147 + 148 + static inline void synchronize_rcu(void) 149 + { 150 + assert(0); 151 + } 138 152 139 153 #define min(x, y) ({ \ 140 154 typeof(x) _min1 = (x); \
+2
tools/virtio/linux/module.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #include <linux/export.h> 3 3 4 + struct module; 5 + 4 6 #define MODULE_LICENSE(__MODULE_LICENSE_value) \ 5 7 static __attribute__((unused)) const char *__MODULE_LICENSE_name = \ 6 8 __MODULE_LICENSE_value
+21
tools/virtio/linux/ucopysize.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __LINUX_UCOPYSIZE_H__ 3 + #define __LINUX_UCOPYSIZE_H__ 4 + 5 + #include <linux/bug.h> 6 + 7 + static inline void check_object_size(const void *ptr, unsigned long n, 8 + bool to_user) 9 + { } 10 + 11 + static inline void copy_overflow(int size, unsigned long count) 12 + { 13 + } 14 + 15 + static __always_inline __must_check bool 16 + check_copy_size(const void *addr, size_t bytes, bool is_source) 17 + { 18 + return true; 19 + } 20 + 21 + #endif /* __LINUX_UCOPYSIZE_H__ */
+1 -72
tools/virtio/linux/virtio.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef LINUX_VIRTIO_H 3 - #define LINUX_VIRTIO_H 4 - #include <linux/scatterlist.h> 5 - #include <linux/kernel.h> 6 - #include <linux/spinlock.h> 7 - 8 - struct device { 9 - void *parent; 10 - }; 11 - 12 - struct virtio_device { 13 - struct device dev; 14 - u64 features; 15 - struct list_head vqs; 16 - spinlock_t vqs_list_lock; 17 - const struct virtio_config_ops *config; 18 - }; 19 - 20 - struct virtqueue { 21 - struct list_head list; 22 - void (*callback)(struct virtqueue *vq); 23 - const char *name; 24 - struct virtio_device *vdev; 25 - unsigned int index; 26 - unsigned int num_free; 27 - unsigned int num_max; 28 - void *priv; 29 - bool reset; 30 - }; 31 - 32 - /* Interfaces exported by virtio_ring. */ 33 - int virtqueue_add_sgs(struct virtqueue *vq, 34 - struct scatterlist *sgs[], 35 - unsigned int out_sgs, 36 - unsigned int in_sgs, 37 - void *data, 38 - gfp_t gfp); 39 - 40 - int virtqueue_add_outbuf(struct virtqueue *vq, 41 - struct scatterlist sg[], unsigned int num, 42 - void *data, 43 - gfp_t gfp); 44 - 45 - int virtqueue_add_inbuf(struct virtqueue *vq, 46 - struct scatterlist sg[], unsigned int num, 47 - void *data, 48 - gfp_t gfp); 49 - 50 - bool virtqueue_kick(struct virtqueue *vq); 51 - 52 - void *virtqueue_get_buf(struct virtqueue *vq, unsigned int *len); 53 - 54 - void virtqueue_disable_cb(struct virtqueue *vq); 55 - 56 - bool virtqueue_enable_cb(struct virtqueue *vq); 57 - bool virtqueue_enable_cb_delayed(struct virtqueue *vq); 58 - 59 - void *virtqueue_detach_unused_buf(struct virtqueue *vq); 60 - struct virtqueue *vring_new_virtqueue(unsigned int index, 61 - unsigned int num, 62 - unsigned int vring_align, 63 - struct virtio_device *vdev, 64 - bool weak_barriers, 65 - bool ctx, 66 - void *pages, 67 - bool (*notify)(struct virtqueue *vq), 68 - void (*callback)(struct virtqueue *vq), 69 - const char *name); 70 - void vring_del_virtqueue(struct virtqueue *vq); 71 - 72 - #endif 1 + #include <../../include/linux/virtio.h>
+1 -101
tools/virtio/linux/virtio_config.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef LINUX_VIRTIO_CONFIG_H 3 - #define LINUX_VIRTIO_CONFIG_H 4 - #include <linux/virtio_byteorder.h> 5 - #include <linux/virtio.h> 6 - #include <uapi/linux/virtio_config.h> 7 - 8 - struct virtio_config_ops { 9 - int (*disable_vq_and_reset)(struct virtqueue *vq); 10 - int (*enable_vq_after_reset)(struct virtqueue *vq); 11 - }; 12 - 13 - /* 14 - * __virtio_test_bit - helper to test feature bits. For use by transports. 15 - * Devices should normally use virtio_has_feature, 16 - * which includes more checks. 17 - * @vdev: the device 18 - * @fbit: the feature bit 19 - */ 20 - static inline bool __virtio_test_bit(const struct virtio_device *vdev, 21 - unsigned int fbit) 22 - { 23 - return vdev->features & (1ULL << fbit); 24 - } 25 - 26 - /** 27 - * __virtio_set_bit - helper to set feature bits. For use by transports. 28 - * @vdev: the device 29 - * @fbit: the feature bit 30 - */ 31 - static inline void __virtio_set_bit(struct virtio_device *vdev, 32 - unsigned int fbit) 33 - { 34 - vdev->features |= (1ULL << fbit); 35 - } 36 - 37 - /** 38 - * __virtio_clear_bit - helper to clear feature bits. For use by transports. 39 - * @vdev: the device 40 - * @fbit: the feature bit 41 - */ 42 - static inline void __virtio_clear_bit(struct virtio_device *vdev, 43 - unsigned int fbit) 44 - { 45 - vdev->features &= ~(1ULL << fbit); 46 - } 47 - 48 - #define virtio_has_feature(dev, feature) \ 49 - (__virtio_test_bit((dev), feature)) 50 - 51 - /** 52 - * virtio_has_dma_quirk - determine whether this device has the DMA quirk 53 - * @vdev: the device 54 - */ 55 - static inline bool virtio_has_dma_quirk(const struct virtio_device *vdev) 56 - { 57 - /* 58 - * Note the reverse polarity of the quirk feature (compared to most 59 - * other features), this is for compatibility with legacy systems. 60 - */ 61 - return !virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM); 62 - } 63 - 64 - static inline bool virtio_is_little_endian(struct virtio_device *vdev) 65 - { 66 - return virtio_has_feature(vdev, VIRTIO_F_VERSION_1) || 67 - virtio_legacy_is_little_endian(); 68 - } 69 - 70 - /* Memory accessors */ 71 - static inline u16 virtio16_to_cpu(struct virtio_device *vdev, __virtio16 val) 72 - { 73 - return __virtio16_to_cpu(virtio_is_little_endian(vdev), val); 74 - } 75 - 76 - static inline __virtio16 cpu_to_virtio16(struct virtio_device *vdev, u16 val) 77 - { 78 - return __cpu_to_virtio16(virtio_is_little_endian(vdev), val); 79 - } 80 - 81 - static inline u32 virtio32_to_cpu(struct virtio_device *vdev, __virtio32 val) 82 - { 83 - return __virtio32_to_cpu(virtio_is_little_endian(vdev), val); 84 - } 85 - 86 - static inline __virtio32 cpu_to_virtio32(struct virtio_device *vdev, u32 val) 87 - { 88 - return __cpu_to_virtio32(virtio_is_little_endian(vdev), val); 89 - } 90 - 91 - static inline u64 virtio64_to_cpu(struct virtio_device *vdev, __virtio64 val) 92 - { 93 - return __virtio64_to_cpu(virtio_is_little_endian(vdev), val); 94 - } 95 - 96 - static inline __virtio64 cpu_to_virtio64(struct virtio_device *vdev, u64 val) 97 - { 98 - return __cpu_to_virtio64(virtio_is_little_endian(vdev), val); 99 - } 100 - 101 - #endif 1 + #include "../../include/linux/virtio_config.h"
+10
tools/virtio/oot-stubs.h
··· 1 + #include <linux/bug.h> 2 + #include <linux/string.h> 3 + #include <linux/virtio_features.h> 4 + 5 + #ifndef VIRTIO_FEATURES_BITS 6 + #define VIRTIO_FEATURES_BITS 128 7 + #endif 8 + #ifndef VIRTIO_U64 9 + #define VIRTIO_U64(b) ((b) >> 6) 10 + #endif