Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.9-rc2 into usb-next

We need the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3472 -1890
+4 -2
.mailmap
···
 Lee Jones <lee@kernel.org> <lee.jones@canonical.com>
 Lee Jones <lee@kernel.org> <lee.jones@linaro.org>
 Lee Jones <lee@kernel.org> <lee@ubuntu.com>
-Leonard Crestez <leonard.crestez@nxp.com> Leonard Crestez <cdleonard@gmail.com>
+Leonard Crestez <cdleonard@gmail.com> <leonard.crestez@nxp.com>
+Leonard Crestez <cdleonard@gmail.com> <leonard.crestez@intel.com>
 Leonardo Bras <leobras.c@gmail.com> <leonardo@linux.ibm.com>
 Leonard Göhrs <l.goehrs@pengutronix.de>
 Leonid I Ananiev <leonid.i.ananiev@intel.com>
···
 Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
 Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
 Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
-Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
+Quentin Monnet <qmo@kernel.org> <quentin.monnet@netronome.com>
+Quentin Monnet <qmo@kernel.org> <quentin@isovalent.com>
 Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
 Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
 Rajeev Nandan <quic_rajeevny@quicinc.com> <rajeevny@codeaurora.org>
+1 -1
Documentation/arch/x86/resctrl.rst
···
 MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...

 Memory bandwidth Allocation specified in MiBps
----------------------------------------------
+----------------------------------------------

 Memory bandwidth domain is L3 cache.
 ::
+1 -1
Documentation/kbuild/llvm.rst
···
   - ``LLVM=1``
 * - s390
   - Maintained
-  - ``CC=clang``
+  - ``LLVM=1`` (LLVM >= 18.1.0), ``CC=clang`` (LLVM < 18.1.0)
 * - um (User Mode)
   - Maintained
   - ``LLVM=1``
+61 -17
MAINTAINERS
···

 BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
 M:	KP Singh <kpsingh@kernel.org>
-R:	Florent Revest <revest@chromium.org>
-R:	Brendan Jackman <jackmanb@chromium.org>
+R:	Matt Bobrowski <mattbobrowski@google.com>
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	Documentation/bpf/prog_lsm.rst
···
 F:	kernel/bpf/cgroup.c

 BPF [TOOLING] (bpftool)
-M:	Quentin Monnet <quentin@isovalent.com>
+M:	Quentin Monnet <qmo@kernel.org>
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	kernel/bpf/disasm.*
···
 M:	Alasdair Kergon <agk@redhat.com>
 M:	Mike Snitzer <snitzer@kernel.org>
 M:	Mikulas Patocka <mpatocka@redhat.com>
-M:	dm-devel@lists.linux.dev
 L:	dm-devel@lists.linux.dev
 S:	Maintained
 Q:	http://patchwork.kernel.org/project/dm-devel/list/
···

 DEVICE-MAPPER VDO TARGET
 M:	Matthew Sakai <msakai@redhat.com>
-M:	dm-devel@lists.linux.dev
 L:	dm-devel@lists.linux.dev
 S:	Maintained
 F:	Documentation/admin-guide/device-mapper/vdo*.rst
···
 M:	Chao Yu <chao@kernel.org>
 R:	Yue Hu <huyue2@coolpad.com>
 R:	Jeffle Xu <jefflexu@linux.alibaba.com>
+R:	Sandeep Dhavale <dhavale@google.com>
 L:	linux-erofs@lists.ozlabs.org
 S:	Maintained
 W:	https://erofs.docs.kernel.org
···
 S:	Maintained
 F:	drivers/hid/hid-logitech-hidpp.c

-HIGH-RESOLUTION TIMERS, CLOCKEVENTS
+HIGH-RESOLUTION TIMERS, TIMER WHEEL, CLOCKEVENTS
+M:	Anna-Maria Behnsen <anna-maria@linutronix.de>
+M:	Frederic Weisbecker <frederic@kernel.org>
 M:	Thomas Gleixner <tglx@linutronix.de>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
···
 F:	Documentation/timers/
 F:	include/linux/clockchips.h
 F:	include/linux/hrtimer.h
+F:	include/linux/timer.h
 F:	kernel/time/clockevents.c
 F:	kernel/time/hrtimer.c
-F:	kernel/time/timer_*.c
+F:	kernel/time/timer.c
+F:	kernel/time/timer_list.c
+F:	kernel/time/timer_migration.*
+F:	tools/testing/selftests/timers/

 HIGH-SPEED SCC DRIVER FOR AX.25
 L:	linux-hams@vger.kernel.org
···

 MARVELL MWIFIEX WIRELESS DRIVER
 M:	Brian Norris <briannorris@chromium.org>
+R:	Francesco Dolcini <francesco@dolcini.it>
 L:	linux-wireless@vger.kernel.org
 S:	Odd Fixes
 F:	drivers/net/wireless/marvell/mwifiex/
···
 F:	include/uapi/linux/nsm.h

 NOHZ, DYNTICKS SUPPORT
+M:	Anna-Maria Behnsen <anna-maria@linutronix.de>
 M:	Frederic Weisbecker <frederic@kernel.org>
-M:	Thomas Gleixner <tglx@linutronix.de>
 M:	Ingo Molnar <mingo@kernel.org>
+M:	Thomas Gleixner <tglx@linutronix.de>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/nohz
···
 F:	include/linux/pnp.h

 POSIX CLOCKS and TIMERS
+M:	Anna-Maria Behnsen <anna-maria@linutronix.de>
+M:	Frederic Weisbecker <frederic@kernel.org>
 M:	Thomas Gleixner <tglx@linutronix.de>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
 F:	fs/timerfd.c
 F:	include/linux/time_namespace.h
-F:	include/linux/timer*
+F:	include/linux/timerfd.h
+F:	include/uapi/linux/time.h
+F:	include/uapi/linux/timerfd.h
 F:	include/trace/events/timer*
-F:	kernel/time/*timer*
+F:	kernel/time/itimer.c
+F:	kernel/time/posix-*
 F:	kernel/time/namespace.c

 POWER MANAGEMENT CORE
···
 M:	Ping-Ke Shih <pkshih@realtek.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
+T:	git https://github.com/pkshih/rtw.git
 F:	drivers/net/wireless/realtek/rtlwifi/

 REALTEK WIRELESS DRIVER (rtw88)
 M:	Ping-Ke Shih <pkshih@realtek.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
+T:	git https://github.com/pkshih/rtw.git
 F:	drivers/net/wireless/realtek/rtw88/

 REALTEK WIRELESS DRIVER (rtw89)
 M:	Ping-Ke Shih <pkshih@realtek.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
+T:	git https://github.com/pkshih/rtw.git
 F:	drivers/net/wireless/realtek/rtw89/

 REDPINE WIRELESS DRIVER
···
 F:	Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
 F:	drivers/i2c/busses/i2c-emev2.c

-RENESAS ETHERNET DRIVERS
+RENESAS ETHERNET AVB DRIVER
 R:	Sergey Shtylyov <s.shtylyov@omp.ru>
 L:	netdev@vger.kernel.org
 L:	linux-renesas-soc@vger.kernel.org
-F:	Documentation/devicetree/bindings/net/renesas,*.yaml
-F:	drivers/net/ethernet/renesas/
-F:	include/linux/sh_eth.h
+F:	Documentation/devicetree/bindings/net/renesas,etheravb.yaml
+F:	drivers/net/ethernet/renesas/Kconfig
+F:	drivers/net/ethernet/renesas/Makefile
+F:	drivers/net/ethernet/renesas/ravb*
+
+RENESAS ETHERNET SWITCH DRIVER
+R:	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
+L:	netdev@vger.kernel.org
+L:	linux-renesas-soc@vger.kernel.org
+F:	Documentation/devicetree/bindings/net/renesas,*ether-switch.yaml
+F:	drivers/net/ethernet/renesas/Kconfig
+F:	drivers/net/ethernet/renesas/Makefile
+F:	drivers/net/ethernet/renesas/rcar_gen4*
+F:	drivers/net/ethernet/renesas/rswitch*

 RENESAS IDT821034 ASoC CODEC
 M:	Herve Codina <herve.codina@bootlin.com>
···
 S:	Supported
 F:	Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
 F:	drivers/i2c/busses/i2c-rzv2m.c
+
+RENESAS SUPERH ETHERNET DRIVER
+R:	Sergey Shtylyov <s.shtylyov@omp.ru>
+L:	netdev@vger.kernel.org
+L:	linux-renesas-soc@vger.kernel.org
+F:	Documentation/devicetree/bindings/net/renesas,ether.yaml
+F:	drivers/net/ethernet/renesas/Kconfig
+F:	drivers/net/ethernet/renesas/Makefile
+F:	drivers/net/ethernet/renesas/sh_eth*
+F:	include/linux/sh_eth.h

 RENESAS USB PHY DRIVER
 M:	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
···
 M:	Larry Finger <Larry.Finger@lwfinger.net>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
+T:	git https://github.com/pkshih/rtw.git
 F:	drivers/net/wireless/realtek/rtl818x/rtl8187/

 RTL8XXXU WIRELESS DRIVER (rtl8xxxu)
 M:	Jes Sorensen <Jes.Sorensen@gmail.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
+T:	git https://github.com/pkshih/rtw.git
 F:	drivers/net/wireless/realtek/rtl8xxxu/

 RTRS TRANSPORT DRIVERS
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
 F:	include/linux/clocksource.h
 F:	include/linux/time.h
+F:	include/linux/timekeeper_internal.h
+F:	include/linux/timekeeping.h
 F:	include/linux/timex.h
 F:	include/uapi/linux/time.h
 F:	include/uapi/linux/timex.h
 F:	kernel/time/alarmtimer.c
-F:	kernel/time/clocksource.c
-F:	kernel/time/ntp.c
-F:	kernel/time/time*.c
+F:	kernel/time/clocksource*
+F:	kernel/time/ntp*
+F:	kernel/time/time.c
+F:	kernel/time/timeconst.bc
+F:	kernel/time/timeconv.c
+F:	kernel/time/timecounter.c
+F:	kernel/time/timekeeping*
+F:	kernel/time/time_test.c
 F:	tools/testing/selftests/timers/

 TIPC NETWORK LAYER
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 9
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Hurr durr I'ma ninja sloth

 # *DOCUMENTATION*
+14
arch/arm/include/asm/mman.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_MMAN_H__
+#define __ASM_MMAN_H__
+
+#include <asm/system_info.h>
+#include <uapi/asm/mman.h>
+
+static inline bool arch_memory_deny_write_exec_supported(void)
+{
+	return cpu_architecture() >= CPU_ARCH_ARMv6;
+}
+#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+
+#endif /* __ASM_MMAN_H__ */
+2 -2
arch/arm64/net/bpf_jit_comp.c
···
 		emit(A64_UXTH(is64, dst, dst), ctx);
 		break;
 	case 32:
-		emit(A64_REV32(is64, dst, dst), ctx);
+		emit(A64_REV32(0, dst, dst), ctx);
 		/* upper 32 bits already cleared */
 		break;
 	case 64:
···
 	} else {
 		emit_a64_mov_i(1, tmp, off, ctx);
 		if (sign_extend)
-			emit(A64_LDRSW(dst, src_adj, off_adj), ctx);
+			emit(A64_LDRSW(dst, src, tmp), ctx);
 		else
 			emit(A64_LDR32(dst, src, tmp), ctx);
 	}
+1
arch/hexagon/kernel/vmlinux.lds.S
···
 	STABS_DEBUG
 	DWARF_DEBUG
 	ELF_DETAILS
+	.hexagon.attributes 0 : { *(.hexagon.attributes) }

 	DISCARDS
 }
+9 -9
arch/mips/Kconfig
···

 	bool

-config FIT_IMAGE_FDT_EPM5
-	bool "Include FDT for Mobileye EyeQ5 development platforms"
-	depends on MACH_EYEQ5
-	default n
-	help
-	  Enable this to include the FDT for the EyeQ5 development platforms
-	  from Mobileye in the FIT kernel image.
-	  This requires u-boot on the platform.
-
 config MACH_NINTENDO64
 	bool "Nintendo 64 console"
 	select CEVT_R4K
···
 	  Say Y here for most Octeon reference boards.

 endchoice
+
+config FIT_IMAGE_FDT_EPM5
+	bool "Include FDT for Mobileye EyeQ5 development platforms"
+	depends on MACH_EYEQ5
+	default n
+	help
+	  Enable this to include the FDT for the EyeQ5 development platforms
+	  from Mobileye in the FIT kernel image.
+	  This requires u-boot on the platform.

 source "arch/mips/alchemy/Kconfig"
 source "arch/mips/ath25/Kconfig"
+14
arch/parisc/include/asm/mman.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_MMAN_H__
+#define __ASM_MMAN_H__
+
+#include <uapi/asm/mman.h>
+
+/* PARISC cannot allow mdwe as it needs writable stacks */
+static inline bool arch_memory_deny_write_exec_supported(void)
+{
+	return false;
+}
+#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+
+#endif /* __ASM_MMAN_H__ */
+16
arch/riscv/net/bpf_jit_comp64.c
···
 	if (ret < 0)
 		return ret;

+	if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
+		const struct btf_func_model *fm;
+		int idx;
+
+		fm = bpf_jit_find_kfunc_model(ctx->prog, insn);
+		if (!fm)
+			return -EINVAL;
+
+		for (idx = 0; idx < fm->nr_args; idx++) {
+			u8 reg = bpf_to_rv_reg(BPF_REG_1 + idx, ctx);
+
+			if (fm->arg_size[idx] == sizeof(int))
+				emit_sextw(reg, reg, ctx);
+		}
+	}
+
 	ret = emit_call(addr, fixed_addr, ctx);
 	if (ret)
 		return ret;
+20 -26
arch/s390/net/bpf_jit_comp.c
···
  * PLT for hotpatchable calls. The calling convention is the same as for the
  * ftrace hotpatch trampolines: %r0 is return address, %r1 is clobbered.
  */
-extern const char bpf_plt[];
-extern const char bpf_plt_ret[];
-extern const char bpf_plt_target[];
-extern const char bpf_plt_end[];
-#define BPF_PLT_SIZE 32
+struct bpf_plt {
+	char code[16];
+	void *ret;
+	void *target;
+} __packed;
+extern const struct bpf_plt bpf_plt;
 asm(
 	".pushsection .rodata\n"
 	"	.balign 8\n"
···
 	"	.balign 8\n"
 	"bpf_plt_ret: .quad 0\n"
 	"bpf_plt_target: .quad 0\n"
-	"bpf_plt_end:\n"
 	"	.popsection\n"
 );

-static void bpf_jit_plt(void *plt, void *ret, void *target)
+static void bpf_jit_plt(struct bpf_plt *plt, void *ret, void *target)
 {
-	memcpy(plt, bpf_plt, BPF_PLT_SIZE);
-	*(void **)((char *)plt + (bpf_plt_ret - bpf_plt)) = ret;
-	*(void **)((char *)plt + (bpf_plt_target - bpf_plt)) = target ?: ret;
+	memcpy(plt, &bpf_plt, sizeof(*plt));
+	plt->ret = ret;
+	plt->target = target;
 }

 /*
···
 	jit->prg = ALIGN(jit->prg, 8);
 	jit->prologue_plt = jit->prg;
 	if (jit->prg_buf)
-		bpf_jit_plt(jit->prg_buf + jit->prg,
+		bpf_jit_plt((struct bpf_plt *)(jit->prg_buf + jit->prg),
 			    jit->prg_buf + jit->prologue_plt_ret, NULL);
-	jit->prg += BPF_PLT_SIZE;
+	jit->prg += sizeof(struct bpf_plt);
 }

 static int get_probe_mem_regno(const u8 *insn)
···
 	struct bpf_jit jit;
 	int pass;

-	if (WARN_ON_ONCE(bpf_plt_end - bpf_plt != BPF_PLT_SIZE))
-		return orig_fp;
-
 	if (!fp->jit_requested)
 		return orig_fp;
···
 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
 		       void *old_addr, void *new_addr)
 {
+	struct bpf_plt expected_plt, current_plt, new_plt, *plt;
 	struct {
 		u16 opc;
 		s32 disp;
 	} __packed insn;
-	char expected_plt[BPF_PLT_SIZE];
-	char current_plt[BPF_PLT_SIZE];
-	char new_plt[BPF_PLT_SIZE];
-	char *plt;
 	char *ret;
 	int err;
···
 	 */
 	} else {
 		/* Verify the PLT. */
-		plt = (char *)ip + (insn.disp << 1);
-		err = copy_from_kernel_nofault(current_plt, plt, BPF_PLT_SIZE);
+		plt = ip + (insn.disp << 1);
+		err = copy_from_kernel_nofault(&current_plt, plt,
+					       sizeof(current_plt));
 		if (err < 0)
 			return err;
 		ret = (char *)ip + 6;
-		bpf_jit_plt(expected_plt, ret, old_addr);
-		if (memcmp(current_plt, expected_plt, BPF_PLT_SIZE))
+		bpf_jit_plt(&expected_plt, ret, old_addr);
+		if (memcmp(&current_plt, &expected_plt, sizeof(current_plt)))
 			return -EINVAL;
 		/* Adjust the call address. */
-		bpf_jit_plt(new_plt, ret, new_addr);
-		s390_kernel_write(plt + (bpf_plt_target - bpf_plt),
-				  new_plt + (bpf_plt_target - bpf_plt),
+		bpf_jit_plt(&new_plt, ret, new_addr);
+		s390_kernel_write(&plt->target, &new_plt.target,
 				  sizeof(void *));
 	}
+1 -1
arch/x86/Kbuild
···

 obj-$(CONFIG_KEXEC_FILE) += purgatory/

-obj-y += virt/svm/
+obj-y += virt/

 # for cleaning
 subdir- += boot tools
+2
arch/x86/Kconfig
···
 	# with named address spaces - see GCC PR sanitizer/111736.
 	#
 	depends on !KASAN
+	# -fsanitize=thread (KCSAN) is also incompatible.
+	depends on !KCSAN

 config CC_HAS_SLS
 	def_bool $(cc-option,-mharden-sls=all)
-2
arch/x86/Makefile
···

 libs-y += arch/x86/lib/

-core-y += arch/x86/virt/
-
 # drivers-y are linked after core-y
 drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
 drivers-$(CONFIG_PCI) += arch/x86/pci/
+15 -5
arch/x86/boot/compressed/efi_mixed.S
···
  */

 #include <linux/linkage.h>
+#include <asm/asm-offsets.h>
 #include <asm/msr.h>
 #include <asm/page_types.h>
 #include <asm/processor-flags.h>
 #include <asm/segment.h>
+#include <asm/setup.h>

 	.code64
 	.text
···
 SYM_FUNC_START(efi32_stub_entry)
 	call	1f
 1:	popl	%ecx
+	leal	(efi32_boot_args - 1b)(%ecx), %ebx

 	/* Clear BSS */
 	xorl	%eax, %eax
···
 	popl	%ecx
 	popl	%edx
 	popl	%esi
+	movl	%esi, 8(%ebx)
 	jmp	efi32_entry
 SYM_FUNC_END(efi32_stub_entry)
 #endif
···
  *
  * Arguments:	%ecx	image handle
  *		%edx	EFI system table pointer
- *		%esi	struct bootparams pointer (or NULL when not using
- *			the EFI handover protocol)
  *
  * Since this is the point of no return for ordinary execution, no registers
  * are considered live except for the function parameters. [Note that the EFI
···
 	leal	(efi32_boot_args - 1b)(%ebx), %ebx
 	movl	%ecx, 0(%ebx)
 	movl	%edx, 4(%ebx)
-	movl	%esi, 8(%ebx)
 	movb	$0x0, 12(%ebx)		// efi_is64
+
+	/*
+	 * Allocate some memory for a temporary struct boot_params, which only
+	 * needs the minimal pieces that startup_32() relies on.
+	 */
+	subl	$PARAM_SIZE, %esp
+	movl	%esp, %esi
+	movl	$PAGE_SIZE, BP_kernel_alignment(%esi)
+	movl	$_end - 1b, BP_init_size(%esi)
+	subl	$startup_32 - 1b, BP_init_size(%esi)

 	/* Disable paging */
 	movl	%cr0, %eax
···

 	movl	8(%ebp), %ecx		// image_handle
 	movl	12(%ebp), %edx		// sys_table
-	xorl	%esi, %esi
-	jmp	efi32_entry		// pass %ecx, %edx, %esi
+	jmp	efi32_entry		// pass %ecx, %edx
 					// no other registers remain live

 2:	popl	%edi			// restore callee-save registers
+1
arch/x86/entry/vdso/Makefile
···

 obj-$(CONFIG_COMPAT_32) += vdso-image-32.o vdso32-setup.o

 OBJECT_FILES_NON_STANDARD_vdso-image-32.o := n
+OBJECT_FILES_NON_STANDARD_vdso-image-x32.o := n
 OBJECT_FILES_NON_STANDARD_vdso-image-64.o := n
 OBJECT_FILES_NON_STANDARD_vdso32-setup.o := n

+34 -5
arch/x86/events/amd/core.c
···
 /*
  * AMD Performance Monitor Family 17h and later:
  */
-static const u64 amd_f17h_perfmon_event_map[PERF_COUNT_HW_MAX] =
+static const u64 amd_zen1_perfmon_event_map[PERF_COUNT_HW_MAX] =
 {
 	[PERF_COUNT_HW_CPU_CYCLES]		= 0x0076,
 	[PERF_COUNT_HW_INSTRUCTIONS]		= 0x00c0,
···
 	[PERF_COUNT_HW_STALLED_CYCLES_BACKEND]	= 0x0187,
 };

+static const u64 amd_zen2_perfmon_event_map[PERF_COUNT_HW_MAX] =
+{
+	[PERF_COUNT_HW_CPU_CYCLES]		= 0x0076,
+	[PERF_COUNT_HW_INSTRUCTIONS]		= 0x00c0,
+	[PERF_COUNT_HW_CACHE_REFERENCES]	= 0xff60,
+	[PERF_COUNT_HW_CACHE_MISSES]		= 0x0964,
+	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS]	= 0x00c2,
+	[PERF_COUNT_HW_BRANCH_MISSES]		= 0x00c3,
+	[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND]	= 0x00a9,
+};
+
+static const u64 amd_zen4_perfmon_event_map[PERF_COUNT_HW_MAX] =
+{
+	[PERF_COUNT_HW_CPU_CYCLES]		= 0x0076,
+	[PERF_COUNT_HW_INSTRUCTIONS]		= 0x00c0,
+	[PERF_COUNT_HW_CACHE_REFERENCES]	= 0xff60,
+	[PERF_COUNT_HW_CACHE_MISSES]		= 0x0964,
+	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS]	= 0x00c2,
+	[PERF_COUNT_HW_BRANCH_MISSES]		= 0x00c3,
+	[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND]	= 0x00a9,
+	[PERF_COUNT_HW_REF_CPU_CYCLES]		= 0x100000120,
+};
+
 static u64 amd_pmu_event_map(int hw_event)
 {
-	if (boot_cpu_data.x86 >= 0x17)
-		return amd_f17h_perfmon_event_map[hw_event];
+	if (cpu_feature_enabled(X86_FEATURE_ZEN4) || boot_cpu_data.x86 >= 0x1a)
+		return amd_zen4_perfmon_event_map[hw_event];
+
+	if (cpu_feature_enabled(X86_FEATURE_ZEN2) || boot_cpu_data.x86 >= 0x19)
+		return amd_zen2_perfmon_event_map[hw_event];
+
+	if (cpu_feature_enabled(X86_FEATURE_ZEN1))
+		return amd_zen1_perfmon_event_map[hw_event];

 	return amd_perfmon_event_map[hw_event];
 }
···
 	if (!status)
 		goto done;

-	/* Read branch records before unfreezing */
-	if (status & GLOBAL_STATUS_LBRS_FROZEN) {
+	/* Read branch records */
+	if (x86_pmu.lbr_nr) {
 		amd_pmu_lbr_read();
 		status &= ~GLOBAL_STATUS_LBRS_FROZEN;
 	}
+10 -6
arch/x86/events/amd/lbr.c
···
 		wrmsrl(MSR_AMD64_LBR_SELECT, lbr_select);
 	}

-	rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
-	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
+	if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
+		rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
+		wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+	}

-	wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
 	wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg | DBG_EXTN_CFG_LBRV2EN);
 }
···
 		return;

 	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
-	rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
-
 	wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
-	wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+
+	if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
+		rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
+		wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+	}
 }

 __init int amd_pmu_lbr_init(void)
+1
arch/x86/include/asm/asm-prototypes.h
···
 #include <asm/asm.h>
 #include <asm/fred.h>
 #include <asm/gsseg.h>
+#include <asm/nospec-branch.h>

 #ifndef CONFIG_X86_CMPXCHG64
 extern void cmpxchg8b_emu(void);
+4 -2
arch/x86/include/asm/cpufeature.h
···
 	 CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
 	 CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) || \
 	 CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) || \
+	 CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 21, feature_bit) || \
 	 REQUIRED_MASK_CHECK || \
-	 BUILD_BUG_ON_ZERO(NCAPINTS != 21))
+	 BUILD_BUG_ON_ZERO(NCAPINTS != 22))

 #define DISABLED_MASK_BIT_SET(feature_bit) \
 	( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
···
 	 CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
 	 CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) || \
 	 CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) || \
+	 CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 21, feature_bit) || \
 	 DISABLED_MASK_CHECK || \
-	 BUILD_BUG_ON_ZERO(NCAPINTS != 21))
+	 BUILD_BUG_ON_ZERO(NCAPINTS != 22))

 #define cpu_has(c, bit) \
 	(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
+9 -1
arch/x86/include/asm/cpufeatures.h
···
 /*
  * Defines x86 CPU feature bits
  */
-#define NCAPINTS	21	/* N 32-bit words worth of info */
+#define NCAPINTS	22	/* N 32-bit words worth of info */
 #define NBUGINTS	2	/* N 32-bit bug flags */

 /*
···
 #define X86_FEATURE_SBPB		(20*32+27) /* "" Selective Branch Prediction Barrier */
 #define X86_FEATURE_IBPB_BRTYPE		(20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
 #define X86_FEATURE_SRSO_NO		(20*32+29) /* "" CPU is not affected by SRSO */
+
+/*
+ * Extended auxiliary flags: Linux defined - for features scattered in various
+ * CPUID levels like 0x80000022, etc.
+ *
+ * Reuse free bits when adding new feature flags!
+ */
+#define X86_FEATURE_AMD_LBR_PMC_FREEZE	(21*32+ 0) /* AMD LBR and PMC Freeze */

 /*
  * BUG word(s)
+2
arch/x86/include/asm/crash_reserve.h
···
 #endif
 }

+#define HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY
+
 #endif /* _X86_CRASH_RESERVE_H */
+2 -1
arch/x86/include/asm/disabled-features.h
···
 #define DISABLED_MASK18	(DISABLE_IBT)
 #define DISABLED_MASK19	(DISABLE_SEV_SNP)
 #define DISABLED_MASK20	0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
+#define DISABLED_MASK21	0
+#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)

 #endif /* _ASM_X86_DISABLED_FEATURES_H */
+16 -5
arch/x86/include/asm/nospec-branch.h
···
 .Lskip_rsb_\@:
 .endm

+/*
+ * The CALL to srso_alias_untrain_ret() must be patched in directly at
+ * the spot where untraining must be done, ie., srso_alias_untrain_ret()
+ * must be the target of a CALL instruction instead of indirectly
+ * jumping to a wrapper which then calls it. Therefore, this macro is
+ * called outside of __UNTRAIN_RET below, for the time being, before the
+ * kernel can support nested alternatives with arbitrary nesting.
+ */
+.macro CALL_UNTRAIN_RET
 #if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)
-#define CALL_UNTRAIN_RET	"call entry_untrain_ret"
-#else
-#define CALL_UNTRAIN_RET	""
+	ALTERNATIVE_2 "", "call entry_untrain_ret", X86_FEATURE_UNRET, \
+		      "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
 #endif
+.endm

 /*
  * Mitigate RETBleed for AMD/Hygon Zen uarch. Requires KERNEL CR3 because the
···
 .macro __UNTRAIN_RET ibpb_feature, call_depth_insns
 #if defined(CONFIG_MITIGATION_RETHUNK) || defined(CONFIG_MITIGATION_IBPB_ENTRY)
 	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "", \
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
+	CALL_UNTRAIN_RET
+	ALTERNATIVE_2 "", \
 		      "call entry_ibpb", \ibpb_feature, \
 		      __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
 #endif
···
 #else
 static inline void retbleed_return_thunk(void) {}
 #endif
+
+extern void srso_alias_untrain_ret(void);

 #ifdef CONFIG_MITIGATION_SRSO
 extern void srso_return_thunk(void);
+2 -1
arch/x86/include/asm/required-features.h
···
 #define REQUIRED_MASK18	0
 #define REQUIRED_MASK19	0
 #define REQUIRED_MASK20	0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
+#define REQUIRED_MASK21	0
+#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)

 #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+2 -2
arch/x86/include/asm/sev.h
···
 				unsigned long npages);
 void early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
 				unsigned long npages);
-void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
 void snp_set_memory_shared(unsigned long vaddr, unsigned long npages);
 void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
 void snp_set_wakeup_secondary_cpu(void);
 bool snp_init(struct boot_params *bp);
 void __noreturn snp_abort(void);
+void snp_dmi_setup(void);
 int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio);
 void snp_accept_memory(phys_addr_t start, phys_addr_t end);
 u64 snp_get_unsupported_features(u64 status);
···
 early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
 static inline void __init
 early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
-static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
 static inline void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { }
 static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { }
 static inline void snp_set_wakeup_secondary_cpu(void) { }
 static inline bool snp_init(struct boot_params *bp) { return false; }
 static inline void snp_abort(void) { }
+static inline void snp_dmi_setup(void) { }
 static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio)
 {
 	return -ENOTTY;
+2 -1
arch/x86/include/asm/x86_init.h
···
  * @reserve_resources:		reserve the standard resources for the
  *				platform
  * @memory_setup:		platform specific memory setup
- *
+ * @dmi_setup:			platform specific DMI setup
  */
 struct x86_init_resources {
 	void (*probe_roms)(void);
 	void (*reserve_resources)(void);
 	char *(*memory_setup)(void);
+	void (*dmi_setup)(void);
 };

 /**
+1
arch/x86/kernel/cpu/scattered.c
···
 	{ X86_FEATURE_BMEC,		CPUID_EBX, 3, 0x80000020, 0 },
 	{ X86_FEATURE_PERFMON_V2,	CPUID_EAX, 0, 0x80000022, 0 },
 	{ X86_FEATURE_AMD_LBR_V2,	CPUID_EAX, 1, 0x80000022, 0 },
+	{ X86_FEATURE_AMD_LBR_PMC_FREEZE,	CPUID_EAX, 2, 0x80000022, 0 },
 	{ 0, 0, 0, 0, 0 }
 };

+2 -1
arch/x86/kernel/eisa.c
···
 /*
  * EISA specific code
  */
+#include <linux/cc_platform.h>
 #include <linux/ioport.h>
 #include <linux/eisa.h>
 #include <linux/io.h>
···
 {
 	void __iomem *p;

-	if (xen_pv_domain() && !xen_initial_domain())
+	if ((xen_pv_domain() && !xen_initial_domain()) || cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
 		return 0;

 	p = ioremap(0x0FFFD9, 4);
+14 -10
arch/x86/kernel/nmi.c
···

 static char *nmi_check_stall_msg[] = {
 /*									*/
-/* +--------- nsp->idt_seq_snap & 0x1: CPU is in NMI handler.		*/
+/* +--------- nmi_seq & 0x1: CPU is currently in NMI handler.		*/
 /* | +------ cpu_is_offline(cpu)					*/
 /* | | +--- nsp->idt_calls_snap != atomic_long_read(&nsp->idt_calls):	*/
 /* | | |	NMI handler has been invoked.				*/
···
 	nmi_seq = READ_ONCE(nsp->idt_nmi_seq);
 	if (nsp->idt_nmi_seq_snap + 1 == nmi_seq && (nmi_seq & 0x1)) {
 		msgp = "CPU entered NMI handler function, but has not exited";
-	} else if ((nsp->idt_nmi_seq_snap & 0x1) != (nmi_seq & 0x1)) {
-		msgp = "CPU is handling NMIs";
-	} else {
-		idx = ((nsp->idt_seq_snap & 0x1) << 2) |
+	} else if (nsp->idt_nmi_seq_snap == nmi_seq ||
+		   nsp->idt_nmi_seq_snap + 1 == nmi_seq) {
+		idx = ((nmi_seq & 0x1) << 2) |
 		      (cpu_is_offline(cpu) << 1) |
 		      (nsp->idt_calls_snap != atomic_long_read(&nsp->idt_calls));
 		msgp = nmi_check_stall_msg[idx];
 		if (nsp->idt_ignored_snap != READ_ONCE(nsp->idt_ignored) && (idx & 0x1))
 			modp = ", but OK because ignore_nmis was set";
-		if (nmi_seq & 0x1)
-			msghp = " (CPU currently in NMI handler function)";
-		else if (nsp->idt_nmi_seq_snap + 1 == nmi_seq)
+		if (nsp->idt_nmi_seq_snap + 1 == nmi_seq)
 			msghp = " (CPU exited one NMI handler function)";
+		else if (nmi_seq & 0x1)
+			msghp = " (CPU currently in NMI handler function)";
+		else
+			msghp = " (CPU was never in an NMI handler function)";
+	} else {
+		msgp = "CPU is handling NMIs";
 	}
-	pr_alert("%s: CPU %d: %s%s%s, last activity: %lu jiffies ago.\n",
-		 __func__, cpu, msgp, modp, msghp, j - READ_ONCE(nsp->recv_jiffies));
+	pr_alert("%s: CPU %d: %s%s%s\n", __func__, cpu, msgp, modp, msghp);
+	pr_alert("%s: last activity: %lu jiffies ago.\n",
+		 __func__, j - READ_ONCE(nsp->recv_jiffies));
 }

-10
arch/x86/kernel/probe_roms.c
··· 203 203 unsigned char c; 204 204 int i; 205 205 206 - /* 207 - * The ROM memory range is not part of the e820 table and is therefore not 208 - * pre-validated by BIOS. The kernel page table maps the ROM region as encrypted 209 - * memory, and SNP requires encrypted memory to be validated before access. 210 - * Do that here. 211 - */ 212 - snp_prep_memory(video_rom_resource.start, 213 - ((system_rom_resource.end + 1) - video_rom_resource.start), 214 - SNP_PAGE_STATE_PRIVATE); 215 - 216 206 /* video rom */ 217 207 upper = adapter_rom_resources[0].start; 218 208 for (start = video_rom_resource.start; start < upper; start += 2048) {
+1 -2
arch/x86/kernel/setup.c
··· 9 9 #include <linux/console.h> 10 10 #include <linux/crash_dump.h> 11 11 #include <linux/dma-map-ops.h> 12 - #include <linux/dmi.h> 13 12 #include <linux/efi.h> 14 13 #include <linux/ima.h> 15 14 #include <linux/init_ohci1394_dma.h> ··· 901 902 efi_init(); 902 903 903 904 reserve_ibft_region(); 904 - dmi_setup(); 905 + x86_init.resources.dmi_setup(); 905 906 906 907 /* 907 908 * VMware detection requires dmi to be available, so this
+12 -15
arch/x86/kernel/sev.c
··· 23 23 #include <linux/platform_device.h> 24 24 #include <linux/io.h> 25 25 #include <linux/psp-sev.h> 26 + #include <linux/dmi.h> 26 27 #include <uapi/linux/sev-guest.h> 27 28 28 29 #include <asm/init.h> ··· 794 793 795 794 /* Ask hypervisor to mark the memory pages shared in the RMP table. */ 796 795 early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_SHARED); 797 - } 798 - 799 - void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) 800 - { 801 - unsigned long vaddr, npages; 802 - 803 - vaddr = (unsigned long)__va(paddr); 804 - npages = PAGE_ALIGN(sz) >> PAGE_SHIFT; 805 - 806 - if (op == SNP_PAGE_STATE_PRIVATE) 807 - early_snp_set_memory_private(vaddr, paddr, npages); 808 - else if (op == SNP_PAGE_STATE_SHARED) 809 - early_snp_set_memory_shared(vaddr, paddr, npages); 810 - else 811 - WARN(1, "invalid memory op %d\n", op); 812 796 } 813 797 814 798 static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr, ··· 2120 2134 void __head __noreturn snp_abort(void) 2121 2135 { 2122 2136 sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); 2137 + } 2138 + 2139 + /* 2140 + * SEV-SNP guests should only execute dmi_setup() if EFI_CONFIG_TABLES are 2141 + * enabled, as the alternative (fallback) logic for DMI probing in the legacy 2142 + * ROM region can cause a crash since this region is not pre-validated. 2143 + */ 2144 + void __init snp_dmi_setup(void) 2145 + { 2146 + if (efi_enabled(EFI_CONFIG_TABLES)) 2147 + dmi_setup(); 2123 2148 } 2124 2149 2125 2150 static void dump_cpuid_table(void)
+2
arch/x86/kernel/x86_init.c
··· 3 3 * 4 4 * For licencing details see kernel-base/COPYING 5 5 */ 6 + #include <linux/dmi.h> 6 7 #include <linux/init.h> 7 8 #include <linux/ioport.h> 8 9 #include <linux/export.h> ··· 67 66 .probe_roms = probe_roms, 68 67 .reserve_resources = reserve_standard_io_resources, 69 68 .memory_setup = e820__memory_setup_default, 69 + .dmi_setup = dmi_setup, 70 70 }, 71 71 72 72 .mpparse = {
+6 -5
arch/x86/lib/retpoline.S
··· 163 163 lfence 164 164 jmp srso_alias_return_thunk 165 165 SYM_FUNC_END(srso_alias_untrain_ret) 166 + __EXPORT_THUNK(srso_alias_untrain_ret) 166 167 .popsection 167 168 168 169 .pushsection .text..__x86.rethunk_safe ··· 225 224 SYM_CODE_END(srso_return_thunk) 226 225 227 226 #define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret" 228 - #define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret" 229 227 #else /* !CONFIG_MITIGATION_SRSO */ 228 + /* Dummy for the alternative in CALL_UNTRAIN_RET. */ 229 + SYM_CODE_START(srso_alias_untrain_ret) 230 + RET 231 + SYM_FUNC_END(srso_alias_untrain_ret) 230 232 #define JMP_SRSO_UNTRAIN_RET "ud2" 231 - #define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2" 232 233 #endif /* CONFIG_MITIGATION_SRSO */ 233 234 234 235 #ifdef CONFIG_MITIGATION_UNRET_ENTRY ··· 322 319 #if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO) 323 320 324 321 SYM_FUNC_START(entry_untrain_ret) 325 - ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET, \ 326 - JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO, \ 327 - JMP_SRSO_ALIAS_UNTRAIN_RET, X86_FEATURE_SRSO_ALIAS 322 + ALTERNATIVE JMP_RETBLEED_UNTRAIN_RET, JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO 328 323 SYM_FUNC_END(entry_untrain_ret) 329 324 __EXPORT_THUNK(entry_untrain_ret) 330 325
+5 -18
arch/x86/mm/ident_map.c
··· 26 26 for (; addr < end; addr = next) { 27 27 pud_t *pud = pud_page + pud_index(addr); 28 28 pmd_t *pmd; 29 - bool use_gbpage; 30 29 31 30 next = (addr & PUD_MASK) + PUD_SIZE; 32 31 if (next > end) 33 32 next = end; 34 33 35 - /* if this is already a gbpage, this portion is already mapped */ 36 - if (pud_leaf(*pud)) 37 - continue; 38 - 39 - /* Is using a gbpage allowed? */ 40 - use_gbpage = info->direct_gbpages; 41 - 42 - /* Don't use gbpage if it maps more than the requested region. */ 43 - /* at the begining: */ 44 - use_gbpage &= ((addr & ~PUD_MASK) == 0); 45 - /* ... or at the end: */ 46 - use_gbpage &= ((next & ~PUD_MASK) == 0); 47 - 48 - /* Never overwrite existing mappings */ 49 - use_gbpage &= !pud_present(*pud); 50 - 51 - if (use_gbpage) { 34 + if (info->direct_gbpages) { 52 35 pud_t pudval; 53 36 37 + if (pud_present(*pud)) 38 + continue; 39 + 40 + addr &= PUD_MASK; 54 41 pudval = __pud((addr - info->offset) | info->page_flag); 55 42 set_pud(pud, pudval); 56 43 continue;
+18
arch/x86/mm/mem_encrypt_amd.c
··· 492 492 */ 493 493 if (sev_status & MSR_AMD64_SEV_ENABLED) 494 494 ia32_disable(); 495 + 496 + /* 497 + * Override init functions that scan the ROM region in SEV-SNP guests, 498 + * as this memory is not pre-validated and would thus cause a crash. 499 + */ 500 + if (sev_status & MSR_AMD64_SEV_SNP_ENABLED) { 501 + x86_init.mpparse.find_mptable = x86_init_noop; 502 + x86_init.pci.init_irq = x86_init_noop; 503 + x86_init.resources.probe_roms = x86_init_noop; 504 + 505 + /* 506 + * DMI setup behavior for SEV-SNP guests depends on 507 + * efi_enabled(EFI_CONFIG_TABLES), which hasn't been 508 + * parsed yet. snp_dmi_setup() will run after that 509 + * parsing has happened. 510 + */ 511 + x86_init.resources.dmi_setup = snp_dmi_setup; 512 + } 495 513 } 496 514 497 515 void __init mem_encrypt_free_decrypted_mem(void)
+1 -1
arch/x86/virt/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-y += vmx/ 2 + obj-y += svm/ vmx/
+1 -1
block/blk-merge.c
··· 726 726 * which can be mixed are set in each bio and mark @rq as mixed 727 727 * merged. 728 728 */ 729 - void blk_rq_set_mixed_merge(struct request *rq) 729 + static void blk_rq_set_mixed_merge(struct request *rq) 730 730 { 731 731 blk_opf_t ff = rq->cmd_flags & REQ_FAILFAST_MASK; 732 732 struct bio *bio;
+2 -7
block/blk-mq.c
··· 770 770 /* 771 771 * Partial zone append completions cannot be supported as the 772 772 * BIO fragments may end up not being written sequentially. 773 - * For such case, force the completed nbytes to be equal to 774 - * the BIO size so that bio_advance() sets the BIO remaining 775 - * size to 0 and we end up calling bio_endio() before returning. 776 773 */ 777 - if (bio->bi_iter.bi_size != nbytes) { 774 + if (bio->bi_iter.bi_size != nbytes) 778 775 bio->bi_status = BLK_STS_IOERR; 779 - nbytes = bio->bi_iter.bi_size; 780 - } else { 776 + else 781 777 bio->bi_iter.bi_sector = rq->__sector; 782 - } 783 778 } 784 779 785 780 bio_advance(bio, nbytes);
+1 -2
block/blk-settings.c
··· 146 146 max_hw_sectors = min_not_zero(lim->max_hw_sectors, 147 147 lim->max_dev_sectors); 148 148 if (lim->max_user_sectors) { 149 - if (lim->max_user_sectors > max_hw_sectors || 150 - lim->max_user_sectors < PAGE_SIZE / SECTOR_SIZE) 149 + if (lim->max_user_sectors < PAGE_SIZE / SECTOR_SIZE) 151 150 return -EINVAL; 152 151 lim->max_sectors = min(max_hw_sectors, lim->max_user_sectors); 153 152 } else {
-1
block/blk.h
··· 339 339 bool blk_attempt_req_merge(struct request_queue *q, struct request *rq, 340 340 struct request *next); 341 341 unsigned int blk_recalc_rq_segments(struct request *rq); 342 - void blk_rq_set_mixed_merge(struct request *rq); 343 342 bool blk_rq_merge_ok(struct request *rq, struct bio *bio); 344 343 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio); 345 344
+3
crypto/asymmetric_keys/mscode_parser.c
··· 75 75 76 76 oid = look_up_OID(value, vlen); 77 77 switch (oid) { 78 + case OID_sha1: 79 + ctx->digest_algo = "sha1"; 80 + break; 78 81 case OID_sha256: 79 82 ctx->digest_algo = "sha256"; 80 83 break;
+4
crypto/asymmetric_keys/pkcs7_parser.c
··· 227 227 struct pkcs7_parse_context *ctx = context; 228 228 229 229 switch (ctx->last_oid) { 230 + case OID_sha1: 231 + ctx->sinfo->sig->hash_algo = "sha1"; 232 + break; 230 233 case OID_sha256: 231 234 ctx->sinfo->sig->hash_algo = "sha256"; 232 235 break; ··· 281 278 ctx->sinfo->sig->pkey_algo = "rsa"; 282 279 ctx->sinfo->sig->encoding = "pkcs1"; 283 280 break; 281 + case OID_id_ecdsa_with_sha1: 284 282 case OID_id_ecdsa_with_sha224: 285 283 case OID_id_ecdsa_with_sha256: 286 284 case OID_id_ecdsa_with_sha384:
+2 -1
crypto/asymmetric_keys/public_key.c
··· 115 115 */ 116 116 if (!hash_algo) 117 117 return -EINVAL; 118 - if (strcmp(hash_algo, "sha224") != 0 && 118 + if (strcmp(hash_algo, "sha1") != 0 && 119 + strcmp(hash_algo, "sha224") != 0 && 119 120 strcmp(hash_algo, "sha256") != 0 && 120 121 strcmp(hash_algo, "sha384") != 0 && 121 122 strcmp(hash_algo, "sha512") != 0 &&
+1 -1
crypto/asymmetric_keys/signature.c
··· 115 115 * Sign the specified data blob using the private key specified by params->key. 116 116 * The signature is wrapped in an encoding if params->encoding is specified 117 117 * (eg. "pkcs1"). If the encoding needs to know the digest type, this can be 118 - * passed through params->hash_algo (eg. "sha512"). 118 + * passed through params->hash_algo (eg. "sha1"). 119 119 * 120 120 * Returns the length of the data placed in the signature buffer or an error. 121 121 */
+8
crypto/asymmetric_keys/x509_cert_parser.c
··· 198 198 default: 199 199 return -ENOPKG; /* Unsupported combination */ 200 200 201 + case OID_sha1WithRSAEncryption: 202 + ctx->cert->sig->hash_algo = "sha1"; 203 + goto rsa_pkcs1; 204 + 201 205 case OID_sha256WithRSAEncryption: 202 206 ctx->cert->sig->hash_algo = "sha256"; 203 207 goto rsa_pkcs1; ··· 217 213 case OID_sha224WithRSAEncryption: 218 214 ctx->cert->sig->hash_algo = "sha224"; 219 215 goto rsa_pkcs1; 216 + 217 + case OID_id_ecdsa_with_sha1: 218 + ctx->cert->sig->hash_algo = "sha1"; 219 + goto ecdsa; 220 220 221 221 case OID_id_rsassa_pkcs1_v1_5_with_sha3_256: 222 222 ctx->cert->sig->hash_algo = "sha3-256";
+80
crypto/testmgr.h
··· 653 653 static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = { 654 654 { 655 655 .key = 656 + "\x04\xf7\x46\xf8\x2f\x15\xf6\x22\x8e\xd7\x57\x4f\xcc\xe7\xbb\xc1" 657 + "\xd4\x09\x73\xcf\xea\xd0\x15\x07\x3d\xa5\x8a\x8a\x95\x43\xe4\x68" 658 + "\xea\xc6\x25\xc1\xc1\x01\x25\x4c\x7e\xc3\x3c\xa6\x04\x0a\xe7\x08" 659 + "\x98", 660 + .key_len = 49, 661 + .params = 662 + "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48" 663 + "\xce\x3d\x03\x01\x01", 664 + .param_len = 21, 665 + .m = 666 + "\xcd\xb9\xd2\x1c\xb7\x6f\xcd\x44\xb3\xfd\x63\xea\xa3\x66\x7f\xae" 667 + "\x63\x85\xe7\x82", 668 + .m_size = 20, 669 + .algo = OID_id_ecdsa_with_sha1, 670 + .c = 671 + "\x30\x35\x02\x19\x00\xba\xe5\x93\x83\x6e\xb6\x3b\x63\xa0\x27\x91" 672 + "\xc6\xf6\x7f\xc3\x09\xad\x59\xad\x88\x27\xd6\x92\x6b\x02\x18\x10" 673 + "\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86" 674 + "\x80\x6f\xa5\x79\x77\xda\xd0", 675 + .c_size = 55, 676 + .public_key_vec = true, 677 + .siggen_sigver_test = true, 678 + }, { 679 + .key = 656 680 "\x04\xb6\x4b\xb1\xd1\xac\xba\x24\x8f\x65\xb2\x60\x00\x90\xbf\xbd" 657 681 "\x78\x05\x73\xe9\x79\x1d\x6f\x7c\x0b\xd2\xc3\x93\xa7\x28\xe1\x75" 658 682 "\xf7\xd5\x95\x1d\x28\x10\xc0\x75\x50\x5c\x1a\x4f\x3f\x8f\xa5\xee" ··· 779 755 780 756 static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = { 781 757 { 758 + .key = 759 + "\x04\xb9\x7b\xbb\xd7\x17\x64\xd2\x7e\xfc\x81\x5d\x87\x06\x83\x41" 760 + "\x22\xd6\x9a\xaa\x87\x17\xec\x4f\x63\x55\x2f\x94\xba\xdd\x83\xe9" 761 + "\x34\x4b\xf3\xe9\x91\x13\x50\xb6\xcb\xca\x62\x08\xe7\x3b\x09\xdc" 762 + "\xc3\x63\x4b\x2d\xb9\x73\x53\xe4\x45\xe6\x7c\xad\xe7\x6b\xb0\xe8" 763 + "\xaf", 764 + .key_len = 65, 765 + .params = 766 + "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48" 767 + "\xce\x3d\x03\x01\x07", 768 + .param_len = 21, 769 + .m = 770 + "\xc2\x2b\x5f\x91\x78\x34\x26\x09\x42\x8d\x6f\x51\xb2\xc5\xaf\x4c" 771 + "\x0b\xde\x6a\x42", 772 + .m_size = 20, 773 + .algo = OID_id_ecdsa_with_sha1,
774 + .c = 775 + "\x30\x46\x02\x21\x00\xf9\x25\xce\x9f\x3a\xa6\x35\x81\xcf\xd4\xe7" 776 + "\xb7\xf0\x82\x56\x41\xf7\xd4\xad\x8d\x94\x5a\x69\x89\xee\xca\x6a" 777 + "\x52\x0e\x48\x4d\xcc\x02\x21\x00\xd7\xe4\xef\x52\x66\xd3\x5b\x9d" 778 + "\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7" 779 + "\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad", 780 + .c_size = 72, 781 + .public_key_vec = true, 782 + .siggen_sigver_test = true, 783 + }, { 782 784 .key = 783 785 "\x04\x8b\x6d\xc0\x33\x8e\x2d\x8b\x67\xf5\xeb\xc4\x7f\xa0\xf5\xd9" 784 786 "\x7b\x03\xa5\x78\x9a\xb5\xea\x14\xe4\x23\xd0\xaf\xd7\x0e\x2e\xa0" ··· 916 866 917 867 static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = { 918 868 { 869 + .key = /* secp384r1(sha1) */ 870 + "\x04\x89\x25\xf3\x97\x88\xcb\xb0\x78\xc5\x72\x9a\x14\x6e\x7a\xb1" 871 + "\x5a\xa5\x24\xf1\x95\x06\x9e\x28\xfb\xc4\xb9\xbe\x5a\x0d\xd9\x9f" 872 + "\xf3\xd1\x4d\x2d\x07\x99\xbd\xda\xa7\x66\xec\xbb\xea\xba\x79\x42" 873 + "\xc9\x34\x89\x6a\xe7\x0b\xc3\xf2\xfe\x32\x30\xbe\xba\xf9\xdf\x7e" 874 + "\x4b\x6a\x07\x8e\x26\x66\x3f\x1d\xec\xa2\x57\x91\x51\xdd\x17\x0e" 875 + "\x0b\x25\xd6\x80\x5c\x3b\xe6\x1a\x98\x48\x91\x45\x7a\x73\xb0\xc3" 876 + "\xf1", 877 + .key_len = 97, 878 + .params = 879 + "\x30\x10\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x05\x2b\x81\x04" 880 + "\x00\x22", 881 + .param_len = 18, 882 + .m = 883 + "\x12\x55\x28\xf0\x77\xd5\xb6\x21\x71\x32\x48\xcd\x28\xa8\x25\x22" 884 + "\x3a\x69\xc1\x93", 885 + .m_size = 20, 886 + .algo = OID_id_ecdsa_with_sha1, 887 + .c = 888 + "\x30\x66\x02\x31\x00\xf5\x0f\x24\x4c\x07\x93\x6f\x21\x57\x55\x07" 889 + "\x20\x43\x30\xde\xa0\x8d\x26\x8e\xae\x63\x3f\xbc\x20\x3a\xc6\xf1" 890 + "\x32\x3c\xce\x70\x2b\x78\xf1\x4c\x26\xe6\x5b\x86\xcf\xec\x7c\x7e" 891 + "\xd0\x87\xd7\xd7\x6e\x02\x31\x00\xcd\xbb\x7e\x81\x5d\x8f\x63\xc0" 892 + "\x5f\x63\xb1\xbe\x5e\x4c\x0e\xa1\xdf\x28\x8c\x1b\xfa\xf9\x95\x88" 893 + "\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26" 894 + "\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
895 + .c_size = 104, 896 + .public_key_vec = true, 897 + .siggen_sigver_test = true, 898 + }, { 919 899 .key = /* secp384r1(sha224) */ 920 900 "\x04\x69\x6c\xcf\x62\xee\xd0\x0d\xe5\xb5\x2f\x70\x54\xcf\x26\xa0" 921 901 "\xd9\x98\x8d\x92\x2a\xab\x9b\x11\xcb\x48\x18\xa1\xa9\x0d\xd5\x18"
+6 -2
drivers/acpi/acpica/dbnames.c
··· 550 550 ACPI_FREE(buffer.pointer); 551 551 552 552 buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 553 - acpi_evaluate_object(obj_handle, NULL, NULL, &buffer); 554 - 553 + status = acpi_evaluate_object(obj_handle, NULL, NULL, &buffer); 554 + if (ACPI_FAILURE(status)) { 555 + acpi_os_printf("Could Not evaluate object %p\n", 556 + obj_handle); 557 + return (AE_OK); 558 + } 555 559 /* 556 560 * Since this is a field unit, surround the output in braces 557 561 */
+1 -1
drivers/acpi/apei/einj-core.c
··· 851 851 return rc; 852 852 } 853 853 854 - static void __exit einj_remove(struct platform_device *pdev) 854 + static void einj_remove(struct platform_device *pdev) 855 855 { 856 856 struct apei_exec_context ctx; 857 857
+4 -1
drivers/ata/libata-eh.c
··· 712 712 ehc->saved_ncq_enabled |= 1 << devno; 713 713 714 714 /* If we are resuming, wake up the device */ 715 - if (ap->pflags & ATA_PFLAG_RESUMING) 715 + if (ap->pflags & ATA_PFLAG_RESUMING) { 716 + dev->flags |= ATA_DFLAG_RESUMING; 716 717 ehc->i.dev_action[devno] |= ATA_EH_SET_ACTIVE; 718 + } 717 719 } 718 720 } 719 721 ··· 3171 3169 return 0; 3172 3170 3173 3171 err: 3172 + dev->flags &= ~ATA_DFLAG_RESUMING; 3174 3173 *r_failed_dev = dev; 3175 3174 return rc; 3176 3175 }
+9
drivers/ata/libata-scsi.c
··· 4730 4730 struct ata_link *link; 4731 4731 struct ata_device *dev; 4732 4732 unsigned long flags; 4733 + bool do_resume; 4733 4734 int ret = 0; 4734 4735 4735 4736 mutex_lock(&ap->scsi_scan_mutex); ··· 4752 4751 if (scsi_device_get(sdev)) 4753 4752 continue; 4754 4753 4754 + do_resume = dev->flags & ATA_DFLAG_RESUMING; 4755 + 4755 4756 spin_unlock_irqrestore(ap->lock, flags); 4757 + if (do_resume) { 4758 + ret = scsi_resume_device(sdev); 4759 + if (ret == -EWOULDBLOCK) 4760 + goto unlock; 4761 + dev->flags &= ~ATA_DFLAG_RESUMING; 4762 + } 4756 4763 ret = scsi_rescan_device(sdev); 4757 4764 scsi_device_put(sdev); 4758 4765 spin_lock_irqsave(ap->lock, flags);
+7 -3
drivers/crypto/intel/iaa/iaa_crypto_main.c
··· 806 806 return -EINVAL; 807 807 808 808 cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa; 809 + if (!cpus_per_iaa) 810 + cpus_per_iaa = 1; 809 811 out: 810 812 return 0; 811 813 } ··· 823 821 } 824 822 } 825 823 826 - if (nr_iaa) 824 + if (nr_iaa) { 827 825 cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa; 828 - else 829 - cpus_per_iaa = 0; 826 + if (!cpus_per_iaa) 827 + cpus_per_iaa = 1; 828 + } else 829 + cpus_per_iaa = 1; 830 830 } 831 831 832 832 static int wq_table_add_wqs(int iaa, int cpu)
-13
drivers/cxl/Kconfig
··· 144 144 If unsure, or if this kernel is meant for production environments, 145 145 say N. 146 146 147 - config CXL_PMU 148 - tristate "CXL Performance Monitoring Unit" 149 - default CXL_BUS 150 - depends on PERF_EVENTS 151 - help 152 - Support performance monitoring as defined in CXL rev 3.0 153 - section 13.2: Performance Monitoring. CXL components may have 154 - one or more CXL Performance Monitoring Units (CPMUs). 155 - 156 - Say 'y/m' to enable a driver that will attach to performance 157 - monitoring units and provide standard perf based interfaces. 158 - 159 - If unsure say 'm'. 160 147 endif
+3 -3
drivers/dma-buf/st-dma-fence-chain.c
··· 84 84 return -ENOMEM; 85 85 86 86 chain = mock_chain(NULL, f, 1); 87 - if (!chain) 87 + if (chain) 88 + dma_fence_enable_sw_signaling(chain); 89 + else 88 90 err = -ENOMEM; 89 - 90 - dma_fence_enable_sw_signaling(chain); 91 91 92 92 dma_fence_signal(f); 93 93 dma_fence_put(f);
+1 -1
drivers/dpll/Kconfig
··· 4 4 # 5 5 6 6 config DPLL 7 - bool 7 + bool
+1 -1
drivers/firmware/efi/libstub/randomalloc.c
··· 120 120 continue; 121 121 } 122 122 123 - target = round_up(max(md->phys_addr, alloc_min), align) + target_slot * align; 123 + target = round_up(max_t(u64, md->phys_addr, alloc_min), align) + target_slot * align; 124 124 pages = size / EFI_PAGE_SIZE; 125 125 126 126 status = efi_bs_call(allocate_pages, EFI_ALLOCATE_ADDRESS,
+1
drivers/firmware/efi/libstub/x86-stub.c
··· 496 496 hdr->vid_mode = 0xffff; 497 497 498 498 hdr->type_of_loader = 0x21; 499 + hdr->initrd_addr_max = INT_MAX; 499 500 500 501 /* Convert unicode cmdline to ascii */ 501 502 cmdline_ptr = efi_convert_cmdline(image, &options_size);
+32 -6
drivers/gpio/gpiolib-cdev.c
··· 1083 1083 return 0; 1084 1084 } 1085 1085 1086 + static inline char *make_irq_label(const char *orig) 1087 + { 1088 + return kstrdup_and_replace(orig, '/', ':', GFP_KERNEL); 1089 + } 1090 + 1091 + static inline void free_irq_label(const char *label) 1092 + { 1093 + kfree(label); 1094 + } 1095 + 1086 1096 static void edge_detector_stop(struct line *line) 1087 1097 { 1088 1098 if (line->irq) { 1089 - free_irq(line->irq, line); 1099 + free_irq_label(free_irq(line->irq, line)); 1090 1100 line->irq = 0; 1091 1101 } 1092 1102 ··· 1120 1110 unsigned long irqflags = 0; 1121 1111 u64 eflags; 1122 1112 int irq, ret; 1113 + char *label; 1123 1114 1124 1115 eflags = edflags & GPIO_V2_LINE_EDGE_FLAGS; 1125 1116 if (eflags && !kfifo_initialized(&line->req->events)) { ··· 1157 1146 IRQF_TRIGGER_RISING : IRQF_TRIGGER_FALLING; 1158 1147 irqflags |= IRQF_ONESHOT; 1159 1148 1149 + label = make_irq_label(line->req->label); 1150 + if (!label) 1151 + return -ENOMEM; 1152 + 1160 1153 /* Request a thread to read the events */ 1161 1154 ret = request_threaded_irq(irq, edge_irq_handler, edge_irq_thread, 1162 - irqflags, line->req->label, line); 1163 - if (ret) 1155 + irqflags, label, line); 1156 + if (ret) { 1157 + free_irq_label(label); 1164 1158 return ret; 1159 + } 1165 1160 1166 1161 line->irq = irq; 1167 1162 return 0; ··· 1990 1973 blocking_notifier_chain_unregister(&le->gdev->device_notifier, 1991 1974 &le->device_unregistered_nb); 1992 1975 if (le->irq) 1993 - free_irq(le->irq, le); 1976 + free_irq_label(free_irq(le->irq, le)); 1994 1977 if (le->desc) 1995 1978 gpiod_free(le->desc); 1996 1979 kfree(le->label); ··· 2131 2114 int fd; 2132 2115 int ret; 2133 2116 int irq, irqflags = 0; 2117 + char *label; 2134 2118 2135 2119 if (copy_from_user(&eventreq, ip, sizeof(eventreq))) 2136 2120 return -EFAULT; ··· 2216 2198 if (ret) 2217 2199 goto out_free_le; 2218 2200 2201 + label = make_irq_label(le->label); 2202 + if (!label) { 2203 + ret = -ENOMEM; 2204 + goto out_free_le; 2205 + } 
2206 + 2219 2207 /* Request a thread to read the events */ 2220 2208 ret = request_threaded_irq(irq, 2221 2209 lineevent_irq_handler, 2222 2210 lineevent_irq_thread, 2223 2211 irqflags, 2224 - le->label, 2212 + label, 2225 2213 le); 2226 - if (ret) 2214 + if (ret) { 2215 + free_irq_label(label); 2227 2216 goto out_free_le; 2217 + } 2228 2218 2229 2219 le->irq = irq; 2230 2220
+18 -14
drivers/gpio/gpiolib.c
··· 2397 2397 } 2398 2398 EXPORT_SYMBOL_GPL(gpiochip_dup_line_label); 2399 2399 2400 + static inline const char *function_name_or_default(const char *con_id) 2401 + { 2402 + return con_id ?: "(default)"; 2403 + } 2404 + 2400 2405 /** 2401 2406 * gpiochip_request_own_desc - Allow GPIO chip to request its own descriptor 2402 2407 * @gc: GPIO chip ··· 2430 2425 enum gpiod_flags dflags) 2431 2426 { 2432 2427 struct gpio_desc *desc = gpiochip_get_desc(gc, hwnum); 2428 + const char *name = function_name_or_default(label); 2433 2429 int ret; 2434 2430 2435 2431 if (IS_ERR(desc)) { 2436 - chip_err(gc, "failed to get GPIO descriptor\n"); 2432 + chip_err(gc, "failed to get GPIO %s descriptor\n", name); 2437 2433 return desc; 2438 2434 } 2439 2435 ··· 2444 2438 2445 2439 ret = gpiod_configure_flags(desc, label, lflags, dflags); 2446 2440 if (ret) { 2447 - chip_err(gc, "setup of own GPIO %s failed\n", label); 2448 2441 gpiod_free_commit(desc); 2442 + chip_err(gc, "setup of own GPIO %s failed\n", name); 2449 2443 return ERR_PTR(ret); 2450 2444 } 2451 2445 ··· 4159 4153 enum gpiod_flags *flags, 4160 4154 unsigned long *lookupflags) 4161 4155 { 4156 + const char *name = function_name_or_default(con_id); 4162 4157 struct gpio_desc *desc = ERR_PTR(-ENOENT); 4163 4158 4164 4159 if (is_of_node(fwnode)) { 4165 - dev_dbg(consumer, "using DT '%pfw' for '%s' GPIO lookup\n", 4166 - fwnode, con_id); 4160 + dev_dbg(consumer, "using DT '%pfw' for '%s' GPIO lookup\n", fwnode, name); 4167 4161 desc = of_find_gpio(to_of_node(fwnode), con_id, idx, lookupflags); 4168 4162 } else if (is_acpi_node(fwnode)) { 4169 - dev_dbg(consumer, "using ACPI '%pfw' for '%s' GPIO lookup\n", 4170 - fwnode, con_id); 4163 + dev_dbg(consumer, "using ACPI '%pfw' for '%s' GPIO lookup\n", fwnode, name); 4171 4164 desc = acpi_find_gpio(fwnode, con_id, idx, flags, lookupflags); 4172 4165 } else if (is_software_node(fwnode)) { 4173 - dev_dbg(consumer, "using swnode '%pfw' for '%s' GPIO lookup\n", 4174 - fwnode, con_id); 
4166 + dev_dbg(consumer, "using swnode '%pfw' for '%s' GPIO lookup\n", fwnode, name); 4175 4167 desc = swnode_find_gpio(fwnode, con_id, idx, lookupflags); 4176 4168 } 4177 4169 ··· 4185 4181 bool platform_lookup_allowed) 4186 4182 { 4187 4183 unsigned long lookupflags = GPIO_LOOKUP_FLAGS_DEFAULT; 4184 + const char *name = function_name_or_default(con_id); 4188 4185 /* 4189 4186 * scoped_guard() is implemented as a for loop, meaning static 4190 4187 * analyzers will complain about these two not being initialized. ··· 4208 4203 } 4209 4204 4210 4205 if (IS_ERR(desc)) { 4211 - dev_dbg(consumer, "No GPIO consumer %s found\n", 4212 - con_id); 4206 + dev_dbg(consumer, "No GPIO consumer %s found\n", name); 4213 4207 return desc; 4214 4208 } 4215 4209 ··· 4230 4226 * 4231 4227 * FIXME: Make this more sane and safe. 4232 4228 */ 4233 - dev_info(consumer, 4234 - "nonexclusive access to GPIO for %s\n", con_id); 4229 + dev_info(consumer, "nonexclusive access to GPIO for %s\n", name); 4235 4230 return desc; 4236 4231 } 4237 4232 4238 4233 ret = gpiod_configure_flags(desc, con_id, lookupflags, flags); 4239 4234 if (ret < 0) { 4240 - dev_dbg(consumer, "setup of GPIO %s failed\n", con_id); 4241 4235 gpiod_put(desc); 4236 + dev_dbg(consumer, "setup of GPIO %s failed\n", name); 4242 4237 return ERR_PTR(ret); 4243 4238 } 4244 4239 ··· 4353 4350 int gpiod_configure_flags(struct gpio_desc *desc, const char *con_id, 4354 4351 unsigned long lflags, enum gpiod_flags dflags) 4355 4352 { 4353 + const char *name = function_name_or_default(con_id); 4356 4354 int ret; 4357 4355 4358 4356 if (lflags & GPIO_ACTIVE_LOW) ··· 4397 4393 4398 4394 /* No particular flag request, return here... */ 4399 4395 if (!(dflags & GPIOD_FLAGS_BIT_DIR_SET)) { 4400 - gpiod_dbg(desc, "no flags found for %s\n", con_id); 4396 + gpiod_dbg(desc, "no flags found for GPIO %s\n", name); 4401 4397 return 0; 4402 4398 } 4403 4399
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 4539 4539 if (r) 4540 4540 goto unprepare; 4541 4541 4542 + flush_delayed_work(&adev->gfx.gfx_off_delay_work); 4543 + 4542 4544 for (i = 0; i < adev->num_ip_blocks; i++) { 4543 4545 if (!adev->ip_blocks[i].status.valid) 4544 4546 continue;
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 2237 2237 { 2238 2238 switch (amdgpu_ip_version(adev, VCN_HWIP, 0)) { 2239 2239 case IP_VERSION(4, 0, 5): 2240 + case IP_VERSION(4, 0, 6): 2240 2241 if (amdgpu_umsch_mm & 0x1) { 2241 2242 amdgpu_device_ip_block_add(adev, &umsch_mm_v4_0_ip_block); 2242 2243 adev->enable_umsch_mm = true;
+29 -17
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
··· 524 524 { 525 525 struct amdgpu_ring *ring = file_inode(f)->i_private; 526 526 volatile u32 *mqd; 527 - int r; 527 + u32 *kbuf; 528 + int r, i; 528 529 uint32_t value, result; 529 530 530 531 if (*pos & 3 || size & 3) 531 532 return -EINVAL; 532 533 533 - result = 0; 534 + kbuf = kmalloc(ring->mqd_size, GFP_KERNEL); 535 + if (!kbuf) 536 + return -ENOMEM; 534 537 535 538 r = amdgpu_bo_reserve(ring->mqd_obj, false); 536 539 if (unlikely(r != 0)) 537 - return r; 540 + goto err_free; 538 541 539 542 r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&mqd); 540 - if (r) { 541 - amdgpu_bo_unreserve(ring->mqd_obj); 542 - return r; 543 - } 543 + if (r) 544 + goto err_unreserve; 544 545 546 + /* 547 + * Copy to local buffer to avoid put_user(), which might fault 548 + * and acquire mmap_sem, under reservation_ww_class_mutex. 549 + */ 550 + for (i = 0; i < ring->mqd_size/sizeof(u32); i++) 551 + kbuf[i] = mqd[i]; 552 + 553 + amdgpu_bo_kunmap(ring->mqd_obj); 554 + amdgpu_bo_unreserve(ring->mqd_obj); 555 + 556 + result = 0; 545 557 while (size) { 546 558 if (*pos >= ring->mqd_size) 547 - goto done; 559 + break; 548 560 549 - value = mqd[*pos/4]; 561 + value = kbuf[*pos/4]; 550 562 r = put_user(value, (uint32_t *)buf); 551 563 if (r) 552 - goto done; 564 + goto err_free; 553 565 buf += 4; 554 566 result += 4; 555 567 size -= 4; 556 568 *pos += 4; 557 569 } 558 570 559 - done: 560 - amdgpu_bo_kunmap(ring->mqd_obj); 561 - mqd = NULL; 562 - amdgpu_bo_unreserve(ring->mqd_obj); 563 - if (r) 564 - return r; 565 - 571 + kfree(kbuf); 566 572 return result; 573 + 574 + err_unreserve: 575 + amdgpu_bo_unreserve(ring->mqd_obj); 576 + err_free: 577 + kfree(kbuf); 578 + return r; 567 579 } 568 580 569 581 static const struct file_operations amdgpu_debugfs_mqd_fops = {
+10 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
··· 189 189 mqd->rptr_val = 0; 190 190 mqd->unmapped = 1; 191 191 192 + if (adev->vpe.collaborate_mode) 193 + memcpy(++mqd, test->mqd_data_cpu_addr, sizeof(struct MQD_INFO)); 194 + 192 195 qinfo->mqd_addr = test->mqd_data_gpu_addr; 193 196 qinfo->csa_addr = test->ctx_data_gpu_addr + 194 197 offsetof(struct umsch_mm_test_ctx_data, vpe_ctx_csa); 195 - qinfo->doorbell_offset_0 = (adev->doorbell_index.vpe_ring + 1) << 1; 198 + qinfo->doorbell_offset_0 = 0; 196 199 qinfo->doorbell_offset_1 = 0; 197 200 } 198 201 ··· 290 287 ring[5] = 0; 291 288 292 289 mqd->wptr_val = (6 << 2); 293 - // WDOORBELL32(adev->umsch_mm.agdb_index[CONTEXT_PRIORITY_LEVEL_NORMAL], mqd->wptr_val); 290 + if (adev->vpe.collaborate_mode) 291 + (++mqd)->wptr_val = (6 << 2); 292 + 293 + WDOORBELL32(adev->umsch_mm.agdb_index[CONTEXT_PRIORITY_LEVEL_NORMAL], mqd->wptr_val); 294 294 295 295 for (i = 0; i < adev->usec_timeout; i++) { 296 296 if (*fence == test_pattern) ··· 577 571 578 572 switch (amdgpu_ip_version(adev, VCN_HWIP, 0)) { 579 573 case IP_VERSION(4, 0, 5): 574 + case IP_VERSION(4, 0, 6): 580 575 fw_name = "amdgpu/umsch_mm_4_0_0.bin"; 581 576 break; 582 577 default: ··· 757 750 758 751 switch (amdgpu_ip_version(adev, VCN_HWIP, 0)) { 759 752 case IP_VERSION(4, 0, 5): 753 + case IP_VERSION(4, 0, 6): 760 754 umsch_mm_v4_0_set_funcs(&adev->umsch_mm); 761 755 break; 762 756 default:
+10 -10
drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.h
··· 33 33 UMSCH_SWIP_ENGINE_TYPE_MAX 34 34 }; 35 35 36 - enum UMSCH_SWIP_AFFINITY_TYPE { 37 - UMSCH_SWIP_AFFINITY_TYPE_ANY = 0, 38 - UMSCH_SWIP_AFFINITY_TYPE_VCN0 = 1, 39 - UMSCH_SWIP_AFFINITY_TYPE_VCN1 = 2, 40 - UMSCH_SWIP_AFFINITY_TYPE_MAX 41 - }; 42 - 43 36 enum UMSCH_CONTEXT_PRIORITY_LEVEL { 44 37 CONTEXT_PRIORITY_LEVEL_IDLE = 0, 45 38 CONTEXT_PRIORITY_LEVEL_NORMAL = 1, ··· 44 51 struct umsch_mm_set_resource_input { 45 52 uint32_t vmid_mask_mm_vcn; 46 53 uint32_t vmid_mask_mm_vpe; 54 + uint32_t collaboration_mask_vpe; 47 55 uint32_t logging_vmid; 48 56 uint32_t engine_mask; 49 57 union { 50 58 struct { 51 59 uint32_t disable_reset : 1; 52 60 uint32_t disable_umsch_mm_log : 1; 53 - uint32_t reserved : 30; 61 + uint32_t use_rs64mem_for_proc_ctx_csa : 1; 62 + uint32_t reserved : 29; 54 63 }; 55 64 uint32_t uint32_all; 56 65 }; ··· 73 78 uint32_t doorbell_offset_1; 74 79 enum UMSCH_SWIP_ENGINE_TYPE engine_type; 75 80 uint32_t affinity; 76 - enum UMSCH_SWIP_AFFINITY_TYPE affinity_type; 77 81 uint64_t mqd_addr; 78 82 uint64_t h_context; 79 83 uint64_t h_queue; 80 84 uint32_t vm_context_cntl; 81 85 86 + uint32_t process_csa_array_index; 87 + uint32_t context_csa_array_index; 88 + 82 89 struct { 83 90 uint32_t is_context_suspended : 1; 84 - uint32_t reserved : 31; 91 + uint32_t collaboration_mode : 1; 92 + uint32_t reserved : 30; 85 93 }; 86 94 }; 87 95 ··· 92 94 uint32_t doorbell_offset_0; 93 95 uint32_t doorbell_offset_1; 94 96 uint64_t context_csa_addr; 97 + uint32_t context_csa_array_index; 95 98 }; 96 99 97 100 struct MQD_INFO { ··· 102 103 uint32_t wptr_val; 103 104 uint32_t rptr_val; 104 105 uint32_t unmapped; 106 + uint32_t vmid; 105 107 }; 106 108 107 109 struct amdgpu_umsch_mm;
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c
··· 396 396 struct amdgpu_vpe *vpe = &adev->vpe; 397 397 int ret; 398 398 399 + /* Power on VPE */ 400 + ret = amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VPE, 401 + AMD_PG_STATE_UNGATE); 402 + if (ret) 403 + return ret; 404 + 399 405 ret = vpe_load_microcode(vpe); 400 406 if (ret) 401 407 return ret;
+5 -2
drivers/gpu/drm/amd/amdgpu/umsch_mm_v4_0.c
··· 60 60 61 61 umsch->cmd_buf_curr_ptr = umsch->cmd_buf_ptr; 62 62 63 - if (amdgpu_ip_version(adev, VCN_HWIP, 0) == IP_VERSION(4, 0, 5)) { 63 + if (amdgpu_ip_version(adev, VCN_HWIP, 0) >= IP_VERSION(4, 0, 5)) { 64 64 WREG32_SOC15(VCN, 0, regUVD_IPX_DLDO_CONFIG, 65 65 1 << UVD_IPX_DLDO_CONFIG__ONO0_PWR_CONFIG__SHIFT); 66 66 SOC15_WAIT_ON_RREG(VCN, 0, regUVD_IPX_DLDO_STATUS, ··· 248 248 data = REG_SET_FIELD(data, VCN_UMSCH_RB_DB_CTRL, EN, 0); 249 249 WREG32_SOC15(VCN, 0, regVCN_UMSCH_RB_DB_CTRL, data); 250 250 251 - if (amdgpu_ip_version(adev, VCN_HWIP, 0) == IP_VERSION(4, 0, 5)) { 251 + if (amdgpu_ip_version(adev, VCN_HWIP, 0) >= IP_VERSION(4, 0, 5)) { 252 252 WREG32_SOC15(VCN, 0, regUVD_IPX_DLDO_CONFIG, 253 253 2 << UVD_IPX_DLDO_CONFIG__ONO0_PWR_CONFIG__SHIFT); 254 254 SOC15_WAIT_ON_RREG(VCN, 0, regUVD_IPX_DLDO_STATUS, ··· 271 271 272 272 set_hw_resources.vmid_mask_mm_vcn = umsch->vmid_mask_mm_vcn; 273 273 set_hw_resources.vmid_mask_mm_vpe = umsch->vmid_mask_mm_vpe; 274 + set_hw_resources.collaboration_mask_vpe = 275 + adev->vpe.collaborate_mode ? 0x3 : 0x0; 274 276 set_hw_resources.engine_mask = umsch->engine_mask; 275 277 276 278 set_hw_resources.vcn0_hqd_mask[0] = umsch->vcn0_hqd_mask; ··· 348 346 add_queue.h_queue = input_ptr->h_queue; 349 347 add_queue.vm_context_cntl = input_ptr->vm_context_cntl; 350 348 add_queue.is_context_suspended = input_ptr->is_context_suspended; 349 + add_queue.collaboration_mode = adev->vpe.collaborate_mode ? 1 : 0; 351 350 352 351 add_queue.api_status.api_completion_fence_addr = umsch->ring.fence_drv.gpu_addr; 353 352 add_queue.api_status.api_completion_fence_value = ++umsch->ring.fence_drv.sync_seq;
+2 -2
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
··· 1523 1523 1524 1524 /* Find a KFD GPU device that supports the get_dmabuf_info query */ 1525 1525 for (i = 0; kfd_topology_enum_kfd_devices(i, &dev) == 0; i++) 1526 - if (dev) 1526 + if (dev && !kfd_devcgroup_check_permission(dev)) 1527 1527 break; 1528 1528 if (!dev) 1529 1529 return -EINVAL; ··· 1545 1545 if (xcp_id >= 0) 1546 1546 args->gpu_id = dmabuf_adev->kfd.dev->nodes[xcp_id]->id; 1547 1547 else 1548 - args->gpu_id = dmabuf_adev->kfd.dev->nodes[0]->id; 1548 + args->gpu_id = dev->id; 1549 1549 args->flags = flags; 1550 1550 1551 1551 /* Copy metadata buffer to user mode */
+2 -1
drivers/gpu/drm/amd/amdkfd/kfd_int_process_v10.c
··· 339 339 break; 340 340 } 341 341 kfd_signal_event_interrupt(pasid, context_id0 & 0x7fffff, 23); 342 - } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE) { 342 + } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE && 343 + KFD_DBG_EC_TYPE_IS_PACKET(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0))) { 343 344 kfd_set_dbg_ev_from_interrupt(dev, pasid, 344 345 KFD_DEBUG_DOORBELL_ID(context_id0), 345 346 KFD_EC_MASK(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0)),
+2 -1
drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
··· 328 328 /* CP */ 329 329 if (source_id == SOC15_INTSRC_CP_END_OF_PIPE) 330 330 kfd_signal_event_interrupt(pasid, context_id0, 32); 331 - else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE) 331 + else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE && 332 + KFD_DBG_EC_TYPE_IS_PACKET(KFD_CTXID0_CP_BAD_OP_ECODE(context_id0))) 332 333 kfd_set_dbg_ev_from_interrupt(dev, pasid, 333 334 KFD_CTXID0_DOORBELL_ID(context_id0), 334 335 KFD_EC_MASK(KFD_CTXID0_CP_BAD_OP_ECODE(context_id0)),
+2 -1
drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
··· 388 388 break; 389 389 } 390 390 kfd_signal_event_interrupt(pasid, sq_int_data, 24); 391 - } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE) { 391 + } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE && 392 + KFD_DBG_EC_TYPE_IS_PACKET(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0))) { 392 393 kfd_set_dbg_ev_from_interrupt(dev, pasid, 393 394 KFD_DEBUG_DOORBELL_ID(context_id0), 394 395 KFD_EC_MASK(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0)),
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 1473 1473 1474 1474 static inline bool kfd_flush_tlb_after_unmap(struct kfd_dev *dev) 1475 1475 { 1476 - return KFD_GC_VERSION(dev) > IP_VERSION(9, 4, 2) || 1476 + return KFD_GC_VERSION(dev) >= IP_VERSION(9, 4, 2) || 1477 1477 (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) && dev->sdma_fw_version >= 18) || 1478 1478 KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 0); 1479 1479 }
+3 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 6305 6305 6306 6306 if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) 6307 6307 mod_build_hf_vsif_infopacket(stream, &stream->vsp_infopacket); 6308 - else if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT || 6309 - stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST || 6310 - stream->signal == SIGNAL_TYPE_EDP) { 6308 + 6309 + if (stream->link->psr_settings.psr_feature_enabled || stream->link->replay_settings.replay_feature_enabled) { 6311 6310 // 6312 6311 // should decide stream support vsc sdp colorimetry capability 6313 6312 // before building vsc info packet ··· 6322 6323 if (stream->out_transfer_func->tf == TRANSFER_FUNCTION_GAMMA22) 6323 6324 tf = TRANSFER_FUNC_GAMMA_22; 6324 6325 mod_build_vsc_infopacket(stream, &stream->vsc_infopacket, stream->output_color_space, tf); 6326 + aconnector->psr_skip_count = AMDGPU_DM_PSR_ENTRY_DELAY; 6325 6327 6326 - if (stream->link->psr_settings.psr_feature_enabled) 6327 - aconnector->psr_skip_count = AMDGPU_DM_PSR_ENTRY_DELAY; 6328 6328 } 6329 6329 finish: 6330 6330 dc_sink_release(sink);
+5 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
··· 141 141 * amdgpu_dm_psr_enable() - enable psr f/w 142 142 * @stream: stream state 143 143 * 144 - * Return: true if success 145 144 */ 146 - bool amdgpu_dm_psr_enable(struct dc_stream_state *stream) 145 + void amdgpu_dm_psr_enable(struct dc_stream_state *stream) 147 146 { 148 147 struct dc_link *link = stream->link; 149 148 unsigned int vsync_rate_hz = 0; ··· 189 190 if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1) 190 191 power_opt |= psr_power_opt_z10_static_screen; 191 192 192 - return dc_link_set_psr_allow_active(link, &psr_enable, false, false, &power_opt); 193 + dc_link_set_psr_allow_active(link, &psr_enable, false, false, &power_opt); 194 + 195 + if (link->ctx->dc->caps.ips_support) 196 + dc_allow_idle_optimizations(link->ctx->dc, true); 193 197 } 194 198 195 199 /*
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
··· 32 32 #define AMDGPU_DM_PSR_ENTRY_DELAY 5 33 33 34 34 void amdgpu_dm_set_psr_caps(struct dc_link *link); 35 - bool amdgpu_dm_psr_enable(struct dc_stream_state *stream); 35 + void amdgpu_dm_psr_enable(struct dc_stream_state *stream); 36 36 bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream); 37 37 bool amdgpu_dm_psr_disable(struct dc_stream_state *stream); 38 38 bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
+6 -1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
··· 73 73 #define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L 74 74 #define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV_MASK 0x000F0000L 75 75 76 + #define SMU_VER_THRESHOLD 0x5D4A00 //93.74.0 77 + 76 78 #define REG(reg_name) \ 77 79 (ctx->clk_reg_offsets[reg ## reg_name ## _BASE_IDX] + reg ## reg_name) 78 80 ··· 413 411 414 412 static void init_clk_states(struct clk_mgr *clk_mgr) 415 413 { 414 + struct clk_mgr_internal *clk_mgr_int = TO_CLK_MGR_INTERNAL(clk_mgr); 416 415 uint32_t ref_dtbclk = clk_mgr->clks.ref_dtbclk_khz; 417 416 memset(&(clk_mgr->clks), 0, sizeof(struct dc_clocks)); 418 417 418 + if (clk_mgr_int->smu_ver >= SMU_VER_THRESHOLD) 419 + clk_mgr->clks.dtbclk_en = true; // request DTBCLK disable on first commit 419 420 clk_mgr->clks.ref_dtbclk_khz = ref_dtbclk; // restore ref_dtbclk 420 421 clk_mgr->clks.p_state_change_support = true; 421 422 clk_mgr->clks.prev_p_state_change_support = true; ··· 714 709 clock_table->NumFclkLevelsEnabled; 715 710 max_fclk = find_max_clk_value(clock_table->FclkClocks_Freq, num_fclk); 716 711 717 - num_dcfclk = (clock_table->NumFclkLevelsEnabled > NUM_DCFCLK_DPM_LEVELS) ? NUM_DCFCLK_DPM_LEVELS : 712 + num_dcfclk = (clock_table->NumDcfClkLevelsEnabled > NUM_DCFCLK_DPM_LEVELS) ? NUM_DCFCLK_DPM_LEVELS : 718 713 clock_table->NumDcfClkLevelsEnabled; 719 714 for (i = 0; i < num_dcfclk; i++) { 720 715 int j;
+4 -2
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 3024 3024 scratch->blend_tf[i] = *status->plane_states[i]->blend_tf; 3025 3025 } 3026 3026 scratch->stream_state = *stream; 3027 - scratch->out_transfer_func = *stream->out_transfer_func; 3027 + if (stream->out_transfer_func) 3028 + scratch->out_transfer_func = *stream->out_transfer_func; 3028 3029 } 3029 3030 3030 3031 static void restore_planes_and_stream_state( ··· 3047 3046 *status->plane_states[i]->blend_tf = scratch->blend_tf[i]; 3048 3047 } 3049 3048 *stream = scratch->stream_state; 3050 - *stream->out_transfer_func = scratch->out_transfer_func; 3049 + if (stream->out_transfer_func) 3050 + *stream->out_transfer_func = scratch->out_transfer_func; 3051 3051 } 3052 3052 3053 3053 static bool update_planes_and_stream_state(struct dc *dc,
+1 -1
drivers/gpu/drm/amd/display/dc/dce110/Makefile
··· 23 23 # Makefile for the 'controller' sub-component of DAL. 24 24 # It provides the control and status of HW CRTC block. 25 25 26 - CFLAGS_$(AMDDALPATH)/dc/dce110/dce110_resource.o = $(call cc-disable-warning, override-init) 26 + CFLAGS_$(AMDDALPATH)/dc/dce110/dce110_resource.o = -Wno-override-init 27 27 28 28 DCE110 = dce110_timing_generator.o \ 29 29 dce110_compressor.o dce110_opp_regamma_v.o \
+1 -1
drivers/gpu/drm/amd/display/dc/dce112/Makefile
··· 23 23 # Makefile for the 'controller' sub-component of DAL. 24 24 # It provides the control and status of HW CRTC block. 25 25 26 - CFLAGS_$(AMDDALPATH)/dc/dce112/dce112_resource.o = $(call cc-disable-warning, override-init) 26 + CFLAGS_$(AMDDALPATH)/dc/dce112/dce112_resource.o = -Wno-override-init 27 27 28 28 DCE112 = dce112_compressor.o 29 29
+1 -1
drivers/gpu/drm/amd/display/dc/dce120/Makefile
··· 24 24 # It provides the control and status of HW CRTC block. 25 25 26 26 27 - CFLAGS_$(AMDDALPATH)/dc/dce120/dce120_resource.o = $(call cc-disable-warning, override-init) 27 + CFLAGS_$(AMDDALPATH)/dc/dce120/dce120_resource.o = -Wno-override-init 28 28 29 29 DCE120 = dce120_timing_generator.o 30 30
+1 -1
drivers/gpu/drm/amd/display/dc/dce60/Makefile
··· 23 23 # Makefile for the 'controller' sub-component of DAL. 24 24 # It provides the control and status of HW CRTC block. 25 25 26 - CFLAGS_$(AMDDALPATH)/dc/dce60/dce60_resource.o = $(call cc-disable-warning, override-init) 26 + CFLAGS_$(AMDDALPATH)/dc/dce60/dce60_resource.o = -Wno-override-init 27 27 28 28 DCE60 = dce60_timing_generator.o dce60_hw_sequencer.o \ 29 29 dce60_resource.o
+1 -1
drivers/gpu/drm/amd/display/dc/dce80/Makefile
··· 23 23 # Makefile for the 'controller' sub-component of DAL. 24 24 # It provides the control and status of HW CRTC block. 25 25 26 - CFLAGS_$(AMDDALPATH)/dc/dce80/dce80_resource.o = $(call cc-disable-warning, override-init) 26 + CFLAGS_$(AMDDALPATH)/dc/dce80/dce80_resource.o = -Wno-override-init 27 27 28 28 DCE80 = dce80_timing_generator.o 29 29
+32 -22
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
··· 44 44 #define NUM_ELEMENTS(a) (sizeof(a) / sizeof((a)[0])) 45 45 46 46 47 + void mpc3_mpc_init(struct mpc *mpc) 48 + { 49 + struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); 50 + int opp_id; 51 + 52 + mpc1_mpc_init(mpc); 53 + 54 + for (opp_id = 0; opp_id < MAX_OPP; opp_id++) { 55 + if (REG(MUX[opp_id])) 56 + /* disable mpc out rate and flow control */ 57 + REG_UPDATE_2(MUX[opp_id], MPC_OUT_RATE_CONTROL_DISABLE, 58 + 1, MPC_OUT_FLOW_CONTROL_COUNT, 0); 59 + } 60 + } 61 + 62 + void mpc3_mpc_init_single_inst(struct mpc *mpc, unsigned int mpcc_id) 63 + { 64 + struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); 65 + 66 + mpc1_mpc_init_single_inst(mpc, mpcc_id); 67 + 68 + /* assuming mpc out mux is connected to opp with the same index at this 69 + * point in time (e.g. transitioning from vbios to driver) 70 + */ 71 + if (mpcc_id < MAX_OPP && REG(MUX[mpcc_id])) 72 + /* disable mpc out rate and flow control */ 73 + REG_UPDATE_2(MUX[mpcc_id], MPC_OUT_RATE_CONTROL_DISABLE, 74 + 1, MPC_OUT_FLOW_CONTROL_COUNT, 0); 75 + } 76 + 47 77 bool mpc3_is_dwb_idle( 48 78 struct mpc *mpc, 49 79 int dwb_id) ··· 108 78 109 79 REG_SET(DWB_MUX[dwb_id], 0, 110 80 MPC_DWB0_MUX, 0xf); 111 - } 112 - 113 - void mpc3_set_out_rate_control( 114 - struct mpc *mpc, 115 - int opp_id, 116 - bool enable, 117 - bool rate_2x_mode, 118 - struct mpc_dwb_flow_control *flow_control) 119 - { 120 - struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); 121 - 122 - REG_UPDATE_2(MUX[opp_id], 123 - MPC_OUT_RATE_CONTROL_DISABLE, !enable, 124 - MPC_OUT_RATE_CONTROL, rate_2x_mode); 125 - 126 - if (flow_control) 127 - REG_UPDATE_2(MUX[opp_id], 128 - MPC_OUT_FLOW_CONTROL_MODE, flow_control->flow_ctrl_mode, 129 - MPC_OUT_FLOW_CONTROL_COUNT, flow_control->flow_ctrl_cnt1); 130 81 } 131 82 132 83 enum dc_lut_mode mpc3_get_ogam_current(struct mpc *mpc, int mpcc_id) ··· 1501 1490 .read_mpcc_state = mpc3_read_mpcc_state, 1502 1491 .insert_plane = mpc1_insert_plane, 1503 1492 .remove_mpcc = mpc1_remove_mpcc, 1504 - .mpc_init = mpc1_mpc_init, 1505 - .mpc_init_single_inst = mpc1_mpc_init_single_inst, 1493 + .mpc_init = mpc3_mpc_init, 1494 + .mpc_init_single_inst = mpc3_mpc_init_single_inst, 1506 1495 .update_blending = mpc2_update_blending, 1507 1496 .cursor_lock = mpc1_cursor_lock, 1508 1497 .get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp, ··· 1519 1508 .set_dwb_mux = mpc3_set_dwb_mux, 1520 1509 .disable_dwb_mux = mpc3_disable_dwb_mux, 1521 1510 .is_dwb_idle = mpc3_is_dwb_idle, 1522 - .set_out_rate_control = mpc3_set_out_rate_control, 1523 1511 .set_gamut_remap = mpc3_set_gamut_remap, 1524 1512 .program_shaper = mpc3_program_shaper, 1525 1513 .acquire_rmu = mpcc3_acquire_rmu,
+7 -7
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
··· 1007 1007 int num_mpcc, 1008 1008 int num_rmu); 1009 1009 1010 + void mpc3_mpc_init( 1011 + struct mpc *mpc); 1012 + 1013 + void mpc3_mpc_init_single_inst( 1014 + struct mpc *mpc, 1015 + unsigned int mpcc_id); 1016 + 1010 1017 bool mpc3_program_shaper( 1011 1018 struct mpc *mpc, 1012 1019 const struct pwl_params *params, ··· 1084 1077 bool mpc3_is_dwb_idle( 1085 1078 struct mpc *mpc, 1086 1079 int dwb_id); 1087 - 1088 - void mpc3_set_out_rate_control( 1089 - struct mpc *mpc, 1090 - int opp_id, 1091 - bool enable, 1092 - bool rate_2x_mode, 1093 - struct mpc_dwb_flow_control *flow_control); 1094 1080 1095 1081 void mpc3_power_on_ogam_lut( 1096 1082 struct mpc *mpc, int mpcc_id,
+2 -3
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
··· 47 47 struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); 48 48 int mpcc_id; 49 49 50 - mpc1_mpc_init(mpc); 50 + mpc3_mpc_init(mpc); 51 51 52 52 if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc) { 53 53 if (mpc30->mpc_mask->MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE && mpc30->mpc_mask->MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE) { ··· 991 991 .insert_plane = mpc1_insert_plane, 992 992 .remove_mpcc = mpc1_remove_mpcc, 993 993 .mpc_init = mpc32_mpc_init, 994 - .mpc_init_single_inst = mpc1_mpc_init_single_inst, 994 + .mpc_init_single_inst = mpc3_mpc_init_single_inst, 995 995 .update_blending = mpc2_update_blending, 996 996 .cursor_lock = mpc1_cursor_lock, 997 997 .get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp, ··· 1008 1008 .set_dwb_mux = mpc3_set_dwb_mux, 1009 1009 .disable_dwb_mux = mpc3_disable_dwb_mux, 1010 1010 .is_dwb_idle = mpc3_is_dwb_idle, 1011 - .set_out_rate_control = mpc3_set_out_rate_control, 1012 1011 .set_gamut_remap = mpc3_set_gamut_remap, 1013 1012 .program_shaper = mpc32_program_shaper, 1014 1013 .program_3dlut = mpc32_program_3dlut,
+2 -2
drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
··· 166 166 .num_states = 5, 167 167 .sr_exit_time_us = 28.0, 168 168 .sr_enter_plus_exit_time_us = 30.0, 169 - .sr_exit_z8_time_us = 210.0, 170 - .sr_enter_plus_exit_z8_time_us = 320.0, 169 + .sr_exit_z8_time_us = 250.0, 170 + .sr_enter_plus_exit_z8_time_us = 350.0, 171 171 .fclk_change_latency_us = 24.0, 172 172 .usr_retraining_latency_us = 2, 173 173 .writeback_latency_us = 12.0,
+84 -19
drivers/gpu/drm/amd/display/dc/dml/dcn351/dcn351_fpu.c
··· 98 98 .clock_limits = { 99 99 { 100 100 .state = 0, 101 - .dispclk_mhz = 1200.0, 102 - .dppclk_mhz = 1200.0, 101 + .dcfclk_mhz = 400.0, 102 + .fabricclk_mhz = 400.0, 103 + .socclk_mhz = 600.0, 104 + .dram_speed_mts = 3200.0, 105 + .dispclk_mhz = 600.0, 106 + .dppclk_mhz = 600.0, 103 107 .phyclk_mhz = 600.0, 104 108 .phyclk_d18_mhz = 667.0, 105 - .dscclk_mhz = 186.0, 109 + .dscclk_mhz = 200.0, 106 110 .dtbclk_mhz = 600.0, 107 111 }, 108 112 { 109 113 .state = 1, 110 - .dispclk_mhz = 1200.0, 111 - .dppclk_mhz = 1200.0, 114 + .dcfclk_mhz = 600.0, 115 + .fabricclk_mhz = 1000.0, 116 + .socclk_mhz = 733.0, 117 + .dram_speed_mts = 6400.0, 118 + .dispclk_mhz = 800.0, 119 + .dppclk_mhz = 800.0, 112 120 .phyclk_mhz = 810.0, 113 121 .phyclk_d18_mhz = 667.0, 114 - .dscclk_mhz = 209.0, 122 + .dscclk_mhz = 266.7, 115 123 .dtbclk_mhz = 600.0, 116 124 }, 117 125 { 118 126 .state = 2, 119 - .dispclk_mhz = 1200.0, 120 - .dppclk_mhz = 1200.0, 127 + .dcfclk_mhz = 738.0, 128 + .fabricclk_mhz = 1200.0, 129 + .socclk_mhz = 880.0, 130 + .dram_speed_mts = 7500.0, 131 + .dispclk_mhz = 800.0, 132 + .dppclk_mhz = 800.0, 121 133 .phyclk_mhz = 810.0, 122 134 .phyclk_d18_mhz = 667.0, 123 - .dscclk_mhz = 209.0, 135 + .dscclk_mhz = 266.7, 124 136 .dtbclk_mhz = 600.0, 125 137 }, 126 138 { 127 139 .state = 3, 128 - .dispclk_mhz = 1200.0, 129 - .dppclk_mhz = 1200.0, 140 + .dcfclk_mhz = 800.0, 141 + .fabricclk_mhz = 1400.0, 142 + .socclk_mhz = 978.0, 143 + .dram_speed_mts = 7500.0, 144 + .dispclk_mhz = 960.0, 145 + .dppclk_mhz = 960.0, 130 146 .phyclk_mhz = 810.0, 131 147 .phyclk_d18_mhz = 667.0, 132 - .dscclk_mhz = 371.0, 148 + .dscclk_mhz = 320.0, 133 149 .dtbclk_mhz = 600.0, 134 150 }, 135 151 { 136 152 .state = 4, 153 + .dcfclk_mhz = 873.0, 154 + .fabricclk_mhz = 1600.0, 155 + .socclk_mhz = 1100.0, 156 + .dram_speed_mts = 8533.0, 157 + .dispclk_mhz = 1066.7, 158 + .dppclk_mhz = 1066.7, 159 + .phyclk_mhz = 810.0, 160 + .phyclk_d18_mhz = 667.0, 161 + .dscclk_mhz = 355.6, 162 + .dtbclk_mhz = 600.0, 163 + }, 164 + { 165 + .state = 5, 166 + .dcfclk_mhz = 960.0, 167 + .fabricclk_mhz = 1700.0, 168 + .socclk_mhz = 1257.0, 169 + .dram_speed_mts = 8533.0, 137 170 .dispclk_mhz = 1200.0, 138 171 .dppclk_mhz = 1200.0, 139 172 .phyclk_mhz = 810.0, 140 173 .phyclk_d18_mhz = 667.0, 141 - .dscclk_mhz = 417.0, 174 + .dscclk_mhz = 400.0, 175 + .dtbclk_mhz = 600.0, 176 + }, 177 + { 178 + .state = 6, 179 + .dcfclk_mhz = 1067.0, 180 + .fabricclk_mhz = 1850.0, 181 + .socclk_mhz = 1257.0, 182 + .dram_speed_mts = 8533.0, 183 + .dispclk_mhz = 1371.4, 184 + .dppclk_mhz = 1371.4, 185 + .phyclk_mhz = 810.0, 186 + .phyclk_d18_mhz = 667.0, 187 + .dscclk_mhz = 457.1, 188 + .dtbclk_mhz = 600.0, 189 + }, 190 + { 191 + .state = 7, 192 + .dcfclk_mhz = 1200.0, 193 + .fabricclk_mhz = 2000.0, 194 + .socclk_mhz = 1467.0, 195 + .dram_speed_mts = 8533.0, 196 + .dispclk_mhz = 1600.0, 197 + .dppclk_mhz = 1600.0, 198 + .phyclk_mhz = 810.0, 199 + .phyclk_d18_mhz = 667.0, 200 + .dscclk_mhz = 533.3, 142 201 .dtbclk_mhz = 600.0, 143 202 }, 144 203 }, 145 - .num_states = 5, 204 + .num_states = 8, 146 205 .sr_exit_time_us = 28.0, 147 206 .sr_enter_plus_exit_time_us = 30.0, 148 - .sr_exit_z8_time_us = 210.0, 149 - .sr_enter_plus_exit_z8_time_us = 320.0, 207 + .sr_exit_z8_time_us = 250.0, 208 + .sr_enter_plus_exit_z8_time_us = 350.0, 150 209 .fclk_change_latency_us = 24.0, 151 210 .usr_retraining_latency_us = 2, 152 211 .writeback_latency_us = 12.0, ··· 236 177 .do_urgent_latency_adjustment = 0, 237 178 .urgent_latency_adjustment_fabric_clock_component_us = 0, 238 179 .urgent_latency_adjustment_fabric_clock_reference_mhz = 0, 180 + .num_chans = 4, 181 + .dram_clock_change_latency_us = 11.72, 182 + .dispclk_dppclk_vco_speed_mhz = 2400.0, 239 183 }; 240 184 241 185 /* ··· 402 340 clock_limits[i].socclk_mhz; 403 341 dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].memclk_mhz = 404 342 clk_table->entries[i].memclk_mhz * clk_table->entries[i].wck_ratio; 343 + dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz = 344 + clock_limits[i].dtbclk_mhz; 405 345 dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_dcfclk_levels = 406 346 clk_table->num_entries; 407 347 dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_fclk_levels = ··· 415 351 dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_socclk_levels = 416 352 clk_table->num_entries; 417 353 dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_memclk_levels = 354 + clk_table->num_entries; 355 + dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_dtbclk_levels = 418 356 clk_table->num_entries; 419 357 } 420 358 } ··· 617 551 if (context->res_ctx.pipe_ctx[i].plane_state) 618 552 plane_count++; 619 553 } 554 + 620 555 /*dcn351 does not support z9/z10*/ 621 556 if (context->stream_count == 0 || plane_count == 0) { 622 557 support = DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY; ··· 631 564 dc->debug.minimum_z8_residency_time > 0 ? dc->debug.minimum_z8_residency_time : 1000; 632 565 bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > (double)minmum_z8_residency; 633 566 634 - 635 567 /*for psr1/psr-su, we allow z8 and z10 based on latency, for replay with IPS enabled, it will enter ips2*/ 636 - if (is_pwrseq0 && (is_psr || is_replay)) 568 + if (is_pwrseq0 && (is_psr || is_replay)) 637 569 support = allow_z8 ? allow_z8 : DCN_ZSTATE_SUPPORT_DISALLOW; 638 - 639 570 context->bw_ctx.bw.dcn.clk.zstate_support = support; 640 571 } 641 572 }
+1 -5
drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
··· 228 228 break; 229 229 230 230 case dml_project_dcn35: 231 + case dml_project_dcn351: 231 232 out->num_chans = 4; 232 233 out->round_trip_ping_latency_dcfclk_cycles = 106; 233 234 out->smn_latency_us = 2; 234 235 out->dispclk_dppclk_vco_speed_mhz = 3600; 235 236 break; 236 237 237 - case dml_project_dcn351: 238 - out->num_chans = 16; 239 - out->round_trip_ping_latency_dcfclk_cycles = 1100; 240 - out->smn_latency_us = 2; 241 - break; 242 238 } 243 239 /* ---Overrides if available--- */ 244 240 if (dml2->config.bbox_overrides.dram_num_chan)
+2 -1
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1185 1185 if (dccg) { 1186 1186 dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst); 1187 1187 dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst, dp_hpo_inst); 1188 - dccg->funcs->set_dtbclk_dto(dccg, &dto_params); 1188 + if (dccg && dccg->funcs->set_dtbclk_dto) 1189 + dccg->funcs->set_dtbclk_dto(dccg, &dto_params); 1189 1190 } 1190 1191 } else if (dccg && dccg->funcs->disable_symclk_se) { 1191 1192 dccg->funcs->disable_symclk_se(dccg, stream_enc->stream_enc_inst,
-41
drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
··· 69 69 #define FN(reg_name, field_name) \ 70 70 hws->shifts->field_name, hws->masks->field_name 71 71 72 - static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream, 73 - int opp_cnt) 74 - { 75 - bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing); 76 - int flow_ctrl_cnt; 77 - 78 - if (opp_cnt >= 2) 79 - hblank_halved = true; 80 - 81 - flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable - 82 - stream->timing.h_border_left - 83 - stream->timing.h_border_right; 84 - 85 - if (hblank_halved) 86 - flow_ctrl_cnt /= 2; 87 - 88 - /* ODM combine 4:1 case */ 89 - if (opp_cnt == 4) 90 - flow_ctrl_cnt /= 2; 91 - 92 - return flow_ctrl_cnt; 93 - } 94 - 95 72 static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable) 96 73 { 97 74 struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc; ··· 160 183 struct pipe_ctx *odm_pipe; 161 184 int opp_cnt = 0; 162 185 int opp_inst[MAX_PIPES] = {0}; 163 - bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing)); 164 - struct mpc_dwb_flow_control flow_control; 165 - struct mpc *mpc = dc->res_pool->mpc; 166 - int i; 167 186 168 187 opp_cnt = get_odm_config(pipe_ctx, opp_inst); 169 188 ··· 171 198 else 172 199 pipe_ctx->stream_res.tg->funcs->set_odm_bypass( 173 200 pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing); 174 - 175 - rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1; 176 - flow_control.flow_ctrl_mode = 0; 177 - flow_control.flow_ctrl_cnt0 = 0x80; 178 - flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt); 179 - if (mpc->funcs->set_out_rate_control) { 180 - for (i = 0; i < opp_cnt; ++i) { 181 - mpc->funcs->set_out_rate_control( 182 - mpc, opp_inst[i], 183 - true, 184 - rate_control_2x_pclk, 185 - &flow_control); 186 - } 187 - } 188 201 189 202 for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) { 190 203 odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
-41
drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
··· 966 966 } 967 967 } 968 968 969 - static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream, 970 - int opp_cnt) 971 - { 972 - bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing); 973 - int flow_ctrl_cnt; 974 - 975 - if (opp_cnt >= 2) 976 - hblank_halved = true; 977 - 978 - flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable - 979 - stream->timing.h_border_left - 980 - stream->timing.h_border_right; 981 - 982 - if (hblank_halved) 983 - flow_ctrl_cnt /= 2; 984 - 985 - /* ODM combine 4:1 case */ 986 - if (opp_cnt == 4) 987 - flow_ctrl_cnt /= 2; 988 - 989 - return flow_ctrl_cnt; 990 - } 991 - 992 969 static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable) 993 970 { 994 971 struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc; ··· 1080 1103 struct pipe_ctx *odm_pipe; 1081 1104 int opp_cnt = 0; 1082 1105 int opp_inst[MAX_PIPES] = {0}; 1083 - bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing)); 1084 - struct mpc_dwb_flow_control flow_control; 1085 - struct mpc *mpc = dc->res_pool->mpc; 1086 - int i; 1087 1106 1088 1107 opp_cnt = get_odm_config(pipe_ctx, opp_inst); 1089 1108 ··· 1091 1118 else 1092 1119 pipe_ctx->stream_res.tg->funcs->set_odm_bypass( 1093 1120 pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing); 1094 - 1095 - rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1; 1096 - flow_control.flow_ctrl_mode = 0; 1097 - flow_control.flow_ctrl_cnt0 = 0x80; 1098 - flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt); 1099 - if (mpc->funcs->set_out_rate_control) { 1100 - for (i = 0; i < opp_cnt; ++i) { 1101 - mpc->funcs->set_out_rate_control( 1102 - mpc, opp_inst[i], 1103 - true, 1104 - rate_control_2x_pclk, 1105 - &flow_control); 1106 - } 1107 - } 1108 1121 1109 1122 for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) { 1110 1123 odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
-41
drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
··· 358 358 } 359 359 } 360 360 361 - static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream, 362 - int opp_cnt) 363 - { 364 - bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing); 365 - int flow_ctrl_cnt; 366 - 367 - if (opp_cnt >= 2) 368 - hblank_halved = true; 369 - 370 - flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable - 371 - stream->timing.h_border_left - 372 - stream->timing.h_border_right; 373 - 374 - if (hblank_halved) 375 - flow_ctrl_cnt /= 2; 376 - 377 - /* ODM combine 4:1 case */ 378 - if (opp_cnt == 4) 379 - flow_ctrl_cnt /= 2; 380 - 381 - return flow_ctrl_cnt; 382 - } 383 - 384 361 static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable) 385 362 { 386 363 struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc; ··· 451 474 struct pipe_ctx *odm_pipe; 452 475 int opp_cnt = 0; 453 476 int opp_inst[MAX_PIPES] = {0}; 454 - bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing)); 455 - struct mpc_dwb_flow_control flow_control; 456 - struct mpc *mpc = dc->res_pool->mpc; 457 - int i; 458 477 459 478 opp_cnt = get_odm_config(pipe_ctx, opp_inst); 460 479 ··· 462 489 else 463 490 pipe_ctx->stream_res.tg->funcs->set_odm_bypass( 464 491 pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing); 465 - 466 - rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1; 467 - flow_control.flow_ctrl_mode = 0; 468 - flow_control.flow_ctrl_cnt0 = 0x80; 469 - flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt); 470 - if (mpc->funcs->set_out_rate_control) { 471 - for (i = 0; i < opp_cnt; ++i) { 472 - mpc->funcs->set_out_rate_control( 473 - mpc, opp_inst[i], 474 - true, 475 - rate_control_2x_pclk, 476 - &flow_control); 477 - } 478 - } 479 492 480 493 for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) { 481 494 odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
+1 -1
drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
··· 67 67 .prepare_bandwidth = dcn35_prepare_bandwidth, 68 68 .optimize_bandwidth = dcn35_optimize_bandwidth, 69 69 .update_bandwidth = dcn20_update_bandwidth, 70 - .set_drr = dcn10_set_drr, 70 + .set_drr = dcn35_set_drr, 71 71 .get_position = dcn10_get_position, 72 72 .set_static_screen_control = dcn35_set_static_screen_control, 73 73 .setup_stereo = dcn10_setup_stereo,
+8 -3
drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
··· 700 700 .disable_dcc = DCC_ENABLE, 701 701 .disable_dpp_power_gate = true, 702 702 .disable_hubp_power_gate = true, 703 + .disable_optc_power_gate = true, /*should the same as above two*/ 704 + .disable_hpo_power_gate = true, /*dmubfw force domain25 on*/ 703 705 .disable_clock_gate = false, 704 706 .disable_dsc_power_gate = true, 705 707 .vsr_support = true, ··· 744 742 }, 745 743 .seamless_boot_odm_combine = DML_FAIL_SOURCE_PIXEL_FORMAT, 746 744 .enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/ 745 + .minimum_z8_residency_time = 2100, 747 746 .using_dml2 = true, 748 747 .support_eDP1_5 = true, 749 748 .enable_hpo_pg_support = false, 750 749 .enable_legacy_fast_update = true, 751 750 .enable_single_display_2to1_odm_policy = true, 752 - .disable_idle_power_optimizations = true, 751 + .disable_idle_power_optimizations = false, 753 752 .dmcub_emulation = false, 754 753 .disable_boot_optimizations = false, 755 754 .disable_unbounded_requesting = false, ··· 761 758 .disable_z10 = true, 762 759 .ignore_pg = true, 763 760 .psp_disabled_wa = true, 764 - .ips2_eval_delay_us = 200, 765 - .ips2_entry_delay_us = 400 761 + .ips2_eval_delay_us = 2000, 762 + .ips2_entry_delay_us = 800, 763 + .disable_dmub_reallow_idle = true, 764 + .static_screen_wait_frames = 2, 766 765 }; 767 766 768 767 static const struct dc_panel_config panel_config_defaults = {
+5 -8
drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
··· 147 147 } 148 148 149 149 /* VSC packet set to 4 for PSR-SU, or 2 for PSR1 */ 150 - if (stream->link->psr_settings.psr_feature_enabled) { 151 - if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_SU_1) 152 - vsc_packet_revision = vsc_packet_rev4; 153 - else if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_1) 154 - vsc_packet_revision = vsc_packet_rev2; 155 - } 156 - 157 - if (stream->link->replay_settings.config.replay_supported) 150 + if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_SU_1) 158 151 vsc_packet_revision = vsc_packet_rev4; 152 + else if (stream->link->replay_settings.config.replay_supported) 153 + vsc_packet_revision = vsc_packet_rev4; 154 + else if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_1) 155 + vsc_packet_revision = vsc_packet_rev2; 159 156 160 157 /* Update to revision 5 for extended colorimetry support */ 161 158 if (stream->use_vsc_sdp_for_colorimetry)
+11 -2
drivers/gpu/drm/amd/include/umsch_mm_4_0_api_def.h
··· 234 234 uint32_t enable_level_process_quantum_check : 1; 235 235 uint32_t is_vcn0_enabled : 1; 236 236 uint32_t is_vcn1_enabled : 1; 237 - uint32_t reserved : 27; 237 + uint32_t use_rs64mem_for_proc_ctx_csa : 1; 238 + uint32_t reserved : 26; 238 239 }; 239 240 uint32_t uint32_all; 240 241 }; ··· 298 297 299 298 struct { 300 299 uint32_t is_context_suspended : 1; 301 - uint32_t reserved : 31; 300 + uint32_t collaboration_mode : 1; 301 + uint32_t reserved : 30; 302 302 }; 303 303 struct UMSCH_API_STATUS api_status; 304 + uint32_t process_csa_array_index; 305 + uint32_t context_csa_array_index; 304 306 }; 305 307 306 308 uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS]; ··· 318 314 uint64_t context_csa_addr; 319 315 320 316 struct UMSCH_API_STATUS api_status; 317 + uint32_t context_csa_array_index; 321 318 }; 322 319 323 320 uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS]; ··· 342 337 uint32_t suspend_fence_value; 343 338 344 339 struct UMSCH_API_STATUS api_status; 340 + uint32_t context_csa_array_index; 345 341 }; 346 342 347 343 uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS]; ··· 362 356 enum UMSCH_ENGINE_TYPE engine_type; 363 357 364 358 struct UMSCH_API_STATUS api_status; 359 + uint32_t context_csa_array_index; 365 360 }; 366 361 367 362 uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS]; ··· 411 404 union UMSCH_AFFINITY affinity; 412 405 uint64_t context_csa_addr; 413 406 struct UMSCH_API_STATUS api_status; 407 + uint32_t context_csa_array_index; 414 408 }; 415 409 416 410 uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS]; ··· 425 417 uint64_t context_quantum; 426 418 uint64_t context_csa_addr; 427 419 struct UMSCH_API_STATUS api_status; 420 + uint32_t context_csa_array_index; 428 421 }; 429 422 430 423 uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+14 -14
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v14_0_0_ppsmc.h
··· 54 54 #define PPSMC_MSG_TestMessage 0x01 ///< To check if PMFW is alive and responding. Requirement specified by PMFW team 55 55 #define PPSMC_MSG_GetPmfwVersion 0x02 ///< Get PMFW version 56 56 #define PPSMC_MSG_GetDriverIfVersion 0x03 ///< Get PMFW_DRIVER_IF version 57 - #define PPSMC_MSG_SPARE0 0x04 ///< SPARE 58 - #define PPSMC_MSG_SPARE1 0x05 ///< SPARE 59 - #define PPSMC_MSG_PowerDownVcn 0x06 ///< Power down VCN 60 - #define PPSMC_MSG_PowerUpVcn 0x07 ///< Power up VCN; VCN is power gated by default 61 - #define PPSMC_MSG_SetHardMinVcn 0x08 ///< For wireless display 57 + #define PPSMC_MSG_PowerDownVcn1 0x04 ///< Power down VCN1 58 + #define PPSMC_MSG_PowerUpVcn1 0x05 ///< Power up VCN1; VCN1 is power gated by default 59 + #define PPSMC_MSG_PowerDownVcn0 0x06 ///< Power down VCN0 60 + #define PPSMC_MSG_PowerUpVcn0 0x07 ///< Power up VCN0; VCN0 is power gated by default 61 + #define PPSMC_MSG_SetHardMinVcn0 0x08 ///< For wireless display 62 62 #define PPSMC_MSG_SetSoftMinGfxclk 0x09 ///< Set SoftMin for GFXCLK, argument is frequency in MHz 63 - #define PPSMC_MSG_SPARE2 0x0A ///< SPARE 64 - #define PPSMC_MSG_SPARE3 0x0B ///< SPARE 63 + #define PPSMC_MSG_SetHardMinVcn1 0x0A ///< For wireless display 64 + #define PPSMC_MSG_SetSoftMinVcn1 0x0B ///< Set soft min for VCN1 clocks (VCLK1 and DCLK1) 65 65 #define PPSMC_MSG_PrepareMp1ForUnload 0x0C ///< Prepare PMFW for GFX driver unload 66 66 #define PPSMC_MSG_SetDriverDramAddrHigh 0x0D ///< Set high 32 bits of DRAM address for Driver table transfer 67 67 #define PPSMC_MSG_SetDriverDramAddrLow 0x0E ///< Set low 32 bits of DRAM address for Driver table transfer ··· 71 71 #define PPSMC_MSG_GetEnabledSmuFeatures 0x12 ///< Get enabled features in PMFW 72 72 #define PPSMC_MSG_SetHardMinSocclkByFreq 0x13 ///< Set hard min for SOC CLK 73 73 #define PPSMC_MSG_SetSoftMinFclk 0x14 ///< Set hard min for FCLK 74 - #define PPSMC_MSG_SetSoftMinVcn 0x15 ///< Set soft min for VCN clocks (VCLK and DCLK) 74 + #define PPSMC_MSG_SetSoftMinVcn0 0x15 ///< Set soft min for VCN0 clocks (VCLK0 and DCLK0) 75 75 76 76 #define PPSMC_MSG_EnableGfxImu 0x16 ///< Enable GFX IMU 77 77 ··· 84 84 85 85 #define PPSMC_MSG_SetSoftMaxSocclkByFreq 0x1D ///< Set soft max for SOC CLK 86 86 #define PPSMC_MSG_SetSoftMaxFclkByFreq 0x1E ///< Set soft max for FCLK 87 - #define PPSMC_MSG_SetSoftMaxVcn 0x1F ///< Set soft max for VCN clocks (VCLK and DCLK) 87 + #define PPSMC_MSG_SetSoftMaxVcn0 0x1F ///< Set soft max for VCN0 clocks (VCLK0 and DCLK0) 88 88 #define PPSMC_MSG_spare_0x20 0x20 89 - #define PPSMC_MSG_PowerDownJpeg 0x21 ///< Power down Jpeg 90 - #define PPSMC_MSG_PowerUpJpeg 0x22 ///< Power up Jpeg; VCN is power gated by default 89 + #define PPSMC_MSG_PowerDownJpeg0 0x21 ///< Power down Jpeg of VCN0 90 + #define PPSMC_MSG_PowerUpJpeg0 0x22 ///< Power up Jpeg of VCN0; VCN0 is power gated by default 91 91 92 92 #define PPSMC_MSG_SetHardMinFclkByFreq 0x23 ///< Set hard min for FCLK 93 93 #define PPSMC_MSG_SetSoftMinSocclkByFreq 0x24 ///< Set soft min for SOC CLK 94 94 #define PPSMC_MSG_AllowZstates 0x25 ///< Inform PMFM of allowing Zstate entry, i.e. no Miracast activity 95 - #define PPSMC_MSG_Reserved 0x26 ///< Not used 96 - #define PPSMC_MSG_Reserved1 0x27 ///< Not used, previously PPSMC_MSG_RequestActiveWgp 97 - #define PPSMC_MSG_Reserved2 0x28 ///< Not used, previously PPSMC_MSG_QueryActiveWgp 95 + #define PPSMC_MSG_PowerDownJpeg1 0x26 ///< Power down Jpeg of VCN1 96 + #define PPSMC_MSG_PowerUpJpeg1 0x27 ///< Power up Jpeg of VCN1; VCN1 is power gated by default 97 + #define PPSMC_MSG_SetSoftMaxVcn1 0x28 ///< Set soft max for VCN1 clocks (VCLK1 and DCLK1) 98 98 #define PPSMC_MSG_PowerDownIspByTile 0x29 ///< ISP is power gated by default 99 99 #define PPSMC_MSG_PowerUpIspByTile 0x2A ///< This message is used to power up ISP tiles and enable the ISP DPM 100 100 #define PPSMC_MSG_SetHardMinIspiclkByFreq 0x2B ///< Set HardMin by frequency for ISPICLK
+10
drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
··· 115 115 __SMU_DUMMY_MAP(PowerDownVcn), \ 116 116 __SMU_DUMMY_MAP(PowerUpJpeg), \ 117 117 __SMU_DUMMY_MAP(PowerDownJpeg), \ 118 + __SMU_DUMMY_MAP(PowerUpJpeg0), \ 119 + __SMU_DUMMY_MAP(PowerDownJpeg0), \ 120 + __SMU_DUMMY_MAP(PowerUpJpeg1), \ 121 + __SMU_DUMMY_MAP(PowerDownJpeg1), \ 118 122 __SMU_DUMMY_MAP(BacoAudioD3PME), \ 119 123 __SMU_DUMMY_MAP(ArmD3), \ 120 124 __SMU_DUMMY_MAP(RunDcBtc), \ ··· 139 135 __SMU_DUMMY_MAP(PowerUpSdma), \ 140 136 __SMU_DUMMY_MAP(SetHardMinIspclkByFreq), \ 141 137 __SMU_DUMMY_MAP(SetHardMinVcn), \ 138 + __SMU_DUMMY_MAP(SetHardMinVcn0), \ 139 + __SMU_DUMMY_MAP(SetHardMinVcn1), \ 142 140 __SMU_DUMMY_MAP(SetAllowFclkSwitch), \ 143 141 __SMU_DUMMY_MAP(SetMinVideoGfxclkFreq), \ 144 142 __SMU_DUMMY_MAP(ActiveProcessNotify), \ ··· 156 150 __SMU_DUMMY_MAP(SetPhyclkVoltageByFreq), \ 157 151 __SMU_DUMMY_MAP(SetDppclkVoltageByFreq), \ 158 152 __SMU_DUMMY_MAP(SetSoftMinVcn), \ 153 + __SMU_DUMMY_MAP(SetSoftMinVcn0), \ 154 + __SMU_DUMMY_MAP(SetSoftMinVcn1), \ 159 155 __SMU_DUMMY_MAP(EnablePostCode), \ 160 156 __SMU_DUMMY_MAP(GetGfxclkFrequency), \ 161 157 __SMU_DUMMY_MAP(GetFclkFrequency), \ ··· 169 161 __SMU_DUMMY_MAP(SetSoftMaxSocclkByFreq), \ 170 162 __SMU_DUMMY_MAP(SetSoftMaxFclkByFreq), \ 171 163 __SMU_DUMMY_MAP(SetSoftMaxVcn), \ 164 + __SMU_DUMMY_MAP(SetSoftMaxVcn0), \ 165 + __SMU_DUMMY_MAP(SetSoftMaxVcn1), \ 172 166 __SMU_DUMMY_MAP(PowerGateMmHub), \ 173 167 __SMU_DUMMY_MAP(UpdatePmeRestore), \ 174 168 __SMU_DUMMY_MAP(GpuChangeState), \
+44 -6
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
··· 1402 1402 if (adev->vcn.harvest_config & (1 << i)) 1403 1403 continue; 1404 1404 1405 - ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1406 - SMU_MSG_PowerUpVcn : SMU_MSG_PowerDownVcn, 1407 - i << 16U, NULL); 1405 + if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 0) || 1406 + amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 1)) { 1407 + if (i == 0) 1408 + ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1409 + SMU_MSG_PowerUpVcn0 : SMU_MSG_PowerDownVcn0, 1410 + i << 16U, NULL); 1411 + else if (i == 1) 1412 + ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1413 + SMU_MSG_PowerUpVcn1 : SMU_MSG_PowerDownVcn1, 1414 + i << 16U, NULL); 1415 + } else { 1416 + ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1417 + SMU_MSG_PowerUpVcn : SMU_MSG_PowerDownVcn, 1418 + i << 16U, NULL); 1419 + } 1420 + 1408 1421 if (ret) 1409 1422 return ret; 1410 1423 } ··· 1428 1415 int smu_v14_0_set_jpeg_enable(struct smu_context *smu, 1429 1416 bool enable) 1430 1417 { 1431 - return smu_cmn_send_smc_msg_with_param(smu, enable ? 1432 - SMU_MSG_PowerUpJpeg : SMU_MSG_PowerDownJpeg, 1433 - 0, NULL); 1418 + struct amdgpu_device *adev = smu->adev; 1419 + int i, ret = 0; 1420 + 1421 + for (i = 0; i < adev->jpeg.num_jpeg_inst; i++) { 1422 + if (adev->jpeg.harvest_config & (1 << i)) 1423 + continue; 1424 + 1425 + if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 0) || 1426 + amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 1)) { 1427 + if (i == 0) 1428 + ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1429 + SMU_MSG_PowerUpJpeg0 : SMU_MSG_PowerDownJpeg0, 1430 + i << 16U, NULL); 1431 + else if (i == 1 && amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 1)) 1432 + ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1433 + SMU_MSG_PowerUpJpeg1 : SMU_MSG_PowerDownJpeg1, 1434 + i << 16U, NULL); 1435 + } else { 1436 + ret = smu_cmn_send_smc_msg_with_param(smu, enable ? 1437 + SMU_MSG_PowerUpJpeg : SMU_MSG_PowerDownJpeg, 1438 + i << 16U, NULL); 1439 + } 1440 + 1441 + if (ret) 1442 + return ret; 1443 + } 1444 + 1445 + return ret; 1434 1446 } 1435 1447 1436 1448 int smu_v14_0_run_btc(struct smu_context *smu)
+14 -7
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
··· 70 70 MSG_MAP(TestMessage, PPSMC_MSG_TestMessage, 1), 71 71 MSG_MAP(GetSmuVersion, PPSMC_MSG_GetPmfwVersion, 1), 72 72 MSG_MAP(GetDriverIfVersion, PPSMC_MSG_GetDriverIfVersion, 1), 73 - MSG_MAP(PowerDownVcn, PPSMC_MSG_PowerDownVcn, 1), 74 - MSG_MAP(PowerUpVcn, PPSMC_MSG_PowerUpVcn, 1), 75 - MSG_MAP(SetHardMinVcn, PPSMC_MSG_SetHardMinVcn, 1), 73 + MSG_MAP(PowerDownVcn0, PPSMC_MSG_PowerDownVcn0, 1), 74 + MSG_MAP(PowerUpVcn0, PPSMC_MSG_PowerUpVcn0, 1), 75 + MSG_MAP(SetHardMinVcn0, PPSMC_MSG_SetHardMinVcn0, 1), 76 + MSG_MAP(PowerDownVcn1, PPSMC_MSG_PowerDownVcn1, 1), 77 + MSG_MAP(PowerUpVcn1, PPSMC_MSG_PowerUpVcn1, 1), 78 + MSG_MAP(SetHardMinVcn1, PPSMC_MSG_SetHardMinVcn1, 1), 76 79 MSG_MAP(SetSoftMinGfxclk, PPSMC_MSG_SetSoftMinGfxclk, 1), 77 80 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 1), 78 81 MSG_MAP(SetDriverDramAddrHigh, PPSMC_MSG_SetDriverDramAddrHigh, 1), ··· 86 83 MSG_MAP(GetEnabledSmuFeatures, PPSMC_MSG_GetEnabledSmuFeatures, 1), 87 84 MSG_MAP(SetHardMinSocclkByFreq, PPSMC_MSG_SetHardMinSocclkByFreq, 1), 88 85 MSG_MAP(SetSoftMinFclk, PPSMC_MSG_SetSoftMinFclk, 1), 89 - MSG_MAP(SetSoftMinVcn, PPSMC_MSG_SetSoftMinVcn, 1), 86 + MSG_MAP(SetSoftMinVcn0, PPSMC_MSG_SetSoftMinVcn0, 1), 87 + MSG_MAP(SetSoftMinVcn1, PPSMC_MSG_SetSoftMinVcn1, 1), 90 88 MSG_MAP(EnableGfxImu, PPSMC_MSG_EnableGfxImu, 1), 91 89 MSG_MAP(AllowGfxOff, PPSMC_MSG_AllowGfxOff, 1), 92 90 MSG_MAP(DisallowGfxOff, PPSMC_MSG_DisallowGfxOff, 1), ··· 95 91 MSG_MAP(SetHardMinGfxClk, PPSMC_MSG_SetHardMinGfxClk, 1), 96 92 MSG_MAP(SetSoftMaxSocclkByFreq, PPSMC_MSG_SetSoftMaxSocclkByFreq, 1), 97 93 MSG_MAP(SetSoftMaxFclkByFreq, PPSMC_MSG_SetSoftMaxFclkByFreq, 1), 98 - MSG_MAP(SetSoftMaxVcn, PPSMC_MSG_SetSoftMaxVcn, 1), 99 - MSG_MAP(PowerDownJpeg, PPSMC_MSG_PowerDownJpeg, 1), 100 - MSG_MAP(PowerUpJpeg, PPSMC_MSG_PowerUpJpeg, 1), 94 + MSG_MAP(SetSoftMaxVcn0, PPSMC_MSG_SetSoftMaxVcn0, 1), 95 + MSG_MAP(SetSoftMaxVcn1, PPSMC_MSG_SetSoftMaxVcn1, 1), 96 + MSG_MAP(PowerDownJpeg0, PPSMC_MSG_PowerDownJpeg0, 1), 97 + MSG_MAP(PowerUpJpeg0, PPSMC_MSG_PowerUpJpeg0, 1), 98 + MSG_MAP(PowerDownJpeg1, PPSMC_MSG_PowerDownJpeg1, 1), 99 + MSG_MAP(PowerUpJpeg1, PPSMC_MSG_PowerUpJpeg1, 1), 101 100 MSG_MAP(SetHardMinFclkByFreq, PPSMC_MSG_SetHardMinFclkByFreq, 1), 102 101 MSG_MAP(SetSoftMinSocclkByFreq, PPSMC_MSG_SetSoftMinSocclkByFreq, 1), 103 102 MSG_MAP(PowerDownIspByTile, PPSMC_MSG_PowerDownIspByTile, 1),
+7
drivers/gpu/drm/display/drm_dp_helper.c
··· 4111 4111 u32 overhead = 1000000; 4112 4112 int symbol_cycles; 4113 4113 4114 + if (lane_count == 0 || hactive == 0 || bpp_x16 == 0) { 4115 + DRM_DEBUG_KMS("Invalid BW overhead params: lane_count %d, hactive %d, bpp_x16 %d.%04d\n", 4116 + lane_count, hactive, 4117 + bpp_x16 >> 4, (bpp_x16 & 0xf) * 625); 4118 + return 0; 4119 + } 4120 + 4114 4121 /* 4115 4122 * DP Standard v2.1 2.6.4.1 4116 4123 * SSC downspread and ref clock variation margin:
+3 -3
drivers/gpu/drm/i915/Makefile
··· 33 33 subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror 34 34 35 35 # Fine grained warnings disable 36 - CFLAGS_i915_pci.o = $(call cc-disable-warning, override-init) 37 - CFLAGS_display/intel_display_device.o = $(call cc-disable-warning, override-init) 38 - CFLAGS_display/intel_fbdev.o = $(call cc-disable-warning, override-init) 36 + CFLAGS_i915_pci.o = -Wno-override-init 37 + CFLAGS_display/intel_display_device.o = -Wno-override-init 38 + CFLAGS_display/intel_fbdev.o = -Wno-override-init 39 39 40 40 # Support compiling the display code separately for both i915 and xe 41 41 # drivers. Define I915 when building i915.
-2
drivers/gpu/drm/i915/display/g4x_dp.c
··· 717 717 { 718 718 intel_enable_dp(state, encoder, pipe_config, conn_state); 719 719 intel_edp_backlight_on(pipe_config, conn_state); 720 - encoder->audio_enable(encoder, pipe_config, conn_state); 721 720 } 722 721 723 722 static void vlv_enable_dp(struct intel_atomic_state *state, ··· 725 726 const struct drm_connector_state *conn_state) 726 727 { 727 728 intel_edp_backlight_on(pipe_config, conn_state); 728 - encoder->audio_enable(encoder, pipe_config, conn_state); 729 729 } 730 730 731 731 static void g4x_pre_enable_dp(struct intel_atomic_state *state,
+2 -1
drivers/gpu/drm/i915/display/icl_dsi.c
··· 1155 1155 } 1156 1156 1157 1157 intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP); 1158 - intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON); 1159 1158 1160 1159 /* ensure all panel commands dispatched before enabling transcoder */ 1161 1160 wait_for_cmds_dispatched_to_panel(encoder); ··· 1254 1255 1255 1256 /* step6d: enable dsi transcoder */ 1256 1257 gen11_dsi_enable_transcoder(encoder); 1258 + 1259 + intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON); 1257 1260 1258 1261 /* step7: enable backlight */ 1259 1262 intel_backlight_enable(crtc_state, conn_state);
+40 -6
drivers/gpu/drm/i915/display/intel_bios.c
··· 1955 1955 * these devices we split the init OTP sequence into a deassert sequence and 1956 1956 * the actual init OTP part. 1957 1957 */ 1958 - static void fixup_mipi_sequences(struct drm_i915_private *i915, 1959 - struct intel_panel *panel) 1958 + static void vlv_fixup_mipi_sequences(struct drm_i915_private *i915, 1959 + struct intel_panel *panel) 1960 1960 { 1961 1961 u8 *init_otp; 1962 1962 int len; 1963 - 1964 - /* Limit this to VLV for now. */ 1965 - if (!IS_VALLEYVIEW(i915)) 1966 - return; 1967 1963 1968 1964 /* Limit this to v1 vid-mode sequences */ 1969 1965 if (panel->vbt.dsi.config->is_cmd_mode || ··· 1994 1998 init_otp[len - 1] = MIPI_SEQ_INIT_OTP; 1995 1999 /* And make MIPI_MIPI_SEQ_INIT_OTP point to it */ 1996 2000 panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] = init_otp + len - 1; 2001 + } 2002 + 2003 + /* 2004 + * Some machines (eg. Lenovo 82TQ) appear to have broken 2005 + * VBT sequences: 2006 + * - INIT_OTP is not present at all 2007 + * - what should be in INIT_OTP is in DISPLAY_ON 2008 + * - what should be in DISPLAY_ON is in BACKLIGHT_ON 2009 + * (along with the actual backlight stuff) 2010 + * 2011 + * To make those work we simply swap DISPLAY_ON and INIT_OTP. 2012 + * 2013 + * TODO: Do we need to limit this to specific machines, 2014 + * or examine the contents of the sequences to 2015 + * avoid false positives? 2016 + */ 2017 + static void icl_fixup_mipi_sequences(struct drm_i915_private *i915, 2018 + struct intel_panel *panel) 2019 + { 2020 + if (!panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] && 2021 + panel->vbt.dsi.sequence[MIPI_SEQ_DISPLAY_ON]) { 2022 + drm_dbg_kms(&i915->drm, "Broken VBT: Swapping INIT_OTP and DISPLAY_ON sequences\n"); 2023 + 2024 + swap(panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP], 2025 + panel->vbt.dsi.sequence[MIPI_SEQ_DISPLAY_ON]); 2026 + } 2027 + } 2028 + 2029 + static void fixup_mipi_sequences(struct drm_i915_private *i915, 2030 + struct intel_panel *panel) 2031 + { 2032 + if (DISPLAY_VER(i915) >= 11) 2033 + icl_fixup_mipi_sequences(i915, panel); 2034 + else if (IS_VALLEYVIEW(i915)) 2035 + vlv_fixup_mipi_sequences(i915, panel); 1997 2036 } 1998 2037 1999 2038 static void ··· 3381 3350 bool intel_bios_encoder_supports_dp_dual_mode(const struct intel_bios_encoder_data *devdata) 3382 3351 { 3383 3352 const struct child_device_config *child = &devdata->child; 3353 + 3354 + if (!devdata) 3355 + return false; 3385 3357 if (!intel_bios_encoder_supports_dp(devdata) || 3386 3358 !intel_bios_encoder_supports_hdmi(devdata))
+1 -3
drivers/gpu/drm/i915/display/intel_cursor.c
··· 36 36 { 37 37 struct drm_i915_private *dev_priv = 38 38 to_i915(plane_state->uapi.plane->dev); 39 - const struct drm_framebuffer *fb = plane_state->hw.fb; 40 - struct drm_i915_gem_object *obj = intel_fb_obj(fb); 41 39 u32 base; 42 40 43 41 if (DISPLAY_INFO(dev_priv)->cursor_needs_physical) 44 - base = i915_gem_object_get_dma_address(obj, 0); 42 + base = plane_state->phys_dma_addr; 45 43 else 46 44 base = intel_plane_ggtt_offset(plane_state); 47 45
+1
drivers/gpu/drm/i915/display/intel_display_types.h
··· 727 727 #define PLANE_HAS_FENCE BIT(0) 728 728 729 729 struct intel_fb_view view; 730 + u32 phys_dma_addr; /* for cursor_needs_physical */ 730 731 731 732 /* Plane pxp decryption state */ 732 733 bool decrypt;
+2 -10
drivers/gpu/drm/i915/display/intel_dp.c
··· 67 67 #include "intel_dp_tunnel.h" 68 68 #include "intel_dpio_phy.h" 69 69 #include "intel_dpll.h" 70 + #include "intel_drrs.h" 70 71 #include "intel_fifo_underrun.h" 71 72 #include "intel_hdcp.h" 72 73 #include "intel_hdmi.h" ··· 2684 2683 intel_hdmi_infoframe_enable(HDMI_PACKET_TYPE_GAMUT_METADATA); 2685 2684 } 2686 2685 2687 - static bool cpu_transcoder_has_drrs(struct drm_i915_private *i915, 2688 - enum transcoder cpu_transcoder) 2689 - { 2690 - if (HAS_DOUBLE_BUFFERED_M_N(i915)) 2691 - return true; 2692 - 2693 - return intel_cpu_transcoder_has_m2_n2(i915, cpu_transcoder); 2694 - } 2695 - 2696 2686 static bool can_enable_drrs(struct intel_connector *connector, 2697 2687 const struct intel_crtc_state *pipe_config, 2698 2688 const struct drm_display_mode *downclock_mode) ··· 2706 2714 if (pipe_config->has_pch_encoder) 2707 2715 return false; 2708 2716 2709 - if (!cpu_transcoder_has_drrs(i915, pipe_config->cpu_transcoder)) 2717 + if (!intel_cpu_transcoder_has_drrs(i915, pipe_config->cpu_transcoder)) 2710 2718 return false; 2711 2719 2712 2720 return downclock_mode &&
+1 -1
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
··· 2554 2554 static bool 2555 2555 ehl_combo_pll_div_frac_wa_needed(struct drm_i915_private *i915) 2556 2556 { 2557 - return (((IS_ELKHARTLAKE(i915) || IS_JASPERLAKE(i915)) && 2557 + return ((IS_ELKHARTLAKE(i915) && 2558 2558 IS_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) || 2559 2559 IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) && 2560 2560 i915->display.dpll.ref_clks.nssc == 38400;
+11 -3
drivers/gpu/drm/i915/display/intel_drrs.c
··· 63 63 return str[drrs_type]; 64 64 } 65 65 66 + bool intel_cpu_transcoder_has_drrs(struct drm_i915_private *i915, 67 + enum transcoder cpu_transcoder) 68 + { 69 + if (HAS_DOUBLE_BUFFERED_M_N(i915)) 70 + return true; 71 + 72 + return intel_cpu_transcoder_has_m2_n2(i915, cpu_transcoder); 73 + } 74 + 66 75 static void 67 76 intel_drrs_set_refresh_rate_pipeconf(struct intel_crtc *crtc, 68 77 enum drrs_refresh_rate refresh_rate) ··· 321 312 mutex_lock(&crtc->drrs.mutex); 322 313 323 314 seq_printf(m, "DRRS capable: %s\n", 324 - str_yes_no(crtc_state->has_drrs || 325 - HAS_DOUBLE_BUFFERED_M_N(i915) || 326 - intel_cpu_transcoder_has_m2_n2(i915, crtc_state->cpu_transcoder))); 315 + str_yes_no(intel_cpu_transcoder_has_drrs(i915, 316 + crtc_state->cpu_transcoder))); 327 317 328 318 seq_printf(m, "DRRS enabled: %s\n", 329 319 str_yes_no(crtc_state->has_drrs));
+3
drivers/gpu/drm/i915/display/intel_drrs.h
··· 9 9 #include <linux/types.h> 10 10 11 11 enum drrs_type; 12 + enum transcoder; 12 13 struct drm_i915_private; 13 14 struct intel_atomic_state; 14 15 struct intel_crtc; 15 16 struct intel_crtc_state; 16 17 struct intel_connector; 17 18 19 + bool intel_cpu_transcoder_has_drrs(struct drm_i915_private *i915, 20 + enum transcoder cpu_transcoder); 18 21 const char *intel_drrs_type_str(enum drrs_type drrs_type); 19 22 bool intel_drrs_is_active(struct intel_crtc *crtc); 20 23 void intel_drrs_activate(const struct intel_crtc_state *crtc_state);
+14
drivers/gpu/drm/i915/display/intel_dsb.c
··· 340 340 return max(0, vblank_start - intel_usecs_to_scanlines(adjusted_mode, latency)); 341 341 } 342 342 343 + static u32 dsb_chicken(struct intel_crtc *crtc) 344 + { 345 + if (crtc->mode_flags & I915_MODE_FLAG_VRR) 346 + return DSB_CTRL_WAIT_SAFE_WINDOW | 347 + DSB_CTRL_NO_WAIT_VBLANK | 348 + DSB_INST_WAIT_SAFE_WINDOW | 349 + DSB_INST_NO_WAIT_VBLANK; 350 + else 351 + return 0; 352 + } 353 + 343 354 static void _intel_dsb_commit(struct intel_dsb *dsb, u32 ctrl, 344 355 int dewake_scanline) 345 356 { ··· 371 360 372 361 intel_de_write_fw(dev_priv, DSB_CTRL(pipe, dsb->id), 373 362 ctrl | DSB_ENABLE); 363 + 364 + intel_de_write_fw(dev_priv, DSB_CHICKEN(pipe, dsb->id), 365 + dsb_chicken(crtc)); 374 366 375 367 intel_de_write_fw(dev_priv, DSB_HEAD(pipe, dsb->id), 376 368 intel_dsb_buffer_ggtt_offset(&dsb->dsb_buf));
+10
drivers/gpu/drm/i915/display/intel_fb_pin.c
··· 255 255 return PTR_ERR(vma); 256 256 257 257 plane_state->ggtt_vma = vma; 258 + 259 + /* 260 + * Pre-populate the dma address before we enter the vblank 261 + * evade critical section as i915_gem_object_get_dma_address() 262 + * will trigger might_sleep() even if it won't actually sleep, 263 + * which is the case when the fb has already been pinned. 264 + */ 265 + if (phys_cursor) 266 + plane_state->phys_dma_addr = 267 + i915_gem_object_get_dma_address(intel_fb_obj(fb), 0); 258 268 } else { 259 269 struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb); 260 270
-4
drivers/gpu/drm/i915/display/intel_sdvo.c
··· 1842 1842 struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc); 1843 1843 u32 temp; 1844 1844 1845 - encoder->audio_disable(encoder, old_crtc_state, conn_state); 1846 - 1847 1845 intel_sdvo_set_active_outputs(intel_sdvo, 0); 1848 1846 if (0) 1849 1847 intel_sdvo_set_encoder_power_state(intel_sdvo, ··· 1933 1935 intel_sdvo_set_encoder_power_state(intel_sdvo, 1934 1936 DRM_MODE_DPMS_ON); 1935 1937 intel_sdvo_set_active_outputs(intel_sdvo, intel_sdvo_connector->output_flag); 1936 - 1937 - encoder->audio_enable(encoder, pipe_config, conn_state); 1938 1938 } 1939 1939 1940 1940 static enum drm_mode_status
+4 -3
drivers/gpu/drm/i915/display/intel_vrr.c
··· 187 187 enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; 188 188 189 189 /* 190 - * TRANS_SET_CONTEXT_LATENCY with VRR enabled 191 - * requires this chicken bit on ADL/DG2. 190 + * This bit seems to have two meanings depending on the platform: 191 + * TGL: generate VRR "safe window" for DSB vblank waits 192 + * ADL/DG2: make TRANS_SET_CONTEXT_LATENCY effective with VRR 192 193 */ 193 - if (DISPLAY_VER(dev_priv) == 13) 194 + if (IS_DISPLAY_VER(dev_priv, 12, 13)) 194 195 intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 195 196 0, PIPE_VBLANK_WITH_DELAY); 196 197
+3
drivers/gpu/drm/i915/display/skl_universal_plane.c
··· 2295 2295 if (HAS_4TILE(i915)) 2296 2296 caps |= INTEL_PLANE_CAP_TILING_4; 2297 2297 2298 + if (!IS_ENABLED(I915) && !HAS_FLAT_CCS(i915)) 2299 + return caps; 2300 + 2298 2301 if (skl_plane_has_rc_ccs(i915, pipe, plane_id)) { 2299 2302 caps |= INTEL_PLANE_CAP_CCS_RC; 2300 2303 if (DISPLAY_VER(i915) >= 12)
-3
drivers/gpu/drm/i915/gt/intel_engine_pm.c
··· 279 279 intel_engine_park_heartbeat(engine); 280 280 intel_breadcrumbs_park(engine->breadcrumbs); 281 281 282 - /* Must be reset upon idling, or we may miss the busy wakeup. */ 283 - GEM_BUG_ON(engine->sched_engine->queue_priority_hint != INT_MIN); 284 - 285 282 if (engine->park) 286 283 engine->park(engine); 287 284
+3
drivers/gpu/drm/i915/gt/intel_execlists_submission.c
··· 3272 3272 { 3273 3273 cancel_timer(&engine->execlists.timer); 3274 3274 cancel_timer(&engine->execlists.preempt); 3275 + 3276 + /* Reset upon idling, or we may delay the busy wakeup. */ 3277 + WRITE_ONCE(engine->sched_engine->queue_priority_hint, INT_MIN); 3275 3278 } 3276 3279 3277 3280 static void add_to_engine(struct i915_request *rq)
+1
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 1653 1653 xelpg_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal) 1654 1654 { 1655 1655 /* Wa_14018575942 / Wa_18018781329 */ 1656 + wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB); 1656 1657 wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB); 1657 1658 1658 1659 /* Wa_22016670082 */
+1 -1
drivers/gpu/drm/i915/i915_driver.c
··· 800 800 goto out_cleanup_modeset2; 801 801 802 802 ret = intel_pxp_init(i915); 803 - if (ret != -ENODEV) 803 + if (ret && ret != -ENODEV) 804 804 drm_dbg(&i915->drm, "pxp init failed with %d\n", ret); 805 805 806 806 ret = intel_display_driver_probe(i915);
+19 -18
drivers/gpu/drm/i915/i915_hwmon.c
··· 72 72 struct intel_uncore *uncore = ddat->uncore; 73 73 intel_wakeref_t wakeref; 74 74 75 - mutex_lock(&hwmon->hwmon_lock); 75 + with_intel_runtime_pm(uncore->rpm, wakeref) { 76 + mutex_lock(&hwmon->hwmon_lock); 76 77 77 78 - with_intel_runtime_pm(uncore->rpm, wakeref) 78 78 intel_uncore_rmw(uncore, reg, clear, set); 79 79 80 - mutex_unlock(&hwmon->hwmon_lock); 80 + mutex_unlock(&hwmon->hwmon_lock); 81 + } 81 82 } 82 83 83 84 /* ··· 137 136 else 138 137 rgaddr = hwmon->rg.energy_status_all; 139 138 140 - mutex_lock(&hwmon->hwmon_lock); 139 + with_intel_runtime_pm(uncore->rpm, wakeref) { 140 + mutex_lock(&hwmon->hwmon_lock); 141 141 142 - with_intel_runtime_pm(uncore->rpm, wakeref) 143 142 reg_val = intel_uncore_read(uncore, rgaddr); 144 143 145 - if (reg_val >= ei->reg_val_prev) 146 - ei->accum_energy += reg_val - ei->reg_val_prev; 147 - else 148 - ei->accum_energy += UINT_MAX - ei->reg_val_prev + reg_val; 149 - ei->reg_val_prev = reg_val; 144 + if (reg_val >= ei->reg_val_prev) 145 + ei->accum_energy += reg_val - ei->reg_val_prev; 146 + else 147 + ei->accum_energy += UINT_MAX - ei->reg_val_prev + reg_val; 148 + ei->reg_val_prev = reg_val; 150 149 151 - *energy = mul_u64_u32_shr(ei->accum_energy, SF_ENERGY, 152 - hwmon->scl_shift_energy); 153 - mutex_unlock(&hwmon->hwmon_lock); 150 + *energy = mul_u64_u32_shr(ei->accum_energy, SF_ENERGY, 151 + hwmon->scl_shift_energy); 152 + mutex_unlock(&hwmon->hwmon_lock); 153 + } 154 154 } 155 155 156 156 static ssize_t ··· 406 404 407 405 /* Block waiting for GuC reset to complete when needed */ 408 406 for (;;) { 407 + wakeref = intel_runtime_pm_get(ddat->uncore->rpm); 409 408 mutex_lock(&hwmon->hwmon_lock); 410 409 411 410 prepare_to_wait(&ddat->waitq, &wait, TASK_INTERRUPTIBLE); ··· 420 417 } 421 418 422 419 mutex_unlock(&hwmon->hwmon_lock); 420 + intel_runtime_pm_put(ddat->uncore->rpm, wakeref); 423 421 424 422 schedule(); 425 423 } 426 424 finish_wait(&ddat->waitq, &wait); 427 425 if (ret) 428 - goto unlock; 429 - 430 - wakeref = intel_runtime_pm_get(ddat->uncore->rpm); 426 + goto exit; 431 427 432 428 /* Disable PL1 limit and verify, because the limit cannot be disabled on all platforms */ 433 429 if (val == PL1_DISABLE) { ··· 446 444 intel_uncore_rmw(ddat->uncore, hwmon->rg.pkg_rapl_limit, 447 445 PKG_PWR_LIM_1_EN | PKG_PWR_LIM_1, nval); 448 446 exit: 449 - intel_runtime_pm_put(ddat->uncore->rpm, wakeref); 450 - unlock: 451 447 mutex_unlock(&hwmon->hwmon_lock); 448 + intel_runtime_pm_put(ddat->uncore->rpm, wakeref); 452 449 return ret; 453 450
+2
drivers/gpu/drm/i915/i915_memcpy.c
··· 25 25 #include <linux/kernel.h> 26 26 #include <linux/string.h> 27 27 #include <linux/cpufeature.h> 28 + #include <linux/bug.h> 29 + #include <linux/build_bug.h> 28 30 #include <asm/fpu/api.h> 29 31 30 32 #include "i915_memcpy.h"
+1 -1
drivers/gpu/drm/i915/i915_reg.h
··· 4599 4599 #define MTL_CHICKEN_TRANS(trans) _MMIO_TRANS((trans), \ 4600 4600 _MTL_CHICKEN_TRANS_A, \ 4601 4601 _MTL_CHICKEN_TRANS_B) 4602 - #define PIPE_VBLANK_WITH_DELAY REG_BIT(31) /* ADL/DG2 */ 4602 + #define PIPE_VBLANK_WITH_DELAY REG_BIT(31) /* tgl+ */ 4603 4603 #define SKL_UNMASK_VBL_TO_PIPE_IN_SRD REG_BIT(30) /* skl+ */ 4604 4604 #define HSW_FRAME_START_DELAY_MASK REG_GENMASK(28, 27) 4605 4605 #define HSW_FRAME_START_DELAY(x) REG_FIELD_PREP(HSW_FRAME_START_DELAY_MASK, x)
+43 -7
drivers/gpu/drm/i915/i915_vma.c
··· 34 34 #include "gt/intel_engine.h" 35 35 #include "gt/intel_engine_heartbeat.h" 36 36 #include "gt/intel_gt.h" 37 + #include "gt/intel_gt_pm.h" 37 38 #include "gt/intel_gt_requests.h" 38 39 #include "gt/intel_tlb.h" 39 40 ··· 104 103 105 104 static int __i915_vma_active(struct i915_active *ref) 106 105 { 107 - return i915_vma_tryget(active_to_vma(ref)) ? 0 : -ENOENT; 106 + struct i915_vma *vma = active_to_vma(ref); 107 + 108 + if (!i915_vma_tryget(vma)) 109 + return -ENOENT; 110 + 111 + /* 112 + * Exclude global GTT VMA from holding a GT wakeref 113 + * while active, otherwise GPU never goes idle. 114 + */ 115 + if (!i915_vma_is_ggtt(vma)) { 116 + /* 117 + * Since we and our _retire() counterpart can be 118 + * called asynchronously, storing a wakeref tracking 119 + * handle inside struct i915_vma is not safe, and 120 + * there is no other good place for that. Hence, 121 + * use untracked variants of intel_gt_pm_get/put(). 122 + */ 123 + intel_gt_pm_get_untracked(vma->vm->gt); 124 + } 125 + 126 + return 0; 108 127 } 109 128 110 129 static void __i915_vma_retire(struct i915_active *ref) 111 130 { 112 - i915_vma_put(active_to_vma(ref)); 131 + struct i915_vma *vma = active_to_vma(ref); 132 + 133 + if (!i915_vma_is_ggtt(vma)) { 134 + /* 135 + * Since we can be called from atomic contexts, 136 + * use an async variant of intel_gt_pm_put(). 137 + */ 138 + intel_gt_pm_put_async_untracked(vma->vm->gt); 139 + } 140 + 141 + i915_vma_put(vma); 113 142 } 114 143 115 144 static struct i915_vma * ··· 1435 1404 struct i915_vma_work *work = NULL; 1436 1405 struct dma_fence *moving = NULL; 1437 1406 struct i915_vma_resource *vma_res = NULL; 1438 - intel_wakeref_t wakeref = 0; 1407 + intel_wakeref_t wakeref; 1439 1408 unsigned int bound; 1440 1409 int err; 1441 1410 ··· 1455 1424 if (err) 1456 1425 return err; 1457 1426 1458 - if (flags & PIN_GLOBAL) 1459 - wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm); 1427 + /* 1428 + * In case of a global GTT, we must hold a runtime-pm wakeref 1429 + * while global PTEs are updated. In other cases, we hold 1430 + * the rpm reference while the VMA is active. Since runtime 1431 + * resume may require allocations, which are forbidden inside 1432 + * vm->mutex, get the first rpm wakeref outside of the mutex. 1433 + */ 1434 + wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm); 1460 1435 1461 1436 if (flags & vma->vm->bind_async_flags) { 1462 1437 /* lock VM */ ··· 1598 1561 if (work) 1599 1562 dma_fence_work_commit_imm(&work->base); 1600 1563 err_rpm: 1601 - if (wakeref) 1602 - intel_runtime_pm_put(&vma->vm->i915->runtime_pm, wakeref); 1564 + intel_runtime_pm_put(&vma->vm->i915->runtime_pm, wakeref); 1603 1565 1604 1566 if (moving) 1605 1567 dma_fence_put(moving);
+6 -6
drivers/gpu/drm/nouveau/nouveau_dmem.c
··· 378 378 dma_addr_t *dma_addrs; 379 379 struct nouveau_fence *fence; 380 380 381 - src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL); 382 - dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL); 383 - dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL); 381 + src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL); 382 + dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL); 383 + dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL); 384 384 385 385 migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT, 386 386 npages); ··· 406 406 migrate_device_pages(src_pfns, dst_pfns, npages); 407 407 nouveau_dmem_fence_done(&fence); 408 408 migrate_device_finalize(src_pfns, dst_pfns, npages); 409 - kfree(src_pfns); 410 - kfree(dst_pfns); 409 + kvfree(src_pfns); 410 + kvfree(dst_pfns); 411 411 for (i = 0; i < npages; i++) 412 412 dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL); 413 - kfree(dma_addrs); 413 + kvfree(dma_addrs); 414 414 } 415 415 416 416 void
-2
drivers/gpu/drm/qxl/qxl_cmd.c
··· 421 421 { 422 422 uint32_t handle; 423 423 int idr_ret; 424 - int count = 0; 425 424 again: 426 425 idr_preload(GFP_ATOMIC); 427 426 spin_lock(&qdev->surf_id_idr_lock); ··· 432 433 handle = idr_ret; 433 434 434 435 if (handle >= qdev->rom->n_surfaces) { 435 - count++; 436 436 spin_lock(&qdev->surf_id_idr_lock); 437 437 idr_remove(&qdev->surf_id_idr, handle); 438 438 spin_unlock(&qdev->surf_id_idr_lock);
+1 -3
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 145 145 struct qxl_release *release; 146 146 struct qxl_bo *cmd_bo; 147 147 void *fb_cmd; 148 - int i, ret, num_relocs; 148 + int i, ret; 149 149 int unwritten; 150 150 151 151 switch (cmd->type) { ··· 200 200 } 201 201 202 202 /* fill out reloc info structs */ 203 - num_relocs = 0; 204 203 for (i = 0; i < cmd->relocs_num; ++i) { 205 204 struct drm_qxl_reloc reloc; 206 205 struct drm_qxl_reloc __user *u = u64_to_user_ptr(cmd->relocs); ··· 229 230 reloc_info[i].dst_bo = cmd_bo; 230 231 reloc_info[i].dst_offset = reloc.dst_offset + release->release_offset; 231 232 } 232 - num_relocs++; 233 233 234 234 /* reserve and validate the reloc dst bo */ 235 235 if (reloc.reloc_type == QXL_RELOC_TYPE_BO || reloc.src_handle) {
-2
drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
··· 17 17 18 18 static const uint32_t formats_cluster[] = { 19 19 DRM_FORMAT_XRGB2101010, 20 - DRM_FORMAT_ARGB2101010, 21 20 DRM_FORMAT_XBGR2101010, 22 - DRM_FORMAT_ABGR2101010, 23 21 DRM_FORMAT_XRGB8888, 24 22 DRM_FORMAT_ARGB8888, 25 23 DRM_FORMAT_XBGR8888,
+9 -3
drivers/gpu/drm/scheduler/sched_entity.c
··· 71 71 entity->guilty = guilty; 72 72 entity->num_sched_list = num_sched_list; 73 73 entity->priority = priority; 74 + /* 75 + * It's perfectly valid to initialize an entity without having a valid 76 + * scheduler attached. It's just not valid to use the scheduler before it 77 + * is initialized itself. 78 + */ 74 79 entity->sched_list = num_sched_list > 1 ? sched_list : NULL; 75 80 RCU_INIT_POINTER(entity->last_scheduled, NULL); 76 81 RB_CLEAR_NODE(&entity->rb_tree_node); 77 82 78 - if (!sched_list[0]->sched_rq) { 79 - /* Warn drivers not to do this and to fix their DRM 80 - * calling order. 83 + if (num_sched_list && !sched_list[0]->sched_rq) { 84 + /* Since every entry covered by num_sched_list 85 + * should be non-NULL and therefore we warn drivers 86 + * not to do this and to fix their DRM calling order. 81 87 */ 82 88 pr_warn("%s: called with uninitialized scheduler\n", __func__); 83 89 } else if (num_sched_list) {
+9 -6
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 1444 1444 root, "system_ttm"); 1445 1445 ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, TTM_PL_VRAM), 1446 1446 root, "vram_ttm"); 1447 - ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_GMR), 1448 - root, "gmr_ttm"); 1449 - ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_MOB), 1450 - root, "mob_ttm"); 1451 - ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_SYSTEM), 1452 - root, "system_mob_ttm"); 1447 + if (vmw->has_gmr) 1448 + ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_GMR), 1449 + root, "gmr_ttm"); 1450 + if (vmw->has_mob) { 1451 + ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_MOB), 1452 + root, "mob_ttm"); 1453 + ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_SYSTEM), 1454 + root, "system_mob_ttm"); 1455 + } 1453 1456 } 1454 1457 1455 1458 static int vmwgfx_pm_notifier(struct notifier_block *nb, unsigned long val,
+2 -2
drivers/gpu/drm/xe/Makefile
··· 172 172 -Ddrm_i915_gem_object=xe_bo \ 173 173 -Ddrm_i915_private=xe_device 174 174 175 - CFLAGS_i915-display/intel_fbdev.o = $(call cc-disable-warning, override-init) 176 - CFLAGS_i915-display/intel_display_device.o = $(call cc-disable-warning, override-init) 175 + CFLAGS_i915-display/intel_fbdev.o = -Wno-override-init 176 + CFLAGS_i915-display/intel_display_device.o = -Wno-override-init 177 177 178 178 # Rule to build SOC code shared with i915 179 179 $(obj)/i915-soc/%.o: $(srctree)/drivers/gpu/drm/i915/soc/%.c FORCE
+9 -50
drivers/gpu/drm/xe/xe_bo.c
··· 144 144 .mem_type = XE_PL_TT, 145 145 }; 146 146 *c += 1; 147 - 148 - if (bo->props.preferred_mem_type == XE_BO_PROPS_INVALID) 149 - bo->props.preferred_mem_type = XE_PL_TT; 150 147 } 151 148 } 152 149 ··· 178 181 } 179 182 places[*c] = place; 180 183 *c += 1; 181 - 182 - if (bo->props.preferred_mem_type == XE_BO_PROPS_INVALID) 183 - bo->props.preferred_mem_type = mem_type; 184 184 } 185 185 186 186 static void try_add_vram(struct xe_device *xe, struct xe_bo *bo, 187 187 u32 bo_flags, u32 *c) 188 188 { 189 - if (bo->props.preferred_gt == XE_GT1) { 190 - if (bo_flags & XE_BO_CREATE_VRAM1_BIT) 191 - add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM1, c); 192 - if (bo_flags & XE_BO_CREATE_VRAM0_BIT) 193 - add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM0, c); 194 - } else { 195 - if (bo_flags & XE_BO_CREATE_VRAM0_BIT) 196 - add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM0, c); 197 - if (bo_flags & XE_BO_CREATE_VRAM1_BIT) 198 - add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM1, c); 199 - } 189 + if (bo_flags & XE_BO_CREATE_VRAM0_BIT) 190 + add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM0, c); 191 + if (bo_flags & XE_BO_CREATE_VRAM1_BIT) 192 + add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM1, c); 200 193 } 201 194 202 195 static void try_add_stolen(struct xe_device *xe, struct xe_bo *bo, ··· 210 223 { 211 224 u32 c = 0; 212 225 213 - bo->props.preferred_mem_type = XE_BO_PROPS_INVALID; 214 - 215 - /* The order of placements should indicate preferred location */ 216 - 217 - if (bo->props.preferred_mem_class == DRM_XE_MEM_REGION_CLASS_SYSMEM) { 218 - try_add_system(xe, bo, bo_flags, &c); 219 - try_add_vram(xe, bo, bo_flags, &c); 220 - } else { 221 - try_add_vram(xe, bo, bo_flags, &c); 222 - try_add_system(xe, bo, bo_flags, &c); 223 - } 226 + try_add_vram(xe, bo, bo_flags, &c); 227 + try_add_system(xe, bo, bo_flags, &c); 224 228 try_add_stolen(xe, bo, bo_flags, &c); 225 229 226 230 if (!c) ··· 1104 1126 } 1105 1127 } 1106 1128 1107 - 
static bool should_migrate_to_system(struct xe_bo *bo) 1108 - { 1109 - struct xe_device *xe = xe_bo_device(bo); 1110 - 1111 - return xe_device_in_fault_mode(xe) && bo->props.cpu_atomic; 1112 - } 1113 - 1114 1129 static vm_fault_t xe_gem_fault(struct vm_fault *vmf) 1115 1130 { 1116 1131 struct ttm_buffer_object *tbo = vmf->vma->vm_private_data; ··· 1112 1141 struct xe_bo *bo = ttm_to_xe_bo(tbo); 1113 1142 bool needs_rpm = bo->flags & XE_BO_CREATE_VRAM_MASK; 1114 1143 vm_fault_t ret; 1115 - int idx, r = 0; 1144 + int idx; 1116 1145 1117 1146 if (needs_rpm) 1118 1147 xe_device_mem_access_get(xe); ··· 1124 1153 if (drm_dev_enter(ddev, &idx)) { 1125 1154 trace_xe_bo_cpu_fault(bo); 1126 1155 1127 - if (should_migrate_to_system(bo)) { 1128 - r = xe_bo_migrate(bo, XE_PL_TT); 1129 - if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR) 1130 - ret = VM_FAULT_NOPAGE; 1131 - else if (r) 1132 - ret = VM_FAULT_SIGBUS; 1133 - } 1134 - if (!ret) 1135 - ret = ttm_bo_vm_fault_reserved(vmf, 1136 - vmf->vma->vm_page_prot, 1137 - TTM_BO_VM_NUM_PREFAULT); 1156 + ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, 1157 + TTM_BO_VM_NUM_PREFAULT); 1138 1158 drm_dev_exit(idx); 1139 1159 } else { 1140 1160 ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); ··· 1253 1291 bo->flags = flags; 1254 1292 bo->cpu_caching = cpu_caching; 1255 1293 bo->ttm.base.funcs = &xe_gem_object_funcs; 1256 - bo->props.preferred_mem_class = XE_BO_PROPS_INVALID; 1257 - bo->props.preferred_gt = XE_BO_PROPS_INVALID; 1258 - bo->props.preferred_mem_type = XE_BO_PROPS_INVALID; 1259 1294 bo->ttm.priority = XE_BO_PRIORITY_NORMAL; 1260 1295 INIT_LIST_HEAD(&bo->pinned_link); 1261 1296 #ifdef CONFIG_PROC_FS
-19
drivers/gpu/drm/xe/xe_bo_types.h
··· 56 56 */ 57 57 struct list_head client_link; 58 58 #endif 59 - /** @props: BO user controlled properties */ 60 - struct { 61 - /** @preferred_mem: preferred memory class for this BO */ 62 - s16 preferred_mem_class; 63 - /** @prefered_gt: preferred GT for this BO */ 64 - s16 preferred_gt; 65 - /** @preferred_mem_type: preferred memory type */ 66 - s32 preferred_mem_type; 67 - /** 68 - * @cpu_atomic: the CPU expects to do atomics operations to 69 - * this BO 70 - */ 71 - bool cpu_atomic; 72 - /** 73 - * @device_atomic: the device expects to do atomics operations 74 - * to this BO 75 - */ 76 - bool device_atomic; 77 - } props; 78 59 /** @freed: List node for delayed put. */ 79 60 struct llist_node freed; 80 61 /** @created: Whether the bo has passed initial creation */
+2 -2
drivers/gpu/drm/xe/xe_device.h
··· 58 58 59 59 static inline struct xe_gt *xe_tile_get_gt(struct xe_tile *tile, u8 gt_id) 60 60 { 61 - if (drm_WARN_ON(&tile_to_xe(tile)->drm, gt_id > XE_MAX_GT_PER_TILE)) 61 + if (drm_WARN_ON(&tile_to_xe(tile)->drm, gt_id >= XE_MAX_GT_PER_TILE)) 62 62 gt_id = 0; 63 63 64 64 return gt_id ? tile->media_gt : tile->primary_gt; ··· 79 79 if (MEDIA_VER(xe) >= 13) { 80 80 gt = xe_tile_get_gt(root_tile, gt_id); 81 81 } else { 82 - if (drm_WARN_ON(&xe->drm, gt_id > XE_MAX_TILES_PER_DEVICE)) 82 + if (drm_WARN_ON(&xe->drm, gt_id >= XE_MAX_TILES_PER_DEVICE)) 83 83 gt_id = 0; 84 84 85 85 gt = xe->tiles[gt_id].primary_gt;
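The xe_device.h hunk above (and the matching xe_exec_queue.c and xe_query.c hunks below) replaces `>` with `>=` in index checks: for a table of N entries, index N is already out of range, so `>` lets one invalid value through. A minimal userspace sketch of the corrected guard; the constant and function name are illustrative, not the driver's:

```c
#include <stddef.h>

#define MAX_GT_PER_TILE 2 /* illustrative table size */

/* Validate an index against a table of MAX_GT_PER_TILE entries and fall
 * back to 0 when it is out of range. Note the >= comparison: an off-by-one
 * `>` check would wrongly accept gt_id == MAX_GT_PER_TILE. */
static int clamp_gt_id(int gt_id)
{
	if (gt_id < 0 || gt_id >= MAX_GT_PER_TILE)
		return 0;
	return gt_id;
}
```

The same reasoning applies to the `ARRAY_SIZE(user_to_xe_engine_class)` check: valid indices run from 0 to size - 1 inclusive.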
+1 -1
drivers/gpu/drm/xe/xe_exec_queue.c
··· 448 448 { 449 449 u32 idx; 450 450 451 - if (eci.engine_class > ARRAY_SIZE(user_to_xe_engine_class)) 451 + if (eci.engine_class >= ARRAY_SIZE(user_to_xe_engine_class)) 452 452 return NULL; 453 453 454 454 if (eci.gt_id >= xe->info.gt_count)
+1 -1
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1220 1220 init_waitqueue_head(&ge->suspend_wait); 1221 1221 1222 1222 timeout = (q->vm && xe_vm_in_lr_mode(q->vm)) ? MAX_SCHEDULE_TIMEOUT : 1223 - q->sched_props.job_timeout_ms; 1223 + msecs_to_jiffies(q->sched_props.job_timeout_ms); 1224 1224 err = xe_sched_init(&ge->sched, &drm_sched_ops, &xe_sched_ops, 1225 1225 get_submit_wq(guc), 1226 1226 q->lrc[0].ring.size / MAX_JOB_SIZE_BYTES, 64,
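The xe_guc_submit.c fix wraps the millisecond timeout in `msecs_to_jiffies()` before handing it to the scheduler; passing raw milliseconds where a jiffy count is expected silently rescales the timeout by the tick rate. A hedged userspace model of the conversion, assuming an illustrative 250 Hz tick (the real kernel helper handles more cases):

```c
#include <stdint.h>

#define HZ_MODEL 250 /* illustrative tick rate; real kernels configure HZ */

/* Round a millisecond interval up to whole ticks, roughly what
 * msecs_to_jiffies() does for small values. */
static uint64_t msecs_to_jiffies_model(uint64_t ms)
{
	return (ms * HZ_MODEL + 999) / 1000;
}
```

With a 250 Hz tick, a 5000 ms job timeout is 1250 jiffies; treating 5000 as a jiffy count directly would have cut the effective timeout to 4 seconds' worth of ticks divided by HZ.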
+9 -11
drivers/gpu/drm/xe/xe_lrc.c
··· 97 97 #define REG16(x) \ 98 98 (((x) >> 9) | BIT(7) | BUILD_BUG_ON_ZERO(x >= 0x10000)), \ 99 99 (((x) >> 2) & 0x7f) 100 - #define END 0 101 100 { 102 101 const u32 base = hwe->mmio_base; 103 102 ··· 167 168 REG16(0x274), 168 169 REG16(0x270), 169 170 170 - END 171 + 0 171 172 }; 172 173 173 174 static const u8 dg2_xcs_offsets[] = { ··· 201 202 REG16(0x274), 202 203 REG16(0x270), 203 204 204 - END 205 + 0 205 206 }; 206 207 207 208 static const u8 gen12_rcs_offsets[] = { ··· 297 298 REG(0x084), 298 299 NOP(1), 299 300 300 - END 301 + 0 301 302 }; 302 303 303 304 static const u8 xehp_rcs_offsets[] = { ··· 338 339 LRI(1, 0), 339 340 REG(0x0c8), 340 341 341 - END 342 + 0 342 343 }; 343 344 344 345 static const u8 dg2_rcs_offsets[] = { ··· 381 382 LRI(1, 0), 382 383 REG(0x0c8), 383 384 384 - END 385 + 0 385 386 }; 386 387 387 388 static const u8 mtl_rcs_offsets[] = { ··· 424 425 LRI(1, 0), 425 426 REG(0x0c8), 426 427 427 - END 428 + 0 428 429 }; 429 430 430 431 #define XE2_CTX_COMMON \ ··· 470 471 LRI(1, 0), /* [0x47] */ 471 472 REG(0x0c8), /* [0x48] R_PWR_CLK_STATE */ 472 473 473 - END 474 + 0 474 475 }; 475 476 476 477 static const u8 xe2_bcs_offsets[] = { ··· 481 482 REG16(0x200), /* [0x42] BCS_SWCTRL */ 482 483 REG16(0x204), /* [0x44] BLIT_CCTL */ 483 484 484 - END 485 + 0 485 486 }; 486 487 487 488 static const u8 xe2_xcs_offsets[] = { 488 489 XE2_CTX_COMMON, 489 490 490 - END 491 + 0 491 492 }; 492 493 493 - #undef END 494 494 #undef REG16 495 495 #undef REG 496 496 #undef LRI
+1 -1
drivers/gpu/drm/xe/xe_query.c
··· 132 132 return -EINVAL; 133 133 134 134 eci = &resp.eci; 135 - if (eci->gt_id > XE_MAX_GT_PER_TILE) 135 + if (eci->gt_id >= XE_MAX_GT_PER_TILE) 136 136 return -EINVAL; 137 137 138 138 gt = xe_device_get_gt(xe, eci->gt_id);
+4 -3
drivers/i2c/busses/i2c-i801.c
··· 536 536 537 537 if (read_write == I2C_SMBUS_READ || 538 538 command == I2C_SMBUS_BLOCK_PROC_CALL) { 539 - status = i801_get_block_len(priv); 540 - if (status < 0) 539 + len = i801_get_block_len(priv); 540 + if (len < 0) { 541 + status = len; 541 542 goto out; 543 + } 542 544 543 - len = status; 544 545 data->block[0] = len; 545 546 inb_p(SMBHSTCNT(priv)); /* reset the data buffer index */ 546 547 for (i = 0; i < len; i++)
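The i2c-i801 hunk fixes a signedness trap: `i801_get_block_len()` returns a length on success or a negative errno, so its result must land in a signed variable (`len`) and be error-checked before it is used as a size. A small sketch of that return convention; the function name is made up:

```c
/* Store a device-reported block length, where `len` follows the kernel's
 * length-or-negative-errno convention. A negative value is an error and
 * must be propagated, never used as a size. */
static int store_block_len(unsigned char *dst, int len)
{
	if (len < 0)
		return len; /* propagate -errno */
	dst[0] = (unsigned char)len;
	return 0;
}
```

The original code funneled the value through `status` and copied it into `len` afterwards; the fix reads it into `len` directly and only assigns `status` on failure, which keeps the error path obvious.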
+25 -13
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
··· 1139 1139 * requires a breaking update, zero the V bit, write all qwords 1140 1140 * but 0, then set qword 0 1141 1141 */ 1142 - unused_update.data[0] = entry->data[0] & (~STRTAB_STE_0_V); 1142 + unused_update.data[0] = entry->data[0] & 1143 + cpu_to_le64(~STRTAB_STE_0_V); 1143 1144 entry_set(smmu, sid, entry, &unused_update, 0, 1); 1144 1145 entry_set(smmu, sid, entry, target, 1, num_entry_qwords - 1); 1145 1146 entry_set(smmu, sid, entry, target, 0, 1); ··· 1454 1453 FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT)); 1455 1454 } 1456 1455 1457 - static void arm_smmu_make_bypass_ste(struct arm_smmu_ste *target) 1456 + static void arm_smmu_make_bypass_ste(struct arm_smmu_device *smmu, 1457 + struct arm_smmu_ste *target) 1458 1458 { 1459 1459 memset(target, 0, sizeof(*target)); 1460 1460 target->data[0] = cpu_to_le64( 1461 1461 STRTAB_STE_0_V | 1462 1462 FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS)); 1463 - target->data[1] = cpu_to_le64( 1464 - FIELD_PREP(STRTAB_STE_1_SHCFG, STRTAB_STE_1_SHCFG_INCOMING)); 1463 + 1464 + if (smmu->features & ARM_SMMU_FEAT_ATTR_TYPES_OVR) 1465 + target->data[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG, 1466 + STRTAB_STE_1_SHCFG_INCOMING)); 1465 1467 } 1466 1468 1467 1469 static void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target, ··· 1527 1523 typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr = 1528 1524 &pgtbl_cfg->arm_lpae_s2_cfg.vtcr; 1529 1525 u64 vtcr_val; 1526 + struct arm_smmu_device *smmu = master->smmu; 1530 1527 1531 1528 memset(target, 0, sizeof(*target)); 1532 1529 target->data[0] = cpu_to_le64( ··· 1536 1531 1537 1532 target->data[1] = cpu_to_le64( 1538 1533 FIELD_PREP(STRTAB_STE_1_EATS, 1539 - master->ats_enabled ? STRTAB_STE_1_EATS_TRANS : 0) | 1540 - FIELD_PREP(STRTAB_STE_1_SHCFG, 1541 - STRTAB_STE_1_SHCFG_INCOMING)); 1534 + master->ats_enabled ? 
STRTAB_STE_1_EATS_TRANS : 0)); 1535 + 1536 + if (smmu->features & ARM_SMMU_FEAT_ATTR_TYPES_OVR) 1537 + target->data[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG, 1538 + STRTAB_STE_1_SHCFG_INCOMING)); 1542 1539 1543 1540 vtcr_val = FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) | 1544 1541 FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) | ··· 1567 1560 * This can safely directly manipulate the STE memory without a sync sequence 1568 1561 * because the STE table has not been installed in the SMMU yet. 1569 1562 */ 1570 - static void arm_smmu_init_initial_stes(struct arm_smmu_ste *strtab, 1563 + static void arm_smmu_init_initial_stes(struct arm_smmu_device *smmu, 1564 + struct arm_smmu_ste *strtab, 1571 1565 unsigned int nent) 1572 1566 { 1573 1567 unsigned int i; ··· 1577 1569 if (disable_bypass) 1578 1570 arm_smmu_make_abort_ste(strtab); 1579 1571 else 1580 - arm_smmu_make_bypass_ste(strtab); 1572 + arm_smmu_make_bypass_ste(smmu, strtab); 1581 1573 strtab++; 1582 1574 } 1583 1575 } ··· 1605 1597 return -ENOMEM; 1606 1598 } 1607 1599 1608 - arm_smmu_init_initial_stes(desc->l2ptr, 1 << STRTAB_SPLIT); 1600 + arm_smmu_init_initial_stes(smmu, desc->l2ptr, 1 << STRTAB_SPLIT); 1609 1601 arm_smmu_write_strtab_l1_desc(strtab, desc); 1610 1602 return 0; 1611 1603 } ··· 2645 2637 struct device *dev) 2646 2638 { 2647 2639 struct arm_smmu_ste ste; 2640 + struct arm_smmu_master *master = dev_iommu_priv_get(dev); 2648 2641 2649 - arm_smmu_make_bypass_ste(&ste); 2642 + arm_smmu_make_bypass_ste(master->smmu, &ste); 2650 2643 return arm_smmu_attach_dev_ste(dev, &ste); 2651 2644 } 2652 2645 ··· 3273 3264 reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits); 3274 3265 cfg->strtab_base_cfg = reg; 3275 3266 3276 - arm_smmu_init_initial_stes(strtab, cfg->num_l1_ents); 3267 + arm_smmu_init_initial_stes(smmu, strtab, cfg->num_l1_ents); 3277 3268 return 0; 3278 3269 } 3279 3270 ··· 3786 3777 return -ENXIO; 3787 3778 } 3788 3779 3780 + if (reg & IDR1_ATTR_TYPES_OVR) 3781 + 
smmu->features |= ARM_SMMU_FEAT_ATTR_TYPES_OVR; 3782 + 3789 3783 /* Queue sizes, capped to ensure natural alignment */ 3790 3784 smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, 3791 3785 FIELD_GET(IDR1_CMDQS, reg)); ··· 4004 3992 * STE table is not programmed to HW, see 4005 3993 * arm_smmu_initial_bypass_stes() 4006 3994 */ 4007 - arm_smmu_make_bypass_ste( 3995 + arm_smmu_make_bypass_ste(smmu, 4008 3996 arm_smmu_get_step_for_sid(smmu, rmr->sids[i])); 4009 3997 } 4010 3998 }
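One part of the SMMU hunk above fixes an endianness bug: STE qwords are `__le64`, so a host-order mask such as `~STRTAB_STE_0_V` must itself go through `cpu_to_le64()` before being ANDed into `entry->data[0]`. A portable userspace sketch of the pattern, with hand-rolled helpers standing in for the kernel's byteorder macros:

```c
#include <stdint.h>

/* Store/load a 64-bit value in little-endian byte order, as device-visible
 * descriptor qwords are, independent of host endianness. */
static void put_le64(uint8_t *p, uint64_t v)
{
	for (int i = 0; i < 8; i++)
		p[i] = (uint8_t)(v >> (8 * i));
}

static uint64_t get_le64(const uint8_t *p)
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v |= (uint64_t)p[i] << (8 * i);
	return v;
}

/* Clear bit 0 (the V bit in this sketch) of an LE-stored qword: convert to
 * host order, mask, convert back. Masking the raw LE bytes with a
 * host-order constant would clear the wrong bit on big-endian hosts. */
static void clear_valid_bit(uint8_t qword[8])
{
	put_le64(qword, get_le64(qword) & ~UINT64_C(1));
}
```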
+2
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
··· 44 44 #define IDR1_TABLES_PRESET (1 << 30) 45 45 #define IDR1_QUEUES_PRESET (1 << 29) 46 46 #define IDR1_REL (1 << 28) 47 + #define IDR1_ATTR_TYPES_OVR (1 << 27) 47 48 #define IDR1_CMDQS GENMASK(25, 21) 48 49 #define IDR1_EVTQS GENMASK(20, 16) 49 50 #define IDR1_PRIQS GENMASK(15, 11) ··· 648 647 #define ARM_SMMU_FEAT_SVA (1 << 17) 649 648 #define ARM_SMMU_FEAT_E2H (1 << 18) 650 649 #define ARM_SMMU_FEAT_NESTING (1 << 19) 650 + #define ARM_SMMU_FEAT_ATTR_TYPES_OVR (1 << 20) 651 651 u32 features; 652 652 653 653 #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
+10 -1
drivers/iommu/iommu.c
··· 3354 3354 { 3355 3355 /* Caller must be a probed driver on dev */ 3356 3356 struct iommu_group *group = dev->iommu_group; 3357 + struct group_device *device; 3357 3358 void *curr; 3358 3359 int ret; 3359 3360 ··· 3364 3363 if (!group) 3365 3364 return -ENODEV; 3366 3365 3367 - if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner) 3366 + if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner || 3367 + pasid == IOMMU_NO_PASID) 3368 3368 return -EINVAL; 3369 3369 3370 3370 mutex_lock(&group->mutex); 3371 + for_each_group_device(group, device) { 3372 + if (pasid >= device->dev->iommu->max_pasids) { 3373 + ret = -EINVAL; 3374 + goto out_unlock; 3375 + } 3376 + } 3377 + 3371 3378 curr = xa_cmpxchg(&group->pasid_array, pasid, NULL, domain, GFP_KERNEL); 3372 3379 if (curr) { 3373 3380 ret = xa_err(curr) ? : -EBUSY;
+1 -1
drivers/irqchip/irq-armada-370-xp.c
··· 316 316 return 0; 317 317 } 318 318 #else 319 - static void armada_370_xp_msi_reenable_percpu(void) {} 319 + static __maybe_unused void armada_370_xp_msi_reenable_percpu(void) {} 320 320 321 321 static inline int armada_370_xp_msi_init(struct device_node *node, 322 322 phys_addr_t main_int_phys_base)
+1 -1
drivers/md/dm-integrity.c
··· 4221 4221 } else if (sscanf(opt_string, "sectors_per_bit:%llu%c", &llval, &dummy) == 1) { 4222 4222 log2_sectors_per_bitmap_bit = !llval ? 0 : __ilog2_u64(llval); 4223 4223 } else if (sscanf(opt_string, "bitmap_flush_interval:%u%c", &val, &dummy) == 1) { 4224 - if (val >= (uint64_t)UINT_MAX * 1000 / HZ) { 4224 + if ((uint64_t)val >= (uint64_t)UINT_MAX * 1000 / HZ) { 4225 4225 r = -EINVAL; 4226 4226 ti->error = "Invalid bitmap_flush_interval argument"; 4227 4227 goto bad;
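The dm-integrity hunk widens `val` to `uint64_t` so the comparison against `(uint64_t)UINT_MAX * 1000 / HZ` is unambiguously done in 64 bits; the limit can exceed 32 bits, and leaving a 32-bit operand in the comparison drew a compiler warning about a range-limited result. A minimal sketch of the widened check, assuming an illustrative tick rate:

```c
#include <limits.h>
#include <stdint.h>

#define HZ_MODEL 100 /* illustrative; the limit depends on HZ */

/* Validate a millisecond interval against UINT_MAX * 1000 / HZ with all
 * arithmetic and the comparison performed in 64 bits, so the limit is
 * never truncated before the test. */
static int flush_interval_ok(uint32_t val)
{
	return (uint64_t)val < (uint64_t)UINT_MAX * 1000 / HZ_MODEL;
}
```

For HZ values at or below 1000 the limit exceeds `UINT_MAX`, so every 32-bit input passes; making that explicit with the cast is exactly what silences the "comparison is always true/false" class of warning.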
+8 -25
drivers/md/dm-vdo/murmurhash3.c
··· 8 8 9 9 #include "murmurhash3.h" 10 10 11 + #include <asm/unaligned.h> 12 + 11 13 static inline u64 rotl64(u64 x, s8 r) 12 14 { 13 15 return (x << r) | (x >> (64 - r)); 14 16 } 15 17 16 18 #define ROTL64(x, y) rotl64(x, y) 17 - static __always_inline u64 getblock64(const u64 *p, int i) 18 - { 19 - #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ 20 - return p[i]; 21 - #elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ 22 - return __builtin_bswap64(p[i]); 23 - #else 24 - #error "can't figure out byte order" 25 - #endif 26 - } 27 - 28 - static __always_inline void putblock64(u64 *p, int i, u64 value) 29 - { 30 - #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ 31 - p[i] = value; 32 - #elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ 33 - p[i] = __builtin_bswap64(value); 34 - #else 35 - #error "can't figure out byte order" 36 - #endif 37 - } 38 19 39 20 /* Finalization mix - force all bits of a hash block to avalanche */ 40 21 ··· 41 60 const u64 c1 = 0x87c37b91114253d5LLU; 42 61 const u64 c2 = 0x4cf5ad432745937fLLU; 43 62 63 + u64 *hash_out = out; 64 + 44 65 /* body */ 45 66 46 67 const u64 *blocks = (const u64 *)(data); ··· 50 67 int i; 51 68 52 69 for (i = 0; i < nblocks; i++) { 53 - u64 k1 = getblock64(blocks, i * 2 + 0); 54 - u64 k2 = getblock64(blocks, i * 2 + 1); 70 + u64 k1 = get_unaligned_le64(&blocks[i * 2]); 71 + u64 k2 = get_unaligned_le64(&blocks[i * 2 + 1]); 55 72 56 73 k1 *= c1; 57 74 k1 = ROTL64(k1, 31); ··· 153 170 h1 += h2; 154 171 h2 += h1; 155 172 156 - putblock64((u64 *)out, 0, h1); 157 - putblock64((u64 *)out, 1, h2); 173 + put_unaligned_le64(h1, &hash_out[0]); 174 + put_unaligned_le64(h2, &hash_out[1]); 158 175 }
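The murmurhash3 rewrite drops the hand-rolled `__BYTE_ORDER__` blocks and direct `u64 *` dereferences in favor of `get_unaligned_le64()`/`put_unaligned_le64()`, which are both safe for unaligned input and byte-order correct on big-endian machines. A portable approximation of the load helper (`memcpy` sidesteps undefined behaviour from misaligned dereferences, and compilers lower it to a plain load where alignment allows):

```c
#include <stdint.h>
#include <string.h>

/* Read a little-endian u64 from a possibly unaligned pointer. */
static uint64_t load_unaligned_le64(const void *p)
{
	uint8_t b[8];
	uint64_t v = 0;

	memcpy(b, p, 8);
	for (int i = 0; i < 8; i++)
		v |= (uint64_t)b[i] << (8 * i);
	return v;
}
```

Casting an arbitrary input buffer to `const u64 *` and indexing it, as the removed `getblock64()` did, only works when the buffer happens to be 8-byte aligned; hash inputs generally are not.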
+2 -2
drivers/mmc/core/block.c
··· 413 413 struct mmc_blk_ioc_data *idata; 414 414 int err; 415 415 416 - idata = kmalloc(sizeof(*idata), GFP_KERNEL); 416 + idata = kzalloc(sizeof(*idata), GFP_KERNEL); 417 417 if (!idata) { 418 418 err = -ENOMEM; 419 419 goto out; ··· 488 488 if (idata->flags & MMC_BLK_IOC_DROP) 489 489 return 0; 490 490 491 - if (idata->flags & MMC_BLK_IOC_SBC) 491 + if (idata->flags & MMC_BLK_IOC_SBC && i > 0) 492 492 prev_idata = idatas[i - 1]; 493 493 494 494 /*
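The mmc block.c hunk carries two small hardening fixes: `kzalloc()` instead of `kmalloc()` so fields that are never explicitly written start out zeroed, and an `i > 0` guard before reading `idatas[i - 1]` so the first iteration cannot index before the array. The guard pattern in isolation; the helper is hypothetical:

```c
#include <stddef.h>

/* Return a pointer to the element preceding index i, or NULL at the start
 * of the array -- mirrors the `i > 0` check added before idatas[i - 1]. */
static const int *prev_elem(const int *arr, size_t i)
{
	return i > 0 ? &arr[i - 1] : NULL;
}
```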
+17 -11
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 999 999 return err; 1000 1000 } 1001 1001 1002 + static void dwcmshc_disable_card_clk(struct sdhci_host *host) 1003 + { 1004 + u16 ctrl; 1005 + 1006 + ctrl = sdhci_readw(host, SDHCI_CLOCK_CONTROL); 1007 + if (ctrl & SDHCI_CLOCK_CARD_EN) { 1008 + ctrl &= ~SDHCI_CLOCK_CARD_EN; 1009 + sdhci_writew(host, ctrl, SDHCI_CLOCK_CONTROL); 1010 + } 1011 + } 1012 + 1002 1013 static void dwcmshc_remove(struct platform_device *pdev) 1003 1014 { 1004 1015 struct sdhci_host *host = platform_get_drvdata(pdev); ··· 1017 1006 struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host); 1018 1007 struct rk35xx_priv *rk_priv = priv->priv; 1019 1008 1009 + pm_runtime_get_sync(&pdev->dev); 1010 + pm_runtime_disable(&pdev->dev); 1011 + pm_runtime_put_noidle(&pdev->dev); 1012 + 1020 1013 sdhci_remove_host(host, 0); 1014 + 1015 + dwcmshc_disable_card_clk(host); 1021 1016 1022 1017 clk_disable_unprepare(pltfm_host->clk); 1023 1018 clk_disable_unprepare(priv->bus_clk); ··· 1112 1095 ctrl = sdhci_readw(host, SDHCI_CLOCK_CONTROL); 1113 1096 if ((ctrl & SDHCI_CLOCK_INT_EN) && !(ctrl & SDHCI_CLOCK_CARD_EN)) { 1114 1097 ctrl |= SDHCI_CLOCK_CARD_EN; 1115 - sdhci_writew(host, ctrl, SDHCI_CLOCK_CONTROL); 1116 - } 1117 - } 1118 - 1119 - static void dwcmshc_disable_card_clk(struct sdhci_host *host) 1120 - { 1121 - u16 ctrl; 1122 - 1123 - ctrl = sdhci_readw(host, SDHCI_CLOCK_CONTROL); 1124 - if (ctrl & SDHCI_CLOCK_CARD_EN) { 1125 - ctrl &= ~SDHCI_CLOCK_CARD_EN; 1126 1098 sdhci_writew(host, ctrl, SDHCI_CLOCK_CONTROL); 1127 1099 } 1128 1100 }
+3
drivers/mmc/host/sdhci-omap.c
··· 1439 1439 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 1440 1440 struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host); 1441 1441 1442 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 1443 + mmc_retune_needed(host->mmc); 1444 + 1442 1445 if (omap_host->con != -EINVAL) 1443 1446 sdhci_runtime_suspend_host(host); 1444 1447
+3 -2
drivers/net/dsa/mt7530.c
··· 2268 2268 SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST | 2269 2269 SYS_CTRL_REG_RST); 2270 2270 2271 - mt7530_pll_setup(priv); 2272 - 2273 2271 /* Lower Tx driving for TRGMII path */ 2274 2272 for (i = 0; i < NUM_TRGMII_CTRL; i++) 2275 2273 mt7530_write(priv, MT7530_TRGMII_TD_ODT(i), ··· 2282 2284 val &= ~MHWTRAP_P6_DIS & ~MHWTRAP_PHY_ACCESS; 2283 2285 val |= MHWTRAP_MANUAL; 2284 2286 mt7530_write(priv, MT7530_MHWTRAP, val); 2287 + 2288 + if ((val & HWTRAP_XTAL_MASK) == HWTRAP_XTAL_40MHZ) 2289 + mt7530_pll_setup(priv); 2285 2290 2286 2291 mt753x_trap_frames(priv); 2287 2292
+20 -23
drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
··· 392 392 umac_wl(intf, 0x0, UMC_CMD); 393 393 umac_wl(intf, UMC_CMD_SW_RESET, UMC_CMD); 394 394 usleep_range(10, 100); 395 - umac_wl(intf, 0x0, UMC_CMD); 395 + /* We hold the umac in reset and bring it out of 396 + * reset when phy link is up. 397 + */ 396 398 } 397 399 398 400 static void umac_set_hw_addr(struct bcmasp_intf *intf, ··· 414 412 u32 reg; 415 413 416 414 reg = umac_rl(intf, UMC_CMD); 415 + if (reg & UMC_CMD_SW_RESET) 416 + return; 417 417 if (enable) 418 418 reg |= mask; 419 419 else ··· 434 430 umac_wl(intf, 0x800, UMC_FRM_LEN); 435 431 umac_wl(intf, 0xffff, UMC_PAUSE_CNTRL); 436 432 umac_wl(intf, 0x800, UMC_RX_MAX_PKT_SZ); 437 - umac_enable_set(intf, UMC_CMD_PROMISC, 1); 438 433 } 439 434 440 435 static int bcmasp_tx_poll(struct napi_struct *napi, int budget) ··· 661 658 UMC_CMD_HD_EN | UMC_CMD_RX_PAUSE_IGNORE | 662 659 UMC_CMD_TX_PAUSE_IGNORE); 663 660 reg |= cmd_bits; 661 + if (reg & UMC_CMD_SW_RESET) { 662 + reg &= ~UMC_CMD_SW_RESET; 663 + umac_wl(intf, reg, UMC_CMD); 664 + udelay(2); 665 + reg |= UMC_CMD_TX_EN | UMC_CMD_RX_EN | UMC_CMD_PROMISC; 666 + } 664 667 umac_wl(intf, reg, UMC_CMD); 665 668 666 669 active = phy_init_eee(phydev, 0) >= 0; ··· 1044 1035 1045 1036 /* Indicate that the MAC is responsible for PHY PM */ 1046 1037 phydev->mac_managed_pm = true; 1047 - } else if (!intf->wolopts) { 1048 - ret = phy_resume(dev->phydev); 1049 - if (ret) 1050 - goto err_phy_disable; 1051 1038 } 1052 1039 1053 1040 umac_reset(intf); 1054 1041 1055 1042 umac_init(intf); 1056 - 1057 - /* Disable the UniMAC RX/TX */ 1058 - umac_enable_set(intf, (UMC_CMD_RX_EN | UMC_CMD_TX_EN), 0); 1059 1043 1060 1044 umac_set_hw_addr(intf, dev->dev_addr); 1061 1045 ··· 1063 1061 bcmasp_init_rx(intf); 1064 1062 netif_napi_add(intf->ndev, &intf->rx_napi, bcmasp_rx_poll); 1065 1063 bcmasp_enable_rx(intf, 1); 1066 - 1067 - /* Turn on UniMAC TX/RX */ 1068 - umac_enable_set(intf, (UMC_CMD_RX_EN | UMC_CMD_TX_EN), 1); 1069 1064 1070 1065 intf->crc_fwd = !!(umac_rl(intf, 
UMC_CMD) & UMC_CMD_CRC_FWD); 1071 1066 ··· 1305 1306 if (intf->wolopts & WAKE_FILTER) 1306 1307 bcmasp_netfilt_suspend(intf); 1307 1308 1308 - /* UniMAC receive needs to be turned on */ 1309 + /* Bring UniMAC out of reset if needed and enable RX */ 1310 + reg = umac_rl(intf, UMC_CMD); 1311 + if (reg & UMC_CMD_SW_RESET) 1312 + reg &= ~UMC_CMD_SW_RESET; 1313 + 1314 + reg |= UMC_CMD_RX_EN | UMC_CMD_PROMISC; 1315 + umac_wl(intf, reg, UMC_CMD); 1316 + 1309 1317 umac_enable_set(intf, UMC_CMD_RX_EN, 1); 1310 1318 1311 1319 if (intf->parent->wol_irq > 0) { ··· 1330 1324 { 1331 1325 struct device *kdev = &intf->parent->pdev->dev; 1332 1326 struct net_device *dev = intf->ndev; 1333 - int ret = 0; 1334 1327 1335 1328 if (!netif_running(dev)) 1336 1329 return 0; ··· 1339 1334 bcmasp_netif_deinit(dev); 1340 1335 1341 1336 if (!intf->wolopts) { 1342 - ret = phy_suspend(dev->phydev); 1343 - if (ret) 1344 - goto out; 1345 - 1346 1337 if (intf->internal_phy) 1347 1338 bcmasp_ephy_enable_set(intf, false); 1348 1339 else ··· 1355 1354 1356 1355 clk_disable_unprepare(intf->parent->clk); 1357 1356 1358 - return ret; 1359 - 1360 - out: 1361 - bcmasp_netif_init(dev, false); 1362 - return ret; 1357 + return 0; 1363 1358 } 1364 1359 1365 1360 static void bcmasp_resume_from_wol(struct bcmasp_intf *intf)
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
··· 85 85 hclge_comm_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_TX_STATS, 86 86 true); 87 87 88 - desc.data[0] = cpu_to_le32(tqp->index & 0x1ff); 88 + desc.data[0] = cpu_to_le32(tqp->index); 89 89 ret = hclge_comm_cmd_send(hw, &desc, 1); 90 90 if (ret) { 91 91 dev_err(&hw->cmq.csq.pdev->dev,
+17 -2
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
··· 78 78 #define HNS3_NIC_LB_TEST_NO_MEM_ERR 1 79 79 #define HNS3_NIC_LB_TEST_TX_CNT_ERR 2 80 80 #define HNS3_NIC_LB_TEST_RX_CNT_ERR 3 81 + #define HNS3_NIC_LB_TEST_UNEXECUTED 4 82 + 83 + static int hns3_get_sset_count(struct net_device *netdev, int stringset); 81 84 82 85 static int hns3_lp_setup(struct net_device *ndev, enum hnae3_loop loop, bool en) 83 86 { ··· 421 418 static void hns3_self_test(struct net_device *ndev, 422 419 struct ethtool_test *eth_test, u64 *data) 423 420 { 421 + int cnt = hns3_get_sset_count(ndev, ETH_SS_TEST); 424 422 struct hns3_nic_priv *priv = netdev_priv(ndev); 425 423 struct hnae3_handle *h = priv->ae_handle; 426 424 int st_param[HNAE3_LOOP_NONE][2]; 427 425 bool if_running = netif_running(ndev); 426 + int i; 427 + 428 + /* initialize the loopback test result, avoid marking an unexcuted 429 + * loopback test as PASS. 430 + */ 431 + for (i = 0; i < cnt; i++) 432 + data[i] = HNS3_NIC_LB_TEST_UNEXECUTED; 428 433 429 434 if (hns3_nic_resetting(ndev)) { 430 435 netdev_err(ndev, "dev resetting!"); 431 - return; 436 + goto failure; 432 437 } 433 438 434 439 if (!(eth_test->flags & ETH_TEST_FL_OFFLINE)) 435 - return; 440 + goto failure; 436 441 437 442 if (netif_msg_ifdown(h)) 438 443 netdev_info(ndev, "self test start\n"); ··· 462 451 463 452 if (netif_msg_ifdown(h)) 464 453 netdev_info(ndev, "self test end\n"); 454 + return; 455 + 456 + failure: 457 + eth_test->flags |= ETH_TEST_FL_FAILED; 465 458 } 466 459 467 460 static void hns3_update_limit_promisc_mode(struct net_device *netdev,
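The hns3_ethtool.c fix pre-fills every result slot with an "unexecuted" sentinel and flags the test as failed on every early-exit path, so a loopback test that never ran can no longer be reported as a pass (0). The sentinel pattern in a minimal userspace sketch; the constant value matches the hunk, the function name does not:

```c
#include <stdint.h>

#define LB_TEST_UNEXECUTED 4 /* distinct from pass (0) and the error codes */

/* Pre-mark all result slots so an early bail-out is distinguishable from a
 * pass, as the hns3 self-test fix does before any test can run. */
static void init_results(uint64_t *data, int cnt)
{
	for (int i = 0; i < cnt; i++)
		data[i] = LB_TEST_UNEXECUTED;
}
```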
+4
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 11626 11626 if (ret) 11627 11627 goto err_pci_uninit; 11628 11628 11629 + devl_lock(hdev->devlink); 11630 + 11629 11631 /* Firmware command queue initialize */ 11630 11632 ret = hclge_comm_cmd_queue_init(hdev->pdev, &hdev->hw.hw); 11631 11633 if (ret) ··· 11807 11805 11808 11806 hclge_task_schedule(hdev, round_jiffies_relative(HZ)); 11809 11807 11808 + devl_unlock(hdev->devlink); 11810 11809 return 0; 11811 11810 11812 11811 err_mdiobus_unreg: ··· 11820 11817 err_cmd_uninit: 11821 11818 hclge_comm_cmd_uninit(hdev->ae_dev, &hdev->hw.hw); 11822 11819 err_devlink_uninit: 11820 + devl_unlock(hdev->devlink); 11823 11821 hclge_devlink_uninit(hdev); 11824 11822 err_pci_uninit: 11825 11823 pcim_iounmap(pdev, hdev->hw.hw.io_base);
+2 -1
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
··· 593 593 struct ice_aqc_recipe_to_profile { 594 594 __le16 profile_id; 595 595 u8 rsvd[6]; 596 - DECLARE_BITMAP(recipe_assoc, ICE_MAX_NUM_RECIPES); 596 + __le64 recipe_assoc; 597 597 }; 598 + static_assert(sizeof(struct ice_aqc_recipe_to_profile) == 16); 598 599 599 600 /* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3) 600 601 */
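The ice_adminq_cmd.h change replaces the `DECLARE_BITMAP` member with a plain `__le64` and pins the firmware descriptor layout with `static_assert(sizeof(...) == 16)`, so any future edit that changes the wire size fails at compile time rather than on the device. The same guard in standard C11, on a hypothetical struct rather than the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* A 16-byte command descriptor whose layout is part of a device ABI:
 * 2-byte id, 6 reserved bytes, one 64-bit field (little-endian on the
 * wire). Natural alignment leaves no padding. */
struct demo_recipe_to_profile {
	uint16_t profile_id;
	uint8_t rsvd[6];
	uint64_t recipe_assoc;
};

/* Compile-time layout check: breaks the build, not the hardware, if the
 * struct ever drifts from the documented size. */
static_assert(sizeof(struct demo_recipe_to_profile) == 16,
	      "descriptor must stay 16 bytes");
```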
+2 -2
drivers/net/ethernet/intel/ice/ice_lag.c
··· 2041 2041 /* associate recipes to profiles */ 2042 2042 for (n = 0; n < ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER; n++) { 2043 2043 err = ice_aq_get_recipe_to_profile(&pf->hw, n, 2044 - (u8 *)&recipe_bits, NULL); 2044 + &recipe_bits, NULL); 2045 2045 if (err) 2046 2046 continue; 2047 2047 ··· 2049 2049 recipe_bits |= BIT(lag->pf_recipe) | 2050 2050 BIT(lag->lport_recipe); 2051 2051 ice_aq_map_recipe_to_profile(&pf->hw, n, 2052 - (u8 *)&recipe_bits, NULL); 2052 + recipe_bits, NULL); 2053 2053 } 2054 2054 } 2055 2055
+9 -9
drivers/net/ethernet/intel/ice/ice_lib.c
··· 3091 3091 { 3092 3092 struct ice_vsi_cfg_params params = {}; 3093 3093 struct ice_coalesce_stored *coalesce; 3094 - int prev_num_q_vectors = 0; 3094 + int prev_num_q_vectors; 3095 3095 struct ice_pf *pf; 3096 3096 int ret; 3097 3097 ··· 3105 3105 if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf)) 3106 3106 return -EINVAL; 3107 3107 3108 - coalesce = kcalloc(vsi->num_q_vectors, 3109 - sizeof(struct ice_coalesce_stored), GFP_KERNEL); 3110 - if (!coalesce) 3111 - return -ENOMEM; 3112 - 3113 - prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); 3114 - 3115 3108 ret = ice_vsi_realloc_stat_arrays(vsi); 3116 3109 if (ret) 3117 3110 goto err_vsi_cfg; ··· 3113 3120 ret = ice_vsi_cfg_def(vsi, &params); 3114 3121 if (ret) 3115 3122 goto err_vsi_cfg; 3123 + 3124 + coalesce = kcalloc(vsi->num_q_vectors, 3125 + sizeof(struct ice_coalesce_stored), GFP_KERNEL); 3126 + if (!coalesce) 3127 + return -ENOMEM; 3128 + 3129 + prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); 3116 3130 3117 3131 ret = ice_vsi_cfg_tc_lan(pf, vsi); 3118 3132 if (ret) { ··· 3139 3139 3140 3140 err_vsi_cfg_tc_lan: 3141 3141 ice_vsi_decfg(vsi); 3142 - err_vsi_cfg: 3143 3142 kfree(coalesce); 3143 + err_vsi_cfg: 3144 3144 return ret; 3145 3145 } 3146 3146
+14 -10
drivers/net/ethernet/intel/ice/ice_switch.c
··· 2025 2025 * ice_aq_map_recipe_to_profile - Map recipe to packet profile 2026 2026 * @hw: pointer to the HW struct 2027 2027 * @profile_id: package profile ID to associate the recipe with 2028 - * @r_bitmap: Recipe bitmap filled in and need to be returned as response 2028 + * @r_assoc: Recipe bitmap filled in and need to be returned as response 2029 2029 * @cd: pointer to command details structure or NULL 2030 2030 * Recipe to profile association (0x0291) 2031 2031 */ 2032 2032 int 2033 - ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, 2033 + ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u64 r_assoc, 2034 2034 struct ice_sq_cd *cd) 2035 2035 { 2036 2036 struct ice_aqc_recipe_to_profile *cmd; ··· 2042 2042 /* Set the recipe ID bit in the bitmask to let the device know which 2043 2043 * profile we are associating the recipe to 2044 2044 */ 2045 - memcpy(cmd->recipe_assoc, r_bitmap, sizeof(cmd->recipe_assoc)); 2045 + cmd->recipe_assoc = cpu_to_le64(r_assoc); 2046 2046 2047 2047 return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); 2048 2048 } ··· 2051 2051 * ice_aq_get_recipe_to_profile - Map recipe to packet profile 2052 2052 * @hw: pointer to the HW struct 2053 2053 * @profile_id: package profile ID to associate the recipe with 2054 - * @r_bitmap: Recipe bitmap filled in and need to be returned as response 2054 + * @r_assoc: Recipe bitmap filled in and need to be returned as response 2055 2055 * @cd: pointer to command details structure or NULL 2056 2056 * Associate profile ID with given recipe (0x0293) 2057 2057 */ 2058 2058 int 2059 - ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, 2059 + ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u64 *r_assoc, 2060 2060 struct ice_sq_cd *cd) 2061 2061 { 2062 2062 struct ice_aqc_recipe_to_profile *cmd; ··· 2069 2069 2070 2070 status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd); 2071 2071 if (!status) 2072 - memcpy(r_bitmap, 
cmd->recipe_assoc, sizeof(cmd->recipe_assoc)); 2072 + *r_assoc = le64_to_cpu(cmd->recipe_assoc); 2073 2073 2074 2074 return status; 2075 2075 } ··· 2108 2108 static void ice_get_recp_to_prof_map(struct ice_hw *hw) 2109 2109 { 2110 2110 DECLARE_BITMAP(r_bitmap, ICE_MAX_NUM_RECIPES); 2111 + u64 recp_assoc; 2111 2112 u16 i; 2112 2113 2113 2114 for (i = 0; i < hw->switch_info->max_used_prof_index + 1; i++) { ··· 2116 2115 2117 2116 bitmap_zero(profile_to_recipe[i], ICE_MAX_NUM_RECIPES); 2118 2117 bitmap_zero(r_bitmap, ICE_MAX_NUM_RECIPES); 2119 - if (ice_aq_get_recipe_to_profile(hw, i, (u8 *)r_bitmap, NULL)) 2118 + if (ice_aq_get_recipe_to_profile(hw, i, &recp_assoc, NULL)) 2120 2119 continue; 2120 + bitmap_from_arr64(r_bitmap, &recp_assoc, ICE_MAX_NUM_RECIPES); 2121 2121 bitmap_copy(profile_to_recipe[i], r_bitmap, 2122 2122 ICE_MAX_NUM_RECIPES); 2123 2123 for_each_set_bit(j, r_bitmap, ICE_MAX_NUM_RECIPES) ··· 5392 5390 */ 5393 5391 list_for_each_entry(fvit, &rm->fv_list, list_entry) { 5394 5392 DECLARE_BITMAP(r_bitmap, ICE_MAX_NUM_RECIPES); 5393 + u64 recp_assoc; 5395 5394 u16 j; 5396 5395 5397 5396 status = ice_aq_get_recipe_to_profile(hw, fvit->profile_id, 5398 - (u8 *)r_bitmap, NULL); 5397 + &recp_assoc, NULL); 5399 5398 if (status) 5400 5399 goto err_unroll; 5401 5400 5401 + bitmap_from_arr64(r_bitmap, &recp_assoc, ICE_MAX_NUM_RECIPES); 5402 5402 bitmap_or(r_bitmap, r_bitmap, rm->r_bitmap, 5403 5403 ICE_MAX_NUM_RECIPES); 5404 5404 status = ice_acquire_change_lock(hw, ICE_RES_WRITE); 5405 5405 if (status) 5406 5406 goto err_unroll; 5407 5407 5408 + bitmap_to_arr64(&recp_assoc, r_bitmap, ICE_MAX_NUM_RECIPES); 5408 5409 status = ice_aq_map_recipe_to_profile(hw, fvit->profile_id, 5409 - (u8 *)r_bitmap, 5410 - NULL); 5410 + recp_assoc, NULL); 5411 5411 ice_release_change_lock(hw); 5412 5412 5413 5413 if (status)
+2 -2
drivers/net/ethernet/intel/ice/ice_switch.h
··· 424 424 struct ice_aqc_recipe_data_elem *s_recipe_list, 425 425 u16 num_recipes, struct ice_sq_cd *cd); 426 426 int 427 - ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, 427 + ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u64 *r_assoc, 428 428 struct ice_sq_cd *cd); 429 429 int 430 - ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, 430 + ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u64 r_assoc, 431 431 struct ice_sq_cd *cd); 432 432 433 433 #endif /* _ICE_SWITCH_H_ */
-4
drivers/net/ethernet/intel/igc/igc_main.c
··· 1642 1642 1643 1643 if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) && 1644 1644 skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) { 1645 - /* FIXME: add support for retrieving timestamps from 1646 - * the other timer registers before skipping the 1647 - * timestamping request. 1648 - */ 1649 1645 unsigned long flags; 1650 1646 u32 tstamp_flags; 1651 1647
+8 -8
drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
··· 914 914 goto err_out; 915 915 } 916 916 917 - xs = kzalloc(sizeof(*xs), GFP_KERNEL); 917 + algo = xfrm_aead_get_byname(aes_gcm_name, IXGBE_IPSEC_AUTH_BITS, 1); 918 + if (unlikely(!algo)) { 919 + err = -ENOENT; 920 + goto err_out; 921 + } 922 + 923 + xs = kzalloc(sizeof(*xs), GFP_ATOMIC); 918 924 if (unlikely(!xs)) { 919 925 err = -ENOMEM; 920 926 goto err_out; ··· 936 930 memcpy(&xs->id.daddr.a4, sam->addr, sizeof(xs->id.daddr.a4)); 937 931 xs->xso.dev = adapter->netdev; 938 932 939 - algo = xfrm_aead_get_byname(aes_gcm_name, IXGBE_IPSEC_AUTH_BITS, 1); 940 - if (unlikely(!algo)) { 941 - err = -ENOENT; 942 - goto err_xs; 943 - } 944 - 945 933 aead_len = sizeof(*xs->aead) + IXGBE_IPSEC_KEY_BITS / 8; 946 - xs->aead = kzalloc(aead_len, GFP_KERNEL); 934 + xs->aead = kzalloc(aead_len, GFP_ATOMIC); 947 935 if (unlikely(!xs->aead)) { 948 936 err = -ENOMEM; 949 937 goto err_xs;
+5
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
··· 808 808 if (!is_lmac_valid(cgx, lmac_id)) 809 809 return -ENODEV; 810 810 811 + cfg = cgx_read(cgx, lmac_id, CGXX_GMP_GMI_RXX_FRM_CTL); 812 + cfg &= ~CGX_GMP_GMI_RXX_FRM_CTL_CTL_BCK; 813 + cfg |= rx_pause ? CGX_GMP_GMI_RXX_FRM_CTL_CTL_BCK : 0x0; 814 + cgx_write(cgx, lmac_id, CGXX_GMP_GMI_RXX_FRM_CTL, cfg); 815 + 811 816 cfg = cgx_read(cgx, lmac_id, CGXX_SMUX_RX_FRM_CTL); 812 817 cfg &= ~CGX_SMUX_RX_FRM_CTL_CTL_BCK; 813 818 cfg |= rx_pause ? CGX_SMUX_RX_FRM_CTL_CTL_BCK : 0x0;
+14 -7
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
··· 139 139 control |= MLXBF_GIGE_CONTROL_PORT_EN; 140 140 writeq(control, priv->base + MLXBF_GIGE_CONTROL); 141 141 142 - err = mlxbf_gige_request_irqs(priv); 143 - if (err) 144 - return err; 145 142 mlxbf_gige_cache_stats(priv); 146 143 err = mlxbf_gige_clean_port(priv); 147 144 if (err) 148 - goto free_irqs; 145 + return err; 149 146 150 147 /* Clear driver's valid_polarity to match hardware, 151 148 * since the above call to clean_port() resets the ··· 154 157 155 158 err = mlxbf_gige_tx_init(priv); 156 159 if (err) 157 - goto free_irqs; 160 + goto phy_deinit; 158 161 err = mlxbf_gige_rx_init(priv); 159 162 if (err) 160 163 goto tx_deinit; ··· 162 165 netif_napi_add(netdev, &priv->napi, mlxbf_gige_poll); 163 166 napi_enable(&priv->napi); 164 167 netif_start_queue(netdev); 168 + 169 + err = mlxbf_gige_request_irqs(priv); 170 + if (err) 171 + goto napi_deinit; 165 172 166 173 /* Set bits in INT_EN that we care about */ 167 174 int_en = MLXBF_GIGE_INT_EN_HW_ACCESS_ERROR | ··· 183 182 184 183 return 0; 185 184 185 + napi_deinit: 186 + netif_stop_queue(netdev); 187 + napi_disable(&priv->napi); 188 + netif_napi_del(&priv->napi); 189 + mlxbf_gige_rx_deinit(priv); 190 + 186 191 tx_deinit: 187 192 mlxbf_gige_tx_deinit(priv); 188 193 189 - free_irqs: 190 - mlxbf_gige_free_irqs(priv); 194 + phy_deinit: 195 + phy_stop(phydev); 191 196 return err; 192 197 } 193 198
+18
drivers/net/ethernet/microchip/lan743x_main.c
··· 25 25 #define PCS_POWER_STATE_DOWN 0x6 26 26 #define PCS_POWER_STATE_UP 0x4 27 27 28 + #define RFE_RD_FIFO_TH_3_DWORDS 0x3 29 + 28 30 static void pci11x1x_strap_get_status(struct lan743x_adapter *adapter) 29 31 { 30 32 u32 chip_rev; ··· 3274 3272 lan743x_pci_cleanup(adapter); 3275 3273 } 3276 3274 3275 + static void pci11x1x_set_rfe_rd_fifo_threshold(struct lan743x_adapter *adapter) 3276 + { 3277 + u16 rev = adapter->csr.id_rev & ID_REV_CHIP_REV_MASK_; 3278 + 3279 + if (rev == ID_REV_CHIP_REV_PCI11X1X_B0_) { 3280 + u32 misc_ctl; 3281 + 3282 + misc_ctl = lan743x_csr_read(adapter, MISC_CTL_0); 3283 + misc_ctl &= ~MISC_CTL_0_RFE_READ_FIFO_MASK_; 3284 + misc_ctl |= FIELD_PREP(MISC_CTL_0_RFE_READ_FIFO_MASK_, 3285 + RFE_RD_FIFO_TH_3_DWORDS); 3286 + lan743x_csr_write(adapter, MISC_CTL_0, misc_ctl); 3287 + } 3288 + } 3289 + 3277 3290 static int lan743x_hardware_init(struct lan743x_adapter *adapter, 3278 3291 struct pci_dev *pdev) 3279 3292 { ··· 3304 3287 pci11x1x_strap_get_status(adapter); 3305 3288 spin_lock_init(&adapter->eth_syslock_spinlock); 3306 3289 mutex_init(&adapter->sgmii_rw_lock); 3290 + pci11x1x_set_rfe_rd_fifo_threshold(adapter); 3307 3291 } else { 3308 3292 adapter->max_tx_channels = LAN743X_MAX_TX_CHANNELS; 3309 3293 adapter->used_tx_channels = LAN743X_USED_TX_CHANNELS;
+4
drivers/net/ethernet/microchip/lan743x_main.h
··· 26 26 #define ID_REV_CHIP_REV_MASK_ (0x0000FFFF) 27 27 #define ID_REV_CHIP_REV_A0_ (0x00000000) 28 28 #define ID_REV_CHIP_REV_B0_ (0x00000010) 29 + #define ID_REV_CHIP_REV_PCI11X1X_B0_ (0x000000B0) 29 30 30 31 #define FPGA_REV (0x04) 31 32 #define FPGA_REV_GET_MINOR_(fpga_rev) (((fpga_rev) >> 8) & 0x000000FF) ··· 311 310 #define SGMII_CTL_SGMII_ENABLE_ BIT(31) 312 311 #define SGMII_CTL_LINK_STATUS_SOURCE_ BIT(8) 313 312 #define SGMII_CTL_SGMII_POWER_DN_ BIT(1) 313 + 314 + #define MISC_CTL_0 (0x920) 315 + #define MISC_CTL_0_RFE_READ_FIFO_MASK_ GENMASK(6, 4) 314 316 315 317 /* Vendor Specific SGMII MMD details */ 316 318 #define SR_VSMMD_PCS_ID1 0x0004
+1 -1
drivers/net/ethernet/renesas/sh_eth.c
··· 50 50 * the macros available to do this only define GCC 8. 51 51 */ 52 52 __diag_push(); 53 - __diag_ignore(GCC, 8, "-Woverride-init", 53 + __diag_ignore_all("-Woverride-init", 54 54 "logic to initialize all and then override some is OK"); 55 55 static const u16 sh_eth_offset_gigabit[SH_ETH_MAX_REGISTER_OFFSET] = { 56 56 SH_ETH_OFFSET_DEFAULTS,
+1 -1
drivers/net/ethernet/xilinx/ll_temac_main.c
··· 1443 1443 } 1444 1444 1445 1445 /* map device registers */ 1446 - lp->regs = devm_platform_ioremap_resource_byname(pdev, 0); 1446 + lp->regs = devm_platform_ioremap_resource(pdev, 0); 1447 1447 if (IS_ERR(lp->regs)) { 1448 1448 dev_err(&pdev->dev, "could not map TEMAC registers\n"); 1449 1449 return -ENOMEM;
+3 -1
drivers/net/phy/qcom/at803x.c
··· 797 797 798 798 static int at8031_probe(struct phy_device *phydev) 799 799 { 800 - struct at803x_priv *priv = phydev->priv; 800 + struct at803x_priv *priv; 801 801 int mode_cfg; 802 802 int ccr; 803 803 int ret; ··· 805 805 ret = at803x_probe(phydev); 806 806 if (ret) 807 807 return ret; 808 + 809 + priv = phydev->priv; 808 810 809 811 /* Only supported on AR8031/AR8033, the AR8030/AR8035 use strapping 810 812 * options.
+8 -7
drivers/net/wireless/intel/iwlwifi/fw/dbg.c
··· 3081 3081 struct iwl_fw_dbg_params params = {0}; 3082 3082 struct iwl_fwrt_dump_data *dump_data = 3083 3083 &fwrt->dump.wks[wk_idx].dump_data; 3084 - u32 policy; 3085 - u32 time_point; 3086 3084 if (!test_bit(wk_idx, &fwrt->dump.active_wks)) 3087 3085 return; 3088 3086 ··· 3111 3113 3112 3114 iwl_fw_dbg_stop_restart_recording(fwrt, &params, false); 3113 3115 3114 - policy = le32_to_cpu(dump_data->trig->apply_policy); 3115 - time_point = le32_to_cpu(dump_data->trig->time_point); 3116 + if (iwl_trans_dbg_ini_valid(fwrt->trans)) { 3117 + u32 policy = le32_to_cpu(dump_data->trig->apply_policy); 3118 + u32 time_point = le32_to_cpu(dump_data->trig->time_point); 3116 3119 3117 - if (policy & IWL_FW_INI_APPLY_POLICY_DUMP_COMPLETE_CMD) { 3118 - IWL_DEBUG_FW_INFO(fwrt, "WRT: sending dump complete\n"); 3119 - iwl_send_dbg_dump_complete_cmd(fwrt, time_point, 0); 3120 + if (policy & IWL_FW_INI_APPLY_POLICY_DUMP_COMPLETE_CMD) { 3121 + IWL_DEBUG_FW_INFO(fwrt, "WRT: sending dump complete\n"); 3122 + iwl_send_dbg_dump_complete_cmd(fwrt, time_point, 0); 3123 + } 3120 3124 } 3125 + 3121 3126 if (fwrt->trans->dbg.last_tp_resetfw == IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY) 3122 3127 iwl_force_nmi(fwrt->trans); 3123 3128
+9 -7
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 1260 1260 if (IS_ERR_OR_NULL(vif)) 1261 1261 return 1; 1262 1262 1263 - if (ieee80211_vif_is_mld(vif) && vif->cfg.assoc) { 1263 + if (hweight16(vif->active_links) > 1) { 1264 1264 /* 1265 - * Select the 'best' link. May need to revisit, it seems 1266 - * better to not optimize for throughput but rather range, 1267 - * reliability and power here - and select 2.4 GHz ... 1265 + * Select the 'best' link. 1266 + * May need to revisit, it seems better to not optimize 1267 + * for throughput but rather range, reliability and 1268 + * power here - and select 2.4 GHz ... 1268 1269 */ 1269 - primary_link = 1270 - iwl_mvm_mld_get_primary_link(mvm, vif, 1271 - vif->active_links); 1270 + primary_link = iwl_mvm_mld_get_primary_link(mvm, vif, 1271 + vif->active_links); 1272 1272 1273 1273 if (WARN_ONCE(primary_link < 0, "no primary link in 0x%x\n", 1274 1274 vif->active_links)) ··· 1277 1277 ret = ieee80211_set_active_links(vif, BIT(primary_link)); 1278 1278 if (ret) 1279 1279 return ret; 1280 + } else if (vif->active_links) { 1281 + primary_link = __ffs(vif->active_links); 1280 1282 } else { 1281 1283 primary_link = 0; 1282 1284 }
+7 -4
drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
··· 748 748 { 749 749 struct dentry *dbgfs_dir = vif->debugfs_dir; 750 750 struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 751 - char buf[100]; 751 + char buf[3 * 3 + 11 + (NL80211_WIPHY_NAME_MAXLEN + 1) + 752 + (7 + IFNAMSIZ + 1) + 6 + 1]; 753 + char name[7 + IFNAMSIZ + 1]; 752 754 753 755 /* this will happen in monitor mode */ 754 756 if (!dbgfs_dir) ··· 763 761 * find 764 762 * netdev:wlan0 -> ../../../ieee80211/phy0/netdev:wlan0/iwlmvm/ 765 763 */ 766 - snprintf(buf, 100, "../../../%pd3/iwlmvm", dbgfs_dir); 764 + snprintf(name, sizeof(name), "%pd", dbgfs_dir); 765 + snprintf(buf, sizeof(buf), "../../../%pd3/iwlmvm", dbgfs_dir); 767 766 768 - mvmvif->dbgfs_slink = debugfs_create_symlink(dbgfs_dir->d_name.name, 769 - mvm->debugfs_dir, buf); 767 + mvmvif->dbgfs_slink = 768 + debugfs_create_symlink(name, mvm->debugfs_dir, buf); 770 769 } 771 770 772 771 void iwl_mvm_vif_dbgfs_rm_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+45 -14
drivers/net/wireless/intel/iwlwifi/mvm/link.c
··· 46 46 return ret; 47 47 } 48 48 49 + int iwl_mvm_set_link_mapping(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 50 + struct ieee80211_bss_conf *link_conf) 51 + { 52 + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 53 + struct iwl_mvm_vif_link_info *link_info = 54 + mvmvif->link[link_conf->link_id]; 55 + 56 + if (link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID) { 57 + link_info->fw_link_id = iwl_mvm_get_free_fw_link_id(mvm, 58 + mvmvif); 59 + if (link_info->fw_link_id >= 60 + ARRAY_SIZE(mvm->link_id_to_link_conf)) 61 + return -EINVAL; 62 + 63 + rcu_assign_pointer(mvm->link_id_to_link_conf[link_info->fw_link_id], 64 + link_conf); 65 + } 66 + 67 + return 0; 68 + } 69 + 49 70 int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 50 71 struct ieee80211_bss_conf *link_conf) 51 72 { ··· 76 55 struct iwl_link_config_cmd cmd = {}; 77 56 unsigned int cmd_id = WIDE_ID(MAC_CONF_GROUP, LINK_CONFIG_CMD); 78 57 u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 1); 58 + int ret; 79 59 80 60 if (WARN_ON_ONCE(!link_info)) 81 61 return -EINVAL; 82 62 83 - if (link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID) { 84 - link_info->fw_link_id = iwl_mvm_get_free_fw_link_id(mvm, 85 - mvmvif); 86 - if (link_info->fw_link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf)) 87 - return -EINVAL; 88 - 89 - rcu_assign_pointer(mvm->link_id_to_link_conf[link_info->fw_link_id], 90 - link_conf); 91 - } 63 + ret = iwl_mvm_set_link_mapping(mvm, vif, link_conf); 64 + if (ret) 65 + return ret; 92 66 93 67 /* Update SF - Disable if needed. if this fails, SF might still be on 94 68 * while many macs are bound, which is forbidden - so fail the binding. 
··· 264 248 return ret; 265 249 } 266 250 251 + int iwl_mvm_unset_link_mapping(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 252 + struct ieee80211_bss_conf *link_conf) 253 + { 254 + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 255 + struct iwl_mvm_vif_link_info *link_info = 256 + mvmvif->link[link_conf->link_id]; 257 + 258 + /* mac80211 thought we have the link, but it was never configured */ 259 + if (WARN_ON(!link_info || 260 + link_info->fw_link_id >= 261 + ARRAY_SIZE(mvm->link_id_to_link_conf))) 262 + return -EINVAL; 263 + 264 + RCU_INIT_POINTER(mvm->link_id_to_link_conf[link_info->fw_link_id], 265 + NULL); 266 + return 0; 267 + } 268 + 267 269 int iwl_mvm_remove_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 268 270 struct ieee80211_bss_conf *link_conf) 269 271 { ··· 291 257 struct iwl_link_config_cmd cmd = {}; 292 258 int ret; 293 259 294 - /* mac80211 thought we have the link, but it was never configured */ 295 - if (WARN_ON(!link_info || 296 - link_info->fw_link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf))) 260 + ret = iwl_mvm_unset_link_mapping(mvm, vif, link_conf); 261 + if (ret) 297 262 return 0; 298 263 299 - RCU_INIT_POINTER(mvm->link_id_to_link_conf[link_info->fw_link_id], 300 - NULL); 301 264 cmd.link_id = cpu_to_le32(link_info->fw_link_id); 302 265 iwl_mvm_release_fw_link_id(mvm, link_info->fw_link_id); 303 266 link_info->fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
+8 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 360 360 if (mvm->mld_api_is_used && mvm->nvm_data->sku_cap_11be_enable && 361 361 !iwlwifi_mod_params.disable_11ax && 362 362 !iwlwifi_mod_params.disable_11be) 363 - hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_MLO; 363 + hw->wiphy->flags |= WIPHY_FLAG_DISABLE_WEXT; 364 364 365 365 /* With MLD FW API, it tracks timing by itself, 366 366 * no need for any timing from the host ··· 1577 1577 mvmvif->mvm = mvm; 1578 1578 1579 1579 /* the first link always points to the default one */ 1580 + mvmvif->deflink.fw_link_id = IWL_MVM_FW_LINK_ID_INVALID; 1581 + mvmvif->deflink.active = 0; 1580 1582 mvmvif->link[0] = &mvmvif->deflink; 1583 + 1584 + ret = iwl_mvm_set_link_mapping(mvm, vif, &vif->bss_conf); 1585 + if (ret) 1586 + goto out; 1581 1587 1582 1588 /* 1583 1589 * Not much to do here. The stack will not allow interface ··· 1789 1783 mvm->p2p_device_vif = NULL; 1790 1784 } 1791 1785 1786 + iwl_mvm_unset_link_mapping(mvm, vif, &vif->bss_conf); 1792 1787 iwl_mvm_mac_ctxt_remove(mvm, vif); 1793 1788 1794 1789 RCU_INIT_POINTER(mvm->vif_id_to_mac[mvmvif->id], NULL);
+6 -1
drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c
··· 855 855 856 856 int iwl_mvm_mld_rm_sta_id(struct iwl_mvm *mvm, u8 sta_id) 857 857 { 858 - int ret = iwl_mvm_mld_rm_sta_from_fw(mvm, sta_id); 858 + int ret; 859 859 860 860 lockdep_assert_held(&mvm->mutex); 861 + 862 + if (WARN_ON(sta_id == IWL_MVM_INVALID_STA)) 863 + return 0; 864 + 865 + ret = iwl_mvm_mld_rm_sta_from_fw(mvm, sta_id); 861 866 862 867 RCU_INIT_POINTER(mvm->fw_id_to_mac_id[sta_id], NULL); 863 868 RCU_INIT_POINTER(mvm->fw_id_to_link_sta[sta_id], NULL);
+4
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1916 1916 u32 iwl_mvm_get_lmac_id(struct iwl_mvm *mvm, enum nl80211_band band); 1917 1917 1918 1918 /* Links */ 1919 + int iwl_mvm_set_link_mapping(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1920 + struct ieee80211_bss_conf *link_conf); 1919 1921 int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1920 1922 struct ieee80211_bss_conf *link_conf); 1921 1923 int iwl_mvm_link_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1922 1924 struct ieee80211_bss_conf *link_conf, 1923 1925 u32 changes, bool active); 1926 + int iwl_mvm_unset_link_mapping(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1927 + struct ieee80211_bss_conf *link_conf); 1924 1928 int iwl_mvm_remove_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1925 1929 struct ieee80211_bss_conf *link_conf); 1926 1930 int iwl_mvm_disable_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+6 -2
drivers/net/wireless/intel/iwlwifi/mvm/rfi.c
··· 132 132 if (ret) 133 133 return ERR_PTR(ret); 134 134 135 - if (WARN_ON_ONCE(iwl_rx_packet_payload_len(cmd.resp_pkt) != resp_size)) 135 + if (WARN_ON_ONCE(iwl_rx_packet_payload_len(cmd.resp_pkt) != 136 + resp_size)) { 137 + iwl_free_resp(&cmd); 136 138 return ERR_PTR(-EIO); 139 + } 137 140 138 141 resp = kmemdup(cmd.resp_pkt->data, resp_size, GFP_KERNEL); 142 + iwl_free_resp(&cmd); 143 + 139 144 if (!resp) 140 145 return ERR_PTR(-ENOMEM); 141 146 142 - iwl_free_resp(&cmd); 143 147 return resp; 144 148 } 145 149
+8 -12
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 236 236 static void iwl_mvm_pass_packet_to_mac80211(struct iwl_mvm *mvm, 237 237 struct napi_struct *napi, 238 238 struct sk_buff *skb, int queue, 239 - struct ieee80211_sta *sta, 240 - struct ieee80211_link_sta *link_sta) 239 + struct ieee80211_sta *sta) 241 240 { 242 241 if (unlikely(iwl_mvm_check_pn(mvm, skb, queue, sta))) { 243 242 kfree_skb(skb); 244 243 return; 245 - } 246 - 247 - if (sta && sta->valid_links && link_sta) { 248 - struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb); 249 - 250 - rx_status->link_valid = 1; 251 - rx_status->link_id = link_sta->link_id; 252 244 } 253 245 254 246 ieee80211_rx_napi(mvm->hw, sta, skb, napi); ··· 580 588 while ((skb = __skb_dequeue(skb_list))) { 581 589 iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, 582 590 reorder_buf->queue, 583 - sta, NULL /* FIXME */); 591 + sta); 584 592 reorder_buf->num_stored--; 585 593 } 586 594 } ··· 2205 2213 if (IS_ERR(sta)) 2206 2214 sta = NULL; 2207 2215 link_sta = rcu_dereference(mvm->fw_id_to_link_sta[id]); 2216 + 2217 + if (sta && sta->valid_links && link_sta) { 2218 + rx_status->link_valid = 1; 2219 + rx_status->link_id = link_sta->link_id; 2220 + } 2208 2221 } 2209 2222 } else if (!is_multicast_ether_addr(hdr->addr2)) { 2210 2223 /* ··· 2353 2356 !(desc->amsdu_info & IWL_RX_MPDU_AMSDU_LAST_SUBFRAME)) 2354 2357 rx_status->flag |= RX_FLAG_AMSDU_MORE; 2355 2358 2356 - iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue, sta, 2357 - link_sta); 2359 + iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue, sta); 2358 2360 } 2359 2361 out: 2360 2362 rcu_read_unlock();
+2 -3
drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
··· 879 879 struct iwl_rx_packet *pkt = rxb_addr(rxb); 880 880 struct iwl_mvm_session_prot_notif *notif = (void *)pkt->data; 881 881 unsigned int ver = 882 - iwl_fw_lookup_cmd_ver(mvm->fw, 883 - WIDE_ID(MAC_CONF_GROUP, 884 - SESSION_PROTECTION_CMD), 2); 882 + iwl_fw_lookup_notif_ver(mvm->fw, MAC_CONF_GROUP, 883 + SESSION_PROTECTION_NOTIF, 2); 885 884 int id = le32_to_cpu(notif->mac_link_id); 886 885 struct ieee80211_vif *vif; 887 886 struct iwl_mvm_vif *mvmvif;
+1 -1
drivers/net/wireless/intel/iwlwifi/queue/tx.c
··· 1589 1589 return; 1590 1590 1591 1591 tfd_num = iwl_txq_get_cmd_index(txq, ssn); 1592 - read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr); 1593 1592 1594 1593 spin_lock_bh(&txq->lock); 1594 + read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr); 1595 1595 1596 1596 if (!test_bit(txq_id, trans->txqs.queue_used)) { 1597 1597 IWL_DEBUG_TX_QUEUES(trans, "Q %d inactive - ignoring idx %d\n",
+1 -1
drivers/net/wireless/realtek/rtw89/rtw8922a.c
··· 2233 2233 * Shared-Ant && BTG-path:WL mask(0x55f), others:WL THRU(0x5ff) 2234 2234 */ 2235 2235 if (btc->ant_type == BTC_ANT_SHARED && btc->btg_pos == path) 2236 - rtw8922a_set_trx_mask(rtwdev, path, BTC_BT_TX_GROUP, 0x5ff); 2236 + rtw8922a_set_trx_mask(rtwdev, path, BTC_BT_TX_GROUP, 0x55f); 2237 2237 else 2238 2238 rtw8922a_set_trx_mask(rtwdev, path, BTC_BT_TX_GROUP, 0x5ff); 2239 2239
+2 -2
drivers/net/wwan/t7xx/t7xx_cldma.c
··· 106 106 { 107 107 u32 offset = REG_CLDMA_UL_START_ADDRL_0 + qno * ADDR_SIZE; 108 108 109 - return ioread64(hw_info->ap_pdn_base + offset); 109 + return ioread64_lo_hi(hw_info->ap_pdn_base + offset); 110 110 } 111 111 112 112 void t7xx_cldma_hw_set_start_addr(struct t7xx_cldma_hw *hw_info, unsigned int qno, u64 address, ··· 117 117 118 118 reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_DL_START_ADDRL_0 : 119 119 hw_info->ap_pdn_base + REG_CLDMA_UL_START_ADDRL_0; 120 - iowrite64(address, reg + offset); 120 + iowrite64_lo_hi(address, reg + offset); 121 121 } 122 122 123 123 void t7xx_cldma_hw_resume_queue(struct t7xx_cldma_hw *hw_info, unsigned int qno,
+5 -4
drivers/net/wwan/t7xx/t7xx_hif_cldma.c
··· 137 137 return -ENODEV; 138 138 } 139 139 140 - gpd_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_DL_CURRENT_ADDRL_0 + 141 - queue->index * sizeof(u64)); 140 + gpd_addr = ioread64_lo_hi(hw_info->ap_pdn_base + 141 + REG_CLDMA_DL_CURRENT_ADDRL_0 + 142 + queue->index * sizeof(u64)); 142 143 if (req->gpd_addr == gpd_addr || hwo_polling_count++ >= 100) 143 144 return 0; 144 145 ··· 317 316 struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; 318 317 319 318 /* Check current processing TGPD, 64-bit address is in a table by Q index */ 320 - ul_curr_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_UL_CURRENT_ADDRL_0 + 321 - queue->index * sizeof(u64)); 319 + ul_curr_addr = ioread64_lo_hi(hw_info->ap_pdn_base + REG_CLDMA_UL_CURRENT_ADDRL_0 + 320 + queue->index * sizeof(u64)); 322 321 if (req->gpd_addr != ul_curr_addr) { 323 322 spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); 324 323 dev_err(md_ctrl->dev, "CLDMA%d queue %d is not empty\n",
+4 -4
drivers/net/wwan/t7xx/t7xx_pcie_mac.c
··· 75 75 for (i = 0; i < ATR_TABLE_NUM_PER_ATR; i++) { 76 76 offset = ATR_PORT_OFFSET * port + ATR_TABLE_OFFSET * i; 77 77 reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset; 78 - iowrite64(0, reg); 78 + iowrite64_lo_hi(0, reg); 79 79 } 80 80 } 81 81 ··· 112 112 113 113 reg = pbase + ATR_PCIE_WIN0_T0_TRSL_ADDR + offset; 114 114 value = cfg->trsl_addr & ATR_PCIE_WIN0_ADDR_ALGMT; 115 - iowrite64(value, reg); 115 + iowrite64_lo_hi(value, reg); 116 116 117 117 reg = pbase + ATR_PCIE_WIN0_T0_TRSL_PARAM + offset; 118 118 iowrite32(cfg->trsl_id, reg); 119 119 120 120 reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset; 121 121 value = (cfg->src_addr & ATR_PCIE_WIN0_ADDR_ALGMT) | (atr_size << 1) | BIT(0); 122 - iowrite64(value, reg); 122 + iowrite64_lo_hi(value, reg); 123 123 124 124 /* Ensure ATR is set */ 125 - ioread64(reg); 125 + ioread64_lo_hi(reg); 126 126 return 0; 127 127 } 128 128
+1 -1
drivers/pinctrl/aspeed/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 # Aspeed pinctrl support 3 3 4 - ccflags-y += $(call cc-option,-Woverride-init) 4 + ccflags-y += -Woverride-init 5 5 obj-$(CONFIG_PINCTRL_ASPEED) += pinctrl-aspeed.o pinmux-aspeed.o 6 6 obj-$(CONFIG_PINCTRL_ASPEED_G4) += pinctrl-aspeed-g4.o 7 7 obj-$(CONFIG_PINCTRL_ASPEED_G5) += pinctrl-aspeed-g5.o
+1 -1
drivers/pinctrl/pinctrl-amd.c
··· 1159 1159 } 1160 1160 1161 1161 ret = devm_request_irq(&pdev->dev, gpio_dev->irq, amd_gpio_irq_handler, 1162 - IRQF_SHARED | IRQF_ONESHOT, KBUILD_MODNAME, gpio_dev); 1162 + IRQF_SHARED | IRQF_COND_ONESHOT, KBUILD_MODNAME, gpio_dev); 1163 1163 if (ret) 1164 1164 goto out2; 1165 1165
+2 -2
drivers/pwm/pwm-img.c
··· 284 284 return PTR_ERR(imgchip->sys_clk); 285 285 } 286 286 287 - imgchip->pwm_clk = devm_clk_get(&pdev->dev, "imgchip"); 287 + imgchip->pwm_clk = devm_clk_get(&pdev->dev, "pwm"); 288 288 if (IS_ERR(imgchip->pwm_clk)) { 289 - dev_err(&pdev->dev, "failed to get imgchip clock\n"); 289 + dev_err(&pdev->dev, "failed to get pwm clock\n"); 290 290 return PTR_ERR(imgchip->pwm_clk); 291 291 } 292 292
+39 -18
drivers/ras/amd/fmpm.c
··· 150 150 /* Total length of record including headers and list of descriptor entries. */ 151 151 static size_t max_rec_len; 152 152 153 + #define FMPM_MAX_REC_LEN (sizeof(struct fru_rec) + (sizeof(struct cper_fru_poison_desc) * 255)) 154 + 153 155 /* Total number of SPA entries across all FRUs. */ 154 156 static unsigned int spa_nr_entries; 155 157 ··· 477 475 struct cper_section_descriptor *sec_desc = &rec->sec_desc; 478 476 struct cper_record_header *hdr = &rec->hdr; 479 477 478 + /* 479 + * This is a saved record created with fewer max_nr_entries. 480 + * Update the record lengths and keep everything else as-is. 481 + */ 482 + if (hdr->record_length && hdr->record_length < max_rec_len) { 483 + pr_debug("Growing record 0x%016llx from %u to %zu bytes\n", 484 + hdr->record_id, hdr->record_length, max_rec_len); 485 + goto update_lengths; 486 + } 487 + 480 488 memcpy(hdr->signature, CPER_SIG_RECORD, CPER_SIG_SIZE); 481 489 hdr->revision = CPER_RECORD_REV; 482 490 hdr->signature_end = CPER_SIG_END; ··· 501 489 hdr->error_severity = CPER_SEV_RECOVERABLE; 502 490 503 491 hdr->validation_bits = 0; 504 - hdr->record_length = max_rec_len; 505 492 hdr->creator_id = CPER_CREATOR_FMP; 506 493 hdr->notification_type = CPER_NOTIFY_MCE; 507 494 hdr->record_id = cper_next_record_id(); 508 495 hdr->flags = CPER_HW_ERROR_FLAGS_PREVERR; 509 496 510 497 sec_desc->section_offset = sizeof(struct cper_record_header); 511 - sec_desc->section_length = max_rec_len - sizeof(struct cper_record_header); 512 498 sec_desc->revision = CPER_SEC_REV; 513 499 sec_desc->validation_bits = 0; 514 500 sec_desc->flags = CPER_SEC_PRIMARY; 515 501 sec_desc->section_type = CPER_SECTION_TYPE_FMP; 516 502 sec_desc->section_severity = CPER_SEV_RECOVERABLE; 503 + 504 + update_lengths: 505 + hdr->record_length = max_rec_len; 506 + sec_desc->section_length = max_rec_len - sizeof(struct cper_record_header); 517 507 } 518 508 519 509 static int save_new_records(void) ··· 526 512 int ret = 0; 527 513 528 514 
for_each_fru(i, rec) { 529 - if (rec->hdr.record_length) 515 + /* No need to update saved records that match the current record size. */ 516 + if (rec->hdr.record_length == max_rec_len) 530 517 continue; 518 + 519 + if (!rec->hdr.record_length) 520 + set_bit(i, new_records); 531 521 532 522 set_rec_fields(rec); 533 523 534 524 ret = update_record_on_storage(rec); 535 525 if (ret) 536 526 goto out_clear; 537 - 538 - set_bit(i, new_records); 539 527 } 540 528 541 529 return ret; ··· 657 641 int ret, pos; 658 642 ssize_t len; 659 643 660 - /* 661 - * Assume saved records match current max size. 662 - * 663 - * However, this may not be true depending on module parameters. 664 - */ 665 - old = kmalloc(max_rec_len, GFP_KERNEL); 644 + old = kmalloc(FMPM_MAX_REC_LEN, GFP_KERNEL); 666 645 if (!old) { 667 646 ret = -ENOMEM; 668 647 goto out; ··· 674 663 * Make sure to clear temporary buffer between reads to avoid 675 664 * leftover data from records of various sizes. 676 665 */ 677 - memset(old, 0, max_rec_len); 666 + memset(old, 0, FMPM_MAX_REC_LEN); 678 667 679 - len = erst_read_record(record_id, &old->hdr, max_rec_len, 668 + len = erst_read_record(record_id, &old->hdr, FMPM_MAX_REC_LEN, 680 669 sizeof(struct fru_rec), &CPER_CREATOR_FMP); 681 670 if (len < 0) 682 671 continue; 683 672 684 - if (len > max_rec_len) { 685 - pr_debug("Found record larger than max_rec_len\n"); 673 + new = get_valid_record(old); 674 + if (!new) { 675 + erst_clear(record_id); 686 676 continue; 687 677 } 688 678 689 - new = get_valid_record(old); 690 - if (!new) 691 - erst_clear(record_id); 679 + if (len > max_rec_len) { 680 + unsigned int saved_nr_entries; 681 + 682 + saved_nr_entries = len - sizeof(struct fru_rec); 683 + saved_nr_entries /= sizeof(struct cper_fru_poison_desc); 684 + 685 + pr_warn("Saved record found with %u entries.\n", saved_nr_entries); 686 + pr_warn("Please increase max_nr_entries to %u.\n", saved_nr_entries); 687 + 688 + ret = -EINVAL; 689 + goto out_end; 690 + } 692 691 693 
692 /* Restore the record */ 694 693 memcpy(new, old, len);
+4
drivers/ras/debugfs.h
··· 4 4 5 5 #include <linux/debugfs.h> 6 6 7 + #if IS_ENABLED(CONFIG_DEBUG_FS) 7 8 struct dentry *ras_get_debugfs_root(void); 9 + #else 10 + static inline struct dentry *ras_get_debugfs_root(void) { return NULL; } 11 + #endif /* DEBUG_FS */ 8 12 9 13 #endif /* __RAS_DEBUGFS_H__ */
+36 -2
drivers/s390/net/qeth_core_main.c
··· 1179 1179 } 1180 1180 } 1181 1181 1182 + /** 1183 + * qeth_irq() - qeth interrupt handler 1184 + * @cdev: ccw device 1185 + * @intparm: expect pointer to iob 1186 + * @irb: Interruption Response Block 1187 + * 1188 + * In the good path: 1189 + * corresponding qeth channel is locked with last used iob as active_cmd. 1190 + * But this function is also called for error interrupts. 1191 + * 1192 + * Caller ensures that: 1193 + * Interrupts are disabled; ccw device lock is held; 1194 + * 1195 + */ 1182 1196 static void qeth_irq(struct ccw_device *cdev, unsigned long intparm, 1183 1197 struct irb *irb) 1184 1198 { ··· 1234 1220 iob = (struct qeth_cmd_buffer *) (addr_t)intparm; 1235 1221 } 1236 1222 1237 - qeth_unlock_channel(card, channel); 1238 - 1239 1223 rc = qeth_check_irb_error(card, cdev, irb); 1240 1224 if (rc) { 1241 1225 /* IO was terminated, free its resources. */ 1226 + qeth_unlock_channel(card, channel); 1242 1227 if (iob) 1243 1228 qeth_cancel_cmd(iob, rc); 1244 1229 return; ··· 1281 1268 rc = qeth_get_problem(card, cdev, irb); 1282 1269 if (rc) { 1283 1270 card->read_or_write_problem = 1; 1271 + qeth_unlock_channel(card, channel); 1284 1272 if (iob) 1285 1273 qeth_cancel_cmd(iob, rc); 1286 1274 qeth_clear_ipacmd_list(card); ··· 1289 1275 return; 1290 1276 } 1291 1277 } 1278 + 1279 + if (scsw_cmd_is_valid_cc(&irb->scsw) && irb->scsw.cmd.cc == 1 && iob) { 1280 + /* channel command hasn't started: retry. 
1281 + * active_cmd is still set to last iob 1282 + */ 1283 + QETH_CARD_TEXT(card, 2, "irqcc1"); 1284 + rc = ccw_device_start_timeout(cdev, __ccw_from_cmd(iob), 1285 + (addr_t)iob, 0, 0, iob->timeout); 1286 + if (rc) { 1287 + QETH_DBF_MESSAGE(2, 1288 + "ccw retry on %x failed, rc = %i\n", 1289 + CARD_DEVID(card), rc); 1290 + QETH_CARD_TEXT_(card, 2, " err%d", rc); 1291 + qeth_unlock_channel(card, channel); 1292 + qeth_cancel_cmd(iob, rc); 1293 + } 1294 + return; 1295 + } 1296 + 1297 + qeth_unlock_channel(card, channel); 1292 1298 1293 1299 if (iob) { 1294 1300 /* sanity check: */
+0 -2
drivers/scsi/bnx2fc/bnx2fc_tgt.c
··· 833 833 834 834 BNX2FC_TGT_DBG(tgt, "Freeing up session resources\n"); 835 835 836 - spin_lock_bh(&tgt->cq_lock); 837 836 ctx_base_ptr = tgt->ctx_base; 838 837 tgt->ctx_base = NULL; 839 838 ··· 888 889 tgt->sq, tgt->sq_dma); 889 890 tgt->sq = NULL; 890 891 } 891 - spin_unlock_bh(&tgt->cq_lock); 892 892 893 893 if (ctx_base_ptr) 894 894 iounmap(ctx_base_ptr);
+10 -10
drivers/scsi/ch.c
··· 102 102 103 103 #define MAX_RETRIES 1 104 104 105 - static struct class * ch_sysfs_class; 105 + static const struct class ch_sysfs_class = { 106 + .name = "scsi_changer", 107 + }; 106 108 107 109 typedef struct { 108 110 struct kref ref; ··· 932 930 mutex_init(&ch->lock); 933 931 kref_init(&ch->ref); 934 932 ch->device = sd; 935 - class_dev = device_create(ch_sysfs_class, dev, 933 + class_dev = device_create(&ch_sysfs_class, dev, 936 934 MKDEV(SCSI_CHANGER_MAJOR, ch->minor), ch, 937 935 "s%s", ch->name); 938 936 if (IS_ERR(class_dev)) { ··· 957 955 958 956 return 0; 959 957 destroy_dev: 960 - device_destroy(ch_sysfs_class, MKDEV(SCSI_CHANGER_MAJOR, ch->minor)); 958 + device_destroy(&ch_sysfs_class, MKDEV(SCSI_CHANGER_MAJOR, ch->minor)); 961 959 put_device: 962 960 scsi_device_put(sd); 963 961 remove_idr: ··· 976 974 dev_set_drvdata(dev, NULL); 977 975 spin_unlock(&ch_index_lock); 978 976 979 - device_destroy(ch_sysfs_class, MKDEV(SCSI_CHANGER_MAJOR,ch->minor)); 977 + device_destroy(&ch_sysfs_class, MKDEV(SCSI_CHANGER_MAJOR, ch->minor)); 980 978 scsi_device_put(ch->device); 981 979 kref_put(&ch->ref, ch_destroy); 982 980 return 0; ··· 1005 1003 int rc; 1006 1004 1007 1005 printk(KERN_INFO "SCSI Media Changer driver v" VERSION " \n"); 1008 - ch_sysfs_class = class_create("scsi_changer"); 1009 - if (IS_ERR(ch_sysfs_class)) { 1010 - rc = PTR_ERR(ch_sysfs_class); 1006 + rc = class_register(&ch_sysfs_class); 1007 + if (rc) 1011 1008 return rc; 1012 - } 1013 1009 rc = register_chrdev(SCSI_CHANGER_MAJOR,"ch",&changer_fops); 1014 1010 if (rc < 0) { 1015 1011 printk("Unable to get major %d for SCSI-Changer\n", ··· 1022 1022 fail2: 1023 1023 unregister_chrdev(SCSI_CHANGER_MAJOR, "ch"); 1024 1024 fail1: 1025 - class_destroy(ch_sysfs_class); 1025 + class_unregister(&ch_sysfs_class); 1026 1026 return rc; 1027 1027 } 1028 1028 ··· 1030 1030 { 1031 1031 scsi_unregister_driver(&ch_template.gendrv); 1032 1032 unregister_chrdev(SCSI_CHANGER_MAJOR, "ch"); 1033 - 
class_destroy(ch_sysfs_class); 1033 + class_unregister(&ch_sysfs_class); 1034 1034 idr_destroy(&ch_index_idr); 1035 1035 } 1036 1036
+10 -7
drivers/scsi/cxlflash/main.c
··· 28 28 MODULE_AUTHOR("Matthew R. Ochs <mrochs@linux.vnet.ibm.com>"); 29 29 MODULE_LICENSE("GPL"); 30 30 31 - static struct class *cxlflash_class; 31 + static char *cxlflash_devnode(const struct device *dev, umode_t *mode); 32 + static const struct class cxlflash_class = { 33 + .name = "cxlflash", 34 + .devnode = cxlflash_devnode, 35 + }; 36 + 32 37 static u32 cxlflash_major; 33 38 static DECLARE_BITMAP(cxlflash_minor, CXLFLASH_MAX_ADAPTERS); 34 39 ··· 3607 3602 goto err1; 3608 3603 } 3609 3604 3610 - char_dev = device_create(cxlflash_class, NULL, devno, 3605 + char_dev = device_create(&cxlflash_class, NULL, devno, 3611 3606 NULL, "cxlflash%d", minor); 3612 3607 if (IS_ERR(char_dev)) { 3613 3608 rc = PTR_ERR(char_dev); ··· 3885 3880 3886 3881 cxlflash_major = MAJOR(devno); 3887 3882 3888 - cxlflash_class = class_create("cxlflash"); 3889 - if (IS_ERR(cxlflash_class)) { 3890 - rc = PTR_ERR(cxlflash_class); 3883 + rc = class_register(&cxlflash_class); 3884 + if (rc) { 3891 3885 pr_err("%s: class_create failed rc=%d\n", __func__, rc); 3892 3886 goto err; 3893 3887 } 3894 3888 3895 - cxlflash_class->devnode = cxlflash_devnode; 3896 3889 out: 3897 3890 pr_debug("%s: returning rc=%d\n", __func__, rc); 3898 3891 return rc; ··· 3906 3903 { 3907 3904 dev_t devno = MKDEV(cxlflash_major, 0); 3908 3905 3909 - class_destroy(cxlflash_class); 3906 + class_unregister(&cxlflash_class); 3910 3907 unregister_chrdev_region(devno, CXLFLASH_MAX_ADAPTERS); 3911 3908 } 3912 3909
+4 -3
drivers/scsi/hosts.c
··· 353 353 354 354 if (shost->shost_state == SHOST_CREATED) { 355 355 /* 356 - * Free the shost_dev device name here if scsi_host_alloc() 357 - * and scsi_host_put() have been called but neither 356 + * Free the shost_dev device name and remove the proc host dir 357 + * here if scsi_host_{alloc,put}() have been called but neither 358 358 * scsi_host_add() nor scsi_remove_host() has been called. 359 359 * This avoids that the memory allocated for the shost_dev 360 - * name is leaked. 360 + * name as well as the proc dir structure are leaked. 361 361 */ 362 + scsi_proc_hostdir_rm(shost->hostt); 362 363 kfree(dev_name(&shost->shost_dev)); 363 364 } 364 365
+34 -17
drivers/scsi/libsas/sas_expander.c
··· 1621 1621 1622 1622 /* ---------- Domain revalidation ---------- */ 1623 1623 1624 + static void sas_get_sas_addr_and_dev_type(struct smp_disc_resp *disc_resp, 1625 + u8 *sas_addr, 1626 + enum sas_device_type *type) 1627 + { 1628 + memcpy(sas_addr, disc_resp->disc.attached_sas_addr, SAS_ADDR_SIZE); 1629 + *type = to_dev_type(&disc_resp->disc); 1630 + if (*type == SAS_PHY_UNUSED) 1631 + memset(sas_addr, 0, SAS_ADDR_SIZE); 1632 + } 1633 + 1624 1634 static int sas_get_phy_discover(struct domain_device *dev, 1625 1635 int phy_id, struct smp_disc_resp *disc_resp) 1626 1636 { ··· 1684 1674 return -ENOMEM; 1685 1675 1686 1676 res = sas_get_phy_discover(dev, phy_id, disc_resp); 1687 - if (res == 0) { 1688 - memcpy(sas_addr, disc_resp->disc.attached_sas_addr, 1689 - SAS_ADDR_SIZE); 1690 - *type = to_dev_type(&disc_resp->disc); 1691 - if (*type == 0) 1692 - memset(sas_addr, 0, SAS_ADDR_SIZE); 1693 - } 1677 + if (res == 0) 1678 + sas_get_sas_addr_and_dev_type(disc_resp, sas_addr, type); 1694 1679 kfree(disc_resp); 1695 1680 return res; 1696 1681 } ··· 1945 1940 struct expander_device *ex = &dev->ex_dev; 1946 1941 struct ex_phy *phy = &ex->ex_phy[phy_id]; 1947 1942 enum sas_device_type type = SAS_PHY_UNUSED; 1943 + struct smp_disc_resp *disc_resp; 1948 1944 u8 sas_addr[SAS_ADDR_SIZE]; 1949 1945 char msg[80] = ""; 1950 1946 int res; ··· 1957 1951 SAS_ADDR(dev->sas_addr), phy_id, msg); 1958 1952 1959 1953 memset(sas_addr, 0, SAS_ADDR_SIZE); 1960 - res = sas_get_phy_attached_dev(dev, phy_id, sas_addr, &type); 1954 + disc_resp = alloc_smp_resp(DISCOVER_RESP_SIZE); 1955 + if (!disc_resp) 1956 + return -ENOMEM; 1957 + 1958 + res = sas_get_phy_discover(dev, phy_id, disc_resp); 1961 1959 switch (res) { 1962 1960 case SMP_RESP_NO_PHY: 1963 1961 phy->phy_state = PHY_NOT_PRESENT; 1964 1962 sas_unregister_devs_sas_addr(dev, phy_id, last); 1965 - return res; 1963 + goto out_free_resp; 1966 1964 case SMP_RESP_PHY_VACANT: 1967 1965 phy->phy_state = PHY_VACANT; 1968 1966 
sas_unregister_devs_sas_addr(dev, phy_id, last); 1969 - return res; 1967 + goto out_free_resp; 1970 1968 case SMP_RESP_FUNC_ACC: 1971 1969 break; 1972 1970 case -ECOMM: 1973 1971 break; 1974 1972 default: 1975 - return res; 1973 + goto out_free_resp; 1976 1974 } 1975 + 1976 + if (res == 0) 1977 + sas_get_sas_addr_and_dev_type(disc_resp, sas_addr, &type); 1977 1978 1978 1979 if ((SAS_ADDR(sas_addr) == 0) || (res == -ECOMM)) { 1979 1980 phy->phy_state = PHY_EMPTY; 1980 1981 sas_unregister_devs_sas_addr(dev, phy_id, last); 1981 1982 /* 1982 - * Even though the PHY is empty, for convenience we discover 1983 - * the PHY to update the PHY info, like negotiated linkrate. 1983 + * Even though the PHY is empty, for convenience we update 1984 + * the PHY info, like negotiated linkrate. 1984 1985 */ 1985 - sas_ex_phy_discover(dev, phy_id); 1986 - return res; 1986 + if (res == 0) 1987 + sas_set_ex_phy(dev, phy_id, disc_resp); 1988 + goto out_free_resp; 1987 1989 } else if (SAS_ADDR(sas_addr) == SAS_ADDR(phy->attached_sas_addr) && 1988 1990 dev_type_flutter(type, phy->attached_dev_type)) { 1989 1991 struct domain_device *ata_dev = sas_ex_to_ata(dev, phy_id); ··· 2003 1989 action = ", needs recovery"; 2004 1990 pr_debug("ex %016llx phy%02d broadcast flutter%s\n", 2005 1991 SAS_ADDR(dev->sas_addr), phy_id, action); 2006 - return res; 1992 + goto out_free_resp; 2007 1993 } 2008 1994 2009 1995 /* we always have to delete the old device when we went here */ ··· 2012 1998 SAS_ADDR(phy->attached_sas_addr)); 2013 1999 sas_unregister_devs_sas_addr(dev, phy_id, last); 2014 2000 2015 - return sas_discover_new(dev, phy_id); 2001 + res = sas_discover_new(dev, phy_id); 2002 + out_free_resp: 2003 + kfree(disc_resp); 2004 + return res; 2016 2005 } 2017 2006 2018 2007 /**
+1 -1
drivers/scsi/lpfc/lpfc.h
··· 1333 1333 struct timer_list fabric_block_timer; 1334 1334 unsigned long bit_flags; 1335 1335 atomic_t num_rsrc_err; 1336 - atomic_t num_cmd_success; 1337 1336 unsigned long last_rsrc_error_time; 1338 1337 unsigned long last_ramp_down_time; 1339 1338 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS ··· 1437 1438 struct timer_list inactive_vmid_poll; 1438 1439 1439 1440 /* RAS Support */ 1441 + spinlock_t ras_fwlog_lock; /* do not take while holding another lock */ 1440 1442 struct lpfc_ras_fwlog ras_fwlog; 1441 1443 1442 1444 uint32_t iocb_cnt;
+2 -2
drivers/scsi/lpfc/lpfc_attr.c
··· 5865 5865 if (phba->cfg_ras_fwlog_func != PCI_FUNC(phba->pcidev->devfn)) 5866 5866 return -EINVAL; 5867 5867 5868 - spin_lock_irq(&phba->hbalock); 5868 + spin_lock_irq(&phba->ras_fwlog_lock); 5869 5869 state = phba->ras_fwlog.state; 5870 - spin_unlock_irq(&phba->hbalock); 5870 + spin_unlock_irq(&phba->ras_fwlog_lock); 5871 5871 5872 5872 if (state == REG_INPROGRESS) { 5873 5873 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, "6147 RAS Logging "
+20 -20
drivers/scsi/lpfc/lpfc_bsg.c
··· 2513 2513 return -ENOMEM; 2514 2514 } 2515 2515 2516 - dmabuff = (struct lpfc_dmabuf *)mbox->ctx_buf; 2516 + dmabuff = mbox->ctx_buf; 2517 2517 mbox->ctx_buf = NULL; 2518 2518 mbox->ctx_ndlp = NULL; 2519 2519 status = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO); ··· 3169 3169 } 3170 3170 3171 3171 cmdwqe = &cmdiocbq->wqe; 3172 - memset(cmdwqe, 0, sizeof(union lpfc_wqe)); 3172 + memset(cmdwqe, 0, sizeof(*cmdwqe)); 3173 3173 if (phba->sli_rev < LPFC_SLI_REV4) { 3174 3174 rspwqe = &rspiocbq->wqe; 3175 - memset(rspwqe, 0, sizeof(union lpfc_wqe)); 3175 + memset(rspwqe, 0, sizeof(*rspwqe)); 3176 3176 } 3177 3177 3178 3178 INIT_LIST_HEAD(&head); ··· 3376 3376 unsigned long flags; 3377 3377 uint8_t *pmb, *pmb_buf; 3378 3378 3379 - dd_data = pmboxq->ctx_ndlp; 3379 + dd_data = pmboxq->ctx_u.dd_data; 3380 3380 3381 3381 /* 3382 3382 * The outgoing buffer is readily referred from the dma buffer, ··· 3553 3553 struct lpfc_sli_config_mbox *sli_cfg_mbx; 3554 3554 uint8_t *pmbx; 3555 3555 3556 - dd_data = pmboxq->ctx_buf; 3556 + dd_data = pmboxq->ctx_u.dd_data; 3557 3557 3558 3558 /* Determine if job has been aborted */ 3559 3559 spin_lock_irqsave(&phba->ct_ev_lock, flags); ··· 3940 3940 pmboxq->mbox_cmpl = lpfc_bsg_issue_read_mbox_ext_cmpl; 3941 3941 3942 3942 /* context fields to callback function */ 3943 - pmboxq->ctx_buf = dd_data; 3943 + pmboxq->ctx_u.dd_data = dd_data; 3944 3944 dd_data->type = TYPE_MBOX; 3945 3945 dd_data->set_job = job; 3946 3946 dd_data->context_un.mbox.pmboxq = pmboxq; ··· 4112 4112 pmboxq->mbox_cmpl = lpfc_bsg_issue_write_mbox_ext_cmpl; 4113 4113 4114 4114 /* context fields to callback function */ 4115 - pmboxq->ctx_buf = dd_data; 4115 + pmboxq->ctx_u.dd_data = dd_data; 4116 4116 dd_data->type = TYPE_MBOX; 4117 4117 dd_data->set_job = job; 4118 4118 dd_data->context_un.mbox.pmboxq = pmboxq; ··· 4460 4460 pmboxq->mbox_cmpl = lpfc_bsg_issue_write_mbox_ext_cmpl; 4461 4461 4462 4462 /* context fields to callback function */ 4463 - 
pmboxq->ctx_buf = dd_data; 4463 + pmboxq->ctx_u.dd_data = dd_data; 4464 4464 dd_data->type = TYPE_MBOX; 4465 4465 dd_data->set_job = job; 4466 4466 dd_data->context_un.mbox.pmboxq = pmboxq; ··· 4747 4747 if (mbox_req->inExtWLen || mbox_req->outExtWLen) { 4748 4748 from = pmbx; 4749 4749 ext = from + sizeof(MAILBOX_t); 4750 - pmboxq->ctx_buf = ext; 4750 + pmboxq->ext_buf = ext; 4751 4751 pmboxq->in_ext_byte_len = 4752 4752 mbox_req->inExtWLen * sizeof(uint32_t); 4753 4753 pmboxq->out_ext_byte_len = ··· 4875 4875 pmboxq->mbox_cmpl = lpfc_bsg_issue_mbox_cmpl; 4876 4876 4877 4877 /* setup context field to pass wait_queue pointer to wake function */ 4878 - pmboxq->ctx_ndlp = dd_data; 4878 + pmboxq->ctx_u.dd_data = dd_data; 4879 4879 dd_data->type = TYPE_MBOX; 4880 4880 dd_data->set_job = job; 4881 4881 dd_data->context_un.mbox.pmboxq = pmboxq; ··· 5070 5070 bsg_reply->reply_data.vendor_reply.vendor_rsp; 5071 5071 5072 5072 /* Current logging state */ 5073 - spin_lock_irq(&phba->hbalock); 5073 + spin_lock_irq(&phba->ras_fwlog_lock); 5074 5074 if (ras_fwlog->state == ACTIVE) 5075 5075 ras_reply->state = LPFC_RASLOG_STATE_RUNNING; 5076 5076 else 5077 5077 ras_reply->state = LPFC_RASLOG_STATE_STOPPED; 5078 - spin_unlock_irq(&phba->hbalock); 5078 + spin_unlock_irq(&phba->ras_fwlog_lock); 5079 5079 5080 5080 ras_reply->log_level = phba->ras_fwlog.fw_loglevel; 5081 5081 ras_reply->log_buff_sz = phba->cfg_ras_fwlog_buffsize; ··· 5132 5132 5133 5133 if (action == LPFC_RASACTION_STOP_LOGGING) { 5134 5134 /* Check if already disabled */ 5135 - spin_lock_irq(&phba->hbalock); 5135 + spin_lock_irq(&phba->ras_fwlog_lock); 5136 5136 if (ras_fwlog->state != ACTIVE) { 5137 - spin_unlock_irq(&phba->hbalock); 5137 + spin_unlock_irq(&phba->ras_fwlog_lock); 5138 5138 rc = -ESRCH; 5139 5139 goto ras_job_error; 5140 5140 } 5141 - spin_unlock_irq(&phba->hbalock); 5141 + spin_unlock_irq(&phba->ras_fwlog_lock); 5142 5142 5143 5143 /* Disable logging */ 5144 5144 lpfc_ras_stop_fwlog(phba); ··· 
5149 5149 * FW-logging with new log-level. Return status 5150 5150 * "Logging already Running" to caller. 5151 5151 **/ 5152 - spin_lock_irq(&phba->hbalock); 5152 + spin_lock_irq(&phba->ras_fwlog_lock); 5153 5153 if (ras_fwlog->state != INACTIVE) 5154 5154 action_status = -EINPROGRESS; 5155 - spin_unlock_irq(&phba->hbalock); 5155 + spin_unlock_irq(&phba->ras_fwlog_lock); 5156 5156 5157 5157 /* Enable logging */ 5158 5158 rc = lpfc_sli4_ras_fwlog_init(phba, log_level, ··· 5268 5268 goto ras_job_error; 5269 5269 5270 5270 /* Logging to be stopped before reading */ 5271 - spin_lock_irq(&phba->hbalock); 5271 + spin_lock_irq(&phba->ras_fwlog_lock); 5272 5272 if (ras_fwlog->state == ACTIVE) { 5273 - spin_unlock_irq(&phba->hbalock); 5273 + spin_unlock_irq(&phba->ras_fwlog_lock); 5274 5274 rc = -EINPROGRESS; 5275 5275 goto ras_job_error; 5276 5276 } 5277 - spin_unlock_irq(&phba->hbalock); 5277 + spin_unlock_irq(&phba->ras_fwlog_lock); 5278 5278 5279 5279 if (job->request_len < 5280 5280 sizeof(struct fc_bsg_request) +
+6 -6
drivers/scsi/lpfc/lpfc_debugfs.c
··· 2194 2194 2195 2195 memset(buffer, 0, size); 2196 2196 2197 - spin_lock_irq(&phba->hbalock); 2197 + spin_lock_irq(&phba->ras_fwlog_lock); 2198 2198 if (phba->ras_fwlog.state != ACTIVE) { 2199 - spin_unlock_irq(&phba->hbalock); 2199 + spin_unlock_irq(&phba->ras_fwlog_lock); 2200 2200 return -EINVAL; 2201 2201 } 2202 - spin_unlock_irq(&phba->hbalock); 2202 + spin_unlock_irq(&phba->ras_fwlog_lock); 2203 2203 2204 2204 list_for_each_entry_safe(dmabuf, next, 2205 2205 &phba->ras_fwlog.fwlog_buff_list, list) { ··· 2250 2250 int size; 2251 2251 int rc = -ENOMEM; 2252 2252 2253 - spin_lock_irq(&phba->hbalock); 2253 + spin_lock_irq(&phba->ras_fwlog_lock); 2254 2254 if (phba->ras_fwlog.state != ACTIVE) { 2255 - spin_unlock_irq(&phba->hbalock); 2255 + spin_unlock_irq(&phba->ras_fwlog_lock); 2256 2256 rc = -EINVAL; 2257 2257 goto out; 2258 2258 } 2259 - spin_unlock_irq(&phba->hbalock); 2259 + spin_unlock_irq(&phba->ras_fwlog_lock); 2260 2260 2261 2261 if (check_mul_overflow(LPFC_RAS_MIN_BUFF_POST_SIZE, 2262 2262 phba->cfg_ras_fwlog_buffsize, &size))
+21 -24
drivers/scsi/lpfc/lpfc_els.c
··· 4437 4437 unsigned long flags; 4438 4438 struct lpfc_work_evt *evtp = &ndlp->els_retry_evt; 4439 4439 4440 + /* Hold a node reference for outstanding queued work */ 4441 + if (!lpfc_nlp_get(ndlp)) 4442 + return; 4443 + 4440 4444 spin_lock_irqsave(&phba->hbalock, flags); 4441 4445 if (!list_empty(&evtp->evt_listp)) { 4442 4446 spin_unlock_irqrestore(&phba->hbalock, flags); 4447 + lpfc_nlp_put(ndlp); 4443 4448 return; 4444 4449 } 4445 4450 4446 - /* We need to hold the node by incrementing the reference 4447 - * count until the queued work is done 4448 - */ 4449 - evtp->evt_arg1 = lpfc_nlp_get(ndlp); 4450 - if (evtp->evt_arg1) { 4451 - evtp->evt = LPFC_EVT_ELS_RETRY; 4452 - list_add_tail(&evtp->evt_listp, &phba->work_list); 4453 - lpfc_worker_wake_up(phba); 4454 - } 4451 + evtp->evt_arg1 = ndlp; 4452 + evtp->evt = LPFC_EVT_ELS_RETRY; 4453 + list_add_tail(&evtp->evt_listp, &phba->work_list); 4455 4454 spin_unlock_irqrestore(&phba->hbalock, flags); 4456 - return; 4455 + 4456 + lpfc_worker_wake_up(phba); 4457 4457 } 4458 4458 4459 4459 /** ··· 7238 7238 goto rdp_fail; 7239 7239 mbox->vport = rdp_context->ndlp->vport; 7240 7240 mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_page_a0; 7241 - mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; 7241 + mbox->ctx_u.rdp = rdp_context; 7242 7242 rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); 7243 7243 if (rc == MBX_NOT_FINISHED) { 7244 7244 lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED); ··· 7290 7290 mbox->in_ext_byte_len = DMP_SFF_PAGE_A0_SIZE; 7291 7291 mbox->out_ext_byte_len = DMP_SFF_PAGE_A0_SIZE; 7292 7292 mbox->mbox_offset_word = 5; 7293 - mbox->ctx_buf = virt; 7293 + mbox->ext_buf = virt; 7294 7294 } else { 7295 7295 bf_set(lpfc_mbx_memory_dump_type3_length, 7296 7296 &mbox->u.mqe.un.mem_dump_type3, DMP_SFF_PAGE_A0_SIZE); ··· 7298 7298 mbox->u.mqe.un.mem_dump_type3.addr_hi = putPaddrHigh(mp->phys); 7299 7299 } 7300 7300 mbox->vport = phba->pport; 7301 - mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; 
7302 7301 7303 7302 rc = lpfc_sli_issue_mbox_wait(phba, mbox, 30); 7304 7303 if (rc == MBX_NOT_FINISHED) { ··· 7306 7307 } 7307 7308 7308 7309 if (phba->sli_rev == LPFC_SLI_REV4) 7309 - mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); 7310 + mp = mbox->ctx_buf; 7310 7311 else 7311 7312 mp = mpsave; 7312 7313 ··· 7349 7350 mbox->in_ext_byte_len = DMP_SFF_PAGE_A2_SIZE; 7350 7351 mbox->out_ext_byte_len = DMP_SFF_PAGE_A2_SIZE; 7351 7352 mbox->mbox_offset_word = 5; 7352 - mbox->ctx_buf = virt; 7353 + mbox->ext_buf = virt; 7353 7354 } else { 7354 7355 bf_set(lpfc_mbx_memory_dump_type3_length, 7355 7356 &mbox->u.mqe.un.mem_dump_type3, DMP_SFF_PAGE_A2_SIZE); ··· 7357 7358 mbox->u.mqe.un.mem_dump_type3.addr_hi = putPaddrHigh(mp->phys); 7358 7359 } 7359 7360 7360 - mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; 7361 7361 rc = lpfc_sli_issue_mbox_wait(phba, mbox, 30); 7362 7362 if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) { 7363 7363 rc = 1; ··· 7498 7500 int rc; 7499 7501 7500 7502 mb = &pmb->u.mb; 7501 - lcb_context = (struct lpfc_lcb_context *)pmb->ctx_ndlp; 7503 + lcb_context = pmb->ctx_u.lcb; 7502 7504 ndlp = lcb_context->ndlp; 7503 - pmb->ctx_ndlp = NULL; 7505 + memset(&pmb->ctx_u, 0, sizeof(pmb->ctx_u)); 7504 7506 pmb->ctx_buf = NULL; 7505 7507 7506 7508 shdr = (union lpfc_sli4_cfg_shdr *) ··· 7640 7642 lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_COMMON, 7641 7643 LPFC_MBOX_OPCODE_SET_BEACON_CONFIG, len, 7642 7644 LPFC_SLI4_MBX_EMBED); 7643 - mbox->ctx_ndlp = (void *)lcb_context; 7645 + mbox->ctx_u.lcb = lcb_context; 7644 7646 mbox->vport = phba->pport; 7645 7647 mbox->mbox_cmpl = lpfc_els_lcb_rsp; 7646 7648 bf_set(lpfc_mbx_set_beacon_port_num, &mbox->u.mqe.un.beacon_config, ··· 8637 8639 mb = &pmb->u.mb; 8638 8640 8639 8641 ndlp = pmb->ctx_ndlp; 8640 - rxid = (uint16_t)((unsigned long)(pmb->ctx_buf) & 0xffff); 8641 - oxid = (uint16_t)(((unsigned long)(pmb->ctx_buf) >> 16) & 0xffff); 8642 - pmb->ctx_buf = NULL; 8642 + rxid = (uint16_t)(pmb->ctx_u.ox_rx_id & 
0xffff); 8643 + oxid = (uint16_t)((pmb->ctx_u.ox_rx_id >> 16) & 0xffff); 8644 + memset(&pmb->ctx_u, 0, sizeof(pmb->ctx_u)); 8643 8645 pmb->ctx_ndlp = NULL; 8644 8646 8645 8647 if (mb->mbxStatus) { ··· 8743 8745 mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC); 8744 8746 if (mbox) { 8745 8747 lpfc_read_lnk_stat(phba, mbox); 8746 - mbox->ctx_buf = (void *)((unsigned long) 8747 - (ox_id << 16 | ctx)); 8748 + mbox->ctx_u.ox_rx_id = ox_id << 16 | ctx; 8748 8749 mbox->ctx_ndlp = lpfc_nlp_get(ndlp); 8749 8750 if (!mbox->ctx_ndlp) 8750 8751 goto node_err;
+16 -17
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 257 257 if (evtp->evt_arg1) { 258 258 evtp->evt = LPFC_EVT_DEV_LOSS; 259 259 list_add_tail(&evtp->evt_listp, &phba->work_list); 260 + spin_unlock_irqrestore(&phba->hbalock, iflags); 260 261 lpfc_worker_wake_up(phba); 262 + return; 261 263 } 262 264 spin_unlock_irqrestore(&phba->hbalock, iflags); 263 265 } else { ··· 277 275 lpfc_disc_state_machine(vport, ndlp, NULL, 278 276 NLP_EVT_DEVICE_RM); 279 277 } 280 - 281 278 } 282 - 283 - return; 284 279 } 285 280 286 281 /** ··· 3428 3429 lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 3429 3430 { 3430 3431 MAILBOX_t *mb = &pmb->u.mb; 3431 - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 3432 + struct lpfc_dmabuf *mp = pmb->ctx_buf; 3432 3433 struct lpfc_vport *vport = pmb->vport; 3433 3434 struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 3434 3435 struct serv_parm *sp = &vport->fc_sparam; ··· 3736 3737 struct lpfc_mbx_read_top *la; 3737 3738 struct lpfc_sli_ring *pring; 3738 3739 MAILBOX_t *mb = &pmb->u.mb; 3739 - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); 3740 + struct lpfc_dmabuf *mp = pmb->ctx_buf; 3740 3741 uint8_t attn_type; 3741 3742 3742 3743 /* Unblock ELS traffic */ ··· 3850 3851 lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 3851 3852 { 3852 3853 struct lpfc_vport *vport = pmb->vport; 3853 - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 3854 - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 3854 + struct lpfc_dmabuf *mp = pmb->ctx_buf; 3855 + struct lpfc_nodelist *ndlp = pmb->ctx_ndlp; 3855 3856 3856 3857 /* The driver calls the state machine with the pmb pointer 3857 3858 * but wants to make sure a stale ctx_buf isn't acted on. ··· 4065 4066 * the dump routine is a single-use construct. 
4066 4067 */ 4067 4068 if (pmb->ctx_buf) { 4068 - mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 4069 + mp = pmb->ctx_buf; 4069 4070 lpfc_mbuf_free(phba, mp->virt, mp->phys); 4070 4071 kfree(mp); 4071 4072 pmb->ctx_buf = NULL; ··· 4088 4089 4089 4090 if (phba->sli_rev == LPFC_SLI_REV4) { 4090 4091 byte_count = pmb->u.mqe.un.mb_words[5]; 4091 - mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 4092 + mp = pmb->ctx_buf; 4092 4093 if (byte_count > sizeof(struct static_vport_info) - 4093 4094 offset) 4094 4095 byte_count = sizeof(struct static_vport_info) ··· 4168 4169 { 4169 4170 struct lpfc_vport *vport = pmb->vport; 4170 4171 MAILBOX_t *mb = &pmb->u.mb; 4171 - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 4172 + struct lpfc_nodelist *ndlp = pmb->ctx_ndlp; 4172 4173 4173 4174 pmb->ctx_ndlp = NULL; 4174 4175 ··· 4306 4307 lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 4307 4308 { 4308 4309 MAILBOX_t *mb = &pmb->u.mb; 4309 - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 4310 + struct lpfc_nodelist *ndlp = pmb->ctx_ndlp; 4310 4311 struct lpfc_vport *vport = pmb->vport; 4311 4312 int rc; 4312 4313 ··· 4430 4431 { 4431 4432 struct lpfc_vport *vport = pmb->vport; 4432 4433 MAILBOX_t *mb = &pmb->u.mb; 4433 - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 4434 + struct lpfc_nodelist *ndlp = pmb->ctx_ndlp; 4434 4435 4435 4436 pmb->ctx_ndlp = NULL; 4436 4437 if (mb->mbxStatus) { ··· 5173 5174 struct lpfc_vport *vport = pmb->vport; 5174 5175 struct lpfc_nodelist *ndlp; 5175 5176 5176 - ndlp = (struct lpfc_nodelist *)(pmb->ctx_ndlp); 5177 + ndlp = pmb->ctx_ndlp; 5177 5178 if (!ndlp) 5178 5179 return; 5179 5180 lpfc_issue_els_logo(vport, ndlp, 0); ··· 5495 5496 if ((mb = phba->sli.mbox_active)) { 5496 5497 if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && 5497 5498 !(mb->mbox_flag & LPFC_MBX_IMED_UNREG) && 5498 - (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { 5499 + (ndlp == mb->ctx_ndlp)) { 5499 5500 
mb->ctx_ndlp = NULL; 5500 5501 mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 5501 5502 } ··· 5506 5507 list_for_each_entry(mb, &phba->sli.mboxq_cmpl, list) { 5507 5508 if ((mb->u.mb.mbxCommand != MBX_REG_LOGIN64) || 5508 5509 (mb->mbox_flag & LPFC_MBX_IMED_UNREG) || 5509 - (ndlp != (struct lpfc_nodelist *)mb->ctx_ndlp)) 5510 + (ndlp != mb->ctx_ndlp)) 5510 5511 continue; 5511 5512 5512 5513 mb->ctx_ndlp = NULL; ··· 5516 5517 list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) { 5517 5518 if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && 5518 5519 !(mb->mbox_flag & LPFC_MBX_IMED_UNREG) && 5519 - (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { 5520 + (ndlp == mb->ctx_ndlp)) { 5520 5521 list_del(&mb->list); 5521 5522 lpfc_mbox_rsrc_cleanup(phba, mb, MBOX_THD_LOCKED); 5522 5523 ··· 6356 6357 lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 6357 6358 { 6358 6359 MAILBOX_t *mb = &pmb->u.mb; 6359 - struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 6360 + struct lpfc_nodelist *ndlp = pmb->ctx_ndlp; 6360 6361 struct lpfc_vport *vport = pmb->vport; 6361 6362 6362 6363 pmb->ctx_ndlp = NULL;
+8 -5
drivers/scsi/lpfc/lpfc_init.c
··· 460 460 return -EIO; 461 461 } 462 462 463 - mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 463 + mp = pmb->ctx_buf; 464 464 465 465 /* This dmabuf was allocated by lpfc_read_sparam. The dmabuf is no 466 466 * longer needed. Prevent unintended ctx_buf access as the mbox is ··· 2217 2217 /* Cleanup any outstanding ELS commands */ 2218 2218 lpfc_els_flush_all_cmd(phba); 2219 2219 psli->slistat.link_event++; 2220 - lpfc_read_topology(phba, pmb, (struct lpfc_dmabuf *)pmb->ctx_buf); 2220 + lpfc_read_topology(phba, pmb, pmb->ctx_buf); 2221 2221 pmb->mbox_cmpl = lpfc_mbx_cmpl_read_topology; 2222 2222 pmb->vport = vport; 2223 2223 /* Block ELS IOCBs until we have processed this mbox command */ ··· 5454 5454 phba->sli.slistat.link_event++; 5455 5455 5456 5456 /* Create lpfc_handle_latt mailbox command from link ACQE */ 5457 - lpfc_read_topology(phba, pmb, (struct lpfc_dmabuf *)pmb->ctx_buf); 5457 + lpfc_read_topology(phba, pmb, pmb->ctx_buf); 5458 5458 pmb->mbox_cmpl = lpfc_mbx_cmpl_read_topology; 5459 5459 pmb->vport = phba->pport; 5460 5460 ··· 6347 6347 phba->sli.slistat.link_event++; 6348 6348 6349 6349 /* Create lpfc_handle_latt mailbox command from link ACQE */ 6350 - lpfc_read_topology(phba, pmb, (struct lpfc_dmabuf *)pmb->ctx_buf); 6350 + lpfc_read_topology(phba, pmb, pmb->ctx_buf); 6351 6351 pmb->mbox_cmpl = lpfc_mbx_cmpl_read_topology; 6352 6352 pmb->vport = phba->pport; 6353 6353 ··· 7704 7704 ((phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) ? 7705 7705 "NVME" : " "), 7706 7706 (phba->nvmet_support ? 
"NVMET" : " ")); 7707 + 7708 + /* ras_fwlog state */ 7709 + spin_lock_init(&phba->ras_fwlog_lock); 7707 7710 7708 7711 /* Initialize the IO buffer list used by driver for SLI3 SCSI */ 7709 7712 spin_lock_init(&phba->scsi_buf_list_get_lock); ··· 13058 13055 rc = request_threaded_irq(eqhdl->irq, 13059 13056 &lpfc_sli4_hba_intr_handler, 13060 13057 &lpfc_sli4_hba_intr_handler_th, 13061 - IRQF_ONESHOT, name, eqhdl); 13058 + 0, name, eqhdl); 13062 13059 if (rc) { 13063 13060 lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, 13064 13061 "0486 MSI-X fast-path (%d) "
+10 -20
drivers/scsi/lpfc/lpfc_mbox.c
··· 102 102 { 103 103 struct lpfc_dmabuf *mp; 104 104 105 - mp = (struct lpfc_dmabuf *)mbox->ctx_buf; 105 + mp = mbox->ctx_buf; 106 106 mbox->ctx_buf = NULL; 107 107 108 108 /* Release the generic BPL buffer memory. */ ··· 204 204 uint16_t region_id) 205 205 { 206 206 MAILBOX_t *mb; 207 - void *ctx; 208 207 209 208 mb = &pmb->u.mb; 210 - ctx = pmb->ctx_buf; 211 209 212 210 /* Setup to dump VPD region */ 213 211 memset(pmb, 0, sizeof (LPFC_MBOXQ_t)); ··· 217 219 mb->un.varDmp.word_cnt = (DMP_RSP_SIZE / sizeof (uint32_t)); 218 220 mb->un.varDmp.co = 0; 219 221 mb->un.varDmp.resp_offset = 0; 220 - pmb->ctx_buf = ctx; 221 222 mb->mbxOwner = OWN_HOST; 222 223 return; 223 224 } ··· 233 236 lpfc_dump_wakeup_param(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 234 237 { 235 238 MAILBOX_t *mb; 236 - void *ctx; 237 239 238 240 mb = &pmb->u.mb; 239 - /* Save context so that we can restore after memset */ 240 - ctx = pmb->ctx_buf; 241 241 242 242 /* Setup to dump VPD region */ 243 243 memset(pmb, 0, sizeof(LPFC_MBOXQ_t)); ··· 248 254 mb->un.varDmp.word_cnt = WAKE_UP_PARMS_WORD_SIZE; 249 255 mb->un.varDmp.co = 0; 250 256 mb->un.varDmp.resp_offset = 0; 251 - pmb->ctx_buf = ctx; 252 257 return; 253 258 } 254 259 ··· 365 372 /* Save address for later completion and set the owner to host so that 366 373 * the FW knows this mailbox is available for processing. 367 374 */ 368 - pmb->ctx_buf = (uint8_t *)mp; 375 + pmb->ctx_buf = mp; 369 376 mb->mbxOwner = OWN_HOST; 370 377 return (0); 371 378 } ··· 1809 1816 } 1810 1817 /* Reinitialize the context pointers to avoid stale usage. 
*/ 1811 1818 mbox->ctx_buf = NULL; 1812 - mbox->context3 = NULL; 1819 + memset(&mbox->ctx_u, 0, sizeof(mbox->ctx_u)); 1813 1820 kfree(mbox->sge_array); 1814 1821 /* Finally, free the mailbox command itself */ 1815 1822 mempool_free(mbox, phba->mbox_mem_pool); ··· 2359 2366 { 2360 2367 MAILBOX_t *mb; 2361 2368 int rc = FAILURE; 2362 - struct lpfc_rdp_context *rdp_context = 2363 - (struct lpfc_rdp_context *)(mboxq->ctx_ndlp); 2369 + struct lpfc_rdp_context *rdp_context = mboxq->ctx_u.rdp; 2364 2370 2365 2371 mb = &mboxq->u.mb; 2366 2372 if (mb->mbxStatus) ··· 2377 2385 static void 2378 2386 lpfc_mbx_cmpl_rdp_page_a2(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) 2379 2387 { 2380 - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)mbox->ctx_buf; 2381 - struct lpfc_rdp_context *rdp_context = 2382 - (struct lpfc_rdp_context *)(mbox->ctx_ndlp); 2388 + struct lpfc_dmabuf *mp = mbox->ctx_buf; 2389 + struct lpfc_rdp_context *rdp_context = mbox->ctx_u.rdp; 2383 2390 2384 2391 if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) 2385 2392 goto error_mbox_free; ··· 2392 2401 /* Save the dma buffer for cleanup in the final completion. 
*/ 2393 2402 mbox->ctx_buf = mp; 2394 2403 mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_link_stat; 2395 - mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; 2404 + mbox->ctx_u.rdp = rdp_context; 2396 2405 if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) == MBX_NOT_FINISHED) 2397 2406 goto error_mbox_free; 2398 2407 ··· 2407 2416 lpfc_mbx_cmpl_rdp_page_a0(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) 2408 2417 { 2409 2418 int rc; 2410 - struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); 2411 - struct lpfc_rdp_context *rdp_context = 2412 - (struct lpfc_rdp_context *)(mbox->ctx_ndlp); 2419 + struct lpfc_dmabuf *mp = mbox->ctx_buf; 2420 + struct lpfc_rdp_context *rdp_context = mbox->ctx_u.rdp; 2413 2421 2414 2422 if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) 2415 2423 goto error; ··· 2438 2448 mbox->u.mqe.un.mem_dump_type3.addr_hi = putPaddrHigh(mp->phys); 2439 2449 2440 2450 mbox->mbox_cmpl = lpfc_mbx_cmpl_rdp_page_a2; 2441 - mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; 2451 + mbox->ctx_u.rdp = rdp_context; 2442 2452 rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); 2443 2453 if (rc == MBX_NOT_FINISHED) 2444 2454 goto error;
+6 -6
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 300 300 int rc; 301 301 302 302 ndlp = login_mbox->ctx_ndlp; 303 - save_iocb = login_mbox->context3; 303 + save_iocb = login_mbox->ctx_u.save_iocb; 304 304 305 305 if (mb->mbxStatus == MBX_SUCCESS) { 306 306 /* Now that REG_RPI completed successfully, ··· 640 640 if (!login_mbox->ctx_ndlp) 641 641 goto out; 642 642 643 - login_mbox->context3 = save_iocb; /* For PLOGI ACC */ 643 + login_mbox->ctx_u.save_iocb = save_iocb; /* For PLOGI ACC */ 644 644 645 645 spin_lock_irq(&ndlp->lock); 646 646 ndlp->nlp_flag |= (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI); ··· 682 682 struct lpfc_nodelist *ndlp; 683 683 uint32_t cmd; 684 684 685 - elsiocb = (struct lpfc_iocbq *)mboxq->ctx_buf; 686 - ndlp = (struct lpfc_nodelist *)mboxq->ctx_ndlp; 685 + elsiocb = mboxq->ctx_u.save_iocb; 686 + ndlp = mboxq->ctx_ndlp; 687 687 vport = mboxq->vport; 688 688 cmd = elsiocb->drvrTimeout; 689 689 ··· 1875 1875 /* cleanup any ndlp on mbox q waiting for reglogin cmpl */ 1876 1876 if ((mb = phba->sli.mbox_active)) { 1877 1877 if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && 1878 - (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { 1878 + (ndlp == mb->ctx_ndlp)) { 1879 1879 ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; 1880 1880 lpfc_nlp_put(ndlp); 1881 1881 mb->ctx_ndlp = NULL; ··· 1886 1886 spin_lock_irq(&phba->hbalock); 1887 1887 list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) { 1888 1888 if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && 1889 - (ndlp == (struct lpfc_nodelist *)mb->ctx_ndlp)) { 1889 + (ndlp == mb->ctx_ndlp)) { 1890 1890 ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; 1891 1891 lpfc_nlp_put(ndlp); 1892 1892 list_del(&mb->list);
+2 -2
drivers/scsi/lpfc/lpfc_nvme.c
··· 2616 2616 /* No concern about the role change on the nvme remoteport. 2617 2617 * The transport will update it. 2618 2618 */ 2619 - spin_lock_irq(&vport->phba->hbalock); 2619 + spin_lock_irq(&ndlp->lock); 2620 2620 ndlp->fc4_xpt_flags |= NVME_XPT_UNREG_WAIT; 2621 - spin_unlock_irq(&vport->phba->hbalock); 2621 + spin_unlock_irq(&ndlp->lock); 2622 2622 2623 2623 /* Don't let the host nvme transport keep sending keep-alives 2624 2624 * on this remoteport. Vport is unloading, no recovery. The
+1 -1
drivers/scsi/lpfc/lpfc_nvmet.c
··· 1586 1586 wqe = &nvmewqe->wqe; 1587 1587 1588 1588 /* Initialize WQE */ 1589 - memset(wqe, 0, sizeof(union lpfc_wqe)); 1589 + memset(wqe, 0, sizeof(*wqe)); 1590 1590 1591 1591 ctx_buf->iocbq->cmd_dmabuf = NULL; 1592 1592 spin_lock(&phba->sli4_hba.sgl_list_lock);
+4 -19
drivers/scsi/lpfc/lpfc_scsi.c
··· 167 167 struct Scsi_Host *shost; 168 168 struct scsi_device *sdev; 169 169 unsigned long new_queue_depth; 170 - unsigned long num_rsrc_err, num_cmd_success; 170 + unsigned long num_rsrc_err; 171 171 int i; 172 172 173 173 num_rsrc_err = atomic_read(&phba->num_rsrc_err); 174 - num_cmd_success = atomic_read(&phba->num_cmd_success); 175 174 176 175 /* 177 176 * The error and success command counters are global per ··· 185 186 for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) { 186 187 shost = lpfc_shost_from_vport(vports[i]); 187 188 shost_for_each_device(sdev, shost) { 188 - new_queue_depth = 189 - sdev->queue_depth * num_rsrc_err / 190 - (num_rsrc_err + num_cmd_success); 191 - if (!new_queue_depth) 192 - new_queue_depth = sdev->queue_depth - 1; 189 + if (num_rsrc_err >= sdev->queue_depth) 190 + new_queue_depth = 1; 193 191 else 194 192 new_queue_depth = sdev->queue_depth - 195 - new_queue_depth; 193 + num_rsrc_err; 196 194 scsi_change_queue_depth(sdev, new_queue_depth); 197 195 } 198 196 } 199 197 lpfc_destroy_vport_work_array(phba, vports); 200 198 atomic_set(&phba->num_rsrc_err, 0); 201 - atomic_set(&phba->num_cmd_success, 0); 202 199 } 203 200 204 201 /** ··· 5331 5336 } 5332 5337 err = lpfc_bg_scsi_prep_dma_buf(phba, lpfc_cmd); 5333 5338 } else { 5334 - if (vport->phba->cfg_enable_bg) { 5335 - lpfc_printf_vlog(vport, 5336 - KERN_INFO, LOG_SCSI_CMD, 5337 - "9038 BLKGRD: rcvd PROT_NORMAL cmd: " 5338 - "x%x reftag x%x cnt %u pt %x\n", 5339 - cmnd->cmnd[0], 5340 - scsi_prot_ref_tag(cmnd), 5341 - scsi_logical_block_count(cmnd), 5342 - (cmnd->cmnd[1]>>5)); 5343 - } 5344 5339 err = lpfc_scsi_prep_dma_buf(phba, lpfc_cmd); 5345 5340 } 5346 5341
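The lpfc_scsi.c hunk above drops the `num_cmd_success` counter and replaces the proportional queue-depth ramp-down with a simple clamped decrement. A minimal userspace sketch of the new rule, assuming nothing beyond the arithmetic visible in the hunk (the function name is illustrative, not a driver symbol):

```c
#include <assert.h>

/* Illustrative model of the new lpfc ramp-down rule: shrink a device
 * queue depth by the number of resource errors seen in the sampling
 * window, clamped so the depth never drops below 1. */
static unsigned long ramp_down_queue_depth(unsigned long queue_depth,
					   unsigned long num_rsrc_err)
{
	if (num_rsrc_err >= queue_depth)
		return 1;
	return queue_depth - num_rsrc_err;
}
```

Compared with the removed `depth * err / (err + success)` formula, this avoids a division and cannot underflow to zero when errors dominate.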
+49 -50
drivers/scsi/lpfc/lpfc_sli.c
··· 1217 1217 empty = list_empty(&phba->active_rrq_list); 1218 1218 list_add_tail(&rrq->list, &phba->active_rrq_list); 1219 1219 phba->hba_flag |= HBA_RRQ_ACTIVE; 1220 + spin_unlock_irqrestore(&phba->hbalock, iflags); 1220 1221 if (empty) 1221 1222 lpfc_worker_wake_up(phba); 1222 - spin_unlock_irqrestore(&phba->hbalock, iflags); 1223 1223 return 0; 1224 1224 out: 1225 1225 spin_unlock_irqrestore(&phba->hbalock, iflags); ··· 2830 2830 */ 2831 2831 pmboxq->mbox_flag |= LPFC_MBX_WAKE; 2832 2832 spin_lock_irqsave(&phba->hbalock, drvr_flag); 2833 - pmbox_done = (struct completion *)pmboxq->context3; 2833 + pmbox_done = pmboxq->ctx_u.mbox_wait; 2834 2834 if (pmbox_done) 2835 2835 complete(pmbox_done); 2836 2836 spin_unlock_irqrestore(&phba->hbalock, drvr_flag); ··· 2885 2885 if (!test_bit(FC_UNLOADING, &phba->pport->load_flag) && 2886 2886 pmb->u.mb.mbxCommand == MBX_REG_LOGIN64 && 2887 2887 !pmb->u.mb.mbxStatus) { 2888 - mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 2888 + mp = pmb->ctx_buf; 2889 2889 if (mp) { 2890 2890 pmb->ctx_buf = NULL; 2891 2891 lpfc_mbuf_free(phba, mp->virt, mp->phys); ··· 2914 2914 } 2915 2915 2916 2916 if (pmb->u.mb.mbxCommand == MBX_REG_LOGIN64) { 2917 - ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 2917 + ndlp = pmb->ctx_ndlp; 2918 2918 lpfc_nlp_put(ndlp); 2919 2919 } 2920 2920 2921 2921 if (pmb->u.mb.mbxCommand == MBX_UNREG_LOGIN) { 2922 - ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 2922 + ndlp = pmb->ctx_ndlp; 2923 2923 2924 2924 /* Check to see if there are any deferred events to process */ 2925 2925 if (ndlp) { ··· 2952 2952 2953 2953 /* This nlp_put pairs with lpfc_sli4_resume_rpi */ 2954 2954 if (pmb->u.mb.mbxCommand == MBX_RESUME_RPI) { 2955 - ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 2955 + ndlp = pmb->ctx_ndlp; 2956 2956 lpfc_nlp_put(ndlp); 2957 2957 } 2958 2958 ··· 5819 5819 goto out_free_mboxq; 5820 5820 } 5821 5821 5822 - mp = (struct lpfc_dmabuf *)mboxq->ctx_buf; 5822 + mp = mboxq->ctx_buf; 5823 5823 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 5824 5824 5825 5825 lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, ··· 6849 6849 { 6850 6850 struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; 6851 6851 6852 - spin_lock_irq(&phba->hbalock); 6852 + spin_lock_irq(&phba->ras_fwlog_lock); 6853 6853 ras_fwlog->state = INACTIVE; 6854 - spin_unlock_irq(&phba->hbalock); 6854 + spin_unlock_irq(&phba->ras_fwlog_lock); 6855 6855 6856 6856 /* Disable FW logging to host memory */ 6857 6857 writel(LPFC_CTL_PDEV_CTL_DDL_RAS, ··· 6894 6894 ras_fwlog->lwpd.virt = NULL; 6895 6895 } 6896 6896 6897 - spin_lock_irq(&phba->hbalock); 6897 + spin_lock_irq(&phba->ras_fwlog_lock); 6898 6898 ras_fwlog->state = INACTIVE; 6899 - spin_unlock_irq(&phba->hbalock); 6899 + spin_unlock_irq(&phba->ras_fwlog_lock); 6900 6900 } 6901 6901 6902 6902 /** ··· 6998 6998 goto disable_ras; 6999 6999 } 7000 7000 7001 - spin_lock_irq(&phba->hbalock); 7001 + spin_lock_irq(&phba->ras_fwlog_lock); 7002 7002 ras_fwlog->state = ACTIVE; 7003 - spin_unlock_irq(&phba->hbalock); 7003 + spin_unlock_irq(&phba->ras_fwlog_lock); 7004 7004 mempool_free(pmb, phba->mbox_mem_pool); 7005 7005 7006 7006 return; ··· 7032 7032 uint32_t len = 0, fwlog_buffsize, fwlog_entry_count; 7033 7033 int rc = 0; 7034 7034 7035 - spin_lock_irq(&phba->hbalock); 7035 + spin_lock_irq(&phba->ras_fwlog_lock); 7036 7036 ras_fwlog->state = INACTIVE; 7037 - spin_unlock_irq(&phba->hbalock); 7037 + spin_unlock_irq(&phba->ras_fwlog_lock); 7038 7038 7039 7039 fwlog_buffsize = (LPFC_RAS_MIN_BUFF_POST_SIZE * 7040 7040 phba->cfg_ras_fwlog_buffsize); ··· 7095 7095 mbx_fwlog->u.request.lwpd.addr_lo = putPaddrLow(ras_fwlog->lwpd.phys); 7096 7096 mbx_fwlog->u.request.lwpd.addr_hi = putPaddrHigh(ras_fwlog->lwpd.phys); 7097 7097 7098 - spin_lock_irq(&phba->hbalock); 7098 + spin_lock_irq(&phba->ras_fwlog_lock); 7099 7099 ras_fwlog->state = REG_INPROGRESS; 7100 - spin_unlock_irq(&phba->hbalock); 7100 + spin_unlock_irq(&phba->ras_fwlog_lock); 7101 7101 mbox->vport = phba->pport; 7102 7102 mbox->mbox_cmpl = lpfc_sli4_ras_mbox_cmpl; 7103 7103 ··· 8766 8766 8767 8767 mboxq->vport = vport; 8768 8768 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 8769 - mp = (struct lpfc_dmabuf *)mboxq->ctx_buf; 8769 + mp = mboxq->ctx_buf; 8770 8770 if (rc == MBX_SUCCESS) { 8771 8771 memcpy(&vport->fc_sparam, mp->virt, sizeof(struct serv_parm)); 8772 8772 rc = 0; ··· 9548 9548 } 9549 9549 9550 9550 /* Copy the mailbox extension data */ 9551 - if (pmbox->in_ext_byte_len && pmbox->ctx_buf) { 9552 - lpfc_sli_pcimem_bcopy(pmbox->ctx_buf, 9551 + if (pmbox->in_ext_byte_len && pmbox->ext_buf) { 9552 + lpfc_sli_pcimem_bcopy(pmbox->ext_buf, 9553 9553 (uint8_t *)phba->mbox_ext, 9554 9554 pmbox->in_ext_byte_len); 9555 9555 } ··· 9562 9562 = MAILBOX_HBA_EXT_OFFSET; 9563 9563 9564 9564 /* Copy the mailbox extension data */ 9565 - if (pmbox->in_ext_byte_len && pmbox->ctx_buf) 9565 + if (pmbox->in_ext_byte_len && pmbox->ext_buf) 9566 9566 lpfc_memcpy_to_slim(phba->MBslimaddr + 9567 9567 MAILBOX_HBA_EXT_OFFSET, 9568 - pmbox->ctx_buf, pmbox->in_ext_byte_len); 9568 + pmbox->ext_buf, pmbox->in_ext_byte_len); 9569 9569 9570 9570 if (mbx->mbxCommand == MBX_CONFIG_PORT) 9571 9571 /* copy command data into host mbox for cmpl */ ··· 9688 9688 lpfc_sli_pcimem_bcopy(phba->mbox, mbx, 9689 9689 MAILBOX_CMD_SIZE); 9690 9690 /* Copy the mailbox extension data */ 9691 - if (pmbox->out_ext_byte_len && pmbox->ctx_buf) { 9691 + if (pmbox->out_ext_byte_len && pmbox->ext_buf) { 9692 9692 lpfc_sli_pcimem_bcopy(phba->mbox_ext, 9693 - pmbox->ctx_buf, 9693 + pmbox->ext_buf, 9694 9694 pmbox->out_ext_byte_len); 9695 9695 } 9696 9696 } else { ··· 9698 9698 lpfc_memcpy_from_slim(mbx, phba->MBslimaddr, 9699 9699 MAILBOX_CMD_SIZE); 9700 9700 /* Copy the mailbox extension data */ 9701 - if (pmbox->out_ext_byte_len && pmbox->ctx_buf) { 9701 + if (pmbox->out_ext_byte_len && pmbox->ext_buf) { 9702 9702 lpfc_memcpy_from_slim( 9703 - pmbox->ctx_buf, 9703 + pmbox->ext_buf, 9704 9704 phba->MBslimaddr + 9705 9705 MAILBOX_HBA_EXT_OFFSET, 9706 9706 pmbox->out_ext_byte_len); ··· 11373 11373 unsigned long iflags; 11374 11374 struct lpfc_work_evt *evtp = &ndlp->recovery_evt; 11375 11375 11376 + /* Hold a node reference for outstanding queued work */ 11377 + if (!lpfc_nlp_get(ndlp)) 11378 + return; 11379 + 11376 11380 spin_lock_irqsave(&phba->hbalock, iflags); 11377 11381 if (!list_empty(&evtp->evt_listp)) { 11378 11382 spin_unlock_irqrestore(&phba->hbalock, iflags); 11383 + lpfc_nlp_put(ndlp); 11379 11384 return; 11380 11385 } 11381 11386 11382 - /* Incrementing the reference count until the queued work is done. */ 11383 - evtp->evt_arg1 = lpfc_nlp_get(ndlp); 11384 - if (!evtp->evt_arg1) { 11385 - spin_unlock_irqrestore(&phba->hbalock, iflags); 11386 - return; 11387 - } 11387 + evtp->evt_arg1 = ndlp; 11388 11388 evtp->evt = LPFC_EVT_RECOVER_PORT; 11389 11389 list_add_tail(&evtp->evt_listp, &phba->work_list); 11390 11390 spin_unlock_irqrestore(&phba->hbalock, iflags); ··· 13262 13262 /* setup wake call as IOCB callback */ 13263 13263 pmboxq->mbox_cmpl = lpfc_sli_wake_mbox_wait; 13264 13264 13265 - /* setup context3 field to pass wait_queue pointer to wake function */ 13265 + /* setup ctx_u field to pass wait_queue pointer to wake function */ 13266 13266 init_completion(&mbox_done); 13267 13267 pmboxq->ctx_u.mbox_wait = &mbox_done; 13268 13268 /* now issue the command */ 13269 13269 retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT); 13270 13270 if (retval == MBX_BUSY || retval == MBX_SUCCESS) { ··· 13272 13272 msecs_to_jiffies(timeout * 1000)); 13273 13273 13274 13274 spin_lock_irqsave(&phba->hbalock, flag); 13275 - pmboxq->context3 = NULL; 13275 + pmboxq->ctx_u.mbox_wait = NULL; 13276 13276 /* 13277 13277 * if LPFC_MBX_WAKE flag is set the mailbox is completed 13278 13278 * else do not free the resources.
··· 13813 13813 lpfc_sli_pcimem_bcopy(mbox, pmbox, 13814 13814 MAILBOX_CMD_SIZE); 13815 13815 if (pmb->out_ext_byte_len && 13816 - pmb->ctx_buf) 13816 + pmb->ext_buf) 13817 13817 lpfc_sli_pcimem_bcopy( 13818 13818 phba->mbox_ext, 13819 - pmb->ctx_buf, 13819 + pmb->ext_buf, 13820 13820 pmb->out_ext_byte_len); 13821 13821 } 13822 13822 if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) { ··· 13830 13830 pmbox->un.varWords[0], 0); 13831 13831 13832 13832 if (!pmbox->mbxStatus) { 13833 - mp = (struct lpfc_dmabuf *) 13834 - (pmb->ctx_buf); 13835 - ndlp = (struct lpfc_nodelist *) 13836 - pmb->ctx_ndlp; 13833 + mp = pmb->ctx_buf; 13834 + ndlp = pmb->ctx_ndlp; 13837 13835 13838 13836 /* Reg_LOGIN of dflt RPI was 13839 13837 * successful. new lets get ··· 14338 14340 mcqe_status, 14339 14341 pmbox->un.varWords[0], 0); 14340 14342 if (mcqe_status == MB_CQE_STATUS_SUCCESS) { 14341 - mp = (struct lpfc_dmabuf *)(pmb->ctx_buf); 14342 - ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp; 14343 + mp = pmb->ctx_buf; 14344 + ndlp = pmb->ctx_ndlp; 14343 14345 14344 14346 /* Reg_LOGIN of dflt RPI was successful. Mark the 14345 14347 * node as having an UNREG_LOGIN in progress to stop ··· 19821 19823 * lpfc_sli4_resume_rpi - Remove the rpi bitmask region 19822 19824 * @ndlp: pointer to lpfc nodelist data structure. 19823 19825 * @cmpl: completion call-back. 19824 - * @arg: data to load as MBox 'caller buffer information' 19826 + * @iocbq: data to load as mbox ctx_u information 19825 19827 * 19826 19828 * This routine is invoked to remove the memory region that 19827 19829 * provided rpi via a bitmask. 
19828 19830 **/ 19829 19831 int 19830 19832 lpfc_sli4_resume_rpi(struct lpfc_nodelist *ndlp, 19831 - void (*cmpl)(struct lpfc_hba *, LPFC_MBOXQ_t *), void *arg) 19833 + void (*cmpl)(struct lpfc_hba *, LPFC_MBOXQ_t *), 19834 + struct lpfc_iocbq *iocbq) 19832 19835 { 19833 19836 LPFC_MBOXQ_t *mboxq; 19834 19837 struct lpfc_hba *phba = ndlp->phba; ··· 19858 19859 lpfc_resume_rpi(mboxq, ndlp); 19859 19860 if (cmpl) { 19860 19861 mboxq->mbox_cmpl = cmpl; 19861 - mboxq->ctx_buf = arg; 19862 + mboxq->ctx_u.save_iocb = iocbq; 19862 19863 } else 19863 19864 mboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 19864 19865 mboxq->ctx_ndlp = ndlp; ··· 20675 20676 if (lpfc_sli4_dump_cfg_rg23(phba, mboxq)) 20676 20677 goto out; 20677 20678 mqe = &mboxq->u.mqe; 20678 - mp = (struct lpfc_dmabuf *)mboxq->ctx_buf; 20679 + mp = mboxq->ctx_buf; 20679 20680 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 20680 20681 if (rc) 20681 20682 goto out; ··· 21034 21035 (mb->u.mb.mbxCommand == MBX_REG_VPI)) 21035 21036 mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 21036 21037 if (mb->u.mb.mbxCommand == MBX_REG_LOGIN64) { 21037 - act_mbx_ndlp = (struct lpfc_nodelist *)mb->ctx_ndlp; 21038 + act_mbx_ndlp = mb->ctx_ndlp; 21038 21039 21039 21040 /* This reference is local to this routine. The 21040 21041 * reference is removed at routine exit. 
··· 21063 21064 21064 21065 mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 21065 21066 if (mb->u.mb.mbxCommand == MBX_REG_LOGIN64) { 21066 - ndlp = (struct lpfc_nodelist *)mb->ctx_ndlp; 21067 + ndlp = mb->ctx_ndlp; 21067 21068 /* Unregister the RPI when mailbox complete */ 21068 21069 mb->mbox_flag |= LPFC_MBX_IMED_UNREG; 21069 21070 restart_loop = 1; ··· 21083 21084 while (!list_empty(&mbox_cmd_list)) { 21084 21085 list_remove_head(&mbox_cmd_list, mb, LPFC_MBOXQ_t, list); 21085 21086 if (mb->u.mb.mbxCommand == MBX_REG_LOGIN64) { 21086 - ndlp = (struct lpfc_nodelist *)mb->ctx_ndlp; 21087 + ndlp = mb->ctx_ndlp; 21087 21088 mb->ctx_ndlp = NULL; 21088 21089 if (ndlp) { 21089 21090 spin_lock(&ndlp->lock);
+24 -6
drivers/scsi/lpfc/lpfc_sli.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2023 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 182 182 struct lpfc_mqe mqe; 183 183 } u; 184 184 struct lpfc_vport *vport; /* virtual port pointer */ 185 - void *ctx_ndlp; /* an lpfc_nodelist pointer */ 186 - void *ctx_buf; /* an lpfc_dmabuf pointer */ 187 - void *context3; /* a generic pointer. Code must 188 - * accommodate the actual datatype. 189 - */ 185 + struct lpfc_nodelist *ctx_ndlp; /* caller ndlp pointer */ 186 + struct lpfc_dmabuf *ctx_buf; /* caller buffer information */ 187 + void *ext_buf; /* extended buffer for extended mbox 188 + * cmds. Not a generic pointer. 189 + * Use for storing virtual address. 190 + */ 191 + 192 + /* Pointers that are seldom used during mbox execution, but require 193 + * a saved context. 194 + */ 195 + union { 196 + unsigned long ox_rx_id; /* Used in els_rsp_rls_acc */ 197 + struct lpfc_rdp_context *rdp; /* Used in get_rdp_info */ 198 + struct lpfc_lcb_context *lcb; /* Used in set_beacon */ 199 + struct completion *mbox_wait; /* Used in issue_mbox_wait */ 200 + struct bsg_job_data *dd_data; /* Used in bsg_issue_mbox_cmpl 201 + * and 202 + * bsg_issue_mbox_ext_handle_job 203 + */ 204 + struct lpfc_iocbq *save_iocb; /* Used in defer_plogi_acc and 205 + * lpfc_mbx_cmpl_resume_rpi 206 + */ 207 + } ctx_u; 190 208 191 209 void (*mbox_cmpl) (struct lpfc_hba *, struct lpfcMboxq *); 192 210 uint8_t mbox_flag;
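The lpfc_sli.h hunk replaces the untyped `ctx_ndlp`/`ctx_buf`/`context3` pointers with typed fields plus a `ctx_u` union of seldom-used per-command contexts. A small sketch of that pattern, with stand-in member types rather than the driver's real structures:

```c
#include <assert.h>

/* Context-union pattern: instead of a generic void * that every caller
 * must cast, per-command context lives in a union of typed pointers,
 * so the producer and the completion handler agree on the type without
 * casts.  Exactly one member is meaningful for a given command. */
struct rdp_context { int page_a0_done; };
struct lcb_context { int beacon_state; };

struct mboxq_model {
	union {
		unsigned long ox_rx_id;		/* e.g. for RLS accept */
		struct rdp_context *rdp;	/* e.g. for RDP info */
		struct lcb_context *lcb;	/* e.g. for set-beacon */
	} ctx_u;
};

static int rdp_page_done(const struct mboxq_model *m)
{
	return m->ctx_u.rdp->page_a0_done;
}
```

The union costs no extra space over a single pointer, and the compiler now rejects mismatched assignments that the old `void *` fields silently accepted.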
+4 -3
drivers/scsi/lpfc/lpfc_sli4.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2023 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2009-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 1118 1118 void lpfc_sli4_remove_rpis(struct lpfc_hba *); 1119 1119 void lpfc_sli4_async_event_proc(struct lpfc_hba *); 1120 1120 void lpfc_sli4_fcf_redisc_event_proc(struct lpfc_hba *); 1121 - int lpfc_sli4_resume_rpi(struct lpfc_nodelist *, 1122 - void (*)(struct lpfc_hba *, LPFC_MBOXQ_t *), void *); 1121 + int lpfc_sli4_resume_rpi(struct lpfc_nodelist *ndlp, 1122 + void (*cmpl)(struct lpfc_hba *, LPFC_MBOXQ_t *), 1123 + struct lpfc_iocbq *iocbq); 1123 1124 void lpfc_sli4_els_xri_abort_event_proc(struct lpfc_hba *phba); 1124 1125 void lpfc_sli4_nvme_pci_offline_aborted(struct lpfc_hba *phba, 1125 1126 struct lpfc_io_buf *lpfc_ncmd);
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "14.4.0.0" 23 + #define LPFC_DRIVER_VERSION "14.4.0.1" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+5 -5
drivers/scsi/lpfc/lpfc_vport.c
··· 166 166 } 167 167 } 168 168 169 - mp = (struct lpfc_dmabuf *)pmb->ctx_buf; 169 + mp = pmb->ctx_buf; 170 170 memcpy(&vport->fc_sparam, mp->virt, sizeof (struct serv_parm)); 171 171 memcpy(&vport->fc_nodename, &vport->fc_sparam.nodeName, 172 172 sizeof (struct lpfc_name)); ··· 674 674 lpfc_free_sysfs_attr(vport); 675 675 lpfc_debugfs_terminate(vport); 676 676 677 - /* Remove FC host to break driver binding. */ 678 - fc_remove_host(shost); 679 - scsi_remove_host(shost); 680 - 681 677 /* Send the DA_ID and Fabric LOGO to cleanup Nameserver entries. */ 682 678 ndlp = lpfc_findnode_did(vport, Fabric_DID); 683 679 if (!ndlp) ··· 716 720 lpfc_discovery_wait(vport); 717 721 718 722 skip_logo: 723 + 724 + /* Remove FC host to break driver binding. */ 725 + fc_remove_host(shost); 726 + scsi_remove_host(shost); 719 727 720 728 lpfc_cleanup(vport); 721 729
+1 -1
drivers/scsi/mpi3mr/mpi3mr_app.c
··· 1644 1644 if ((mpirep_offset != 0xFF) && 1645 1645 drv_bufs[mpirep_offset].bsg_buf_len) { 1646 1646 drv_buf_iter = &drv_bufs[mpirep_offset]; 1647 - drv_buf_iter->kern_buf_len = (sizeof(*bsg_reply_buf) - 1 + 1647 + drv_buf_iter->kern_buf_len = (sizeof(*bsg_reply_buf) + 1648 1648 mrioc->reply_sz); 1649 1649 bsg_reply_buf = kzalloc(drv_buf_iter->kern_buf_len, GFP_KERNEL); 1650 1650
+11 -9
drivers/scsi/pmcraid.c
··· 61 61 * pmcraid_minor - minor number(s) to use 62 62 */ 63 63 static unsigned int pmcraid_major; 64 - static struct class *pmcraid_class; 64 + static const struct class pmcraid_class = { 65 + .name = PMCRAID_DEVFILE, 66 + }; 65 67 static DECLARE_BITMAP(pmcraid_minor, PMCRAID_MAX_ADAPTERS); 66 68 67 69 /* ··· 4725 4723 if (error) 4726 4724 pmcraid_release_minor(minor); 4727 4725 else 4728 - device_create(pmcraid_class, NULL, MKDEV(pmcraid_major, minor), 4726 + device_create(&pmcraid_class, NULL, MKDEV(pmcraid_major, minor), 4729 4727 NULL, "%s%u", PMCRAID_DEVFILE, minor); 4730 4728 return error; 4731 4729 } ··· 4741 4739 static void pmcraid_release_chrdev(struct pmcraid_instance *pinstance) 4742 4740 { 4743 4741 pmcraid_release_minor(MINOR(pinstance->cdev.dev)); 4744 - device_destroy(pmcraid_class, 4742 + device_destroy(&pmcraid_class, 4745 4743 MKDEV(pmcraid_major, MINOR(pinstance->cdev.dev))); 4746 4744 cdev_del(&pinstance->cdev); 4747 4745 } ··· 5392 5390 } 5393 5391 5394 5392 pmcraid_major = MAJOR(dev); 5395 - pmcraid_class = class_create(PMCRAID_DEVFILE); 5396 5393 5397 - if (IS_ERR(pmcraid_class)) { 5398 - error = PTR_ERR(pmcraid_class); 5394 + error = class_register(&pmcraid_class); 5395 + 5396 + if (error) { 5399 5397 pmcraid_err("failed to register with sysfs, error = %x\n", 5400 5398 error); 5401 5399 goto out_unreg_chrdev; ··· 5404 5402 error = pmcraid_netlink_init(); 5405 5403 5406 5404 if (error) { 5407 - class_destroy(pmcraid_class); 5405 + class_unregister(&pmcraid_class); 5408 5406 goto out_unreg_chrdev; 5409 5407 } ··· 5415 5413 5416 5414 pmcraid_err("failed to register pmcraid driver, error = %x\n", 5417 5415 error); 5418 - class_destroy(pmcraid_class); 5416 + class_unregister(&pmcraid_class); 5419 5417 pmcraid_netlink_release(); 5420 5418 5421 5419 out_unreg_chrdev: ··· 5434 5432 unregister_chrdev_region(MKDEV(pmcraid_major, 0), 5435 5433 PMCRAID_MAX_ADAPTERS); 5436 5434 pci_unregister_driver(&pmcraid_driver); 5437 - class_destroy(pmcraid_class); 5435 + class_unregister(&pmcraid_class); 5438 5436 5439 5437 module_init(pmcraid_init);
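The pmcraid hunk converts a heap-allocated class handle (`class_create()`/`class_destroy()`) into a statically defined `struct class` registered with `class_register()`. A userspace analogy of the shape of that conversion, with a one-slot registry standing in for the driver core (everything here is a stand-in, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Statically defined object registered by reference, instead of one
 * returned by a factory: the caller owns the storage, registration
 * only records a pointer and can fail with a plain error code. */
struct dev_class {
	const char *name;
};

static const struct dev_class *registered_class;

static int class_register_model(const struct dev_class *cls)
{
	if (!cls->name || !cls->name[0])
		return -1;	/* invalid name, EINVAL stand-in */
	registered_class = cls;
	return 0;
}

static void class_unregister_model(const struct dev_class *cls)
{
	if (registered_class == cls)
		registered_class = NULL;
}
```

The static form removes the `IS_ERR()`/`PTR_ERR()` dance on allocation and lets `const` data live in read-only memory.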
+12 -2
drivers/scsi/qla2xxx/qla_attr.c
··· 2741 2741 return; 2742 2742 2743 2743 if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) { 2744 - qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16); 2744 + /* Will wait for wind down of adapter */ 2745 + ql_dbg(ql_dbg_aer, fcport->vha, 0x900c, 2746 + "%s pci offline detected (id %06x)\n", __func__, 2747 + fcport->d_id.b24); 2748 + qla_pci_set_eeh_busy(fcport->vha); 2749 + qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24, 2750 + 0, WAIT_TARGET); 2745 2751 return; 2746 2752 } 2747 2753 } ··· 2769 2763 vha = fcport->vha; 2770 2764 2771 2765 if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) { 2772 - qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16); 2766 + /* Will wait for wind down of adapter */ 2767 + ql_dbg(ql_dbg_aer, fcport->vha, 0x900b, 2768 + "%s pci offline detected (id %06x)\n", __func__, 2769 + fcport->d_id.b24); 2770 + qla_pci_set_eeh_busy(vha); 2773 2771 qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24, 2774 2772 0, WAIT_TARGET); 2775 2773 return;
+1 -1
drivers/scsi/qla2xxx/qla_def.h
··· 82 82 #include "qla_nvme.h" 83 83 #define QLA2XXX_DRIVER_NAME "qla2xxx" 84 84 #define QLA2XXX_APIDEV "ql2xapidev" 85 - #define QLA2XXX_MANUFACTURER "Marvell Semiconductor, Inc." 85 + #define QLA2XXX_MANUFACTURER "Marvell" 86 86 87 87 /* 88 88 * We have MAILBOX_REGISTER_COUNT sized arrays in a few places,
+1 -1
drivers/scsi/qla2xxx/qla_gbl.h
··· 44 44 extern int qla2x00_local_device_login(scsi_qla_host_t *, fc_port_t *); 45 45 46 46 extern int qla24xx_els_dcmd_iocb(scsi_qla_host_t *, int, port_id_t); 47 - extern int qla24xx_els_dcmd2_iocb(scsi_qla_host_t *, int, fc_port_t *, bool); 47 + extern int qla24xx_els_dcmd2_iocb(scsi_qla_host_t *, int, fc_port_t *); 48 48 extern void qla2x00_els_dcmd2_free(scsi_qla_host_t *vha, 49 49 struct els_plogi *els_plogi); 50 50
+65 -63
drivers/scsi/qla2xxx/qla_init.c
··· 1193 1193 return rval; 1194 1194 1195 1195 done_free_sp: 1196 - /* ref: INIT */ 1197 - kref_put(&sp->cmd_kref, qla2x00_sp_release); 1196 + /* 1197 + * use qla24xx_async_gnl_sp_done to purge all pending gnl request. 1198 + * kref_put is call behind the scene. 1199 + */ 1200 + sp->u.iocb_cmd.u.mbx.in_mb[0] = MBS_COMMAND_ERROR; 1201 + qla24xx_async_gnl_sp_done(sp, QLA_COMMAND_ERROR); 1198 1202 fcport->flags &= ~(FCF_ASYNC_SENT); 1199 1203 done: 1200 1204 fcport->flags &= ~(FCF_ASYNC_ACTIVE); ··· 2669 2665 return rval; 2670 2666 } 2671 2667 2668 + static void qla_enable_fce_trace(scsi_qla_host_t *vha) 2669 + { 2670 + int rval; 2671 + struct qla_hw_data *ha = vha->hw; 2672 + 2673 + if (ha->fce) { 2674 + ha->flags.fce_enabled = 1; 2675 + memset(ha->fce, 0, fce_calc_size(ha->fce_bufs)); 2676 + rval = qla2x00_enable_fce_trace(vha, 2677 + ha->fce_dma, ha->fce_bufs, ha->fce_mb, &ha->fce_bufs); 2678 + 2679 + if (rval) { 2680 + ql_log(ql_log_warn, vha, 0x8033, 2681 + "Unable to reinitialize FCE (%d).\n", rval); 2682 + ha->flags.fce_enabled = 0; 2683 + } 2684 + } 2685 + } 2686 + 2687 + static void qla_enable_eft_trace(scsi_qla_host_t *vha) 2688 + { 2689 + int rval; 2690 + struct qla_hw_data *ha = vha->hw; 2691 + 2692 + if (ha->eft) { 2693 + memset(ha->eft, 0, EFT_SIZE); 2694 + rval = qla2x00_enable_eft_trace(vha, ha->eft_dma, EFT_NUM_BUFFERS); 2695 + 2696 + if (rval) { 2697 + ql_log(ql_log_warn, vha, 0x8034, 2698 + "Unable to reinitialize EFT (%d).\n", rval); 2699 + } 2700 + } 2701 + } 2672 2702 /* 2673 2703 * qla2x00_initialize_adapter 2674 2704 * Initialize board. 
··· 3706 3668 } 3707 3669 3708 3670 static void 3709 - qla2x00_init_fce_trace(scsi_qla_host_t *vha) 3671 + qla2x00_alloc_fce_trace(scsi_qla_host_t *vha) 3710 3672 { 3711 - int rval; 3712 3673 dma_addr_t tc_dma; 3713 3674 void *tc; 3714 3675 struct qla_hw_data *ha = vha->hw; ··· 3736 3699 return; 3737 3700 } 3738 3701 3739 - rval = qla2x00_enable_fce_trace(vha, tc_dma, FCE_NUM_BUFFERS, 3740 - ha->fce_mb, &ha->fce_bufs); 3741 - if (rval) { 3742 - ql_log(ql_log_warn, vha, 0x00bf, 3743 - "Unable to initialize FCE (%d).\n", rval); 3744 - dma_free_coherent(&ha->pdev->dev, FCE_SIZE, tc, tc_dma); 3745 - return; 3746 - } 3747 - 3748 3702 ql_dbg(ql_dbg_init, vha, 0x00c0, 3749 3703 "Allocated (%d KB) for FCE...\n", FCE_SIZE / 1024); 3750 3704 3751 - ha->flags.fce_enabled = 1; 3752 3705 ha->fce_dma = tc_dma; 3753 3706 ha->fce = tc; 3707 + ha->fce_bufs = FCE_NUM_BUFFERS; 3754 3708 } 3755 3709 3756 3710 static void 3757 - qla2x00_init_eft_trace(scsi_qla_host_t *vha) 3711 + qla2x00_alloc_eft_trace(scsi_qla_host_t *vha) 3758 3712 { 3759 - int rval; 3760 3713 dma_addr_t tc_dma; 3761 3714 void *tc; 3762 3715 struct qla_hw_data *ha = vha->hw; ··· 3771 3744 return; 3772 3745 } 3773 3746 3774 - rval = qla2x00_enable_eft_trace(vha, tc_dma, EFT_NUM_BUFFERS); 3775 - if (rval) { 3776 - ql_log(ql_log_warn, vha, 0x00c2, 3777 - "Unable to initialize EFT (%d).\n", rval); 3778 - dma_free_coherent(&ha->pdev->dev, EFT_SIZE, tc, tc_dma); 3779 - return; 3780 - } 3781 - 3782 3747 ql_dbg(ql_dbg_init, vha, 0x00c3, 3783 3748 "Allocated (%d KB) EFT ...\n", EFT_SIZE / 1024); 3784 3749 3785 3750 ha->eft_dma = tc_dma; 3786 3751 ha->eft = tc; 3787 - } 3788 - 3789 - static void 3790 - qla2x00_alloc_offload_mem(scsi_qla_host_t *vha) 3791 - { 3792 - qla2x00_init_fce_trace(vha); 3793 - qla2x00_init_eft_trace(vha); 3794 3752 } 3795 3753 3796 3754 void ··· 3832 3820 if (ha->tgt.atio_ring) 3833 3821 mq_size += ha->tgt.atio_q_length * sizeof(request_t); 3834 3822 3835 - qla2x00_alloc_fce_trace(vha); 3836 3824 if (ha->fce) 3837 3825 fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE; 3838 - qla2x00_init_eft_trace(vha); 3826 + qla2x00_alloc_eft_trace(vha); 3839 3827 if (ha->eft) 3840 3828 eft_size = EFT_SIZE; 3841 3829 } ··· 4265 4253 struct qla_hw_data *ha = vha->hw; 4266 4254 struct device_reg_2xxx __iomem *reg = &ha->iobase->isp; 4267 4255 unsigned long flags; 4268 - uint16_t fw_major_version; 4269 4256 int done_once = 0; 4270 4257 4271 4258 if (IS_P3P_TYPE(ha)) { ··· 4331 4320 goto failed; 4332 4321 4333 4322 enable_82xx_npiv: 4334 - fw_major_version = ha->fw_major_version; 4335 4323 if (IS_P3P_TYPE(ha)) 4336 4324 qla82xx_check_md_needed(vha); 4337 4325 else ··· 4359 4349 if (rval != QLA_SUCCESS) 4360 4350 goto failed; 4361 4351 4362 - if (!fw_major_version && !(IS_P3P_TYPE(ha))) 4363 - qla2x00_alloc_offload_mem(vha); 4364 - 4365 4352 if (ql2xallocfwdump && !(IS_P3P_TYPE(ha))) 4366 4353 qla2x00_alloc_fw_dump(vha); 4367 4354 4355 + qla_enable_fce_trace(vha); 4356 + qla_enable_eft_trace(vha); 4368 4357 } else { 4369 4358 goto failed; 4370 4359 } ··· 7496 7487 int 7497 7488 qla2x00_abort_isp(scsi_qla_host_t *vha) 7498 7489 { 7499 - int rval; 7500 7490 uint8_t status = 0; 7501 7491 struct qla_hw_data *ha = vha->hw; 7502 7492 struct scsi_qla_host *vp, *tvp; 7503 7493 struct req_que *req = ha->req_q_map[0]; 7504 7494 unsigned long flags; 7495 + fc_port_t *fcport; 7505 7496 7506 7497 if (vha->flags.online) { 7507 7498 qla2x00_abort_isp_cleanup(vha); ··· 7570 7561 "ISP Abort - ISP reg disconnect post nvmram config, exiting.\n"); 7571 7562 return status; 7572 7563 } 7564 + 7565 + /* User may have updated [fcp|nvme] prefer in flash */ 7566 + list_for_each_entry(fcport, &vha->vp_fcports, list) { 7567 + if (NVME_PRIORITY(ha, fcport)) 7568 + fcport->do_prli_nvme = 1; 7569 + else 7570 + fcport->do_prli_nvme = 0; 7571 + } 7572 + 7573 7573 if (!qla2x00_restart_isp(vha)) { 7574 7574 clear_bit(RESET_MARKER_NEEDED, &vha->dpc_flags); 7575 7575
··· 7599 7581 7600 7582 if (IS_QLA81XX(ha) || IS_QLA8031(ha)) 7601 7583 qla2x00_get_fw_version(vha); 7602 - if (ha->fce) { 7603 - ha->flags.fce_enabled = 1; 7604 - memset(ha->fce, 0, 7605 - fce_calc_size(ha->fce_bufs)); 7606 - rval = qla2x00_enable_fce_trace(vha, 7607 - ha->fce_dma, ha->fce_bufs, ha->fce_mb, 7608 - &ha->fce_bufs); 7609 - if (rval) { 7610 - ql_log(ql_log_warn, vha, 0x8033, 7611 - "Unable to reinitialize FCE " 7612 - "(%d).\n", rval); 7613 - ha->flags.fce_enabled = 0; 7614 - } 7615 - } 7616 7584 7617 - if (ha->eft) { 7618 - memset(ha->eft, 0, EFT_SIZE); 7619 - rval = qla2x00_enable_eft_trace(vha, 7620 - ha->eft_dma, EFT_NUM_BUFFERS); 7621 - if (rval) { 7622 - ql_log(ql_log_warn, vha, 0x8034, 7623 - "Unable to reinitialize EFT " 7624 - "(%d).\n", rval); 7625 - } 7626 - } 7627 7585 } else { /* failed the ISP abort */ 7628 7586 vha->flags.online = 1; 7629 7587 if (test_bit(ISP_ABORT_RETRY, &vha->dpc_flags)) { ··· 7648 7654 if (vp->vp_idx) { 7649 7655 atomic_inc(&vp->vref_count); 7650 7656 spin_unlock_irqrestore(&ha->vport_slock, flags); 7657 + 7658 + /* User may have updated [fcp|nvme] prefer in flash */ 7659 + list_for_each_entry(fcport, &vp->vp_fcports, list) { 7660 + if (NVME_PRIORITY(ha, fcport)) 7661 + fcport->do_prli_nvme = 1; 7662 + else 7663 + fcport->do_prli_nvme = 0; 7664 + } 7651 7665 7652 7666 qla2x00_vp_abort_isp(vp); 7653 7667
+44 -24
drivers/scsi/qla2xxx/qla_iocb.c
··· 2587 2587 qla2x00_sp_release(struct kref *kref) 2588 2588 { 2589 2589 struct srb *sp = container_of(kref, struct srb, cmd_kref); 2590 + struct scsi_qla_host *vha = sp->vha; 2591 + 2592 + switch (sp->type) { 2593 + case SRB_CT_PTHRU_CMD: 2594 + /* GPSC & GFPNID use fcport->ct_desc.ct_sns for both req & rsp */ 2595 + if (sp->u.iocb_cmd.u.ctarg.req && 2596 + (!sp->fcport || 2597 + sp->u.iocb_cmd.u.ctarg.req != sp->fcport->ct_desc.ct_sns)) { 2598 + dma_free_coherent(&vha->hw->pdev->dev, 2599 + sp->u.iocb_cmd.u.ctarg.req_allocated_size, 2600 + sp->u.iocb_cmd.u.ctarg.req, 2601 + sp->u.iocb_cmd.u.ctarg.req_dma); 2602 + sp->u.iocb_cmd.u.ctarg.req = NULL; 2603 + } 2604 + if (sp->u.iocb_cmd.u.ctarg.rsp && 2605 + (!sp->fcport || 2606 + sp->u.iocb_cmd.u.ctarg.rsp != sp->fcport->ct_desc.ct_sns)) { 2607 + dma_free_coherent(&vha->hw->pdev->dev, 2608 + sp->u.iocb_cmd.u.ctarg.rsp_allocated_size, 2609 + sp->u.iocb_cmd.u.ctarg.rsp, 2610 + sp->u.iocb_cmd.u.ctarg.rsp_dma); 2611 + sp->u.iocb_cmd.u.ctarg.rsp = NULL; 2612 + } 2613 + break; 2614 + default: 2615 + break; 2616 + } 2590 2617 2591 2618 sp->free(sp); 2592 2619 } ··· 2637 2610 { 2638 2611 struct srb_iocb *elsio = &sp->u.iocb_cmd; 2639 2612 2640 - kfree(sp->fcport); 2613 + if (sp->fcport) 2614 + qla2x00_free_fcport(sp->fcport); 2641 2615 2642 2616 if (elsio->u.els_logo.els_logo_pyld) 2643 2617 dma_free_coherent(&sp->vha->hw->pdev->dev, DMA_POOL_SIZE, ··· 2720 2692 */ 2721 2693 sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL); 2722 2694 if (!sp) { 2723 - kfree(fcport); 2695 + qla2x00_free_fcport(fcport); 2724 2696 ql_log(ql_log_info, vha, 0x70e6, 2725 2697 "SRB allocation failed\n"); 2726 2698 return -ENOMEM; ··· 2751 2723 if (!elsio->u.els_logo.els_logo_pyld) { 2752 2724 /* ref: INIT */ 2753 2725 kref_put(&sp->cmd_kref, qla2x00_sp_release); 2726 + qla2x00_free_fcport(fcport); 2754 2727 return QLA_FUNCTION_FAILED; 2755 2728 } ··· 2776 2747 if (rval != QLA_SUCCESS) { 2777 2748 /* ref: INIT */ kref_put(&sp->cmd_kref, qla2x00_sp_release); 2750 + qla2x00_free_fcport(fcport); 2779 2751 return QLA_FUNCTION_FAILED; 2780 2752 } ··· 3042 3012 3043 3013 int 3044 3014 qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode, 3045 - fc_port_t *fcport, bool wait) 3015 + fc_port_t *fcport) 3046 3016 { 3047 3017 srb_t *sp; 3048 3018 struct srb_iocb *elsio = NULL; ··· 3057 3027 if (!sp) { 3058 3028 ql_log(ql_log_info, vha, 0x70e6, 3059 3029 "SRB allocation failed\n"); 3060 - fcport->flags &= ~FCF_ASYNC_ACTIVE; 3061 - return -ENOMEM; 3030 + goto done; 3062 3031 } 3063 3032 3064 3033 fcport->flags |= FCF_ASYNC_SENT; ··· 3065 3036 elsio = &sp->u.iocb_cmd; 3066 3037 ql_dbg(ql_dbg_io, vha, 0x3073, 3067 3038 "%s Enter: PLOGI portid=%06x\n", __func__, fcport->d_id.b24); 3068 - 3069 - if (wait) 3070 - sp->flags = SRB_WAKEUP_ON_COMP; 3071 3039 3072 3040 sp->type = SRB_ELS_DCMD; 3073 3041 sp->name = "ELS_DCMD"; ··· 3081 3055 3082 3056 if (!elsio->u.els_plogi.els_plogi_pyld) { 3083 3057 rval = QLA_FUNCTION_FAILED; 3084 - goto out; 3058 + goto done_free_sp; 3085 3059 } 3086 3060 resp_ptr = elsio->u.els_plogi.els_resp_pyld = ··· 3090 3064 3091 3065 if (!elsio->u.els_plogi.els_resp_pyld) { 3092 3066 rval = QLA_FUNCTION_FAILED; 3093 - goto out; 3067 + goto done_free_sp; 3094 3068 } 3095 3069 3096 3070 ql_dbg(ql_dbg_io, vha, 0x3073, "PLOGI %p %p\n", ptr, resp_ptr); ··· 3106 3080 3107 3081 if (els_opcode == ELS_DCMD_PLOGI && DBELL_ACTIVE(vha)) { 3108 3082 struct fc_els_flogi *p = ptr; 3109 - 3110 3083 p->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_SEC); 3111 3084 } ··· 3114 3089 (uint8_t *)elsio->u.els_plogi.els_plogi_pyld, 3115 3090 sizeof(*elsio->u.els_plogi.els_plogi_pyld)); 3116 3091 3117 - init_completion(&elsio->u.els_plogi.comp); 3118 3092 rval = qla2x00_start_sp(sp); 3119 3093 if (rval != QLA_SUCCESS) { 3120 - rval = QLA_FUNCTION_FAILED; 3094 + fcport->flags |= FCF_LOGIN_NEEDED; 3095 + set_bit(RELOGIN_NEEDED, &vha->dpc_flags); 3096 + goto done_free_sp; 3121 3097 } else { 3122 3098 ql_dbg(ql_dbg_disc, vha, 0x3074, 3123 3099 "%s PLOGI sent, hdl=%x, loopid=%x, to port_id %06x from port_id %06x\n", ··· 3126 3100 fcport->d_id.b24, vha->d_id.b24); 3127 3101 } 3128 3102 3129 - if (wait) { 3130 - wait_for_completion(&elsio->u.els_plogi.comp); 3103 + return rval; 3131 3104 3132 - if (elsio->u.els_plogi.comp_status != CS_COMPLETE) 3133 - rval = QLA_FUNCTION_FAILED; 3134 - } else { 3135 - goto done; 3136 - } 3137 - 3138 - out: 3139 - fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE); 3105 + done_free_sp: 3140 3106 qla2x00_els_dcmd2_free(vha, &elsio->u.els_plogi); 3142 3108 /* ref: INIT */ 3143 3109 kref_put(&sp->cmd_kref, qla2x00_sp_release); 3144 3110 done: 3110 fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE); 3111 + qla2x00_set_fcport_disc_state(fcport, DSC_DELETED); 3144 3112 return rval; 3145 3113 } ··· 3938 3918 return -EAGAIN; 3939 3919 } 3940 3920 3941 - pkt = __qla2x00_alloc_iocbs(sp->qpair, sp); 3921 + pkt = qla2x00_alloc_iocbs_ready(sp->qpair, sp); 3942 3922 if (!pkt) { 3943 3923 rval = -EAGAIN; 3944 3924 ql_log(ql_log_warn, vha, 0x700c,
+1 -1
drivers/scsi/qla2xxx/qla_mbx.c
··· 194 194 if (ha->flags.purge_mbox || chip_reset != ha->chip_reset || 195 195 ha->flags.eeh_busy) { 196 196 ql_log(ql_log_warn, vha, 0xd035, 197 - "Error detected: purge[%d] eeh[%d] cmd=0x%x, Exiting.\n", 197 + "Purge mbox: purge[%d] eeh[%d] cmd=0x%x, Exiting.\n", 198 198 ha->flags.purge_mbox, ha->flags.eeh_busy, mcp->mb[0]); 199 199 rval = QLA_ABORTED; 200 200 goto premature_exit;
+2 -1
drivers/scsi/qla2xxx/qla_os.c
··· 4602 4602 ha->init_cb_dma = 0; 4603 4603 fail_free_vp_map: 4604 4604 kfree(ha->vp_map); 4605 + ha->vp_map = NULL; 4605 4606 fail: 4606 4607 ql_log(ql_log_fatal, NULL, 0x0030, 4607 4608 "Memory allocation failure.\n"); ··· 5584 5583 break; 5585 5584 case QLA_EVT_ELS_PLOGI: 5586 5585 qla24xx_els_dcmd2_iocb(vha, ELS_DCMD_PLOGI, 5587 - e->u.fcport.fcport, false); 5586 + e->u.fcport.fcport); 5588 5587 break; 5589 5588 case QLA_EVT_SA_REPLACE: 5590 5589 rc = qla24xx_issue_sa_replace_iocb(vha, e);
+10
drivers/scsi/qla2xxx/qla_target.c
··· 1062 1062 "%s: sess %p logout completed\n", __func__, sess); 1063 1063 } 1064 1064 1065 + /* check for any straggling io left behind */ 1066 + if (!(sess->flags & FCF_FCP2_DEVICE) && 1067 + qla2x00_eh_wait_for_pending_commands(sess->vha, sess->d_id.b24, 0, WAIT_TARGET)) { 1068 + ql_log(ql_log_warn, vha, 0x3027, 1069 + "IO not return. Resetting.\n"); 1070 + set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); 1071 + qla2xxx_wake_dpc(vha); 1072 + qla2x00_wait_for_chip_reset(vha); 1073 + } 1074 + 1065 1075 if (sess->logo_ack_needed) { 1066 1076 sess->logo_ack_needed = 0; 1067 1077 qla24xx_async_notify_ack(vha, sess,
+2 -2
drivers/scsi/qla2xxx/qla_version.h
··· 6 6 /* 7 7 * Driver version 8 8 */ 9 - #define QLA2XXX_VERSION "10.02.09.100-k" 9 + #define QLA2XXX_VERSION "10.02.09.200-k" 10 10 11 11 #define QLA_DRIVER_MAJOR_VER 10 12 12 #define QLA_DRIVER_MINOR_VER 2 13 13 #define QLA_DRIVER_PATCH_VER 9 14 - #define QLA_DRIVER_BETA_VER 100 14 + #define QLA_DRIVER_BETA_VER 200
+34
drivers/scsi/scsi_scan.c
··· 1642 1642 } 1643 1643 EXPORT_SYMBOL(scsi_add_device); 1644 1644 1645 + int scsi_resume_device(struct scsi_device *sdev) 1646 + { 1647 + struct device *dev = &sdev->sdev_gendev; 1648 + int ret = 0; 1649 + 1650 + device_lock(dev); 1651 + 1652 + /* 1653 + * Bail out if the device or its queue are not running. Otherwise, 1654 + * the rescan may block waiting for commands to be executed, with us 1655 + * holding the device lock. This can result in a potential deadlock 1656 + * in the power management core code when system resume is on-going. 1657 + */ 1658 + if (sdev->sdev_state != SDEV_RUNNING || 1659 + blk_queue_pm_only(sdev->request_queue)) { 1660 + ret = -EWOULDBLOCK; 1661 + goto unlock; 1662 + } 1663 + 1664 + if (dev->driver && try_module_get(dev->driver->owner)) { 1665 + struct scsi_driver *drv = to_scsi_driver(dev->driver); 1666 + 1667 + if (drv->resume) 1668 + ret = drv->resume(dev); 1669 + module_put(dev->driver->owner); 1670 + } 1671 + 1672 + unlock: 1673 + device_unlock(dev); 1674 + 1675 + return ret; 1676 + } 1677 + EXPORT_SYMBOL(scsi_resume_device); 1678 + 1645 1679 int scsi_rescan_device(struct scsi_device *sdev) 1646 1680 { 1647 1681 struct device *dev = &sdev->sdev_gendev;
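The new scsi_resume_device() above guards the driver callback twice: it bails out early when the device or its queue is not running (to avoid the deadlock against power-management code described in its comment), and it calls the driver's resume handler only after pinning the driver module. A minimal userspace sketch of that guard ordering, with illustrative names and states (no locking or module refcounting, which the real function also does):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel types; not the real structures. */
enum sdev_state { SDEV_RUNNING, SDEV_QUIESCE };

struct fake_sdev {
	enum sdev_state state;
	int (*resume)(struct fake_sdev *);
};

static int fake_resume(struct fake_sdev *sdev)
{
	(void)sdev;
	return 0;
}

static int resume_device(struct fake_sdev *sdev)
{
	/* Bail out instead of blocking: rescanning a non-running device
	 * could wait forever while system resume is in progress. */
	if (sdev->state != SDEV_RUNNING)
		return -EWOULDBLOCK;

	/* Only now hand control to the driver, if it has a handler. */
	if (sdev->resume)
		return sdev->resume(sdev);
	return 0;
}
```

The ordering matters: the state check must come before the callback so that a suspended device is never asked to do work while the caller holds the device lock.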
+19 -4
drivers/scsi/sd.c
··· 4108 4108 return sd_suspend_common(dev, true); 4109 4109 } 4110 4110 4111 - static int sd_resume(struct device *dev, bool runtime) 4111 + static int sd_resume(struct device *dev) 4112 + { 4113 + struct scsi_disk *sdkp = dev_get_drvdata(dev); 4114 + 4115 + sd_printk(KERN_NOTICE, sdkp, "Starting disk\n"); 4116 + 4117 + if (opal_unlock_from_suspend(sdkp->opal_dev)) { 4118 + sd_printk(KERN_NOTICE, sdkp, "OPAL unlock failed\n"); 4119 + return -EIO; 4120 + } 4121 + 4122 + return 0; 4123 + } 4124 + 4125 + static int sd_resume_common(struct device *dev, bool runtime) 4112 4126 { 4113 4127 struct scsi_disk *sdkp = dev_get_drvdata(dev); 4114 4128 int ret; ··· 4138 4124 sd_printk(KERN_NOTICE, sdkp, "Starting disk\n"); 4139 4125 ret = sd_start_stop_device(sdkp, 1); 4140 4126 if (!ret) { 4141 - opal_unlock_from_suspend(sdkp->opal_dev); 4127 + sd_resume(dev); 4142 4128 sdkp->suspended = false; 4143 4129 } 4144 4130 ··· 4157 4143 return 0; 4158 4144 } 4159 4145 4160 - return sd_resume(dev, false); 4146 + return sd_resume_common(dev, false); 4161 4147 } 4162 4148 4163 4149 static int sd_resume_runtime(struct device *dev) ··· 4184 4170 "Failed to clear sense data\n"); 4185 4171 } 4186 4172 4187 - return sd_resume(dev, true); 4173 + return sd_resume_common(dev, true); 4188 4174 } 4189 4175 4190 4176 static const struct dev_pm_ops sd_pm_ops = { ··· 4207 4193 .pm = &sd_pm_ops, 4208 4194 }, 4209 4195 .rescan = sd_rescan, 4196 + .resume = sd_resume, 4210 4197 .init_command = sd_init_command, 4211 4198 .uninit_command = sd_uninit_command, 4212 4199 .done = sd_done,
+12 -10
drivers/scsi/sg.c
··· 1424 1424 .llseek = no_llseek, 1425 1425 }; 1426 1426 1427 - static struct class *sg_sysfs_class; 1427 + static const struct class sg_sysfs_class = { 1428 + .name = "scsi_generic" 1429 + }; 1428 1430 1429 1431 static int sg_sysfs_valid = 0; 1430 1432 ··· 1528 1526 if (sg_sysfs_valid) { 1529 1527 struct device *sg_class_member; 1530 1528 1531 - sg_class_member = device_create(sg_sysfs_class, cl_dev->parent, 1529 + sg_class_member = device_create(&sg_sysfs_class, cl_dev->parent, 1532 1530 MKDEV(SCSI_GENERIC_MAJOR, 1533 1531 sdp->index), 1534 1532 sdp, "%s", sdp->name); ··· 1618 1616 read_unlock_irqrestore(&sdp->sfd_lock, iflags); 1619 1617 1620 1618 sysfs_remove_link(&scsidp->sdev_gendev.kobj, "generic"); 1621 - device_destroy(sg_sysfs_class, MKDEV(SCSI_GENERIC_MAJOR, sdp->index)); 1619 + device_destroy(&sg_sysfs_class, MKDEV(SCSI_GENERIC_MAJOR, sdp->index)); 1622 1620 cdev_del(sdp->cdev); 1623 1621 sdp->cdev = NULL; 1624 1622 ··· 1689 1687 SG_MAX_DEVS, "sg"); 1690 1688 if (rc) 1691 1689 return rc; 1692 - sg_sysfs_class = class_create("scsi_generic"); 1693 - if ( IS_ERR(sg_sysfs_class) ) { 1694 - rc = PTR_ERR(sg_sysfs_class); 1690 + rc = class_register(&sg_sysfs_class); 1691 + if (rc) 1695 1692 goto err_out; 1696 - } 1697 1693 sg_sysfs_valid = 1; 1698 1694 rc = scsi_register_interface(&sg_interface); 1699 1695 if (0 == rc) { ··· 1700 1700 #endif /* CONFIG_SCSI_PROC_FS */ 1701 1701 return 0; 1702 1702 } 1703 - class_destroy(sg_sysfs_class); 1703 + class_unregister(&sg_sysfs_class); 1704 1704 register_sg_sysctls(); 1705 1705 err_out: 1706 1706 unregister_chrdev_region(MKDEV(SCSI_GENERIC_MAJOR, 0), SG_MAX_DEVS); ··· 1715 1715 remove_proc_subtree("scsi/sg", NULL); 1716 1716 #endif /* CONFIG_SCSI_PROC_FS */ 1717 1717 scsi_unregister_interface(&sg_interface); 1718 - class_destroy(sg_sysfs_class); 1718 + class_unregister(&sg_sysfs_class); 1719 1719 sg_sysfs_valid = 0; 1720 1720 unregister_chrdev_region(MKDEV(SCSI_GENERIC_MAJOR, 0), 1721 1721 SG_MAX_DEVS); ··· 2207 2207 
{ 2208 2208 struct sg_fd *sfp = container_of(work, struct sg_fd, ew.work); 2209 2209 struct sg_device *sdp = sfp->parentdp; 2210 + struct scsi_device *device = sdp->device; 2210 2211 Sg_request *srp; 2211 2212 unsigned long iflags; 2212 2213 ··· 2233 2232 "sg_remove_sfp: sfp=0x%p\n", sfp)); 2234 2233 kfree(sfp); 2235 2234 2236 - scsi_device_put(sdp->device); 2235 + WARN_ON_ONCE(kref_read(&sdp->d_ref) != 1); 2237 2236 kref_put(&sdp->d_ref, sg_device_destroy); 2237 + scsi_device_put(device); 2238 2238 module_put(THIS_MODULE); 2239 2239 } 2240 2240
+2 -2
drivers/scsi/st.c
··· 87 87 static int try_wdio = 1; 88 88 static int debug_flag; 89 89 90 - static struct class st_sysfs_class; 90 + static const struct class st_sysfs_class; 91 91 static const struct attribute_group *st_dev_groups[]; 92 92 static const struct attribute_group *st_drv_groups[]; 93 93 ··· 4438 4438 return; 4439 4439 } 4440 4440 4441 - static struct class st_sysfs_class = { 4441 + static const struct class st_sysfs_class = { 4442 4442 .name = "scsi_tape", 4443 4443 .dev_groups = st_dev_groups, 4444 4444 };
+3 -2
drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
··· 937 937 /* build component create message */ 938 938 m.h.type = MMAL_MSG_TYPE_COMPONENT_CREATE; 939 939 m.u.component_create.client_component = component->client_component; 940 - strncpy(m.u.component_create.name, name, 941 - sizeof(m.u.component_create.name)); 940 + strscpy_pad(m.u.component_create.name, name, 941 + sizeof(m.u.component_create.name)); 942 + m.u.component_create.pid = 0; 942 943 943 944 ret = send_synchronous_mmal_msg(instance, &m, 944 945 sizeof(m.u.component_create),
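The strncpy() replaced above does not NUL-terminate when the source fills the destination, and it leaves trailing bytes untouched on shorter sources; strscpy_pad() always terminates and zero-fills the remainder, which matters when the buffer is sent over a message interface as here. A userspace model of the semantics (the real helper is a kernel string-library function; return values mirror it, with -1 standing in for -E2BIG on truncation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of the kernel's strscpy_pad(): copy at most size-1
 * bytes, always NUL-terminate, zero-fill the rest of the buffer, and
 * return the number of bytes copied or -1 on truncation. */
static long strscpy_pad_model(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;

	len = strnlen(src, size);
	if (len == size) {
		/* Source does not fit: truncate and terminate. */
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;
	}
	memcpy(dst, src, len);
	memset(dst + len, 0, size - len);   /* pad remainder with NULs */
	return (long)len;
}
```

The zero-padding also avoids leaking stale stack or heap bytes past the string when the whole buffer is copied to another address space.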
+1 -2
drivers/target/iscsi/iscsi_target_erl1.c
··· 583 583 struct iscsi_pdu *pdu) 584 584 { 585 585 int i, send_recovery_r2t = 0, recovery = 0; 586 - u32 length = 0, offset = 0, pdu_count = 0, xfer_len = 0; 586 + u32 length = 0, offset = 0, pdu_count = 0; 587 587 struct iscsit_conn *conn = cmd->conn; 588 588 struct iscsi_pdu *first_pdu = NULL; 589 589 ··· 596 596 if (cmd->pdu_list[i].seq_no == pdu->seq_no) { 597 597 if (!first_pdu) 598 598 first_pdu = &cmd->pdu_list[i]; 599 - xfer_len += cmd->pdu_list[i].length; 600 599 pdu_count++; 601 600 } else if (pdu_count) 602 601 break;
+1 -1
drivers/thermal/devfreq_cooling.c
··· 214 214 215 215 res = dfc->power_ops->get_real_power(df, power, freq, voltage); 216 216 if (!res) { 217 - state = dfc->capped_state; 217 + state = dfc->max_state - dfc->capped_state; 218 218 219 219 /* Convert EM power into milli-Watts first */ 220 220 rcu_read_lock();
+2 -17
drivers/thermal/thermal_trip.c
··· 65 65 { 66 66 const struct thermal_trip *trip; 67 67 int low = -INT_MAX, high = INT_MAX; 68 - bool same_trip = false; 69 68 int ret; 70 69 71 70 lockdep_assert_held(&tz->lock); ··· 73 74 return; 74 75 75 76 for_each_trip(tz, trip) { 76 - bool low_set = false; 77 77 int trip_low; 78 78 79 79 trip_low = trip->temperature - trip->hysteresis; 80 80 81 - if (trip_low < tz->temperature && trip_low > low) { 81 + if (trip_low < tz->temperature && trip_low > low) 82 82 low = trip_low; 83 - low_set = true; 84 - same_trip = false; 85 - } 86 83 87 84 if (trip->temperature > tz->temperature && 88 - trip->temperature < high) { 85 + trip->temperature < high) 89 86 high = trip->temperature; 90 - same_trip = low_set; 91 - } 92 87 } 93 88 94 89 /* No need to change trip points */ 95 90 if (tz->prev_low_trip == low && tz->prev_high_trip == high) 96 - return; 97 - 98 - /* 99 - * If "high" and "low" are the same, skip the change unless this is the 100 - * first time. 101 - */ 102 - if (same_trip && (tz->prev_low_trip != -INT_MAX || 103 - tz->prev_high_trip != INT_MAX)) 104 91 return; 105 92 106 93 tz->prev_low_trip = low;
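After the simplification above, the scan reduces to two running extrema: "low" is the highest (temperature - hysteresis) strictly below the current zone temperature, and "high" is the lowest trip temperature strictly above it. The loop can be modeled in isolation (temperatures in millidegrees, as the thermal core uses):

```c
#include <assert.h>
#include <limits.h>

struct trip { int temperature; int hysteresis; };

/* Compute the next low/high trip window around tz_temp, mirroring the
 * simplified for_each_trip() loop left after this change. */
static void trip_window(const struct trip *trips, int n, int tz_temp,
			int *low, int *high)
{
	*low = -INT_MAX;
	*high = INT_MAX;

	for (int i = 0; i < n; i++) {
		int trip_low = trips[i].temperature - trips[i].hysteresis;

		if (trip_low < tz_temp && trip_low > *low)
			*low = trip_low;

		if (trips[i].temperature > tz_temp &&
		    trips[i].temperature < *high)
			*high = trips[i].temperature;
	}
}
```

With trips at 50 °C and 70 °C (2 °C hysteresis each) and the zone at 60 °C, the window is [48000, 70000]: the zone re-arms its interrupt only when the temperature leaves that band.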
+1 -1
drivers/ufs/core/ufs-mcq.c
··· 94 94 95 95 val = ufshcd_readl(hba, REG_UFS_MCQ_CFG); 96 96 val &= ~MCQ_CFG_MAC_MASK; 97 - val |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds); 97 + val |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds - 1); 98 98 ufshcd_writel(hba, val, REG_UFS_MCQ_CFG); 99 99 } 100 100 EXPORT_SYMBOL_GPL(ufshcd_mcq_config_mac);
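The fix writes `max_active_cmds - 1` because the MAC register field is evidently 0-based: a stored value of N encodes N + 1 outstanding commands. A userspace sketch of the FIELD_PREP() arithmetic; the mask position below is chosen for illustration only (the real MCQ_CFG_MAC_MASK is defined in the UFS headers):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the kernel's FIELD_PREP(): shift the value to the
 * mask's lowest set bit, then clamp it to the mask. */
#define FIELD_PREP(mask, val) \
	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

/* Hypothetical field position, for illustration only. */
#define MCQ_CFG_MAC_MASK (0x1ffU << 8)

static uint32_t config_mac(uint32_t reg, uint32_t max_active_cmds)
{
	reg &= ~MCQ_CFG_MAC_MASK;
	/* The hardware encodes "N outstanding commands" as N - 1. */
	reg |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds - 1);
	return reg;
}
```

Without the `- 1`, requesting 32 active commands would program the field for 33, one more than intended.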
+4 -2
drivers/ufs/host/ufs-qcom.c
··· 1210 1210 1211 1211 list_for_each_entry(clki, head, list) { 1212 1212 if (!IS_ERR_OR_NULL(clki->clk) && 1213 - !strcmp(clki->name, "core_clk_unipro")) { 1214 - if (is_scale_up) 1213 + !strcmp(clki->name, "core_clk_unipro")) { 1214 + if (!clki->max_freq) 1215 + cycles_in_1us = 150; /* default for backwards compatibility */ 1216 + else if (is_scale_up) 1215 1217 cycles_in_1us = ceil(clki->max_freq, (1000 * 1000)); 1216 1218 else 1217 1219 cycles_in_1us = ceil(clk_get_rate(clki->clk), (1000 * 1000));
+1 -1
drivers/uio/uio.c
··· 792 792 */ 793 793 vma->vm_pgoff = 0; 794 794 795 - addr = (void *)mem->addr; 795 + addr = (void *)(uintptr_t)mem->addr; 796 796 ret = dma_mmap_coherent(mem->dma_device, 797 797 vma, 798 798 addr,
+2 -2
drivers/uio/uio_dmem_genirq.c
··· 60 60 61 61 addr = dma_alloc_coherent(&priv->pdev->dev, uiomem->size, 62 62 &uiomem->dma_addr, GFP_KERNEL); 63 - uiomem->addr = addr ? (phys_addr_t) addr : DMEM_MAP_ERROR; 63 + uiomem->addr = addr ? (uintptr_t) addr : DMEM_MAP_ERROR; 64 64 ++uiomem; 65 65 } 66 66 priv->refcnt++; ··· 89 89 break; 90 90 if (uiomem->addr) { 91 91 dma_free_coherent(uiomem->dma_device, uiomem->size, 92 - (void *) uiomem->addr, 92 + (void *) (uintptr_t) uiomem->addr, 93 93 uiomem->dma_addr); 94 94 } 95 95 uiomem->addr = DMEM_MAP_ERROR;
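These casts (and the matching ones in uio.c and uio_pruss.c nearby) switch from phys_addr_t to uintptr_t when storing a CPU virtual address in an integer field: phys_addr_t's width is independent of pointer width (it can be 64-bit on a system with 32-bit pointers, e.g. under LPAE), while uintptr_t is defined by the C standard to round-trip object pointers exactly:

```c
#include <assert.h>
#include <stdint.h>

/* Store a virtual address in an integer and recover it losslessly;
 * uintptr_t guarantees this round trip, a physical-address type does not. */
static uintptr_t store_vaddr(void *addr)
{
	return (uintptr_t)addr;
}

static void *load_vaddr(uintptr_t val)
{
	return (void *)val;
}
```

This is also why the free path casts through `(uintptr_t)` before `(void *)`: converting a wider integer type straight to a pointer draws a compiler warning on 32-bit builds.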
+1 -1
drivers/uio/uio_pruss.c
··· 191 191 p->mem[1].size = sram_pool_sz; 192 192 p->mem[1].memtype = UIO_MEM_PHYS; 193 193 194 - p->mem[2].addr = (phys_addr_t) gdev->ddr_vaddr; 194 + p->mem[2].addr = (uintptr_t) gdev->ddr_vaddr; 195 195 p->mem[2].dma_addr = gdev->ddr_paddr; 196 196 p->mem[2].size = extram_pool_sz; 197 197 p->mem[2].memtype = UIO_MEM_DMA_COHERENT;
+5 -1
drivers/usb/class/cdc-wdm.c
··· 485 485 static int service_outstanding_interrupt(struct wdm_device *desc) 486 486 { 487 487 int rv = 0; 488 + int used; 488 489 489 490 /* submit read urb only if the device is waiting for it */ 490 491 if (!desc->resp_count || !--desc->resp_count) ··· 500 499 goto out; 501 500 } 502 501 503 - set_bit(WDM_RESPONDING, &desc->flags); 502 + used = test_and_set_bit(WDM_RESPONDING, &desc->flags); 503 + if (used) 504 + goto out; 505 + 504 506 spin_unlock_irq(&desc->iuspin); 505 507 rv = usb_submit_urb(desc->response, GFP_KERNEL); 506 508 spin_lock_irq(&desc->iuspin);
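The fix above replaces a plain set_bit() with test_and_set_bit() so that, if WDM_RESPONDING was already set, the function skips submitting the response URB a second time. The primitive's contract — atomically set the bit and report its previous value, so exactly one caller proceeds — modeled in userspace with a GCC/Clang atomic builtin:

```c
#include <assert.h>

/* Userspace model of the kernel's test_and_set_bit(): atomically set
 * bit nr in *addr and return its previous value. */
static int test_and_set_bit_model(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	unsigned long old = __atomic_fetch_or(addr, mask, __ATOMIC_SEQ_CST);

	return (old & mask) != 0;
}
```

The caller then acts only when the returned value is 0 ("I set it first"); a non-zero return means another path already owns the in-flight URB, mirroring the `if (used) goto out;` in the hunk.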
+16 -7
drivers/usb/core/hub.c
··· 130 130 #define HUB_DEBOUNCE_STEP 25 131 131 #define HUB_DEBOUNCE_STABLE 100 132 132 133 - static void hub_release(struct kref *kref); 134 133 static int usb_reset_and_verify_device(struct usb_device *udev); 135 134 static int hub_port_disable(struct usb_hub *hub, int port1, int set_state); 136 135 static bool hub_port_warm_reset_required(struct usb_hub *hub, int port1, ··· 719 720 */ 720 721 intf = to_usb_interface(hub->intfdev); 721 722 usb_autopm_get_interface_no_resume(intf); 722 - kref_get(&hub->kref); 723 + hub_get(hub); 723 724 724 725 if (queue_work(hub_wq, &hub->events)) 725 726 return; 726 727 727 728 /* the work has already been scheduled */ 728 729 usb_autopm_put_interface_async(intf); 729 - kref_put(&hub->kref, hub_release); 730 + hub_put(hub); 730 731 } 731 732 732 733 void usb_kick_hub_wq(struct usb_device *hdev) ··· 1094 1095 goto init2; 1095 1096 goto init3; 1096 1097 } 1097 - kref_get(&hub->kref); 1098 + hub_get(hub); 1098 1099 1099 1100 /* The superspeed hub except for root hub has to use Hub Depth 1100 1101 * value as an offset into the route string to locate the bits ··· 1342 1343 device_unlock(&hdev->dev); 1343 1344 } 1344 1345 1345 - kref_put(&hub->kref, hub_release); 1346 + hub_put(hub); 1346 1347 } 1347 1348 1348 1349 /* Implement the continuations for the delays above */ ··· 1758 1759 kfree(hub); 1759 1760 } 1760 1761 1762 + void hub_get(struct usb_hub *hub) 1763 + { 1764 + kref_get(&hub->kref); 1765 + } 1766 + 1767 + void hub_put(struct usb_hub *hub) 1768 + { 1769 + kref_put(&hub->kref, hub_release); 1770 + } 1771 + 1761 1772 static unsigned highspeed_hubs; 1762 1773 1763 1774 static void hub_disconnect(struct usb_interface *intf) ··· 1816 1807 1817 1808 onboard_dev_destroy_pdevs(&hub->onboard_devs); 1818 1809 1819 - kref_put(&hub->kref, hub_release); 1810 + hub_put(hub); 1820 1811 } 1821 1812 1822 1813 static bool hub_descriptor_is_sane(struct usb_host_interface *desc) ··· 5943 5934 5944 5935 /* Balance the stuff in kick_hub_wq() and 
allow autosuspend */ 5945 5936 usb_autopm_put_interface(intf); 5946 - kref_put(&hub->kref, hub_release); 5937 + hub_put(hub); 5947 5938 5948 5939 kcov_remote_stop(); 5949 5940 }
+2
drivers/usb/core/hub.h
··· 129 129 extern int usb_hub_set_port_power(struct usb_device *hdev, struct usb_hub *hub, 130 130 int port1, bool set); 131 131 extern struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev); 132 + extern void hub_get(struct usb_hub *hub); 133 + extern void hub_put(struct usb_hub *hub); 132 134 extern int hub_port_debounce(struct usb_hub *hub, int port1, 133 135 bool must_be_connected); 134 136 extern int usb_clear_port_feature(struct usb_device *hdev,
+34 -4
drivers/usb/core/port.c
··· 56 56 u16 portstatus, unused; 57 57 bool disabled; 58 58 int rc; 59 + struct kernfs_node *kn; 59 60 61 + hub_get(hub); 60 62 rc = usb_autopm_get_interface(intf); 61 63 if (rc < 0) 62 - return rc; 64 + goto out_hub_get; 63 65 66 + /* 67 + * Prevent deadlock if another process is concurrently 68 + * trying to unregister hdev. 69 + */ 70 + kn = sysfs_break_active_protection(&dev->kobj, &attr->attr); 71 + if (!kn) { 72 + rc = -ENODEV; 73 + goto out_autopm; 74 + } 64 75 usb_lock_device(hdev); 65 76 if (hub->disconnected) { 66 77 rc = -ENODEV; ··· 81 70 usb_hub_port_status(hub, port1, &portstatus, &unused); 82 71 disabled = !usb_port_is_power_on(hub, portstatus); 83 72 84 - out_hdev_lock: 73 + out_hdev_lock: 85 74 usb_unlock_device(hdev); 75 + sysfs_unbreak_active_protection(kn); 76 + out_autopm: 86 77 usb_autopm_put_interface(intf); 78 + out_hub_get: 79 + hub_put(hub); 87 80 88 81 if (rc) 89 82 return rc; ··· 105 90 int port1 = port_dev->portnum; 106 91 bool disabled; 107 92 int rc; 93 + struct kernfs_node *kn; 108 94 109 95 rc = kstrtobool(buf, &disabled); 110 96 if (rc) 111 97 return rc; 112 98 99 + hub_get(hub); 113 100 rc = usb_autopm_get_interface(intf); 114 101 if (rc < 0) 115 - return rc; 102 + goto out_hub_get; 116 103 104 + /* 105 + * Prevent deadlock if another process is concurrently 106 + * trying to unregister hdev. 107 + */ 108 + kn = sysfs_break_active_protection(&dev->kobj, &attr->attr); 109 + if (!kn) { 110 + rc = -ENODEV; 111 + goto out_autopm; 112 + } 117 113 usb_lock_device(hdev); 118 114 if (hub->disconnected) { 119 115 rc = -ENODEV; ··· 145 119 if (!rc) 146 120 rc = count; 147 121 148 - out_hdev_lock: 122 + out_hdev_lock: 149 123 usb_unlock_device(hdev); 124 + sysfs_unbreak_active_protection(kn); 125 + out_autopm: 150 126 usb_autopm_put_interface(intf); 127 + out_hub_get: 128 + hub_put(hub); 151 129 152 130 return rc; 153 131 }
+13 -3
drivers/usb/core/sysfs.c
··· 1217 1217 { 1218 1218 struct usb_interface *intf = to_usb_interface(dev); 1219 1219 bool val; 1220 + struct kernfs_node *kn; 1220 1221 1221 1222 if (kstrtobool(buf, &val) != 0) 1222 1223 return -EINVAL; 1223 1224 1224 - if (val) 1225 + if (val) { 1225 1226 usb_authorize_interface(intf); 1226 - else 1227 - usb_deauthorize_interface(intf); 1227 + } else { 1228 + /* 1229 + * Prevent deadlock if another process is concurrently 1230 + * trying to unregister intf. 1231 + */ 1232 + kn = sysfs_break_active_protection(&dev->kobj, &attr->attr); 1233 + if (kn) { 1234 + usb_deauthorize_interface(intf); 1235 + sysfs_unbreak_active_protection(kn); 1236 + } 1237 + } 1228 1238 1229 1239 return count; 1230 1240 }
+14
drivers/usb/dwc2/core.h
··· 735 735 * struct dwc2_hregs_backup - Holds host registers state before 736 736 * entering partial power down 737 737 * @hcfg: Backup of HCFG register 738 + * @hflbaddr: Backup of HFLBADDR register 738 739 * @haintmsk: Backup of HAINTMSK register 740 + * @hcchar: Backup of HCCHAR register 741 + * @hcsplt: Backup of HCSPLT register 739 742 * @hcintmsk: Backup of HCINTMSK register 743 + * @hctsiz: Backup of HCTSIZ register 744 + * @hdma: Backup of HCDMA register 745 + * @hcdmab: Backup of HCDMAB register 740 746 * @hprt0: Backup of HPTR0 register 741 747 * @hfir: Backup of HFIR register 742 748 * @hptxfsiz: Backup of HPTXFSIZ register ··· 750 744 */ 751 745 struct dwc2_hregs_backup { 752 746 u32 hcfg; 747 + u32 hflbaddr; 753 748 u32 haintmsk; 749 + u32 hcchar[MAX_EPS_CHANNELS]; 750 + u32 hcsplt[MAX_EPS_CHANNELS]; 754 751 u32 hcintmsk[MAX_EPS_CHANNELS]; 752 + u32 hctsiz[MAX_EPS_CHANNELS]; 753 + u32 hcidma[MAX_EPS_CHANNELS]; 754 + u32 hcidmab[MAX_EPS_CHANNELS]; 755 755 u32 hprt0; 756 756 u32 hfir; 757 757 u32 hptxfsiz; ··· 1104 1092 bool needs_byte_swap; 1105 1093 1106 1094 /* DWC OTG HW Release versions */ 1095 + #define DWC2_CORE_REV_4_30a 0x4f54430a 1107 1096 #define DWC2_CORE_REV_2_71a 0x4f54271a 1108 1097 #define DWC2_CORE_REV_2_72a 0x4f54272a 1109 1098 #define DWC2_CORE_REV_2_80a 0x4f54280a ··· 1344 1331 int dwc2_restore_global_registers(struct dwc2_hsotg *hsotg); 1345 1332 1346 1333 void dwc2_enable_acg(struct dwc2_hsotg *hsotg); 1334 + void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg, bool remotewakeup); 1347 1335 1348 1336 /* This function should be called on every hardware interrupt. */ 1349 1337 irqreturn_t dwc2_handle_common_intr(int irq, void *dev);
+48 -24
drivers/usb/dwc2/core_intr.c
··· 312 312 313 313 /* Exit gadget mode clock gating. */ 314 314 if (hsotg->params.power_down == 315 - DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended) 315 + DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended && 316 + !hsotg->params.no_clock_gating) 316 317 dwc2_gadget_exit_clock_gating(hsotg, 0); 317 318 } 318 319 ··· 338 337 * @hsotg: Programming view of DWC_otg controller 339 338 * 340 339 */ 341 - static void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg) 340 + void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg, bool remotewakeup) 342 341 { 343 342 u32 glpmcfg; 344 - u32 i = 0; 343 + u32 pcgctl; 344 + u32 dctl; 345 345 346 346 if (hsotg->lx_state != DWC2_L1) { 347 347 dev_err(hsotg->dev, "Core isn't in DWC2_L1 state\n"); ··· 351 349 352 350 glpmcfg = dwc2_readl(hsotg, GLPMCFG); 353 351 if (dwc2_is_device_mode(hsotg)) { 354 - dev_dbg(hsotg->dev, "Exit from L1 state\n"); 352 + dev_dbg(hsotg->dev, "Exit from L1 state, remotewakeup=%d\n", remotewakeup); 355 353 glpmcfg &= ~GLPMCFG_ENBLSLPM; 356 - glpmcfg &= ~GLPMCFG_HIRD_THRES_EN; 354 + glpmcfg &= ~GLPMCFG_HIRD_THRES_MASK; 357 355 dwc2_writel(hsotg, glpmcfg, GLPMCFG); 358 356 359 - do { 360 - glpmcfg = dwc2_readl(hsotg, GLPMCFG); 357 + pcgctl = dwc2_readl(hsotg, PCGCTL); 358 + pcgctl &= ~PCGCTL_ENBL_SLEEP_GATING; 359 + dwc2_writel(hsotg, pcgctl, PCGCTL); 361 360 362 - if (!(glpmcfg & (GLPMCFG_COREL1RES_MASK | 363 - GLPMCFG_L1RESUMEOK | GLPMCFG_SLPSTS))) 364 - break; 361 + glpmcfg = dwc2_readl(hsotg, GLPMCFG); 362 + if (glpmcfg & GLPMCFG_ENBESL) { 363 + glpmcfg |= GLPMCFG_RSTRSLPSTS; 364 + dwc2_writel(hsotg, glpmcfg, GLPMCFG); 365 + } 365 366 366 - udelay(1); 367 - } while (++i < 200); 367 + if (remotewakeup) { 368 + if (dwc2_hsotg_wait_bit_set(hsotg, GLPMCFG, GLPMCFG_L1RESUMEOK, 1000)) { 369 + dev_warn(hsotg->dev, "%s: timeout GLPMCFG_L1RESUMEOK\n", __func__); 370 + goto fail; 371 + return; 372 + } 368 373 369 - if (i == 200) { 370 - dev_err(hsotg->dev, "Failed to exit L1 sleep state in 200us.\n"); 374 + 
dctl = dwc2_readl(hsotg, DCTL); 375 + dctl |= DCTL_RMTWKUPSIG; 376 + dwc2_writel(hsotg, dctl, DCTL); 377 + 378 + if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS, GINTSTS_WKUPINT, 1000)) { 379 + dev_warn(hsotg->dev, "%s: timeout GINTSTS_WKUPINT\n", __func__); 380 + goto fail; 381 + return; 382 + } 383 + } 384 + 385 + glpmcfg = dwc2_readl(hsotg, GLPMCFG); 386 + if (glpmcfg & GLPMCFG_COREL1RES_MASK || glpmcfg & GLPMCFG_SLPSTS || 387 + glpmcfg & GLPMCFG_L1RESUMEOK) { 388 + goto fail; 371 389 return; 372 390 } 373 - dwc2_gadget_init_lpm(hsotg); 391 + 392 + /* Inform gadget to exit from L1 */ 393 + call_gadget(hsotg, resume); 394 + /* Change to L0 state */ 395 + hsotg->lx_state = DWC2_L0; 396 + hsotg->bus_suspended = false; 397 + fail: dwc2_gadget_init_lpm(hsotg); 374 398 } else { 375 399 /* TODO */ 376 400 dev_err(hsotg->dev, "Host side LPM is not supported.\n"); 377 401 return; 378 402 } 379 - 380 - /* Change to L0 state */ 381 - hsotg->lx_state = DWC2_L0; 382 - 383 - /* Inform gadget to exit from L1 */ 384 - call_gadget(hsotg, resume); 385 403 } 386 404 387 405 /* ··· 422 400 dev_dbg(hsotg->dev, "%s lxstate = %d\n", __func__, hsotg->lx_state); 423 401 424 402 if (hsotg->lx_state == DWC2_L1) { 425 - dwc2_wakeup_from_lpm_l1(hsotg); 403 + dwc2_wakeup_from_lpm_l1(hsotg, false); 426 404 return; 427 405 } 428 406 ··· 445 423 446 424 /* Exit gadget mode clock gating. */ 447 425 if (hsotg->params.power_down == 448 - DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended) 426 + DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended && 427 + !hsotg->params.no_clock_gating) 449 428 dwc2_gadget_exit_clock_gating(hsotg, 0); 450 429 } else { 451 430 /* Change to L0 state */ ··· 463 440 } 464 441 465 442 if (hsotg->params.power_down == 466 - DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended) 443 + DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended && 444 + !hsotg->params.no_clock_gating) 467 445 dwc2_host_exit_clock_gating(hsotg, 1); 468 446 469 447 /*
+10
drivers/usb/dwc2/gadget.c
··· 1415 1415 ep->name, req, req->length, req->buf, req->no_interrupt, 1416 1416 req->zero, req->short_not_ok); 1417 1417 1418 + if (hs->lx_state == DWC2_L1) { 1419 + dwc2_wakeup_from_lpm_l1(hs, true); 1420 + } 1421 + 1418 1422 /* Prevent new request submission when controller is suspended */ 1419 1423 if (hs->lx_state != DWC2_L0) { 1420 1424 dev_dbg(hs->dev, "%s: submit request only in active state\n", ··· 3733 3729 /* This event must be used only if controller is suspended */ 3734 3730 if (hsotg->in_ppd && hsotg->lx_state == DWC2_L2) 3735 3731 dwc2_exit_partial_power_down(hsotg, 0, true); 3732 + 3733 + /* Exit gadget mode clock gating. */ 3734 + if (hsotg->params.power_down == 3735 + DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended && 3736 + !hsotg->params.no_clock_gating) 3737 + dwc2_gadget_exit_clock_gating(hsotg, 0); 3736 3738 3737 3739 hsotg->lx_state = DWC2_L0; 3738 3740 }
+40 -9
drivers/usb/dwc2/hcd.c
··· 2701 2701 hsotg->available_host_channels--; 2702 2702 } 2703 2703 qh = list_entry(qh_ptr, struct dwc2_qh, qh_list_entry); 2704 - if (dwc2_assign_and_init_hc(hsotg, qh)) 2704 + if (dwc2_assign_and_init_hc(hsotg, qh)) { 2705 + if (hsotg->params.uframe_sched) 2706 + hsotg->available_host_channels++; 2705 2707 break; 2708 + } 2706 2709 2707 2710 /* 2708 2711 * Move the QH from the periodic ready schedule to the ··· 2738 2735 hsotg->available_host_channels--; 2739 2736 } 2740 2737 2741 - if (dwc2_assign_and_init_hc(hsotg, qh)) 2738 + if (dwc2_assign_and_init_hc(hsotg, qh)) { 2739 + if (hsotg->params.uframe_sched) 2740 + hsotg->available_host_channels++; 2742 2741 break; 2742 + } 2743 2743 2744 2744 /* 2745 2745 * Move the QH from the non-periodic inactive schedule to the ··· 4149 4143 urb->actual_length); 4150 4144 4151 4145 if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) { 4146 + if (!hsotg->params.dma_desc_enable) 4147 + urb->start_frame = qtd->qh->start_active_frame; 4152 4148 urb->error_count = dwc2_hcd_urb_get_error_count(qtd->urb); 4153 4149 for (i = 0; i < urb->number_of_packets; ++i) { 4154 4150 urb->iso_frame_desc[i].actual_length = ··· 4657 4649 } 4658 4650 4659 4651 if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE && 4660 - hsotg->bus_suspended) { 4652 + hsotg->bus_suspended && !hsotg->params.no_clock_gating) { 4661 4653 if (dwc2_is_device_mode(hsotg)) 4662 4654 dwc2_gadget_exit_clock_gating(hsotg, 0); 4663 4655 else ··· 5414 5406 /* Backup Host regs */ 5415 5407 hr = &hsotg->hr_backup; 5416 5408 hr->hcfg = dwc2_readl(hsotg, HCFG); 5409 + hr->hflbaddr = dwc2_readl(hsotg, HFLBADDR); 5417 5410 hr->haintmsk = dwc2_readl(hsotg, HAINTMSK); 5418 - for (i = 0; i < hsotg->params.host_channels; ++i) 5411 + for (i = 0; i < hsotg->params.host_channels; ++i) { 5412 + hr->hcchar[i] = dwc2_readl(hsotg, HCCHAR(i)); 5413 + hr->hcsplt[i] = dwc2_readl(hsotg, HCSPLT(i)); 5419 5414 hr->hcintmsk[i] = dwc2_readl(hsotg, HCINTMSK(i)); 5415 + hr->hctsiz[i] = 
dwc2_readl(hsotg, HCTSIZ(i)); 5416 + hr->hcidma[i] = dwc2_readl(hsotg, HCDMA(i)); 5417 + hr->hcidmab[i] = dwc2_readl(hsotg, HCDMAB(i)); 5418 + } 5420 5419 5421 5420 hr->hprt0 = dwc2_read_hprt0(hsotg); 5422 5421 hr->hfir = dwc2_readl(hsotg, HFIR); ··· 5457 5442 hr->valid = false; 5458 5443 5459 5444 dwc2_writel(hsotg, hr->hcfg, HCFG); 5445 + dwc2_writel(hsotg, hr->hflbaddr, HFLBADDR); 5460 5446 dwc2_writel(hsotg, hr->haintmsk, HAINTMSK); 5461 5447 5462 - for (i = 0; i < hsotg->params.host_channels; ++i) 5448 + for (i = 0; i < hsotg->params.host_channels; ++i) { 5449 + dwc2_writel(hsotg, hr->hcchar[i], HCCHAR(i)); 5450 + dwc2_writel(hsotg, hr->hcsplt[i], HCSPLT(i)); 5463 5451 dwc2_writel(hsotg, hr->hcintmsk[i], HCINTMSK(i)); 5452 + dwc2_writel(hsotg, hr->hctsiz[i], HCTSIZ(i)); 5453 + dwc2_writel(hsotg, hr->hcidma[i], HCDMA(i)); 5454 + dwc2_writel(hsotg, hr->hcidmab[i], HCDMAB(i)); 5455 + } 5464 5456 5465 5457 dwc2_writel(hsotg, hr->hprt0, HPRT0); 5466 5458 dwc2_writel(hsotg, hr->hfir, HFIR); ··· 5642 5620 dwc2_writel(hsotg, gpwrdn, GPWRDN); 5643 5621 5644 5622 /* De-assert Wakeup Logic */ 5645 - gpwrdn = dwc2_readl(hsotg, GPWRDN); 5646 - gpwrdn &= ~GPWRDN_PMUACTV; 5647 - dwc2_writel(hsotg, gpwrdn, GPWRDN); 5648 - udelay(10); 5623 + if (!(rem_wakeup && hsotg->hw_params.snpsid >= DWC2_CORE_REV_4_30a)) { 5624 + gpwrdn = dwc2_readl(hsotg, GPWRDN); 5625 + gpwrdn &= ~GPWRDN_PMUACTV; 5626 + dwc2_writel(hsotg, gpwrdn, GPWRDN); 5627 + udelay(10); 5628 + } 5649 5629 5650 5630 hprt0 = hr->hprt0; 5651 5631 hprt0 |= HPRT0_PWR; ··· 5672 5648 hprt0 |= HPRT0_RES; 5673 5649 dwc2_writel(hsotg, hprt0, HPRT0); 5674 5650 5651 + /* De-assert Wakeup Logic */ 5652 + if ((rem_wakeup && hsotg->hw_params.snpsid >= DWC2_CORE_REV_4_30a)) { 5653 + gpwrdn = dwc2_readl(hsotg, GPWRDN); 5654 + gpwrdn &= ~GPWRDN_PMUACTV; 5655 + dwc2_writel(hsotg, gpwrdn, GPWRDN); 5656 + udelay(10); 5657 + } 5675 5658 /* Wait for Resume time and then program HPRT again */ 5676 5659 mdelay(100); 5677 5660 hprt0 &= 
~HPRT0_RES;
+11 -6
drivers/usb/dwc2/hcd_ddma.c
··· 559 559 idx = qh->td_last; 560 560 inc = qh->host_interval; 561 561 hsotg->frame_number = dwc2_hcd_get_frame_number(hsotg); 562 - cur_idx = dwc2_frame_list_idx(hsotg->frame_number); 562 + cur_idx = idx; 563 563 next_idx = dwc2_desclist_idx_inc(qh->td_last, inc, qh->dev_speed); 564 564 565 565 /* ··· 866 866 { 867 867 struct dwc2_dma_desc *dma_desc; 868 868 struct dwc2_hcd_iso_packet_desc *frame_desc; 869 + u16 frame_desc_idx; 870 + struct urb *usb_urb = qtd->urb->priv; 869 871 u16 remain = 0; 870 872 int rc = 0; 871 873 ··· 880 878 DMA_FROM_DEVICE); 881 879 882 880 dma_desc = &qh->desc_list[idx]; 881 + frame_desc_idx = (idx - qtd->isoc_td_first) & (usb_urb->number_of_packets - 1); 883 882 884 - frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index_last]; 883 + frame_desc = &qtd->urb->iso_descs[frame_desc_idx]; 884 + if (idx == qtd->isoc_td_first) 885 + usb_urb->start_frame = dwc2_hcd_get_frame_number(hsotg); 885 886 dma_desc->buf = (u32)(qtd->urb->dma + frame_desc->offset); 886 887 if (chan->ep_is_in) 887 888 remain = (dma_desc->status & HOST_DMA_ISOC_NBYTES_MASK) >> ··· 905 900 frame_desc->status = 0; 906 901 } 907 902 908 - if (++qtd->isoc_frame_index == qtd->urb->packet_count) { 903 + if (++qtd->isoc_frame_index == usb_urb->number_of_packets) { 909 904 /* 910 905 * urb->status is not used for isoc transfers here. The 911 906 * individual frame_desc status are used instead. ··· 1010 1005 return; 1011 1006 idx = dwc2_desclist_idx_inc(idx, qh->host_interval, 1012 1007 chan->speed); 1013 - if (!rc) 1008 + if (rc == 0) 1014 1009 continue; 1015 1010 1016 - if (rc == DWC2_CMPL_DONE) 1017 - break; 1011 + if (rc == DWC2_CMPL_DONE || rc == DWC2_CMPL_STOP) 1012 + goto stop_scan; 1018 1013 1019 1014 /* rc == DWC2_CMPL_STOP */ 1020 1015
+1 -1
drivers/usb/dwc2/hw.h
··· 712 712 #define TXSTS_QTOP_TOKEN_MASK (0x3 << 25) 713 713 #define TXSTS_QTOP_TOKEN_SHIFT 25 714 714 #define TXSTS_QTOP_TERMINATE BIT(24) 715 - #define TXSTS_QSPCAVAIL_MASK (0xff << 16) 715 + #define TXSTS_QSPCAVAIL_MASK (0x7f << 16) 716 716 #define TXSTS_QSPCAVAIL_SHIFT 16 717 717 #define TXSTS_FSPCAVAIL_MASK (0xffff << 0) 718 718 #define TXSTS_FSPCAVAIL_SHIFT 0
+1 -1
drivers/usb/dwc2/platform.c
··· 331 331 332 332 /* Exit clock gating when driver is removed. */ 333 333 if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE && 334 - hsotg->bus_suspended) { 334 + hsotg->bus_suspended && !hsotg->params.no_clock_gating) { 335 335 if (dwc2_is_device_mode(hsotg)) 336 336 dwc2_gadget_exit_clock_gating(hsotg, 0); 337 337 else
+2
drivers/usb/dwc3/core.c
··· 1519 1519 else 1520 1520 dwc->sysdev = dwc->dev; 1521 1521 1522 + dwc->sys_wakeup = device_may_wakeup(dwc->sysdev); 1523 + 1522 1524 ret = device_property_read_string(dev, "usb-psy-name", &usb_psy_name); 1523 1525 if (ret >= 0) { 1524 1526 dwc->usb_psy = power_supply_get_by_name(usb_psy_name);
+2
drivers/usb/dwc3/core.h
··· 1133 1133 * 3 - Reserved 1134 1134 * @dis_metastability_quirk: set to disable metastability quirk. 1135 1135 * @dis_split_quirk: set to disable split boundary. 1136 + * @sys_wakeup: set if the device may do system wakeup. 1136 1137 * @wakeup_configured: set if the device is configured for remote wakeup. 1137 1138 * @suspended: set to track suspend event due to U3/L2. 1138 1139 * @imod_interval: set the interrupt moderation interval in 250ns ··· 1358 1357 1359 1358 unsigned dis_split_quirk:1; 1360 1359 unsigned async_callbacks:1; 1360 + unsigned sys_wakeup:1; 1361 1361 unsigned wakeup_configured:1; 1362 1362 unsigned suspended:1; 1363 1363
-2
drivers/usb/dwc3/dwc3-pci.c
··· 51 51 #define PCI_DEVICE_ID_INTEL_MTLP 0x7ec1 52 52 #define PCI_DEVICE_ID_INTEL_MTLS 0x7f6f 53 53 #define PCI_DEVICE_ID_INTEL_MTL 0x7e7e 54 - #define PCI_DEVICE_ID_INTEL_ARLH 0x7ec1 55 54 #define PCI_DEVICE_ID_INTEL_ARLH_PCH 0x777e 56 55 #define PCI_DEVICE_ID_INTEL_TGL 0x9a15 57 56 #define PCI_DEVICE_ID_AMD_MR 0x163a ··· 422 423 { PCI_DEVICE_DATA(INTEL, MTLP, &dwc3_pci_intel_swnode) }, 423 424 { PCI_DEVICE_DATA(INTEL, MTL, &dwc3_pci_intel_swnode) }, 424 425 { PCI_DEVICE_DATA(INTEL, MTLS, &dwc3_pci_intel_swnode) }, 425 - { PCI_DEVICE_DATA(INTEL, ARLH, &dwc3_pci_intel_swnode) }, 426 426 { PCI_DEVICE_DATA(INTEL, ARLH_PCH, &dwc3_pci_intel_swnode) }, 427 427 { PCI_DEVICE_DATA(INTEL, TGL, &dwc3_pci_intel_swnode) }, 428 428
+10
drivers/usb/dwc3/gadget.c
··· 2955 2955 dwc->gadget_driver = driver; 2956 2956 spin_unlock_irqrestore(&dwc->lock, flags); 2957 2957 2958 + if (dwc->sys_wakeup) 2959 + device_wakeup_enable(dwc->sysdev); 2960 + 2958 2961 return 0; 2959 2962 } 2960 2963 ··· 2972 2969 { 2973 2970 struct dwc3 *dwc = gadget_to_dwc(g); 2974 2971 unsigned long flags; 2972 + 2973 + if (dwc->sys_wakeup) 2974 + device_wakeup_disable(dwc->sysdev); 2975 2975 2976 2976 spin_lock_irqsave(&dwc->lock, flags); 2977 2977 dwc->gadget_driver = NULL; ··· 4656 4650 dwc3_gadget_set_ssp_rate(dwc->gadget, dwc->max_ssp_rate); 4657 4651 else 4658 4652 dwc3_gadget_set_speed(dwc->gadget, dwc->maximum_speed); 4653 + 4654 + /* No system wakeup if no gadget driver bound */ 4655 + if (dwc->sys_wakeup) 4656 + device_wakeup_disable(dwc->sysdev); 4659 4657 4660 4658 return 0; 4661 4659
+11
drivers/usb/dwc3/host.c
··· 173 173 goto err; 174 174 } 175 175 176 + if (dwc->sys_wakeup) { 177 + /* Restore wakeup setting if switched from device */ 178 + device_wakeup_enable(dwc->sysdev); 179 + 180 + /* Pass on wakeup setting to the new xhci platform device */ 181 + device_init_wakeup(&xhci->dev, true); 182 + } 183 + 176 184 return 0; 177 185 err: 178 186 platform_device_put(xhci); ··· 189 181 190 182 void dwc3_host_exit(struct dwc3 *dwc) 191 183 { 184 + if (dwc->sys_wakeup) 185 + device_init_wakeup(&dwc->xhci->dev, false); 186 + 192 187 platform_device_unregister(dwc->xhci); 193 188 dwc->xhci = NULL; 194 189 }
+3 -1
drivers/usb/gadget/udc/core.c
··· 292 292 { 293 293 int ret = 0; 294 294 295 - if (WARN_ON_ONCE(!ep->enabled && ep->address)) { 295 + if (!ep->enabled && ep->address) { 296 + pr_debug("USB gadget: queue request to disabled ep 0x%x (%s)\n", 297 + ep->address, ep->name); 296 298 ret = -ESHUTDOWN; 297 299 goto out; 298 300 }
+9 -13
drivers/usb/misc/usb-ljca.c
··· 518 518 int ret; 519 519 520 520 client = kzalloc(sizeof *client, GFP_KERNEL); 521 - if (!client) 521 + if (!client) { 522 + kfree(data); 522 523 return -ENOMEM; 524 + } 523 525 524 526 client->type = type; 525 527 client->id = id; ··· 537 535 auxdev->dev.release = ljca_auxdev_release; 538 536 539 537 ret = auxiliary_device_init(auxdev); 540 - if (ret) 538 + if (ret) { 539 + kfree(data); 541 540 goto err_free; 541 + } 542 542 543 543 ljca_auxdev_acpi_bind(adap, auxdev, adr, id); 544 544 ··· 594 590 valid_pin[i] = get_unaligned_le32(&desc->bank_desc[i].valid_pins); 595 591 bitmap_from_arr32(gpio_info->valid_pin_map, valid_pin, gpio_num); 596 592 597 - ret = ljca_new_client_device(adap, LJCA_CLIENT_GPIO, 0, "ljca-gpio", 593 + return ljca_new_client_device(adap, LJCA_CLIENT_GPIO, 0, "ljca-gpio", 598 594 gpio_info, LJCA_GPIO_ACPI_ADR); 599 - if (ret) 600 - kfree(gpio_info); 601 - 602 - return ret; 603 595 } 604 596 605 597 static int ljca_enumerate_i2c(struct ljca_adapter *adap) ··· 629 629 ret = ljca_new_client_device(adap, LJCA_CLIENT_I2C, i, 630 630 "ljca-i2c", i2c_info, 631 631 LJCA_I2C1_ACPI_ADR + i); 632 - if (ret) { 633 - kfree(i2c_info); 632 + if (ret) 634 633 return ret; 635 - } 636 634 } 637 635 638 636 return 0; ··· 667 669 ret = ljca_new_client_device(adap, LJCA_CLIENT_SPI, i, 668 670 "ljca-spi", spi_info, 669 671 LJCA_SPI1_ACPI_ADR + i); 670 - if (ret) { 671 - kfree(spi_info); 672 + if (ret) 672 673 return ret; 673 - } 674 674 } 675 675 676 676 return 0;
-7
drivers/usb/phy/phy-generic.c
··· 262 262 return dev_err_probe(dev, PTR_ERR(nop->vbus_draw), 263 263 "could not get vbus regulator\n"); 264 264 265 - nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus"); 266 - if (PTR_ERR(nop->vbus_draw) == -ENODEV) 267 - nop->vbus_draw = NULL; 268 - if (IS_ERR(nop->vbus_draw)) 269 - return dev_err_probe(dev, PTR_ERR(nop->vbus_draw), 270 - "could not get vbus regulator\n"); 271 - 272 265 nop->dev = dev; 273 266 nop->phy.dev = nop->dev; 274 267 nop->phy.label = "nop-xceiv";
+13 -15
drivers/usb/storage/uas.c
··· 533 533 * daft to me. 534 534 */ 535 535 536 - static struct urb *uas_submit_sense_urb(struct scsi_cmnd *cmnd, gfp_t gfp) 536 + static int uas_submit_sense_urb(struct scsi_cmnd *cmnd, gfp_t gfp) 537 537 { 538 538 struct uas_dev_info *devinfo = cmnd->device->hostdata; 539 539 struct urb *urb; ··· 541 541 542 542 urb = uas_alloc_sense_urb(devinfo, gfp, cmnd); 543 543 if (!urb) 544 - return NULL; 544 + return -ENOMEM; 545 545 usb_anchor_urb(urb, &devinfo->sense_urbs); 546 546 err = usb_submit_urb(urb, gfp); 547 547 if (err) { 548 548 usb_unanchor_urb(urb); 549 549 uas_log_cmd_state(cmnd, "sense submit err", err); 550 550 usb_free_urb(urb); 551 - return NULL; 552 551 } 553 - return urb; 552 + return err; 554 553 } 555 554 556 555 static int uas_submit_urbs(struct scsi_cmnd *cmnd, 557 556 struct uas_dev_info *devinfo) 558 557 { 559 558 struct uas_cmd_info *cmdinfo = scsi_cmd_priv(cmnd); 560 - struct urb *urb; 561 559 int err; 562 560 563 561 lockdep_assert_held(&devinfo->lock); 564 562 if (cmdinfo->state & SUBMIT_STATUS_URB) { 565 - urb = uas_submit_sense_urb(cmnd, GFP_ATOMIC); 566 - if (!urb) 567 - return SCSI_MLQUEUE_DEVICE_BUSY; 563 + err = uas_submit_sense_urb(cmnd, GFP_ATOMIC); 564 + if (err) 565 + return err; 568 566 cmdinfo->state &= ~SUBMIT_STATUS_URB; 569 567 } 570 568 ··· 570 572 cmdinfo->data_in_urb = uas_alloc_data_urb(devinfo, GFP_ATOMIC, 571 573 cmnd, DMA_FROM_DEVICE); 572 574 if (!cmdinfo->data_in_urb) 573 - return SCSI_MLQUEUE_DEVICE_BUSY; 575 + return -ENOMEM; 574 576 cmdinfo->state &= ~ALLOC_DATA_IN_URB; 575 577 } 576 578 ··· 580 582 if (err) { 581 583 usb_unanchor_urb(cmdinfo->data_in_urb); 582 584 uas_log_cmd_state(cmnd, "data in submit err", err); 583 - return SCSI_MLQUEUE_DEVICE_BUSY; 585 + return err; 584 586 } 585 587 cmdinfo->state &= ~SUBMIT_DATA_IN_URB; 586 588 cmdinfo->state |= DATA_IN_URB_INFLIGHT; ··· 590 592 cmdinfo->data_out_urb = uas_alloc_data_urb(devinfo, GFP_ATOMIC, 591 593 cmnd, DMA_TO_DEVICE); 592 594 if (!cmdinfo->data_out_urb) 593 - return SCSI_MLQUEUE_DEVICE_BUSY; 595 + return -ENOMEM; 594 596 cmdinfo->state &= ~ALLOC_DATA_OUT_URB; 595 597 } 596 598 ··· 600 602 if (err) { 601 603 usb_unanchor_urb(cmdinfo->data_out_urb); 602 604 uas_log_cmd_state(cmnd, "data out submit err", err); 603 - return SCSI_MLQUEUE_DEVICE_BUSY; 605 + return err; 604 606 } 605 607 cmdinfo->state &= ~SUBMIT_DATA_OUT_URB; 606 608 cmdinfo->state |= DATA_OUT_URB_INFLIGHT; ··· 609 611 if (cmdinfo->state & ALLOC_CMD_URB) { 610 612 cmdinfo->cmd_urb = uas_alloc_cmd_urb(devinfo, GFP_ATOMIC, cmnd); 611 613 if (!cmdinfo->cmd_urb) 612 - return SCSI_MLQUEUE_DEVICE_BUSY; 614 + return -ENOMEM; 613 615 cmdinfo->state &= ~ALLOC_CMD_URB; 614 616 } 615 617 ··· 619 621 if (err) { 620 622 usb_unanchor_urb(cmdinfo->cmd_urb); 621 623 uas_log_cmd_state(cmnd, "cmd submit err", err); 622 - return SCSI_MLQUEUE_DEVICE_BUSY; 624 + return err; 623 625 } 624 626 cmdinfo->cmd_urb = NULL; 625 627 cmdinfo->state &= ~SUBMIT_CMD_URB; ··· 696 698 * of queueing, no matter how fatal the error 697 699 */ 698 700 if (err == -ENODEV) { 699 - set_host_byte(cmnd, DID_ERROR); 701 + set_host_byte(cmnd, DID_NO_CONNECT); 700 702 scsi_done(cmnd); 701 703 goto zombie; 702 704 }
+6 -1
drivers/usb/typec/class.c
··· 1310 1310 { 1311 1311 struct typec_port *port = to_typec_port(dev); 1312 1312 struct usb_power_delivery *pd; 1313 + int ret; 1313 1314 1314 1315 if (!port->ops || !port->ops->pd_set) 1315 1316 return -EOPNOTSUPP; ··· 1319 1318 if (!pd) 1320 1319 return -EINVAL; 1321 1320 1322 - return port->ops->pd_set(port, pd); 1321 + ret = port->ops->pd_set(port, pd); 1322 + if (ret) 1323 + return ret; 1324 + 1325 + return size; 1323 1326 } 1324 1327 1325 1328 static ssize_t select_usb_power_delivery_show(struct device *dev,
+3 -3
drivers/usb/typec/tcpm/tcpm.c
··· 6861 6861 6862 6862 if (data->source_desc.pdo[0]) { 6863 6863 for (i = 0; i < PDO_MAX_OBJECTS && data->source_desc.pdo[i]; i++) 6864 - port->snk_pdo[i] = data->source_desc.pdo[i]; 6864 + port->src_pdo[i] = data->source_desc.pdo[i]; 6865 6865 port->nr_src_pdo = i + 1; 6866 6866 } 6867 6867 ··· 6910 6910 6911 6911 port->port_source_caps = data->source_cap; 6912 6912 port->port_sink_caps = data->sink_cap; 6913 + typec_port_set_usb_power_delivery(p, NULL); 6913 6914 port->selected_pd = pd; 6915 + typec_port_set_usb_power_delivery(p, port->selected_pd); 6914 6916 unlock: 6915 6917 mutex_unlock(&port->lock); 6916 6918 return ret; ··· 6945 6943 port->port_source_caps = NULL; 6946 6944 for (i = 0; i < port->pd_count; i++) { 6947 6945 usb_power_delivery_unregister_capabilities(port->pd_list[i]->sink_cap); 6948 - kfree(port->pd_list[i]->sink_cap); 6949 6946 usb_power_delivery_unregister_capabilities(port->pd_list[i]->source_cap); 6950 - kfree(port->pd_list[i]->source_cap); 6951 6947 devm_kfree(port->dev, port->pd_list[i]); 6952 6948 port->pd_list[i] = NULL; 6953 6949 usb_power_delivery_unregister(port->pds[i]);
+72 -18
drivers/usb/typec/ucsi/ucsi.c
··· 151 151 if (!(cci & UCSI_CCI_COMMAND_COMPLETE)) 152 152 return -EIO; 153 153 154 - if (cci & UCSI_CCI_NOT_SUPPORTED) 154 + if (cci & UCSI_CCI_NOT_SUPPORTED) { 155 + if (ucsi_acknowledge_command(ucsi) < 0) 156 + dev_err(ucsi->dev, 157 + "ACK of unsupported command failed\n"); 155 158 return -EOPNOTSUPP; 159 + } 156 160 157 161 if (cci & UCSI_CCI_ERROR) { 158 162 if (cmd == UCSI_GET_ERROR_STATUS) ··· 1137 1133 if (ret < 0) 1138 1134 return ret; 1139 1135 1140 - ret = ucsi_get_cable_identity(con); 1141 - if (ret < 0) 1142 - return ret; 1136 + if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE) { 1137 + ret = ucsi_get_cable_identity(con); 1138 + if (ret < 0) 1139 + return ret; 1140 + } 1143 1141 1144 - ret = ucsi_register_plug(con); 1145 - if (ret < 0) 1146 - return ret; 1142 + if (con->ucsi->cap.features & UCSI_CAP_ALT_MODE_DETAILS) { 1143 + ret = ucsi_register_plug(con); 1144 + if (ret < 0) 1145 + return ret; 1147 1146 1148 - ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_SOP_P); 1149 - if (ret < 0) 1150 - return ret; 1147 + ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_SOP_P); 1148 + if (ret < 0) 1149 + return ret; 1150 + } 1151 1151 1152 1152 return 0; 1153 1153 } ··· 1197 1189 ucsi_register_partner(con); 1198 1190 ucsi_partner_task(con, ucsi_check_connection, 1, HZ); 1199 1191 ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ); 1200 - ucsi_partner_task(con, ucsi_get_partner_identity, 1, HZ); 1201 - ucsi_partner_task(con, ucsi_check_cable, 1, HZ); 1192 + if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE) 1193 + ucsi_partner_task(con, ucsi_get_partner_identity, 1, HZ); 1194 + if (con->ucsi->cap.features & UCSI_CAP_CABLE_DETAILS) 1195 + ucsi_partner_task(con, ucsi_check_cable, 1, HZ); 1202 1196 1203 1197 if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) == 1204 1198 UCSI_CONSTAT_PWR_OPMODE_PD) ··· 1225 1215 if (con->status.change & UCSI_CONSTAT_CAM_CHANGE) 1226 1216 ucsi_partner_task(con, ucsi_check_altmodes, 1, 0); 1227 1217 1228 - clear_bit(EVENT_PENDING, &con->ucsi->flags); 1229 - 1230 1218 mutex_lock(&ucsi->ppm_lock); 1219 + clear_bit(EVENT_PENDING, &con->ucsi->flags); 1231 1220 ret = ucsi_acknowledge_connector_change(ucsi); 1232 1221 mutex_unlock(&ucsi->ppm_lock); 1222 + 1233 1223 if (ret) 1234 1224 dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret); ··· 1247 1237 struct ucsi_connector *con = &ucsi->connector[num - 1]; 1248 1238 1249 1239 if (!(ucsi->ntfy & UCSI_ENABLE_NTFY_CONNECTOR_CHANGE)) { 1250 - dev_dbg(ucsi->dev, "Bogus connector change event\n"); 1240 + dev_dbg(ucsi->dev, "Early connector change event\n"); 1251 1241 return; 1252 1242 } ··· 1270 1260 1271 1261 static int ucsi_reset_ppm(struct ucsi *ucsi) 1272 1262 { 1273 - u64 command = UCSI_PPM_RESET; 1263 + u64 command; 1274 1264 unsigned long tmo; 1275 1265 u32 cci; 1276 1266 int ret; 1277 1267 1278 1268 mutex_lock(&ucsi->ppm_lock); 1279 1269 1270 + ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci)); 1271 + if (ret < 0) 1272 + goto out; 1273 + 1274 + /* 1275 + * If UCSI_CCI_RESET_COMPLETE is already set we must clear 1276 + * the flag before we start another reset. Send a 1277 + * UCSI_SET_NOTIFICATION_ENABLE command to achieve this. 1278 + * Ignore a timeout and try the reset anyway if this fails. 1279 + */ 1280 + if (cci & UCSI_CCI_RESET_COMPLETE) { 1281 + command = UCSI_SET_NOTIFICATION_ENABLE; 1282 + ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command, 1283 + sizeof(command)); 1284 + if (ret < 0) 1285 + goto out; 1286 + 1287 + tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS); 1288 + do { 1289 + ret = ucsi->ops->read(ucsi, UCSI_CCI, 1290 + &cci, sizeof(cci)); 1291 + if (ret < 0) 1292 + goto out; 1293 + if (cci & UCSI_CCI_COMMAND_COMPLETE) 1294 + break; 1295 + if (time_is_before_jiffies(tmo)) 1296 + break; 1297 + msleep(20); 1298 + } while (1); 1299 + 1300 + WARN_ON(cci & UCSI_CCI_RESET_COMPLETE); 1301 + } 1302 + 1303 + command = UCSI_PPM_RESET; 1280 1304 ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command, 1281 1305 sizeof(command)); 1282 1306 if (ret < 0) ··· 1633 1589 ucsi_register_partner(con); 1634 1590 ucsi_pwr_opmode_change(con); 1635 1591 ucsi_port_psy_changed(con); 1636 - ucsi_get_partner_identity(con); 1637 - ucsi_check_cable(con); 1592 + if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE) 1593 + ucsi_get_partner_identity(con); 1594 + if (con->ucsi->cap.features & UCSI_CAP_CABLE_DETAILS) 1595 + ucsi_check_cable(con); 1638 1596 } 1639 1597 1640 1598 /* Only notify USB controller if partner supports USB data */ ··· 1682 1636 { 1683 1637 struct ucsi_connector *con, *connector; 1684 1638 u64 command, ntfy; 1639 + u32 cci; 1685 1640 int ret; 1686 1641 int i; 1687 1642 ··· 1735 1688 1736 1689 ucsi->connector = connector; 1737 1690 ucsi->ntfy = ntfy; 1691 + 1692 + ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci)); 1693 + if (ret) 1694 + return ret; 1695 + if (UCSI_CCI_CONNECTOR(READ_ONCE(cci))) 1696 + ucsi_connector_change(ucsi, cci); 1697 + 1738 1698 return 0; 1739 1699 err_unregister:
+3 -2
drivers/usb/typec/ucsi/ucsi.h
··· 206 206 #define UCSI_CAP_ATTR_POWER_OTHER BIT(10) 207 207 #define UCSI_CAP_ATTR_POWER_VBUS BIT(14) 208 208 u8 num_connectors; 209 - u8 features; 209 + u16 features; 210 210 #define UCSI_CAP_SET_UOM BIT(0) 211 211 #define UCSI_CAP_SET_PDM BIT(1) 212 212 #define UCSI_CAP_ALT_MODE_DETAILS BIT(2) ··· 215 215 #define UCSI_CAP_CABLE_DETAILS BIT(5) 216 216 #define UCSI_CAP_EXT_SUPPLY_NOTIFICATIONS BIT(6) 217 217 #define UCSI_CAP_PD_RESET BIT(7) 218 - u16 reserved_1; 218 + #define UCSI_CAP_GET_PD_MESSAGE BIT(8) 219 + u8 reserved_1; 219 220 u8 num_alt_modes; 220 221 u8 reserved_2; 221 222 u16 bc_version;
+31 -40
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 23 23 void *base; 24 24 struct completion complete; 25 25 unsigned long flags; 26 + #define UCSI_ACPI_SUPPRESS_EVENT 0 27 + #define UCSI_ACPI_COMMAND_PENDING 1 28 + #define UCSI_ACPI_ACK_PENDING 2 26 29 guid_t guid; 27 30 u64 cmd; 28 - bool dell_quirk_probed; 29 - bool dell_quirk_active; 30 31 }; 31 32 32 33 static int ucsi_acpi_dsm(struct ucsi_acpi *ua, int func) ··· 80 79 int ret; 81 80 82 81 if (ack) 83 - set_bit(ACK_PENDING, &ua->flags); 82 + set_bit(UCSI_ACPI_ACK_PENDING, &ua->flags); 84 83 else 85 - set_bit(COMMAND_PENDING, &ua->flags); 84 + set_bit(UCSI_ACPI_COMMAND_PENDING, &ua->flags); 86 85 87 86 ret = ucsi_acpi_async_write(ucsi, offset, val, val_len); 88 87 if (ret) ··· 93 92 94 93 out_clear_bit: 95 94 if (ack) 96 - clear_bit(ACK_PENDING, &ua->flags); 95 + clear_bit(UCSI_ACPI_ACK_PENDING, &ua->flags); 97 96 else 98 - clear_bit(COMMAND_PENDING, &ua->flags); 97 + clear_bit(UCSI_ACPI_COMMAND_PENDING, &ua->flags); 99 98 100 99 return ret; 101 100 } ··· 130 129 }; 131 130 132 131 /* 133 - * Some Dell laptops expect that an ACK command with the 134 - * UCSI_ACK_CONNECTOR_CHANGE bit set is followed by a (separate) 135 - * ACK command that only has the UCSI_ACK_COMMAND_COMPLETE bit set. 136 - * If this is not done events are not delivered to OSPM and 137 - * subsequent commands will timeout. 132 + * Some Dell laptops don't like ACK commands with the 133 + * UCSI_ACK_CONNECTOR_CHANGE but not the UCSI_ACK_COMMAND_COMPLETE 134 + * bit set. To work around this send a dummy command and bundle the 135 + * UCSI_ACK_CONNECTOR_CHANGE with the UCSI_ACK_COMMAND_COMPLETE 136 + * for the dummy command. 138 137 */ 139 138 static int 140 139 ucsi_dell_sync_write(struct ucsi *ucsi, unsigned int offset, 141 140 const void *val, size_t val_len) 142 141 { 143 142 struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 144 - u64 cmd = *(u64 *)val, ack = 0; 143 + u64 cmd = *(u64 *)val; 144 + u64 dummycmd = UCSI_GET_CAPABILITY; 145 145 int ret; 146 146 147 - if (UCSI_COMMAND(cmd) == UCSI_ACK_CC_CI && 148 - cmd & UCSI_ACK_CONNECTOR_CHANGE) 149 - ack = UCSI_ACK_CC_CI | UCSI_ACK_COMMAND_COMPLETE; 147 + if (cmd == (UCSI_ACK_CC_CI | UCSI_ACK_CONNECTOR_CHANGE)) { 148 + cmd |= UCSI_ACK_COMMAND_COMPLETE; 150 149 151 - ret = ucsi_acpi_sync_write(ucsi, offset, val, val_len); 152 - if (ret != 0) 153 - return ret; 154 - if (ack == 0) 155 - return ret; 150 + /* 151 + * The UCSI core thinks it is sending a connector change ack 152 + * and will accept new connector change events. We don't want 153 + * this to happen for the dummy command as its response will 154 + * still report the very event that the core is trying to clear. 155 + */ 156 + set_bit(UCSI_ACPI_SUPPRESS_EVENT, &ua->flags); 157 + ret = ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &dummycmd, 158 + sizeof(dummycmd)); 159 + clear_bit(UCSI_ACPI_SUPPRESS_EVENT, &ua->flags); 156 160 157 - if (!ua->dell_quirk_probed) { 158 - ua->dell_quirk_probed = true; 159 - 160 - cmd = UCSI_GET_CAPABILITY; 161 - ret = ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &cmd, 162 - sizeof(cmd)); 163 - if (ret == 0) 164 - return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, 165 - &ack, sizeof(ack)); 166 - if (ret != -ETIMEDOUT) 161 + if (ret < 0) 167 162 return ret; 168 - 169 - ua->dell_quirk_active = true; 170 - dev_err(ua->dev, "Firmware bug: Additional ACK required after ACKing a connector change.\n"); 171 - dev_err(ua->dev, "Firmware bug: Enabling workaround\n"); 172 163 } 173 164 174 - if (!ua->dell_quirk_active) 175 - return ret; 176 - 177 - return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &ack, sizeof(ack)); 165 + return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd)); 178 166 } 179 167 180 168 static const struct ucsi_operations ucsi_dell_ops = { ··· 199 209 if (ret) 200 210 return; 201 211 202 - if (UCSI_CCI_CONNECTOR(cci)) 212 + if (UCSI_CCI_CONNECTOR(cci) && 213 + !test_bit(UCSI_ACPI_SUPPRESS_EVENT, &ua->flags)) 203 214 ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci)); 204 215 205 216 if (cci & UCSI_CCI_ACK_COMPLETE && test_bit(ACK_PENDING, &ua->flags)) 206 217 complete(&ua->complete); 207 218 if (cci & UCSI_CCI_COMMAND_COMPLETE && 208 219 test_bit(UCSI_ACPI_COMMAND_PENDING, &ua->flags)) 209 220 complete(&ua->complete); 210 221 }
+14
drivers/usb/typec/ucsi/ucsi_glink.c
··· 255 255 static void pmic_glink_ucsi_register(struct work_struct *work) 256 256 { 257 257 struct pmic_glink_ucsi *ucsi = container_of(work, struct pmic_glink_ucsi, register_work); 258 + int orientation; 259 + int i; 260 + 261 + for (i = 0; i < PMIC_GLINK_MAX_PORTS; i++) { 262 + if (!ucsi->port_orientation[i]) 263 + continue; 264 + orientation = gpiod_get_value(ucsi->port_orientation[i]); 265 + 266 + if (orientation >= 0) { 267 + typec_switch_set(ucsi->port_switch[i], 268 + orientation ? TYPEC_ORIENTATION_REVERSE 269 + : TYPEC_ORIENTATION_NORMAL); 270 + } 271 + } 258 272 259 273 ucsi_register(ucsi->ucsi); 260 274 }
+3
drivers/video/fbdev/Kconfig
··· 494 494 select FB_CFB_COPYAREA 495 495 select FB_CFB_FILLRECT 496 496 select FB_CFB_IMAGEBLIT 497 + select FB_IOMEM_FOPS 497 498 498 499 config FB_BW2 499 500 bool "BWtwo support" ··· 515 514 depends on (FB = y) && (SPARC && FB_SBUS) 516 515 select FB_CFB_COPYAREA 517 516 select FB_CFB_IMAGEBLIT 517 + select FB_IOMEM_FOPS 518 518 help 519 519 This is the frame buffer device driver for the CGsix (GX, TurboGX) 520 520 frame buffer. ··· 525 523 depends on FB_SBUS && SPARC64 526 524 select FB_CFB_COPYAREA 527 525 select FB_CFB_IMAGEBLIT 526 + select FB_IOMEM_FOPS 528 527 help 529 528 This is the frame buffer device driver for the Creator, Creator3D, 530 529 and Elite3D graphics boards.
+10 -6
fs/9p/vfs_inode.c
··· 344 344 struct v9fs_inode __maybe_unused *v9inode = V9FS_I(inode); 345 345 __le32 __maybe_unused version; 346 346 347 - truncate_inode_pages_final(&inode->i_data); 347 + if (!is_bad_inode(inode)) { 348 + truncate_inode_pages_final(&inode->i_data); 348 349 349 - version = cpu_to_le32(v9inode->qid.version); 350 - netfs_clear_inode_writeback(inode, &version); 350 + version = cpu_to_le32(v9inode->qid.version); 351 + netfs_clear_inode_writeback(inode, &version); 351 352 352 - clear_inode(inode); 353 - filemap_fdatawrite(&inode->i_data); 353 + clear_inode(inode); 354 + filemap_fdatawrite(&inode->i_data); 354 355 355 356 #ifdef CONFIG_9P_FSCACHE 356 - fscache_relinquish_cookie(v9fs_inode_cookie(v9inode), false); 357 + if (v9fs_inode_cookie(v9inode)) 358 + fscache_relinquish_cookie(v9fs_inode_cookie(v9inode), false); 357 359 #endif 360 + } else 361 + clear_inode(inode); 358 362 } 359 363 360 364 struct inode *v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid)
+1 -5
fs/9p/vfs_inode_dotl.c
··· 78 78 79 79 retval = v9fs_init_inode(v9ses, inode, &fid->qid, 80 80 st->st_mode, new_decode_dev(st->st_rdev)); 81 + v9fs_stat2inode_dotl(st, inode, 0); 81 82 kfree(st); 82 83 if (retval) 83 84 goto error; 84 85 85 - v9fs_stat2inode_dotl(st, inode, 0); 86 86 v9fs_set_netfs_context(inode); 87 87 v9fs_cache_inode_get_cookie(inode); 88 88 retval = v9fs_get_acl(inode, fid); ··· 297 297 umode_t omode) 298 298 { 299 299 int err; 300 - struct v9fs_session_info *v9ses; 301 300 struct p9_fid *fid = NULL, *dfid = NULL; 302 301 kgid_t gid; 303 302 const unsigned char *name; ··· 306 307 struct posix_acl *dacl = NULL, *pacl = NULL; 307 308 308 309 p9_debug(P9_DEBUG_VFS, "name %pd\n", dentry); 309 - v9ses = v9fs_inode2v9ses(dir); 310 310 311 311 omode |= S_IFDIR; 312 312 if (dir->i_mode & S_ISGID) ··· 737 739 kgid_t gid; 738 740 const unsigned char *name; 739 741 umode_t mode; 740 - struct v9fs_session_info *v9ses; 741 742 struct p9_fid *fid = NULL, *dfid = NULL; 742 743 struct inode *inode; 743 744 struct p9_qid qid; ··· 746 749 dir->i_ino, dentry, omode, 747 750 MAJOR(rdev), MINOR(rdev)); 748 751 749 - v9ses = v9fs_inode2v9ses(dir); 750 752 dfid = v9fs_parent_fid(dentry); 751 753 if (IS_ERR(dfid)) { 752 754 err = PTR_ERR(dfid);
+1 -1
fs/binfmt_elf_fdpic.c
··· 1359 1359 SET_UID(psinfo->pr_uid, from_kuid_munged(cred->user_ns, cred->uid)); 1360 1360 SET_GID(psinfo->pr_gid, from_kgid_munged(cred->user_ns, cred->gid)); 1361 1361 rcu_read_unlock(); 1362 - strncpy(psinfo->pr_fname, p->comm, sizeof(psinfo->pr_fname)); 1362 + get_task_comm(psinfo->pr_fname, p); 1363 1363 1364 1364 return 0; 1365 1365 }
+2 -1
fs/btrfs/block-group.c
··· 1559 1559 * needing to allocate extents from the block group. 1560 1560 */ 1561 1561 used = btrfs_space_info_used(space_info, true); 1562 - if (space_info->total_bytes - block_group->length < used) { 1562 + if (space_info->total_bytes - block_group->length < used && 1563 + block_group->zone_unusable < block_group->length) { 1563 1564 /* 1564 1565 * Add a reference for the list, compensate for the ref 1565 1566 * drop under the "next" label for the
+13
fs/btrfs/extent_io.c
··· 4333 4333 if (test_and_set_bit(EXTENT_BUFFER_READING, &eb->bflags)) 4334 4334 goto done; 4335 4335 4336 + /* 4337 + * Between the initial test_bit(EXTENT_BUFFER_UPTODATE) and the above 4338 + * test_and_set_bit(EXTENT_BUFFER_READING), someone else could have 4339 + * started and finished reading the same eb. In this case, UPTODATE 4340 + * will now be set, and we shouldn't read it in again. 4341 + */ 4342 + if (unlikely(test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))) { 4343 + clear_bit(EXTENT_BUFFER_READING, &eb->bflags); 4344 + smp_mb__after_atomic(); 4345 + wake_up_bit(&eb->bflags, EXTENT_BUFFER_READING); 4346 + return 0; 4347 + } 4348 + 4336 4349 clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags); 4337 4350 eb->read_mirror = 0; 4338 4351 check_buffer_tree_ref(eb);
+8 -8
fs/btrfs/extent_map.c
··· 309 309 btrfs_warn(fs_info, 310 310 "no extent map found for inode %llu (root %lld) when unpinning extent range [%llu, %llu), generation %llu", 311 311 btrfs_ino(inode), btrfs_root_id(inode->root), 312 - start, len, gen); 312 + start, start + len, gen); 313 313 ret = -ENOENT; 314 314 goto out; 315 315 } ··· 318 318 btrfs_warn(fs_info, 319 319 "found extent map for inode %llu (root %lld) with unexpected start offset %llu when unpinning extent range [%llu, %llu), generation %llu", 320 320 btrfs_ino(inode), btrfs_root_id(inode->root), 321 - em->start, start, len, gen); 321 + em->start, start, start + len, gen); 322 322 ret = -EUCLEAN; 323 323 goto out; 324 324 } ··· 340 340 em->mod_len = em->len; 341 341 } 342 342 343 - free_extent_map(em); 344 343 out: 345 344 write_unlock(&tree->lock); 345 + free_extent_map(em); 346 346 return ret; 347 347 348 348 } ··· 629 629 */ 630 630 ret = merge_extent_mapping(em_tree, existing, 631 631 em, start); 632 - if (ret) { 632 + if (WARN_ON(ret)) { 633 633 free_extent_map(em); 634 634 *em_in = NULL; 635 - WARN_ONCE(ret, 636 - "extent map merge error existing [%llu, %llu) with em [%llu, %llu) start %llu\n", 637 - existing->start, existing->len, 638 - orig_start, orig_len, start); 635 + btrfs_warn(fs_info, 636 + "extent map merge error existing [%llu, %llu) with em [%llu, %llu) start %llu", 637 + existing->start, extent_map_end(existing), 638 + orig_start, orig_start + orig_len, start); 639 639 } 640 640 free_extent_map(existing); 641 641 }
+11 -1
fs/btrfs/scrub.c
··· 2812 2812 gen = btrfs_get_last_trans_committed(fs_info); 2813 2813 2814 2814 for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) { 2815 - bytenr = btrfs_sb_offset(i); 2815 + ret = btrfs_sb_log_location(scrub_dev, i, 0, &bytenr); 2816 + if (ret == -ENOENT) 2817 + break; 2818 + 2819 + if (ret) { 2820 + spin_lock(&sctx->stat_lock); 2821 + sctx->stat.super_errors++; 2822 + spin_unlock(&sctx->stat_lock); 2823 + continue; 2824 + } 2825 + 2816 2826 if (bytenr + BTRFS_SUPER_INFO_SIZE > 2817 2827 scrub_dev->commit_total_bytes) 2818 2828 break;
+22 -5
fs/btrfs/volumes.c
··· 692 692 device->bdev = file_bdev(bdev_file); 693 693 clear_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state); 694 694 695 + if (device->devt != device->bdev->bd_dev) { 696 + btrfs_warn(NULL, 697 + "device %s maj:min changed from %d:%d to %d:%d", 698 + device->name->str, MAJOR(device->devt), 699 + MINOR(device->devt), MAJOR(device->bdev->bd_dev), 700 + MINOR(device->bdev->bd_dev)); 701 + 702 + device->devt = device->bdev->bd_dev; 703 + } 704 + 695 705 fs_devices->open_devices++; 696 706 if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) && 697 707 device->devid != BTRFS_DEV_REPLACE_DEVID) { ··· 1184 1174 struct btrfs_device *device; 1185 1175 struct btrfs_device *latest_dev = NULL; 1186 1176 struct btrfs_device *tmp_device; 1177 + int ret = 0; 1187 1178 1188 1179 list_for_each_entry_safe(device, tmp_device, &fs_devices->devices, 1189 1180 dev_list) { 1190 - int ret; 1181 + int ret2; 1191 1182 1192 - ret = btrfs_open_one_device(fs_devices, device, flags, holder); 1193 - if (ret == 0 && 1183 + ret2 = btrfs_open_one_device(fs_devices, device, flags, holder); 1184 + if (ret2 == 0 && 1194 1185 (!latest_dev || device->generation > latest_dev->generation)) { 1195 1186 latest_dev = device; 1196 - } else if (ret == -ENODATA) { 1187 + } else if (ret2 == -ENODATA) { 1197 1188 fs_devices->num_devices--; 1198 1189 list_del(&device->dev_list); 1199 1190 btrfs_free_device(device); 1200 1191 } 1192 + if (ret == 0 && ret2 != 0) 1193 + ret = ret2; 1201 1194 } 1202 - if (fs_devices->open_devices == 0) 1195 + 1196 + if (fs_devices->open_devices == 0) { 1197 + if (ret) 1198 + return ret; 1203 1199 return -EINVAL; 1200 + } 1204 1201 1205 1202 fs_devices->opened = 1; 1206 1203 fs_devices->latest_dev = latest_dev;
+7 -7
fs/btrfs/zoned.c
··· 1574 1574 if (!map) 1575 1575 return -EINVAL; 1576 1576 1577 - cache->physical_map = btrfs_clone_chunk_map(map, GFP_NOFS); 1578 - if (!cache->physical_map) { 1579 - ret = -ENOMEM; 1580 - goto out; 1581 - } 1577 + cache->physical_map = map; 1582 1578 1583 1579 zone_info = kcalloc(map->num_stripes, sizeof(*zone_info), GFP_NOFS); 1584 1580 if (!zone_info) { ··· 1686 1690 } 1687 1691 bitmap_free(active); 1688 1692 kfree(zone_info); 1689 - btrfs_free_chunk_map(map); 1690 1693 1691 1694 return ret; 1692 1695 } ··· 2170 2175 struct btrfs_chunk_map *map; 2171 2176 const bool is_metadata = (block_group->flags & 2172 2177 (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)); 2178 + struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; 2173 2179 int ret = 0; 2174 2180 int i; 2175 2181 ··· 2246 2250 btrfs_clear_data_reloc_bg(block_group); 2247 2251 spin_unlock(&block_group->lock); 2248 2252 2253 + down_read(&dev_replace->rwsem); 2249 2254 map = block_group->physical_map; 2250 2255 for (i = 0; i < map->num_stripes; i++) { 2251 2256 struct btrfs_device *device = map->stripes[i].dev; ··· 2263 2266 zinfo->zone_size >> SECTOR_SHIFT); 2264 2267 memalloc_nofs_restore(nofs_flags); 2265 2268 2266 - if (ret) 2269 + if (ret) { 2270 + up_read(&dev_replace->rwsem); 2267 2271 return ret; 2272 + } 2268 2273 2269 2274 if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA)) 2270 2275 zinfo->reserved_active_zones++; 2271 2276 btrfs_dev_clear_active_zone(device, physical); 2272 2277 } 2278 + up_read(&dev_replace->rwsem); 2273 2279 2274 2280 if (!fully_written) 2275 2281 btrfs_dec_block_group_ro(block_group);
-1
fs/erofs/super.c
··· 430 430 431 431 switch (mode) { 432 432 case EROFS_MOUNT_DAX_ALWAYS: 433 - warnfc(fc, "DAX enabled. Warning: EXPERIMENTAL, use at your own risk"); 434 433 set_opt(&ctx->opt, DAX_ALWAYS); 435 434 clear_opt(&ctx->opt, DAX_NEVER); 436 435 return true;
+1
fs/exec.c
··· 895 895 goto out; 896 896 } 897 897 898 + bprm->exec += *sp_location - MAX_ARG_PAGES * PAGE_SIZE; 898 899 *sp_location = sp; 899 900 900 901 out:
+3 -2
fs/gfs2/bmap.c
··· 1718 1718 struct buffer_head *dibh, *bh; 1719 1719 struct gfs2_holder rd_gh; 1720 1720 unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift; 1721 - u64 lblock = (offset + (1 << bsize_shift) - 1) >> bsize_shift; 1721 + unsigned int bsize = 1 << bsize_shift; 1722 + u64 lblock = (offset + bsize - 1) >> bsize_shift; 1722 1723 __u16 start_list[GFS2_MAX_META_HEIGHT]; 1723 1724 __u16 __end_list[GFS2_MAX_META_HEIGHT], *end_list = NULL; 1724 1725 unsigned int start_aligned, end_aligned; ··· 1730 1729 u64 prev_bnr = 0; 1731 1730 __be64 *start, *end; 1732 1731 1733 - if (offset >= maxsize) { 1732 + if (offset + bsize - 1 >= maxsize) { 1734 1733 /* 1735 1734 * The starting point lies beyond the allocated metadata; 1736 1735 * there are no blocks to deallocate.
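The gfs2/bmap.c hunk above factors out `bsize` and computes the first block at or after `offset` with the standard round-up-before-shift idiom, while the changed bound check (`offset + bsize - 1 >= maxsize`) makes a truncate that starts inside the final, partially allocated block a no-op. A standalone sketch of just the idiom (helper name is illustrative; `bsize` must be a power of two):

```c
#include <stdint.h>

/* First block index whose data lies at or after byte `offset`,
 * for a block size of (1 << bsize_shift): round up, then shift. */
uint64_t first_block_at_or_after(uint64_t offset, unsigned int bsize_shift)
{
    uint64_t bsize = 1ULL << bsize_shift;

    return (offset + bsize - 1) >> bsize_shift;
}
```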
+25 -11
fs/nfsd/nfs4state.c
··· 3831 3831 else 3832 3832 cs_slot = &unconf->cl_cs_slot; 3833 3833 status = check_slot_seqid(cr_ses->seqid, cs_slot->sl_seqid, 0); 3834 - if (status) { 3835 - if (status == nfserr_replay_cache) { 3836 - status = nfsd4_replay_create_session(cr_ses, cs_slot); 3837 - goto out_free_conn; 3838 - } 3834 + switch (status) { 3835 + case nfs_ok: 3836 + cs_slot->sl_seqid++; 3837 + cr_ses->seqid = cs_slot->sl_seqid; 3838 + break; 3839 + case nfserr_replay_cache: 3840 + status = nfsd4_replay_create_session(cr_ses, cs_slot); 3841 + fallthrough; 3842 + case nfserr_jukebox: 3843 + /* The server MUST NOT cache NFS4ERR_DELAY */ 3844 + goto out_free_conn; 3845 + default: 3839 3846 goto out_cache_error; 3840 3847 } 3841 - cs_slot->sl_seqid++; 3842 - cr_ses->seqid = cs_slot->sl_seqid; 3843 3848 3844 3849 /* RFC 8881 Section 18.36.4 Phase 3: Client ID confirmation. */ 3845 3850 if (conf) { ··· 3864 3859 old = find_confirmed_client_by_name(&unconf->cl_name, nn); 3865 3860 if (old) { 3866 3861 status = mark_client_expired_locked(old); 3867 - if (status) { 3868 - old = NULL; 3869 - goto out_cache_error; 3870 - } 3862 + if (status) 3863 + goto out_expired_error; 3871 3864 trace_nfsd_clid_replaced(&old->cl_clientid); 3872 3865 } 3873 3866 move_to_confirmed(unconf); ··· 3897 3894 expire_client(old); 3898 3895 return status; 3899 3896 3897 + out_expired_error: 3898 + old = NULL; 3899 + /* 3900 + * Revert the slot seq_nr change so the server will process 3901 + * the client's resend instead of returning a cached response. 3902 + */ 3903 + if (status == nfserr_jukebox) { 3904 + cs_slot->sl_seqid--; 3905 + cr_ses->seqid = cs_slot->sl_seqid; 3906 + goto out_free_conn; 3907 + } 3900 3908 out_cache_error: 3901 3909 nfsd4_cache_create_session(cr_ses, cs_slot, status); 3902 3910 out_free_conn:
+2 -1
fs/nfsd/vfs.c
··· 1852 1852 trap = lock_rename(tdentry, fdentry); 1853 1853 if (IS_ERR(trap)) { 1854 1854 err = (rqstp->rq_vers == 2) ? nfserr_acces : nfserr_xdev; 1855 - goto out; 1855 + goto out_want_write; 1856 1856 } 1857 1857 err = fh_fill_pre_attrs(ffhp); 1858 1858 if (err != nfs_ok) ··· 1922 1922 } 1923 1923 out_unlock: 1924 1924 unlock_rename(tdentry, fdentry); 1925 + out_want_write: 1925 1926 fh_drop_write(ffhp); 1926 1927 1927 1928 /*
+1 -1
fs/proc/Makefile
··· 5 5 6 6 obj-y += proc.o 7 7 8 - CFLAGS_task_mmu.o += $(call cc-option,-Wno-override-init,) 8 + CFLAGS_task_mmu.o += -Wno-override-init 9 9 proc-y := nommu.o task_nommu.o 10 10 proc-$(CONFIG_MMU) := task_mmu.o 11 11
+7
fs/smb/client/dir.c
··· 612 612 goto mknod_out; 613 613 } 614 614 615 + trace_smb3_mknod_enter(xid, tcon->ses->Suid, tcon->tid, full_path); 616 + 615 617 rc = tcon->ses->server->ops->make_node(xid, inode, direntry, tcon, 616 618 full_path, mode, 617 619 device_number); 618 620 619 621 mknod_out: 622 + if (rc) 623 + trace_smb3_mknod_err(xid, tcon->ses->Suid, tcon->tid, rc); 624 + else 625 + trace_smb3_mknod_done(xid, tcon->ses->Suid, tcon->tid); 626 + 620 627 free_dentry_path(page); 621 628 free_xid(xid); 622 629 cifs_put_tlink(tlink);
+15 -1
fs/smb/client/fscache.c
··· 12 12 #include "cifs_fs_sb.h" 13 13 #include "cifsproto.h" 14 14 15 + /* 16 + * Key for fscache inode. [!] Contents must match comparisons in cifs_find_inode(). 17 + */ 18 + struct cifs_fscache_inode_key { 19 + 20 + __le64 uniqueid; /* server inode number */ 21 + __le64 createtime; /* creation time on server */ 22 + u8 type; /* S_IFMT file type */ 23 + } __packed; 24 + 15 25 static void cifs_fscache_fill_volume_coherency( 16 26 struct cifs_tcon *tcon, 17 27 struct cifs_fscache_volume_coherency_data *cd) ··· 107 97 void cifs_fscache_get_inode_cookie(struct inode *inode) 108 98 { 109 99 struct cifs_fscache_inode_coherency_data cd; 100 + struct cifs_fscache_inode_key key; 110 101 struct cifsInodeInfo *cifsi = CIFS_I(inode); 111 102 struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); 112 103 struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb); 113 104 105 + key.uniqueid = cpu_to_le64(cifsi->uniqueid); 106 + key.createtime = cpu_to_le64(cifsi->createtime); 107 + key.type = (inode->i_mode & S_IFMT) >> 12; 114 108 cifs_fscache_fill_coherency(&cifsi->netfs.inode, &cd); 115 109 116 110 cifsi->netfs.cache = 117 111 fscache_acquire_cookie(tcon->fscache, 0, 118 - &cifsi->uniqueid, sizeof(cifsi->uniqueid), 112 + &key, sizeof(key), 119 113 &cd, sizeof(cd), 120 114 i_size_read(&cifsi->netfs.inode)); 121 115 if (cifsi->netfs.cache)
+2
fs/smb/client/inode.c
··· 1351 1351 { 1352 1352 struct cifs_fattr *fattr = opaque; 1353 1353 1354 + /* [!] The compared values must be the same in struct cifs_fscache_inode_key. */ 1355 + 1354 1356 /* don't match inode with different uniqueid */ 1355 1357 if (CIFS_I(inode)->uniqueid != fattr->cf_uniqueid) 1356 1358 return 0;
+3 -1
fs/smb/client/trace.h
··· 375 375 DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(delete_enter); 376 376 DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mkdir_enter); 377 377 DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(tdis_enter); 378 + DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mknod_enter); 378 379 379 380 DECLARE_EVENT_CLASS(smb3_inf_compound_done_class, 380 381 TP_PROTO(unsigned int xid, ··· 416 415 DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(delete_done); 417 416 DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mkdir_done); 418 417 DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(tdis_done); 419 - 418 + DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mknod_done); 420 419 421 420 DECLARE_EVENT_CLASS(smb3_inf_compound_err_class, 422 421 TP_PROTO(unsigned int xid, ··· 462 461 DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mkdir_err); 463 462 DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(delete_err); 464 463 DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(tdis_err); 464 + DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mknod_err); 465 465 466 466 /* 467 467 * For logging SMB3 Status code and Command for responses which return errors
+31 -9
fs/xfs/libxfs/xfs_sb.c
··· 530 530 } 531 531 532 532 if (!xfs_validate_stripe_geometry(mp, XFS_FSB_TO_B(mp, sbp->sb_unit), 533 - XFS_FSB_TO_B(mp, sbp->sb_width), 0, false)) 533 + XFS_FSB_TO_B(mp, sbp->sb_width), 0, 534 + xfs_buf_daddr(bp) == XFS_SB_DADDR, false)) 534 535 return -EFSCORRUPTED; 535 536 536 537 /* ··· 1324 1323 } 1325 1324 1326 1325 /* 1327 - * sunit, swidth, sectorsize(optional with 0) should be all in bytes, 1328 - * so users won't be confused by values in error messages. 1326 + * sunit, swidth, sectorsize(optional with 0) should be all in bytes, so users 1327 + * won't be confused by values in error messages. This function returns false 1328 + * if the stripe geometry is invalid and the caller is unable to repair the 1329 + * stripe configuration later in the mount process. 1329 1330 */ 1330 1331 bool 1331 1332 xfs_validate_stripe_geometry( ··· 1335 1332 __s64 sunit, 1336 1333 __s64 swidth, 1337 1334 int sectorsize, 1335 + bool may_repair, 1338 1336 bool silent) 1339 1337 { 1340 1338 if (swidth > INT_MAX) { 1341 1339 if (!silent) 1342 1340 xfs_notice(mp, 1343 1341 "stripe width (%lld) is too large", swidth); 1344 - return false; 1342 + goto check_override; 1345 1343 } 1346 1344 1347 1345 if (sunit > swidth) { 1348 1346 if (!silent) 1349 1347 xfs_notice(mp, 1350 1348 "stripe unit (%lld) is larger than the stripe width (%lld)", sunit, swidth); 1351 - return false; 1349 + goto check_override; 1352 1350 } 1353 1351 1354 1352 if (sectorsize && (int)sunit % sectorsize) { ··· 1357 1353 xfs_notice(mp, 1358 1354 "stripe unit (%lld) must be a multiple of the sector size (%d)", 1359 1355 sunit, sectorsize); 1360 - return false; 1356 + goto check_override; 1361 1357 } 1362 1358 1363 1359 if (sunit && !swidth) { 1364 1360 if (!silent) 1365 1361 xfs_notice(mp, 1366 1362 "invalid stripe unit (%lld) and stripe width of 0", sunit); 1367 - return false; 1363 + goto check_override; 1368 1364 } 1369 1365 1370 1366 if (!sunit && swidth) { 1371 1367 if (!silent) 1372 1368 xfs_notice(mp, 1373 
1369 "invalid stripe width (%lld) and stripe unit of 0", swidth); 1374 - return false; 1370 + goto check_override; 1375 1371 } 1376 1372 1377 1373 if (sunit && (int)swidth % (int)sunit) { ··· 1379 1375 xfs_notice(mp, 1380 1376 "stripe width (%lld) must be a multiple of the stripe unit (%lld)", 1381 1377 swidth, sunit); 1382 - return false; 1378 + goto check_override; 1383 1379 } 1380 + return true; 1381 + 1382 + check_override: 1383 + if (!may_repair) 1384 + return false; 1385 + /* 1386 + * During mount, mp->m_dalign will not be set unless the sunit mount 1387 + * option was set. If it was set, ignore the bad stripe alignment values 1388 + * and allow the validation and overwrite later in the mount process to 1389 + * attempt to overwrite the bad stripe alignment values with the values 1390 + * supplied by mount options. 1391 + */ 1392 + if (!mp->m_dalign) 1393 + return false; 1394 + if (!silent) 1395 + xfs_notice(mp, 1396 + "Will try to correct with specified mount options sunit (%d) and swidth (%d)", 1397 + BBTOB(mp->m_dalign), BBTOB(mp->m_swidth)); 1384 1398 return true; 1385 1399 } 1386 1400
+3 -2
fs/xfs/libxfs/xfs_sb.h
··· 35 35 struct xfs_trans *tp, xfs_agnumber_t agno, 36 36 struct xfs_buf **bpp); 37 37 38 - extern bool xfs_validate_stripe_geometry(struct xfs_mount *mp, 39 - __s64 sunit, __s64 swidth, int sectorsize, bool silent); 38 + bool xfs_validate_stripe_geometry(struct xfs_mount *mp, 39 + __s64 sunit, __s64 swidth, int sectorsize, bool may_repair, 40 + bool silent); 40 41 41 42 uint8_t xfs_compute_rextslog(xfs_rtbxlen_t rtextents); 42 43
+1 -3
fs/xfs/scrub/common.c
··· 1044 1044 struct xfs_scrub *sc, 1045 1045 struct xfs_inode *ip) 1046 1046 { 1047 - if (current->journal_info != NULL) { 1048 - ASSERT(current->journal_info == sc->tp); 1049 - 1047 + if (sc->tp) { 1050 1048 /* 1051 1049 * If we are in a transaction, we /cannot/ drop the inode 1052 1050 * ourselves, because the VFS will trigger writeback, which
-7
fs/xfs/xfs_aops.c
··· 503 503 { 504 504 struct xfs_writepage_ctx wpc = { }; 505 505 506 - /* 507 - * Writing back data in a transaction context can result in recursive 508 - * transactions. This is bad, so issue a warning and get out of here. 509 - */ 510 - if (WARN_ON_ONCE(current->journal_info)) 511 - return 0; 512 - 513 506 xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED); 514 507 return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops); 515 508 }
+5 -3
fs/xfs/xfs_icache.c
··· 2039 2039 * - Memory shrinkers queued the inactivation worker and it hasn't finished. 2040 2040 * - The queue depth exceeds the maximum allowable percpu backlog. 2041 2041 * 2042 - * Note: If the current thread is running a transaction, we don't ever want to 2043 - * wait for other transactions because that could introduce a deadlock. 2042 + * Note: If we are in a NOFS context here (e.g. current thread is running a 2043 + * transaction) then we don't want to block here as inodegc progress may require 2044 + * filesystem resources we hold to make progress and that could result in a 2045 + * deadlock. Hence we skip out of here if we are in a scoped NOFS context. 2044 2046 */ 2045 2047 static inline bool 2046 2048 xfs_inodegc_want_flush_work( ··· 2050 2048 unsigned int items, 2051 2049 unsigned int shrinker_hits) 2052 2050 { 2053 - if (current->journal_info) 2051 + if (current->flags & PF_MEMALLOC_NOFS) 2054 2052 return false; 2055 2053 2056 2054 if (shrinker_hits > 0)
+1 -8
fs/xfs/xfs_trans.h
··· 268 268 xfs_trans_set_context( 269 269 struct xfs_trans *tp) 270 270 { 271 - ASSERT(current->journal_info == NULL); 272 271 tp->t_pflags = memalloc_nofs_save(); 273 - current->journal_info = tp; 274 272 } 275 273 276 274 static inline void 277 275 xfs_trans_clear_context( 278 276 struct xfs_trans *tp) 279 277 { 280 - if (current->journal_info == tp) { 281 - memalloc_nofs_restore(tp->t_pflags); 282 - current->journal_info = NULL; 283 - } 278 + memalloc_nofs_restore(tp->t_pflags); 284 279 } 285 280 286 281 static inline void ··· 283 288 struct xfs_trans *old_tp, 284 289 struct xfs_trans *new_tp) 285 290 { 286 - ASSERT(current->journal_info == old_tp); 287 291 new_tp->t_pflags = old_tp->t_pflags; 288 292 old_tp->t_pflags = 0; 289 - current->journal_info = new_tp; 290 293 } 291 294 292 295 #endif /* __XFS_TRANS_H__ */
-11
include/asm-generic/export.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - #ifndef __ASM_GENERIC_EXPORT_H 3 - #define __ASM_GENERIC_EXPORT_H 4 - 5 - /* 6 - * <asm/export.h> and <asm-generic/export.h> are deprecated. 7 - * Please include <linux/export.h> directly. 8 - */ 9 - #include <linux/export.h> 10 - 11 - #endif
+2 -2
include/linux/framer/framer.h
··· 181 181 return -ENOSYS; 182 182 } 183 183 184 - struct framer *framer_get(struct device *dev, const char *con_id) 184 + static inline struct framer *framer_get(struct device *dev, const char *con_id) 185 185 { 186 186 return ERR_PTR(-ENOSYS); 187 187 } 188 188 189 - void framer_put(struct device *dev, struct framer *framer) 189 + static inline void framer_put(struct device *dev, struct framer *framer) 190 190 { 191 191 } 192 192
+15 -2
include/linux/gpio/driver.h
··· 646 646 struct gpio_device *gpio_device_find(const void *data, 647 647 int (*match)(struct gpio_chip *gc, 648 648 const void *data)); 649 - struct gpio_device *gpio_device_find_by_label(const char *label); 650 - struct gpio_device *gpio_device_find_by_fwnode(const struct fwnode_handle *fwnode); 651 649 652 650 struct gpio_device *gpio_device_get(struct gpio_device *gdev); 653 651 void gpio_device_put(struct gpio_device *gdev); ··· 812 814 int gpio_device_get_base(struct gpio_device *gdev); 813 815 const char *gpio_device_get_label(struct gpio_device *gdev); 814 816 817 + struct gpio_device *gpio_device_find_by_label(const char *label); 818 + struct gpio_device *gpio_device_find_by_fwnode(const struct fwnode_handle *fwnode); 819 + 815 820 #else /* CONFIG_GPIOLIB */ 816 821 817 822 #include <asm/bug.h> ··· 839 838 } 840 839 841 840 static inline const char *gpio_device_get_label(struct gpio_device *gdev) 841 + { 842 + WARN_ON(1); 843 + return NULL; 844 + } 845 + 846 + static inline struct gpio_device *gpio_device_find_by_label(const char *label) 847 + { 848 + WARN_ON(1); 849 + return NULL; 850 + } 851 + 852 + static inline struct gpio_device *gpio_device_find_by_fwnode(const struct fwnode_handle *fwnode) 842 853 { 843 854 WARN_ON(1); 844 855 return NULL;
+3
include/linux/interrupt.h
··· 67 67 * later. 68 68 * IRQF_NO_DEBUG - Exclude from runnaway detection for IPI and similar handlers, 69 69 * depends on IRQF_PERCPU. 70 + * IRQF_COND_ONESHOT - Agree to do IRQF_ONESHOT if already set for a shared 71 + * interrupt. 70 72 */ 71 73 #define IRQF_SHARED 0x00000080 72 74 #define IRQF_PROBE_SHARED 0x00000100 ··· 84 82 #define IRQF_COND_SUSPEND 0x00040000 85 83 #define IRQF_NO_AUTOEN 0x00080000 86 84 #define IRQF_NO_DEBUG 0x00100000 85 + #define IRQF_COND_ONESHOT 0x00200000 87 86 88 87 #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD) 89 88
+1
include/linux/libata.h
··· 107 107 108 108 ATA_DFLAG_NCQ_PRIO_ENABLED = (1 << 20), /* Priority cmds sent to dev */ 109 109 ATA_DFLAG_CDL_ENABLED = (1 << 21), /* cmd duration limits is enabled */ 110 + ATA_DFLAG_RESUMING = (1 << 22), /* Device is resuming */ 110 111 ATA_DFLAG_DETACH = (1 << 24), 111 112 ATA_DFLAG_DETACHED = (1 << 25), 112 113 ATA_DFLAG_DA = (1 << 26), /* device supports Device Attention */
+8
include/linux/mman.h
··· 162 162 163 163 unsigned long vm_commit_limit(void); 164 164 165 + #ifndef arch_memory_deny_write_exec_supported 166 + static inline bool arch_memory_deny_write_exec_supported(void) 167 + { 168 + return true; 169 + } 170 + #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported 171 + #endif 172 + 165 173 /* 166 174 * Denies creating a writable executable mapping or gaining executable permissions. 167 175 *
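The mman.h hunk above relies on the usual override idiom: an architecture that needs a different answer defines its own inline first and then `#define`s the name to itself, which suppresses the generic fallback. A minimal standalone illustration (the `mdwe_supported()` wrapper is a hypothetical caller, not kernel code):

```c
#include <stdbool.h>

/* Generic fallback; an arch header defining its own version also
 * #defines the name to itself, so this block is skipped there. */
#ifndef arch_memory_deny_write_exec_supported
static inline bool arch_memory_deny_write_exec_supported(void)
{
    return true;
}
#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
#endif

/* Hypothetical caller of the weak default above. */
bool mdwe_supported(void)
{
    return arch_memory_deny_write_exec_supported();
}
```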
+4
include/linux/oid_registry.h
··· 17 17 * build_OID_registry.pl to generate the data for look_up_OID(). 18 18 */ 19 19 enum OID { 20 + OID_id_dsa_with_sha1, /* 1.2.840.10030.4.3 */ 20 21 OID_id_dsa, /* 1.2.840.10040.4.1 */ 21 22 OID_id_ecPublicKey, /* 1.2.840.10045.2.1 */ 22 23 OID_id_prime192v1, /* 1.2.840.10045.3.1.1 */ 23 24 OID_id_prime256v1, /* 1.2.840.10045.3.1.7 */ 25 + OID_id_ecdsa_with_sha1, /* 1.2.840.10045.4.1 */ 24 26 OID_id_ecdsa_with_sha224, /* 1.2.840.10045.4.3.1 */ 25 27 OID_id_ecdsa_with_sha256, /* 1.2.840.10045.4.3.2 */ 26 28 OID_id_ecdsa_with_sha384, /* 1.2.840.10045.4.3.3 */ ··· 30 28 31 29 /* PKCS#1 {iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1)} */ 32 30 OID_rsaEncryption, /* 1.2.840.113549.1.1.1 */ 31 + OID_sha1WithRSAEncryption, /* 1.2.840.113549.1.1.5 */ 33 32 OID_sha256WithRSAEncryption, /* 1.2.840.113549.1.1.11 */ 34 33 OID_sha384WithRSAEncryption, /* 1.2.840.113549.1.1.12 */ 35 34 OID_sha512WithRSAEncryption, /* 1.2.840.113549.1.1.13 */ ··· 67 64 OID_PKU2U, /* 1.3.5.1.5.2.7 */ 68 65 OID_Scram, /* 1.3.6.1.5.5.14 */ 69 66 OID_certAuthInfoAccess, /* 1.3.6.1.5.5.7.1.1 */ 67 + OID_sha1, /* 1.3.14.3.2.26 */ 70 68 OID_id_ansip384r1, /* 1.3.132.0.34 */ 71 69 OID_sha256, /* 2.16.840.1.101.3.4.2.1 */ 72 70 OID_sha384, /* 2.16.840.1.101.3.4.2.2 */
+2 -2
include/linux/pagevec.h
··· 11 11 12 12 #include <linux/types.h> 13 13 14 - /* 15 pointers + header align the folio_batch structure to a power of two */ 15 - #define PAGEVEC_SIZE 15 14 + /* 31 pointers + header align the folio_batch structure to a power of two */ 15 + #define PAGEVEC_SIZE 31 16 16 17 17 struct folio; 18 18
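The pagevec.h hunk above doubles the batch from 15 to 31 pointers while keeping the stated invariant that the array plus header land on a power of two: the byte-sized header fields pad out to one pointer word, so the total is (31 + 1) * sizeof(void *), i.e. 256 bytes on 64-bit. A rough sizing sketch (this struct only mirrors `folio_batch`'s layout for the arithmetic; it is not the kernel definition):

```c
#include <stddef.h>

#define PAGEVEC_SIZE 31

/* Stand-in mirroring folio_batch's layout: a few byte-sized header
 * fields that pad up to one pointer word, then the pointer array. */
struct folio_batch_sketch {
    unsigned char nr;
    unsigned char i;
    unsigned char percpu_pvec_drained;
    void *folios[PAGEVEC_SIZE];
};

size_t folio_batch_sketch_size(void)
{
    return sizeof(struct folio_batch_sketch);
}
```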
+1 -6
include/linux/skbuff.h
··· 753 753 * @list: queue head 754 754 * @ll_node: anchor in an llist (eg socket defer_list) 755 755 * @sk: Socket we are owned by 756 - * @ip_defrag_offset: (aka @sk) alternate use of @sk, used in 757 - * fragmentation management 758 756 * @dev: Device we arrived on/are leaving by 759 757 * @dev_scratch: (aka @dev) alternate use of @dev when @dev would be %NULL 760 758 * @cb: Control buffer. Free for use by every layer. Put private vars here ··· 873 875 struct llist_node ll_node; 874 876 }; 875 877 876 - union { 877 - struct sock *sk; 878 - int ip_defrag_offset; 879 - }; 878 + struct sock *sk; 880 879 881 880 union { 882 881 ktime_t tstamp;
+2
include/net/cfg80211.h
··· 4991 4991 * set this flag to update channels on beacon hints. 4992 4992 * @WIPHY_FLAG_SUPPORTS_NSTR_NONPRIMARY: support connection to non-primary link 4993 4993 * of an NSTR mobile AP MLD. 4994 + * @WIPHY_FLAG_DISABLE_WEXT: disable wireless extensions for this device 4994 4995 */ 4995 4996 enum wiphy_flags { 4996 4997 WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK = BIT(0), ··· 5003 5002 WIPHY_FLAG_4ADDR_STATION = BIT(6), 5004 5003 WIPHY_FLAG_CONTROL_PORT_PROTOCOL = BIT(7), 5005 5004 WIPHY_FLAG_IBSS_RSN = BIT(8), 5005 + WIPHY_FLAG_DISABLE_WEXT = BIT(9), 5006 5006 WIPHY_FLAG_MESH_AUTH = BIT(10), 5007 5007 WIPHY_FLAG_SUPPORTS_EXT_KCK_32 = BIT(11), 5008 5008 WIPHY_FLAG_SUPPORTS_NSTR_NONPRIMARY = BIT(12),
+1
include/net/inet_connection_sock.h
··· 175 175 void (*delack_handler)(struct timer_list *), 176 176 void (*keepalive_handler)(struct timer_list *)); 177 177 void inet_csk_clear_xmit_timers(struct sock *sk); 178 + void inet_csk_clear_xmit_timers_sync(struct sock *sk); 178 179 179 180 static inline void inet_csk_schedule_ack(struct sock *sk) 180 181 {
+7
include/net/sock.h
··· 1759 1759 #endif 1760 1760 } 1761 1761 1762 + static inline void sock_not_owned_by_me(const struct sock *sk) 1763 + { 1764 + #ifdef CONFIG_LOCKDEP 1765 + WARN_ON_ONCE(lockdep_sock_is_held(sk) && debug_locks); 1766 + #endif 1767 + } 1768 + 1762 1769 static inline bool sock_owned_by_user(const struct sock *sk) 1763 1770 { 1764 1771 sock_owned_by_me(sk);
+2
include/net/xdp_sock.h
··· 188 188 { 189 189 if (!compl) 190 190 return; 191 + if (!compl->tx_timestamp) 192 + return; 191 193 192 194 *compl->tx_timestamp = ops->tmo_fill_timestamp(priv); 193 195 }
+1
include/scsi/scsi_driver.h
··· 12 12 struct scsi_driver { 13 13 struct device_driver gendrv; 14 14 15 + int (*resume)(struct device *); 15 16 void (*rescan)(struct device *); 16 17 blk_status_t (*init_command)(struct scsi_cmnd *); 17 18 void (*uninit_command)(struct scsi_cmnd *);
+1
include/scsi/scsi_host.h
··· 767 767 #define scsi_template_proc_dir(sht) NULL 768 768 #endif 769 769 extern void scsi_scan_host(struct Scsi_Host *); 770 + extern int scsi_resume_device(struct scsi_device *sdev); 770 771 extern int scsi_rescan_device(struct scsi_device *sdev); 771 772 extern void scsi_remove_host(struct Scsi_Host *); 772 773 extern struct Scsi_Host *scsi_host_get(struct Scsi_Host *);
+10
include/sound/intel-nhlt.h
··· 143 143 u32 bus_id, u8 link_type, u8 vbps, u8 bps, 144 144 u8 num_ch, u32 rate, u8 dir, u8 dev_type); 145 145 146 + int intel_nhlt_ssp_device_type(struct device *dev, struct nhlt_acpi_table *nhlt, 147 + u8 virtual_bus_id); 148 + 146 149 #else 147 150 148 151 static inline struct nhlt_acpi_table *intel_nhlt_init(struct device *dev) ··· 185 182 u8 num_ch, u32 rate, u8 dir, u8 dev_type) 186 183 { 187 184 return NULL; 185 + } 186 + 187 + static inline int intel_nhlt_ssp_device_type(struct device *dev, 188 + struct nhlt_acpi_table *nhlt, 189 + u8 virtual_bus_id) 190 + { 191 + return -EINVAL; 188 192 } 189 193 190 194 #endif
+14 -3
include/uapi/linux/kfd_ioctl.h
··· 913 913 KFD_EC_MASK(EC_DEVICE_NEW)) 914 914 #define KFD_EC_MASK_PROCESS (KFD_EC_MASK(EC_PROCESS_RUNTIME) | \ 915 915 KFD_EC_MASK(EC_PROCESS_DEVICE_REMOVE)) 916 + #define KFD_EC_MASK_PACKET (KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_DIM_INVALID) | \ 917 + KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_GROUP_SEGMENT_SIZE_INVALID) | \ 918 + KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_CODE_INVALID) | \ 919 + KFD_EC_MASK(EC_QUEUE_PACKET_RESERVED) | \ 920 + KFD_EC_MASK(EC_QUEUE_PACKET_UNSUPPORTED) | \ 921 + KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_WORK_GROUP_SIZE_INVALID) | \ 922 + KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_REGISTER_INVALID) | \ 923 + KFD_EC_MASK(EC_QUEUE_PACKET_VENDOR_UNSUPPORTED)) 916 924 917 925 /* Checks for exception code types for KFD search */ 926 + #define KFD_DBG_EC_IS_VALID(ecode) (ecode > EC_NONE && ecode < EC_MAX) 918 927 #define KFD_DBG_EC_TYPE_IS_QUEUE(ecode) \ 919 - (!!(KFD_EC_MASK(ecode) & KFD_EC_MASK_QUEUE)) 928 + (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_QUEUE)) 920 929 #define KFD_DBG_EC_TYPE_IS_DEVICE(ecode) \ 921 - (!!(KFD_EC_MASK(ecode) & KFD_EC_MASK_DEVICE)) 930 + (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_DEVICE)) 922 931 #define KFD_DBG_EC_TYPE_IS_PROCESS(ecode) \ 923 - (!!(KFD_EC_MASK(ecode) & KFD_EC_MASK_PROCESS)) 932 + (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_PROCESS)) 933 + #define KFD_DBG_EC_TYPE_IS_PACKET(ecode) \ 934 + (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_PACKET)) 924 935 925 936 926 937 /* Runtime enable states */
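The kfd_ioctl.h hunk above wraps each `KFD_DBG_EC_TYPE_IS_*` test in `KFD_DBG_EC_IS_VALID` so an out-of-range exception code is rejected before it is used as a shift count inside `KFD_EC_MASK()`. A minimal sketch of that guard with illustrative values (`EC_MAX`, the mask definition, and the helper are assumptions for the demo, not copied from the header):

```c
#include <stdbool.h>

/* Illustrative stand-ins; the real codes live in kfd_ioctl.h. */
enum { EC_NONE = 0, EC_QUEUE_WAVE_ABORT = 1, EC_MAX = 49 };

#define KFD_EC_MASK(ecode)  (1ULL << ((ecode) - 1))
#define KFD_EC_MASK_QUEUE   KFD_EC_MASK(EC_QUEUE_WAVE_ABORT)

/* Range-check first: && short-circuits, so a bogus ecode is never
 * used as an (undefined) oversized shift count. */
#define KFD_DBG_EC_IS_VALID(ecode) ((ecode) > EC_NONE && (ecode) < EC_MAX)

bool ec_type_is_queue(int ecode)
{
    return KFD_DBG_EC_IS_VALID(ecode) &&
           !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_QUEUE);
}
```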
+1 -1
include/uapi/scsi/scsi_bsg_mpi3mr.h
··· 382 382 __u8 mpi_reply_type; 383 383 __u8 rsvd1; 384 384 __u16 rsvd2; 385 - __u8 reply_buf[1]; 385 + __u8 reply_buf[]; 386 386 }; 387 387 388 388 /**
+1
include/ufs/ufshcd.h
··· 328 328 * @op_runtime_config: called to config Operation and runtime regs Pointers 329 329 * @get_outstanding_cqs: called to get outstanding completion queues 330 330 * @config_esi: called to config Event Specific Interrupt 331 + * @config_scsi_dev: called to configure SCSI device parameters 331 332 */ 332 333 struct ufs_hba_variant_ops { 333 334 const char *name;
+1 -1
init/initramfs.c
··· 682 682 683 683 printk(KERN_INFO "rootfs image is not initramfs (%s); looks like an initrd\n", 684 684 err); 685 - file = filp_open("/initrd.image", O_WRONLY | O_CREAT, 0700); 685 + file = filp_open("/initrd.image", O_WRONLY|O_CREAT|O_LARGEFILE, 0700); 686 686 if (IS_ERR(file)) 687 687 return; 688 688
+1 -1
kernel/bpf/Makefile
··· 4 4 # ___bpf_prog_run() needs GCSE disabled on x86; see 3193c0836f203 for details 5 5 cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse 6 6 endif 7 - CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy) 7 + CFLAGS_core.o += -Wno-override-init $(cflags-nogcse-yy) 8 8 9 9 obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o 10 10 obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
+18 -7
kernel/bpf/arena.c
··· 38 38 39 39 /* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */ 40 40 #define GUARD_SZ (1ull << sizeof(((struct bpf_insn *)0)->off) * 8) 41 - #define KERN_VM_SZ ((1ull << 32) + GUARD_SZ) 41 + #define KERN_VM_SZ (SZ_4G + GUARD_SZ) 42 42 43 43 struct bpf_arena { 44 44 struct bpf_map map; ··· 110 110 return ERR_PTR(-EINVAL); 111 111 112 112 vm_range = (u64)attr->max_entries * PAGE_SIZE; 113 - if (vm_range > (1ull << 32)) 113 + if (vm_range > SZ_4G) 114 114 return ERR_PTR(-E2BIG); 115 115 116 116 if ((attr->map_extra >> 32) != ((attr->map_extra + vm_range - 1) >> 32)) ··· 301 301 302 302 if (pgoff) 303 303 return -EINVAL; 304 - if (len > (1ull << 32)) 304 + if (len > SZ_4G) 305 305 return -E2BIG; 306 306 307 307 /* if user_vm_start was specified at arena creation time */ ··· 322 322 if (WARN_ON_ONCE(arena->user_vm_start)) 323 323 /* checks at map creation time should prevent this */ 324 324 return -EFAULT; 325 - return round_up(ret, 1ull << 32); 325 + return round_up(ret, SZ_4G); 326 326 } 327 327 328 328 static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma) ··· 346 346 return -EBUSY; 347 347 348 348 /* Earlier checks should prevent this */ 349 - if (WARN_ON_ONCE(vma->vm_end - vma->vm_start > (1ull << 32) || vma->vm_pgoff)) 349 + if (WARN_ON_ONCE(vma->vm_end - vma->vm_start > SZ_4G || vma->vm_pgoff)) 350 350 return -EFAULT; 351 351 352 352 if (remember_vma(arena, vma)) ··· 420 420 if (uaddr & ~PAGE_MASK) 421 421 return 0; 422 422 pgoff = compute_pgoff(arena, uaddr); 423 - if (pgoff + page_cnt > page_cnt_max) 423 + if (pgoff > page_cnt_max - page_cnt) 424 424 /* requested address will be outside of user VMA */ 425 425 return 0; 426 426 } ··· 447 447 goto out; 448 448 449 449 uaddr32 = (u32)(arena->user_vm_start + pgoff * PAGE_SIZE); 450 - /* Earlier checks make sure that uaddr32 + page_cnt * PAGE_SIZE will not overflow 32-bit */ 450 + /* Earlier checks made sure that uaddr32 + page_cnt * PAGE_SIZE - 1 451 + * will not 
overflow 32-bit. Lower 32-bit need to represent 452 + * contiguous user address range. 453 + * Map these pages at kern_vm_start base. 454 + * kern_vm_start + uaddr32 + page_cnt * PAGE_SIZE - 1 can overflow 455 + * lower 32-bit and it's ok. 456 + */ 451 457 ret = vm_area_map_pages(arena->kern_vm, kern_vm_start + uaddr32, 452 458 kern_vm_start + uaddr32 + page_cnt * PAGE_SIZE, pages); 453 459 if (ret) { ··· 516 510 if (!page) 517 511 continue; 518 512 if (page_cnt == 1 && page_mapped(page)) /* mapped by some user process */ 513 + /* Optimization for the common case of page_cnt==1: 514 + * If page wasn't mapped into some user vma there 515 + * is no need to call zap_pages which is slow. When 516 + * page_cnt is big it's faster to do the batched zap. 517 + */ 519 518 zap_pages(arena, full_uaddr, 1); 520 519 vm_area_unmap_pages(arena->kern_vm, kaddr, kaddr + PAGE_SIZE); 521 520 __free_page(page);
+13
kernel/bpf/bloom_filter.c
··· 80 80 return -EOPNOTSUPP; 81 81 } 82 82 83 + /* Called from syscall */ 84 + static int bloom_map_alloc_check(union bpf_attr *attr) 85 + { 86 + if (attr->value_size > KMALLOC_MAX_SIZE) 87 + /* if value_size is bigger, the user space won't be able to 88 + * access the elements. 89 + */ 90 + return -E2BIG; 91 + 92 + return 0; 93 + } 94 + 83 95 static struct bpf_map *bloom_map_alloc(union bpf_attr *attr) 84 96 { 85 97 u32 bitset_bytes, bitset_mask, nr_hash_funcs, nr_bits; ··· 203 191 BTF_ID_LIST_SINGLE(bpf_bloom_map_btf_ids, struct, bpf_bloom_filter) 204 192 const struct bpf_map_ops bloom_filter_map_ops = { 205 193 .map_meta_equal = bpf_map_meta_equal, 194 + .map_alloc_check = bloom_map_alloc_check, 206 195 .map_alloc = bloom_map_alloc, 207 196 .map_free = bloom_map_free, 208 197 .map_get_next_key = bloom_map_get_next_key,
+1 -1
kernel/bpf/helpers.c
··· 2548 2548 __bpf_kfunc_end_defs(); 2549 2549 2550 2550 BTF_KFUNCS_START(generic_btf_ids) 2551 - #ifdef CONFIG_KEXEC_CORE 2551 + #ifdef CONFIG_CRASH_DUMP 2552 2552 BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE) 2553 2553 #endif 2554 2554 BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
+24 -3
kernel/bpf/verifier.c
··· 5682 5682 return reg->type == PTR_TO_FLOW_KEYS; 5683 5683 } 5684 5684 5685 + static bool is_arena_reg(struct bpf_verifier_env *env, int regno) 5686 + { 5687 + const struct bpf_reg_state *reg = reg_state(env, regno); 5688 + 5689 + return reg->type == PTR_TO_ARENA; 5690 + } 5691 + 5685 5692 static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = { 5686 5693 #ifdef CONFIG_NET 5687 5694 [PTR_TO_SOCKET] = &btf_sock_ids[BTF_SOCK_TYPE_SOCK], ··· 6701 6694 err = check_stack_slot_within_bounds(env, min_off, state, type); 6702 6695 if (!err && max_off > 0) 6703 6696 err = -EINVAL; /* out of stack access into non-negative offsets */ 6697 + if (!err && access_size < 0) 6698 + /* access_size should not be negative (or overflow an int); others checks 6699 + * along the way should have prevented such an access. 6700 + */ 6701 + err = -EFAULT; /* invalid negative access size; integer overflow? */ 6704 6702 6705 6703 if (err) { 6706 6704 if (tnum_is_const(reg->var_off)) { ··· 7031 7019 if (is_ctx_reg(env, insn->dst_reg) || 7032 7020 is_pkt_reg(env, insn->dst_reg) || 7033 7021 is_flow_key_reg(env, insn->dst_reg) || 7034 - is_sk_reg(env, insn->dst_reg)) { 7022 + is_sk_reg(env, insn->dst_reg) || 7023 + is_arena_reg(env, insn->dst_reg)) { 7035 7024 verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n", 7036 7025 insn->dst_reg, 7037 7026 reg_type_str(env, reg_state(env, insn->dst_reg)->type)); ··· 14027 14014 verbose(env, "addr_space_cast insn can only convert between address space 1 and 0\n"); 14028 14015 return -EINVAL; 14029 14016 } 14017 + if (!env->prog->aux->arena) { 14018 + verbose(env, "addr_space_cast insn can only be used in a program that has an associated arena\n"); 14019 + return -EINVAL; 14020 + } 14030 14021 } else { 14031 14022 if ((insn->off != 0 && insn->off != 8 && insn->off != 16 && 14032 14023 insn->off != 32) || insn->imm) { ··· 14063 14046 if (insn->imm) { 14064 14047 /* off == BPF_ADDR_SPACE_CAST */ 14065 14048 mark_reg_unknown(env, regs, insn->dst_reg); 14066 
- if (insn->imm == 1) /* cast from as(1) to as(0) */ 14049 + if (insn->imm == 1) { /* cast from as(1) to as(0) */ 14067 14050 dst_reg->type = PTR_TO_ARENA; 14051 + /* PTR_TO_ARENA is 32-bit */ 14052 + dst_reg->subreg_def = env->insn_idx + 1; 14053 + } 14068 14054 } else if (insn->off == 0) { 14069 14055 /* case: R1 = R2 14070 14056 * copy register state to dest reg ··· 19621 19601 (((struct bpf_map *)env->prog->aux->arena)->map_flags & BPF_F_NO_USER_CONV)) { 19622 19602 /* convert to 32-bit mov that clears upper 32-bit */ 19623 19603 insn->code = BPF_ALU | BPF_MOV | BPF_X; 19624 - /* clear off, so it's a normal 'wX = wY' from JIT pov */ 19604 + /* clear off and imm, so it's a normal 'wX = wY' from JIT pov */ 19625 19605 insn->off = 0; 19606 + insn->imm = 0; 19626 19607 } /* cast from as(0) to as(1) should be handled by JIT */ 19627 19608 goto next_insn; 19628 19609 }
+7
kernel/crash_reserve.c
··· 366 366 367 367 crashk_low_res.start = low_base; 368 368 crashk_low_res.end = low_base + low_size - 1; 369 + #ifdef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY 369 370 insert_resource(&iomem_resource, &crashk_low_res); 371 + #endif 370 372 #endif 371 373 return 0; 372 374 } ··· 450 448 451 449 crashk_res.start = crash_base; 452 450 crashk_res.end = crash_base + crash_size - 1; 451 + #ifdef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY 452 + insert_resource(&iomem_resource, &crashk_res); 453 + #endif 453 454 } 454 455 456 + #ifndef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY 455 457 static __init int insert_crashkernel_resources(void) 456 458 { 457 459 if (crashk_res.start < crashk_res.end) ··· 467 461 return 0; 468 462 } 469 463 early_initcall(insert_crashkernel_resources); 464 + #endif 470 465 #endif
+7 -2
kernel/irq/manage.c
··· 1643 1643 } 1644 1644 1645 1645 if (!((old->flags & new->flags) & IRQF_SHARED) || 1646 - (oldtype != (new->flags & IRQF_TRIGGER_MASK)) || 1647 - ((old->flags ^ new->flags) & IRQF_ONESHOT)) 1646 + (oldtype != (new->flags & IRQF_TRIGGER_MASK))) 1647 + goto mismatch; 1648 + 1649 + if ((old->flags & IRQF_ONESHOT) && 1650 + (new->flags & IRQF_COND_ONESHOT)) 1651 + new->flags |= IRQF_ONESHOT; 1652 + else if ((old->flags ^ new->flags) & IRQF_ONESHOT) 1648 1653 goto mismatch; 1649 1654 1650 1655 /* All handlers must agree on per-cpuness */
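The hunk above relaxes shared-IRQ matching: a requester that passes IRQF_COND_ONESHOT inherits IRQF_ONESHOT from the already-installed handler instead of being rejected as a mismatch. A minimal userspace sketch of that decision (flag values here are illustrative, not the kernel's; the real ones live in `linux/interrupt.h`):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag values, not the kernel's. */
#define IRQF_SHARED       0x0080
#define IRQF_ONESHOT      0x2000
#define IRQF_COND_ONESHOT 0x4000

/* Decide whether a new shared handler may join an existing line,
 * possibly promoting IRQF_COND_ONESHOT to IRQF_ONESHOT. */
static bool irq_flags_compatible(unsigned int old_flags, unsigned int *new_flags)
{
	if (!((old_flags & *new_flags) & IRQF_SHARED))
		return false;			/* both sides must agree to share */

	if ((old_flags & IRQF_ONESHOT) && (*new_flags & IRQF_COND_ONESHOT))
		*new_flags |= IRQF_ONESHOT;	/* follow the existing handler */
	else if ((old_flags ^ *new_flags) & IRQF_ONESHOT)
		return false;			/* hard ONESHOT disagreement */

	return true;
}
```

The promotion happens before the ONESHOT-mismatch test, so only a hard disagreement on oneshot semantics still fails.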
+5
kernel/module/Kconfig
··· 236 236 possible to load a signed module containing the algorithm to check 237 237 the signature on that module. 238 238 239 + config MODULE_SIG_SHA1 240 + bool "Sign modules with SHA-1" 241 + select CRYPTO_SHA1 242 + 239 243 config MODULE_SIG_SHA256 240 244 bool "Sign modules with SHA-256" 241 245 select CRYPTO_SHA256 ··· 269 265 config MODULE_SIG_HASH 270 266 string 271 267 depends on MODULE_SIG || IMA_APPRAISE_MODSIG 268 + default "sha1" if MODULE_SIG_SHA1 272 269 default "sha256" if MODULE_SIG_SHA256 273 270 default "sha384" if MODULE_SIG_SHA384 274 271 default "sha512" if MODULE_SIG_SHA512
+6
kernel/printk/printk.c
··· 2009 2009 */ 2010 2010 mutex_acquire(&console_lock_dep_map, 0, 1, _THIS_IP_); 2011 2011 2012 + /* 2013 + * Update @console_may_schedule for trylock because the previous 2014 + * owner may have been schedulable. 2015 + */ 2016 + console_may_schedule = 0; 2017 + 2012 2018 return 1; 2013 2019 } 2014 2020
+5 -2
kernel/sys.c
··· 2408 2408 if (bits & PR_MDWE_NO_INHERIT && !(bits & PR_MDWE_REFUSE_EXEC_GAIN)) 2409 2409 return -EINVAL; 2410 2410 2411 - /* PARISC cannot allow mdwe as it needs writable stacks */ 2412 - if (IS_ENABLED(CONFIG_PARISC)) 2411 + /* 2412 + * EOPNOTSUPP might be more appropriate here in principle, but 2413 + * existing userspace depends on EINVAL specifically. 2414 + */ 2415 + if (!arch_memory_deny_write_exec_supported()) 2413 2416 return -EINVAL; 2414 2417 2415 2418 current_bits = get_current_mdwe();
+9 -7
kernel/time/posix-clock.c
··· 129 129 goto out; 130 130 } 131 131 pccontext->clk = clk; 132 - fp->private_data = pccontext; 133 - if (clk->ops.open) 132 + if (clk->ops.open) { 134 133 err = clk->ops.open(pccontext, fp->f_mode); 135 - else 136 - err = 0; 137 - 138 - if (!err) { 139 - get_device(clk->dev); 134 + if (err) { 135 + kfree(pccontext); 136 + goto out; 137 + } 140 138 } 139 + 140 + fp->private_data = pccontext; 141 + get_device(clk->dev); 142 + err = 0; 141 143 out: 142 144 up_read(&clk->rwsem); 143 145 return err;
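The reordering above is the classic "commit last" open pattern: run every fallible step first (the context allocation, the driver's open hook), and only publish `fp->private_data` and take the device reference once nothing can fail, so the error path is a single free. A hedged, self-contained sketch of the pattern (names and types here are ours, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

struct ctx  { int clk; };
struct file { void *private_data; };

static int device_refs;

/* Fallible driver hook; fails when want_fail is set. */
static int ops_open(struct ctx *c, int want_fail)
{
	(void)c;
	return want_fail ? -1 : 0;
}

static int clock_open(struct file *fp, int want_fail)
{
	struct ctx *c = malloc(sizeof(*c));
	int err;

	if (!c)
		return -1;
	err = ops_open(c, want_fail);
	if (err) {
		free(c);		/* nothing published yet: one simple cleanup */
		return err;
	}
	fp->private_data = c;		/* publish only after every step succeeded */
	device_refs++;			/* reference taken last, never leaked */
	return 0;
}
```

Failing before the publish leaves the file and the refcount untouched, which is exactly what the original ordering got wrong.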
+1 -1
kernel/trace/trace_probe.c
··· 839 839 void store_trace_entry_data(void *edata, struct trace_probe *tp, struct pt_regs *regs) 840 840 { 841 841 struct probe_entry_arg *earg = tp->entry_arg; 842 - unsigned long val; 842 + unsigned long val = 0; 843 843 int i; 844 844 845 845 if (!earg)
+1 -2
mm/Makefile
··· 29 29 KCOV_INSTRUMENT_vmstat.o := n 30 30 KCOV_INSTRUMENT_failslab.o := n 31 31 32 - CFLAGS_init-mm.o += $(call cc-disable-warning, override-init) 33 - CFLAGS_init-mm.o += $(call cc-disable-warning, initializer-overrides) 32 + CFLAGS_init-mm.o += -Wno-override-init 34 33 35 34 mmu-y := nommu.o 36 35 mmu-$(CONFIG_MMU) := highmem.o memory.o mincore.o \
+16
mm/filemap.c
··· 4197 4197 /* shmem file - in swap cache */ 4198 4198 swp_entry_t swp = radix_to_swp_entry(folio); 4199 4199 4200 + /* swapin error results in poisoned entry */ 4201 + if (non_swap_entry(swp)) 4202 + goto resched; 4203 + 4204 + /* 4205 + * Getting a swap entry from the shmem 4206 + * inode means we beat 4207 + * shmem_unuse(). rcu_read_lock() 4208 + * ensures swapoff waits for us before 4209 + * freeing the swapper space. However, 4210 + * we can race with swapping and 4211 + * invalidation, so there might not be 4212 + * a shadow in the swapcache (yet). 4213 + */ 4200 4214 shadow = get_shadow_from_swap_cache(swp); 4215 + if (!shadow) 4216 + goto resched; 4201 4217 } 4202 4218 #endif 4203 4219 if (workingset_test_recent(shadow, true, &workingset))
+8 -6
mm/gup.c
··· 1653 1653 if (vma->vm_flags & VM_LOCKONFAULT) 1654 1654 return nr_pages; 1655 1655 1656 + /* ... similarly, we've never faulted in PROT_NONE pages */ 1657 + if (!vma_is_accessible(vma)) 1658 + return -EFAULT; 1659 + 1656 1660 gup_flags = FOLL_TOUCH; 1657 1661 /* 1658 1662 * We want to touch writable mappings with a write fault in order 1659 1663 * to break COW, except for shared mappings because these don't COW 1660 1664 * and we would not want to dirty them for nothing. 1665 + * 1666 + * Otherwise, do a read fault, and use FOLL_FORCE in case it's not 1667 + * readable (ie write-only or executable). 1661 1668 */ 1662 1669 if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE) 1663 1670 gup_flags |= FOLL_WRITE; 1664 - 1665 - /* 1666 - * We want mlock to succeed for regions that have any permissions 1667 - * other than PROT_NONE. 1668 - */ 1669 - if (vma_is_accessible(vma)) 1671 + else 1670 1672 gup_flags |= FOLL_FORCE; 1671 1673 1672 1674 if (locked)
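The hunk above makes mlock population refuse PROT_NONE VMAs up front and then picks the fault type: a write fault to break COW on private writable mappings, otherwise a forced read fault (which also covers write-only and exec-only mappings). A sketch of the flag selection with illustrative constants:

```c
#include <assert.h>

/* Illustrative bit values, not the kernel's. */
#define VM_READ    0x1
#define VM_WRITE   0x2
#define VM_EXEC    0x4
#define VM_SHARED  0x8

#define FOLL_TOUCH 0x1
#define FOLL_WRITE 0x2
#define FOLL_FORCE 0x4

/* Pick populate flags for a VMA; returns -1 for PROT_NONE mappings. */
static int populate_gup_flags(unsigned long vm_flags)
{
	int gup_flags = FOLL_TOUCH;

	if (!(vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
		return -1;		/* never fault in PROT_NONE pages */

	if ((vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
		gup_flags |= FOLL_WRITE;	/* break COW on private writable */
	else
		gup_flags |= FOLL_FORCE;	/* read fault, even if unreadable */

	return gup_flags;
}
```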
+3 -1
mm/memory.c
··· 1536 1536 ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm); 1537 1537 arch_check_zapped_pte(vma, ptent); 1538 1538 tlb_remove_tlb_entry(tlb, pte, addr); 1539 - VM_WARN_ON_ONCE(userfaultfd_wp(vma)); 1539 + if (userfaultfd_pte_wp(vma, ptent)) 1540 + zap_install_uffd_wp_if_needed(vma, addr, pte, 1, 1541 + details, ptent); 1540 1542 ksm_might_unmap_zero_page(mm, ptent); 1541 1543 return 1; 1542 1544 }
+23 -10
mm/page_owner.c
··· 54 54 55 55 static void init_early_allocated_pages(void); 56 56 57 + static inline void set_current_in_page_owner(void) 58 + { 59 + /* 60 + * Avoid recursion. 61 + * 62 + * We might need to allocate more memory from page_owner code, so make 63 + * sure to signal it in order to avoid recursion. 64 + */ 65 + current->in_page_owner = 1; 66 + } 67 + 68 + static inline void unset_current_in_page_owner(void) 69 + { 70 + current->in_page_owner = 0; 71 + } 72 + 57 73 static int __init early_page_owner_param(char *buf) 58 74 { 59 75 int ret = kstrtobool(buf, &page_owner_enabled); ··· 149 133 depot_stack_handle_t handle; 150 134 unsigned int nr_entries; 151 135 152 - /* 153 - * Avoid recursion. 154 - * 155 - * Sometimes page metadata allocation tracking requires more 156 - * memory to be allocated: 157 - * - when new stack trace is saved to stack depot 158 - */ 159 136 if (current->in_page_owner) 160 137 return dummy_handle; 161 - current->in_page_owner = 1; 162 138 139 + set_current_in_page_owner(); 163 140 nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2); 164 141 handle = stack_depot_save(entries, nr_entries, flags); 165 142 if (!handle) 166 143 handle = failure_handle; 144 + unset_current_in_page_owner(); 167 145 168 - current->in_page_owner = 0; 169 146 return handle; 170 147 } 171 148 ··· 173 164 gfp_mask &= (GFP_ATOMIC | GFP_KERNEL); 174 165 gfp_mask |= __GFP_NOWARN; 175 166 167 + set_current_in_page_owner(); 176 168 stack = kmalloc(sizeof(*stack), gfp_mask); 177 - if (!stack) 169 + if (!stack) { 170 + unset_current_in_page_owner(); 178 171 return; 172 + } 173 + unset_current_in_page_owner(); 179 174 180 175 stack->stack_record = stack_record; 181 176 stack->next = NULL;
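The new helpers wrap the per-task `in_page_owner` flag, a re-entry guard that stops page-owner recording from recursing when the recording itself allocates memory (for the stack depot or the stack list). A single-threaded sketch of the same guard pattern (names are ours):

```c
#include <assert.h>
#include <stdbool.h>

static _Thread_local bool in_page_owner;
static int records;

static void record_owner(void);

/* Simulated allocation: recording its owner may allocate again. */
static void alloc_page_sim(void)
{
	record_owner();
}

static void record_owner(void)
{
	if (in_page_owner)
		return;		/* already recording: bail instead of recursing */
	in_page_owner = true;
	records++;
	alloc_page_sim();	/* nested allocation hits the guard and bails */
	in_page_owner = false;
}
```

The refactor in the hunk also extends the guard around the `kmalloc()` of the stack list, closing the same recursion window on that path.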
+7 -3
mm/shmem_quota.c
··· 116 116 static int shmem_get_next_id(struct super_block *sb, struct kqid *qid) 117 117 { 118 118 struct mem_dqinfo *info = sb_dqinfo(sb, qid->type); 119 - struct rb_node *node = ((struct rb_root *)info->dqi_priv)->rb_node; 119 + struct rb_node *node; 120 120 qid_t id = from_kqid(&init_user_ns, *qid); 121 121 struct quota_info *dqopt = sb_dqopt(sb); 122 122 struct quota_id *entry = NULL; ··· 126 126 return -ESRCH; 127 127 128 128 down_read(&dqopt->dqio_sem); 129 + node = ((struct rb_root *)info->dqi_priv)->rb_node; 129 130 while (node) { 130 131 entry = rb_entry(node, struct quota_id, node); 131 132 ··· 166 165 static int shmem_acquire_dquot(struct dquot *dquot) 167 166 { 168 167 struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type); 169 - struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node; 168 + struct rb_node **n; 170 169 struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info; 171 170 struct rb_node *parent = NULL, *new_node = NULL; 172 171 struct quota_id *new_entry, *entry; ··· 177 176 mutex_lock(&dquot->dq_lock); 178 177 179 178 down_write(&dqopt->dqio_sem); 179 + n = &((struct rb_root *)info->dqi_priv)->rb_node; 180 + 180 181 while (*n) { 181 182 parent = *n; 182 183 entry = rb_entry(parent, struct quota_id, node); ··· 267 264 static int shmem_release_dquot(struct dquot *dquot) 268 265 { 269 266 struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type); 270 - struct rb_node *node = ((struct rb_root *)info->dqi_priv)->rb_node; 267 + struct rb_node *node; 271 268 qid_t id = from_kqid(&init_user_ns, dquot->dq_id); 272 269 struct quota_info *dqopt = sb_dqopt(dquot->dq_sb); 273 270 struct quota_id *entry = NULL; ··· 278 275 goto out_dqlock; 279 276 280 277 down_write(&dqopt->dqio_sem); 278 + node = ((struct rb_root *)info->dqi_priv)->rb_node; 281 279 while (node) { 282 280 entry = rb_entry(node, struct quota_id, node); 283 281
+2 -1
mm/userfaultfd.c
··· 1444 1444 */ 1445 1445 down_read(&(*dst_vmap)->vm_lock->lock); 1446 1446 if (*dst_vmap != *src_vmap) 1447 - down_read(&(*src_vmap)->vm_lock->lock); 1447 + down_read_nested(&(*src_vmap)->vm_lock->lock, 1448 + SINGLE_DEPTH_NESTING); 1448 1449 } 1449 1450 mmap_read_unlock(mm); 1450 1451 return err;
+39 -6
mm/zswap.c
··· 1080 1080 mutex_lock(&acomp_ctx->mutex); 1081 1081 1082 1082 src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO); 1083 - if (acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) { 1083 + /* 1084 + * If zpool_map_handle is atomic, we cannot reliably utilize its mapped buffer 1085 + * to do crypto_acomp_decompress() which might sleep. In such cases, we must 1086 + * resort to copying the buffer to a temporary one. 1087 + * Meanwhile, zpool_map_handle() might return a non-linearly mapped buffer, 1088 + * such as a kmap address of high memory or even ever a vmap address. 1089 + * However, sg_init_one is only equipped to handle linearly mapped low memory. 1090 + * In such cases, we also must copy the buffer to a temporary and lowmem one. 1091 + */ 1092 + if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) || 1093 + !virt_addr_valid(src)) { 1084 1094 memcpy(acomp_ctx->buffer, src, entry->length); 1085 1095 src = acomp_ctx->buffer; 1086 1096 zpool_unmap_handle(zpool, entry->handle); ··· 1104 1094 BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE); 1105 1095 mutex_unlock(&acomp_ctx->mutex); 1106 1096 1107 - if (!acomp_ctx->is_sleepable || zpool_can_sleep_mapped(zpool)) 1097 + if (src != acomp_ctx->buffer) 1108 1098 zpool_unmap_handle(zpool, entry->handle); 1109 1099 } 1110 1100 ··· 1321 1311 unsigned long nr_backing, nr_stored, nr_freeable, nr_protected; 1322 1312 1323 1313 if (!zswap_shrinker_enabled || !mem_cgroup_zswap_writeback_enabled(memcg)) 1314 + return 0; 1315 + 1316 + /* 1317 + * The shrinker resumes swap writeback, which will enter block 1318 + * and may enter fs. XXX: Harmonize with vmscan.c __GFP_FS 1319 + * rules (may_enter_fs()), which apply on a per-folio basis. 
1320 + */ 1321 + if (!gfp_has_io_fs(sc->gfp_mask)) 1324 1322 return 0; 1325 1323 1326 1324 #ifdef CONFIG_MEMCG_KMEM ··· 1636 1618 swp_entry_t swp = folio->swap; 1637 1619 pgoff_t offset = swp_offset(swp); 1638 1620 struct page *page = &folio->page; 1621 + bool swapcache = folio_test_swapcache(folio); 1639 1622 struct zswap_tree *tree = swap_zswap_tree(swp); 1640 1623 struct zswap_entry *entry; 1641 1624 u8 *dst; ··· 1649 1630 spin_unlock(&tree->lock); 1650 1631 return false; 1651 1632 } 1652 - zswap_rb_erase(&tree->rbroot, entry); 1633 + /* 1634 + * When reading into the swapcache, invalidate our entry. The 1635 + * swapcache can be the authoritative owner of the page and 1636 + * its mappings, and the pressure that results from having two 1637 + * in-memory copies outweighs any benefits of caching the 1638 + * compression work. 1639 + * 1640 + * (Most swapins go through the swapcache. The notable 1641 + * exception is the singleton fault on SWP_SYNCHRONOUS_IO 1642 + * files, which reads into a private page and may free it if 1643 + * the fault fails. We remain the primary owner of the entry.) 1644 + */ 1645 + if (swapcache) 1646 + zswap_rb_erase(&tree->rbroot, entry); 1653 1647 spin_unlock(&tree->lock); 1654 1648 1655 1649 if (entry->length) ··· 1677 1645 if (entry->objcg) 1678 1646 count_objcg_event(entry->objcg, ZSWPIN); 1679 1647 1680 - zswap_entry_free(entry); 1681 - 1682 - folio_mark_dirty(folio); 1648 + if (swapcache) { 1649 + zswap_entry_free(entry); 1650 + folio_mark_dirty(folio); 1651 + } 1683 1652 1684 1653 return true; 1685 1654 }
+2 -2
net/core/sock.c
··· 482 482 unsigned long flags; 483 483 struct sk_buff_head *list = &sk->sk_receive_queue; 484 484 485 - if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) { 485 + if (atomic_read(&sk->sk_rmem_alloc) >= READ_ONCE(sk->sk_rcvbuf)) { 486 486 atomic_inc(&sk->sk_drops); 487 487 trace_sock_rcvqueue_full(sk, skb); 488 488 return -ENOMEM; ··· 552 552 553 553 skb->dev = NULL; 554 554 555 - if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) { 555 + if (sk_rcvqueues_full(sk, READ_ONCE(sk->sk_rcvbuf))) { 556 556 atomic_inc(&sk->sk_drops); 557 557 goto discard_and_relse; 558 558 }
+2 -1
net/hsr/hsr_slave.c
··· 220 220 netdev_update_features(master->dev); 221 221 dev_set_mtu(master->dev, hsr_get_max_mtu(hsr)); 222 222 netdev_rx_handler_unregister(port->dev); 223 - dev_set_promiscuity(port->dev, -1); 223 + if (!port->hsr->fwd_offloaded) 224 + dev_set_promiscuity(port->dev, -1); 224 225 netdev_upper_dev_unlink(port->dev, master->dev); 225 226 } 226 227
+14
net/ipv4/inet_connection_sock.c
··· 771 771 } 772 772 EXPORT_SYMBOL(inet_csk_clear_xmit_timers); 773 773 774 + void inet_csk_clear_xmit_timers_sync(struct sock *sk) 775 + { 776 + struct inet_connection_sock *icsk = inet_csk(sk); 777 + 778 + /* ongoing timer handlers need to acquire socket lock. */ 779 + sock_not_owned_by_me(sk); 780 + 781 + icsk->icsk_pending = icsk->icsk_ack.pending = 0; 782 + 783 + sk_stop_timer_sync(sk, &icsk->icsk_retransmit_timer); 784 + sk_stop_timer_sync(sk, &icsk->icsk_delack_timer); 785 + sk_stop_timer_sync(sk, &sk->sk_timer); 786 + } 787 + 774 788 void inet_csk_delete_keepalive_timer(struct sock *sk) 775 789 { 776 790 sk_stop_timer(sk, &sk->sk_timer);
+57 -13
net/ipv4/inet_fragment.c
··· 24 24 #include <net/ip.h> 25 25 #include <net/ipv6.h> 26 26 27 + #include "../core/sock_destructor.h" 28 + 27 29 /* Use skb->cb to track consecutive/adjacent fragments coming at 28 30 * the end of the queue. Nodes in the rb-tree queue will 29 31 * contain "runs" of one or more adjacent fragments. ··· 41 39 }; 42 40 struct sk_buff *next_frag; 43 41 int frag_run_len; 42 + int ip_defrag_offset; 44 43 }; 45 44 46 45 #define FRAG_CB(skb) ((struct ipfrag_skb_cb *)((skb)->cb)) ··· 399 396 */ 400 397 if (!last) 401 398 fragrun_create(q, skb); /* First fragment. */ 402 - else if (last->ip_defrag_offset + last->len < end) { 399 + else if (FRAG_CB(last)->ip_defrag_offset + last->len < end) { 403 400 /* This is the common case: skb goes to the end. */ 404 401 /* Detect and discard overlaps. */ 405 - if (offset < last->ip_defrag_offset + last->len) 402 + if (offset < FRAG_CB(last)->ip_defrag_offset + last->len) 406 403 return IPFRAG_OVERLAP; 407 - if (offset == last->ip_defrag_offset + last->len) 404 + if (offset == FRAG_CB(last)->ip_defrag_offset + last->len) 408 405 fragrun_append_to_last(q, skb); 409 406 else 410 407 fragrun_create(q, skb); ··· 421 418 422 419 parent = *rbn; 423 420 curr = rb_to_skb(parent); 424 - curr_run_end = curr->ip_defrag_offset + 421 + curr_run_end = FRAG_CB(curr)->ip_defrag_offset + 425 422 FRAG_CB(curr)->frag_run_len; 426 - if (end <= curr->ip_defrag_offset) 423 + if (end <= FRAG_CB(curr)->ip_defrag_offset) 427 424 rbn = &parent->rb_left; 428 425 else if (offset >= curr_run_end) 429 426 rbn = &parent->rb_right; 430 - else if (offset >= curr->ip_defrag_offset && 427 + else if (offset >= FRAG_CB(curr)->ip_defrag_offset && 431 428 end <= curr_run_end) 432 429 return IPFRAG_DUP; 433 430 else ··· 441 438 rb_insert_color(&skb->rbnode, &q->rb_fragments); 442 439 } 443 440 444 - skb->ip_defrag_offset = offset; 441 + FRAG_CB(skb)->ip_defrag_offset = offset; 445 442 446 443 return IPFRAG_OK; 447 444 } ··· 451 448 struct sk_buff *parent) 452 449 { 453 450 
struct sk_buff *fp, *head = skb_rb_first(&q->rb_fragments); 454 - struct sk_buff **nextp; 451 + void (*destructor)(struct sk_buff *); 452 + unsigned int orig_truesize = 0; 453 + struct sk_buff **nextp = NULL; 454 + struct sock *sk = skb->sk; 455 455 int delta; 456 + 457 + if (sk && is_skb_wmem(skb)) { 458 + /* TX: skb->sk might have been passed as argument to 459 + * dst->output and must remain valid until tx completes. 460 + * 461 + * Move sk to reassembled skb and fix up wmem accounting. 462 + */ 463 + orig_truesize = skb->truesize; 464 + destructor = skb->destructor; 465 + } 456 466 457 467 if (head != skb) { 458 468 fp = skb_clone(skb, GFP_ATOMIC); 459 - if (!fp) 460 - return NULL; 469 + if (!fp) { 470 + head = skb; 471 + goto out_restore_sk; 472 + } 461 473 FRAG_CB(fp)->next_frag = FRAG_CB(skb)->next_frag; 462 474 if (RB_EMPTY_NODE(&skb->rbnode)) 463 475 FRAG_CB(parent)->next_frag = fp; ··· 481 463 &q->rb_fragments); 482 464 if (q->fragments_tail == skb) 483 465 q->fragments_tail = fp; 466 + 467 + if (orig_truesize) { 468 + /* prevent skb_morph from releasing sk */ 469 + skb->sk = NULL; 470 + skb->destructor = NULL; 471 + } 484 472 skb_morph(skb, head); 485 473 FRAG_CB(skb)->next_frag = FRAG_CB(head)->next_frag; 486 474 rb_replace_node(&head->rbnode, &skb->rbnode, ··· 494 470 consume_skb(head); 495 471 head = skb; 496 472 } 497 - WARN_ON(head->ip_defrag_offset != 0); 473 + WARN_ON(FRAG_CB(head)->ip_defrag_offset != 0); 498 474 499 475 delta = -head->truesize; 500 476 501 477 /* Head of list must not be cloned. 
*/ 502 478 if (skb_unclone(head, GFP_ATOMIC)) 503 - return NULL; 479 + goto out_restore_sk; 504 480 505 481 delta += head->truesize; 506 482 if (delta) ··· 516 492 517 493 clone = alloc_skb(0, GFP_ATOMIC); 518 494 if (!clone) 519 - return NULL; 495 + goto out_restore_sk; 520 496 skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list; 521 497 skb_frag_list_init(head); 522 498 for (i = 0; i < skb_shinfo(head)->nr_frags; i++) ··· 533 509 nextp = &skb_shinfo(head)->frag_list; 534 510 } 535 511 512 + out_restore_sk: 513 + if (orig_truesize) { 514 + int ts_delta = head->truesize - orig_truesize; 515 + 516 + /* if this reassembled skb is fragmented later, 517 + * fraglist skbs will get skb->sk assigned from head->sk, 518 + * and each frag skb will be released via sock_wfree. 519 + * 520 + * Update sk_wmem_alloc. 521 + */ 522 + head->sk = sk; 523 + head->destructor = destructor; 524 + refcount_add(ts_delta, &sk->sk_wmem_alloc); 525 + } 526 + 536 527 return nextp; 537 528 } 538 529 EXPORT_SYMBOL(inet_frag_reasm_prepare); ··· 555 516 void inet_frag_reasm_finish(struct inet_frag_queue *q, struct sk_buff *head, 556 517 void *reasm_data, bool try_coalesce) 557 518 { 519 + struct sock *sk = is_skb_wmem(head) ? head->sk : NULL; 520 + const unsigned int head_truesize = head->truesize; 558 521 struct sk_buff **nextp = reasm_data; 559 522 struct rb_node *rbn; 560 523 struct sk_buff *fp; ··· 620 579 head->prev = NULL; 621 580 head->tstamp = q->stamp; 622 581 head->mono_delivery_time = q->mono_delivery_time; 582 + 583 + if (sk) 584 + refcount_add(sum_truesize - head_truesize, &sk->sk_wmem_alloc); 623 585 } 624 586 EXPORT_SYMBOL(inet_frag_reasm_finish); 625 587
+1 -1
net/ipv4/ip_fragment.c
··· 384 384 } 385 385 386 386 skb_dst_drop(skb); 387 + skb_orphan(skb); 387 388 return -EINPROGRESS; 388 389 389 390 insert_error: ··· 488 487 struct ipq *qp; 489 488 490 489 __IP_INC_STATS(net, IPSTATS_MIB_REASMREQDS); 491 - skb_orphan(skb); 492 490 493 491 /* Lookup (or create) queue header */ 494 492 qp = ip_find(net, ip_hdr(skb), user, vif);
+1
net/ipv4/netfilter/Kconfig
··· 329 329 config IP_NF_ARPFILTER 330 330 tristate "arptables-legacy packet filtering support" 331 331 select IP_NF_ARPTABLES 332 + select NETFILTER_FAMILY_ARP 332 333 depends on NETFILTER_XTABLES 333 334 help 334 335 ARP packet filtering defines a table `filter', which has a series of
+3 -1
net/ipv4/nexthop.c
··· 768 768 struct net *net = nh->net; 769 769 int err; 770 770 771 - if (nexthop_notifiers_is_empty(net)) 771 + if (nexthop_notifiers_is_empty(net)) { 772 + *hw_stats_used = false; 772 773 return 0; 774 + } 773 775 774 776 err = nh_notifier_grp_hw_stats_init(&info, nh); 775 777 if (err)
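The fix above is the usual out-parameter rule: every return path, including the early "no notifiers registered" one, must write `*hw_stats_used`, or the caller reads an uninitialized value. A minimal sketch of the corrected shape:

```c
#include <assert.h>
#include <stdbool.h>

/* Report whether hardware stats are in use. The out-parameter must be
 * written on every successful return, including the early one. */
static int nh_grp_hw_stats_query(bool have_notifiers, bool *hw_stats_used)
{
	if (!have_notifiers) {
		*hw_stats_used = false;	/* previously left uninitialized */
		return 0;
	}
	*hw_stats_used = true;		/* illustrative: a listener uses them */
	return 0;
}
```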
+2
net/ipv4/tcp.c
··· 2931 2931 lock_sock(sk); 2932 2932 __tcp_close(sk, timeout); 2933 2933 release_sock(sk); 2934 + if (!sk->sk_net_refcnt) 2935 + inet_csk_clear_xmit_timers_sync(sk); 2934 2936 sock_put(sk); 2935 2937 } 2936 2938 EXPORT_SYMBOL(tcp_close);
+3 -2
net/ipv6/addrconf.c
··· 5416 5416 5417 5417 err = 0; 5418 5418 if (fillargs.ifindex) { 5419 - err = -ENODEV; 5420 5419 dev = dev_get_by_index_rcu(tgt_net, fillargs.ifindex); 5421 - if (!dev) 5420 + if (!dev) { 5421 + err = -ENODEV; 5422 5422 goto done; 5423 + } 5423 5424 idev = __in6_dev_get(dev); 5424 5425 if (idev) 5425 5426 err = in6_dump_addrs(idev, skb, cb,
+1 -1
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 294 294 } 295 295 296 296 skb_dst_drop(skb); 297 + skb_orphan(skb); 297 298 return -EINPROGRESS; 298 299 299 300 insert_error: ··· 470 469 hdr = ipv6_hdr(skb); 471 470 fhdr = (struct frag_hdr *)skb_transport_header(skb); 472 471 473 - skb_orphan(skb); 474 472 fq = fq_find(net, fhdr->identification, user, hdr, 475 473 skb->dev ? skb->dev->ifindex : 0); 476 474 if (fq == NULL) {
+2 -3
net/mac80211/cfg.c
··· 2199 2199 } 2200 2200 2201 2201 if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN && 2202 - sta->sdata->u.vlan.sta) { 2203 - ieee80211_clear_fast_rx(sta); 2202 + sta->sdata->u.vlan.sta) 2204 2203 RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL); 2205 - } 2206 2204 2207 2205 if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) 2208 2206 ieee80211_vif_dec_num_mcast(sta->sdata); 2209 2207 2210 2208 sta->sdata = vlansdata; 2209 + ieee80211_check_fast_rx(sta); 2211 2210 ieee80211_check_fast_xmit(sta); 2212 2211 2213 2212 if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
+1 -1
net/mac80211/debug.h
··· 158 158 _sdata_dbg(print, sdata, "[link %d] " fmt, \ 159 159 link_id, ##__VA_ARGS__); \ 160 160 else \ 161 - _sdata_dbg(1, sdata, fmt, ##__VA_ARGS__); \ 161 + _sdata_dbg(print, sdata, fmt, ##__VA_ARGS__); \ 162 162 } while (0) 163 163 #define link_dbg(link, fmt, ...) \ 164 164 _link_id_dbg(1, (link)->sdata, (link)->link_id, \
+2 -2
net/mac80211/ieee80211_i.h
··· 131 131 }; 132 132 133 133 /** 134 - * enum ieee80211_corrupt_data_flags - BSS data corruption flags 134 + * enum ieee80211_bss_corrupt_data_flags - BSS data corruption flags 135 135 * @IEEE80211_BSS_CORRUPT_BEACON: last beacon frame received was corrupted 136 136 * @IEEE80211_BSS_CORRUPT_PROBE_RESP: last probe response received was corrupted 137 137 * ··· 144 144 }; 145 145 146 146 /** 147 - * enum ieee80211_valid_data_flags - BSS valid data flags 147 + * enum ieee80211_bss_valid_data_flags - BSS valid data flags 148 148 * @IEEE80211_BSS_VALID_WMM: WMM/UAPSD data was gathered from non-corrupt IE 149 149 * @IEEE80211_BSS_VALID_RATES: Supported rates were gathered from non-corrupt IE 150 150 * @IEEE80211_BSS_VALID_ERP: ERP flag was gathered from non-corrupt IE
+12 -3
net/mac80211/mlme.c
··· 5874 5874 } 5875 5875 5876 5876 if (sdata->vif.active_links != active_links) { 5877 + /* usable links are affected when active_links are changed, 5878 + * so notify the driver about the status change 5879 + */ 5880 + changed |= BSS_CHANGED_MLD_VALID_LINKS; 5881 + active_links &= sdata->vif.active_links; 5882 + if (!active_links) 5883 + active_links = 5884 + BIT(__ffs(sdata->vif.valid_links & 5885 + ~dormant_links)); 5877 5886 ret = ieee80211_set_active_links(&sdata->vif, active_links); 5878 5887 if (ret) { 5879 5888 sdata_info(sdata, "Failed to set TTLM active links\n"); ··· 5897 5888 goto out; 5898 5889 } 5899 5890 5900 - changed |= BSS_CHANGED_MLD_VALID_LINKS; 5901 5891 sdata->vif.suspended_links = suspended_links; 5902 5892 if (sdata->vif.suspended_links) 5903 5893 changed |= BSS_CHANGED_MLD_TTLM; ··· 7660 7652 sdata_info(sdata, 7661 7653 "failed to insert STA entry for the AP (error %d)\n", 7662 7654 err); 7663 - goto out_err; 7655 + goto out_release_chan; 7664 7656 } 7665 7657 } else 7666 7658 WARN_ON_ONCE(!ether_addr_equal(link->u.mgd.bssid, cbss->bssid)); ··· 7671 7663 7672 7664 return 0; 7673 7665 7666 + out_release_chan: 7667 + ieee80211_link_release_channel(link); 7674 7668 out_err: 7675 - ieee80211_link_release_channel(&sdata->deflink); 7676 7669 ieee80211_vif_set_links(sdata, 0, 0); 7677 7670 return err; 7678 7671 }
+42 -8
net/netfilter/nf_tables_api.c
··· 1200 1200 __NFT_TABLE_F_WAS_AWAKEN | \ 1201 1201 __NFT_TABLE_F_WAS_ORPHAN) 1202 1202 1203 + static bool nft_table_pending_update(const struct nft_ctx *ctx) 1204 + { 1205 + struct nftables_pernet *nft_net = nft_pernet(ctx->net); 1206 + struct nft_trans *trans; 1207 + 1208 + if (ctx->table->flags & __NFT_TABLE_F_UPDATE) 1209 + return true; 1210 + 1211 + list_for_each_entry(trans, &nft_net->commit_list, list) { 1212 + if ((trans->msg_type == NFT_MSG_NEWCHAIN || 1213 + trans->msg_type == NFT_MSG_DELCHAIN) && 1214 + trans->ctx.table == ctx->table && 1215 + nft_trans_chain_update(trans)) 1216 + return true; 1217 + } 1218 + 1219 + return false; 1220 + } 1221 + 1203 1222 static int nf_tables_updtable(struct nft_ctx *ctx) 1204 1223 { 1205 1224 struct nft_trans *trans; ··· 1245 1226 return -EOPNOTSUPP; 1246 1227 1247 1228 /* No dormant off/on/off/on games in single transaction */ 1248 - if (ctx->table->flags & __NFT_TABLE_F_UPDATE) 1229 + if (nft_table_pending_update(ctx)) 1249 1230 return -EINVAL; 1250 1231 1251 1232 trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE, ··· 2650 2631 } 2651 2632 } 2652 2633 2634 + if (table->flags & __NFT_TABLE_F_UPDATE && 2635 + !list_empty(&hook.list)) { 2636 + NL_SET_BAD_ATTR(extack, attr); 2637 + err = -EOPNOTSUPP; 2638 + goto err_hooks; 2639 + } 2640 + 2653 2641 if (!(table->flags & NFT_TABLE_F_DORMANT) && 2654 2642 nft_is_base_chain(chain) && 2655 2643 !list_empty(&hook.list)) { ··· 2886 2860 struct nft_trans *trans; 2887 2861 int err; 2888 2862 2863 + if (ctx->table->flags & __NFT_TABLE_F_UPDATE) 2864 + return -EOPNOTSUPP; 2865 + 2889 2866 err = nft_chain_parse_hook(ctx->net, basechain, nla, &chain_hook, 2890 2867 ctx->family, chain->flags, extack); 2891 2868 if (err < 0) ··· 2973 2944 nft_ctx_init(&ctx, net, skb, info->nlh, family, table, chain, nla); 2974 2945 2975 2946 if (nla[NFTA_CHAIN_HOOK]) { 2976 - if (chain->flags & NFT_CHAIN_HW_OFFLOAD) 2947 + if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYCHAIN || 2948 + 
chain->flags & NFT_CHAIN_HW_OFFLOAD) 2977 2949 return -EOPNOTSUPP; 2978 2950 2979 2951 if (nft_is_base_chain(chain)) { ··· 10212 10182 if (nft_trans_chain_update(trans)) { 10213 10183 nf_tables_chain_notify(&trans->ctx, NFT_MSG_DELCHAIN, 10214 10184 &nft_trans_chain_hooks(trans)); 10215 - nft_netdev_unregister_hooks(net, 10216 - &nft_trans_chain_hooks(trans), 10217 - true); 10185 + if (!(trans->ctx.table->flags & NFT_TABLE_F_DORMANT)) { 10186 + nft_netdev_unregister_hooks(net, 10187 + &nft_trans_chain_hooks(trans), 10188 + true); 10189 + } 10218 10190 } else { 10219 10191 nft_chain_del(trans->ctx.chain); 10220 10192 nf_tables_chain_notify(&trans->ctx, NFT_MSG_DELCHAIN, ··· 10492 10460 break; 10493 10461 case NFT_MSG_NEWCHAIN: 10494 10462 if (nft_trans_chain_update(trans)) { 10495 - nft_netdev_unregister_hooks(net, 10496 - &nft_trans_chain_hooks(trans), 10497 - true); 10463 + if (!(trans->ctx.table->flags & NFT_TABLE_F_DORMANT)) { 10464 + nft_netdev_unregister_hooks(net, 10465 + &nft_trans_chain_hooks(trans), 10466 + true); 10467 + } 10498 10468 free_percpu(nft_trans_chain_stats(trans)); 10499 10469 kfree(nft_trans_chain_name(trans)); 10500 10470 nft_trans_destroy(trans);
+5
net/nfc/nci/core.c
··· 1516 1516 nfc_send_to_raw_sock(ndev->nfc_dev, skb, 1517 1517 RAW_PAYLOAD_NCI, NFC_DIRECTION_RX); 1518 1518 1519 + if (!nci_plen(skb->data)) { 1520 + kfree_skb(skb); 1521 + break; 1522 + } 1523 + 1519 1524 /* Process frame */ 1520 1525 switch (nci_mt(skb->data)) { 1521 1526 case NCI_MT_RSP_PKT:
+8 -6
net/sunrpc/auth_gss/gss_krb5_crypto.c
··· 921 921 * Caller provides the truncation length of the output token (h) in 922 922 * cksumout.len. 923 923 * 924 - * Note that for RPCSEC, the "initial cipher state" is always all zeroes. 925 - * 926 924 * Return values: 927 925 * %GSS_S_COMPLETE: Digest computed, @cksumout filled in 928 926 * %GSS_S_FAILURE: Call failed ··· 931 933 int body_offset, struct xdr_netobj *cksumout) 932 934 { 933 935 unsigned int ivsize = crypto_sync_skcipher_ivsize(cipher); 934 - static const u8 iv[GSS_KRB5_MAX_BLOCKSIZE]; 935 936 struct ahash_request *req; 936 937 struct scatterlist sg[1]; 938 + u8 *iv, *checksumdata; 937 939 int err = -ENOMEM; 938 - u8 *checksumdata; 939 940 940 941 checksumdata = kmalloc(crypto_ahash_digestsize(tfm), GFP_KERNEL); 941 942 if (!checksumdata) 942 943 return GSS_S_FAILURE; 944 + /* For RPCSEC, the "initial cipher state" is always all zeroes. */ 945 + iv = kzalloc(ivsize, GFP_KERNEL); 946 + if (!iv) 947 + goto out_free_mem; 943 948 944 949 req = ahash_request_alloc(tfm, GFP_KERNEL); 945 950 if (!req) 946 - goto out_free_cksumdata; 951 + goto out_free_mem; 947 952 ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL); 948 953 err = crypto_ahash_init(req); 949 954 if (err) ··· 970 969 971 970 out_free_ahash: 972 971 ahash_request_free(req); 973 - out_free_cksumdata: 972 + out_free_mem: 973 + kfree(iv); 974 974 kfree_sensitive(checksumdata); 975 975 return err ? GSS_S_FAILURE : GSS_S_COMPLETE; 976 976 }
+5 -2
net/tls/tls_sw.c
··· 1976 1976 if (unlikely(flags & MSG_ERRQUEUE)) 1977 1977 return sock_recv_errqueue(sk, msg, len, SOL_IP, IP_RECVERR); 1978 1978 1979 - psock = sk_psock_get(sk); 1980 1979 err = tls_rx_reader_lock(sk, ctx, flags & MSG_DONTWAIT); 1981 1980 if (err < 0) 1982 1981 return err; 1982 + psock = sk_psock_get(sk); 1983 1983 bpf_strp_enabled = sk_psock_strp_enabled(psock); 1984 1984 1985 1985 /* If crypto failed the connection is broken */ ··· 2152 2152 } 2153 2153 2154 2154 /* Drain records from the rx_list & copy if required */ 2155 - if (is_peek || is_kvec) 2155 + if (is_peek) 2156 2156 err = process_rx_list(ctx, msg, &control, copied + peeked, 2157 2157 decrypted - peeked, is_peek, NULL); 2158 2158 else 2159 2159 err = process_rx_list(ctx, msg, &control, 0, 2160 2160 async_copy_bytes, is_peek, NULL); 2161 + 2162 + /* we could have copied less than we wanted, and possibly nothing */ 2163 + decrypted += max(err, 0) - async_copy_bytes; 2161 2164 } 2162 2165 2163 2166 copied += decrypted;
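After draining `rx_list`, the hunk above corrects the byte accounting: `process_rx_list()` may copy fewer bytes than were decrypted, or nothing at all on error, so the decrypted count is adjusted by `max(err, 0)` minus the bytes that were expected from the async path. The arithmetic in isolation:

```c
#include <assert.h>

static int max_int(int a, int b)
{
	return a > b ? a : b;
}

/* err is the number of bytes actually copied by the list drain, or a
 * negative errno; async_copy_bytes is what was expected to be copied. */
static int adjust_decrypted(int decrypted, int err, int async_copy_bytes)
{
	return decrypted + max_int(err, 0) - async_copy_bytes;
}
```

A short or failed copy therefore shrinks the count instead of over-reporting to the caller.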
+1 -1
net/wireless/trace.h
··· 1024 1024 TRACE_EVENT(rdev_dump_mpp, 1025 1025 TP_PROTO(struct wiphy *wiphy, struct net_device *netdev, int _idx, 1026 1026 u8 *dst, u8 *mpp), 1027 - TP_ARGS(wiphy, netdev, _idx, mpp, dst), 1027 + TP_ARGS(wiphy, netdev, _idx, dst, mpp), 1028 1028 TP_STRUCT__entry( 1029 1029 WIPHY_ENTRY 1030 1030 NETDEV_ENTRY
+5 -2
net/wireless/wext-core.c
··· 4 4 * Authors : Jean Tourrilhes - HPL - <jt@hpl.hp.com> 5 5 * Copyright (c) 1997-2007 Jean Tourrilhes, All Rights Reserved. 6 6 * Copyright 2009 Johannes Berg <johannes@sipsolutions.net> 7 + * Copyright (C) 2024 Intel Corporation 7 8 * 8 9 * (As all part of the Linux kernel, this file is GPL) 9 10 */ ··· 663 662 dev->ieee80211_ptr->wiphy->wext && 664 663 dev->ieee80211_ptr->wiphy->wext->get_wireless_stats) { 665 664 wireless_warn_cfg80211_wext(); 666 - if (dev->ieee80211_ptr->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO) 665 + if (dev->ieee80211_ptr->wiphy->flags & (WIPHY_FLAG_SUPPORTS_MLO | 666 + WIPHY_FLAG_DISABLE_WEXT)) 667 667 return NULL; 668 668 return dev->ieee80211_ptr->wiphy->wext->get_wireless_stats(dev); 669 669 } ··· 706 704 #ifdef CONFIG_CFG80211_WEXT 707 705 if (dev->ieee80211_ptr && dev->ieee80211_ptr->wiphy) { 708 706 wireless_warn_cfg80211_wext(); 709 - if (dev->ieee80211_ptr->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO) 707 + if (dev->ieee80211_ptr->wiphy->flags & (WIPHY_FLAG_SUPPORTS_MLO | 708 + WIPHY_FLAG_DISABLE_WEXT)) 710 709 return NULL; 711 710 handlers = dev->ieee80211_ptr->wiphy->wext; 712 711 }
+3 -7
scripts/Makefile.extrawarn
··· 114 114 KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation) 115 115 KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation) 116 116 117 + KBUILD_CFLAGS += -Wno-override-init # alias for -Wno-initializer-overrides in clang 118 + 117 119 ifdef CONFIG_CC_IS_CLANG 118 120 # Clang before clang-16 would warn on default argument promotions. 119 121 ifneq ($(call clang-min-version, 160000),y) ··· 153 151 KBUILD_CFLAGS += $(call cc-option, -Wmaybe-uninitialized) 154 152 KBUILD_CFLAGS += $(call cc-option, -Wunused-macros) 155 153 156 - ifdef CONFIG_CC_IS_CLANG 157 - KBUILD_CFLAGS += -Winitializer-overrides 158 - endif 159 - 160 154 KBUILD_CPPFLAGS += -DKBUILD_EXTRA_WARN2 161 155 162 156 else ··· 162 164 KBUILD_CFLAGS += -Wno-type-limits 163 165 KBUILD_CFLAGS += -Wno-shift-negative-value 164 166 165 - ifdef CONFIG_CC_IS_CLANG 166 - KBUILD_CFLAGS += -Wno-initializer-overrides 167 - else 167 + ifdef CONFIG_CC_IS_GCC 168 168 KBUILD_CFLAGS += -Wno-maybe-uninitialized 169 169 endif 170 170
+1 -1
scripts/Makefile.modfinal
··· 23 23 part-of-module = y 24 24 25 25 quiet_cmd_cc_o_c = CC [M] $@ 26 - cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI) $(CFLAGS_GCOV), $(c_flags)) -c -o $@ $< 26 + cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI) $(CFLAGS_GCOV) $(CFLAGS_KCSAN), $(c_flags)) -c -o $@ $< 27 27 28 28 %.mod.o: %.mod.c FORCE 29 29 $(call if_changed_dep,cc_o_c)
+2 -2
scripts/bpf_doc.py
··· 414 414 version = version.stdout.decode().rstrip() 415 415 except: 416 416 try: 417 - version = subprocess.run(['make', 'kernelversion'], cwd=linuxRoot, 418 - capture_output=True, check=True) 417 + version = subprocess.run(['make', '-s', '--no-print-directory', 'kernelversion'], 418 + cwd=linuxRoot, capture_output=True, check=True) 419 419 version = version.stdout.decode().rstrip() 420 420 except: 421 421 return 'Linux'
-5
scripts/kconfig/conf.c
··· 552 552 continue; 553 553 } 554 554 sym_set_tristate_value(child->sym, yes); 555 - for (child = child->list; child; child = child->next) { 556 - indent += 2; 557 - conf(child); 558 - indent -= 2; 559 - } 560 555 return 1; 561 556 } 562 557 }
+1 -1
scripts/kconfig/lkc.h
··· 89 89 struct property *menu_add_prompt(enum prop_type type, char *prompt, struct expr *dep); 90 90 void menu_add_expr(enum prop_type type, struct expr *expr, struct expr *dep); 91 91 void menu_add_symbol(enum prop_type type, struct symbol *sym, struct expr *dep); 92 - void menu_finalize(struct menu *parent); 92 + void menu_finalize(void); 93 93 void menu_set_type(int type); 94 94 95 95 extern struct menu rootmenu;
+1 -1
scripts/kconfig/lxdialog/checklist.c
··· 119 119 } 120 120 121 121 do_resize: 122 - if (getmaxy(stdscr) < (height + CHECKLIST_HEIGTH_MIN)) 122 + if (getmaxy(stdscr) < (height + CHECKLIST_HEIGHT_MIN)) 123 123 return -ERRDISPLAYTOOSMALL; 124 124 if (getmaxx(stdscr) < (width + CHECKLIST_WIDTH_MIN)) 125 125 return -ERRDISPLAYTOOSMALL;
+6 -6
scripts/kconfig/lxdialog/dialog.h
··· 162 162 int on_key_resize(void); 163 163 164 164 /* minimum (re)size values */ 165 - #define CHECKLIST_HEIGTH_MIN 6 /* For dialog_checklist() */ 165 + #define CHECKLIST_HEIGHT_MIN 6 /* For dialog_checklist() */ 166 166 #define CHECKLIST_WIDTH_MIN 6 167 - #define INPUTBOX_HEIGTH_MIN 2 /* For dialog_inputbox() */ 167 + #define INPUTBOX_HEIGHT_MIN 2 /* For dialog_inputbox() */ 168 168 #define INPUTBOX_WIDTH_MIN 2 169 - #define MENUBOX_HEIGTH_MIN 15 /* For dialog_menu() */ 169 + #define MENUBOX_HEIGHT_MIN 15 /* For dialog_menu() */ 170 170 #define MENUBOX_WIDTH_MIN 65 171 - #define TEXTBOX_HEIGTH_MIN 8 /* For dialog_textbox() */ 171 + #define TEXTBOX_HEIGHT_MIN 8 /* For dialog_textbox() */ 172 172 #define TEXTBOX_WIDTH_MIN 8 173 - #define YESNO_HEIGTH_MIN 4 /* For dialog_yesno() */ 173 + #define YESNO_HEIGHT_MIN 4 /* For dialog_yesno() */ 174 174 #define YESNO_WIDTH_MIN 4 175 - #define WINDOW_HEIGTH_MIN 19 /* For init_dialog() */ 175 + #define WINDOW_HEIGHT_MIN 19 /* For init_dialog() */ 176 176 #define WINDOW_WIDTH_MIN 80 177 177 178 178 int init_dialog(const char *backtitle);
+1 -1
scripts/kconfig/lxdialog/inputbox.c
··· 43 43 strcpy(instr, init); 44 44 45 45 do_resize: 46 - if (getmaxy(stdscr) <= (height - INPUTBOX_HEIGTH_MIN)) 46 + if (getmaxy(stdscr) <= (height - INPUTBOX_HEIGHT_MIN)) 47 47 return -ERRDISPLAYTOOSMALL; 48 48 if (getmaxx(stdscr) <= (width - INPUTBOX_WIDTH_MIN)) 49 49 return -ERRDISPLAYTOOSMALL;
+1 -1
scripts/kconfig/lxdialog/menubox.c
··· 172 172 do_resize: 173 173 height = getmaxy(stdscr); 174 174 width = getmaxx(stdscr); 175 - if (height < MENUBOX_HEIGTH_MIN || width < MENUBOX_WIDTH_MIN) 175 + if (height < MENUBOX_HEIGHT_MIN || width < MENUBOX_WIDTH_MIN) 176 176 return -ERRDISPLAYTOOSMALL; 177 177 178 178 height -= 4;
+1 -1
scripts/kconfig/lxdialog/textbox.c
··· 175 175 176 176 do_resize: 177 177 getmaxyx(stdscr, height, width); 178 - if (height < TEXTBOX_HEIGTH_MIN || width < TEXTBOX_WIDTH_MIN) 178 + if (height < TEXTBOX_HEIGHT_MIN || width < TEXTBOX_WIDTH_MIN) 179 179 return -ERRDISPLAYTOOSMALL; 180 180 if (initial_height != 0) 181 181 height = initial_height;
+1 -1
scripts/kconfig/lxdialog/util.c
··· 291 291 getyx(stdscr, saved_y, saved_x); 292 292 293 293 getmaxyx(stdscr, height, width); 294 - if (height < WINDOW_HEIGTH_MIN || width < WINDOW_WIDTH_MIN) { 294 + if (height < WINDOW_HEIGHT_MIN || width < WINDOW_WIDTH_MIN) { 295 295 endwin(); 296 296 return -ERRDISPLAYTOOSMALL; 297 297 }
+1 -1
scripts/kconfig/lxdialog/yesno.c
··· 32 32 WINDOW *dialog; 33 33 34 34 do_resize: 35 - if (getmaxy(stdscr) < (height + YESNO_HEIGTH_MIN)) 35 + if (getmaxy(stdscr) < (height + YESNO_HEIGHT_MIN)) 36 36 return -ERRDISPLAYTOOSMALL; 37 37 if (getmaxx(stdscr) < (width + YESNO_WIDTH_MIN)) 38 38 return -ERRDISPLAYTOOSMALL;
+2 -2
scripts/kconfig/mconf.c
··· 659 659 dialog_clear(); 660 660 res = dialog_checklist(prompt ? prompt : "Main Menu", 661 661 radiolist_instructions, 662 - MENUBOX_HEIGTH_MIN, 662 + MENUBOX_HEIGHT_MIN, 663 663 MENUBOX_WIDTH_MIN, 664 - CHECKLIST_HEIGTH_MIN); 664 + CHECKLIST_HEIGHT_MIN); 665 665 selected = item_activate_selected(); 666 666 switch (res) { 667 667 case 0:
+16 -6
scripts/kconfig/menu.c
··· 282 282 } 283 283 } 284 284 285 - void menu_finalize(struct menu *parent) 285 + static void _menu_finalize(struct menu *parent, bool inside_choice) 286 286 { 287 287 struct menu *menu, *last_menu; 288 288 struct symbol *sym; ··· 296 296 * and propagate parent dependencies before moving on. 297 297 */ 298 298 299 - if (sym && sym_is_choice(sym)) { 299 + bool is_choice = false; 300 + 301 + if (sym && sym_is_choice(sym)) 302 + is_choice = true; 303 + 304 + if (is_choice) { 300 305 if (sym->type == S_UNKNOWN) { 301 306 /* find the first choice value to find out choice type */ 302 307 current_entry = parent; ··· 399 394 } 400 395 } 401 396 402 - if (sym && sym_is_choice(sym)) 397 + if (is_choice) 403 398 expr_free(parentdep); 404 399 405 400 /* ··· 407 402 * moving on 408 403 */ 409 404 for (menu = parent->list; menu; menu = menu->next) 410 - menu_finalize(menu); 411 - } else if (sym) { 405 + _menu_finalize(menu, is_choice); 406 + } else if (!inside_choice && sym) { 412 407 /* 413 408 * Automatic submenu creation. If sym is a symbol and A, B, C, 414 409 * ... are consecutive items (symbols, menus, ifs, etc.) that ··· 468 463 /* Superset, put in submenu */ 469 464 expr_free(dep2); 470 465 next: 471 - menu_finalize(menu); 466 + _menu_finalize(menu, false); 472 467 menu->parent = parent; 473 468 last_menu = menu; 474 469 } ··· 585 580 expr_alloc_and(parent->prompt->visible.expr, 586 581 expr_alloc_symbol(&symbol_mod))); 587 582 } 583 + } 584 + 585 + void menu_finalize(void) 586 + { 587 + _menu_finalize(&rootmenu, false); 588 588 } 589 589 590 590 bool menu_has_prompt(struct menu *menu)
+1 -1
scripts/kconfig/parser.y
··· 515 515 menu_add_prompt(P_MENU, "Main menu", NULL); 516 516 } 517 517 518 - menu_finalize(&rootmenu); 518 + menu_finalize(); 519 519 520 520 menu = &rootmenu; 521 521 while (menu) {
+5 -2
scripts/mod/modpost.c
··· 1007 1007 1008 1008 static Elf_Sym *find_tosym(struct elf_info *elf, Elf_Addr addr, Elf_Sym *sym) 1009 1009 { 1010 + Elf_Sym *new_sym; 1011 + 1010 1012 /* If the supplied symbol has a valid name, return it */ 1011 1013 if (is_valid_name(elf, sym)) 1012 1014 return sym; ··· 1017 1015 * Strive to find a better symbol name, but the resulting name may not 1018 1016 * match the symbol referenced in the original code. 1019 1017 */ 1020 - return symsearch_find_nearest(elf, addr, get_secindex(elf, sym), 1021 - true, 20); 1018 + new_sym = symsearch_find_nearest(elf, addr, get_secindex(elf, sym), 1019 + true, 20); 1020 + return new_sym ? new_sym : sym; 1022 1021 } 1023 1022 1024 1023 static bool is_executable_section(struct elf_info *elf, unsigned int secndx)
+1 -1
sound/aoa/soundbus/i2sbus/core.c
··· 158 158 struct device_node *child, *sound = NULL; 159 159 struct resource *r; 160 160 int i, layout = 0, rlen, ok = force; 161 - char node_name[6]; 161 + char node_name[8]; 162 162 static const char *rnames[] = { "i2sbus: %pOFn (control)", 163 163 "i2sbus: %pOFn (tx)", 164 164 "i2sbus: %pOFn (rx)" };
+26
sound/hda/intel-nhlt.c
··· 343 343 return NULL; 344 344 } 345 345 EXPORT_SYMBOL(intel_nhlt_get_endpoint_blob); 346 + 347 + int intel_nhlt_ssp_device_type(struct device *dev, struct nhlt_acpi_table *nhlt, 348 + u8 virtual_bus_id) 349 + { 350 + struct nhlt_endpoint *epnt; 351 + int i; 352 + 353 + if (!nhlt) 354 + return -EINVAL; 355 + 356 + epnt = (struct nhlt_endpoint *)nhlt->desc; 357 + for (i = 0; i < nhlt->endpoint_count; i++) { 358 + /* for SSP link the virtual bus id is the SSP port number */ 359 + if (epnt->linktype == NHLT_LINK_SSP && 360 + epnt->virtual_bus_id == virtual_bus_id) { 361 + dev_dbg(dev, "SSP%d: dev_type=%d\n", virtual_bus_id, 362 + epnt->device_type); 363 + return epnt->device_type; 364 + } 365 + 366 + epnt = (struct nhlt_endpoint *)((u8 *)epnt + epnt->length); 367 + } 368 + 369 + return -EINVAL; 370 + } 371 + EXPORT_SYMBOL(intel_nhlt_ssp_device_type);
+4 -4
sound/pci/hda/cs35l56_hda.c
··· 1024 1024 goto err; 1025 1025 } 1026 1026 1027 - dev_dbg(cs35l56->base.dev, "DSP system name: '%s', amp name: '%s'\n", 1028 - cs35l56->system_name, cs35l56->amp_name); 1027 + dev_info(cs35l56->base.dev, "DSP system name: '%s', amp name: '%s'\n", 1028 + cs35l56->system_name, cs35l56->amp_name); 1029 1029 1030 1030 regmap_multi_reg_write(cs35l56->base.regmap, cs35l56_hda_dai_config, 1031 1031 ARRAY_SIZE(cs35l56_hda_dai_config)); ··· 1045 1045 pm_runtime_mark_last_busy(cs35l56->base.dev); 1046 1046 pm_runtime_enable(cs35l56->base.dev); 1047 1047 1048 + cs35l56->base.init_done = true; 1049 + 1048 1050 ret = component_add(cs35l56->base.dev, &cs35l56_hda_comp_ops); 1049 1051 if (ret) { 1050 1052 dev_err(cs35l56->base.dev, "Register component failed: %d\n", ret); 1051 1053 goto pm_err; 1052 1054 } 1053 - 1054 - cs35l56->base.init_done = true; 1055 1055 1056 1056 return 0; 1057 1057
+78 -42
sound/pci/hda/tas2781_hda_i2c.c
··· 89 89 struct snd_kcontrol *dsp_prog_ctl; 90 90 struct snd_kcontrol *dsp_conf_ctl; 91 91 struct snd_kcontrol *prof_ctl; 92 - struct snd_kcontrol *snd_ctls[3]; 92 + struct snd_kcontrol *snd_ctls[2]; 93 93 }; 94 94 95 95 static int tas2781_get_i2c_res(struct acpi_resource *ares, void *data) ··· 161 161 pm_runtime_put_autosuspend(dev); 162 162 break; 163 163 default: 164 - dev_dbg(tas_hda->dev, "Playback action not supported: %d\n", 165 - action); 166 164 break; 167 165 } 168 166 } ··· 183 185 { 184 186 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 185 187 188 + mutex_lock(&tas_priv->codec_lock); 189 + 186 190 ucontrol->value.integer.value[0] = tas_priv->rcabin.profile_cfg_id; 191 + 192 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d\n", 193 + __func__, kcontrol->id.name, tas_priv->rcabin.profile_cfg_id); 194 + 195 + mutex_unlock(&tas_priv->codec_lock); 187 196 188 197 return 0; 189 198 } ··· 205 200 206 201 val = clamp(nr_profile, 0, max); 207 202 203 + mutex_lock(&tas_priv->codec_lock); 204 + 205 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d -> %d\n", 206 + __func__, kcontrol->id.name, 207 + tas_priv->rcabin.profile_cfg_id, val); 208 + 208 209 if (tas_priv->rcabin.profile_cfg_id != val) { 209 210 tas_priv->rcabin.profile_cfg_id = val; 210 211 ret = 1; 211 212 } 213 + 214 + mutex_unlock(&tas_priv->codec_lock); 212 215 213 216 return ret; 214 217 } ··· 254 241 { 255 242 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 256 243 244 + mutex_lock(&tas_priv->codec_lock); 245 + 257 246 ucontrol->value.integer.value[0] = tas_priv->cur_prog; 247 + 248 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d\n", 249 + __func__, kcontrol->id.name, tas_priv->cur_prog); 250 + 251 + mutex_unlock(&tas_priv->codec_lock); 258 252 259 253 return 0; 260 254 } ··· 277 257 278 258 val = clamp(nr_program, 0, max); 279 259 260 + mutex_lock(&tas_priv->codec_lock); 261 + 262 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d -> %d\n", 263 + __func__, kcontrol->id.name, tas_priv->cur_prog, val); 264 + 280 265 if (tas_priv->cur_prog != val) { 281 266 tas_priv->cur_prog = val; 282 267 ret = 1; 283 268 } 269 + 270 + mutex_unlock(&tas_priv->codec_lock); 284 271 285 272 return ret; 286 273 } ··· 297 270 { 298 271 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 299 272 273 + mutex_lock(&tas_priv->codec_lock); 274 + 300 275 ucontrol->value.integer.value[0] = tas_priv->cur_conf; 276 + 277 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d\n", 278 + __func__, kcontrol->id.name, tas_priv->cur_conf); 279 + 280 + mutex_unlock(&tas_priv->codec_lock); 301 281 302 282 return 0; 303 283 } ··· 320 286 321 287 val = clamp(nr_config, 0, max); 322 288 289 + mutex_lock(&tas_priv->codec_lock); 290 + 291 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d -> %d\n", 292 + __func__, kcontrol->id.name, tas_priv->cur_conf, val); 293 + 323 294 if (tas_priv->cur_conf != val) { 324 295 tas_priv->cur_conf = val; 325 296 ret = 1; 326 297 } 327 298 299 + mutex_unlock(&tas_priv->codec_lock); 300 + 328 301 return ret; 329 - } 330 - 331 - /* 332 - * tas2781_digital_getvol - get the volum control 333 - * @kcontrol: control pointer 334 - * @ucontrol: User data 335 - * Customer Kcontrol for tas2781 is primarily for regmap booking, paging 336 - * depends on internal regmap mechanism. 337 - * tas2781 contains book and page two-level register map, especially 338 - * book switching will set the register BXXP00R7F, after switching to the 339 - * correct book, then leverage the mechanism for paging to access the 340 - * register. 341 - */ 342 - static int tas2781_digital_getvol(struct snd_kcontrol *kcontrol, 344 - struct snd_ctl_elem_value *ucontrol) 345 - { 346 - struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 347 - struct soc_mixer_control *mc = 348 - (struct soc_mixer_control *)kcontrol->private_value; 349 - 350 - return tasdevice_digital_getvol(tas_priv, ucontrol, mc); 351 - } 352 304 353 305 static int tas2781_amp_getvol(struct snd_kcontrol *kcontrol, ··· 341 321 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 342 322 struct soc_mixer_control *mc = 343 323 (struct soc_mixer_control *)kcontrol->private_value; 324 + int ret; 344 325 345 - return tasdevice_amp_getvol(tas_priv, ucontrol, mc); 346 326 + mutex_lock(&tas_priv->codec_lock); 347 327 348 - static int tas2781_digital_putvol(struct snd_kcontrol *kcontrol, 349 - struct snd_ctl_elem_value *ucontrol) 350 - { 351 - struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 352 - struct soc_mixer_control *mc = 353 - (struct soc_mixer_control *)kcontrol->private_value; 328 + ret = tasdevice_amp_getvol(tas_priv, ucontrol, mc); 354 329 355 - /* The check of the given value is in tasdevice_digital_putvol. */ 356 - return tasdevice_digital_putvol(tas_priv, ucontrol, mc); 330 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %ld\n", 331 + __func__, kcontrol->id.name, ucontrol->value.integer.value[0]); 332 + 333 + mutex_unlock(&tas_priv->codec_lock); 334 + 335 + return ret; 357 336 } 358 337 359 338 static int tas2781_amp_putvol(struct snd_kcontrol *kcontrol, ··· 361 342 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 362 343 struct soc_mixer_control *mc = 363 344 (struct soc_mixer_control *)kcontrol->private_value; 345 + int ret; 346 + 347 + mutex_lock(&tas_priv->codec_lock); 348 + 349 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: -> %ld\n", 350 + __func__, kcontrol->id.name, ucontrol->value.integer.value[0]); 364 351 365 352 /* The check of the given value is in tasdevice_amp_putvol. */ 366 - ret = tasdevice_amp_putvol(tas_priv, ucontrol, mc); 354 + 355 + mutex_unlock(&tas_priv->codec_lock); 356 + 357 + return ret; 367 358 } 368 359 369 360 static int tas2781_force_fwload_get(struct snd_kcontrol *kcontrol, ··· 381 352 { 382 353 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 383 354 355 + mutex_lock(&tas_priv->codec_lock); 356 + 384 357 ucontrol->value.integer.value[0] = (int)tas_priv->force_fwload_status; 385 - dev_dbg(tas_priv->dev, "%s : Force FWload %s\n", __func__, 386 - tas_priv->force_fwload_status ? "ON" : "OFF"); 358 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d\n", 359 + __func__, kcontrol->id.name, tas_priv->force_fwload_status); 360 + 361 + mutex_unlock(&tas_priv->codec_lock); 387 362 388 363 return 0; 389 364 } ··· 398 365 struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol); 399 366 bool change, val = (bool)ucontrol->value.integer.value[0]; 400 367 368 + mutex_lock(&tas_priv->codec_lock); 369 + 370 + dev_dbg(tas_priv->dev, "%s: kcontrol %s: %d -> %d\n", 371 + __func__, kcontrol->id.name, 372 + tas_priv->force_fwload_status, val); 373 + 401 374 if (tas_priv->force_fwload_status == val) 402 375 change = false; 403 376 else { 404 377 change = true; 405 378 tas_priv->force_fwload_status = val; 406 379 } 407 - dev_dbg(tas_priv->dev, "%s : Force FWload %s\n", __func__, 408 - tas_priv->force_fwload_status ? "ON" : "OFF"); 380 + 381 + mutex_unlock(&tas_priv->codec_lock); 409 382 410 383 return change; 411 384 } ··· 420 381 ACARD_SINGLE_RANGE_EXT_TLV("Speaker Analog Gain", TAS2781_AMP_LEVEL, 421 382 1, 0, 20, 0, tas2781_amp_getvol, 422 383 tas2781_amp_putvol, amp_vol_tlv), 423 - ACARD_SINGLE_RANGE_EXT_TLV("Speaker Digital Gain", TAS2781_DVC_LVL, 424 - 0, 0, 200, 1, tas2781_digital_getvol, 425 - tas2781_digital_putvol, dvc_tlv), 426 384 ACARD_SINGLE_BOOL_EXT("Speaker Force Firmware Load", 0, 427 385 tas2781_force_fwload_get, tas2781_force_fwload_put), 428 386 };
+14 -3
sound/sh/aica.c
··· 278 278 dreamcastcard->clicks++; 279 279 if (unlikely(dreamcastcard->clicks >= AICA_PERIOD_NUMBER)) 280 280 dreamcastcard->clicks %= AICA_PERIOD_NUMBER; 281 - mod_timer(&dreamcastcard->timer, jiffies + 1); 281 + if (snd_pcm_running(dreamcastcard->substream)) 282 + mod_timer(&dreamcastcard->timer, jiffies + 1); 282 283 } 283 284 } 284 285 ··· 291 290 /*timer function - so cannot sleep */ 292 291 int play_period; 293 292 struct snd_pcm_runtime *runtime; 293 + if (!snd_pcm_running(substream)) 294 + return; 294 295 runtime = substream->runtime; 295 296 dreamcastcard = substream->pcm->private_data; 296 297 /* Have we played out an additional period? */ ··· 353 350 return 0; 354 351 } 355 352 353 + static int snd_aicapcm_pcm_sync_stop(struct snd_pcm_substream *substream) 354 + { 355 + struct snd_card_aica *dreamcastcard = substream->pcm->private_data; 356 + 357 + del_timer_sync(&dreamcastcard->timer); 358 + cancel_work_sync(&dreamcastcard->spu_dma_work); 359 + return 0; 360 + } 361 + 356 362 static int snd_aicapcm_pcm_close(struct snd_pcm_substream 357 363 *substream) 358 364 { 359 365 struct snd_card_aica *dreamcastcard = substream->pcm->private_data; 360 - flush_work(&(dreamcastcard->spu_dma_work)); 361 - del_timer(&dreamcastcard->timer); 362 366 dreamcastcard->substream = NULL; 363 367 kfree(dreamcastcard->channel); 364 368 spu_disable(); ··· 411 401 .prepare = snd_aicapcm_pcm_prepare, 412 402 .trigger = snd_aicapcm_pcm_trigger, 413 403 .pointer = snd_aicapcm_pcm_pointer, 404 + .sync_stop = snd_aicapcm_pcm_sync_stop, 414 405 }; 415 406 416 407 /* TO DO: set up to handle more than one pcm instance */
+16 -3
sound/soc/sof/ipc4-topology.c
··· 1356 1356 int sample_rate, channel_count; 1357 1357 int bit_depth, ret; 1358 1358 u32 nhlt_type; 1359 + int dev_type = 0; 1359 1360 1360 1361 /* convert to NHLT type */ 1361 1362 switch (linktype) { ··· 1372 1371 &bit_depth); 1373 1372 if (ret < 0) 1374 1373 return ret; 1374 + 1375 + /* 1376 + * We need to know the type of the external device attached to a SSP 1377 + * port to retrieve the blob from NHLT. However, device type is not 1378 + * specified in topology. 1379 + * Query the type for the port and then pass that information back 1380 + * to the blob lookup function. 1381 + */ 1382 + dev_type = intel_nhlt_ssp_device_type(sdev->dev, ipc4_data->nhlt, 1383 + dai_index); 1384 + if (dev_type < 0) 1385 + return dev_type; 1375 1386 break; 1376 1387 default: 1377 1388 return 0; 1378 1389 } 1379 1390 1380 - dev_dbg(sdev->dev, "dai index %d nhlt type %d direction %d\n", 1381 - dai_index, nhlt_type, dir); 1391 + dev_dbg(sdev->dev, "dai index %d nhlt type %d direction %d dev type %d\n", 1392 + dai_index, nhlt_type, dir, dev_type); 1382 1393 1383 1394 /* find NHLT blob with matching params */ 1384 1395 cfg = intel_nhlt_get_endpoint_blob(sdev->dev, ipc4_data->nhlt, dai_index, nhlt_type, 1385 1396 bit_depth, bit_depth, channel_count, sample_rate, 1386 - dir, 0); 1397 + dir, dev_type); 1387 1398 1388 1399 if (!cfg) { 1389 1400 dev_err(sdev->dev,
+6 -7
tools/Makefile
··· 11 11 @echo '' 12 12 @echo ' acpi - ACPI tools' 13 13 @echo ' bpf - misc BPF tools' 14 - @echo ' cgroup - cgroup tools' 15 14 @echo ' counter - counter tools' 16 15 @echo ' cpupower - a tool for all things x86 CPU power' 17 16 @echo ' debugging - tools for debugging' ··· 68 69 cpupower: FORCE 69 70 $(call descend,power/$@) 70 71 71 - cgroup counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi pci firmware debugging tracing: FORCE 72 - $(call descend,$@) 72 + counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi pci firmware debugging tracing: FORCE 73 + $(call descend,$@) 73 74 74 75 bpf/%: FORCE ··· 115 116 kvm_stat: FORCE 116 117 $(call descend,kvm/$@) 117 118 118 - all: acpi cgroup counter cpupower gpio hv firewire \ 119 + all: acpi counter cpupower gpio hv firewire \ 119 120 perf selftests bootconfig spi turbostat usb \ 120 121 virtio mm bpf x86_energy_perf_policy \ 121 122 tmon freefall iio objtool kvm_stat wmi \ ··· 127 128 cpupower_install: 128 129 $(call descend,power/$(@:_install=),install) 129 130 130 - cgroup_install counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install: 131 + counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install: 131 132 $(call descend,$(@:_install=),install) 132 133 133 134 selftests_install: ··· 154 155 kvm_stat_install: 155 156 $(call descend,kvm/$(@:_install=),install) 156 157 157 - install: acpi_install cgroup_install counter_install cpupower_install gpio_install \ 158 + install: acpi_install counter_install cpupower_install gpio_install \ 158 159 hv_install firewire_install iio_install \ 159 160 perf_install selftests_install turbostat_install usb_install \ 160 161 virtio_install mm_install bpf_install x86_energy_perf_policy_install \ ··· 168 169 cpupower_clean: 169 170 $(call descend,power/cpupower,clean) 170 171 171 - cgroup_clean counter_clean hv_clean firewire_clean bootconfig_clean spi_clean usb_clean virtio_clean mm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean firmware_clean debugging_clean tracing_clean: 172 + counter_clean hv_clean firewire_clean bootconfig_clean spi_clean usb_clean virtio_clean mm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean firmware_clean debugging_clean tracing_clean: 172 173 $(call descend,$(@:_clean=),clean) 173 174 ··· 208 209 build_clean: 209 210 $(call descend,build,clean) 210 211 211 - clean: acpi_clean cgroup_clean counter_clean cpupower_clean hv_clean firewire_clean \ 212 + clean: acpi_clean counter_clean cpupower_clean hv_clean firewire_clean \ 212 213 perf_clean selftests_clean turbostat_clean bootconfig_clean spi_clean usb_clean virtio_clean \ 213 214 mm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \ 214 215 freefall_clean build_clean libbpf_clean libsubcmd_clean \
+1 -1
tools/bpf/bpftool/gen.c
··· 121 121 int i, n; 122 122 123 123 /* recognize hard coded LLVM section name */ 124 - if (strcmp(sec_name, ".arena.1") == 0) { 124 + if (strcmp(sec_name, ".addr_space.1") == 0) { 125 125 /* this is the name to use in skeleton */ 126 126 snprintf(buf, buf_sz, "arena"); 127 127 return true;
+7 -3
tools/lib/bpf/libbpf.c
··· 498 498 #define KSYMS_SEC ".ksyms" 499 499 #define STRUCT_OPS_SEC ".struct_ops" 500 500 #define STRUCT_OPS_LINK_SEC ".struct_ops.link" 501 - #define ARENA_SEC ".arena.1" 501 + #define ARENA_SEC ".addr_space.1" 502 502 503 503 enum libbpf_map_type { 504 504 LIBBPF_MAP_UNSPEC, ··· 1649 1649 { 1650 1650 return syscall(__NR_memfd_create, name, flags); 1651 1651 } 1652 + 1653 + #ifndef MFD_CLOEXEC 1654 + #define MFD_CLOEXEC 0x0001U 1655 + #endif 1652 1656 1653 1657 static int create_placeholder_fd(void) 1654 1658 { ··· 5356 5352 goto err_out; 5357 5353 } 5358 5354 if (map->def.type == BPF_MAP_TYPE_ARENA) { 5359 - map->mmaped = mmap((void *)map->map_extra, bpf_map_mmap_sz(map), 5360 - PROT_READ | PROT_WRITE, 5355 + map->mmaped = mmap((void *)(long)map->map_extra, 5356 + bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE, 5361 5357 map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED, 5362 5358 map->fd, 0); 5363 5359 if (map->mmaped == MAP_FAILED) {
+5 -2
tools/net/ynl/ynl-gen-c.py
··· 228 228 presence = '' 229 229 for i in range(0, len(ref)): 230 230 presence = f"{var}->{'.'.join(ref[:i] + [''])}_present.{ref[i]}" 231 - if self.presence_type() == 'bit': 232 - code.append(presence + ' = 1;') 231 + # Every layer below last is a nest, so we know it uses bit presence 232 + # last layer is "self" and may be a complex type 233 + if i == len(ref) - 1 and self.presence_type() != 'bit': 234 + continue 235 + code.append(presence + ' = 1;') 233 236 code += self._setter_lines(ri, member, presence) 234 237 235 238 func_name = f"{op_prefix(ri, direction, deref=deref)}_set_{'_'.join(ref)}"
+1 -1
tools/objtool/check.c
··· 585 585 struct section *rsec; 586 586 struct reloc *reloc; 587 587 struct instruction *insn; 588 - unsigned long offset; 588 + uint64_t offset; 589 589 590 590 /* 591 591 * Check for manually annotated dead ends.
+3
tools/testing/kunit/configs/all_tests.config
··· 28 28 CONFIG_INET=y 29 29 CONFIG_MPTCP=y 30 30 31 + CONFIG_NETDEVICES=y 32 + CONFIG_WLAN=y 31 33 CONFIG_CFG80211=y 32 34 CONFIG_MAC80211=y 33 35 CONFIG_WLAN_VENDOR_INTEL=y ··· 40 38 CONFIG_DAMON_PADDR=y 41 39 CONFIG_DEBUG_FS=y 42 40 CONFIG_DAMON_DBGFS=y 41 + CONFIG_DAMON_DBGFS_DEPRECATED=y 43 42 44 43 CONFIG_REGMAP_BUILD=y 45 44
+1 -1
tools/testing/selftests/bpf/bpf_arena_common.h
··· 32 32 */ 33 33 #endif 34 34 35 - #if defined(__BPF_FEATURE_ARENA_CAST) && !defined(BPF_ARENA_FORCE_ASM) 35 + #if defined(__BPF_FEATURE_ADDR_SPACE_CAST) && !defined(BPF_ARENA_FORCE_ASM) 36 36 #define __arena __attribute__((address_space(1))) 37 37 #define cast_kern(ptr) /* nop for bpf prog. emitted by LLVM */ 38 38 #define cast_user(ptr) /* nop for bpf prog. emitted by LLVM */
+5 -3
tools/testing/selftests/bpf/prog_tests/arena_htab.c
··· 3 3 #include <test_progs.h> 4 4 #include <sys/mman.h> 5 5 #include <network_helpers.h> 6 - 6 + #include <sys/user.h> 7 + #ifndef PAGE_SIZE /* on some archs it comes in sys/user.h */ 8 + #include <unistd.h> 9 + #define PAGE_SIZE getpagesize() 10 + #endif 7 11 #include "arena_htab_asm.skel.h" 8 12 #include "arena_htab.skel.h" 9 - 10 - #define PAGE_SIZE 4096 11 13 12 14 #include "bpf_arena_htab.h" 13 15
+5 -2
tools/testing/selftests/bpf/prog_tests/arena_list.c
··· 3 3 #include <test_progs.h> 4 4 #include <sys/mman.h> 5 5 #include <network_helpers.h> 6 - 7 - #define PAGE_SIZE 4096 6 + #include <sys/user.h> 7 + #ifndef PAGE_SIZE /* on some archs it comes in sys/user.h */ 8 + #include <unistd.h> 9 + #define PAGE_SIZE getpagesize() 10 + #endif 8 11 9 12 #include "bpf_arena_list.h" 10 13 #include "arena_list.skel.h"
+6
tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
··· 2 2 /* Copyright (c) 2021 Facebook */ 3 3 4 4 #include <sys/syscall.h> 5 + #include <limits.h> 5 6 #include <test_progs.h> 6 7 #include "bloom_filter_map.skel.h" 7 8 ··· 20 19 /* Invalid value size */ 21 20 fd = bpf_map_create(BPF_MAP_TYPE_BLOOM_FILTER, NULL, 0, 0, 100, NULL); 22 21 if (!ASSERT_LT(fd, 0, "bpf_map_create bloom filter invalid value size 0")) 22 + close(fd); 23 + 24 + /* Invalid value size: too big */ 25 + fd = bpf_map_create(BPF_MAP_TYPE_BLOOM_FILTER, NULL, 0, INT32_MAX, 100, NULL); 26 + if (!ASSERT_LT(fd, 0, "bpf_map_create bloom filter invalid value too large")) 23 27 close(fd); 24 28 25 29 /* Invalid max entries size */
+2
tools/testing/selftests/bpf/prog_tests/verifier.c
··· 5 5 #include "cap_helpers.h" 6 6 #include "verifier_and.skel.h" 7 7 #include "verifier_arena.skel.h" 8 + #include "verifier_arena_large.skel.h" 8 9 #include "verifier_array_access.skel.h" 9 10 #include "verifier_basic_stack.skel.h" 10 11 #include "verifier_bitfield_write.skel.h" ··· 121 120 122 121 void test_verifier_and(void) { RUN(verifier_and); } 123 122 void test_verifier_arena(void) { RUN(verifier_arena); } 123 + void test_verifier_arena_large(void) { RUN(verifier_arena_large); } 124 124 void test_verifier_basic_stack(void) { RUN(verifier_basic_stack); } 125 125 void test_verifier_bitfield_write(void) { RUN(verifier_bitfield_write); } 126 126 void test_verifier_bounds(void) { RUN(verifier_bounds); }
+1 -1
tools/testing/selftests/bpf/progs/arena_htab.c
··· 22 22 SEC("syscall") 23 23 int arena_htab_llvm(void *ctx) 24 24 { 25 - #if defined(__BPF_FEATURE_ARENA_CAST) || defined(BPF_ARENA_FORCE_ASM) 25 + #if defined(__BPF_FEATURE_ADDR_SPACE_CAST) || defined(BPF_ARENA_FORCE_ASM) 26 26 struct htab __arena *htab; 27 27 __u64 i; 28 28
+5 -5
tools/testing/selftests/bpf/progs/arena_list.c
··· 30 30 int cnt; 31 31 bool skip = false; 32 32 33 - #ifdef __BPF_FEATURE_ARENA_CAST 33 + #ifdef __BPF_FEATURE_ADDR_SPACE_CAST 34 34 long __arena arena_sum; 35 35 int __arena test_val = 1; 36 36 struct arena_list_head __arena global_head; 37 37 #else 38 - long arena_sum SEC(".arena.1"); 39 - int test_val SEC(".arena.1"); 38 + long arena_sum SEC(".addr_space.1"); 39 + int test_val SEC(".addr_space.1"); 40 40 #endif 41 41 42 42 int zero; ··· 44 44 SEC("syscall") 45 45 int arena_list_add(void *ctx) 46 46 { 47 - #ifdef __BPF_FEATURE_ARENA_CAST 47 + #ifdef __BPF_FEATURE_ADDR_SPACE_CAST 48 48 __u64 i; 49 49 50 50 list_head = &global_head; ··· 66 66 SEC("syscall") 67 67 int arena_list_del(void *ctx) 68 68 { 69 - #ifdef __BPF_FEATURE_ARENA_CAST 69 + #ifdef __BPF_FEATURE_ADDR_SPACE_CAST 70 70 struct elem __arena *n; 71 71 int sum = 0; 72 72
+7 -3
tools/testing/selftests/bpf/progs/verifier_arena.c
··· 12 12 __uint(type, BPF_MAP_TYPE_ARENA); 13 13 __uint(map_flags, BPF_F_MMAPABLE); 14 14 __uint(max_entries, 2); /* arena of two pages close to 32-bit boundary*/ 15 - __ulong(map_extra, (1ull << 44) | (~0u - __PAGE_SIZE * 2 + 1)); /* start of mmap() region */ 15 + #ifdef __TARGET_ARCH_arm64 16 + __ulong(map_extra, (1ull << 32) | (~0u - __PAGE_SIZE * 2 + 1)); /* start of mmap() region */ 17 + #else 18 + __ulong(map_extra, (1ull << 44) | (~0u - __PAGE_SIZE * 2 + 1)); /* start of mmap() region */ 19 + #endif 16 20 } arena SEC(".maps"); 17 21 18 22 SEC("syscall") 19 23 __success __retval(0) 20 24 int basic_alloc1(void *ctx) 21 25 { 22 - #if defined(__BPF_FEATURE_ARENA_CAST) 26 + #if defined(__BPF_FEATURE_ADDR_SPACE_CAST) 23 27 volatile int __arena *page1, *page2, *no_page, *page3; 24 28 25 29 page1 = bpf_arena_alloc_pages(&arena, NULL, 1, NUMA_NO_NODE, 0); ··· 62 58 __success __retval(0) 63 59 int basic_alloc2(void *ctx) 64 60 { 65 - #if defined(__BPF_FEATURE_ARENA_CAST) 61 + #if defined(__BPF_FEATURE_ADDR_SPACE_CAST) 66 62 volatile char __arena *page1, *page2, *page3, *page4; 67 63 68 64 page1 = bpf_arena_alloc_pages(&arena, NULL, 2, NUMA_NO_NODE, 0);
+68
tools/testing/selftests/bpf/progs/verifier_arena_large.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */ 3 + 4 + #include <vmlinux.h> 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + #include "bpf_misc.h" 8 + #include "bpf_experimental.h" 9 + #include "bpf_arena_common.h" 10 + 11 + #define ARENA_SIZE (1ull << 32) 12 + 13 + struct { 14 + __uint(type, BPF_MAP_TYPE_ARENA); 15 + __uint(map_flags, BPF_F_MMAPABLE); 16 + __uint(max_entries, ARENA_SIZE / PAGE_SIZE); 17 + } arena SEC(".maps"); 18 + 19 + SEC("syscall") 20 + __success __retval(0) 21 + int big_alloc1(void *ctx) 22 + { 23 + #if defined(__BPF_FEATURE_ADDR_SPACE_CAST) 24 + volatile char __arena *page1, *page2, *no_page, *page3; 25 + void __arena *base; 26 + 27 + page1 = base = bpf_arena_alloc_pages(&arena, NULL, 1, NUMA_NO_NODE, 0); 28 + if (!page1) 29 + return 1; 30 + *page1 = 1; 31 + page2 = bpf_arena_alloc_pages(&arena, base + ARENA_SIZE - PAGE_SIZE, 32 + 1, NUMA_NO_NODE, 0); 33 + if (!page2) 34 + return 2; 35 + *page2 = 2; 36 + no_page = bpf_arena_alloc_pages(&arena, base + ARENA_SIZE, 37 + 1, NUMA_NO_NODE, 0); 38 + if (no_page) 39 + return 3; 40 + if (*page1 != 1) 41 + return 4; 42 + if (*page2 != 2) 43 + return 5; 44 + bpf_arena_free_pages(&arena, (void __arena *)page1, 1); 45 + if (*page2 != 2) 46 + return 6; 47 + if (*page1 != 0) /* use-after-free should return 0 */ 48 + return 7; 49 + page3 = bpf_arena_alloc_pages(&arena, NULL, 1, NUMA_NO_NODE, 0); 50 + if (!page3) 51 + return 8; 52 + *page3 = 3; 53 + if (page1 != page3) 54 + return 9; 55 + if (*page2 != 2) 56 + return 10; 57 + if (*(page1 + PAGE_SIZE) != 0) 58 + return 11; 59 + if (*(page1 - PAGE_SIZE) != 0) 60 + return 12; 61 + if (*(page2 + PAGE_SIZE) != 0) 62 + return 13; 63 + if (*(page2 - PAGE_SIZE) != 0) 64 + return 14; 65 + #endif 66 + return 0; 67 + } 68 + char _license[] SEC("license") = "GPL";
+3
tools/testing/selftests/dmabuf-heaps/config
··· 1 + CONFIG_DMABUF_HEAPS=y 2 + CONFIG_DMABUF_HEAPS_SYSTEM=y 3 + CONFIG_DRM_VGEM=y
+1
tools/testing/selftests/drivers/net/netdevsim/settings
··· 1 + timeout=600
+2 -2
tools/testing/selftests/exec/Makefile
··· 19 19 20 20 $(OUTPUT)/subdir: 21 21 mkdir -p $@ 22 - $(OUTPUT)/script: 23 - echo '#!/bin/sh' > $@ 22 + $(OUTPUT)/script: Makefile 23 + echo '#!/bin/bash' > $@ 24 24 echo 'exit $$*' >> $@ 25 25 chmod +x $@ 26 26 $(OUTPUT)/execveat.symlink: $(OUTPUT)/execveat
+9 -1
tools/testing/selftests/exec/binfmt_script.py
··· 16 16 NAME_MAX=int(subprocess.check_output(["getconf", "NAME_MAX", "."])) 17 17 18 18 test_num=0 19 + pass_num=0 20 + fail_num=0 19 21 20 22 code='''#!/usr/bin/perl 21 23 print "Executed interpreter! Args:\n"; ··· 44 42 # ... 45 43 def test(name, size, good=True, leading="", root="./", target="/perl", 46 44 fill="A", arg="", newline="\n", hashbang="#!"): 47 - global test_num, tests, NAME_MAX 45 + global test_num, pass_num, fail_num, tests, NAME_MAX 48 46 test_num += 1 49 47 if test_num > tests: 50 48 raise ValueError("more binfmt_script tests than expected! (want %d, expected %d)" ··· 82 80 if good: 83 81 print("ok %d - binfmt_script %s (successful good exec)" 84 82 % (test_num, name)) 83 + pass_num += 1 85 84 else: 86 85 print("not ok %d - binfmt_script %s succeeded when it should have failed" 87 86 % (test_num, name)) 87 + fail_num = 1 88 88 else: 89 89 if good: 90 90 print("not ok %d - binfmt_script %s failed when it should have succeeded (rc:%d)" 91 91 % (test_num, name, proc.returncode)) 92 + fail_num = 1 92 93 else: 93 94 print("ok %d - binfmt_script %s (correctly failed bad exec)" 94 95 % (test_num, name)) 96 + pass_num += 1 95 97 96 98 # Clean up crazy binaries 97 99 os.unlink(script) ··· 171 165 test(name="two-under-trunc-arg", size=int(SIZE/2), arg=" ") 172 166 test(name="two-under-leading", size=int(SIZE/2), leading=" ") 173 167 test(name="two-under-lead-trunc-arg", size=int(SIZE/2), leading=" ", arg=" ") 168 + 169 + print("# Totals: pass:%d fail:%d xfail:0 xpass:0 skip:0 error:0" % (pass_num, fail_num)) 174 170 175 171 if test_num != tests: 176 172 raise ValueError("fewer binfmt_script tests than expected! (ran %d, expected %d"
+7 -5
tools/testing/selftests/exec/execveat.c
··· 98 98 if (child == 0) { 99 99 /* Child: do execveat(). */ 100 100 rc = execveat_(fd, path, argv, envp, flags); 101 - ksft_print_msg("execveat() failed, rc=%d errno=%d (%s)\n", 101 + ksft_print_msg("child execveat() failed, rc=%d errno=%d (%s)\n", 102 102 rc, errno, strerror(errno)); 103 - ksft_test_result_fail("%s\n", test_name); 104 - exit(1); /* should not reach here */ 103 + exit(errno); 105 104 } 106 105 /* Parent: wait for & check child's exit status. */ 107 106 rc = waitpid(child, &status, 0); ··· 225 226 * "If the command name is found, but it is not an executable utility, 226 227 * the exit status shall be 126."), so allow either. 227 228 */ 228 - if (is_script) 229 + if (is_script) { 230 + ksft_print_msg("Invoke script via root_dfd and relative filename\n"); 229 231 fail += check_execveat_invoked_rc(root_dfd, longpath + 1, 0, 230 232 127, 126); 231 - else 233 + } else { 234 + ksft_print_msg("Invoke exec via root_dfd and relative filename\n"); 232 235 fail += check_execveat(root_dfd, longpath + 1, 0); 236 + } 233 237 234 238 return fail; 235 239 }
+16 -20
tools/testing/selftests/exec/load_address.c
··· 5 5 #include <link.h> 6 6 #include <stdio.h> 7 7 #include <stdlib.h> 8 + #include "../kselftest.h" 8 9 9 10 struct Statistics { 10 11 unsigned long long load_address; ··· 42 41 unsigned long long misalign; 43 42 int ret; 44 43 45 - ret = dl_iterate_phdr(ExtractStatistics, &extracted); 46 - if (ret != 1) { 47 - fprintf(stderr, "FAILED\n"); 48 - return 1; 49 - } 44 + ksft_print_header(); 45 + ksft_set_plan(1); 50 46 51 - if (extracted.alignment == 0) { 52 - fprintf(stderr, "No alignment found\n"); 53 - return 1; 54 - } else if (extracted.alignment & (extracted.alignment - 1)) { 55 - fprintf(stderr, "Alignment is not a power of 2\n"); 56 - return 1; 57 - } 47 + ret = dl_iterate_phdr(ExtractStatistics, &extracted); 48 + if (ret != 1) 49 + ksft_exit_fail_msg("FAILED: dl_iterate_phdr\n"); 50 + 51 + if (extracted.alignment == 0) 52 + ksft_exit_fail_msg("FAILED: No alignment found\n"); 53 + else if (extracted.alignment & (extracted.alignment - 1)) 54 + ksft_exit_fail_msg("FAILED: Alignment is not a power of 2\n"); 58 55 59 56 misalign = extracted.load_address & (extracted.alignment - 1); 60 - if (misalign) { 61 - printf("alignment = %llu, load_address = %llu\n", 62 - extracted.alignment, extracted.load_address); 63 - fprintf(stderr, "FAILED\n"); 64 - return 1; 65 - } 57 + if (misalign) 58 + ksft_exit_fail_msg("FAILED: alignment = %llu, load_address = %llu\n", 59 + extracted.alignment, extracted.load_address); 66 60 67 - fprintf(stderr, "PASS\n"); 68 - return 0; 61 + ksft_test_result_pass("Completed\n"); 62 + ksft_finished(); 69 63 }
+26 -27
tools/testing/selftests/exec/recursion-depth.c
··· 23 23 #include <fcntl.h> 24 24 #include <sys/mount.h> 25 25 #include <unistd.h> 26 + #include "../kselftest.h" 26 27 27 28 int main(void) 28 29 { 30 + int fd, rv; 31 + 32 + ksft_print_header(); 33 + ksft_set_plan(1); 34 + 29 35 if (unshare(CLONE_NEWNS) == -1) { 30 36 if (errno == ENOSYS || errno == EPERM) { 31 - fprintf(stderr, "error: unshare, errno %d\n", errno); 32 - return 4; 37 + ksft_test_result_skip("error: unshare, errno %d\n", errno); 38 + ksft_finished(); 33 39 } 34 - fprintf(stderr, "error: unshare, errno %d\n", errno); 35 - return 1; 40 + ksft_exit_fail_msg("error: unshare, errno %d\n", errno); 36 41 } 37 - if (mount(NULL, "/", NULL, MS_PRIVATE|MS_REC, NULL) == -1) { 38 - fprintf(stderr, "error: mount '/', errno %d\n", errno); 39 - return 1; 40 - } 42 + 43 + if (mount(NULL, "/", NULL, MS_PRIVATE | MS_REC, NULL) == -1) 44 + ksft_exit_fail_msg("error: mount '/', errno %d\n", errno); 45 + 41 46 /* Require "exec" filesystem. */ 42 - if (mount(NULL, "/tmp", "ramfs", 0, NULL) == -1) { 43 - fprintf(stderr, "error: mount ramfs, errno %d\n", errno); 44 - return 1; 45 - } 47 + if (mount(NULL, "/tmp", "ramfs", 0, NULL) == -1) 48 + ksft_exit_fail_msg("error: mount ramfs, errno %d\n", errno); 46 49 47 50 #define FILENAME "/tmp/1" 48 51 49 - int fd = creat(FILENAME, 0700); 50 - if (fd == -1) { 51 - fprintf(stderr, "error: creat, errno %d\n", errno); 52 - return 1; 53 - } 52 + fd = creat(FILENAME, 0700); 53 + if (fd == -1) 54 + ksft_exit_fail_msg("error: creat, errno %d\n", errno); 55 + 54 56 #define S "#!" FILENAME "\n" 55 - if (write(fd, S, strlen(S)) != strlen(S)) { 56 - fprintf(stderr, "error: write, errno %d\n", errno); 57 - return 1; 58 - } 57 + if (write(fd, S, strlen(S)) != strlen(S)) 58 + ksft_exit_fail_msg("error: write, errno %d\n", errno); 59 + 59 60 close(fd); 60 61 61 - int rv = execve(FILENAME, NULL, NULL); 62 - if (rv == -1 && errno == ELOOP) { 63 - return 0; 64 - } 65 - fprintf(stderr, "error: execve, rv %d, errno %d\n", rv, errno); 66 - return 1; 62 + rv = execve(FILENAME, NULL, NULL); 63 + ksft_test_result(rv == -1 && errno == ELOOP, 64 + "execve failed as expected (ret %d, errno %d)\n", rv, errno); 65 + ksft_finished(); 67 66 }
+1 -1
tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
··· 24 24 echo "Get the most frequently calling function" 25 25 sample_events 26 26 27 - target_func=`cut -d: -f3 trace | sed 's/call_site=\([^+]*\)+0x.*/\1/' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'` 27 + target_func=`cat trace | grep -o 'call_site=\([^+]*\)' | sed 's/call_site=//' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'` 28 28 if [ -z "$target_func" ]; then 29 29 exit_fail 30 30 fi
+1 -1
tools/testing/selftests/mm/gup_test.c
··· 203 203 ksft_print_header(); 204 204 ksft_set_plan(nthreads); 205 205 206 - filed = open(file, O_RDWR|O_CREAT); 206 + filed = open(file, O_RDWR|O_CREAT, 0664); 207 207 if (filed < 0) 208 208 ksft_exit_fail_msg("Unable to open %s: %s\n", file, strerror(errno)); 209 209
+5 -1
tools/testing/selftests/mm/protection_keys.c
··· 1745 1745 shadow_pkey_reg = __read_pkey_reg(); 1746 1746 } 1747 1747 1748 + pid_t parent_pid; 1749 + 1748 1750 void restore_settings_atexit(void) 1749 1751 { 1750 - cat_into_file(buf, "/proc/sys/vm/nr_hugepages"); 1752 + if (parent_pid == getpid()) 1753 + cat_into_file(buf, "/proc/sys/vm/nr_hugepages"); 1751 1754 } 1752 1755 1753 1756 void save_settings(void) ··· 1776 1773 exit(__LINE__); 1777 1774 } 1778 1775 1776 + parent_pid = getpid(); 1779 1777 atexit(restore_settings_atexit); 1780 1778 close(fd); 1781 1779 }
+1 -1
tools/testing/selftests/mm/soft-dirty.c
··· 137 137 if (!map) 138 138 ksft_exit_fail_msg("anon mmap failed\n"); 139 139 } else { 140 - test_fd = open(fname, O_RDWR | O_CREAT); 140 + test_fd = open(fname, O_RDWR | O_CREAT, 0664); 141 141 if (test_fd < 0) { 142 142 ksft_test_result_skip("Test %s open() file failed\n", __func__); 143 143 return;
+1 -1
tools/testing/selftests/mm/split_huge_page_test.c
··· 223 223 ksft_exit_fail_msg("Fail to create file-backed THP split testing file\n"); 224 224 } 225 225 226 - fd = open(testfile, O_CREAT|O_WRONLY); 226 + fd = open(testfile, O_CREAT|O_WRONLY, 0664); 227 227 if (fd == -1) { 228 228 ksft_perror("Cannot open testing file"); 229 229 goto cleanup;
+3
tools/testing/selftests/mm/uffd-common.c
··· 18 18 unsigned long long *count_verify; 19 19 uffd_test_ops_t *uffd_test_ops; 20 20 uffd_test_case_ops_t *uffd_test_case_ops; 21 + atomic_bool ready_for_fork; 21 22 22 23 static int uffd_mem_fd_create(off_t mem_size, bool hugetlb) 23 24 { ··· 518 517 pollfd[0].events = POLLIN; 519 518 pollfd[1].fd = pipefd[cpu*2]; 520 519 pollfd[1].events = POLLIN; 520 + 521 + ready_for_fork = true; 521 522 522 523 for (;;) { 523 524 ret = poll(pollfd, 2, -1);
+2
tools/testing/selftests/mm/uffd-common.h
··· 32 32 #include <inttypes.h> 33 33 #include <stdint.h> 34 34 #include <sys/random.h> 35 + #include <stdatomic.h> 35 36 36 37 #include "../kselftest.h" 37 38 #include "vm_util.h" ··· 104 103 extern bool test_uffdio_wp; 105 104 extern unsigned long long *count_verify; 106 105 extern volatile bool test_uffdio_copy_eexist; 106 + extern atomic_bool ready_for_fork; 107 107 108 108 extern uffd_test_ops_t anon_uffd_test_ops; 109 109 extern uffd_test_ops_t shmem_uffd_test_ops;
+12 -1
tools/testing/selftests/mm/uffd-unit-tests.c
··· 775 775 char c; 776 776 struct uffd_args args = { 0 }; 777 777 778 + ready_for_fork = false; 779 + 778 780 fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); 779 781 780 782 if (uffd_register(uffd, area_dst, nr_pages * page_size, ··· 791 789 args.apply_wp = wp; 792 790 if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args)) 793 791 err("uffd_poll_thread create"); 792 + 793 + while (!ready_for_fork) 794 + ; /* Wait for the poll_thread to start executing before forking */ 794 795 795 796 pid = fork(); 796 797 if (pid < 0) ··· 834 829 char c; 835 830 struct uffd_args args = { 0 }; 836 831 832 + ready_for_fork = false; 833 + 837 834 fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK); 838 835 if (uffd_register(uffd, area_dst, nr_pages * page_size, 839 836 true, wp, false)) ··· 844 837 args.apply_wp = wp; 845 838 if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args)) 846 839 err("uffd_poll_thread create"); 840 + 841 + while (!ready_for_fork) 842 + ; /* Wait for the poll_thread to start executing before forking */ 847 843 848 844 pid = fork(); 849 845 if (pid < 0) ··· 1437 1427 .uffd_fn = uffd_sigbus_wp_test, 1438 1428 .mem_targets = MEM_ALL, 1439 1429 .uffd_feature_required = UFFD_FEATURE_SIGBUS | 1440 - UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_PAGEFAULT_FLAG_WP, 1430 + UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_PAGEFAULT_FLAG_WP | 1431 + UFFD_FEATURE_WP_HUGETLBFS_SHMEM, 1441 1432 }, 1442 1433 { 1443 1434 .name = "events",
+128 -77
tools/testing/selftests/net/test_vxlan_mdb.sh
··· 1177 1177 local plen=$1; shift 1178 1178 local enc_ethtype=$1; shift 1179 1179 local grp=$1; shift 1180 + local grp_dmac=$1; shift 1180 1181 local src=$1; shift 1181 1182 local mz=$1; shift ··· 1196 1195 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep2_ip src_vni 10020" 1197 1196 1198 1197 run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $vtep1_ip action pass" 1199 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1198 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1200 1199 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1201 1200 log_test $? 0 "Destination IP - match" 1202 1201 1203 - run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1202 + run_cmd "ip netns exec $ns1 $mz br0.20 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1204 1203 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1205 1204 log_test $? 0 "Destination IP - no match" ··· 1213 1212 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip dst_port 1111 src_vni 10020" 1214 1213 1215 1214 run_cmd "tc -n $ns2 filter replace dev veth0 ingress pref 1 handle 101 proto $enc_ethtype flower ip_proto udp dst_port 4789 action pass" 1216 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1215 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1217 1216 tc_check_packets "$ns2" "dev veth0 ingress" 101 1 1218 1217 log_test $? 0 "Default destination port - match" 1219 1218 1220 - run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1219 + run_cmd "ip netns exec $ns1 $mz br0.20 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1221 1220 tc_check_packets "$ns2" "dev veth0 ingress" 101 1 1222 1221 log_test $? 0 "Default destination port - no match" 1223 1222 1224 1223 run_cmd "tc -n $ns2 filter replace dev veth0 ingress pref 1 handle 101 proto $enc_ethtype flower ip_proto udp dst_port 1111 action pass" 1225 - run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1224 + run_cmd "ip netns exec $ns1 $mz br0.20 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1226 1225 tc_check_packets "$ns2" "dev veth0 ingress" 101 1 1227 1226 log_test $? 0 "Non-default destination port - match" 1228 1227 1229 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1228 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1230 1229 tc_check_packets "$ns2" "dev veth0 ingress" 101 1 1231 1230 log_test $? 0 "Non-default destination port - no match" ··· 1239 1238 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip src_vni 10020" 1240 1239 1241 1240 run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_key_id 10010 action pass" 1242 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1241 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1243 1242 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1244 1243 log_test $? 0 "Default destination VNI - match" 1245 1244 1246 - run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1245 + run_cmd "ip netns exec $ns1 $mz br0.20 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1247 1246 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1248 1247 log_test $? 0 "Default destination VNI - no match" ··· 1251 1250 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip vni 10010 src_vni 10020" 1252 1251 1253 1252 run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_key_id 10020 action pass" 1254 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1253 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1255 1254 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1256 1255 log_test $? 0 "Non-default destination VNI - match" 1257 1256 1258 - run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1257 + run_cmd "ip netns exec $ns1 $mz br0.20 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1259 1258 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1260 1259 log_test $? 0 "Non-default destination VNI - no match" ··· 1273 1272 local plen=32 1274 1273 local enc_ethtype="ip" 1275 1274 local grp=239.1.1.1 1275 + local grp_dmac=01:00:5e:01:01:01 1276 1276 local src=192.0.2.129 1277 1277 1278 1278 echo ··· 1281 1279 echo "------------------------------------------------------------------" 1282 1280 1283 1281 encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \ 1284 - $grp $src "mausezahn" 1282 + $grp $grp_dmac $src "mausezahn" 1285 1283 } 1286 1284 1287 1285 encap_params_ipv6_ipv4() ··· 1293 1291 local plen=32 1294 1292 local enc_ethtype="ip" 1295 1293 local grp=ff0e::1 1294 + local grp_dmac=33:33:00:00:00:01 1296 1295 local src=2001:db8:100::1 1297 1296 1298 1297 echo ··· 1301 1298 echo "------------------------------------------------------------------" 1302 1299 1303 1300 encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \ 1304 - $grp $src "mausezahn -6" 1301 + $grp $grp_dmac $src "mausezahn -6" 1305 1302 } 1306 1303 1307 1304 encap_params_ipv4_ipv6() ··· 1313 1310 local plen=128 1314 1311 local enc_ethtype="ipv6" 1315 1312 local grp=239.1.1.1 1313 + local grp_dmac=01:00:5e:01:01:01 1316 1314 local src=192.0.2.129 1317 1315 1318 1316 echo ··· 1321 1317 echo "------------------------------------------------------------------" 1322 1318 1323 1319 encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \ 1324 - $grp $src "mausezahn" 1320 + $grp $grp_dmac $src "mausezahn" 1325 1321 } 1326 1322 1327 1323 encap_params_ipv6_ipv6() ··· 1333 1329 local plen=128 1334 1330 local enc_ethtype="ipv6" 1335 1331 local grp=ff0e::1 1332 + local grp_dmac=33:33:00:00:00:01 1336 1333 local src=2001:db8:100::1 1337 1334 1338 1335 echo ··· 1341 1336 echo "------------------------------------------------------------------" 1342 1337 1343 1338 encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \ 1344 - $grp $src "mausezahn -6" 1339 + $grp $grp_dmac $src "mausezahn -6" 1345 1340 } 1346 1341 1347 1342 starg_exclude_ir_common() ··· 1352 1347 local vtep2_ip=$1; shift 1353 1348 local plen=$1; shift 1354 1349 local grp=$1; shift 1350 + local grp_dmac=$1; shift 1355 1351 local valid_src=$1; shift 1356 1352 local invalid_src=$1; shift 1357 1353 local mz=$1; shift ··· 1374 1368 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $invalid_src dst $vtep2_ip src_vni 10010" 1375 1369 1376 1370 # Check that invalid source is not forwarded to any VTEP. 1377 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1371 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1378 1372 tc_check_packets "$ns2" "dev vx0 ingress" 101 0 1379 1373 log_test $? 0 "Block excluded source - first VTEP" 1380 1374 tc_check_packets "$ns2" "dev vx0 ingress" 102 0 1381 1375 log_test $? 0 "Block excluded source - second VTEP" 1382 1376 1383 1377 # Check that valid source is forwarded to both VTEPs. 1384 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1378 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1385 1379 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1386 1380 log_test $? 0 "Forward valid source - first VTEP" 1387 1381 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 ··· 1391 1385 run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep2_ip src_vni 10010" 1392 1386 1393 1387 # Check that invalid source is not forwarded to any VTEP. 1394 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1388 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1395 1389 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1396 1390 log_test $? 0 "Block excluded source after removal - first VTEP" 1397 1391 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 1398 1392 log_test $? 0 "Block excluded source after removal - second VTEP" 1399 1393 1400 1394 # Check that valid source is forwarded to the remaining VTEP. 1401 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1395 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1402 1396 tc_check_packets "$ns2" "dev vx0 ingress" 101 2 1403 1397 log_test $? 0 "Forward valid source after removal - first VTEP" 1404 1398 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 ··· 1413 1407 local vtep2_ip=198.51.100.200 1414 1408 local plen=32 1415 1409 local grp=239.1.1.1 1410 + local grp_dmac=01:00:5e:01:01:01 1416 1411 local valid_src=192.0.2.129 1417 1412 local invalid_src=192.0.2.145 1418 1413 ··· 1422 1415 echo "-------------------------------------------------------------" 1423 1416 1424 1417 starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1425 - $valid_src $invalid_src "mausezahn" 1418 + $grp_dmac $valid_src $invalid_src "mausezahn" 1426 1419 } 1427 1420 1428 1421 starg_exclude_ir_ipv6_ipv4() ··· 1433 1426 local vtep2_ip=198.51.100.200 1434 1427 local plen=32 1435 1428 local grp=ff0e::1 1429 + local grp_dmac=33:33:00:00:00:01 1436 1430 local valid_src=2001:db8:100::1 1437 1431 local invalid_src=2001:db8:200::1 1438 1432 ··· 1442 1434 echo "-------------------------------------------------------------" 1443 1435 1444 1436 starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1445 - $valid_src $invalid_src "mausezahn -6" 1437 + $grp_dmac $valid_src $invalid_src "mausezahn -6" 1446 1438 } 1447 1439 1448 1440 starg_exclude_ir_ipv4_ipv6() ··· 1453 1445 local vtep2_ip=2001:db8:2000::1 1454 1446 local plen=128 1455 1447 local grp=239.1.1.1 1448 + local grp_dmac=01:00:5e:01:01:01 1456 1449 local valid_src=192.0.2.129 1457 1450 local invalid_src=192.0.2.145 1458 1451 ··· 1462 1453 echo "-------------------------------------------------------------" 1463 1454 1464 1455 starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1465 - $valid_src $invalid_src "mausezahn" 1456 + $grp_dmac $valid_src $invalid_src "mausezahn" 1466 1457 } 1467 1458 1468 1459 starg_exclude_ir_ipv6_ipv6() ··· 1473 1464 local vtep2_ip=2001:db8:2000::1 1474 1465 local plen=128 1475 1466 local grp=ff0e::1 1467 + local grp_dmac=33:33:00:00:00:01 1476 1468 local valid_src=2001:db8:100::1 1477 1469 local invalid_src=2001:db8:200::1 1478 1470 ··· 1482 1472 echo "-------------------------------------------------------------" 1483 1473 1484 1474 starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1485 - $valid_src $invalid_src "mausezahn -6" 1475 + $grp_dmac $valid_src $invalid_src "mausezahn -6" 1486 1476 } 1487 1477 1488 1478 starg_include_ir_common() ··· 1493 1483 local vtep2_ip=$1; shift 1494 1484 local plen=$1; shift 1495 1485 local grp=$1; shift 1486 + local grp_dmac=$1; shift 1496 1487 local valid_src=$1; shift 1497 1488 local invalid_src=$1; shift 1498 1489 local mz=$1; shift ··· 1515 1504 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $valid_src dst $vtep2_ip src_vni 10010" 1516 1505 1517 1506 # Check that invalid source is not forwarded to any VTEP. 1518 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1507 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1519 1508 tc_check_packets "$ns2" "dev vx0 ingress" 101 0 1520 1509 log_test $? 0 "Block excluded source - first VTEP" 1521 1510 tc_check_packets "$ns2" "dev vx0 ingress" 102 0 1522 1511 log_test $? 0 "Block excluded source - second VTEP" 1523 1512 1524 1513 # Check that valid source is forwarded to both VTEPs. 1525 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1514 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1526 1515 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1527 1516 log_test $? 0 "Forward valid source - first VTEP" 1528 1517 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 ··· 1532 1521 run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep2_ip src_vni 10010" 1533 1522 1534 1523 # Check that invalid source is not forwarded to any VTEP. 1535 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1524 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1536 1525 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1537 1526 log_test $? 0 "Block excluded source after removal - first VTEP" 1538 1527 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 1539 1528 log_test $? 0 "Block excluded source after removal - second VTEP" 1540 1529 1541 1530 # Check that valid source is forwarded to the remaining VTEP. 1542 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1531 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1543 1532 tc_check_packets "$ns2" "dev vx0 ingress" 101 2 1544 1533 log_test $? 0 "Forward valid source after removal - first VTEP" 1545 1534 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 ··· 1554 1543 local vtep2_ip=198.51.100.200 1555 1544 local plen=32 1556 1545 local grp=239.1.1.1 1546 + local grp_dmac=01:00:5e:01:01:01 1557 1547 local valid_src=192.0.2.129 1558 1548 local invalid_src=192.0.2.145 1559 1549 ··· 1563 1551 echo "-------------------------------------------------------------" 1564 1552 1565 1553 starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1566 - $valid_src $invalid_src "mausezahn" 1554 + $grp_dmac $valid_src $invalid_src "mausezahn" 1567 1555 } 1568 1556 1569 1557 starg_include_ir_ipv6_ipv4() ··· 1574 1562 local vtep2_ip=198.51.100.200 1575 1563 local plen=32 1576 1564 local grp=ff0e::1 1565 + local grp_dmac=33:33:00:00:00:01 1577 1566 local valid_src=2001:db8:100::1 1578 1567 local invalid_src=2001:db8:200::1 1579 1568 ··· 1583 1570 echo "-------------------------------------------------------------" 1584 1571 1585 1572 starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1586 - $valid_src $invalid_src "mausezahn -6" 1573 + $grp_dmac $valid_src $invalid_src "mausezahn -6" 1587 1574 } 1588 1575 1589 1576 starg_include_ir_ipv4_ipv6() ··· 1594 1581 local vtep2_ip=2001:db8:2000::1 1595 1582 local plen=128 1596 1583 local grp=239.1.1.1 1584 + local grp_dmac=01:00:5e:01:01:01 1597 1585 local valid_src=192.0.2.129 1598 1586 local invalid_src=192.0.2.145 1599 1587 ··· 1603 1589 echo "-------------------------------------------------------------" 1604 1590 1605 1591 starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1606 - $valid_src $invalid_src "mausezahn" 1592 + $grp_dmac $valid_src $invalid_src "mausezahn" 1607 1593 } 1608 1594 1609 1595 starg_include_ir_ipv6_ipv6() ··· 1614 1600 local vtep2_ip=2001:db8:2000::1 1615 1601 local plen=128 1616 1602 local grp=ff0e::1 1603 + local grp_dmac=33:33:00:00:00:01 1617 1604 local valid_src=2001:db8:100::1 1618 1605 local invalid_src=2001:db8:200::1 1619 1606 ··· 1623 1608 echo "-------------------------------------------------------------" 1624 1609 1625 1610 starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \ 1626 - $valid_src $invalid_src "mausezahn -6" 1611 + $grp_dmac $valid_src $invalid_src "mausezahn -6" 1627 1612 } 1628 1613 1629 1614 starg_exclude_p2mp_common() ··· 1633 1618 local mcast_grp=$1; shift 1634 1619 local plen=$1; shift 1635 1620 local grp=$1; shift 1621 + local grp_dmac=$1; shift 1636 1622 local valid_src=$1; shift 1637 1623 local invalid_src=$1; shift 1638 1624 local mz=$1; shift ··· 1651 1635 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $invalid_src dst $mcast_grp src_vni 10010 via veth0" 1652 1636 1653 1637 # Check that invalid source is not forwarded. 1654 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1638 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1655 1639 tc_check_packets "$ns2" "dev vx0 ingress" 101 0 1656 1640 log_test $? 0 "Block excluded source" 1657 1641 1658 1642 # Check that valid source is forwarded. 1659 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1643 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1660 1644 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1661 1645 log_test $? 0 "Forward valid source" ··· 1664 1648 run_cmd "ip -n $ns2 address del $mcast_grp/$plen dev veth0" 1665 1649 1666 1650 # Check that valid source is not received anymore. 1667 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1651 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1668 1652 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1669 1653 log_test $? 0 "Receive of valid source after removal from group" 1670 1654 } ··· 1676 1660 local mcast_grp=238.1.1.1 1677 1661 local plen=32 1678 1662 local grp=239.1.1.1 1663 + local grp_dmac=01:00:5e:01:01:01 1679 1664 local valid_src=192.0.2.129 1680 1665 local invalid_src=192.0.2.145 1681 1666 ··· 1684 1667 echo "Data path: (*, G) EXCLUDE - P2MP - IPv4 overlay / IPv4 underlay" 1685 1668 echo "---------------------------------------------------------------" 1686 1669 1687 - starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1670 + starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1688 1671 $valid_src $invalid_src "mausezahn" 1689 1672 } 1690 1673 ··· 1695 1678 local mcast_grp=238.1.1.1 1696 1679 local plen=32 1697 1680 local grp=ff0e::1 1681 + local grp_dmac=33:33:00:00:00:01 1698 1682 local valid_src=2001:db8:100::1 1699 1683 local invalid_src=2001:db8:200::1 1700 1684 ··· 1703 1685 echo "Data path: (*, G) EXCLUDE - P2MP - IPv6 overlay / IPv4 underlay" 1704 1686 echo "---------------------------------------------------------------" 1705 1687 1706 - starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1688 + starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1707 1689 $valid_src $invalid_src "mausezahn -6" 1708 1690 } ··· 1714 1696 local mcast_grp=ff0e::2 1715 1697 local plen=128 1716 1698 local grp=239.1.1.1 1699 + local grp_dmac=01:00:5e:01:01:01 1717 1700 local valid_src=192.0.2.129 1718 1701 local invalid_src=192.0.2.145 1719 1702 ··· 1722 1703 echo "Data path: (*, G) EXCLUDE - P2MP - IPv4 overlay / IPv6 underlay" 1723 1704 echo "---------------------------------------------------------------"
1724 1705 1725 - starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1706 + starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1726 1707 $valid_src $invalid_src "mausezahn" 1727 1708 } 1728 1709 ··· 1733 1714 local mcast_grp=ff0e::2 1734 1715 local plen=128 1735 1716 local grp=ff0e::1 1717 + local grp_dmac=33:33:00:00:00:01 1736 1718 local valid_src=2001:db8:100::1 1737 1719 local invalid_src=2001:db8:200::1 1738 1720 ··· 1741 1721 echo "Data path: (*, G) EXCLUDE - P2MP - IPv6 overlay / IPv6 underlay" 1742 1722 echo "---------------------------------------------------------------" 1743 1723 1744 - starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1724 + starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1745 1725 $valid_src $invalid_src "mausezahn -6" 1746 1726 } 1747 1727 ··· 1752 1732 local mcast_grp=$1; shift 1753 1733 local plen=$1; shift 1754 1734 local grp=$1; shift 1735 + local grp_dmac=$1; shift 1755 1736 local valid_src=$1; shift 1756 1737 local invalid_src=$1; shift 1757 1738 local mz=$1; shift ··· 1770 1749 run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $valid_src dst $mcast_grp src_vni 10010 via veth0" 1771 1750 1772 1751 # Check that invalid source is not forwarded. 1773 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1752 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1774 1753 tc_check_packets "$ns2" "dev vx0 ingress" 101 0 1775 1754 log_test $? 0 "Block excluded source" 1776 1755 1777 1756 # Check that valid source is forwarded. 
1778 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1757 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1779 1758 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1780 1759 log_test $? 0 "Forward valid source" 1781 1760 ··· 1783 1762 run_cmd "ip -n $ns2 address del $mcast_grp/$plen dev veth0" 1784 1763 1785 1764 # Check that valid source is not received anymore. 1786 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1765 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1787 1766 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 1788 1767 log_test $? 0 "Receive of valid source after removal from group" 1789 1768 } ··· 1795 1774 local mcast_grp=238.1.1.1 1796 1775 local plen=32 1797 1776 local grp=239.1.1.1 1777 + local grp_dmac=01:00:5e:01:01:01 1798 1778 local valid_src=192.0.2.129 1799 1779 local invalid_src=192.0.2.145 1800 1780 ··· 1803 1781 echo "Data path: (*, G) INCLUDE - P2MP - IPv4 overlay / IPv4 underlay" 1804 1782 echo "---------------------------------------------------------------" 1805 1783 1806 - starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1784 + starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1807 1785 $valid_src $invalid_src "mausezahn" 1808 1786 } 1809 1787 ··· 1814 1792 local mcast_grp=238.1.1.1 1815 1793 local plen=32 1816 1794 local grp=ff0e::1 1795 + local grp_dmac=33:33:00:00:00:01 1817 1796 local valid_src=2001:db8:100::1 1818 1797 local invalid_src=2001:db8:200::1 1819 1798 ··· 1822 1799 echo "Data path: (*, G) INCLUDE - P2MP - IPv6 overlay / IPv4 underlay" 1823 1800 echo "---------------------------------------------------------------" 1824 1801 1825 - starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1802 + starg_include_p2mp_common $ns1 
$ns2 $mcast_grp $plen $grp $grp_dmac \ 1826 1803 $valid_src $invalid_src "mausezahn -6" 1827 1804 } 1828 1805 ··· 1833 1810 local mcast_grp=ff0e::2 1834 1811 local plen=128 1835 1812 local grp=239.1.1.1 1813 + local grp_dmac=01:00:5e:01:01:01 1836 1814 local valid_src=192.0.2.129 1837 1815 local invalid_src=192.0.2.145 1838 1816 ··· 1841 1817 echo "Data path: (*, G) INCLUDE - P2MP - IPv4 overlay / IPv6 underlay" 1842 1818 echo "---------------------------------------------------------------" 1843 1819 1844 - starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1820 + starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1845 1821 $valid_src $invalid_src "mausezahn" 1846 1822 } 1847 1823 ··· 1852 1828 local mcast_grp=ff0e::2 1853 1829 local plen=128 1854 1830 local grp=ff0e::1 1831 + local grp_dmac=33:33:00:00:00:01 1855 1832 local valid_src=2001:db8:100::1 1856 1833 local invalid_src=2001:db8:200::1 1857 1834 ··· 1860 1835 echo "Data path: (*, G) INCLUDE - P2MP - IPv6 overlay / IPv6 underlay" 1861 1836 echo "---------------------------------------------------------------" 1862 1837 1863 - starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \ 1838 + starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp $grp_dmac \ 1864 1839 $valid_src $invalid_src "mausezahn -6" 1865 1840 } 1866 1841 ··· 1872 1847 local plen=$1; shift 1873 1848 local proto=$1; shift 1874 1849 local grp=$1; shift 1850 + local grp_dmac=$1; shift 1875 1851 local src=$1; shift 1876 1852 local mz=$1; shift 1877 1853 ··· 1908 1882 # Make sure that packets sent from the first VTEP over VLAN 10 are 1909 1883 # received by the SVI corresponding to the L3VNI (14000 / VLAN 4000) on 1910 1884 # the second VTEP, since it is configured as PVID. 
1911 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1885 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1912 1886 tc_check_packets "$ns2" "dev br0.4000 ingress" 101 1 1913 1887 log_test $? 0 "Egress VNI translation - PVID configured" 1914 1888 1915 1889 # Remove PVID flag from VLAN 4000 on the second VTEP and make sure 1916 1890 # packets are no longer received by the SVI interface. 1917 1891 run_cmd "bridge -n $ns2 vlan add vid 4000 dev vx0" 1918 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1892 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1919 1893 tc_check_packets "$ns2" "dev br0.4000 ingress" 101 1 1920 1894 log_test $? 0 "Egress VNI translation - no PVID configured" 1921 1895 1922 1896 # Reconfigure the PVID and make sure packets are received again. 1923 1897 run_cmd "bridge -n $ns2 vlan add vid 4000 dev vx0 pvid" 1924 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1898 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 1925 1899 tc_check_packets "$ns2" "dev br0.4000 ingress" 101 2 1926 1900 log_test $? 
0 "Egress VNI translation - PVID reconfigured" 1927 1901 } ··· 1934 1908 local plen=32 1935 1909 local proto="ipv4" 1936 1910 local grp=239.1.1.1 1911 + local grp_dmac=01:00:5e:01:01:01 1937 1912 local src=192.0.2.129 1938 1913 1939 1914 echo ··· 1942 1915 echo "----------------------------------------------------------------" 1943 1916 1944 1917 egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \ 1945 - $src "mausezahn" 1918 + $grp_dmac $src "mausezahn" 1946 1919 } 1947 1920 1948 1921 egress_vni_translation_ipv6_ipv4() ··· 1953 1926 local plen=32 1954 1927 local proto="ipv6" 1955 1928 local grp=ff0e::1 1929 + local grp_dmac=33:33:00:00:00:01 1956 1930 local src=2001:db8:100::1 1957 1931 1958 1932 echo ··· 1961 1933 echo "----------------------------------------------------------------" 1962 1934 1963 1935 egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \ 1964 - $src "mausezahn -6" 1936 + $grp_dmac $src "mausezahn -6" 1965 1937 } 1966 1938 1967 1939 egress_vni_translation_ipv4_ipv6() ··· 1972 1944 local plen=128 1973 1945 local proto="ipv4" 1974 1946 local grp=239.1.1.1 1947 + local grp_dmac=01:00:5e:01:01:01 1975 1948 local src=192.0.2.129 1976 1949 1977 1950 echo ··· 1980 1951 echo "----------------------------------------------------------------" 1981 1952 1982 1953 egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \ 1983 - $src "mausezahn" 1954 + $grp_dmac $src "mausezahn" 1984 1955 } 1985 1956 1986 1957 egress_vni_translation_ipv6_ipv6() ··· 1991 1962 local plen=128 1992 1963 local proto="ipv6" 1993 1964 local grp=ff0e::1 1965 + local grp_dmac=33:33:00:00:00:01 1994 1966 local src=2001:db8:100::1 1995 1967 1996 1968 echo ··· 1999 1969 echo "----------------------------------------------------------------" 2000 1970 2001 1971 egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \ 2002 - $src "mausezahn -6" 1972 + $grp_dmac $src "mausezahn -6" 2003 1973 } 2004 1974 2005 1975 
all_zeros_mdb_common() ··· 2012 1982 local vtep4_ip=$1; shift 2013 1983 local plen=$1; shift 2014 1984 local ipv4_grp=239.1.1.1 1985 + local ipv4_grp_dmac=01:00:5e:01:01:01 2015 1986 local ipv4_unreg_grp=239.2.2.2 1987 + local ipv4_unreg_grp_dmac=01:00:5e:02:02:02 2016 1988 local ipv4_ll_grp=224.0.0.100 1989 + local ipv4_ll_grp_dmac=01:00:5e:00:00:64 2017 1990 local ipv4_src=192.0.2.129 2018 1991 local ipv6_grp=ff0e::1 1992 + local ipv6_grp_dmac=33:33:00:00:00:01 2019 1993 local ipv6_unreg_grp=ff0e::2 1994 + local ipv6_unreg_grp_dmac=33:33:00:00:00:02 2020 1995 local ipv6_ll_grp=ff02::1 1996 + local ipv6_ll_grp_dmac=33:33:00:00:00:01 2021 1997 local ipv6_src=2001:db8:100::1 2022 1998 2023 1999 # Install all-zeros (catchall) MDB entries for IPv4 and IPv6 traffic ··· 2059 2023 2060 2024 # Send registered IPv4 multicast and make sure it only arrives to the 2061 2025 # first VTEP. 2062 - run_cmd "ip netns exec $ns1 mausezahn br0.10 -A $ipv4_src -B $ipv4_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2026 + run_cmd "ip netns exec $ns1 mausezahn br0.10 -a own -b $ipv4_grp_dmac -A $ipv4_src -B $ipv4_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2063 2027 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 2064 2028 log_test $? 0 "Registered IPv4 multicast - first VTEP" 2065 2029 tc_check_packets "$ns2" "dev vx0 ingress" 102 0 ··· 2067 2031 2068 2032 # Send unregistered IPv4 multicast that is not link-local and make sure 2069 2033 # it arrives to the first and second VTEPs. 2070 - run_cmd "ip netns exec $ns1 mausezahn br0.10 -A $ipv4_src -B $ipv4_unreg_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2034 + run_cmd "ip netns exec $ns1 mausezahn br0.10 -a own -b $ipv4_unreg_grp_dmac -A $ipv4_src -B $ipv4_unreg_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2071 2035 tc_check_packets "$ns2" "dev vx0 ingress" 101 2 2072 2036 log_test $? 
0 "Unregistered IPv4 multicast - first VTEP" 2073 2037 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 ··· 2075 2039 2076 2040 # Send IPv4 link-local multicast traffic and make sure it does not 2077 2041 # arrive to any VTEP. 2078 - run_cmd "ip netns exec $ns1 mausezahn br0.10 -A $ipv4_src -B $ipv4_ll_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2042 + run_cmd "ip netns exec $ns1 mausezahn br0.10 -a own -b $ipv4_ll_grp_dmac -A $ipv4_src -B $ipv4_ll_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2079 2043 tc_check_packets "$ns2" "dev vx0 ingress" 101 2 2080 2044 log_test $? 0 "Link-local IPv4 multicast - first VTEP" 2081 2045 tc_check_packets "$ns2" "dev vx0 ingress" 102 1 ··· 2110 2074 2111 2075 # Send registered IPv6 multicast and make sure it only arrives to the 2112 2076 # third VTEP. 2113 - run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -A $ipv6_src -B $ipv6_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2077 + run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -a own -b $ipv6_grp_dmac -A $ipv6_src -B $ipv6_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2114 2078 tc_check_packets "$ns2" "dev vx0 ingress" 103 1 2115 2079 log_test $? 0 "Registered IPv6 multicast - third VTEP" 2116 2080 tc_check_packets "$ns2" "dev vx0 ingress" 104 0 ··· 2118 2082 2119 2083 # Send unregistered IPv6 multicast that is not link-local and make sure 2120 2084 # it arrives to the third and fourth VTEPs. 2121 - run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -A $ipv6_src -B $ipv6_unreg_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2085 + run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -a own -b $ipv6_unreg_grp_dmac -A $ipv6_src -B $ipv6_unreg_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2122 2086 tc_check_packets "$ns2" "dev vx0 ingress" 103 2 2123 2087 log_test $? 0 "Unregistered IPv6 multicast - third VTEP" 2124 2088 tc_check_packets "$ns2" "dev vx0 ingress" 104 1 ··· 2126 2090 2127 2091 # Send IPv6 link-local multicast traffic and make sure it does not 2128 2092 # arrive to any VTEP. 
2129 - run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -A $ipv6_src -B $ipv6_ll_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2093 + run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -a own -b $ipv6_ll_grp_dmac -A $ipv6_src -B $ipv6_ll_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2130 2094 tc_check_packets "$ns2" "dev vx0 ingress" 103 2 2131 2095 log_test $? 0 "Link-local IPv6 multicast - third VTEP" 2132 2096 tc_check_packets "$ns2" "dev vx0 ingress" 104 1 ··· 2201 2165 local plen=$1; shift 2202 2166 local proto=$1; shift 2203 2167 local grp=$1; shift 2168 + local grp_dmac=$1; shift 2204 2169 local src=$1; shift 2205 2170 local mz=$1; shift 2206 2171 ··· 2225 2188 2226 2189 # Send IP multicast traffic and make sure it is forwarded by the MDB 2227 2190 # and only arrives to the first VTEP. 2228 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2191 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2229 2192 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 2230 2193 log_test $? 0 "IP multicast - first VTEP" 2231 2194 tc_check_packets "$ns2" "dev vx0 ingress" 102 0 ··· 2242 2205 # Remove the MDB entry and make sure that IP multicast is now forwarded 2243 2206 # by the FDB to the second VTEP. 2244 2207 run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10010" 2245 - run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2208 + run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b $grp_dmac -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q" 2246 2209 tc_check_packets "$ns2" "dev vx0 ingress" 101 1 2247 2210 log_test $? 
0 "IP multicast after removal - first VTEP" 2248 2211 tc_check_packets "$ns2" "dev vx0 ingress" 102 2 ··· 2258 2221 local plen=32 2259 2222 local proto="ipv4" 2260 2223 local grp=239.1.1.1 2224 + local grp_dmac=01:00:5e:01:01:01 2261 2225 local src=192.0.2.129 2262 2226 2263 2227 echo 2264 2228 echo "Data path: MDB with FDB - IPv4 overlay / IPv4 underlay" 2265 2229 echo "------------------------------------------------------" 2266 2230 2267 - mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \ 2268 - "mausezahn" 2231 + mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp \ 2232 + $grp_dmac $src "mausezahn" 2269 2233 } 2270 2234 2271 2235 mdb_fdb_ipv6_ipv4() ··· 2278 2240 local plen=32 2279 2241 local proto="ipv6" 2280 2242 local grp=ff0e::1 2243 + local grp_dmac=33:33:00:00:00:01 2281 2244 local src=2001:db8:100::1 2282 2245 2283 2246 echo 2284 2247 echo "Data path: MDB with FDB - IPv6 overlay / IPv4 underlay" 2285 2248 echo "------------------------------------------------------" 2286 2249 2287 - mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \ 2288 - "mausezahn -6" 2250 + mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp \ 2251 + $grp_dmac $src "mausezahn -6" 2289 2252 } 2290 2253 2291 2254 mdb_fdb_ipv4_ipv6() ··· 2298 2259 local plen=128 2299 2260 local proto="ipv4" 2300 2261 local grp=239.1.1.1 2262 + local grp_dmac=01:00:5e:01:01:01 2301 2263 local src=192.0.2.129 2302 2264 2303 2265 echo 2304 2266 echo "Data path: MDB with FDB - IPv4 overlay / IPv6 underlay" 2305 2267 echo "------------------------------------------------------" 2306 2268 2307 - mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \ 2308 - "mausezahn" 2269 + mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp \ 2270 + $grp_dmac $src "mausezahn" 2309 2271 } 2310 2272 2311 2273 mdb_fdb_ipv6_ipv6() ··· 2318 2278 local plen=128 2319 2279 local proto="ipv6" 2320 2280 local grp=ff0e::1 2281 + local 
grp_dmac=33:33:00:00:00:01 2321 2282 local src=2001:db8:100::1 2322 2283 2323 2284 echo 2324 2285 echo "Data path: MDB with FDB - IPv6 overlay / IPv6 underlay" 2325 2286 echo "------------------------------------------------------" 2326 2287 2327 - mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \ 2328 - "mausezahn -6" 2288 + mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp \ 2289 + $grp_dmac $src "mausezahn -6" 2329 2290 } 2330 2291 2331 2292 mdb_grp1_loop() ··· 2361 2320 local vtep1_ip=$1; shift 2362 2321 local vtep2_ip=$1; shift 2363 2322 local grp1=$1; shift 2323 + local grp1_dmac=$1; shift 2364 2324 local grp2=$1; shift 2325 + local grp2_dmac=$1; shift 2365 2326 local src=$1; shift 2366 2327 local mz=$1; shift 2367 2328 local pid1 ··· 2388 2345 pid1=$! 2389 2346 mdb_grp2_loop $ns1 $vtep1_ip $vtep2_ip $grp2 & 2390 2347 pid2=$! 2391 - ip netns exec $ns1 $mz br0.10 -A $src -B $grp1 -t udp sp=12345,dp=54321 -p 100 -c 0 -q & 2348 + ip netns exec $ns1 $mz br0.10 -a own -b $grp1_dmac -A $src -B $grp1 -t udp sp=12345,dp=54321 -p 100 -c 0 -q & 2392 2349 pid3=$! 2393 - ip netns exec $ns1 $mz br0.10 -A $src -B $grp2 -t udp sp=12345,dp=54321 -p 100 -c 0 -q & 2350 + ip netns exec $ns1 $mz br0.10 -a own -b $grp2_dmac -A $src -B $grp2 -t udp sp=12345,dp=54321 -p 100 -c 0 -q & 2394 2351 pid4=$! 
2395 2352 2396 2353 sleep 30 ··· 2406 2363 local vtep1_ip=198.51.100.100 2407 2364 local vtep2_ip=198.51.100.200 2408 2365 local grp1=239.1.1.1 2366 + local grp1_dmac=01:00:5e:01:01:01 2409 2367 local grp2=239.2.2.2 2368 + local grp2_dmac=01:00:5e:02:02:02 2410 2369 local src=192.0.2.129 2411 2370 2412 2371 echo 2413 2372 echo "Data path: MDB torture test - IPv4 overlay / IPv4 underlay" 2414 2373 echo "----------------------------------------------------------" 2415 2374 2416 - mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \ 2417 - "mausezahn" 2375 + mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp1_dmac $grp2 \ 2376 + $grp2_dmac $src "mausezahn" 2418 2377 } 2419 2378 2420 2379 mdb_torture_ipv6_ipv4() ··· 2425 2380 local vtep1_ip=198.51.100.100 2426 2381 local vtep2_ip=198.51.100.200 2427 2382 local grp1=ff0e::1 2383 + local grp1_dmac=33:33:00:00:00:01 2428 2384 local grp2=ff0e::2 2385 + local grp2_dmac=33:33:00:00:00:02 2429 2386 local src=2001:db8:100::1 2430 2387 2431 2388 echo 2432 2389 echo "Data path: MDB torture test - IPv6 overlay / IPv4 underlay" 2433 2390 echo "----------------------------------------------------------" 2434 2391 2435 - mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \ 2436 - "mausezahn -6" 2392 + mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp1_dmac $grp2 \ 2393 + $grp2_dmac $src "mausezahn -6" 2437 2394 } 2438 2395 2439 2396 mdb_torture_ipv4_ipv6() ··· 2444 2397 local vtep1_ip=2001:db8:1000::1 2445 2398 local vtep2_ip=2001:db8:2000::1 2446 2399 local grp1=239.1.1.1 2400 + local grp1_dmac=01:00:5e:01:01:01 2447 2401 local grp2=239.2.2.2 2402 + local grp2_dmac=01:00:5e:02:02:02 2448 2403 local src=192.0.2.129 2449 2404 2450 2405 echo 2451 2406 echo "Data path: MDB torture test - IPv4 overlay / IPv6 underlay" 2452 2407 echo "----------------------------------------------------------" 2453 2408 2454 - mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \ 2455 - "mausezahn" 2409 + 
mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp1_dmac $grp2 \ 2410 + $grp2_dmac $src "mausezahn" 2456 2411 } 2457 2412 2458 2413 mdb_torture_ipv6_ipv6() ··· 2463 2414 local vtep1_ip=2001:db8:1000::1 2464 2415 local vtep2_ip=2001:db8:2000::1 2465 2416 local grp1=ff0e::1 2417 + local grp1_dmac=33:33:00:00:00:01 2466 2418 local grp2=ff0e::2 2419 + local grp2_dmac=33:33:00:00:00:02 2467 2420 local src=2001:db8:100::1 2468 2421 2469 2422 echo 2470 2423 echo "Data path: MDB torture test - IPv6 overlay / IPv6 underlay" 2471 2424 echo "----------------------------------------------------------" 2472 2425 2473 - mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \ 2474 - "mausezahn -6" 2426 + mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp1_dmac $grp2 \ 2427 + $grp2_dmac $src "mausezahn -6" 2475 2428 } 2476 2429 2477 2430 ################################################################################
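The `grp_dmac` values added throughout this patch follow the standard mapping from a multicast IP group to its Ethernet destination MAC: `01:00:5e` plus the low 23 bits of the IPv4 group address (RFC 1112), or `33:33` plus the low 32 bits of the IPv6 group address (RFC 2464). mausezahn's `-b` option takes this destination MAC and `-a own` uses the sending interface's own source MAC. A minimal sketch of the derivation for the addresses used above — the helper names are illustrative, not part of the selftest, and the IPv6 helper assumes the low 32 bits fit in the final hextet (as they do for `ff0e::1`):

```shell
#!/bin/sh

# IPv4: destination MAC is 01:00:5e followed by the lower 23 bits of
# the group address, i.e. the second octet with its top bit masked off.
ipv4_grp_dmac()
{
	local grp=$1
	local o2 o3 o4

	o2=$(echo "$grp" | cut -d. -f2)
	o3=$(echo "$grp" | cut -d. -f3)
	o4=$(echo "$grp" | cut -d. -f4)
	printf "01:00:5e:%02x:%02x:%02x\n" $((o2 & 127)) "$o3" "$o4"
}

# IPv6: destination MAC is 33:33 followed by the lower 32 bits of the
# group address. Only groups like ff0e::1, whose low 32 bits sit
# entirely in the last hextet, are handled by this sketch.
ipv6_grp_dmac()
{
	local grp=$1
	local last

	last=$(echo "$grp" | awk -F: '{ print $NF }')
	printf "33:33:00:00:%02x:%02x\n" $((0x$last >> 8)) $((0x$last & 255))
}

ipv4_grp_dmac 239.1.1.1		# 01:00:5e:01:01:01
ipv4_grp_dmac 224.0.0.100	# 01:00:5e:00:00:64
ipv6_grp_dmac ff0e::1		# 33:33:00:00:00:01
```

Note how distinct groups can share a MAC: the IPv6 link-local group `ff02::1` maps to the same `33:33:00:00:00:01` as `ff0e::1`, which is why the patch keys forwarding checks on the IP group while only using the MAC to build a well-formed frame.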
+34
tools/testing/selftests/net/tls.c
···
1615 1615 EXPECT_EQ(errno, EINVAL);
1616 1616 }
1617 1617
1618 + TEST_F(tls, recv_efault)
1619 + {
1620 + char *rec1 = "1111111111";
1621 + char *rec2 = "2222222222";
1622 + struct msghdr hdr = {};
1623 + struct iovec iov[2];
1624 + char recv_mem[12];
1625 + int ret;
1626 +
1627 + if (self->notls)
1628 + SKIP(return, "no TLS support");
1629 +
1630 + EXPECT_EQ(send(self->fd, rec1, 10, 0), 10);
1631 + EXPECT_EQ(send(self->fd, rec2, 10, 0), 10);
1632 +
1633 + iov[0].iov_base = recv_mem;
1634 + iov[0].iov_len = sizeof(recv_mem);
1635 + iov[1].iov_base = NULL; /* broken iov to make process_rx_list fail */
1636 + iov[1].iov_len = 1;
1637 +
1638 + hdr.msg_iovlen = 2;
1639 + hdr.msg_iov = iov;
1640 +
1641 + EXPECT_EQ(recv(self->cfd, recv_mem, 1, 0), 1);
1642 + EXPECT_EQ(recv_mem[0], rec1[0]);
1643 +
1644 + ret = recvmsg(self->cfd, &hdr, 0);
1645 + EXPECT_LE(ret, sizeof(recv_mem));
1646 + EXPECT_GE(ret, 9);
1647 + EXPECT_EQ(memcmp(rec1, recv_mem, 9), 0);
1648 + if (ret > 9)
1649 + EXPECT_EQ(memcmp(rec2, recv_mem + 9, ret - 9), 0);
1650 + }
1651 +
1618 1652 FIXTURE(tls_err)
1619 1653 {
1620 1654 int fd, cfd;
+1 -1
tools/testing/selftests/seccomp/settings
···
1 - timeout=120
1 + timeout=180