Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3238 -1632
+2 -2
Documentation/ABI/testing/sysfs-kernel-mm-memory-tiers
··· 10 10 11 11 12 12 What: /sys/devices/virtual/memory_tiering/memory_tierN/ 13 - /sys/devices/virtual/memory_tiering/memory_tierN/nodes 13 + /sys/devices/virtual/memory_tiering/memory_tierN/nodelist 14 14 Date: August 2022 15 15 Contact: Linux memory management mailing list <linux-mm@kvack.org> 16 16 Description: Directory with details of a specific memory tier ··· 21 21 A smaller value of N implies a higher (faster) memory tier in the 22 22 hierarchy. 23 23 24 - nodes: NUMA nodes that are part of this memory tier. 24 + nodelist: NUMA nodes that are part of this memory tier. 25 25
+1 -1
Documentation/kernel-hacking/hacking.rst
··· 120 120 .. warning:: 121 121 122 122 Beware that this will return a false positive if a 123 - :ref:`botton half lock <local_bh_disable>` is held. 123 + :ref:`bottom half lock <local_bh_disable>` is held. 124 124 125 125 Some Basic Rules 126 126 ================
+3 -10
Documentation/process/2.Process.rst
··· 126 126 5.2.21 was the final stable update of the 5.2 release. 127 127 128 128 Some kernels are designated "long term" kernels; they will receive support 129 - for a longer period. As of this writing, the current long term kernels 130 - and their maintainers are: 129 + for a longer period. Please refer to the following link for the list of active 130 + long term kernel versions and their maintainers: 131 131 132 - ====== ================================ ======================= 133 - 3.16 Ben Hutchings (very long-term kernel) 134 - 4.4 Greg Kroah-Hartman & Sasha Levin (very long-term kernel) 135 - 4.9 Greg Kroah-Hartman & Sasha Levin 136 - 4.14 Greg Kroah-Hartman & Sasha Levin 137 - 4.19 Greg Kroah-Hartman & Sasha Levin 138 - 5.4 Greg Kroah-Hartman & Sasha Levin 139 - ====== ================================ ======================= 132 + https://www.kernel.org/category/releases.html 140 133 141 134 The selection of a kernel for long-term support is purely a matter of a 142 135 maintainer having the need and the time to maintain that release. There
+1 -1
Documentation/process/howto.rst
··· 36 36 - "C: A Reference Manual" by Harbison and Steele [Prentice Hall] 37 37 38 38 The kernel is written using GNU C and the GNU toolchain. While it 39 - adheres to the ISO C89 standard, it uses a number of extensions that are 39 + adheres to the ISO C11 standard, it uses a number of extensions that are 40 40 not featured in the standard. The kernel is a freestanding C 41 41 environment, with no reliance on the standard C library, so some 42 42 portions of the C standard are not supported. Arbitrary long long
+1 -1
Documentation/trace/histogram.rst
··· 39 39 will use the event's kernel stacktrace as the key. The keywords 40 40 'keys' or 'key' can be used to specify keys, and the keywords 41 41 'values', 'vals', or 'val' can be used to specify values. Compound 42 - keys consisting of up to two fields can be specified by the 'keys' 42 + keys consisting of up to three fields can be specified by the 'keys' 43 43 keyword. Hashing a compound key produces a unique entry in the 44 44 table for each unique combination of component keys, and can be 45 45 useful for providing more fine-grained summaries of event data.
+1 -1
Documentation/translations/it_IT/process/howto.rst
··· 44 44 - "C: A Reference Manual" di Harbison and Steele [Prentice Hall] 45 45 46 46 Il kernel è stato scritto usando GNU C e la toolchain GNU. 47 - Sebbene si attenga allo standard ISO C89, esso utilizza una serie di 47 + Sebbene si attenga allo standard ISO C11, esso utilizza una serie di 48 48 estensioni che non sono previste in questo standard. Il kernel è un 49 49 ambiente C indipendente, che non ha alcuna dipendenza dalle librerie 50 50 C standard, così alcune parti del C standard non sono supportate.
+1 -1
Documentation/translations/ja_JP/howto.rst
··· 65 65 - 『新・詳説 C 言語 H&S リファレンス』 (サミュエル P ハービソン/ガイ L スティール共著 斉藤 信男監訳)[ソフトバンク] 66 66 67 67 カーネルは GNU C と GNU ツールチェインを使って書かれています。カーネル 68 - は ISO C89 仕様に準拠して書く一方で、標準には無い言語拡張を多く使って 68 + は ISO C11 仕様に準拠して書く一方で、標準には無い言語拡張を多く使って 69 69 います。カーネルは標準 C ライブラリに依存しない、C 言語非依存環境です。 70 70 そのため、C の標準の中で使えないものもあります。特に任意の long long 71 71 の除算や浮動小数点は使えません。カーネルがツールチェインや C 言語拡張
+1 -1
Documentation/translations/ko_KR/howto.rst
··· 62 62 - "Practical C Programming" by Steve Oualline [O'Reilly] 63 63 - "C: A Reference Manual" by Harbison and Steele [Prentice Hall] 64 64 65 - 커널은 GNU C와 GNU 툴체인을 사용하여 작성되었다. 이 툴들은 ISO C89 표준을 65 + 커널은 GNU C와 GNU 툴체인을 사용하여 작성되었다. 이 툴들은 ISO C11 표준을 66 66 따르는 반면 표준에 있지 않은 많은 확장기능도 가지고 있다. 커널은 표준 C 67 67 라이브러리와는 관계없이 freestanding C 환경이어서 C 표준의 일부는 68 68 지원되지 않는다. 임의의 long long 나누기나 floating point는 지원되지 않는다.
+1 -1
Documentation/translations/zh_CN/process/howto.rst
··· 45 45 - "C: A Reference Manual" by Harbison and Steele [Prentice Hall] 46 46 《C语言参考手册(原书第5版)》(邱仲潘 等译)[机械工业出版社] 47 47 48 - Linux内核使用GNU C和GNU工具链开发。虽然它遵循ISO C89标准,但也用到了一些 48 + Linux内核使用GNU C和GNU工具链开发。虽然它遵循ISO C11标准,但也用到了一些 49 49 标准中没有定义的扩展。内核是自给自足的C环境,不依赖于标准C库的支持,所以 50 50 并不支持C标准中的部分定义。比如long long类型的大数除法和浮点运算就不允许 51 51 使用。有时候确实很难弄清楚内核对工具链的要求和它所使用的扩展,不幸的是目
+1 -1
Documentation/translations/zh_TW/process/howto.rst
··· 48 48 - "C: A Reference Manual" by Harbison and Steele [Prentice Hall] 49 49 《C語言參考手冊(原書第5版)》(邱仲潘 等譯)[機械工業出版社] 50 50 51 - Linux內核使用GNU C和GNU工具鏈開發。雖然它遵循ISO C89標準,但也用到了一些 51 + Linux內核使用GNU C和GNU工具鏈開發。雖然它遵循ISO C11標準,但也用到了一些 52 52 標準中沒有定義的擴展。內核是自給自足的C環境,不依賴於標準C庫的支持,所以 53 53 並不支持C標準中的部分定義。比如long long類型的大數除法和浮點運算就不允許 54 54 使用。有時候確實很難弄清楚內核對工具鏈的要求和它所使用的擴展,不幸的是目
+14 -36
MAINTAINERS
··· 4102 4102 N: bcm7120 4103 4103 4104 4104 BROADCOM BDC DRIVER 4105 + M: Justin Chen <justinpopo6@gmail.com> 4105 4106 M: Al Cooper <alcooperx@gmail.com> 4106 4107 L: linux-usb@vger.kernel.org 4107 4108 R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com> ··· 4209 4208 F: drivers/tty/serial/8250/8250_bcm7271.c 4210 4209 4211 4210 BROADCOM BRCMSTB USB EHCI DRIVER 4211 + M: Justin Chen <justinpopo6@gmail.com> 4212 4212 M: Al Cooper <alcooperx@gmail.com> 4213 4213 R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com> 4214 4214 L: linux-usb@vger.kernel.org ··· 4226 4224 F: drivers/usb/misc/brcmstb-usb-pinmap.c 4227 4225 4228 4226 BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER 4227 + M: Justin Chen <justinpopo6@gmail.com> 4229 4228 M: Al Cooper <alcooperx@gmail.com> 4230 4229 R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com> 4231 4230 L: linux-kernel@vger.kernel.org ··· 5042 5039 5043 5040 CISCO VIC ETHERNET NIC DRIVER 5044 5041 M: Christian Benvenuti <benve@cisco.com> 5045 - M: Govindarajulu Varadarajan <_govind@gmx.com> 5042 + M: Satish Kharat <satishkh@cisco.com> 5046 5043 S: Supported 5047 5044 F: drivers/net/ethernet/cisco/enic/ 5048 5045 ··· 9780 9777 F: drivers/pci/hotplug/rpaphp* 9781 9778 9782 9779 IBM Power SRIOV Virtual NIC Device Driver 9783 - M: Dany Madden <drt@linux.ibm.com> 9780 + M: Haren Myneni <haren@linux.ibm.com> 9781 + M: Rick Lindsley <ricklind@linux.ibm.com> 9782 + R: Nick Child <nnac123@linux.ibm.com> 9783 + R: Dany Madden <danymadden@us.ibm.com> 9784 9784 R: Thomas Falcon <tlfalcon@linux.ibm.com> 9785 9785 L: netdev@vger.kernel.org 9786 9786 S: Supported ··· 11253 11247 L: kvm-riscv@lists.infradead.org 11254 11248 L: linux-riscv@lists.infradead.org 11255 11249 S: Maintained 11256 - T: git git://github.com/kvm-riscv/linux.git 11250 + T: git https://github.com/kvm-riscv/linux.git 11257 11251 F: arch/riscv/include/asm/kvm* 11258 11252 F: arch/riscv/include/uapi/asm/kvm* 11259 11253 F: arch/riscv/kvm/ ··· 11266 11260 R: David Hildenbrand <david@redhat.com> 11267 11261 L: kvm@vger.kernel.org 11268 11262 S: Supported 11269 - W: http://www.ibm.com/developerworks/linux/linux390/ 11270 11263 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git 11271 11264 F: Documentation/virt/kvm/s390* 11272 11265 F: arch/s390/include/asm/gmap.h ··· 14532 14527 S: Supported 14533 14528 W: https://nilfs.sourceforge.io/ 14534 14529 W: https://nilfs.osdn.jp/ 14535 - T: git git://github.com/konis/nilfs2.git 14530 + T: git https://github.com/konis/nilfs2.git 14536 14531 F: Documentation/filesystems/nilfs2.rst 14537 14532 F: fs/nilfs2/ 14538 14533 F: include/trace/events/nilfs2.h ··· 15636 15631 F: drivers/input/serio/hp_sdc* 15637 15632 F: drivers/parisc/ 15638 15633 F: drivers/parport/parport_gsc.* 15639 - F: drivers/tty/serial/8250/8250_gsc.c 15634 + F: drivers/tty/serial/8250/8250_parisc.c 15640 15635 F: drivers/video/console/sti* 15641 15636 F: drivers/video/fbdev/sti* 15642 15637 F: drivers/video/logo/logo_parisc* ··· 17821 17816 F: drivers/tty/serial/rp2.* 17822 17817 17823 17818 ROHM BD99954 CHARGER IC 17824 - R: Matti Vaittinen <mazziesaccount@gmail.com> 17819 + M: Matti Vaittinen <mazziesaccount@gmail.com> 17825 17820 S: Supported 17826 17821 F: drivers/power/supply/bd99954-charger.c 17827 17822 F: drivers/power/supply/bd99954-charger.h ··· 17844 17839 F: include/linux/mfd/bd9571mwv.h 17845 17840 17846 17841 ROHM POWER MANAGEMENT IC DEVICE DRIVERS 17847 - R: Matti Vaittinen <mazziesaccount@gmail.com> 17842 + M: Matti Vaittinen <mazziesaccount@gmail.com> 17848 17843 S: Supported 17849 17844 F: drivers/clk/clk-bd718x7.c 17850 17845 F: drivers/gpio/gpio-bd71815.c ··· 18006 18001 R: Sven Schnelle <svens@linux.ibm.com> 18007 18002 L: linux-s390@vger.kernel.org 18008 18003 S: Supported 18009 - W: http://www.ibm.com/developerworks/linux/linux390/ 18010 18004 T: git git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git 18011 18005 F: Documentation/driver-api/s390-drivers.rst 18012 18006 F: Documentation/s390/ ··· 18017 18013 M: Peter Oberparleiter <oberpar@linux.ibm.com> 18018 18014 L: linux-s390@vger.kernel.org 18019 18015 S: Supported 18020 - W: http://www.ibm.com/developerworks/linux/linux390/ 18021 18016 F: drivers/s390/cio/ 18022 18017 18023 18018 S390 DASD DRIVER ··· 18024 18021 M: Jan Hoeppner <hoeppner@linux.ibm.com> 18025 18022 L: linux-s390@vger.kernel.org 18026 18023 S: Supported 18027 - W: http://www.ibm.com/developerworks/linux/linux390/ 18028 18024 F: block/partitions/ibm.c 18029 18025 F: drivers/s390/block/dasd* 18030 18026 F: include/linux/dasd_mod.h ··· 18033 18031 M: Gerald Schaefer <gerald.schaefer@linux.ibm.com> 18034 18032 L: linux-s390@vger.kernel.org 18035 18033 S: Supported 18036 - W: http://www.ibm.com/developerworks/linux/linux390/ 18037 18034 F: drivers/iommu/s390-iommu.c 18038 18035 18039 18036 S390 IUCV NETWORK LAYER ··· 18041 18040 L: linux-s390@vger.kernel.org 18042 18041 L: netdev@vger.kernel.org 18043 18042 S: Supported 18044 - W: http://www.ibm.com/developerworks/linux/linux390/ 18045 18043 F: drivers/s390/net/*iucv* 18046 18044 F: include/net/iucv/ 18047 18045 F: net/iucv/ ··· 18051 18051 L: linux-s390@vger.kernel.org 18052 18052 L: netdev@vger.kernel.org 18053 18053 S: Supported 18054 - W: http://www.ibm.com/developerworks/linux/linux390/ 18055 18054 F: drivers/s390/net/ 18056 18055 18057 18056 S390 PCI SUBSYSTEM ··· 18058 18059 M: Gerald Schaefer <gerald.schaefer@linux.ibm.com> 18059 18060 L: linux-s390@vger.kernel.org 18060 18061 S: Supported 18061 - W: http://www.ibm.com/developerworks/linux/linux390/ 18062 18062 F: arch/s390/pci/ 18063 18063 F: drivers/pci/hotplug/s390_pci_hpc.c 18064 18064 F: Documentation/s390/pci.rst ··· 18068 18070 M: Jason Herne <jjherne@linux.ibm.com> 18069 18071 L: linux-s390@vger.kernel.org 18070 18072 S: Supported 18071 - W: http://www.ibm.com/developerworks/linux/linux390/ 18072 18073 F: Documentation/s390/vfio-ap* 18073 18074 F: drivers/s390/crypto/vfio_ap* 18074 18075 ··· 18096 18099 M: Harald Freudenberger <freude@linux.ibm.com> 18097 18100 L: linux-s390@vger.kernel.org 18098 18101 S: Supported 18099 - W: http://www.ibm.com/developerworks/linux/linux390/ 18100 18102 F: drivers/s390/crypto/ 18101 18103 18102 18104 S390 ZFCP DRIVER ··· 18103 18107 M: Benjamin Block <bblock@linux.ibm.com> 18104 18108 L: linux-s390@vger.kernel.org 18105 18109 S: Supported 18106 - W: http://www.ibm.com/developerworks/linux/linux390/ 18107 18110 F: drivers/s390/scsi/zfcp_* 18108 18111 18109 18112 S3C ADC BATTERY DRIVER ··· 18674 18679 M: Jan Karcher <jaka@linux.ibm.com> 18675 18680 L: linux-s390@vger.kernel.org 18676 18681 S: Supported 18677 - W: http://www.ibm.com/developerworks/linux/linux390/ 18678 18682 F: net/smc/ 18679 18683 18680 18684 SHARP GP2AP002A00F/GP2AP002S00F SENSOR DRIVER ··· 18784 18790 M: Paul Walmsley <paul.walmsley@sifive.com> 18785 18791 L: linux-riscv@lists.infradead.org 18786 18792 S: Supported 18787 - T: git git://github.com/sifive/riscv-linux.git 18793 + T: git https://github.com/sifive/riscv-linux.git 18788 18794 N: sifive 18789 18795 K: [^@]sifive 18790 18796 ··· 21188 21194 F: Documentation/usb/ehci.rst 21189 21195 F: drivers/usb/host/ehci* 21190 21196 21191 - USB GADGET/PERIPHERAL SUBSYSTEM 21192 - M: Felipe Balbi <balbi@kernel.org> 21193 - L: linux-usb@vger.kernel.org 21194 - S: Maintained 21195 - W: http://www.linux-usb.org/gadget 21196 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git 21197 - F: drivers/usb/gadget/ 21198 - F: include/linux/usb/gadget* 21199 - 21200 21197 USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...) 21201 21198 M: Jiri Kosina <jikos@kernel.org> 21202 21199 M: Benjamin Tissoires <benjamin.tissoires@redhat.com> ··· 21293 21308 W: https://github.com/petkan/pegasus 21294 21309 T: git https://github.com/petkan/pegasus.git 21295 21310 F: drivers/net/usb/pegasus.* 21296 - 21297 - USB PHY LAYER 21298 - M: Felipe Balbi <balbi@kernel.org> 21299 - L: linux-usb@vger.kernel.org 21300 - S: Maintained 21301 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git 21302 - F: drivers/usb/phy/ 21303 21311 21304 21312 USB PRINTER DRIVER (usblp) 21305 21313 M: Pete Zaitcev <zaitcev@redhat.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 1 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/loongarch/include/asm/processor.h
··· 191 191 unsigned long __get_wchan(struct task_struct *p); 192 192 193 193 #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \ 194 - THREAD_SIZE - 32 - sizeof(struct pt_regs)) 194 + THREAD_SIZE - sizeof(struct pt_regs)) 195 195 #define task_pt_regs(tsk) ((struct pt_regs *)__KSTK_TOS(tsk)) 196 196 #define KSTK_EIP(tsk) (task_pt_regs(tsk)->csr_era) 197 197 #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[3])
+2 -2
arch/loongarch/include/asm/ptrace.h
··· 29 29 unsigned long csr_euen; 30 30 unsigned long csr_ecfg; 31 31 unsigned long csr_estat; 32 - unsigned long __last[0]; 32 + unsigned long __last[]; 33 33 } __aligned(8); 34 34 35 35 static inline int regs_irqs_disabled(struct pt_regs *regs) ··· 133 133 #define current_pt_regs() \ 134 134 ({ \ 135 135 unsigned long sp = (unsigned long)__builtin_frame_address(0); \ 136 - (struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1 - 32) - 1; \ 136 + (struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1) - 1; \ 137 137 }) 138 138 139 139 /* Helpers for working with the user stack pointer */
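Aside: current_pt_regs() above rounds the stack pointer up to the top of the THREAD_SIZE-aligned stack, then steps back one struct; the patch removes the MIPS-inherited 32-byte pad from that arithmetic. A minimal userspace sketch of the same computation (the THREAD_SIZE value and pt_regs layout here are stand-ins, not LoongArch's real ones):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define THREAD_SIZE 16384UL /* stand-in: power-of-two stack size */

    struct pt_regs { unsigned long regs[32]; unsigned long csr_era; };

    int main(void)
    {
        /* A fake THREAD_SIZE-aligned kernel stack. */
        void *stack = aligned_alloc(THREAD_SIZE, THREAD_SIZE);
        if (!stack)
            return 1;
        uintptr_t sp = (uintptr_t)stack + 1000; /* some sp inside it */

        /* (sp | (THREAD_SIZE - 1)) + 1 rounds sp up to the stack top;
         * backing off one struct puts pt_regs flush at the top, with
         * no extra 32-byte gap. */
        struct pt_regs *regs =
            (struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1) - 1;

        printf("stack top 0x%lx, pt_regs at %p\n",
               (unsigned long)((uintptr_t)stack + THREAD_SIZE),
               (void *)regs);
        free(stack);
        return 0;
    }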
+1 -2
arch/loongarch/kernel/head.S
··· 84 84 85 85 la.pcrel tp, init_thread_union 86 86 /* Set the SP after an empty pt_regs. */ 87 - PTR_LI sp, (_THREAD_SIZE - 32 - PT_SIZE) 87 + PTR_LI sp, (_THREAD_SIZE - PT_SIZE) 88 88 PTR_ADD sp, sp, tp 89 89 set_saved_sp sp, t0, t1 90 - PTR_ADDI sp, sp, -4 * SZREG # init stack pointer 91 90 92 91 bl start_kernel 93 92 ASM_BUG()
+2 -2
arch/loongarch/kernel/process.c
··· 129 129 unsigned long clone_flags = args->flags; 130 130 struct pt_regs *childregs, *regs = current_pt_regs(); 131 131 132 - childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE - 32; 132 + childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE; 133 133 134 134 /* set up new TSS. */ 135 135 childregs = (struct pt_regs *) childksp - 1; ··· 236 236 struct stack_info *info) 237 237 { 238 238 unsigned long begin = (unsigned long)task_stack_page(task); 239 - unsigned long end = begin + THREAD_SIZE - 32; 239 + unsigned long end = begin + THREAD_SIZE; 240 240 241 241 if (stack < begin || stack >= end) 242 242 return false;
+1 -1
arch/loongarch/kernel/switch.S
··· 26 26 move tp, a2 27 27 cpu_restore_nonscratch a1 28 28 29 - li.w t0, _THREAD_SIZE - 32 29 + li.w t0, _THREAD_SIZE 30 30 PTR_ADD t0, t0, tp 31 31 set_saved_sp t0, t1, t2 32 32
+13 -18
arch/loongarch/net/bpf_jit.c
··· 279 279 const u8 t1 = LOONGARCH_GPR_T1; 280 280 const u8 t2 = LOONGARCH_GPR_T2; 281 281 const u8 t3 = LOONGARCH_GPR_T3; 282 + const u8 r0 = regmap[BPF_REG_0]; 282 283 const u8 src = regmap[insn->src_reg]; 283 284 const u8 dst = regmap[insn->dst_reg]; 284 285 const s16 off = insn->off; ··· 360 359 break; 361 360 /* r0 = atomic_cmpxchg(dst + off, r0, src); */ 362 361 case BPF_CMPXCHG: 363 - u8 r0 = regmap[BPF_REG_0]; 364 - 365 362 move_reg(ctx, t2, r0); 366 363 if (isdw) { 367 364 emit_insn(ctx, lld, r0, t1, 0); ··· 389 390 390 391 static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool extra_pass) 391 392 { 392 - const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || 393 - BPF_CLASS(insn->code) == BPF_JMP32; 393 + u8 tm = -1; 394 + u64 func_addr; 395 + bool func_addr_fixed; 396 + int i = insn - ctx->prog->insnsi; 397 + int ret, jmp_offset; 394 398 const u8 code = insn->code; 395 399 const u8 cond = BPF_OP(code); 396 400 const u8 t1 = LOONGARCH_GPR_T1; ··· 402 400 const u8 dst = regmap[insn->dst_reg]; 403 401 const s16 off = insn->off; 404 402 const s32 imm = insn->imm; 405 - int jmp_offset; 406 - int i = insn - ctx->prog->insnsi; 403 + const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm; 404 + const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32; 407 405 408 406 switch (code) { 409 407 /* dst = src */ ··· 726 724 case BPF_JMP32 | BPF_JSGE | BPF_K: 727 725 case BPF_JMP32 | BPF_JSLT | BPF_K: 728 726 case BPF_JMP32 | BPF_JSLE | BPF_K: 729 - u8 t7 = -1; 730 727 jmp_offset = bpf2la_offset(i, off, ctx); 731 728 if (imm) { 732 729 move_imm(ctx, t1, imm, false); 733 - t7 = t1; 730 + tm = t1; 734 731 } else { 735 732 /* If imm is 0, simply use zero register. */ 736 - t7 = LOONGARCH_GPR_ZERO; 733 + tm = LOONGARCH_GPR_ZERO; 737 734 } 738 735 move_reg(ctx, t2, dst); 739 736 if (is_signed_bpf_cond(BPF_OP(code))) { 740 - emit_sext_32(ctx, t7, is32); 737 + emit_sext_32(ctx, tm, is32); 741 738 emit_sext_32(ctx, t2, is32); 742 739 } else { 743 - emit_zext_32(ctx, t7, is32); 740 + emit_zext_32(ctx, tm, is32); 744 741 emit_zext_32(ctx, t2, is32); 745 742 } 746 - if (emit_cond_jmp(ctx, cond, t2, t7, jmp_offset) < 0) 743 + if (emit_cond_jmp(ctx, cond, t2, tm, jmp_offset) < 0) 747 744 goto toofar; 748 745 break; 749 746 ··· 776 775 777 776 /* function call */ 778 777 case BPF_JMP | BPF_CALL: 779 - int ret; 780 - u64 func_addr; 781 - bool func_addr_fixed; 782 - 783 778 mark_call(ctx); 784 779 ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass, 785 780 &func_addr, &func_addr_fixed); ··· 808 811 809 812 /* dst = imm64 */ 810 813 case BPF_LD | BPF_IMM | BPF_DW: 811 - u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm; 812 - 813 814 move_imm(ctx, dst, imm64, is32); 814 815 return 1; 815 816
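Most of the bpf_jit.c hunk hoists local declarations out of switch cases; in pre-C23 C a declaration may not directly follow a case label without a braced block. A tiny illustration of the pattern (invented example, not kernel code):

    #include <stdio.h>

    static int classify(int code)
    {
        int ret; /* hoisted, like tm, func_addr and imm64 in the patch */

        switch (code) {
        case 0:
            /* A declaration right after "case 0:" would be rejected
             * in pre-C23 modes; hoisting it keeps the cases simple
             * without adding braces. */
            ret = 1;
            break;
        default:
            ret = -1;
            break;
        }
        return ret;
    }

    int main(void)
    {
        printf("%d %d\n", classify(0), classify(5));
        return 0;
    }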
+6 -6
arch/parisc/include/asm/hardware.h
··· 10 10 #define SVERSION_ANY_ID PA_SVERSION_ANY_ID 11 11 12 12 struct hp_hardware { 13 - unsigned short hw_type:5; /* HPHW_xxx */ 14 - unsigned short hversion; 15 - unsigned long sversion:28; 16 - unsigned short opt; 17 - const char name[80]; /* The hardware description */ 18 - }; 13 + unsigned int hw_type:8; /* HPHW_xxx */ 14 + unsigned int hversion:12; 15 + unsigned int sversion:12; 16 + unsigned char opt; 17 + unsigned char name[59]; /* The hardware description */ 18 + } __packed; 19 19 20 20 struct parisc_device; 21 21
+13 -23
arch/parisc/include/uapi/asm/pdc.h
··· 363 363 364 364 #if !defined(__ASSEMBLY__) 365 365 366 - /* flags of the device_path */ 366 + /* flags for hardware_path */ 367 367 #define PF_AUTOBOOT 0x80 368 368 #define PF_AUTOSEARCH 0x40 369 369 #define PF_TIMER 0x0F 370 370 371 - struct device_path { /* page 1-69 */ 372 - unsigned char flags; /* flags see above! */ 373 - unsigned char bc[6]; /* bus converter routing info */ 374 - unsigned char mod; 375 - unsigned int layers[6];/* device-specific layer-info */ 376 - } __attribute__((aligned(8))) ; 371 + struct hardware_path { 372 + unsigned char flags; /* see bit definitions below */ 373 + signed char bc[6]; /* Bus Converter routing info to a specific */ 374 + /* I/O adaptor (< 0 means none, > 63 resvd) */ 375 + signed char mod; /* fixed field of specified module */ 376 + }; 377 + 378 + struct pdc_module_path { /* page 1-69 */ 379 + struct hardware_path path; 380 + unsigned int layers[6]; /* device-specific info (ctlr #, unit # ...) */ 381 + } __attribute__((aligned(8))); 377 382 378 383 struct pz_device { 379 - struct device_path dp; /* see above */ 384 + struct pdc_module_path dp; /* see above */ 380 385 /* struct iomod *hpa; */ 381 386 unsigned int hpa; /* HPA base address */ 382 387 /* char *spa; */ ··· 614 609 int factor; 615 610 int width; 616 611 int mode; 617 - }; 618 - 619 - struct hardware_path { 620 - char flags; /* see bit definitions below */ 621 - char bc[6]; /* Bus Converter routing info to a specific */ 622 - /* I/O adaptor (< 0 means none, > 63 resvd) */ 623 - char mod; /* fixed field of specified module */ 624 - }; 625 - 626 - /* 627 - * Device path specifications used by PDC. 628 - */ 629 - struct pdc_module_path { 630 - struct hardware_path path; 631 - unsigned int layers[6]; /* device-specific info (ctlr #, unit # ...) */ 632 612 }; 633 613 634 614 /* Only used on some pre-PA2.0 boxes */
+6 -8
arch/parisc/kernel/drivers.c
··· 882 882 &root); 883 883 } 884 884 885 - static void print_parisc_device(struct parisc_device *dev) 885 + static __init void print_parisc_device(struct parisc_device *dev) 886 886 { 887 - char hw_path[64]; 888 - static int count; 887 + static int count __initdata; 889 888 890 - print_pa_hwpath(dev, hw_path); 891 - pr_info("%d. %s at %pap [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", 892 - ++count, dev->name, &(dev->hpa.start), hw_path, dev->id.hw_type, 893 - dev->id.hversion_rev, dev->id.hversion, dev->id.sversion); 889 + pr_info("%d. %s at %pap { type:%d, hv:%#x, sv:%#x, rev:%#x }", 890 + ++count, dev->name, &(dev->hpa.start), dev->id.hw_type, 891 + dev->id.hversion, dev->id.sversion, dev->id.hversion_rev); 894 892 895 893 if (dev->num_addrs) { 896 894 int k; ··· 1077 1079 1078 1080 1079 1081 1080 - static int print_one_device(struct device * dev, void * data) 1082 + static __init int print_one_device(struct device * dev, void * data) 1081 1083 { 1082 1084 struct parisc_device * pdev = to_parisc_device(dev); 1083 1085
+2 -1
arch/powerpc/Kconfig
··· 147 147 select ARCH_MIGHT_HAVE_PC_SERIO 148 148 select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX 149 149 select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT 150 + select ARCH_SPLIT_ARG64 if PPC32 150 151 select ARCH_STACKWALK 151 152 select ARCH_SUPPORTS_ATOMIC_RMW 152 153 select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC_BOOK3S || PPC_8xx || 40x ··· 286 285 # 287 286 288 287 config PPC_LONG_DOUBLE_128 289 - depends on PPC64 288 + depends on PPC64 && ALTIVEC 290 289 def_bool $(success,test "$(shell,echo __LONG_DOUBLE_128__ | $(CC) -E -P -)" = 1) 291 290 292 291 config PPC_BARRIER_NOSPEC
+6
arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
··· 32 32 33 33 if (radix_enabled()) 34 34 return; 35 + /* 36 + * apply_to_page_range can call us this preempt enabled when 37 + * operating on kernel page tables. 38 + */ 39 + preempt_disable(); 35 40 batch = this_cpu_ptr(&ppc64_tlb_batch); 36 41 batch->active = 1; 37 42 } ··· 52 47 if (batch->index) 53 48 __flush_tlb_pending(batch); 54 49 batch->active = 0; 50 + preempt_enable(); 55 51 } 56 52 57 53 #define arch_flush_lazy_mmu_mode() do {} while (0)
+7
arch/powerpc/include/asm/syscalls.h
··· 104 104 unsigned long len1, unsigned long len2); 105 105 long sys_ppc32_fadvise64(int fd, u32 unused, u32 offset1, u32 offset2, 106 106 size_t len, int advice); 107 + long sys_ppc_sync_file_range2(int fd, unsigned int flags, 108 + unsigned int offset1, 109 + unsigned int offset2, 110 + unsigned int nbytes1, 111 + unsigned int nbytes2); 112 + long sys_ppc_fallocate(int fd, int mode, u32 offset1, u32 offset2, 113 + u32 len1, u32 len2); 107 114 #endif 108 115 #ifdef CONFIG_COMPAT 109 116 long compat_sys_mmap2(unsigned long addr, size_t len,
+7
arch/powerpc/kernel/exceptions-64e.S
··· 813 813 EXCEPTION_COMMON(0x260) 814 814 CHECK_NAPPING() 815 815 addi r3,r1,STACK_FRAME_OVERHEAD 816 + /* 817 + * XXX: Returning from performance_monitor_exception taken as a 818 + * soft-NMI (Linux irqs disabled) may be risky to use interrupt_return 819 + * and could cause bugs in return or elsewhere. That case should just 820 + * restore registers and return. There is a workaround for one known 821 + * problem in interrupt_exit_kernel_prepare(). 822 + */ 816 823 bl performance_monitor_exception 817 824 b interrupt_return 818 825
+13 -1
arch/powerpc/kernel/exceptions-64s.S
··· 2357 2357 EXC_COMMON_BEGIN(performance_monitor_common) 2358 2358 GEN_COMMON performance_monitor 2359 2359 addi r3,r1,STACK_FRAME_OVERHEAD 2360 - bl performance_monitor_exception 2360 + lbz r4,PACAIRQSOFTMASK(r13) 2361 + cmpdi r4,IRQS_ENABLED 2362 + bne 1f 2363 + bl performance_monitor_exception_async 2361 2364 b interrupt_return_srr 2365 + 1: 2366 + bl performance_monitor_exception_nmi 2367 + /* Clear MSR_RI before setting SRR0 and SRR1. */ 2368 + li r9,0 2369 + mtmsrd r9,1 2362 2370 2371 + kuap_kernel_restore r9, r10 2372 + 2373 + EXCEPTION_RESTORE_REGS hsrr=0 2374 + RFI_TO_KERNEL 2363 2375 2364 2376 /** 2365 2377 * Interrupt 0xf20 - Vector Unavailable Interrupt.
+11 -3
arch/powerpc/kernel/interrupt.c
··· 374 374 if (regs_is_unrecoverable(regs)) 375 375 unrecoverable_exception(regs); 376 376 /* 377 - * CT_WARN_ON comes here via program_check_exception, 378 - * so avoid recursion. 377 + * CT_WARN_ON comes here via program_check_exception, so avoid 378 + * recursion. 379 + * 380 + * Skip the assertion on PMIs on 64e to work around a problem caused 381 + * by NMI PMIs incorrectly taking this interrupt return path, it's 382 + * possible for this to hit after interrupt exit to user switches 383 + * context to user. See also the comment in the performance monitor 384 + * handler in exceptions-64e.S 379 385 */ 380 - if (TRAP(regs) != INTERRUPT_PROGRAM) 386 + if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64) && 387 + TRAP(regs) != INTERRUPT_PROGRAM && 388 + TRAP(regs) != INTERRUPT_PERFMON) 381 389 CT_WARN_ON(ct_state() == CONTEXT_USER); 382 390 383 391 kuap = kuap_get_and_assert_locked();
+11 -2
arch/powerpc/kernel/interrupt_64.S
··· 532 532 * Returning to soft-disabled context. 533 533 * Check if a MUST_HARD_MASK interrupt has become pending, in which 534 534 * case we need to disable MSR[EE] in the return context. 535 + * 536 + * The MSR[EE] check catches among other things the short incoherency 537 + * in hard_irq_disable() between clearing MSR[EE] and setting 538 + * PACA_IRQ_HARD_DIS. 535 539 */ 536 540 ld r12,_MSR(r1) 537 541 andi. r10,r12,MSR_EE 538 542 beq .Lfast_kernel_interrupt_return_\srr\() // EE already disabled 539 543 lbz r11,PACAIRQHAPPENED(r13) 540 544 andi. r10,r11,PACA_IRQ_MUST_HARD_MASK 541 - beq .Lfast_kernel_interrupt_return_\srr\() // No HARD_MASK pending 545 + bne 1f // HARD_MASK is pending 546 + // No HARD_MASK pending, clear possible HARD_DIS set by interrupt 547 + andi. r11,r11,(~PACA_IRQ_HARD_DIS)@l 548 + stb r11,PACAIRQHAPPENED(r13) 549 + b .Lfast_kernel_interrupt_return_\srr\() 542 550 543 - /* Must clear MSR_EE from _MSR */ 551 + 552 + 1: /* Must clear MSR_EE from _MSR */ 544 553 #ifdef CONFIG_PPC_BOOK3S 545 554 li r10,0 546 555 /* Clear valid before changing _MSR */
+12 -1
arch/powerpc/kernel/sys_ppc32.c
··· 112 112 advice); 113 113 } 114 114 115 - COMPAT_SYSCALL_DEFINE6(ppc_sync_file_range2, 115 + PPC32_SYSCALL_DEFINE6(ppc_sync_file_range2, 116 116 int, fd, unsigned int, flags, 117 117 unsigned int, offset1, unsigned int, offset2, 118 118 unsigned int, nbytes1, unsigned int, nbytes2) ··· 122 122 123 123 return ksys_sync_file_range(fd, offset, nbytes, flags); 124 124 } 125 + 126 + #ifdef CONFIG_PPC32 127 + SYSCALL_DEFINE6(ppc_fallocate, 128 + int, fd, int, mode, 129 + u32, offset1, u32, offset2, u32, len1, u32, len2) 130 + { 131 + return ksys_fallocate(fd, mode, 132 + merge_64(offset1, offset2), 133 + merge_64(len1, len2)); 134 + } 135 + #endif
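For context, 32-bit powerpc passes a 64-bit syscall argument in two 32-bit registers, and merge_64() in the hunk above reassembles them. A hedged sketch of that reassembly, assuming the first word carries the high half as on big-endian ppc32:

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed shape of merge_64(): high word first, then low word. */
    static uint64_t merge_64(uint32_t high, uint32_t low)
    {
        return ((uint64_t)high << 32) | low;
    }

    int main(void)
    {
        /* A 6 GiB file offset split across two registers. */
        uint64_t offset = merge_64(0x00000001, 0x80000000);

        printf("offset = 0x%llx\n", (unsigned long long)offset);
        return 0;
    }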
+5 -2
arch/powerpc/kernel/syscalls/syscall.tbl
··· 394 394 305 common signalfd sys_signalfd compat_sys_signalfd 395 395 306 common timerfd_create sys_timerfd_create 396 396 307 common eventfd sys_eventfd 397 - 308 common sync_file_range2 sys_sync_file_range2 compat_sys_ppc_sync_file_range2 398 - 309 nospu fallocate sys_fallocate compat_sys_fallocate 397 + 308 32 sync_file_range2 sys_ppc_sync_file_range2 compat_sys_ppc_sync_file_range2 398 + 308 64 sync_file_range2 sys_sync_file_range2 399 + 308 spu sync_file_range2 sys_sync_file_range2 400 + 309 32 fallocate sys_ppc_fallocate compat_sys_fallocate 401 + 309 64 fallocate sys_fallocate 399 402 310 nospu subpage_prot sys_subpage_prot 400 403 311 32 timerfd_settime sys_timerfd_settime32 401 404 311 64 timerfd_settime sys_timerfd_settime
+4
arch/powerpc/kvm/Kconfig
··· 51 51 config KVM_BOOK3S_32 52 52 tristate "KVM support for PowerPC book3s_32 processors" 53 53 depends on PPC_BOOK3S_32 && !SMP && !PTE_64BIT 54 + depends on !CONTEXT_TRACKING_USER 54 55 select KVM 55 56 select KVM_BOOK3S_32_HANDLER 56 57 select KVM_BOOK3S_PR_POSSIBLE ··· 106 105 config KVM_BOOK3S_64_PR 107 106 tristate "KVM support without using hypervisor mode in host" 108 107 depends on KVM_BOOK3S_64 108 + depends on !CONTEXT_TRACKING_USER 109 109 select KVM_BOOK3S_PR_POSSIBLE 110 110 help 111 111 Support running guest kernels in virtual machines on processors ··· 192 190 config KVM_E500V2 193 191 bool "KVM support for PowerPC E500v2 processors" 194 192 depends on PPC_E500 && !PPC_E500MC 193 + depends on !CONTEXT_TRACKING_USER 195 194 select KVM 196 195 select KVM_MMIO 197 196 select MMU_NOTIFIER ··· 208 205 config KVM_E500MC 209 206 bool "KVM support for PowerPC E500MC/E5500/E6500 processors" 210 207 depends on PPC_E500MC 208 + depends on !CONTEXT_TRACKING_USER 211 209 select KVM 212 210 select KVM_MMIO 213 211 select KVM_BOOKE_HV
+11 -1
arch/powerpc/lib/vmx-helper.c
··· 36 36 { 37 37 disable_kernel_altivec(); 38 38 pagefault_enable(); 39 - preempt_enable(); 39 + preempt_enable_no_resched(); 40 + /* 41 + * Must never explicitly call schedule (including preempt_enable()) 42 + * while in a kuap-unlocked user copy, because the AMR register will 43 + * not be saved and restored across context switch. However preempt 44 + * kernels need to be preempted as soon as possible if need_resched is 45 + * set and we are preemptible. The hack here is to schedule a 46 + * decrementer to fire here and reschedule for us if necessary. 47 + */ 48 + if (IS_ENABLED(CONFIG_PREEMPT) && need_resched()) 49 + set_dec(1); 40 50 return 0; 41 51 } 42 52
+59 -8
arch/powerpc/mm/book3s64/hash_native.c
··· 43 43 44 44 static DEFINE_RAW_SPINLOCK(native_tlbie_lock); 45 45 46 + #ifdef CONFIG_LOCKDEP 47 + static struct lockdep_map hpte_lock_map = 48 + STATIC_LOCKDEP_MAP_INIT("hpte_lock", &hpte_lock_map); 49 + 50 + static void acquire_hpte_lock(void) 51 + { 52 + lock_map_acquire(&hpte_lock_map); 53 + } 54 + 55 + static void release_hpte_lock(void) 56 + { 57 + lock_map_release(&hpte_lock_map); 58 + } 59 + #else 60 + static void acquire_hpte_lock(void) 61 + { 62 + } 63 + 64 + static void release_hpte_lock(void) 65 + { 66 + } 67 + #endif 68 + 46 69 static inline unsigned long ___tlbie(unsigned long vpn, int psize, 47 70 int apsize, int ssize) 48 71 { ··· 243 220 { 244 221 unsigned long *word = (unsigned long *)&hptep->v; 245 222 223 + acquire_hpte_lock(); 246 224 while (1) { 247 225 if (!test_and_set_bit_lock(HPTE_LOCK_BIT, word)) 248 226 break; ··· 258 234 { 259 235 unsigned long *word = (unsigned long *)&hptep->v; 260 236 237 + release_hpte_lock(); 261 238 clear_bit_unlock(HPTE_LOCK_BIT, word); 262 239 } 263 240 ··· 268 243 { 269 244 struct hash_pte *hptep = htab_address + hpte_group; 270 245 unsigned long hpte_v, hpte_r; 246 + unsigned long flags; 271 247 int i; 248 + 249 + local_irq_save(flags); 272 250 273 251 if (!(vflags & HPTE_V_BOLTED)) { 274 252 DBG_LOW(" insert(group=%lx, vpn=%016lx, pa=%016lx," ··· 291 263 hptep++; 292 264 } 293 265 294 - if (i == HPTES_PER_GROUP) 266 + if (i == HPTES_PER_GROUP) { 267 + local_irq_restore(flags); 295 268 return -1; 269 + } 296 270 297 271 hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID; 298 272 hpte_r = hpte_encode_r(pa, psize, apsize) | rflags; ··· 316 286 * Now set the first dword including the valid bit 317 287 * NOTE: this also unlocks the hpte 318 288 */ 289 + release_hpte_lock(); 319 290 hptep->v = cpu_to_be64(hpte_v); 320 291 321 292 __asm__ __volatile__ ("ptesync" : : : "memory"); 293 + 294 + local_irq_restore(flags); 322 295 323 296 return i | (!!(vflags & HPTE_V_SECONDARY) << 3); 324 297 } ··· 360 327 return -1; 361 328 362 329 /* Invalidate the hpte. NOTE: this also unlocks it */ 330 + release_hpte_lock(); 363 331 hptep->v = 0; 364 332 365 333 return i; ··· 373 339 struct hash_pte *hptep = htab_address + slot; 374 340 unsigned long hpte_v, want_v; 375 341 int ret = 0, local = 0; 342 + unsigned long irqflags; 343 + 344 + local_irq_save(irqflags); 376 345 377 346 want_v = hpte_encode_avpn(vpn, bpsize, ssize); 378 347 ··· 418 381 */ 419 382 if (!(flags & HPTE_NOHPTE_UPDATE)) 420 383 tlbie(vpn, bpsize, apsize, ssize, local); 384 + 385 + local_irq_restore(irqflags); 421 386 422 387 return ret; 423 388 } ··· 484 445 unsigned long vsid; 485 446 long slot; 486 447 struct hash_pte *hptep; 448 + unsigned long flags; 449 + 450 + local_irq_save(flags); 487 451 488 452 vsid = get_kernel_vsid(ea, ssize); 489 453 vpn = hpt_vpn(ea, vsid, ssize); ··· 505 463 * actual page size will be same. 506 464 */ 507 465 tlbie(vpn, psize, psize, ssize, 0); 466 + 467 + local_irq_restore(flags); 508 468 } 509 469 510 470 /* ··· 520 476 unsigned long vsid; 521 477 long slot; 522 478 struct hash_pte *hptep; 479 + unsigned long flags; 480 + 481 + local_irq_save(flags); 523 482 524 483 vsid = get_kernel_vsid(ea, ssize); 525 484 vpn = hpt_vpn(ea, vsid, ssize); ··· 540 493 541 494 /* Invalidate the TLB */ 542 495 tlbie(vpn, psize, psize, ssize, 0); 496 + 497 + local_irq_restore(flags); 498 + 543 499 return 0; 544 500 } ··· 567 517 /* recheck with locks held */ 568 518 hpte_v = hpte_get_old_v(hptep); 569 519 570 - if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) 520 + if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) { 571 521 /* Invalidate the hpte. NOTE: this also unlocks it */ 522 + release_hpte_lock(); 572 523 hptep->v = 0; 573 - else 524 + } else 574 525 native_unlock_hpte(hptep); 575 526 } 576 527 /* ··· 631 580 hpte_v = hpte_get_old_v(hptep); 632 581 633 582 if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) { 634 - /* 635 - * Invalidate the hpte. NOTE: this also unlocks it 636 - */ 637 - 583 + /* Invalidate the hpte. NOTE: this also unlocks it */ 584 + release_hpte_lock(); 638 585 hptep->v = 0; 639 586 } else 640 587 native_unlock_hpte(hptep); ··· 814 765 815 766 if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID)) 816 767 native_unlock_hpte(hptep); 817 - else 768 + else { 769 + release_hpte_lock(); 818 770 hptep->v = 0; 771 + } 819 772 820 773 } pte_iterate_hashed_end(); 821 774 }
+5 -3
arch/powerpc/mm/book3s64/hash_pgtable.c
··· 404 404 405 405 struct change_memory_parms { 406 406 unsigned long start, end, newpp; 407 - unsigned int step, nr_cpus, master_cpu; 407 + unsigned int step, nr_cpus; 408 + atomic_t master_cpu; 408 409 atomic_t cpu_counter; 409 410 }; 410 411 ··· 479 478 { 480 479 struct change_memory_parms *parms = data; 481 480 482 - if (parms->master_cpu != smp_processor_id()) 481 + // First CPU goes through, all others wait. 482 + if (atomic_xchg(&parms->master_cpu, 1) == 1) 483 483 return chmem_secondary_loop(parms); 484 484 485 485 // Wait for all but one CPU (this one) to call-in ··· 518 516 chmem_parms.end = end; 519 517 chmem_parms.step = step; 520 518 chmem_parms.newpp = newpp; 521 - chmem_parms.master_cpu = smp_processor_id(); 519 + atomic_set(&chmem_parms.master_cpu, 0); 522 520 523 521 cpus_read_lock(); 524 522
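The atomic_xchg() on master_cpu above elects exactly one master no matter which CPU reaches the function first. A standalone C11 sketch of the same first-caller-wins election (thread count and names invented; compile with -pthread):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static atomic_int master_cpu; /* starts at 0, like atomic_set(..., 0) */

    static void *worker(void *arg)
    {
        /* First thread to swap 0 -> 1 reads back 0 and wins; everyone
         * else reads 1 and takes the secondary path, mirroring the
         * atomic_xchg() test in the patch. */
        if (atomic_exchange(&master_cpu, 1) == 0)
            printf("thread %ld: master\n", (long)(intptr_t)arg);
        else
            printf("thread %ld: secondary\n", (long)(intptr_t)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];

        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }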
+6 -6
arch/powerpc/mm/book3s64/hash_utils.c
··· 1981 1981 } 1982 1982 1983 1983 #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE) 1984 - static DEFINE_SPINLOCK(linear_map_hash_lock); 1984 + static DEFINE_RAW_SPINLOCK(linear_map_hash_lock); 1985 1985 1986 1986 static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi) 1987 1987 { ··· 2005 2005 mmu_linear_psize, mmu_kernel_ssize); 2006 2006 2007 2007 BUG_ON (ret < 0); 2008 - spin_lock(&linear_map_hash_lock); 2008 + raw_spin_lock(&linear_map_hash_lock); 2009 2009 BUG_ON(linear_map_hash_slots[lmi] & 0x80); 2010 2010 linear_map_hash_slots[lmi] = ret | 0x80; 2011 - spin_unlock(&linear_map_hash_lock); 2011 + raw_spin_unlock(&linear_map_hash_lock); 2012 2012 } 2013 2013 2014 2014 static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi) ··· 2018 2018 unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize); 2019 2019 2020 2020 hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize); 2021 - spin_lock(&linear_map_hash_lock); 2021 + raw_spin_lock(&linear_map_hash_lock); 2022 2022 if (!(linear_map_hash_slots[lmi] & 0x80)) { 2023 - spin_unlock(&linear_map_hash_lock); 2023 + raw_spin_unlock(&linear_map_hash_lock); 2024 2024 return; 2025 2025 } 2026 2026 hidx = linear_map_hash_slots[lmi] & 0x7f; 2027 2027 linear_map_hash_slots[lmi] = 0; 2028 - spin_unlock(&linear_map_hash_lock); 2028 + raw_spin_unlock(&linear_map_hash_lock); 2029 2029 if (hidx & _PTEIDX_SECONDARY) 2030 2030 hash = ~hash; 2031 2031 slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+11
arch/powerpc/platforms/pseries/lparcfg.c
··· 35 35 #include <asm/drmem.h> 36 36 37 37 #include "pseries.h" 38 + #include "vas.h" /* pseries_vas_dlpar_cpu() */ 38 39 39 40 /* 40 41 * This isn't a module but we expose that to userspace ··· 749 748 return -EINVAL; 750 749 751 750 retval = update_ppp(new_entitled_ptr, NULL); 751 + 752 + if (retval == H_SUCCESS || retval == H_CONSTRAINED) { 753 + /* 754 + * The hypervisor assigns VAS resources based 755 + * on entitled capacity for shared mode. 756 + * Reconfig VAS windows based on DLPAR CPU events. 757 + */ 758 + if (pseries_vas_dlpar_cpu() != 0) 759 + retval = H_HARDWARE; 760 + } 752 761 } else if (!strcmp(kbuf, "capacity_weight")) { 753 762 char *endp; 754 763 *new_weight_ptr = (u8) simple_strtoul(tmp, &endp, 10);
+62 -21
arch/powerpc/platforms/pseries/vas.c
··· 200 200 struct vas_user_win_ref *tsk_ref; 201 201 int rc; 202 202 203 - rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb)); 204 - if (!rc) { 205 - tsk_ref = &txwin->vas_win.task_ref; 206 - vas_dump_crb(&crb); 207 - vas_update_csb(&crb, tsk_ref); 203 + while (atomic_read(&txwin->pending_faults)) { 204 + rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb)); 205 + if (!rc) { 206 + tsk_ref = &txwin->vas_win.task_ref; 207 + vas_dump_crb(&crb); 208 + vas_update_csb(&crb, tsk_ref); 209 + } 210 + atomic_dec(&txwin->pending_faults); 208 211 } 209 212 210 213 return IRQ_HANDLED; 214 + } 215 + 216 + /* 217 + * irq_default_primary_handler() can be used only with IRQF_ONESHOT 218 + * which disables IRQ before executing the thread handler and enables 219 + * it after. But this disabling interrupt sets the VAS IRQ OFF 220 + * state in the hypervisor. If the NX generates fault interrupt 221 + * during this window, the hypervisor will not deliver this 222 + * interrupt to the LPAR. So use VAS specific IRQ handler instead 223 + * of calling the default primary handler. 224 + */ 225 + static irqreturn_t pseries_vas_irq_handler(int irq, void *data) 226 + { 227 + struct pseries_vas_window *txwin = data; 228 + 229 + /* 230 + * The thread hanlder will process this interrupt if it is 231 + * already running. 232 + */ 233 + atomic_inc(&txwin->pending_faults); 234 + 235 + return IRQ_WAKE_THREAD; 211 236 } 212 237 213 238 /* ··· 265 240 goto out_irq; 266 241 } 267 242 268 - rc = request_threaded_irq(txwin->fault_virq, NULL, 269 - pseries_vas_fault_thread_fn, IRQF_ONESHOT, 243 + rc = request_threaded_irq(txwin->fault_virq, 244 + pseries_vas_irq_handler, 245 + pseries_vas_fault_thread_fn, 0, 270 246 txwin->name, txwin); 271 247 if (rc) { 272 248 pr_err("VAS-Window[%d]: Request IRQ(%u) failed with %d\n", ··· 852 826 mutex_unlock(&vas_pseries_mutex); 853 827 return rc; 854 828 } 829 + 830 + int pseries_vas_dlpar_cpu(void) 831 + { 832 + int new_nr_creds, rc; 833 + 834 + rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES, 835 + vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat, 836 + (u64)virt_to_phys(&hv_cop_caps)); 837 + if (!rc) { 838 + new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds); 839 + rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE, new_nr_creds); 840 + } 841 + 842 + if (rc) 843 + pr_err("Failed reconfig VAS capabilities with DLPAR\n"); 844 + 845 + return rc; 846 + } 847 + 855 848 /* 856 849 * Total number of default credits available (target_credits) 857 850 * in LPAR depends on number of cores configured. It varies based on ··· 885 840 struct of_reconfig_data *rd = data; 886 841 struct device_node *dn = rd->dn; 887 842 const __be32 *intserv = NULL; 888 - int new_nr_creds, len, rc = 0; 843 + int len; 844 + 845 + /* 846 + * For shared CPU partition, the hypervisor assigns total credits 847 + * based on entitled core capacity. So updating VAS windows will 848 + * be called from lparcfg_write(). 849 + */ 850 + if (is_shared_processor()) 851 + return NOTIFY_OK; 889 852 890 853 if ((action == OF_RECONFIG_ATTACH_NODE) || 891 854 (action == OF_RECONFIG_DETACH_NODE)) ··· 905 852 if (!intserv) 906 853 return NOTIFY_OK; 907 854 908 - rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES, 909 - vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat, 910 - (u64)virt_to_phys(&hv_cop_caps)); 911 - if (!rc) { 912 - new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds); 913 - rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE, 914 - new_nr_creds); 915 - } 916 - 917 - if (rc) 918 - pr_err("Failed reconfig VAS capabilities with DLPAR\n"); 919 - 920 - return rc; 855 + return pseries_vas_dlpar_cpu(); 921 856 } 922 857 923 858 static struct notifier_block pseries_vas_nb = {
+6
arch/powerpc/platforms/pseries/vas.h
··· 132 132 u64 flags; 133 133 char *name; 134 134 int fault_virq; 135 + atomic_t pending_faults; /* Number of pending faults */ 135 136 }; 136 137 137 138 int sysfs_add_vas_caps(struct vas_cop_feat_caps *caps); ··· 141 140 142 141 #ifdef CONFIG_PPC_VAS 143 142 int vas_migration_handler(int action); 143 + int pseries_vas_dlpar_cpu(void); 144 144 #else 145 145 static inline int vas_migration_handler(int action) 146 + { 147 + return 0; 148 + } 149 + static inline int pseries_vas_dlpar_cpu(void) 146 150 { 147 151 return 0; 148 152 }
+13 -4
arch/riscv/Kconfig
··· 411 411 412 412 If you don't know what to do here, say Y. 413 413 414 - config CC_HAS_ZICBOM 414 + config TOOLCHAIN_HAS_ZICBOM 415 415 bool 416 - default y if 64BIT && $(cc-option,-mabi=lp64 -march=rv64ima_zicbom) 417 - default y if 32BIT && $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom) 416 + default y 417 + depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zicbom) 418 + depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom) 419 + depends on LLD_VERSION >= 150000 || LD_VERSION >= 23800 418 420 419 421 config RISCV_ISA_ZICBOM 420 422 bool "Zicbom extension support for non-coherent DMA operation" 421 - depends on CC_HAS_ZICBOM 423 + depends on TOOLCHAIN_HAS_ZICBOM 422 424 depends on !XIP_KERNEL && MMU 423 425 select RISCV_DMA_NONCOHERENT 424 426 select RISCV_ALTERNATIVE ··· 434 432 non-coherent DMA support on devices that need it. 435 433 436 434 If you don't know what to do here, say Y. 435 + 436 + config TOOLCHAIN_HAS_ZIHINTPAUSE 437 + bool 438 + default y 439 + depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zihintpause) 440 + depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zihintpause) 441 + depends on LLD_VERSION >= 150000 || LD_VERSION >= 23600 437 442 438 443 config FPU 439 444 bool "FPU support"
+2 -4
arch/riscv/Makefile
··· 59 59 riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei 60 60 61 61 # Check if the toolchain supports Zicbom extension 62 - toolchain-supports-zicbom := $(call cc-option-yn, -march=$(riscv-march-y)_zicbom) 63 - riscv-march-$(toolchain-supports-zicbom) := $(riscv-march-y)_zicbom 62 + riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZICBOM) := $(riscv-march-y)_zicbom 64 63 65 64 # Check if the toolchain supports Zihintpause extension 66 - toolchain-supports-zihintpause := $(call cc-option-yn, -march=$(riscv-march-y)_zihintpause) 67 - riscv-march-$(toolchain-supports-zihintpause) := $(riscv-march-y)_zihintpause 65 + riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause 68 66 69 67 KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y)) 70 68 KBUILD_AFLAGS += -march=$(riscv-march-y)
+4 -4
arch/riscv/include/asm/jump_label.h
··· 14 14 15 15 #define JUMP_LABEL_NOP_SIZE 4 16 16 17 - static __always_inline bool arch_static_branch(struct static_key *key, 18 - bool branch) 17 + static __always_inline bool arch_static_branch(struct static_key * const key, 18 + const bool branch) 19 19 { 20 20 asm_volatile_goto( 21 21 " .option push \n\t" ··· 35 35 return true; 36 36 } 37 37 38 - static __always_inline bool arch_static_branch_jump(struct static_key *key, 39 - bool branch) 38 + static __always_inline bool arch_static_branch_jump(struct static_key * const key, 39 + const bool branch) 40 40 { 41 41 asm_volatile_goto( 42 42 " .option push \n\t"
+1 -1
arch/riscv/include/asm/vdso/processor.h
··· 21 21 * Reduce instruction retirement. 22 22 * This assumes the PC changes. 23 23 */ 24 - #ifdef __riscv_zihintpause 24 + #ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE 25 25 __asm__ __volatile__ ("pause"); 26 26 #else 27 27 /* Encoding of the pause instruction */
+3
arch/riscv/kernel/cpu.c
··· 213 213 214 214 static void *c_start(struct seq_file *m, loff_t *pos) 215 215 { 216 + if (*pos == nr_cpu_ids) 217 + return NULL; 218 + 216 219 *pos = cpumask_next(*pos - 1, cpu_online_mask); 217 220 if ((*pos) < nr_cpu_ids) 218 221 return (void *)(uintptr_t)(1 + *pos);
+6 -1
arch/riscv/mm/kasan_init.c
··· 113 113 base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd))); 114 114 } else if (pgd_none(*pgd)) { 115 115 base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); 116 + memcpy(base_pud, (void *)kasan_early_shadow_pud, 117 + sizeof(pud_t) * PTRS_PER_PUD); 116 118 } else { 117 119 base_pud = (pud_t *)pgd_page_vaddr(*pgd); 118 120 if (base_pud == lm_alias(kasan_early_shadow_pud)) { ··· 175 173 base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd))); 176 174 } else { 177 175 base_p4d = (p4d_t *)pgd_page_vaddr(*pgd); 178 - if (base_p4d == lm_alias(kasan_early_shadow_p4d)) 176 + if (base_p4d == lm_alias(kasan_early_shadow_p4d)) { 179 177 base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE); 178 + memcpy(base_p4d, (void *)kasan_early_shadow_p4d, 179 + sizeof(p4d_t) * PTRS_PER_P4D); 180 + } 180 181 } 181 182 182 183 p4dp = base_p4d + p4d_index(vaddr);
+11 -2
arch/s390/boot/vmlinux.lds.S
··· 102 102 _compressed_start = .; 103 103 *(.vmlinux.bin.compressed) 104 104 _compressed_end = .; 105 - FILL(0xff); 106 - . = ALIGN(4096); 105 + } 106 + 107 + #define SB_TRAILER_SIZE 32 108 + /* Trailer needed for Secure Boot */ 109 + . += SB_TRAILER_SIZE; /* make sure .sb.trailer does not overwrite the previous section */ 110 + . = ALIGN(4096) - SB_TRAILER_SIZE; 111 + .sb.trailer : { 112 + QUAD(0) 113 + QUAD(0) 114 + QUAD(0) 115 + QUAD(0x000000207a49504c) 107 116 } 108 117 _end = .; 109 118
+2 -1
arch/s390/include/asm/futex.h
··· 17 17 "3: jl 1b\n" \ 18 18 " lhi %0,0\n" \ 19 19 "4: sacf 768\n" \ 20 - EX_TABLE(0b,4b) EX_TABLE(2b,4b) EX_TABLE(3b,4b) \ 20 + EX_TABLE(0b,4b) EX_TABLE(1b,4b) \ 21 + EX_TABLE(2b,4b) EX_TABLE(3b,4b) \ 21 22 : "=d" (ret), "=&d" (oldval), "=&d" (newval), \ 22 23 "=m" (*uaddr) \ 23 24 : "0" (-EFAULT), "d" (oparg), "a" (uaddr), \
+1
arch/s390/kernel/perf_pai_ext.c
··· 459 459 raw.frag.data = cpump->save; 460 460 raw.size = raw.frag.size; 461 461 data.raw = &raw; 462 + data.sample_flags |= PERF_SAMPLE_RAW; 462 463 } 463 464 464 465 overflow = perf_event_overflow(event, &data, &regs);
+3 -3
arch/s390/lib/uaccess.c
··· 157 157 asm volatile( 158 158 " lr 0,%[spec]\n" 159 159 "0: mvcos 0(%1),0(%4),%0\n" 160 - " jz 4f\n" 160 + "6: jz 4f\n" 161 161 "1: algr %0,%2\n" 162 162 " slgr %1,%2\n" 163 163 " j 0b\n" ··· 167 167 " clgr %0,%3\n" /* copy crosses next page boundary? */ 168 168 " jnh 5f\n" 169 169 "3: mvcos 0(%1),0(%4),%3\n" 170 - " slgr %0,%3\n" 170 + "7: slgr %0,%3\n" 171 171 " j 5f\n" 172 172 "4: slgr %0,%0\n" 173 173 "5:\n" 174 - EX_TABLE(0b,2b) EX_TABLE(3b,5b) 174 + EX_TABLE(0b,2b) EX_TABLE(6b,2b) EX_TABLE(3b,5b) EX_TABLE(7b,5b) 175 175 : "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2) 176 176 : "a" (empty_zero_page), [spec] "d" (spec.val) 177 177 : "cc", "memory", "0");
+4 -4
arch/s390/pci/pci_mmio.c
··· 64 64 asm volatile ( 65 65 " sacf 256\n" 66 66 "0: llgc %[tmp],0(%[src])\n" 67 - " sllg %[val],%[val],8\n" 67 + "4: sllg %[val],%[val],8\n" 68 68 " aghi %[src],1\n" 69 69 " ogr %[val],%[tmp]\n" 70 70 " brctg %[cnt],0b\n" ··· 72 72 "2: ipm %[cc]\n" 73 73 " srl %[cc],28\n" 74 74 "3: sacf 768\n" 75 - EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b) 75 + EX_TABLE(0b, 3b) EX_TABLE(4b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b) 76 76 : 77 77 [src] "+a" (src), [cnt] "+d" (cnt), 78 78 [val] "+d" (val), [tmp] "=d" (tmp), ··· 215 215 "2: ahi %[shift],-8\n" 216 216 " srlg %[tmp],%[val],0(%[shift])\n" 217 217 "3: stc %[tmp],0(%[dst])\n" 218 - " aghi %[dst],1\n" 218 + "5: aghi %[dst],1\n" 219 219 " brctg %[cnt],2b\n" 220 220 "4: sacf 768\n" 221 - EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) 221 + EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) EX_TABLE(5b, 4b) 222 222 : 223 223 [ioaddr_len] "+&d" (ioaddr_len.pair), 224 224 [cc] "+d" (cc), [val] "=d" (val),
+14 -5
arch/x86/crypto/polyval-clmulni_glue.c
··· 27 27 #include <asm/cpu_device_id.h> 28 28 #include <asm/simd.h> 29 29 30 + #define POLYVAL_ALIGN 16 31 + #define POLYVAL_ALIGN_ATTR __aligned(POLYVAL_ALIGN) 32 + #define POLYVAL_ALIGN_EXTRA ((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1)) 33 + #define POLYVAL_CTX_SIZE (sizeof(struct polyval_tfm_ctx) + POLYVAL_ALIGN_EXTRA) 30 34 #define NUM_KEY_POWERS 8 31 35 32 36 struct polyval_tfm_ctx { 33 37 /* 34 38 * These powers must be in the order h^8, ..., h^1. 35 39 */ 36 - u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE]; 40 + u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE] POLYVAL_ALIGN_ATTR; 37 41 }; 38 42 39 43 struct polyval_desc_ctx { ··· 48 44 asmlinkage void clmul_polyval_update(const struct polyval_tfm_ctx *keys, 49 45 const u8 *in, size_t nblocks, u8 *accumulator); 50 46 asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2); 47 + 48 + static inline struct polyval_tfm_ctx *polyval_tfm_ctx(struct crypto_shash *tfm) 49 + { 50 + return PTR_ALIGN(crypto_shash_ctx(tfm), POLYVAL_ALIGN); 51 + } 51 52 52 53 static void internal_polyval_update(const struct polyval_tfm_ctx *keys, 53 54 const u8 *in, size_t nblocks, u8 *accumulator) ··· 81 72 static int polyval_x86_setkey(struct crypto_shash *tfm, 82 73 const u8 *key, unsigned int keylen) 83 74 { 84 - struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm); 75 + struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(tfm); 85 76 int i; 86 77 87 78 if (keylen != POLYVAL_BLOCK_SIZE) ··· 111 102 const u8 *src, unsigned int srclen) 112 103 { 113 104 struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); 114 - const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); 105 + const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm); 115 106 u8 *pos; 116 107 unsigned int nblocks; 117 108 unsigned int n; ··· 152 143 static int polyval_x86_final(struct shash_desc *desc, u8 *dst) 153 144 { 154 145 struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); 155 - const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); 146 + const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm); 156 147 157 148 if (dctx->bytes) { 158 149 internal_polyval_mul(dctx->buffer, ··· 176 167 .cra_driver_name = "polyval-clmulni", 177 168 .cra_priority = 200, 178 169 .cra_blocksize = POLYVAL_BLOCK_SIZE, 179 - .cra_ctxsize = sizeof(struct polyval_tfm_ctx), 170 + .cra_ctxsize = POLYVAL_CTX_SIZE, 180 171 .cra_module = THIS_MODULE, 181 172 }, 182 173 };
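The polyval fix above over-allocates the tfm context and aligns the pointer up to 16 bytes for the CLMUL asm. A small userspace sketch of the align-up computation it relies on (buffer size here is illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define POLYVAL_ALIGN 16

    /* Round p up to the next multiple of a (a must be a power of two);
     * this is what the kernel's PTR_ALIGN() does. */
    static void *ptr_align(void *p, uintptr_t a)
    {
        return (void *)(((uintptr_t)p + a - 1) & ~(a - 1));
    }

    int main(void)
    {
        /* Over-allocate by ALIGN - 1 extra bytes so a 16-byte-aligned
         * context always fits, mirroring POLYVAL_ALIGN_EXTRA. */
        unsigned char *raw = malloc(64 + POLYVAL_ALIGN - 1);
        if (!raw)
            return 1;
        unsigned char *ctx = ptr_align(raw, POLYVAL_ALIGN);

        printf("raw %p -> aligned ctx %p\n", (void *)raw, (void *)ctx);
        free(raw);
        return 0;
    }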
+1 -1
arch/x86/events/amd/ibs.c
··· 801 801 /* Extension Memory */ 802 802 if (ibs_caps & IBS_CAPS_ZEN4 && 803 803 ibs_data_src == IBS_DATA_SRC_EXT_EXT_MEM) { 804 - data_src->mem_lvl_num = PERF_MEM_LVLNUM_EXTN_MEM; 804 + data_src->mem_lvl_num = PERF_MEM_LVLNUM_CXL; 805 805 if (op_data2->rmt_node) { 806 806 data_src->mem_remote = PERF_MEM_REMOTE_REMOTE; 807 807 /* IBS doesn't provide Remote socket detail */
+4
arch/x86/events/rapl.c
··· 806 806 X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &model_skl), 807 807 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &model_skl), 808 808 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &model_skl), 809 + X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, &model_skl), 809 810 X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &model_spr), 811 + X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &model_skl), 812 + X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &model_skl), 813 + X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, &model_skl), 810 814 {}, 811 815 }; 812 816 MODULE_DEVICE_TABLE(x86cpu, rapl_model_match);
+7 -4
arch/x86/include/asm/string_64.h
··· 10 10 /* Even with __builtin_ the compiler may decide to use the out of line 11 11 function. */ 12 12 13 + #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY) 14 + #include <linux/kmsan_string.h> 15 + #endif 16 + 13 17 #define __HAVE_ARCH_MEMCPY 1 14 - #if defined(__SANITIZE_MEMORY__) 18 + #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY) 15 19 #undef memcpy 16 - void *__msan_memcpy(void *dst, const void *src, size_t size); 17 20 #define memcpy __msan_memcpy 18 21 #else 19 22 extern void *memcpy(void *to, const void *from, size_t len); ··· 24 21 extern void *__memcpy(void *to, const void *from, size_t len); 25 22 26 23 #define __HAVE_ARCH_MEMSET 27 - #if defined(__SANITIZE_MEMORY__) 24 + #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY) 28 25 extern void *__msan_memset(void *s, int c, size_t n); 29 26 #undef memset 30 27 #define memset __msan_memset ··· 70 67 } 71 68 72 69 #define __HAVE_ARCH_MEMMOVE 73 - #if defined(__SANITIZE_MEMORY__) 70 + #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY) 74 71 #undef memmove 75 72 void *__msan_memmove(void *dest, const void *src, size_t len); 76 73 #define memmove __msan_memmove
+7 -6
arch/x86/include/asm/uaccess.h
··· 254 254 #define __put_user_size(x, ptr, size, label) \ 255 255 do { \ 256 256 __typeof__(*(ptr)) __x = (x); /* eval x once */ \ 257 - __chk_user_ptr(ptr); \ 257 + __typeof__(ptr) __ptr = (ptr); /* eval ptr once */ \ 258 + __chk_user_ptr(__ptr); \ 258 259 switch (size) { \ 259 260 case 1: \ 260 - __put_user_goto(__x, ptr, "b", "iq", label); \ 261 + __put_user_goto(__x, __ptr, "b", "iq", label); \ 261 262 break; \ 262 263 case 2: \ 263 - __put_user_goto(__x, ptr, "w", "ir", label); \ 264 + __put_user_goto(__x, __ptr, "w", "ir", label); \ 264 265 break; \ 265 266 case 4: \ 266 - __put_user_goto(__x, ptr, "l", "ir", label); \ 267 + __put_user_goto(__x, __ptr, "l", "ir", label); \ 267 268 break; \ 268 269 case 8: \ 269 - __put_user_goto_u64(__x, ptr, label); \ 270 + __put_user_goto_u64(__x, __ptr, label); \ 270 271 break; \ 271 272 default: \ 272 273 __put_user_bad(); \ 273 274 } \ 274 - instrument_put_user(__x, ptr, size); \ 275 + instrument_put_user(__x, __ptr, size); \ 275 276 } while (0) 276 277 277 278 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
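The __put_user_size() change above evaluates ptr exactly once, so macro arguments with side effects behave like function arguments. A classic userspace demonstration of the bug class (macro names invented; uses GNU statement expressions, as the kernel does):

    #include <stdio.h>

    static int calls;
    static int next_val(void) { return ++calls; }

    /* BAD: the argument expression is expanded, so evaluated, twice. */
    #define DOUBLE_BAD(x) ((x) + (x))

    /* GOOD: evaluate once into a local, the same trick as the
     * __typeof__(ptr) __ptr = (ptr); line added to __put_user_size(). */
    #define DOUBLE_GOOD(x) ({ __typeof__(x) _x = (x); _x + _x; })

    int main(void)
    {
        int a, b;

        calls = 0;
        a = DOUBLE_BAD(next_val()); /* next_val() runs twice: 1 + 2 */
        printf("bad: %d after %d calls\n", a, calls);

        calls = 0;
        b = DOUBLE_GOOD(next_val()); /* runs once: 1 + 1 */
        printf("good: %d after %d calls\n", b, calls);
        return 0;
    }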
+9 -2
arch/x86/kvm/cpuid.c
··· 1133 1133 entry->eax = max(entry->eax, 0x80000021); 1134 1134 break; 1135 1135 case 0x80000001: 1136 + entry->ebx &= ~GENMASK(27, 16); 1136 1137 cpuid_entry_override(entry, CPUID_8000_0001_EDX); 1137 1138 cpuid_entry_override(entry, CPUID_8000_0001_ECX); 1138 1139 break; 1139 1140 case 0x80000006: 1140 - /* L2 cache and TLB: pass through host info. */ 1141 + /* Drop reserved bits, pass host L2 cache and TLB info. */ 1142 + entry->edx &= ~GENMASK(17, 16); 1141 1143 break; 1142 1144 case 0x80000007: /* Advanced power management */ 1143 1145 /* invariant TSC is CPUID.80000007H:EDX[8] */ ··· 1169 1167 g_phys_as = phys_as; 1170 1168 1171 1169 entry->eax = g_phys_as | (virt_as << 8); 1170 + entry->ecx &= ~(GENMASK(31, 16) | GENMASK(11, 8)); 1172 1171 entry->edx = 0; 1173 1172 cpuid_entry_override(entry, CPUID_8000_0008_EBX); 1174 1173 break; ··· 1189 1186 entry->ecx = entry->edx = 0; 1190 1187 break; 1191 1188 case 0x8000001a: 1189 + entry->eax &= GENMASK(2, 0); 1190 + entry->ebx = entry->ecx = entry->edx = 0; 1191 + break; 1192 1192 case 0x8000001e: 1193 1193 break; 1194 1194 case 0x8000001F: ··· 1199 1193 entry->eax = entry->ebx = entry->ecx = entry->edx = 0; 1200 1194 } else { 1201 1195 cpuid_entry_override(entry, CPUID_8000_001F_EAX); 1202 - 1196 + /* Clear NumVMPL since KVM does not support VMPL. */ 1197 + entry->ebx &= ~GENMASK(31, 12); 1203 1198 /* 1204 1199 * Enumerate '0' for "PA bits reduction", the adjusted 1205 1200 * MAXPHYADDR is enumerated directly (see 0x80000008).
+6 -1
arch/x86/kvm/debugfs.c
··· 158 158 static int kvm_mmu_rmaps_stat_open(struct inode *inode, struct file *file) 159 159 { 160 160 struct kvm *kvm = inode->i_private; 161 + int r; 161 162 162 163 if (!kvm_get_kvm_safe(kvm)) 163 164 return -ENOENT; 164 165 165 - return single_open(file, kvm_mmu_rmaps_stat_show, kvm); 166 + r = single_open(file, kvm_mmu_rmaps_stat_show, kvm); 167 + if (r < 0) 168 + kvm_put_kvm(kvm); 169 + 170 + return r; 166 171 } 167 172 168 173 static int kvm_mmu_rmaps_stat_release(struct inode *inode, struct file *file)
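The debugfs fix applies a general rule for ->open() handlers: a reference taken before a fallible call must be dropped on that call's failure path, or the object (here the struct kvm) leaks and can never be torn down. A minimal sketch of the pairing, with hypothetical obj_get()/obj_put() helpers:

/* Sketch only: whatever ->open() acquires, its error paths release. */
static int example_open(struct inode *inode, struct file *file)
{
	struct obj *o = inode->i_private;
	int r;

	if (!obj_get(o))		/* take a reference; may fail */
		return -ENOENT;

	r = single_open(file, example_show, o);
	if (r < 0)
		obj_put(o);		/* balance the reference on failure */

	return r;
}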
+77 -33
arch/x86/kvm/emulate.c
··· 791 791 ctxt->mode, linear); 792 792 } 793 793 794 - static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst, 795 - enum x86emul_mode mode) 794 + static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst) 796 795 { 797 796 ulong linear; 798 797 int rc; ··· 801 802 802 803 if (ctxt->op_bytes != sizeof(unsigned long)) 803 804 addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1); 804 - rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear); 805 + rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear); 805 806 if (rc == X86EMUL_CONTINUE) 806 807 ctxt->_eip = addr.ea; 807 808 return rc; 808 809 } 809 810 810 - static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst) 811 + static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt) 811 812 { 812 - return assign_eip(ctxt, dst, ctxt->mode); 813 + u64 efer; 814 + struct desc_struct cs; 815 + u16 selector; 816 + u32 base3; 817 + 818 + ctxt->ops->get_msr(ctxt, MSR_EFER, &efer); 819 + 820 + if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) { 821 + /* Real mode. cpu must not have long mode active */ 822 + if (efer & EFER_LMA) 823 + return X86EMUL_UNHANDLEABLE; 824 + ctxt->mode = X86EMUL_MODE_REAL; 825 + return X86EMUL_CONTINUE; 826 + } 827 + 828 + if (ctxt->eflags & X86_EFLAGS_VM) { 829 + /* Protected/VM86 mode. cpu must not have long mode active */ 830 + if (efer & EFER_LMA) 831 + return X86EMUL_UNHANDLEABLE; 832 + ctxt->mode = X86EMUL_MODE_VM86; 833 + return X86EMUL_CONTINUE; 834 + } 835 + 836 + if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS)) 837 + return X86EMUL_UNHANDLEABLE; 838 + 839 + if (efer & EFER_LMA) { 840 + if (cs.l) { 841 + /* Proper long mode */ 842 + ctxt->mode = X86EMUL_MODE_PROT64; 843 + } else if (cs.d) { 844 + /* 32 bit compatibility mode*/ 845 + ctxt->mode = X86EMUL_MODE_PROT32; 846 + } else { 847 + ctxt->mode = X86EMUL_MODE_PROT16; 848 + } 849 + } else { 850 + /* Legacy 32 bit / 16 bit mode */ 851 + ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16; 852 + } 853 + 854 + return X86EMUL_CONTINUE; 813 855 } 814 856 815 - static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst, 816 - const struct desc_struct *cs_desc) 857 + static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst) 817 858 { 818 - enum x86emul_mode mode = ctxt->mode; 819 - int rc; 859 + return assign_eip(ctxt, dst); 860 + } 820 861 821 - #ifdef CONFIG_X86_64 822 - if (ctxt->mode >= X86EMUL_MODE_PROT16) { 823 - if (cs_desc->l) { 824 - u64 efer = 0; 862 + static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst) 863 + { 864 + int rc = emulator_recalc_and_set_mode(ctxt); 825 865 826 - ctxt->ops->get_msr(ctxt, MSR_EFER, &efer); 827 - if (efer & EFER_LMA) 828 - mode = X86EMUL_MODE_PROT64; 829 - } else 830 - mode = X86EMUL_MODE_PROT32; /* temporary value */ 831 - } 832 - #endif 833 - if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32) 834 - mode = cs_desc->d ? 
X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16; 835 - rc = assign_eip(ctxt, dst, mode); 836 - if (rc == X86EMUL_CONTINUE) 837 - ctxt->mode = mode; 838 - return rc; 866 + if (rc != X86EMUL_CONTINUE) 867 + return rc; 868 + 869 + return assign_eip(ctxt, dst); 839 870 } 840 871 841 872 static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) ··· 2201 2172 if (rc != X86EMUL_CONTINUE) 2202 2173 return rc; 2203 2174 2204 - rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); 2175 + rc = assign_eip_far(ctxt, ctxt->src.val); 2205 2176 /* Error handling is not implemented. */ 2206 2177 if (rc != X86EMUL_CONTINUE) 2207 2178 return X86EMUL_UNHANDLEABLE; ··· 2279 2250 &new_desc); 2280 2251 if (rc != X86EMUL_CONTINUE) 2281 2252 return rc; 2282 - rc = assign_eip_far(ctxt, eip, &new_desc); 2253 + rc = assign_eip_far(ctxt, eip); 2283 2254 /* Error handling is not implemented. */ 2284 2255 if (rc != X86EMUL_CONTINUE) 2285 2256 return X86EMUL_UNHANDLEABLE; ··· 2461 2432 ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED; 2462 2433 ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0); 2463 2434 2464 - for (i = 0; i < NR_EMULATOR_GPRS; i++) 2435 + for (i = 0; i < 8; i++) 2465 2436 *reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4); 2466 2437 2467 2438 val = GET_SMSTATE(u32, smstate, 0x7fcc); ··· 2518 2489 u16 selector; 2519 2490 int i, r; 2520 2491 2521 - for (i = 0; i < NR_EMULATOR_GPRS; i++) 2492 + for (i = 0; i < 16; i++) 2522 2493 *reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8); 2523 2494 2524 2495 ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78); ··· 2662 2633 * those side effects need to be explicitly handled for both success 2663 2634 * and shutdown. 2664 2635 */ 2665 - return X86EMUL_CONTINUE; 2636 + return emulator_recalc_and_set_mode(ctxt); 2666 2637 2667 2638 emulate_shutdown: 2668 2639 ctxt->ops->triple_fault(ctxt); ··· 2905 2876 ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS); 2906 2877 2907 2878 ctxt->_eip = rdx; 2879 + ctxt->mode = usermode; 2908 2880 *reg_write(ctxt, VCPU_REGS_RSP) = rcx; 2909 2881 2910 2882 return X86EMUL_CONTINUE; ··· 3499 3469 if (rc != X86EMUL_CONTINUE) 3500 3470 return rc; 3501 3471 3502 - rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); 3472 + rc = assign_eip_far(ctxt, ctxt->src.val); 3503 3473 if (rc != X86EMUL_CONTINUE) 3504 3474 goto fail; 3505 3475 ··· 3641 3611 3642 3612 static int em_cr_write(struct x86_emulate_ctxt *ctxt) 3643 3613 { 3644 - if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val)) 3614 + int cr_num = ctxt->modrm_reg; 3615 + int r; 3616 + 3617 + if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val)) 3645 3618 return emulate_gp(ctxt, 0); 3646 3619 3647 3620 /* Disable writeback. */ 3648 3621 ctxt->dst.type = OP_NONE; 3622 + 3623 + if (cr_num == 0) { 3624 + /* 3625 + * CR0 write might have updated CR0.PE and/or CR0.PG 3626 + * which can affect the cpu's execution mode. 3627 + */ 3628 + r = emulator_recalc_and_set_mode(ctxt); 3629 + if (r != X86EMUL_CONTINUE) 3630 + return r; 3631 + } 3632 + 3649 3633 return X86EMUL_CONTINUE; 3650 3634 } 3651 3635
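The new emulator_recalc_and_set_mode() centralizes a decision tree that far jumps/calls, RSM, SYSEXIT and CR0 writes previously duplicated or skipped entirely. Stripped of the EFER.LMA consistency checks, the derivation reduces to a small pure function; this sketch mirrors the hunk above but is illustrative, not the kernel's API:

#include <stdbool.h>

enum x86_mode { MODE_REAL, MODE_VM86, MODE_PROT16, MODE_PROT32, MODE_PROT64 };

/* Distillation of the mode recalculation, omitting the checks that
 * reject long mode being active in real/VM86 state. */
static enum x86_mode derive_mode(bool cr0_pe, bool eflags_vm,
				 bool efer_lma, bool cs_l, bool cs_d)
{
	if (!cr0_pe)
		return MODE_REAL;
	if (eflags_vm)
		return MODE_VM86;
	if (efer_lma && cs_l)
		return MODE_PROT64;
	return cs_d ? MODE_PROT32 : MODE_PROT16;
}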
+5
arch/x86/kvm/vmx/vmx.c
··· 8263 8263 if (!cpu_has_virtual_nmis()) 8264 8264 enable_vnmi = 0; 8265 8265 8266 + #ifdef CONFIG_X86_SGX_KVM 8267 + if (!cpu_has_vmx_encls_vmexit()) 8268 + enable_sgx = false; 8269 + #endif 8270 + 8266 8271 /* 8267 8272 * set_apic_access_page_addr() is used to reload apic access 8268 8273 * page upon invalidation. No need to do anything if not
+21 -6
arch/x86/kvm/x86.c
··· 2315 2315 2316 2316 /* we verify if the enable bit is set... */ 2317 2317 if (system_time & 1) { 2318 - kvm_gfn_to_pfn_cache_init(vcpu->kvm, &vcpu->arch.pv_time, vcpu, 2319 - KVM_HOST_USES_PFN, system_time & ~1ULL, 2320 - sizeof(struct pvclock_vcpu_time_info)); 2318 + kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu, 2319 + KVM_HOST_USES_PFN, system_time & ~1ULL, 2320 + sizeof(struct pvclock_vcpu_time_info)); 2321 2321 } else { 2322 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time); 2322 + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time); 2323 2323 } 2324 2324 2325 2325 return; ··· 3388 3388 3389 3389 static void kvmclock_reset(struct kvm_vcpu *vcpu) 3390 3390 { 3391 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time); 3391 + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time); 3392 3392 vcpu->arch.time = 0; 3393 3393 } 3394 3394 ··· 10044 10044 kvm_x86_ops.nested_ops->has_events(vcpu)) 10045 10045 *req_immediate_exit = true; 10046 10046 10047 - WARN_ON(kvm_is_exception_pending(vcpu)); 10047 + /* 10048 + * KVM must never queue a new exception while injecting an event; KVM 10049 + * is done emulating and should only propagate the to-be-injected event 10050 + * to the VMCS/VMCB. Queueing a new exception can put the vCPU into an 10051 + * infinite loop as KVM will bail from VM-Enter to inject the pending 10052 + * exception and start the cycle all over. 10053 + * 10054 + * Exempt triple faults as they have special handling and won't put the 10055 + * vCPU into an infinite loop. Triple fault can be queued when running 10056 + * VMX without unrestricted guest, as that requires KVM to emulate Real 10057 + * Mode events (see kvm_inject_realmode_interrupt()). 10058 + */ 10059 + WARN_ON_ONCE(vcpu->arch.exception.pending || 10060 + vcpu->arch.exception_vmexit.pending); 10048 10061 return 0; 10049 10062 10050 10063 out: ··· 11828 11815 vcpu->arch.last_vmentry_cpu = -1; 11829 11816 vcpu->arch.regs_avail = ~0; 11830 11817 vcpu->arch.regs_dirty = ~0; 11818 + 11819 + kvm_gpc_init(&vcpu->arch.pv_time); 11831 11820 11832 11821 if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu)) 11833 11822 vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+34 -30
arch/x86/kvm/xen.c
··· 42 42 int idx = srcu_read_lock(&kvm->srcu); 43 43 44 44 if (gfn == GPA_INVALID) { 45 - kvm_gfn_to_pfn_cache_destroy(kvm, gpc); 45 + kvm_gpc_deactivate(kvm, gpc); 46 46 goto out; 47 47 } 48 48 49 49 do { 50 - ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, KVM_HOST_USES_PFN, 51 - gpa, PAGE_SIZE); 50 + ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa, 51 + PAGE_SIZE); 52 52 if (ret) 53 53 goto out; 54 54 ··· 554 554 offsetof(struct compat_vcpu_info, time)); 555 555 556 556 if (data->u.gpa == GPA_INVALID) { 557 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache); 557 + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache); 558 558 r = 0; 559 559 break; 560 560 } 561 561 562 - r = kvm_gfn_to_pfn_cache_init(vcpu->kvm, 563 - &vcpu->arch.xen.vcpu_info_cache, 564 - NULL, KVM_HOST_USES_PFN, data->u.gpa, 565 - sizeof(struct vcpu_info)); 562 + r = kvm_gpc_activate(vcpu->kvm, 563 + &vcpu->arch.xen.vcpu_info_cache, NULL, 564 + KVM_HOST_USES_PFN, data->u.gpa, 565 + sizeof(struct vcpu_info)); 566 566 if (!r) 567 567 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); 568 568 ··· 570 570 571 571 case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO: 572 572 if (data->u.gpa == GPA_INVALID) { 573 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, 574 - &vcpu->arch.xen.vcpu_time_info_cache); 573 + kvm_gpc_deactivate(vcpu->kvm, 574 + &vcpu->arch.xen.vcpu_time_info_cache); 575 575 r = 0; 576 576 break; 577 577 } 578 578 579 - r = kvm_gfn_to_pfn_cache_init(vcpu->kvm, 580 - &vcpu->arch.xen.vcpu_time_info_cache, 581 - NULL, KVM_HOST_USES_PFN, data->u.gpa, 582 - sizeof(struct pvclock_vcpu_time_info)); 579 + r = kvm_gpc_activate(vcpu->kvm, 580 + &vcpu->arch.xen.vcpu_time_info_cache, 581 + NULL, KVM_HOST_USES_PFN, data->u.gpa, 582 + sizeof(struct pvclock_vcpu_time_info)); 583 583 if (!r) 584 584 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu); 585 585 break; ··· 590 590 break; 591 591 } 592 592 if (data->u.gpa == GPA_INVALID) { 593 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, 594 - &vcpu->arch.xen.runstate_cache); 593 + kvm_gpc_deactivate(vcpu->kvm, 594 + &vcpu->arch.xen.runstate_cache); 595 595 r = 0; 596 596 break; 597 597 } 598 598 599 - r = kvm_gfn_to_pfn_cache_init(vcpu->kvm, 600 - &vcpu->arch.xen.runstate_cache, 601 - NULL, KVM_HOST_USES_PFN, data->u.gpa, 602 - sizeof(struct vcpu_runstate_info)); 599 + r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache, 600 + NULL, KVM_HOST_USES_PFN, data->u.gpa, 601 + sizeof(struct vcpu_runstate_info)); 603 602 break; 604 603 605 604 case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT: ··· 1666 1667 case EVTCHNSTAT_ipi: 1667 1668 /* IPI must map back to the same port# */ 1668 1669 if (data->u.evtchn.deliver.port.port != data->u.evtchn.send_port) 1669 - goto out; /* -EINVAL */ 1670 + goto out_noeventfd; /* -EINVAL */ 1670 1671 break; 1671 1672 1672 1673 case EVTCHNSTAT_interdomain: 1673 1674 if (data->u.evtchn.deliver.port.port) { 1674 1675 if (data->u.evtchn.deliver.port.port >= max_evtchn_port(kvm)) 1675 - goto out; /* -EINVAL */ 1676 + goto out_noeventfd; /* -EINVAL */ 1676 1677 } else { 1677 1678 eventfd = eventfd_ctx_fdget(data->u.evtchn.deliver.eventfd.fd); 1678 1679 if (IS_ERR(eventfd)) { 1679 1680 ret = PTR_ERR(eventfd); 1680 - goto out; 1681 + goto out_noeventfd; 1681 1682 } 1682 1683 } 1683 1684 break; ··· 1717 1718 out: 1718 1719 if (eventfd) 1719 1720 eventfd_ctx_put(eventfd); 1721 + out_noeventfd: 1720 1722 kfree(evtchnfd); 1721 1723 return ret; 1722 1724 } ··· 1816 1816 { 1817 1817 vcpu->arch.xen.vcpu_id = vcpu->vcpu_idx; 1818 1818 
vcpu->arch.xen.poll_evtchn = 0; 1819 + 1819 1820 timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0); 1821 + 1822 + kvm_gpc_init(&vcpu->arch.xen.runstate_cache); 1823 + kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache); 1824 + kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache); 1820 1825 } 1821 1826 1822 1827 void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu) ··· 1829 1824 if (kvm_xen_timer_enabled(vcpu)) 1830 1825 kvm_xen_stop_timer(vcpu); 1831 1826 1832 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, 1833 - &vcpu->arch.xen.runstate_cache); 1834 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, 1835 - &vcpu->arch.xen.vcpu_info_cache); 1836 - kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, 1837 - &vcpu->arch.xen.vcpu_time_info_cache); 1827 + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache); 1828 + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache); 1829 + kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache); 1830 + 1838 1831 del_timer_sync(&vcpu->arch.xen.poll_timer); 1839 1832 } 1840 1833 1841 1834 void kvm_xen_init_vm(struct kvm *kvm) 1842 1835 { 1843 1836 idr_init(&kvm->arch.xen.evtchn_ports); 1837 + kvm_gpc_init(&kvm->arch.xen.shinfo_cache); 1844 1838 } 1845 1839 1846 1840 void kvm_xen_destroy_vm(struct kvm *kvm) ··· 1847 1843 struct evtchnfd *evtchnfd; 1848 1844 int i; 1849 1845 1850 - kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache); 1846 + kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache); 1851 1847 1852 1848 idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) { 1853 1849 if (!evtchnfd->deliver.port.port)
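Together with the x86.c hunk above, this completes the rename of kvm_gfn_to_pfn_cache_init/destroy to kvm_gpc_activate/deactivate and adds a distinct kvm_gpc_init() at vCPU and VM creation. The point of the split is lifecycle hygiene: one-time construction (such as lock setup) runs exactly once, while activation may toggle many times. A schematic, not the real gfn_to_pfn_cache:

/* Schematic only. init() runs once at object creation; activate() and
 * deactivate() may repeat, and deactivating an inactive cache must be
 * a safe no-op. */
struct cache_sketch {
	spinlock_t lock;	/* initialized exactly once */
	bool active;
};

static void cache_init(struct cache_sketch *c)
{
	spin_lock_init(&c->lock);
	c->active = false;
}

static void cache_deactivate(struct cache_sketch *c)
{
	if (!c->active)
		return;		/* idempotent by design */
	c->active = false;
}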
+1
arch/x86/purgatory/Makefile
··· 26 26 KASAN_SANITIZE := n 27 27 UBSAN_SANITIZE := n 28 28 KCSAN_SANITIZE := n 29 + KMSAN_SANITIZE := n 29 30 KCOV_INSTRUMENT := n 30 31 31 32 # These are adjustments to the compiler flags used for objects that
+6 -1
block/blk-mq.c
··· 611 611 .nr_tags = 1, 612 612 }; 613 613 u64 alloc_time_ns = 0; 614 + struct request *rq; 614 615 unsigned int cpu; 615 616 unsigned int tag; 616 617 int ret; ··· 661 660 tag = blk_mq_get_tag(&data); 662 661 if (tag == BLK_MQ_NO_TAG) 663 662 goto out_queue_exit; 664 - return blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag, 663 + rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag, 665 664 alloc_time_ns); 665 + rq->__data_len = 0; 666 + rq->__sector = (sector_t) -1; 667 + rq->bio = rq->biotail = NULL; 668 + return rq; 666 669 667 670 out_queue_exit: 668 671 blk_queue_exit(q);
+8 -4
block/genhd.c
··· 410 410 * Otherwise just allocate the device numbers for both the whole device 411 411 * and all partitions from the extended dev_t space. 412 412 */ 413 + ret = -EINVAL; 413 414 if (disk->major) { 414 415 if (WARN_ON(!disk->minors)) 415 - return -EINVAL; 416 + goto out_exit_elevator; 416 417 417 418 if (disk->minors > DISK_MAX_PARTS) { 418 419 pr_err("block: can't allocate more than %d partitions\n", ··· 421 420 disk->minors = DISK_MAX_PARTS; 422 421 } 423 422 if (disk->first_minor + disk->minors > MINORMASK + 1) 424 - return -EINVAL; 423 + goto out_exit_elevator; 425 424 } else { 426 425 if (WARN_ON(disk->minors)) 427 - return -EINVAL; 426 + goto out_exit_elevator; 428 427 429 428 ret = blk_alloc_ext_minor(); 430 429 if (ret < 0) 431 - return ret; 430 + goto out_exit_elevator; 432 431 disk->major = BLOCK_EXT_MAJOR; 433 432 disk->first_minor = ret; 434 433 } ··· 541 540 out_free_ext_minor: 542 541 if (disk->major == BLOCK_EXT_MAJOR) 543 542 blk_free_ext_minor(disk->first_minor); 543 + out_exit_elevator: 544 + if (disk->queue->elevator) 545 + elevator_exit(disk->queue); 544 546 return ret; 545 547 } 546 548 EXPORT_SYMBOL(device_add_disk);
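The genhd.c conversion from scattered return statements to a shared out_exit_elevator label is the kernel's standard unwind idiom: once a resource has been set up, every later failure funnels through a label that tears it down, so no early return can leak. The shape of the idiom, with hypothetical acquire/release helpers:

/* Sketch of centralized error unwinding. */
static int setup(void)
{
	int ret;

	ret = acquire_a();
	if (ret)
		return ret;		/* nothing to undo yet */

	ret = acquire_b();
	if (ret)
		goto out_release_a;	/* must undo acquire_a() */

	return 0;

out_release_a:
	release_a();
	return ret;
}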
+1 -1
drivers/acpi/acpi_pcc.c
··· 27 27 * Arbitrary retries in case the remote processor is slow to respond 28 28 * to PCC commands 29 29 */ 30 - #define PCC_CMD_WAIT_RETRIES_NUM 500 30 + #define PCC_CMD_WAIT_RETRIES_NUM 500ULL 31 31 32 32 struct pcc_data { 33 33 struct pcc_mbox_chan *pcc_chan;
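The ULL suffix matters because the retry count is later multiplied into a timeout; with a plain int constant that product is computed in 32-bit arithmetic and can wrap before being widened. A runnable demonstration of the hazard (constants chosen for illustration, not taken from the driver):

#include <stdio.h>

#define RETRIES		500	/* int: products stay 32-bit */
#define RETRIES_ULL	500ULL	/* forces 64-bit arithmetic */

int main(void)
{
	unsigned int usecs = 10U * 1000 * 1000;	/* 10 s in microseconds */

	/* 500 * 10000000 = 5e9 does not fit in 32 bits and wraps: */
	printf("%u\n", RETRIES * usecs);	/* 705032704 */
	printf("%llu\n", RETRIES_ULL * usecs);	/* 5000000000 */
	return 0;
}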
+7
drivers/acpi/resource.c
··· 425 425 DMI_MATCH(DMI_BOARD_NAME, "S5402ZA"), 426 426 }, 427 427 }, 428 + { 429 + .ident = "Asus Vivobook S5602ZA", 430 + .matches = { 431 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 432 + DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"), 433 + }, 434 + }, 428 435 { } 429 436 }; 430 437
+1
drivers/acpi/scan.c
··· 789 789 static const char * const acpi_ignore_dep_ids[] = { 790 790 "PNP0D80", /* Windows-compatible System Power Management Controller */ 791 791 "INT33BD", /* Intel Baytrail Mailbox Device */ 792 + "LATT2021", /* Lattice FW Update Client Driver */ 792 793 NULL 793 794 }; 794 795
+4
drivers/base/power/domain.c
··· 2952 2952 np = it.node; 2953 2953 if (!of_match_node(idle_state_match, np)) 2954 2954 continue; 2955 + 2956 + if (!of_device_is_available(np)) 2957 + continue; 2958 + 2955 2959 if (states) { 2956 2960 ret = genpd_parse_state(&states[i], np); 2957 2961 if (ret) {
+2 -2
drivers/base/property.c
··· 229 229 * Find a given string in a string array and if it is found return the 230 230 * index back. 231 231 * 232 - * Return: %0 if the property was found (success), 232 + * Return: index, starting from %0, if the property was found (success), 233 233 * %-EINVAL if given arguments are not valid, 234 234 * %-ENODATA if the property does not have a value, 235 235 * %-EPROTO if the property is not an array of strings, ··· 450 450 * Find a given string in a string array and if it is found return the 451 451 * index back. 452 452 * 453 - * Return: %0 if the property was found (success), 453 + * Return: index, starting from %0, if the property was found (success), 454 454 * %-EINVAL if given arguments are not valid, 455 455 * %-ENODATA if the property does not have a value, 456 456 * %-EPROTO if the property is not an array of strings,
+3 -1
drivers/block/rbd.c
··· 7222 7222 int ret; 7223 7223 7224 7224 ret = device_register(&rbd_root_dev); 7225 - if (ret < 0) 7225 + if (ret < 0) { 7226 + put_device(&rbd_root_dev); 7226 7227 return ret; 7228 + } 7227 7229 7228 7230 ret = bus_register(&rbd_bus_type); 7229 7231 if (ret < 0)
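This follows the documented device_register() contract: even on failure the device's refcount has already been initialized, so the caller must release its reference with put_device(), which ends up invoking the release callback, rather than freeing the structure or simply returning. The rule in isolation:

/* Sketch: after device_register(), cleanup always goes through
 * put_device(), on both the success and the failure side. */
static int register_example(struct device *dev)
{
	int ret = device_register(dev);

	if (ret < 0) {
		put_device(dev);	/* drops the ref; ->release() frees */
		return ret;
	}

	return 0;
}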
+1 -1
drivers/bluetooth/virtio_bt.c
··· 219 219 if (!skb) 220 220 return; 221 221 222 - skb->len = len; 222 + skb_put(skb, len); 223 223 virtbt_rx_handle(vbt, skb); 224 224 225 225 if (virtbt_add_inbuf(vbt) < 0)
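Assigning skb->len directly leaves the skb's tail pointer untouched, so the accounted length and the actual data area disagree; skb_put() advances both together and checks for overruns. A deliberately simplified model of the invariant (not the real struct sk_buff):

/* Simplified model: the write pointer and the accounted length must
 * move in lockstep, which is what skb_put() guarantees. */
struct buf {
	unsigned char *head, *tail;
	unsigned int len;
};

static unsigned char *buf_put(struct buf *b, unsigned int len)
{
	unsigned char *old_tail = b->tail;

	b->tail += len;		/* advance the write pointer ... */
	b->len += len;		/* ... and the length, together */
	return old_tail;	/* caller writes its payload here */
}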
+2 -2
drivers/char/random.c
··· 791 791 #endif 792 792 793 793 for (i = 0, arch_bits = sizeof(entropy) * 8; i < ARRAY_SIZE(entropy);) { 794 - longs = arch_get_random_seed_longs(entropy, ARRAY_SIZE(entropy) - i); 794 + longs = arch_get_random_seed_longs_early(entropy, ARRAY_SIZE(entropy) - i); 795 795 if (longs) { 796 796 _mix_pool_bytes(entropy, sizeof(*entropy) * longs); 797 797 i += longs; 798 798 continue; 799 799 } 800 - longs = arch_get_random_longs(entropy, ARRAY_SIZE(entropy) - i); 800 + longs = arch_get_random_longs_early(entropy, ARRAY_SIZE(entropy) - i); 801 801 if (longs) { 802 802 _mix_pool_bytes(entropy, sizeof(*entropy) * longs); 803 803 i += longs;
+42 -22
drivers/counter/104-quad-8.c
··· 232 232 COUNTER_FUNCTION_QUADRATURE_X4, 233 233 }; 234 234 235 + static int quad8_function_get(const struct quad8 *const priv, const size_t id, 236 + enum counter_function *const function) 237 + { 238 + if (!priv->quadrature_mode[id]) { 239 + *function = COUNTER_FUNCTION_PULSE_DIRECTION; 240 + return 0; 241 + } 242 + 243 + switch (priv->quadrature_scale[id]) { 244 + case 0: 245 + *function = COUNTER_FUNCTION_QUADRATURE_X1_A; 246 + return 0; 247 + case 1: 248 + *function = COUNTER_FUNCTION_QUADRATURE_X2_A; 249 + return 0; 250 + case 2: 251 + *function = COUNTER_FUNCTION_QUADRATURE_X4; 252 + return 0; 253 + default: 254 + /* should never reach this path */ 255 + return -EINVAL; 256 + } 257 + } 258 + 235 259 static int quad8_function_read(struct counter_device *counter, 236 260 struct counter_count *count, 237 261 enum counter_function *function) 238 262 { 239 263 struct quad8 *const priv = counter_priv(counter); 240 - const int id = count->id; 241 264 unsigned long irqflags; 265 + int retval; 242 266 243 267 spin_lock_irqsave(&priv->lock, irqflags); 244 268 245 - if (priv->quadrature_mode[id]) 246 - switch (priv->quadrature_scale[id]) { 247 - case 0: 248 - *function = COUNTER_FUNCTION_QUADRATURE_X1_A; 249 - break; 250 - case 1: 251 - *function = COUNTER_FUNCTION_QUADRATURE_X2_A; 252 - break; 253 - case 2: 254 - *function = COUNTER_FUNCTION_QUADRATURE_X4; 255 - break; 256 - } 257 - else 258 - *function = COUNTER_FUNCTION_PULSE_DIRECTION; 269 + retval = quad8_function_get(priv, count->id, function); 259 270 260 271 spin_unlock_irqrestore(&priv->lock, irqflags); 261 272 262 - return 0; 273 + return retval; 263 274 } 264 275 265 276 static int quad8_function_write(struct counter_device *counter, ··· 370 359 enum counter_synapse_action *action) 371 360 { 372 361 struct quad8 *const priv = counter_priv(counter); 362 + unsigned long irqflags; 373 363 int err; 374 364 enum counter_function function; 375 365 const size_t signal_a_id = count->synapses[0].signal->id; ··· 386 374 return 0; 387 375 } 388 376 389 - err = quad8_function_read(counter, count, &function); 390 - if (err) 377 + spin_lock_irqsave(&priv->lock, irqflags); 378 + 379 + /* Get Count function and direction atomically */ 380 + err = quad8_function_get(priv, count->id, &function); 381 + if (err) { 382 + spin_unlock_irqrestore(&priv->lock, irqflags); 391 383 return err; 384 + } 385 + err = quad8_direction_read(counter, count, &direction); 386 + if (err) { 387 + spin_unlock_irqrestore(&priv->lock, irqflags); 388 + return err; 389 + } 390 + 391 + spin_unlock_irqrestore(&priv->lock, irqflags); 392 392 393 393 /* Default action mode */ 394 394 *action = COUNTER_SYNAPSE_ACTION_NONE; ··· 413 389 return 0; 414 390 case COUNTER_FUNCTION_QUADRATURE_X1_A: 415 391 if (synapse->signal->id == signal_a_id) { 416 - err = quad8_direction_read(counter, count, &direction); 417 - if (err) 418 - return err; 419 - 420 392 if (direction == COUNTER_COUNT_DIRECTION_FORWARD) 421 393 *action = COUNTER_SYNAPSE_ACTION_RISING_EDGE; 422 394 else
+14 -4
drivers/counter/microchip-tcb-capture.c
··· 28 28 int qdec_mode; 29 29 int num_channels; 30 30 int channel[2]; 31 - bool trig_inverted; 32 31 }; 33 32 34 33 static const enum counter_function mchp_tc_count_functions[] = { ··· 152 153 153 154 regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], SR), &sr); 154 155 155 - if (priv->trig_inverted) 156 + if (signal->id == 1) 156 157 sigstatus = (sr & ATMEL_TC_MTIOB); 157 158 else 158 159 sigstatus = (sr & ATMEL_TC_MTIOA); ··· 169 170 { 170 171 struct mchp_tc_data *const priv = counter_priv(counter); 171 172 u32 cmr; 173 + 174 + if (priv->qdec_mode) { 175 + *action = COUNTER_SYNAPSE_ACTION_BOTH_EDGES; 176 + return 0; 177 + } 178 + 179 + /* Only TIOA signal is evaluated in non-QDEC mode */ 180 + if (synapse->signal->id != 0) { 181 + *action = COUNTER_SYNAPSE_ACTION_NONE; 182 + return 0; 183 + } 172 184 173 185 regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], CMR), &cmr); 174 186 ··· 209 199 struct mchp_tc_data *const priv = counter_priv(counter); 210 200 u32 edge = ATMEL_TC_ETRGEDG_NONE; 211 201 212 - /* QDEC mode is rising edge only */ 213 - if (priv->qdec_mode) 202 + /* QDEC mode is rising edge only; only TIOA handled in non-QDEC mode */ 203 + if (priv->qdec_mode || synapse->signal->id != 0) 214 204 return -EINVAL; 215 205 216 206 switch (action) {
+4 -3
drivers/counter/ti-ecap-capture.c
··· 377 377 COUNTER_SIGNAL_POLARITY_NEGATIVE, 378 378 }; 379 379 380 - static DEFINE_COUNTER_ARRAY_POLARITY(ecap_cnt_pol_array, ecap_cnt_pol_avail, ECAP_NB_CEVT); 380 + static DEFINE_COUNTER_AVAILABLE(ecap_cnt_pol_available, ecap_cnt_pol_avail); 381 + static DEFINE_COUNTER_ARRAY_POLARITY(ecap_cnt_pol_array, ecap_cnt_pol_available, ECAP_NB_CEVT); 381 382 382 383 static struct counter_comp ecap_cnt_signal_ext[] = { 383 384 COUNTER_COMP_ARRAY_POLARITY(ecap_cnt_pol_read, ecap_cnt_pol_write, ecap_cnt_pol_array), ··· 480 479 int ret; 481 480 482 481 counter_dev = devm_counter_alloc(dev, sizeof(*ecap_dev)); 483 - if (IS_ERR(counter_dev)) 484 - return PTR_ERR(counter_dev); 482 + if (!counter_dev) 483 + return -ENOMEM; 485 484 486 485 counter_dev->name = ECAP_DRV_NAME; 487 486 counter_dev->parent = dev;
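The probe fix corrects a mismatch between two kernel error conventions: devm_counter_alloc() is an allocator and reports failure with NULL, while the old code tested it with IS_ERR(), which never fires for a NULL pointer. A sketch contrasting the two conventions in one hypothetical probe path:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

/* Sketch: allocators return NULL; "getter" APIs return ERR_PTR(). */
static int probe_example(struct device *dev)
{
	void *buf = devm_kzalloc(dev, 64, GFP_KERNEL);
	struct clk *clk;

	if (!buf)			/* allocator convention */
		return -ENOMEM;

	clk = devm_clk_get(dev, NULL);
	if (IS_ERR(clk))		/* ERR_PTR convention */
		return PTR_ERR(clk);

	return 0;
}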
+50 -89
drivers/cpufreq/intel_pstate.c
··· 27 27 #include <linux/pm_qos.h> 28 28 #include <trace/events/power.h> 29 29 30 + #include <asm/cpu.h> 30 31 #include <asm/div64.h> 31 32 #include <asm/msr.h> 32 33 #include <asm/cpu_device_id.h> ··· 281 280 * structure is used to store those callbacks. 282 281 */ 283 282 struct pstate_funcs { 284 - int (*get_max)(void); 285 - int (*get_max_physical)(void); 286 - int (*get_min)(void); 287 - int (*get_turbo)(void); 283 + int (*get_max)(int cpu); 284 + int (*get_max_physical)(int cpu); 285 + int (*get_min)(int cpu); 286 + int (*get_turbo)(int cpu); 288 287 int (*get_scaling)(void); 289 288 int (*get_cpu_scaling)(int cpu); 290 289 int (*get_aperf_mperf_shift)(void); ··· 398 397 return cppc_perf.guaranteed_perf; 399 398 400 399 return cppc_perf.nominal_perf; 401 - } 402 - 403 - static u32 intel_pstate_cppc_nominal(int cpu) 404 - { 405 - u64 nominal_perf; 406 - 407 - if (cppc_get_nominal_perf(cpu, &nominal_perf)) 408 - return 0; 409 - 410 - return nominal_perf; 411 400 } 412 401 #else /* CONFIG_ACPI_CPPC_LIB */ 413 402 static inline void intel_pstate_set_itmt_prio(int cpu) ··· 522 531 { 523 532 int perf_ctl_max_phys = cpu->pstate.max_pstate_physical; 524 533 int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling; 525 - int perf_ctl_turbo = pstate_funcs.get_turbo(); 526 - int turbo_freq = perf_ctl_turbo * perf_ctl_scaling; 534 + int perf_ctl_turbo = pstate_funcs.get_turbo(cpu->cpu); 527 535 int scaling = cpu->pstate.scaling; 528 536 529 537 pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys); 530 - pr_debug("CPU%d: perf_ctl_max = %d\n", cpu->cpu, pstate_funcs.get_max()); 531 538 pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo); 532 539 pr_debug("CPU%d: perf_ctl_scaling = %d\n", cpu->cpu, perf_ctl_scaling); 533 540 pr_debug("CPU%d: HWP_CAP guaranteed = %d\n", cpu->cpu, cpu->pstate.max_pstate); 534 541 pr_debug("CPU%d: HWP_CAP highest = %d\n", cpu->cpu, cpu->pstate.turbo_pstate); 535 542 pr_debug("CPU%d: HWP-to-frequency scaling factor: %d\n", cpu->cpu, scaling); 536 543 537 - /* 538 - * If the product of the HWP performance scaling factor and the HWP_CAP 539 - * highest performance is greater than the maximum turbo frequency 540 - * corresponding to the pstate_funcs.get_turbo() return value, the 541 - * scaling factor is too high, so recompute it to make the HWP_CAP 542 - * highest performance correspond to the maximum turbo frequency. 
543 - */ 544 - cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * scaling; 545 - if (turbo_freq < cpu->pstate.turbo_freq) { 546 - cpu->pstate.turbo_freq = turbo_freq; 547 - scaling = DIV_ROUND_UP(turbo_freq, cpu->pstate.turbo_pstate); 548 - cpu->pstate.scaling = scaling; 549 - 550 - pr_debug("CPU%d: refined HWP-to-frequency scaling factor: %d\n", 551 - cpu->cpu, scaling); 552 - } 553 - 544 + cpu->pstate.turbo_freq = rounddown(cpu->pstate.turbo_pstate * scaling, 545 + perf_ctl_scaling); 554 546 cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling, 555 547 perf_ctl_scaling); 556 548 ··· 1714 1740 intel_pstate_update_epp_defaults(cpudata); 1715 1741 } 1716 1742 1717 - static int atom_get_min_pstate(void) 1743 + static int atom_get_min_pstate(int not_used) 1718 1744 { 1719 1745 u64 value; 1720 1746 ··· 1722 1748 return (value >> 8) & 0x7F; 1723 1749 } 1724 1750 1725 - static int atom_get_max_pstate(void) 1751 + static int atom_get_max_pstate(int not_used) 1726 1752 { 1727 1753 u64 value; 1728 1754 ··· 1730 1756 return (value >> 16) & 0x7F; 1731 1757 } 1732 1758 1733 - static int atom_get_turbo_pstate(void) 1759 + static int atom_get_turbo_pstate(int not_used) 1734 1760 { 1735 1761 u64 value; 1736 1762 ··· 1808 1834 cpudata->vid.turbo = value & 0x7f; 1809 1835 } 1810 1836 1811 - static int core_get_min_pstate(void) 1837 + static int core_get_min_pstate(int cpu) 1812 1838 { 1813 1839 u64 value; 1814 1840 1815 - rdmsrl(MSR_PLATFORM_INFO, value); 1841 + rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &value); 1816 1842 return (value >> 40) & 0xFF; 1817 1843 } 1818 1844 1819 - static int core_get_max_pstate_physical(void) 1845 + static int core_get_max_pstate_physical(int cpu) 1820 1846 { 1821 1847 u64 value; 1822 1848 1823 - rdmsrl(MSR_PLATFORM_INFO, value); 1849 + rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &value); 1824 1850 return (value >> 8) & 0xFF; 1825 1851 } 1826 1852 1827 - static int core_get_tdp_ratio(u64 plat_info) 1853 + static int core_get_tdp_ratio(int cpu, u64 plat_info) 1828 1854 { 1829 1855 /* Check how many TDP levels present */ 1830 1856 if (plat_info & 0x600000000) { ··· 1834 1860 int err; 1835 1861 1836 1862 /* Get the TDP level (0, 1, 2) to get ratios */ 1837 - err = rdmsrl_safe(MSR_CONFIG_TDP_CONTROL, &tdp_ctrl); 1863 + err = rdmsrl_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl); 1838 1864 if (err) 1839 1865 return err; 1840 1866 1841 1867 /* TDP MSR are continuous starting at 0x648 */ 1842 1868 tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03); 1843 - err = rdmsrl_safe(tdp_msr, &tdp_ratio); 1869 + err = rdmsrl_safe_on_cpu(cpu, tdp_msr, &tdp_ratio); 1844 1870 if (err) 1845 1871 return err; 1846 1872 ··· 1857 1883 return -ENXIO; 1858 1884 } 1859 1885 1860 - static int core_get_max_pstate(void) 1886 + static int core_get_max_pstate(int cpu) 1861 1887 { 1862 1888 u64 tar; 1863 1889 u64 plat_info; ··· 1865 1891 int tdp_ratio; 1866 1892 int err; 1867 1893 1868 - rdmsrl(MSR_PLATFORM_INFO, plat_info); 1894 + rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info); 1869 1895 max_pstate = (plat_info >> 8) & 0xFF; 1870 1896 1871 - tdp_ratio = core_get_tdp_ratio(plat_info); 1897 + tdp_ratio = core_get_tdp_ratio(cpu, plat_info); 1872 1898 if (tdp_ratio <= 0) 1873 1899 return max_pstate; 1874 1900 ··· 1877 1903 return tdp_ratio; 1878 1904 } 1879 1905 1880 - err = rdmsrl_safe(MSR_TURBO_ACTIVATION_RATIO, &tar); 1906 + err = rdmsrl_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar); 1881 1907 if (!err) { 1882 1908 int tar_levels; 1883 1909 ··· 1892 1918 return max_pstate; 1893 1919 } 1894 
1920 1895 - static int core_get_turbo_pstate(void) 1921 + static int core_get_turbo_pstate(int cpu) 1896 1922 { 1897 1923 u64 value; 1898 1924 int nont, ret; 1899 1925 1900 - rdmsrl(MSR_TURBO_RATIO_LIMIT, value); 1901 - nont = core_get_max_pstate(); 1926 + rdmsrl_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value); 1927 + nont = core_get_max_pstate(cpu); 1902 1928 ret = (value) & 255; 1903 1929 if (ret <= nont) 1904 1930 ret = nont; ··· 1926 1952 return 10; 1927 1953 } 1928 1954 1929 - static int knl_get_turbo_pstate(void) 1955 + static int knl_get_turbo_pstate(int cpu) 1930 1956 { 1931 1957 u64 value; 1932 1958 int nont, ret; 1933 1959 1934 - rdmsrl(MSR_TURBO_RATIO_LIMIT, value); 1935 - nont = core_get_max_pstate(); 1960 + rdmsrl_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value); 1961 + nont = core_get_max_pstate(cpu); 1936 1962 ret = (((value) >> 8) & 0xFF); 1937 1963 if (ret <= nont) 1938 1964 ret = nont; 1939 1965 return ret; 1940 1966 } 1941 1967 1942 - #ifdef CONFIG_ACPI_CPPC_LIB 1943 - static u32 hybrid_ref_perf; 1968 + static void hybrid_get_type(void *data) 1969 + { 1970 + u8 *cpu_type = data; 1971 + 1972 + *cpu_type = get_this_hybrid_cpu_type(); 1973 + } 1944 1974 1945 1975 static int hybrid_get_cpu_scaling(int cpu) 1946 1976 { 1947 - return DIV_ROUND_UP(core_get_scaling() * hybrid_ref_perf, 1948 - intel_pstate_cppc_nominal(cpu)); 1977 + u8 cpu_type = 0; 1978 + 1979 + smp_call_function_single(cpu, hybrid_get_type, &cpu_type, 1); 1980 + /* P-cores have a smaller perf level-to-freqency scaling factor. */ 1981 + if (cpu_type == 0x40) 1982 + return 78741; 1983 + 1984 + return core_get_scaling(); 1949 1985 } 1950 - 1951 - static void intel_pstate_cppc_set_cpu_scaling(void) 1952 - { 1953 - u32 min_nominal_perf = U32_MAX; 1954 - int cpu; 1955 - 1956 - for_each_present_cpu(cpu) { 1957 - u32 nominal_perf = intel_pstate_cppc_nominal(cpu); 1958 - 1959 - if (nominal_perf && nominal_perf < min_nominal_perf) 1960 - min_nominal_perf = nominal_perf; 1961 - } 1962 - 1963 - if (min_nominal_perf < U32_MAX) { 1964 - hybrid_ref_perf = min_nominal_perf; 1965 - pstate_funcs.get_cpu_scaling = hybrid_get_cpu_scaling; 1966 - } 1967 - } 1968 - #else 1969 - static inline void intel_pstate_cppc_set_cpu_scaling(void) 1970 - { 1971 - } 1972 - #endif /* CONFIG_ACPI_CPPC_LIB */ 1973 1986 1974 1987 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) 1975 1988 { ··· 1986 2025 1987 2026 static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) 1988 2027 { 1989 - int perf_ctl_max_phys = pstate_funcs.get_max_physical(); 2028 + int perf_ctl_max_phys = pstate_funcs.get_max_physical(cpu->cpu); 1990 2029 int perf_ctl_scaling = pstate_funcs.get_scaling(); 1991 2030 1992 - cpu->pstate.min_pstate = pstate_funcs.get_min(); 2031 + cpu->pstate.min_pstate = pstate_funcs.get_min(cpu->cpu); 1993 2032 cpu->pstate.max_pstate_physical = perf_ctl_max_phys; 1994 2033 cpu->pstate.perf_ctl_scaling = perf_ctl_scaling; 1995 2034 ··· 2005 2044 } 2006 2045 } else { 2007 2046 cpu->pstate.scaling = perf_ctl_scaling; 2008 - cpu->pstate.max_pstate = pstate_funcs.get_max(); 2009 - cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); 2047 + cpu->pstate.max_pstate = pstate_funcs.get_max(cpu->cpu); 2048 + cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(cpu->cpu); 2010 2049 } 2011 2050 2012 2051 if (cpu->pstate.scaling == perf_ctl_scaling) { ··· 3182 3221 3183 3222 static int __init intel_pstate_msrs_not_valid(void) 3184 3223 { 3185 - if (!pstate_funcs.get_max() || 3186 - !pstate_funcs.get_min() || 3187 - !pstate_funcs.get_turbo()) 3224 + if 
(!pstate_funcs.get_max(0) || 3225 + !pstate_funcs.get_min(0) || 3226 + !pstate_funcs.get_turbo(0)) 3188 3227 return -ENODEV; 3189 3228 3190 3229 return 0; ··· 3411 3450 default_driver = &intel_pstate; 3412 3451 3413 3452 if (boot_cpu_has(X86_FEATURE_HYBRID_CPU)) 3414 - intel_pstate_cppc_set_cpu_scaling(); 3453 + pstate_funcs.get_cpu_scaling = hybrid_get_cpu_scaling; 3415 3454 3416 3455 goto hwp_cpu_matched; 3417 3456 }
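Two related changes run through the intel_pstate diff: the pstate callbacks now take a CPU number and read MSRs with rdmsrl_on_cpu() so the values describe the CPU in question rather than whichever CPU happens to execute the code, and hybrid_get_cpu_scaling() queries the core type by running get_this_hybrid_cpu_type() on the target CPU, since that helper is only meaningful where it executes. The cross-CPU call pattern in isolation:

/* Sketch: to sample strictly per-CPU state, run the accessor on that
 * CPU; with wait=1, smp_call_function_single() blocks until done. */
static void read_type(void *data)
{
	u8 *out = data;

	*out = get_this_hybrid_cpu_type();	/* valid only on this CPU */
}

static u8 cpu_type_of(int cpu)
{
	u8 type = 0;

	smp_call_function_single(cpu, read_type, &type, 1);
	return type;
}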
+44 -16
drivers/gpio/gpio-tegra.c
··· 18 18 #include <linux/of_device.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/module.h> 21 + #include <linux/seq_file.h> 21 22 #include <linux/irqdomain.h> 22 23 #include <linux/irqchip/chained_irq.h> 23 24 #include <linux/pinctrl/consumer.h> ··· 95 94 struct tegra_gpio_bank *bank_info; 96 95 const struct tegra_gpio_soc_config *soc; 97 96 struct gpio_chip gc; 98 - struct irq_chip ic; 99 97 u32 bank_count; 100 98 unsigned int *irqs; 101 99 }; ··· 288 288 unsigned int gpio = d->hwirq; 289 289 290 290 tegra_gpio_mask_write(tgi, GPIO_MSK_INT_ENB(tgi, gpio), gpio, 0); 291 + gpiochip_disable_irq(chip, gpio); 291 292 } 292 293 293 294 static void tegra_gpio_irq_unmask(struct irq_data *d) ··· 297 296 struct tegra_gpio_info *tgi = gpiochip_get_data(chip); 298 297 unsigned int gpio = d->hwirq; 299 298 299 + gpiochip_enable_irq(chip, gpio); 300 300 tegra_gpio_mask_write(tgi, GPIO_MSK_INT_ENB(tgi, gpio), gpio, 1); 301 301 } 302 302 ··· 600 598 tegra_gpio_enable(tgi, d->hwirq); 601 599 } 602 600 601 + static void tegra_gpio_irq_print_chip(struct irq_data *d, struct seq_file *s) 602 + { 603 + struct gpio_chip *chip = irq_data_get_irq_chip_data(d); 604 + 605 + seq_printf(s, dev_name(chip->parent)); 606 + } 607 + 608 + static const struct irq_chip tegra_gpio_irq_chip = { 609 + .irq_shutdown = tegra_gpio_irq_shutdown, 610 + .irq_ack = tegra_gpio_irq_ack, 611 + .irq_mask = tegra_gpio_irq_mask, 612 + .irq_unmask = tegra_gpio_irq_unmask, 613 + .irq_set_type = tegra_gpio_irq_set_type, 614 + #ifdef CONFIG_PM_SLEEP 615 + .irq_set_wake = tegra_gpio_irq_set_wake, 616 + #endif 617 + .irq_print_chip = tegra_gpio_irq_print_chip, 618 + .irq_request_resources = tegra_gpio_irq_request_resources, 619 + .irq_release_resources = tegra_gpio_irq_release_resources, 620 + .flags = IRQCHIP_IMMUTABLE, 621 + }; 622 + 623 + static const struct irq_chip tegra210_gpio_irq_chip = { 624 + .irq_shutdown = tegra_gpio_irq_shutdown, 625 + .irq_ack = tegra_gpio_irq_ack, 626 + .irq_mask = tegra_gpio_irq_mask, 627 + .irq_unmask = tegra_gpio_irq_unmask, 628 + .irq_set_affinity = tegra_gpio_irq_set_affinity, 629 + .irq_set_type = tegra_gpio_irq_set_type, 630 + #ifdef CONFIG_PM_SLEEP 631 + .irq_set_wake = tegra_gpio_irq_set_wake, 632 + #endif 633 + .irq_print_chip = tegra_gpio_irq_print_chip, 634 + .irq_request_resources = tegra_gpio_irq_request_resources, 635 + .irq_release_resources = tegra_gpio_irq_release_resources, 636 + .flags = IRQCHIP_IMMUTABLE, 637 + }; 638 + 603 639 #ifdef CONFIG_DEBUG_FS 604 640 605 641 #include <linux/debugfs.h> 606 - #include <linux/seq_file.h> 607 642 608 643 static int tegra_dbg_gpio_show(struct seq_file *s, void *unused) 609 644 { ··· 728 689 tgi->gc.ngpio = tgi->bank_count * 32; 729 690 tgi->gc.parent = &pdev->dev; 730 691 731 - tgi->ic.name = "GPIO"; 732 - tgi->ic.irq_ack = tegra_gpio_irq_ack; 733 - tgi->ic.irq_mask = tegra_gpio_irq_mask; 734 - tgi->ic.irq_unmask = tegra_gpio_irq_unmask; 735 - tgi->ic.irq_set_type = tegra_gpio_irq_set_type; 736 - tgi->ic.irq_shutdown = tegra_gpio_irq_shutdown; 737 - #ifdef CONFIG_PM_SLEEP 738 - tgi->ic.irq_set_wake = tegra_gpio_irq_set_wake; 739 - #endif 740 - tgi->ic.irq_request_resources = tegra_gpio_irq_request_resources; 741 - tgi->ic.irq_release_resources = tegra_gpio_irq_release_resources; 742 - 743 692 platform_set_drvdata(pdev, tgi); 744 693 745 694 if (tgi->soc->debounce_supported) ··· 760 733 } 761 734 762 735 irq = &tgi->gc.irq; 763 - irq->chip = &tgi->ic; 764 736 irq->fwnode = of_node_to_fwnode(pdev->dev.of_node); 765 737 
irq->child_to_parent_hwirq = tegra_gpio_child_to_parent_hwirq; 766 738 irq->populate_parent_alloc_arg = tegra_gpio_populate_parent_fwspec; ··· 778 752 if (!irq->parent_domain) 779 753 return -EPROBE_DEFER; 780 754 781 - tgi->ic.irq_set_affinity = tegra_gpio_irq_set_affinity; 755 + gpio_irq_chip_set_chip(irq, &tegra210_gpio_irq_chip); 756 + } else { 757 + gpio_irq_chip_set_chip(irq, &tegra_gpio_irq_chip); 782 758 } 783 759 784 760 tgi->regs = devm_platform_ioremap_resource(pdev, 0);
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 510 510 struct ttm_tt *ttm = bo->tbo.ttm; 511 511 int ret; 512 512 513 + if (WARN_ON(ttm->num_pages != src_ttm->num_pages)) 514 + return -EINVAL; 515 + 513 516 ttm->sg = kmalloc(sizeof(*ttm->sg), GFP_KERNEL); 514 517 if (unlikely(!ttm->sg)) 515 518 return -ENOMEM; 516 - 517 - if (WARN_ON(ttm->num_pages != src_ttm->num_pages)) 518 - return -EINVAL; 519 519 520 520 /* Same sequence as in amdgpu_ttm_tt_pin_userptr */ 521 521 ret = sg_alloc_table_from_pages(ttm->sg, src_ttm->pages,
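Moving the size check ahead of the kmalloc() closes a small leak: when the WARN_ON fired after the allocation, ttm->sg was never freed. The general rule is to validate inputs before acquiring resources, or else release them on the failing branch; a minimal sketch with a hypothetical context struct:

/* Sketch: check first, allocate second, so the error path owns
 * nothing it would have to free. */
static int copy_example(struct ctx *c, size_t n, size_t expected)
{
	if (WARN_ON(n != expected))	/* validate before allocating */
		return -EINVAL;

	c->sg = kmalloc(n, GFP_KERNEL);
	if (!c->sg)
		return -ENOMEM;

	return 0;
}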
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
··· 326 326 if (r) 327 327 return r; 328 328 329 - ctx->stable_pstate = current_stable_pstate; 329 + if (mgr->adev->pm.stable_pstate_ctx) 330 + ctx->stable_pstate = mgr->adev->pm.stable_pstate_ctx->stable_pstate; 331 + else 332 + ctx->stable_pstate = current_stable_pstate; 330 333 331 334 return 0; 332 335 }
+17 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3210 3210 return r; 3211 3211 } 3212 3212 adev->ip_blocks[i].status.hw = true; 3213 + 3214 + if (adev->in_s0ix && adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) { 3215 + /* disable gfxoff for IP resume. The gfxoff will be re-enabled in 3216 + * amdgpu_device_resume() after IP resume. 3217 + */ 3218 + amdgpu_gfx_off_ctrl(adev, false); 3219 + DRM_DEBUG("will disable gfxoff for re-initializing other blocks\n"); 3220 + } 3221 + 3213 3222 } 3214 3223 3215 3224 return 0; ··· 4194 4185 /* Make sure IB tests flushed */ 4195 4186 flush_delayed_work(&adev->delayed_init_work); 4196 4187 4188 + if (adev->in_s0ix) { 4189 + /* re-enable gfxoff after IP resume. This re-enables gfxoff after 4190 + * it was disabled for IP resume in amdgpu_device_ip_resume_phase2(). 4191 + */ 4192 + amdgpu_gfx_off_ctrl(adev, true); 4193 + DRM_DEBUG("will enable gfxoff for the mission mode\n"); 4194 + } 4197 4195 if (fbcon) 4198 4196 drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, false); 4199 4197 ··· 5397 5381 drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res); 5398 5382 } 5399 5383 5400 - if (adev->enable_mes) 5384 + if (adev->enable_mes && adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3)) 5401 5385 amdgpu_mes_self_test(tmp_adev); 5402 5386 5403 5387 if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) {
+13
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 344 344 fw_info->ver = adev->mes.ucode_fw_version[1]; 345 345 fw_info->feature = 0; 346 346 break; 347 + case AMDGPU_INFO_FW_IMU: 348 + fw_info->ver = adev->gfx.imu_fw_version; 349 + fw_info->feature = 0; 350 + break; 347 351 default: 348 352 return -EINVAL; 349 353 } ··· 1523 1519 seq_printf(m, "MEC2 feature version: %u, firmware version: 0x%08x\n", 1524 1520 fw_info.feature, fw_info.ver); 1525 1521 } 1522 + 1523 + /* IMU */ 1524 + query_fw.fw_type = AMDGPU_INFO_FW_IMU; 1525 + query_fw.index = 0; 1526 + ret = amdgpu_firmware_info(&fw_info, &query_fw, adev); 1527 + if (ret) 1528 + return ret; 1529 + seq_printf(m, "IMU feature version: %u, firmware version: 0x%08x\n", 1530 + fw_info.feature, fw_info.ver); 1526 1531 1527 1532 /* PSP SOS */ 1528 1533 query_fw.fw_type = AMDGPU_INFO_FW_SOS;
+3 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
··· 698 698 FW_VERSION_ATTR(rlc_srls_fw_version, 0444, gfx.rlc_srls_fw_version); 699 699 FW_VERSION_ATTR(mec_fw_version, 0444, gfx.mec_fw_version); 700 700 FW_VERSION_ATTR(mec2_fw_version, 0444, gfx.mec2_fw_version); 701 + FW_VERSION_ATTR(imu_fw_version, 0444, gfx.imu_fw_version); 701 702 FW_VERSION_ATTR(sos_fw_version, 0444, psp.sos.fw_version); 702 703 FW_VERSION_ATTR(asd_fw_version, 0444, psp.asd_context.bin_desc.fw_version); 703 704 FW_VERSION_ATTR(ta_ras_fw_version, 0444, psp.ras_context.context.bin_desc.fw_version); ··· 720 719 &dev_attr_ta_ras_fw_version.attr, &dev_attr_ta_xgmi_fw_version.attr, 721 720 &dev_attr_smc_fw_version.attr, &dev_attr_sdma_fw_version.attr, 722 721 &dev_attr_sdma2_fw_version.attr, &dev_attr_vcn_fw_version.attr, 723 - &dev_attr_dmcu_fw_version.attr, NULL 722 + &dev_attr_dmcu_fw_version.attr, &dev_attr_imu_fw_version.attr, 723 + NULL 724 724 }; 725 725 726 726 static const struct attribute_group fw_attr_group = {
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
··· 547 547 POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_RLC_SRLS, adev->gfx.rlc_srls_fw_version); 548 548 POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_MEC, adev->gfx.mec_fw_version); 549 549 POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_MEC2, adev->gfx.mec2_fw_version); 550 + POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_IMU, adev->gfx.imu_fw_version); 550 551 POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_SOS, adev->psp.sos.fw_version); 551 552 POPULATE_UCODE_INFO(vf2pf_info, AMD_SRIOV_UCODE_ID_ASD, 552 553 adev->psp.asd_context.bin_desc.fw_version);
+1
drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h
··· 70 70 AMD_SRIOV_UCODE_ID_RLC_SRLS, 71 71 AMD_SRIOV_UCODE_ID_MEC, 72 72 AMD_SRIOV_UCODE_ID_MEC2, 73 + AMD_SRIOV_UCODE_ID_IMU, 73 74 AMD_SRIOV_UCODE_ID_SOS, 74 75 AMD_SRIOV_UCODE_ID_ASD, 75 76 AMD_SRIOV_UCODE_ID_TA_RAS,
+1
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 5051 5051 switch (adev->ip_versions[GC_HWIP][0]) { 5052 5052 case IP_VERSION(11, 0, 0): 5053 5053 case IP_VERSION(11, 0, 2): 5054 + case IP_VERSION(11, 0, 3): 5054 5055 amdgpu_gfx_off_ctrl(adev, enable); 5055 5056 break; 5056 5057 case IP_VERSION(11, 0, 1):
+8 -1
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 98 98 struct amdgpu_device *adev = mes->adev; 99 99 struct amdgpu_ring *ring = &mes->ring; 100 100 unsigned long flags; 101 + signed long timeout = adev->usec_timeout; 101 102 103 + if (amdgpu_emu_mode) { 104 + timeout *= 100; 105 + } else if (amdgpu_sriov_vf(adev)) { 106 + /* Worst case in sriov where all other 15 VF timeout, each VF needs about 600ms */ 107 + timeout = 15 * 600 * 1000; 108 + } 102 109 BUG_ON(size % 4 != 0); 103 110 104 111 spin_lock_irqsave(&mes->ring_lock, flags); ··· 125 118 DRM_DEBUG("MES msg=%d was emitted\n", x_pkt->header.opcode); 126 119 127 120 r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq, 128 - adev->usec_timeout * (amdgpu_emu_mode ? 100 : 1)); 121 + timeout); 129 122 if (r < 1) { 130 123 DRM_ERROR("MES failed to response msg=%d\n", 131 124 x_pkt->header.opcode);
+8 -20
drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
··· 32 32 #include "gc/gc_10_1_0_offset.h" 33 33 #include "soc15_common.h" 34 34 35 - #define mmMM_ATC_L2_MISC_CG_Sienna_Cichlid 0x064d 36 - #define mmMM_ATC_L2_MISC_CG_Sienna_Cichlid_BASE_IDX 0 37 35 #define mmDAGB0_CNTL_MISC2_Sienna_Cichlid 0x0070 38 36 #define mmDAGB0_CNTL_MISC2_Sienna_Cichlid_BASE_IDX 0 39 37 ··· 572 574 case IP_VERSION(2, 1, 0): 573 575 case IP_VERSION(2, 1, 1): 574 576 case IP_VERSION(2, 1, 2): 575 - def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid); 576 577 def1 = data1 = RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2_Sienna_Cichlid); 577 578 break; 578 579 default: ··· 605 608 case IP_VERSION(2, 1, 0): 606 609 case IP_VERSION(2, 1, 1): 607 610 case IP_VERSION(2, 1, 2): 608 - if (def != data) 609 - WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid, data); 610 611 if (def1 != data1) 611 612 WREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2_Sienna_Cichlid, data1); 612 613 break; ··· 629 634 case IP_VERSION(2, 1, 0): 630 635 case IP_VERSION(2, 1, 1): 631 636 case IP_VERSION(2, 1, 2): 632 - def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid); 633 - break; 637 + /* There is no ATCL2 in MMHUB for 2.1.x */ 638 + return; 634 639 default: 635 640 def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG); 636 641 break; ··· 641 646 else 642 647 data &= ~MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK; 643 648 644 - if (def != data) { 645 - switch (adev->ip_versions[MMHUB_HWIP][0]) { 646 - case IP_VERSION(2, 1, 0): 647 - case IP_VERSION(2, 1, 1): 648 - case IP_VERSION(2, 1, 2): 649 - WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid, data); 650 - break; 651 - default: 652 - WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG, data); 653 - break; 654 - } 655 - } 649 + if (def != data) 650 + WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG, data); 656 651 } 657 652 658 653 static int mmhub_v2_0_set_clockgating(struct amdgpu_device *adev, ··· 680 695 case IP_VERSION(2, 1, 0): 681 696 case IP_VERSION(2, 1, 1): 682 697 case IP_VERSION(2, 1, 2): 683 - data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG_Sienna_Cichlid); 698 + /* There is no ATCL2 in MMHUB for 2.1.x. Keep the status 699 + * based on DAGB 700 + */ 701 + data = MM_ATC_L2_MISC_CG__ENABLE_MASK; 684 702 data1 = RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2_Sienna_Cichlid); 685 703 break; 686 704 default:
+104 -2
drivers/gpu/drm/amd/amdkfd/kfd_crat.c
··· 795 795 }, 796 796 }; 797 797 798 + static struct kfd_gpu_cache_info gfx1037_cache_info[] = { 799 + { 800 + /* TCP L1 Cache per CU */ 801 + .cache_size = 16, 802 + .cache_level = 1, 803 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 804 + CRAT_CACHE_FLAGS_DATA_CACHE | 805 + CRAT_CACHE_FLAGS_SIMD_CACHE), 806 + .num_cu_shared = 1, 807 + }, 808 + { 809 + /* Scalar L1 Instruction Cache per SQC */ 810 + .cache_size = 32, 811 + .cache_level = 1, 812 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 813 + CRAT_CACHE_FLAGS_INST_CACHE | 814 + CRAT_CACHE_FLAGS_SIMD_CACHE), 815 + .num_cu_shared = 2, 816 + }, 817 + { 818 + /* Scalar L1 Data Cache per SQC */ 819 + .cache_size = 16, 820 + .cache_level = 1, 821 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 822 + CRAT_CACHE_FLAGS_DATA_CACHE | 823 + CRAT_CACHE_FLAGS_SIMD_CACHE), 824 + .num_cu_shared = 2, 825 + }, 826 + { 827 + /* GL1 Data Cache per SA */ 828 + .cache_size = 128, 829 + .cache_level = 1, 830 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 831 + CRAT_CACHE_FLAGS_DATA_CACHE | 832 + CRAT_CACHE_FLAGS_SIMD_CACHE), 833 + .num_cu_shared = 2, 834 + }, 835 + { 836 + /* L2 Data Cache per GPU (Total Tex Cache) */ 837 + .cache_size = 256, 838 + .cache_level = 2, 839 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 840 + CRAT_CACHE_FLAGS_DATA_CACHE | 841 + CRAT_CACHE_FLAGS_SIMD_CACHE), 842 + .num_cu_shared = 2, 843 + }, 844 + }; 845 + 846 + static struct kfd_gpu_cache_info gc_10_3_6_cache_info[] = { 847 + { 848 + /* TCP L1 Cache per CU */ 849 + .cache_size = 16, 850 + .cache_level = 1, 851 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 852 + CRAT_CACHE_FLAGS_DATA_CACHE | 853 + CRAT_CACHE_FLAGS_SIMD_CACHE), 854 + .num_cu_shared = 1, 855 + }, 856 + { 857 + /* Scalar L1 Instruction Cache per SQC */ 858 + .cache_size = 32, 859 + .cache_level = 1, 860 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 861 + CRAT_CACHE_FLAGS_INST_CACHE | 862 + CRAT_CACHE_FLAGS_SIMD_CACHE), 863 + .num_cu_shared = 2, 864 + }, 865 + { 866 + /* Scalar L1 Data Cache per SQC */ 867 + .cache_size = 16, 868 + .cache_level = 1, 869 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 870 + CRAT_CACHE_FLAGS_DATA_CACHE | 871 + CRAT_CACHE_FLAGS_SIMD_CACHE), 872 + .num_cu_shared = 2, 873 + }, 874 + { 875 + /* GL1 Data Cache per SA */ 876 + .cache_size = 128, 877 + .cache_level = 1, 878 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 879 + CRAT_CACHE_FLAGS_DATA_CACHE | 880 + CRAT_CACHE_FLAGS_SIMD_CACHE), 881 + .num_cu_shared = 2, 882 + }, 883 + { 884 + /* L2 Data Cache per GPU (Total Tex Cache) */ 885 + .cache_size = 256, 886 + .cache_level = 2, 887 + .flags = (CRAT_CACHE_FLAGS_ENABLED | 888 + CRAT_CACHE_FLAGS_DATA_CACHE | 889 + CRAT_CACHE_FLAGS_SIMD_CACHE), 890 + .num_cu_shared = 2, 891 + }, 892 + }; 893 + 798 894 static void kfd_populated_cu_info_cpu(struct kfd_topology_device *dev, 799 895 struct crat_subtype_computeunit *cu) 800 896 { ··· 1610 1514 num_of_cache_types = ARRAY_SIZE(beige_goby_cache_info); 1611 1515 break; 1612 1516 case IP_VERSION(10, 3, 3): 1613 - case IP_VERSION(10, 3, 6): /* TODO: Double check these on production silicon */ 1614 - case IP_VERSION(10, 3, 7): /* TODO: Double check these on production silicon */ 1615 1517 pcache_info = yellow_carp_cache_info; 1616 1518 num_of_cache_types = ARRAY_SIZE(yellow_carp_cache_info); 1519 + break; 1520 + case IP_VERSION(10, 3, 6): 1521 + pcache_info = gc_10_3_6_cache_info; 1522 + num_of_cache_types = ARRAY_SIZE(gc_10_3_6_cache_info); 1523 + break; 1524 + case IP_VERSION(10, 3, 7): 1525 + pcache_info = gfx1037_cache_info; 1526 + num_of_cache_types = ARRAY_SIZE(gfx1037_cache_info); 1617 1527 break; 1618 1528 case 
IP_VERSION(11, 0, 0): 1619 1529 case IP_VERSION(11, 0, 1):
+7 -43
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 1369 1369 { 1370 1370 struct amdgpu_device *adev = drm_to_adev(plane->dev); 1371 1371 const struct drm_format_info *info = drm_format_info(format); 1372 - struct hw_asic_id asic_id = adev->dm.dc->ctx->asic_id; 1372 + int i; 1373 1373 1374 1374 enum dm_micro_swizzle microtile = modifier_gfx9_swizzle_mode(modifier) & 3; 1375 1375 ··· 1386 1386 return true; 1387 1387 } 1388 1388 1389 - /* check if swizzle mode is supported by this version of DCN */ 1390 - switch (asic_id.chip_family) { 1391 - case FAMILY_SI: 1392 - case FAMILY_CI: 1393 - case FAMILY_KV: 1394 - case FAMILY_CZ: 1395 - case FAMILY_VI: 1396 - /* asics before AI does not have modifier support */ 1397 - return false; 1398 - case FAMILY_AI: 1399 - case FAMILY_RV: 1400 - case FAMILY_NV: 1401 - case FAMILY_VGH: 1402 - case FAMILY_YELLOW_CARP: 1403 - case AMDGPU_FAMILY_GC_10_3_6: 1404 - case AMDGPU_FAMILY_GC_10_3_7: 1405 - switch (AMD_FMT_MOD_GET(TILE, modifier)) { 1406 - case AMD_FMT_MOD_TILE_GFX9_64K_R_X: 1407 - case AMD_FMT_MOD_TILE_GFX9_64K_D_X: 1408 - case AMD_FMT_MOD_TILE_GFX9_64K_S_X: 1409 - case AMD_FMT_MOD_TILE_GFX9_64K_D: 1410 - return true; 1411 - default: 1412 - return false; 1413 - } 1414 - break; 1415 - case AMDGPU_FAMILY_GC_11_0_0: 1416 - case AMDGPU_FAMILY_GC_11_0_1: 1417 - switch (AMD_FMT_MOD_GET(TILE, modifier)) { 1418 - case AMD_FMT_MOD_TILE_GFX11_256K_R_X: 1419 - case AMD_FMT_MOD_TILE_GFX9_64K_R_X: 1420 - case AMD_FMT_MOD_TILE_GFX9_64K_D_X: 1421 - case AMD_FMT_MOD_TILE_GFX9_64K_S_X: 1422 - case AMD_FMT_MOD_TILE_GFX9_64K_D: 1423 - return true; 1424 - default: 1425 - return false; 1426 - } 1427 - break; 1428 - default: 1429 - ASSERT(0); /* Unknown asic */ 1430 - break; 1389 + /* Check that the modifier is on the list of the plane's supported modifiers. */ 1390 + for (i = 0; i < plane->modifier_count; i++) { 1391 + if (modifier == plane->modifiers[i]) 1392 + break; 1431 1393 } 1394 + if (i == plane->modifier_count) 1395 + return false; 1432 1396 1433 1397 /* 1434 1398 * For D swizzle the canonical modifier depends on the bpp, so check
+1 -11
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 1270 1270 lock, 1271 1271 &hw_locks, 1272 1272 &inst_flags); 1273 - } else if (pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_MAIN) { 1274 - union dmub_inbox0_cmd_lock_hw hw_lock_cmd = { 0 }; 1275 - hw_lock_cmd.bits.command_code = DMUB_INBOX0_CMD__HW_LOCK; 1276 - hw_lock_cmd.bits.hw_lock_client = HW_LOCK_CLIENT_DRIVER; 1277 - hw_lock_cmd.bits.lock_pipe = 1; 1278 - hw_lock_cmd.bits.otg_inst = pipe->stream_res.tg->inst; 1279 - hw_lock_cmd.bits.lock = lock; 1280 - if (!lock) 1281 - hw_lock_cmd.bits.should_release = 1; 1282 - dmub_hw_lock_mgr_inbox0_cmd(dc->ctx->dmub_srv, hw_lock_cmd); 1283 1273 } else if (pipe->plane_state != NULL && pipe->plane_state->triplebuffer_flips) { 1284 1274 if (lock) 1285 1275 pipe->stream_res.tg->funcs->triplebuffer_lock(pipe->stream_res.tg); ··· 1846 1856 1847 1857 for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000 1848 1858 && hubp->funcs->hubp_is_flip_pending(hubp); j++) 1849 - mdelay(1); 1859 + udelay(1); 1850 1860 } 1851 1861 } 1852 1862
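One detail worth spelling out in the dcn20_hwseq.c hunk: the flip-pending wait loop runs TIMEOUT_FOR_PIPE_ENABLE_MS * 1000 iterations, so its bound is a count of microseconds. With mdelay(1) each pass waited a full millisecond, stretching the worst case to roughly TIMEOUT_FOR_PIPE_ENABLE_MS seconds, a thousand times the intended budget; udelay(1) restores the one-microsecond step the bound assumes.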
+1 -1
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
··· 200 200 struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i]; 201 201 202 202 if (!pipe->stream) 203 - return false; 203 + continue; 204 204 205 205 if (!pipe->plane_state) 206 206 return false;
+81 -30
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu13_driver_if_v13_0_0.h
··· 25 25 #define SMU13_DRIVER_IF_V13_0_0_H 26 26 27 27 //Increment this version if SkuTable_t or BoardTable_t change 28 - #define PPTABLE_VERSION 0x24 28 + #define PPTABLE_VERSION 0x26 29 29 30 30 #define NUM_GFXCLK_DPM_LEVELS 16 31 31 #define NUM_SOCCLK_DPM_LEVELS 8 ··· 109 109 #define FEATURE_SPARE_63_BIT 63 110 110 #define NUM_FEATURES 64 111 111 112 + #define ALLOWED_FEATURE_CTRL_DEFAULT 0xFFFFFFFFFFFFFFFFULL 113 + #define ALLOWED_FEATURE_CTRL_SCPM ((1 << FEATURE_DPM_GFXCLK_BIT) | \ 114 + (1 << FEATURE_DPM_GFX_POWER_OPTIMIZER_BIT) | \ 115 + (1 << FEATURE_DPM_UCLK_BIT) | \ 116 + (1 << FEATURE_DPM_FCLK_BIT) | \ 117 + (1 << FEATURE_DPM_SOCCLK_BIT) | \ 118 + (1 << FEATURE_DPM_MP0CLK_BIT) | \ 119 + (1 << FEATURE_DPM_LINK_BIT) | \ 120 + (1 << FEATURE_DPM_DCN_BIT) | \ 121 + (1 << FEATURE_DS_GFXCLK_BIT) | \ 122 + (1 << FEATURE_DS_SOCCLK_BIT) | \ 123 + (1 << FEATURE_DS_FCLK_BIT) | \ 124 + (1 << FEATURE_DS_LCLK_BIT) | \ 125 + (1 << FEATURE_DS_DCFCLK_BIT) | \ 126 + (1 << FEATURE_DS_UCLK_BIT)) 127 + 112 128 //For use with feature control messages 113 129 typedef enum { 114 130 FEATURE_PWR_ALL, ··· 149 133 #define DEBUG_OVERRIDE_DISABLE_DFLL 0x00000200 150 134 #define DEBUG_OVERRIDE_ENABLE_RLC_VF_BRINGUP_MODE 0x00000400 151 135 #define DEBUG_OVERRIDE_DFLL_MASTER_MODE 0x00000800 136 + #define DEBUG_OVERRIDE_ENABLE_PROFILING_MODE 0x00001000 152 137 153 138 // VR Mapping Bit Defines 154 139 #define VR_MAPPING_VR_SELECT_MASK 0x01 ··· 279 262 } I2cControllerPort_e; 280 263 281 264 typedef enum { 282 - I2C_CONTROLLER_NAME_VR_GFX = 0, 283 - I2C_CONTROLLER_NAME_VR_SOC, 284 - I2C_CONTROLLER_NAME_VR_VMEMP, 285 - I2C_CONTROLLER_NAME_VR_VDDIO, 286 - I2C_CONTROLLER_NAME_LIQUID0, 287 - I2C_CONTROLLER_NAME_LIQUID1, 288 - I2C_CONTROLLER_NAME_PLX, 289 - I2C_CONTROLLER_NAME_OTHER, 290 - I2C_CONTROLLER_NAME_COUNT, 265 + I2C_CONTROLLER_NAME_VR_GFX = 0, 266 + I2C_CONTROLLER_NAME_VR_SOC, 267 + I2C_CONTROLLER_NAME_VR_VMEMP, 268 + I2C_CONTROLLER_NAME_VR_VDDIO, 269 + I2C_CONTROLLER_NAME_LIQUID0, 270 + I2C_CONTROLLER_NAME_LIQUID1, 271 + I2C_CONTROLLER_NAME_PLX, 272 + I2C_CONTROLLER_NAME_FAN_INTAKE, 273 + I2C_CONTROLLER_NAME_COUNT, 291 274 } I2cControllerName_e; 292 275 293 276 typedef enum { ··· 299 282 I2C_CONTROLLER_THROTTLER_LIQUID0, 300 283 I2C_CONTROLLER_THROTTLER_LIQUID1, 301 284 I2C_CONTROLLER_THROTTLER_PLX, 285 + I2C_CONTROLLER_THROTTLER_FAN_INTAKE, 302 286 I2C_CONTROLLER_THROTTLER_INA3221, 303 287 I2C_CONTROLLER_THROTTLER_COUNT, 304 288 } I2cControllerThrottler_e; 305 289 306 290 typedef enum { 307 - I2C_CONTROLLER_PROTOCOL_VR_XPDE132G5, 308 - I2C_CONTROLLER_PROTOCOL_VR_IR35217, 309 - I2C_CONTROLLER_PROTOCOL_TMP_TMP102A, 310 - I2C_CONTROLLER_PROTOCOL_INA3221, 311 - I2C_CONTROLLER_PROTOCOL_COUNT, 291 + I2C_CONTROLLER_PROTOCOL_VR_XPDE132G5, 292 + I2C_CONTROLLER_PROTOCOL_VR_IR35217, 293 + I2C_CONTROLLER_PROTOCOL_TMP_MAX31875, 294 + I2C_CONTROLLER_PROTOCOL_INA3221, 295 + I2C_CONTROLLER_PROTOCOL_COUNT, 312 296 } I2cControllerProtocol_e; 313 297 314 298 typedef struct { ··· 676 658 677 659 #define PP_NUM_OD_VF_CURVE_POINTS PP_NUM_RTAVFS_PWL_ZONES + 1 678 660 661 + typedef enum { 662 + FAN_MODE_AUTO = 0, 663 + FAN_MODE_MANUAL_LINEAR, 664 + } FanMode_e; 679 665 680 666 typedef struct { 681 667 uint32_t FeatureCtrlMask; 682 668 683 669 //Voltage control 684 670 int16_t VoltageOffsetPerZoneBoundary[PP_NUM_OD_VF_CURVE_POINTS]; 685 - uint16_t reserved[2]; 671 + uint16_t VddGfxVmax; // in mV 672 + 673 + uint8_t IdlePwrSavingFeaturesCtrl; 674 + uint8_t RuntimePwrSavingFeaturesCtrl; 686 675 687 676 //Frequency changes 688 677 
int16_t GfxclkFmin; // MHz ··· 699 674 700 675 //PPT 701 676 int16_t Ppt; // % 702 - int16_t reserved1; 677 + int16_t Tdc; 703 678 704 679 //Fan control 705 680 uint8_t FanLinearPwmPoints[NUM_OD_FAN_MAX_POINTS]; ··· 726 701 uint32_t FeatureCtrlMask; 727 702 728 703 int16_t VoltageOffsetPerZoneBoundary; 729 - uint16_t reserved[2]; 704 + uint16_t VddGfxVmax; // in mV 730 705 731 - uint16_t GfxclkFmin; // MHz 732 - uint16_t GfxclkFmax; // MHz 706 + uint8_t IdlePwrSavingFeaturesCtrl; 707 + uint8_t RuntimePwrSavingFeaturesCtrl; 708 + 709 + int16_t GfxclkFmin; // MHz 710 + int16_t GfxclkFmax; // MHz 733 711 uint16_t UclkFmin; // MHz 734 712 uint16_t UclkFmax; // MHz 735 713 736 714 //PPT 737 715 int16_t Ppt; // % 738 - int16_t reserved1; 716 + int16_t Tdc; 739 717 740 718 uint8_t FanLinearPwmPoints; 741 719 uint8_t FanLinearTempPoints; ··· 885 857 uint16_t FanStartTempMin; 886 858 uint16_t FanStartTempMax; 887 859 888 - uint32_t Spare[12]; 860 + uint16_t PowerMinPpt0[POWER_SOURCE_COUNT]; 861 + uint32_t Spare[11]; 889 862 890 863 } MsgLimits_t; 891 864 ··· 1070 1041 uint32_t GfxoffSpare[15]; 1071 1042 1072 1043 // GFX GPO 1073 - uint32_t GfxGpoSpare[16]; 1044 + uint32_t DfllBtcMasterScalerM; 1045 + int32_t DfllBtcMasterScalerB; 1046 + uint32_t DfllBtcSlaveScalerM; 1047 + int32_t DfllBtcSlaveScalerB; 1048 + 1049 + uint32_t DfllPccAsWaitCtrl; //GDFLL_AS_WAIT_CTRL_PCC register value to be passed to RLC msg 1050 + uint32_t DfllPccAsStepCtrl; //GDFLL_AS_STEP_CTRL_PCC register value to be passed to RLC msg 1051 + 1052 + uint32_t DfllL2FrequencyBoostM; //Unitless (float) 1053 + uint32_t DfllL2FrequencyBoostB; //In MHz (integer) 1054 + uint32_t GfxGpoSpare[8]; 1074 1055 1075 1056 // GFX DCS 1076 1057 ··· 1153 1114 uint16_t IntakeTempHighIntakeAcousticLimit; 1154 1115 uint16_t IntakeTempAcouticLimitReleaseRate; 1155 1116 1156 - uint16_t FanStalledTempLimitOffset; 1117 + int16_t FanAbnormalTempLimitOffset; 1157 1118 uint16_t FanStalledTriggerRpm; 1158 - uint16_t FanAbnormalTriggerRpm; 1159 - uint16_t FanPadding; 1119 + uint16_t FanAbnormalTriggerRpmCoeff; 1120 + uint16_t FanAbnormalDetectionEnable; 1160 1121 1161 - uint32_t FanSpare[14]; 1122 + uint8_t FanIntakeSensorSupport; 1123 + uint8_t FanIntakePadding[3]; 1124 + uint32_t FanSpare[13]; 1162 1125 1163 1126 // SECTION: VDD_GFX AVFS 1164 1127 ··· 1239 1198 int16_t TotalBoardPowerM; 1240 1199 int16_t TotalBoardPowerB; 1241 1200 1201 + //PMFW-11158 1202 + QuadraticInt_t qFeffCoeffGameClock[POWER_SOURCE_COUNT]; 1203 + QuadraticInt_t qFeffCoeffBaseClock[POWER_SOURCE_COUNT]; 1204 + QuadraticInt_t qFeffCoeffBoostClock[POWER_SOURCE_COUNT]; 1205 + 1242 1206 // SECTION: Sku Reserved 1243 - uint32_t Spare[61]; 1207 + uint32_t Spare[43]; 1244 1208 1245 1209 // Padding for MMHUB - do not modify this 1246 1210 uint32_t MmHubPadding[8]; ··· 1334 1288 uint32_t PostVoltageSetBacoDelay; // in microseconds. Amount of time FW will wait after power good is established or PSI0 command is issued 1335 1289 uint32_t BacoEntryDelay; // in milliseconds. 
Amount of time FW will wait to trigger BACO entry after receiving entry notification from OS 1336 1290 1291 + uint8_t FuseWritePowerMuxPresent; 1292 + uint8_t FuseWritePadding[3]; 1293 + 1337 1294 // SECTION: Board Reserved 1338 - uint32_t BoardSpare[64]; 1295 + uint32_t BoardSpare[63]; 1339 1296 1340 1297 // SECTION: Structure Padding 1341 1298 ··· 1430 1381 uint16_t AverageTotalBoardPower; 1431 1382 1432 1383 uint16_t AvgTemperature[TEMP_COUNT]; 1433 - uint16_t TempPadding; 1384 + uint16_t AvgTemperatureFanIntake; 1434 1385 1435 1386 uint8_t PcieRate ; 1436 1387 uint8_t PcieWidth ; ··· 1599 1550 #define IH_INTERRUPT_CONTEXT_ID_AUDIO_D0 0x5 1600 1551 #define IH_INTERRUPT_CONTEXT_ID_AUDIO_D3 0x6 1601 1552 #define IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING 0x7 1553 + #define IH_INTERRUPT_CONTEXT_ID_FAN_ABNORMAL 0x8 1554 + #define IH_INTERRUPT_CONTEXT_ID_FAN_RECOVERY 0x9 1602 1555 1603 1556 #endif
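Editor's note: the new 64-bit ALLOWED_FEATURE_CTRL_SCPM mask above is assembled from per-feature bit indices with plain `(1 << bit)`, which is well-defined only while every listed bit stays below 31; the DPM/DS bits used here qualify, though `1ULL` shifts would be more future-proof. A minimal sketch of testing a bit in one of these 64-bit feature-control masks (the helper name is hypothetical, not part of the header):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper (not part of the header): test one feature bit in
 * a 64-bit feature-control mask such as ALLOWED_FEATURE_CTRL_DEFAULT.
 * 1ULL keeps the shift well-defined even for bit indices 32..63. */
static bool feature_bit_set(uint64_t mask, unsigned int bit)
{
	return mask & (1ULL << bit);
}
```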
+1 -1
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 30 30 #define SMU13_DRIVER_IF_VERSION_ALDE 0x08 31 31 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x07 32 32 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04 33 - #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0 0x30 33 + #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10 0x32 34 34 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x2C 35 35 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_10 0x1D 36 36
+3 -4
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 289 289 smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_ALDE; 290 290 break; 291 291 case IP_VERSION(13, 0, 0): 292 - smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0; 292 + case IP_VERSION(13, 0, 10): 293 + smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10; 293 294 break; 294 295 case IP_VERSION(13, 0, 7): 295 296 smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_7; ··· 305 304 break; 306 305 case IP_VERSION(13, 0, 5): 307 306 smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_5; 308 - break; 309 - case IP_VERSION(13, 0, 10): 310 - smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_10; 311 307 break; 312 308 default: 313 309 dev_err(adev->dev, "smu unsupported IP version: 0x%x.\n", ··· 840 842 case IP_VERSION(13, 0, 5): 841 843 case IP_VERSION(13, 0, 7): 842 844 case IP_VERSION(13, 0, 8): 845 + case IP_VERSION(13, 0, 10): 843 846 if (!(adev->pm.pp_feature & PP_GFXOFF_MASK)) 844 847 return 0; 845 848 if (enable)
+23 -2
drivers/gpu/drm/bridge/parade-ps8640.c
··· 105 105 struct gpio_desc *gpio_powerdown; 106 106 struct device_link *link; 107 107 bool pre_enabled; 108 + bool need_post_hpd_delay; 108 109 }; 109 110 110 111 static const struct regmap_config ps8640_regmap_config[] = { ··· 174 173 { 175 174 struct regmap *map = ps_bridge->regmap[PAGE2_TOP_CNTL]; 176 175 int status; 176 + int ret; 177 177 178 178 /* 179 179 * Apparently something about the firmware in the chip signals that 180 180 * HPD goes high by reporting GPIO9 as high (even though HPD isn't 181 181 * actually connected to GPIO9). 182 182 */ 183 - return regmap_read_poll_timeout(map, PAGE2_GPIO_H, status, 184 - status & PS_GPIO9, wait_us / 10, wait_us); 183 + ret = regmap_read_poll_timeout(map, PAGE2_GPIO_H, status, 184 + status & PS_GPIO9, wait_us / 10, wait_us); 185 + 186 + /* 187 + * The first time we see HPD go high after a reset we delay an extra 188 + * 50 ms. The best guess is that the MCU is doing "stuff" during this 189 + * time (maybe talking to the panel) and we don't want to interrupt it. 190 + * 191 + * No locking is done around "need_post_hpd_delay". If we're here we 192 + * know we're holding a PM Runtime reference and the only other place 193 + * that touches this is PM Runtime resume. 194 + */ 195 + if (!ret && ps_bridge->need_post_hpd_delay) { 196 + ps_bridge->need_post_hpd_delay = false; 197 + msleep(50); 198 + } 199 + 200 + return ret; 185 201 } 186 202 187 203 static int ps8640_wait_hpd_asserted(struct drm_dp_aux *aux, unsigned long wait_us) ··· 398 380 gpiod_set_value(ps_bridge->gpio_reset, 1); 399 381 msleep(50); 400 382 gpiod_set_value(ps_bridge->gpio_reset, 0); 383 + 384 + /* We just reset things, so we need a delay after the first HPD */ 385 + ps_bridge->need_post_hpd_delay = true; 401 386 402 387 /* 403 388 * Mystery 200 ms delay for the "MCU to be ready". It's unclear if
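Editor's note: the ps8640 fix turns the raw poll into a checked one and adds a one-shot settle delay the first time HPD asserts after a reset. A minimal sketch of the same pattern with a hypothetical register and bit (regmap_read_poll_timeout() returns 0 on success and -ETIMEDOUT if the condition never becomes true):

```c
#include <linux/bits.h>
#include <linux/delay.h>
#include <linux/regmap.h>

/* Hypothetical register/bit: poll STATUS until the ready bit asserts
 * (every 2 ms, give up after 200 ms), then apply a one-shot settle
 * delay that only the reset path arms. */
static int wait_ready(struct regmap *map, bool *need_post_reset_delay)
{
	unsigned int status;
	int ret;

	ret = regmap_read_poll_timeout(map, 0x0a /* STATUS */, status,
				       status & BIT(0), 2000, 200000);
	if (!ret && *need_post_reset_delay) {
		*need_post_reset_delay = false;	/* one-shot */
		msleep(50);
	}
	return ret;	/* 0, or -ETIMEDOUT if the bit never asserted */
}
```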
+2
drivers/gpu/drm/i915/display/intel_dp.c
··· 3957 3957 3958 3958 drm_dp_pcon_hdmi_frl_link_error_count(&intel_dp->aux, &intel_dp->attached_connector->base); 3959 3959 3960 + intel_dp->frl.is_trained = false; 3961 + 3960 3962 /* Restart FRL training or fall back to TMDS mode */ 3961 3963 intel_dp_check_frl_training(intel_dp); 3962 3964 }
+2 -2
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 2293 2293 } 2294 2294 2295 2295 if (IS_DG1_GRAPHICS_STEP(i915, STEP_A0, STEP_B0) || 2296 - IS_ROCKETLAKE(i915) || IS_TIGERLAKE(i915)) { 2296 + IS_ROCKETLAKE(i915) || IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) { 2297 2297 /* 2298 2298 * Wa_1607030317:tgl 2299 2299 * Wa_1607186500:tgl 2300 - * Wa_1607297627:tgl,rkl,dg1[a0] 2300 + * Wa_1607297627:tgl,rkl,dg1[a0],adlp 2301 2301 * 2302 2302 * On TGL and RKL there are multiple entries for this WA in the 2303 2303 * BSpec; some indicate this is an A0-only WA, others indicate
+9 -2
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 591 591 pm_runtime_use_autosuspend(kdev); 592 592 } 593 593 594 - /* Enable by default */ 595 - pm_runtime_allow(kdev); 594 + /* 595 + * FIXME: Temp hammer to keep autosuspend disabled on lmem supported platforms. 596 + * As per PCIe spec 5.3.1.4.1, all iomem read/write requests over a PCIe 597 + * function will be unsupported in case the PCIe endpoint function is in D3. 598 + * Let's keep i915 autosuspend control 'on' till we fix all known issues 599 + * with lmem access in D3. 600 + */ 601 + if (!IS_DGFX(i915)) 602 + pm_runtime_allow(kdev); 596 603 597 604 /* 598 605 * The core calls the driver load handler with an RPM reference held.
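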
+1 -1
drivers/gpu/drm/msm/Kconfig
··· 155 155 Compile in support for the HDMI output MSM DRM driver. It can 156 156 be a primary or a secondary display on device. Note that this is used 157 157 only for the direct HDMI output. If the device outputs HDMI data 158 - throught some kind of DSI-to-HDMI bridge, this option can be disabled. 158 + through some kind of DSI-to-HDMI bridge, this option can be disabled. 159 159 160 160 config DRM_MSM_HDMI_HDCP 161 161 bool "Enable HDMI HDCP support in MSM DRM driver"
+11 -3
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
··· 91 91 static void *state_kcalloc(struct a6xx_gpu_state *a6xx_state, int nr, size_t objsize) 92 92 { 93 93 struct a6xx_state_memobj *obj = 94 - kzalloc((nr * objsize) + sizeof(*obj), GFP_KERNEL); 94 + kvzalloc((nr * objsize) + sizeof(*obj), GFP_KERNEL); 95 95 96 96 if (!obj) 97 97 return NULL; ··· 813 813 { 814 814 struct msm_gpu_state_bo *snapshot; 815 815 816 + if (!bo->size) 817 + return NULL; 818 + 816 819 snapshot = state_kcalloc(a6xx_state, 1, sizeof(*snapshot)); 817 820 if (!snapshot) 818 821 return NULL; ··· 1043 1040 if (a6xx_state->gmu_hfi) 1044 1041 kvfree(a6xx_state->gmu_hfi->data); 1045 1042 1046 - list_for_each_entry_safe(obj, tmp, &a6xx_state->objs, node) 1047 - kfree(obj); 1043 + if (a6xx_state->gmu_debug) 1044 + kvfree(a6xx_state->gmu_debug->data); 1045 + 1046 + list_for_each_entry_safe(obj, tmp, &a6xx_state->objs, node) { 1047 + list_del(&obj->node); 1048 + kvfree(obj); 1049 + } 1048 1050 1049 1051 adreno_gpu_state_destroy(state); 1050 1052 kfree(a6xx_state);
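Editor's note: state_kcalloc() now uses kvzalloc(), so teardown must use kvfree(), and the fixed loop also unlinks each node before freeing it. A small self-contained sketch of that pairing (structure names are illustrative):

```c
#include <linux/list.h>
#include <linux/slab.h>

struct memobj {			/* illustrative stand-in for a6xx_state_memobj */
	struct list_head node;
	/* payload follows */
};

/* kvzalloc() may hand back kmalloc- or vmalloc-backed memory, so the
 * matching free must be kvfree(); plain kfree() would corrupt a
 * vmalloc-backed allocation. Unlink each node before freeing it. */
static void free_all(struct list_head *objs)
{
	struct memobj *obj, *tmp;

	list_for_each_entry_safe(obj, tmp, objs, node) {
		list_del(&obj->node);
		kvfree(obj);
	}
}
```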
+9 -1
drivers/gpu/drm/msm/adreno/adreno_device.c
··· 679 679 struct msm_gpu *gpu = dev_to_gpu(dev); 680 680 int remaining, ret; 681 681 682 + if (!gpu) 683 + return 0; 684 + 682 685 suspend_scheduler(gpu); 683 686 684 687 remaining = wait_event_timeout(gpu->retire_event, ··· 703 700 704 701 static int adreno_system_resume(struct device *dev) 705 702 { 706 - resume_scheduler(dev_to_gpu(dev)); 703 + struct msm_gpu *gpu = dev_to_gpu(dev); 704 + 705 + if (!gpu) 706 + return 0; 707 + 708 + resume_scheduler(gpu); 707 709 return pm_runtime_force_resume(dev); 708 710 } 709 711
+6 -1
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 729 729 return buf; 730 730 } 731 731 732 - /* len is expected to be in bytes */ 732 + /* len is expected to be in bytes 733 + * 734 + * WARNING: *ptr should be allocated with kvmalloc or friends. It can be free'd 735 + * with kvfree() and replaced with a newly kvmalloc'd buffer on the first call 736 + * when the unencoded raw data is encoded 737 + */ 733 738 void adreno_show_object(struct drm_printer *p, void **ptr, int len, 734 739 bool *encoded) 735 740 {
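Editor's note: a sketch of the contract the new comment documents, assuming a hypothetical encoder that frees the caller's raw buffer and replaces it with an encoded copy, which is only safe when both sides agree on kvmalloc()/kvfree():

```c
#include <linux/slab.h>

/* Hypothetical encoder following the documented contract: the caller's
 * buffer must come from kvmalloc() because it may be kvfree()'d here
 * and replaced with a freshly kvmalloc()'d encoded copy. */
static int encode_in_place(void **ptr, size_t enc_len)
{
	void *enc = kvmalloc(enc_len, GFP_KERNEL);

	if (!enc)
		return -ENOMEM;
	/* ... encode *ptr into enc ... */
	kvfree(*ptr);		/* only valid for kvmalloc'd buffers */
	*ptr = enc;
	return 0;
}
```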
+3 -2
drivers/gpu/drm/msm/disp/mdp4/mdp4_lvds_connector.c
··· 56 56 return ret; 57 57 } 58 58 59 - static int mdp4_lvds_connector_mode_valid(struct drm_connector *connector, 60 - struct drm_display_mode *mode) 59 + static enum drm_mode_status 60 + mdp4_lvds_connector_mode_valid(struct drm_connector *connector, 61 + struct drm_display_mode *mode) 61 62 { 62 63 struct mdp4_lvds_connector *mdp4_lvds_connector = 63 64 to_mdp4_lvds_connector(connector);
+5 -8
drivers/gpu/drm/msm/dp/dp_ctrl.c
··· 1243 1243 { 1244 1244 int ret = 0; 1245 1245 const u8 *dpcd = ctrl->panel->dpcd; 1246 - u8 encoding = DP_SET_ANSI_8B10B; 1247 - u8 ssc; 1246 + u8 encoding[] = { 0, DP_SET_ANSI_8B10B }; 1248 1247 u8 assr; 1249 1248 struct dp_link_info link_info = {0}; 1250 1249 ··· 1255 1256 1256 1257 dp_aux_link_configure(ctrl->aux, &link_info); 1257 1258 1258 - if (drm_dp_max_downspread(dpcd)) { 1259 - ssc = DP_SPREAD_AMP_0_5; 1260 - drm_dp_dpcd_write(ctrl->aux, DP_DOWNSPREAD_CTRL, &ssc, 1); 1261 - } 1259 + if (drm_dp_max_downspread(dpcd)) 1260 + encoding[0] |= DP_SPREAD_AMP_0_5; 1262 1261 1263 - drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET, 1264 - &encoding, 1); 1262 + /* config DOWNSPREAD_CTRL and MAIN_LINK_CHANNEL_CODING_SET */ 1263 + drm_dp_dpcd_write(ctrl->aux, DP_DOWNSPREAD_CTRL, encoding, 2); 1265 1264 1266 1265 if (drm_dp_alternate_scrambler_reset_cap(dpcd)) { 1267 1266 assr = DP_ALTERNATE_SCRAMBLER_RESET_ENABLE;
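Editor's note: DP_DOWNSPREAD_CTRL (DPCD 0x107) and DP_MAIN_LINK_CHANNEL_CODING_SET (DPCD 0x108) are adjacent registers, which is what lets the rewrite fold two AUX writes into a single two-byte transfer. A hedged sketch of that idea in isolation (error handling simplified):

```c
#include <drm/display/drm_dp_helper.h>

/* DP_DOWNSPREAD_CTRL is DPCD 0x107 and DP_MAIN_LINK_CHANNEL_CODING_SET
 * is DPCD 0x108, so one two-byte AUX write programs both. */
static int config_link_coding(struct drm_dp_aux *aux, bool downspread)
{
	u8 values[2] = { 0, DP_SET_ANSI_8B10B };

	if (downspread)
		values[0] |= DP_SPREAD_AMP_0_5;

	/* values[0] lands at 0x107, values[1] at 0x108 */
	if (drm_dp_dpcd_write(aux, DP_DOWNSPREAD_CTRL, values,
			      sizeof(values)) != sizeof(values))
		return -EIO;
	return 0;
}
```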
+20 -3
drivers/gpu/drm/msm/dp/dp_display.c
··· 1249 1249 return -EINVAL; 1250 1250 } 1251 1251 1252 - rc = devm_request_irq(&dp->pdev->dev, dp->irq, 1252 + rc = devm_request_irq(dp_display->drm_dev->dev, dp->irq, 1253 1253 dp_display_irq_handler, 1254 1254 IRQF_TRIGGER_HIGH, "dp_display_isr", dp); 1255 1255 if (rc < 0) { ··· 1528 1528 } 1529 1529 } 1530 1530 1531 + static void of_dp_aux_depopulate_bus_void(void *data) 1532 + { 1533 + of_dp_aux_depopulate_bus(data); 1534 + } 1535 + 1531 1536 static int dp_display_get_next_bridge(struct msm_dp *dp) 1532 1537 { 1533 1538 int rc; ··· 1557 1552 * panel driver is probed asynchronously but is the best we 1558 1553 * can do without a bigger driver reorganization. 1559 1554 */ 1560 - rc = devm_of_dp_aux_populate_ep_devices(dp_priv->aux); 1555 + rc = of_dp_aux_populate_bus(dp_priv->aux, NULL); 1561 1556 of_node_put(aux_bus); 1557 + if (rc) 1558 + goto error; 1559 + 1560 + rc = devm_add_action_or_reset(dp->drm_dev->dev, 1561 + of_dp_aux_depopulate_bus_void, 1562 + dp_priv->aux); 1562 1563 if (rc) 1563 1564 goto error; 1564 1565 } else if (dp->is_edp) { ··· 1579 1568 * For DisplayPort interfaces external bridges are optional, so 1580 1569 * silently ignore an error if one is not present (-ENODEV). 1581 1570 */ 1582 - rc = dp_parser_find_next_bridge(dp_priv->parser); 1571 + rc = devm_dp_parser_find_next_bridge(dp->drm_dev->dev, dp_priv->parser); 1583 1572 if (!dp->is_edp && rc == -ENODEV) 1584 1573 return 0; 1585 1574 ··· 1608 1597 return -EINVAL; 1609 1598 1610 1599 priv = dev->dev_private; 1600 + 1601 + if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) { 1602 + DRM_DEV_ERROR(dev->dev, "too many bridges\n"); 1603 + return -ENOSPC; 1604 + } 1605 + 1611 1606 dp_display->drm_dev = dev; 1612 1607 1613 1608 dp_priv = container_of(dp_display, struct dp_display_private, dp_display);
+34
drivers/gpu/drm/msm/dp/dp_drm.c
··· 31 31 connector_status_disconnected; 32 32 } 33 33 34 + static int dp_bridge_atomic_check(struct drm_bridge *bridge, 35 + struct drm_bridge_state *bridge_state, 36 + struct drm_crtc_state *crtc_state, 37 + struct drm_connector_state *conn_state) 38 + { 39 + struct msm_dp *dp; 40 + 41 + dp = to_dp_bridge(bridge)->dp_display; 42 + 43 + drm_dbg_dp(dp->drm_dev, "is_connected = %s\n", 44 + (dp->is_connected) ? "true" : "false"); 45 + 46 + /* 47 + * There is no protection in the DRM framework to check if the display 48 + * pipeline has already been disabled before trying to disable it again. 49 + * Hence if the sink is unplugged, the pipeline gets disabled, but 50 + * crtc->active is still true. Any attempt to set the mode or manually 51 + * disable this encoder will result in a crash. 52 + * 53 + * TODO: add support for telling the DRM subsystem that the pipeline is 54 + * disabled by the hardware and thus all access to it should be forbidden. 55 + * After that this piece of code can be removed. 56 + */ 57 + if (bridge->ops & DRM_BRIDGE_OP_HPD) 58 + return (dp->is_connected) ? 0 : -ENOTCONN; 59 + 60 + return 0; 61 + } 62 + 63 + 34 64 /** 35 65 * dp_bridge_get_modes - callback to add drm modes via drm_mode_probed_add() 36 66 * @bridge: Pointer to drm bridge ··· 91 61 } 92 62 93 63 static const struct drm_bridge_funcs dp_bridge_ops = { 64 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 65 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 66 + .atomic_reset = drm_atomic_helper_bridge_reset, 94 67 .enable = dp_bridge_enable, 95 68 .disable = dp_bridge_disable, 96 69 .post_disable = dp_bridge_post_disable, ··· 101 68 .mode_valid = dp_bridge_mode_valid, 102 69 .get_modes = dp_bridge_get_modes, 103 70 .detect = dp_bridge_detect, 71 + .atomic_check = dp_bridge_atomic_check, 104 72 }; 105 73 106 74 struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev,
+3 -3
drivers/gpu/drm/msm/dp/dp_parser.c
··· 240 240 return 0; 241 241 } 242 242 243 - int dp_parser_find_next_bridge(struct dp_parser *parser) 243 + int devm_dp_parser_find_next_bridge(struct device *dev, struct dp_parser *parser) 244 244 { 245 - struct device *dev = &parser->pdev->dev; 245 + struct platform_device *pdev = parser->pdev; 246 246 struct drm_bridge *bridge; 247 247 248 - bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0); 248 + bridge = devm_drm_of_get_bridge(dev, pdev->dev.of_node, 1, 0); 249 249 if (IS_ERR(bridge)) 250 250 return PTR_ERR(bridge); 251 251
+3 -2
drivers/gpu/drm/msm/dp/dp_parser.h
··· 138 138 struct dp_parser *dp_parser_get(struct platform_device *pdev); 139 139 140 140 /** 141 - * dp_parser_find_next_bridge() - find an additional bridge to DP 141 + * devm_dp_parser_find_next_bridge() - find an additional bridge to DP 142 142 * 143 + * @dev: device to tie bridge lifetime to 143 144 * @parser: dp_parser data from client 144 145 * 145 146 * This function is used to find any additional bridge attached to ··· 148 147 * 149 148 * Return: 0 if able to get the bridge, otherwise negative errno for failure. 150 149 */ 151 - int dp_parser_find_next_bridge(struct dp_parser *parser); 150 + int devm_dp_parser_find_next_bridge(struct device *dev, struct dp_parser *parser); 152 151 153 152 #endif
+6
drivers/gpu/drm/msm/dsi/dsi.c
··· 218 218 return -EINVAL; 219 219 220 220 priv = dev->dev_private; 221 + 222 + if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) { 223 + DRM_DEV_ERROR(dev->dev, "too many bridges\n"); 224 + return -ENOSPC; 225 + } 226 + 221 227 msm_dsi->dev = dev; 222 228 223 229 ret = msm_dsi_host_modeset_init(msm_dsi->host, dev);
+6 -1
drivers/gpu/drm/msm/hdmi/hdmi.c
··· 300 300 struct platform_device *pdev = hdmi->pdev; 301 301 int ret; 302 302 303 + if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) { 304 + DRM_DEV_ERROR(dev->dev, "too many bridges\n"); 305 + return -ENOSPC; 306 + } 307 + 303 308 hdmi->dev = dev; 304 309 hdmi->encoder = encoder; 305 310 ··· 344 339 goto fail; 345 340 } 346 341 347 - ret = devm_request_irq(&pdev->dev, hdmi->irq, 342 + ret = devm_request_irq(dev->dev, hdmi->irq, 348 343 msm_hdmi_irq, IRQF_TRIGGER_HIGH, 349 344 "hdmi_isr", hdmi); 350 345 if (ret < 0) {
+1
drivers/gpu/drm/msm/msm_drv.c
··· 247 247 248 248 for (i = 0; i < priv->num_bridges; i++) 249 249 drm_bridge_remove(priv->bridges[i]); 250 + priv->num_bridges = 0; 250 251 251 252 pm_runtime_get_sync(dev); 252 253 msm_irq_uninstall(ddev);
+4 -5
drivers/gpu/drm/msm/msm_gem_submit.c
··· 501 501 */ 502 502 static void submit_cleanup(struct msm_gem_submit *submit, bool error) 503 503 { 504 - unsigned cleanup_flags = BO_LOCKED | BO_OBJ_PINNED; 504 + unsigned cleanup_flags = BO_LOCKED; 505 505 unsigned i; 506 506 507 507 if (error) 508 - cleanup_flags |= BO_VMA_PINNED; 508 + cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED; 509 509 510 510 for (i = 0; i < submit->nr_bos; i++) { 511 511 struct msm_gem_object *msm_obj = submit->bos[i].obj; ··· 706 706 struct msm_drm_private *priv = dev->dev_private; 707 707 struct drm_msm_gem_submit *args = data; 708 708 struct msm_file_private *ctx = file->driver_priv; 709 - struct msm_gem_submit *submit = NULL; 709 + struct msm_gem_submit *submit; 710 710 struct msm_gpu *gpu = priv->gpu; 711 711 struct msm_gpu_submitqueue *queue; 712 712 struct msm_ringbuffer *ring; ··· 946 946 put_unused_fd(out_fence_fd); 947 947 mutex_unlock(&queue->lock); 948 948 out_post_unlock: 949 - if (submit) 950 - msm_gem_submit_put(submit); 949 + msm_gem_submit_put(submit); 951 950 if (!IS_ERR_OR_NULL(post_deps)) { 952 951 for (i = 0; i < args->nr_out_syncobjs; ++i) { 953 952 kfree(post_deps[i].chain);
+2
drivers/gpu/drm/msm/msm_gpu.c
··· 997 997 } 998 998 999 999 msm_devfreq_cleanup(gpu); 1000 + 1001 + platform_set_drvdata(gpu->pdev, NULL); 1000 1002 }
+4
drivers/gpu/drm/msm/msm_gpu.h
··· 280 280 static inline struct msm_gpu *dev_to_gpu(struct device *dev) 281 281 { 282 282 struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(dev); 283 + 284 + if (!adreno_smmu) 285 + return NULL; 286 + 283 287 return container_of(adreno_smmu, struct msm_gpu, adreno_smmu); 284 288 } 285 289
+2 -1
drivers/gpu/drm/msm/msm_ringbuffer.c
··· 25 25 26 26 msm_gem_lock(obj); 27 27 msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx); 28 - submit->bos[i].flags &= ~BO_VMA_PINNED; 28 + msm_gem_unpin_locked(obj); 29 + submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED); 29 30 msm_gem_unlock(obj); 30 31 } 31 32
+5 -1
drivers/gpu/drm/scheduler/sched_entity.c
··· 207 207 struct drm_sched_job *job = container_of(cb, struct drm_sched_job, 208 208 finish_cb); 209 209 210 + dma_fence_put(f); 210 211 INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work); 211 212 schedule_work(&job->work); 212 213 } ··· 235 234 struct drm_sched_fence *s_fence = job->s_fence; 236 235 237 236 /* Wait for all dependencies to avoid data corruptions */ 238 - while ((f = drm_sched_job_dependency(job, entity))) 237 + while ((f = drm_sched_job_dependency(job, entity))) { 239 238 dma_fence_wait(f, false); 239 + dma_fence_put(f); 240 + } 240 241 241 242 drm_sched_fence_scheduled(s_fence); 242 243 dma_fence_set_error(&s_fence->finished, -ESRCH); ··· 253 250 continue; 254 251 } 255 252 253 + dma_fence_get(entity->last_scheduled); 256 254 r = dma_fence_add_callback(entity->last_scheduled, 257 255 &job->finish_cb, 258 256 drm_sched_entity_kill_jobs_cb);
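Editor's note: the three hunks above fix dma_fence reference leaks: drm_sched_job_dependency() returns a reference the caller must drop, and dma_fence_add_callback() does not take a reference of its own. A minimal sketch of that discipline (function names are illustrative):

```c
#include <linux/dma-fence.h>

/* The callback runs with the reference taken below; drop it there. */
static void done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
	dma_fence_put(f);
	/* ... defer the rest to process context ... */
}

/* dma_fence_add_callback() does not take its own reference, so hold
 * one across the callback's lifetime; if the fence was already
 * signaled (add_callback returns nonzero), drop it immediately. */
static void watch_fence(struct dma_fence *f, struct dma_fence_cb *cb)
{
	dma_fence_get(f);
	if (dma_fence_add_callback(f, cb, done_cb))
		dma_fence_put(f);
}
```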
+4 -3
drivers/hwtracing/coresight/coresight-core.c
··· 1687 1687 ret = coresight_fixup_device_conns(csdev); 1688 1688 if (!ret) 1689 1689 ret = coresight_fixup_orphan_conns(csdev); 1690 - if (!ret && cti_assoc_ops && cti_assoc_ops->add) 1691 - cti_assoc_ops->add(csdev); 1692 1690 1693 1691 out_unlock: 1694 1692 mutex_unlock(&coresight_mutex); 1695 1693 /* Success */ 1696 - if (!ret) 1694 + if (!ret) { 1695 + if (cti_assoc_ops && cti_assoc_ops->add) 1696 + cti_assoc_ops->add(csdev); 1697 1697 return csdev; 1698 + } 1698 1699 1699 1700 /* Unregister the device if needed */ 1700 1701 if (registered) {
+3 -7
drivers/hwtracing/coresight/coresight-cti-core.c
··· 90 90 static int cti_enable_hw(struct cti_drvdata *drvdata) 91 91 { 92 92 struct cti_config *config = &drvdata->config; 93 - struct device *dev = &drvdata->csdev->dev; 94 93 unsigned long flags; 95 94 int rc = 0; 96 95 97 - pm_runtime_get_sync(dev->parent); 98 96 spin_lock_irqsave(&drvdata->spinlock, flags); 99 97 100 98 /* no need to do anything if enabled or unpowered*/ ··· 117 119 /* cannot enable due to error */ 118 120 cti_err_not_enabled: 119 121 spin_unlock_irqrestore(&drvdata->spinlock, flags); 120 - pm_runtime_put(dev->parent); 121 122 return rc; 122 123 } 123 124 ··· 150 153 static int cti_disable_hw(struct cti_drvdata *drvdata) 151 154 { 152 155 struct cti_config *config = &drvdata->config; 153 - struct device *dev = &drvdata->csdev->dev; 154 156 struct coresight_device *csdev = drvdata->csdev; 155 157 156 158 spin_lock(&drvdata->spinlock); ··· 171 175 coresight_disclaim_device_unlocked(csdev); 172 176 CS_LOCK(drvdata->base); 173 177 spin_unlock(&drvdata->spinlock); 174 - pm_runtime_put(dev->parent); 175 178 return 0; 176 179 177 180 /* not disabled this call */ ··· 536 541 /* 537 542 * Search the cti list to add an associated CTI into the supplied CS device 538 543 * This will set the association if CTI declared before the CS device. 539 - * (called from coresight_register() with coresight_mutex locked). 544 + * (called from coresight_register() without coresight_mutex locked). 540 545 */ 541 546 static void cti_add_assoc_to_csdev(struct coresight_device *csdev) 542 547 { ··· 564 569 * if we found a matching csdev then update the ECT 565 570 * association pointer for the device with this CTI. 566 571 */ 567 - csdev->ect_dev = ect_item->csdev; 572 + coresight_set_assoc_ectdev_mutex(csdev->ect_dev, 573 + ect_item->csdev); 568 574 break; 569 575 } 570 576 }
+18 -5
drivers/iio/accel/adxl367.c
··· 1185 1185 return sysfs_emit(buf, "%d\n", fifo_watermark); 1186 1186 } 1187 1187 1188 - static IIO_CONST_ATTR(hwfifo_watermark_min, "1"); 1189 - static IIO_CONST_ATTR(hwfifo_watermark_max, 1190 - __stringify(ADXL367_FIFO_MAX_WATERMARK)); 1188 + static ssize_t hwfifo_watermark_min_show(struct device *dev, 1189 + struct device_attribute *attr, 1190 + char *buf) 1191 + { 1192 + return sysfs_emit(buf, "%s\n", "1"); 1193 + } 1194 + 1195 + static ssize_t hwfifo_watermark_max_show(struct device *dev, 1196 + struct device_attribute *attr, 1197 + char *buf) 1198 + { 1199 + return sysfs_emit(buf, "%s\n", __stringify(ADXL367_FIFO_MAX_WATERMARK)); 1200 + } 1201 + 1202 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0); 1203 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0); 1191 1204 static IIO_DEVICE_ATTR(hwfifo_watermark, 0444, 1192 1205 adxl367_get_fifo_watermark, NULL, 0); 1193 1206 static IIO_DEVICE_ATTR(hwfifo_enabled, 0444, 1194 1207 adxl367_get_fifo_enabled, NULL, 0); 1195 1208 1196 1209 static const struct attribute *adxl367_fifo_attributes[] = { 1197 - &iio_const_attr_hwfifo_watermark_min.dev_attr.attr, 1198 - &iio_const_attr_hwfifo_watermark_max.dev_attr.attr, 1210 + &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr, 1211 + &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr, 1199 1212 &iio_dev_attr_hwfifo_watermark.dev_attr.attr, 1200 1213 &iio_dev_attr_hwfifo_enabled.dev_attr.attr, 1201 1214 NULL,
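Editor's note: this converts compile-time IIO_CONST_ATTR entries into per-device read-only attributes whose show() callbacks format output with sysfs_emit(); the identical conversion repeats in the adxl372, bmc150-accel and at91-sama5d2 diffs below. A standalone sketch of the pattern outside IIO (names are illustrative):

```c
#include <linux/device.h>
#include <linux/sysfs.h>

/* DEVICE_ATTR_RO(example_min) binds this show() callback to a
 * read-only attribute named "example_min". sysfs_emit() is preferred
 * over sprintf() here because it validates the sysfs buffer. */
static ssize_t example_min_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%d\n", 1);
}
static DEVICE_ATTR_RO(example_min);
```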
+18 -5
drivers/iio/accel/adxl372.c
··· 998 998 return sprintf(buf, "%d\n", st->watermark); 999 999 } 1000 1000 1001 - static IIO_CONST_ATTR(hwfifo_watermark_min, "1"); 1002 - static IIO_CONST_ATTR(hwfifo_watermark_max, 1003 - __stringify(ADXL372_FIFO_SIZE)); 1001 + static ssize_t hwfifo_watermark_min_show(struct device *dev, 1002 + struct device_attribute *attr, 1003 + char *buf) 1004 + { 1005 + return sysfs_emit(buf, "%s\n", "1"); 1006 + } 1007 + 1008 + static ssize_t hwfifo_watermark_max_show(struct device *dev, 1009 + struct device_attribute *attr, 1010 + char *buf) 1011 + { 1012 + return sysfs_emit(buf, "%s\n", __stringify(ADXL372_FIFO_SIZE)); 1013 + } 1014 + 1015 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0); 1016 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0); 1004 1017 static IIO_DEVICE_ATTR(hwfifo_watermark, 0444, 1005 1018 adxl372_get_fifo_watermark, NULL, 0); 1006 1019 static IIO_DEVICE_ATTR(hwfifo_enabled, 0444, 1007 1020 adxl372_get_fifo_enabled, NULL, 0); 1008 1021 1009 1022 static const struct attribute *adxl372_fifo_attributes[] = { 1010 - &iio_const_attr_hwfifo_watermark_min.dev_attr.attr, 1011 - &iio_const_attr_hwfifo_watermark_max.dev_attr.attr, 1023 + &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr, 1024 + &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr, 1012 1025 &iio_dev_attr_hwfifo_watermark.dev_attr.attr, 1013 1026 &iio_dev_attr_hwfifo_enabled.dev_attr.attr, 1014 1027 NULL,
+18 -5
drivers/iio/accel/bmc150-accel-core.c
··· 925 925 { } 926 926 }; 927 927 928 - static IIO_CONST_ATTR(hwfifo_watermark_min, "1"); 929 - static IIO_CONST_ATTR(hwfifo_watermark_max, 930 - __stringify(BMC150_ACCEL_FIFO_LENGTH)); 928 + static ssize_t hwfifo_watermark_min_show(struct device *dev, 929 + struct device_attribute *attr, 930 + char *buf) 931 + { 932 + return sysfs_emit(buf, "%s\n", "1"); 933 + } 934 + 935 + static ssize_t hwfifo_watermark_max_show(struct device *dev, 936 + struct device_attribute *attr, 937 + char *buf) 938 + { 939 + return sysfs_emit(buf, "%s\n", __stringify(BMC150_ACCEL_FIFO_LENGTH)); 940 + } 941 + 942 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0); 943 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0); 931 944 static IIO_DEVICE_ATTR(hwfifo_enabled, S_IRUGO, 932 945 bmc150_accel_get_fifo_state, NULL, 0); 933 946 static IIO_DEVICE_ATTR(hwfifo_watermark, S_IRUGO, 934 947 bmc150_accel_get_fifo_watermark, NULL, 0); 935 948 936 949 static const struct attribute *bmc150_accel_fifo_attributes[] = { 937 - &iio_const_attr_hwfifo_watermark_min.dev_attr.attr, 938 - &iio_const_attr_hwfifo_watermark_max.dev_attr.attr, 950 + &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr, 951 + &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr, 939 952 &iio_dev_attr_hwfifo_watermark.dev_attr.attr, 940 953 &iio_dev_attr_hwfifo_enabled.dev_attr.attr, 941 954 NULL,
+18 -5
drivers/iio/adc/at91-sama5d2_adc.c
··· 2193 2193 return scnprintf(buf, PAGE_SIZE, "%d\n", st->dma_st.watermark); 2194 2194 } 2195 2195 2196 + static ssize_t hwfifo_watermark_min_show(struct device *dev, 2197 + struct device_attribute *attr, 2198 + char *buf) 2199 + { 2200 + return sysfs_emit(buf, "%s\n", "2"); 2201 + } 2202 + 2203 + static ssize_t hwfifo_watermark_max_show(struct device *dev, 2204 + struct device_attribute *attr, 2205 + char *buf) 2206 + { 2207 + return sysfs_emit(buf, "%s\n", AT91_HWFIFO_MAX_SIZE_STR); 2208 + } 2209 + 2196 2210 static IIO_DEVICE_ATTR(hwfifo_enabled, 0444, 2197 2211 at91_adc_get_fifo_state, NULL, 0); 2198 2212 static IIO_DEVICE_ATTR(hwfifo_watermark, 0444, 2199 2213 at91_adc_get_watermark, NULL, 0); 2200 - 2201 - static IIO_CONST_ATTR(hwfifo_watermark_min, "2"); 2202 - static IIO_CONST_ATTR(hwfifo_watermark_max, AT91_HWFIFO_MAX_SIZE_STR); 2214 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0); 2215 + static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0); 2203 2216 2204 2217 static const struct attribute *at91_adc_fifo_attributes[] = { 2205 - &iio_const_attr_hwfifo_watermark_min.dev_attr.attr, 2206 - &iio_const_attr_hwfifo_watermark_max.dev_attr.attr, 2218 + &iio_dev_attr_hwfifo_watermark_min.dev_attr.attr, 2219 + &iio_dev_attr_hwfifo_watermark_max.dev_attr.attr, 2207 2220 &iio_dev_attr_hwfifo_watermark.dev_attr.attr, 2208 2221 &iio_dev_attr_hwfifo_enabled.dev_attr.attr, 2209 2222 NULL,
+7 -6
drivers/iio/adc/mcp3911.c
··· 55 55 /* Internal voltage reference in mV */ 56 56 #define MCP3911_INT_VREF_MV 1200 57 57 58 - #define MCP3911_REG_READ(reg, id) ((((reg) << 1) | ((id) << 5) | (1 << 0)) & 0xff) 59 - #define MCP3911_REG_WRITE(reg, id) ((((reg) << 1) | ((id) << 5) | (0 << 0)) & 0xff) 58 + #define MCP3911_REG_READ(reg, id) ((((reg) << 1) | ((id) << 6) | (1 << 0)) & 0xff) 59 + #define MCP3911_REG_WRITE(reg, id) ((((reg) << 1) | ((id) << 6) | (0 << 0)) & 0xff) 60 + #define MCP3911_REG_MASK GENMASK(4, 1) 60 61 61 62 #define MCP3911_NUM_CHANNELS 2 62 63 ··· 90 89 91 90 be32_to_cpus(val); 92 91 *val >>= ((4 - len) * 8); 93 - dev_dbg(&adc->spi->dev, "reading 0x%x from register 0x%x\n", *val, 94 - reg >> 1); 92 + dev_dbg(&adc->spi->dev, "reading 0x%x from register 0x%lx\n", *val, 93 + FIELD_GET(MCP3911_REG_MASK, reg)); 95 94 return ret; 96 95 } 97 96 ··· 249 248 break; 250 249 251 250 case IIO_CHAN_INFO_OVERSAMPLING_RATIO: 252 - for (int i = 0; i < sizeof(mcp3911_osr_table); i++) { 251 + for (int i = 0; i < ARRAY_SIZE(mcp3911_osr_table); i++) { 253 252 if (val == mcp3911_osr_table[i]) { 254 253 val = FIELD_PREP(MCP3911_CONFIG_OSR, i); 255 254 ret = mcp3911_update(adc, MCP3911_REG_CONFIG, MCP3911_CONFIG_OSR, ··· 497 496 indio_dev->name, 498 497 iio_device_id(indio_dev)); 499 498 if (!adc->trig) 500 - return PTR_ERR(adc->trig); 499 + return -ENOMEM; 501 500 502 501 adc->trig->ops = &mcp3911_trigger_ops; 503 502 iio_trigger_set_drvdata(adc->trig, adc);
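Editor's note: two of the mcp3911 fixes are classic C pitfalls: the register-address field belongs at bit 6 of the command byte, and `sizeof(array)` counts bytes, not elements, so the OSR loop overran its table. A short sketch of the ARRAY_SIZE() idiom (table values are illustrative):

```c
#include <linux/errno.h>
#include <linux/kernel.h>	/* ARRAY_SIZE() */
#include <linux/types.h>

static const u32 osr_table[] = { 32, 64, 128, 256, 512, 1024, 2048, 4096 };

/* sizeof(osr_table) is 32 (bytes), not 8 (elements); iterating with it
 * reads past the end of the table. ARRAY_SIZE() divides out the
 * element size and yields the element count. */
static int osr_to_index(u32 val)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(osr_table); i++)
		if (osr_table[i] == val)
			return i;
	return -EINVAL;
}
```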
+6 -5
drivers/iio/adc/stm32-adc.c
··· 2086 2086 stm32_adc_chan_init_one(indio_dev, &channels[scan_index], val, 2087 2087 vin[1], scan_index, differential); 2088 2088 2089 + val = 0; 2089 2090 ret = fwnode_property_read_u32(child, "st,min-sample-time-ns", &val); 2090 2091 /* st,min-sample-time-ns is optional */ 2091 - if (!ret) { 2092 - stm32_adc_smpr_init(adc, channels[scan_index].channel, val); 2093 - if (differential) 2094 - stm32_adc_smpr_init(adc, vin[1], val); 2095 - } else if (ret != -EINVAL) { 2092 + if (ret && ret != -EINVAL) { 2096 2093 dev_err(&indio_dev->dev, "Invalid st,min-sample-time-ns property %d\n", 2097 2094 ret); 2098 2095 goto err; 2099 2096 } 2097 + 2098 + stm32_adc_smpr_init(adc, channels[scan_index].channel, val); 2099 + if (differential) 2100 + stm32_adc_smpr_init(adc, vin[1], val); 2100 2101 2101 2102 scan_index++; 2102 2103 }
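Editor's note: the stm32-adc fix makes the optional st,min-sample-time-ns property default to 0 when absent instead of leaving the sample time unprogrammed. A sketch of the optional-property-with-default pattern (the driver treats -EINVAL from the read as "property absent"):

```c
#include <linux/errno.h>
#include <linux/property.h>

/* Pre-initialize the output so the register programming after the read
 * always sees a defined value; only errors other than -EINVAL (taken
 * to mean "property absent") are fatal. */
static int read_min_sample_time(struct fwnode_handle *node, u32 *out)
{
	int ret;

	*out = 0;	/* default when the optional property is missing */
	ret = fwnode_property_read_u32(node, "st,min-sample-time-ns", out);
	if (ret && ret != -EINVAL)
		return ret;
	return 0;
}
```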
+1 -1
drivers/iio/light/tsl2583.c
··· 858 858 TSL2583_POWER_OFF_DELAY_MS); 859 859 pm_runtime_use_autosuspend(&clientp->dev); 860 860 861 - ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev); 861 + ret = iio_device_register(indio_dev); 862 862 if (ret) { 863 863 dev_err(&clientp->dev, "%s: iio registration failed\n", 864 864 __func__);
+6 -7
drivers/iio/temperature/ltc2983.c
··· 1385 1385 return ret; 1386 1386 } 1387 1387 1388 - st->iio_chan = devm_kzalloc(&st->spi->dev, 1389 - st->iio_channels * sizeof(*st->iio_chan), 1390 - GFP_KERNEL); 1391 - 1392 - if (!st->iio_chan) 1393 - return -ENOMEM; 1394 - 1395 1388 ret = regmap_update_bits(st->regmap, LTC2983_GLOBAL_CONFIG_REG, 1396 1389 LTC2983_NOTCH_FREQ_MASK, 1397 1390 LTC2983_NOTCH_FREQ(st->filter_notch_freq)); ··· 1506 1513 usleep_range(1000, 1200); 1507 1514 gpiod_set_value_cansleep(gpio, 0); 1508 1515 } 1516 + 1517 + st->iio_chan = devm_kzalloc(&spi->dev, 1518 + st->iio_channels * sizeof(*st->iio_chan), 1519 + GFP_KERNEL); 1520 + if (!st->iio_chan) 1521 + return -ENOMEM; 1509 1522 1510 1523 ret = ltc2983_setup(st, true); 1511 1524 if (ret)
+1 -1
drivers/infiniband/core/cma.c
··· 1556 1556 return false; 1557 1557 1558 1558 memset(&fl4, 0, sizeof(fl4)); 1559 - fl4.flowi4_iif = net_dev->ifindex; 1559 + fl4.flowi4_oif = net_dev->ifindex; 1560 1560 fl4.daddr = daddr; 1561 1561 fl4.saddr = saddr; 1562 1562
+9 -1
drivers/infiniband/core/device.c
··· 2815 2815 2816 2816 nldev_init(); 2817 2817 rdma_nl_register(RDMA_NL_LS, ibnl_ls_cb_table); 2818 - roce_gid_mgmt_init(); 2818 + ret = roce_gid_mgmt_init(); 2819 + if (ret) { 2820 + pr_warn("Couldn't init RoCE GID management\n"); 2821 + goto err_parent; 2822 + } 2819 2823 2820 2824 return 0; 2821 2825 2826 + err_parent: 2827 + rdma_nl_unregister(RDMA_NL_LS); 2828 + nldev_exit(); 2829 + unregister_pernet_device(&rdma_dev_net_ops); 2822 2830 err_compat: 2823 2831 unregister_blocking_lsm_notifier(&ibdev_lsm_nb); 2824 2832 err_sa:
+1 -1
drivers/infiniband/core/nldev.c
··· 2537 2537 rdma_nl_register(RDMA_NL_NLDEV, nldev_cb_table); 2538 2538 } 2539 2539 2540 - void __exit nldev_exit(void) 2540 + void nldev_exit(void) 2541 2541 { 2542 2542 rdma_nl_unregister(RDMA_NL_NLDEV); 2543 2543 }
+3 -1
drivers/infiniband/hw/efa/efa_main.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause 2 2 /* 3 - * Copyright 2018-2021 Amazon.com, Inc. or its affiliates. All rights reserved. 3 + * Copyright 2018-2022 Amazon.com, Inc. or its affiliates. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/module.h> ··· 14 14 15 15 #define PCI_DEV_ID_EFA0_VF 0xefa0 16 16 #define PCI_DEV_ID_EFA1_VF 0xefa1 17 + #define PCI_DEV_ID_EFA2_VF 0xefa2 17 18 18 19 static const struct pci_device_id efa_pci_tbl[] = { 19 20 { PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA0_VF) }, 20 21 { PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA1_VF) }, 22 + { PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA2_VF) }, 21 23 { } 22 24 }; 23 25
+1 -2
drivers/infiniband/hw/hfi1/pio.c
··· 913 913 spin_unlock(&sc->release_lock); 914 914 915 915 write_seqlock(&sc->waitlock); 916 - if (!list_empty(&sc->piowait)) 917 - list_move(&sc->piowait, &wake_list); 916 + list_splice_init(&sc->piowait, &wake_list); 918 917 write_sequnlock(&sc->waitlock); 919 918 while (!list_empty(&wake_list)) { 920 919 struct iowait *wait;
+4 -11
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 118 118 HR_OPC_MAP(ATOMIC_CMP_AND_SWP, ATOM_CMP_AND_SWAP), 119 119 HR_OPC_MAP(ATOMIC_FETCH_AND_ADD, ATOM_FETCH_AND_ADD), 120 120 HR_OPC_MAP(SEND_WITH_INV, SEND_WITH_INV), 121 - HR_OPC_MAP(LOCAL_INV, LOCAL_INV), 122 121 HR_OPC_MAP(MASKED_ATOMIC_CMP_AND_SWP, ATOM_MSK_CMP_AND_SWAP), 123 122 HR_OPC_MAP(MASKED_ATOMIC_FETCH_AND_ADD, ATOM_MSK_FETCH_AND_ADD), 124 123 HR_OPC_MAP(REG_MR, FAST_REG_PMR), ··· 558 559 else 559 560 ret = -EOPNOTSUPP; 560 561 break; 561 - case IB_WR_LOCAL_INV: 562 - hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_SO); 563 - fallthrough; 564 562 case IB_WR_SEND_WITH_INV: 565 563 rc_sq_wqe->inv_key = cpu_to_le32(wr->ex.invalidate_rkey); 566 564 break; ··· 2801 2805 2802 2806 static int free_mr_init(struct hns_roce_dev *hr_dev) 2803 2807 { 2808 + struct hns_roce_v2_priv *priv = hr_dev->priv; 2809 + struct hns_roce_v2_free_mr *free_mr = &priv->free_mr; 2804 2810 int ret; 2811 + 2812 + mutex_init(&free_mr->mutex); 2805 2813 2806 2814 ret = free_mr_alloc_res(hr_dev); 2807 2815 if (ret) ··· 3222 3222 3223 3223 hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_VALID); 3224 3224 hr_reg_write(mpt_entry, MPT_PD, mr->pd); 3225 - hr_reg_enable(mpt_entry, MPT_L_INV_EN); 3226 3225 3227 3226 hr_reg_write_bool(mpt_entry, MPT_BIND_EN, 3228 3227 mr->access & IB_ACCESS_MW_BIND); ··· 3312 3313 3313 3314 hr_reg_enable(mpt_entry, MPT_RA_EN); 3314 3315 hr_reg_enable(mpt_entry, MPT_R_INV_EN); 3315 - hr_reg_enable(mpt_entry, MPT_L_INV_EN); 3316 3316 3317 3317 hr_reg_enable(mpt_entry, MPT_FRE); 3318 3318 hr_reg_clear(mpt_entry, MPT_MR_MW); ··· 3343 3345 hr_reg_write(mpt_entry, MPT_PD, mw->pdn); 3344 3346 3345 3347 hr_reg_enable(mpt_entry, MPT_R_INV_EN); 3346 - hr_reg_enable(mpt_entry, MPT_L_INV_EN); 3347 3348 hr_reg_enable(mpt_entry, MPT_LW_EN); 3348 3349 3349 3350 hr_reg_enable(mpt_entry, MPT_MR_MW); ··· 3791 3794 HR_WC_OP_MAP(RDMA_READ, RDMA_READ), 3792 3795 HR_WC_OP_MAP(RDMA_WRITE, RDMA_WRITE), 3793 3796 HR_WC_OP_MAP(RDMA_WRITE_WITH_IMM, RDMA_WRITE), 3794 - HR_WC_OP_MAP(LOCAL_INV, LOCAL_INV), 3795 3797 HR_WC_OP_MAP(ATOM_CMP_AND_SWAP, COMP_SWAP), 3796 3798 HR_WC_OP_MAP(ATOM_FETCH_AND_ADD, FETCH_ADD), 3797 3799 HR_WC_OP_MAP(ATOM_MSK_CMP_AND_SWAP, MASKED_COMP_SWAP), ··· 3839 3843 case HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM: 3840 3844 case HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM: 3841 3845 wc->wc_flags |= IB_WC_WITH_IMM; 3842 - break; 3843 - case HNS_ROCE_V2_WQE_OP_LOCAL_INV: 3844 - wc->wc_flags |= IB_WC_WITH_INVALIDATE; 3845 3846 break; 3846 3847 case HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP: 3847 3848 case HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD:
-2
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 179 179 HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP = 0x8, 180 180 HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD = 0x9, 181 181 HNS_ROCE_V2_WQE_OP_FAST_REG_PMR = 0xa, 182 - HNS_ROCE_V2_WQE_OP_LOCAL_INV = 0xb, 183 182 HNS_ROCE_V2_WQE_OP_BIND_MW = 0xc, 184 183 HNS_ROCE_V2_WQE_OP_MASK = 0x1f, 185 184 }; ··· 914 915 #define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7) 915 916 #define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8) 916 917 #define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9) 917 - #define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10) 918 918 #define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11) 919 919 #define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12) 920 920 #define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
+8 -1
drivers/infiniband/hw/qedr/main.c
··· 344 344 if (IS_IWARP(dev)) { 345 345 xa_init(&dev->qps); 346 346 dev->iwarp_wq = create_singlethread_workqueue("qedr_iwarpq"); 347 + if (!dev->iwarp_wq) { 348 + rc = -ENOMEM; 349 + goto err1; 350 + } 347 351 } 348 352 349 353 /* Allocate Status blocks for CNQ */ ··· 355 351 GFP_KERNEL); 356 352 if (!dev->sb_array) { 357 353 rc = -ENOMEM; 358 - goto err1; 354 + goto err_destroy_wq; 359 355 } 360 356 361 357 dev->cnq_array = kcalloc(dev->num_cnq, ··· 406 402 kfree(dev->cnq_array); 407 403 err2: 408 404 kfree(dev->sb_array); 405 + err_destroy_wq: 406 + if (IS_IWARP(dev)) 407 + destroy_workqueue(dev->iwarp_wq); 409 408 err1: 410 409 kfree(dev->sgid_tbl); 411 410 return rc;
+3 -1
drivers/infiniband/sw/rxe/rxe_resp.c
··· 806 806 807 807 skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload, 808 808 res->cur_psn, AETH_ACK_UNLIMITED); 809 - if (!skb) 809 + if (!skb) { 810 + rxe_put(mr); 810 811 return RESPST_ERR_RNR; 812 + } 811 813 812 814 rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt), 813 815 payload, RXE_FROM_MR_OBJ);
+1 -1
drivers/isdn/hardware/mISDN/netjet.c
··· 956 956 } 957 957 if (card->irq > 0) 958 958 free_irq(card->irq, card); 959 - if (card->isac.dch.dev.dev.class) 959 + if (device_is_registered(&card->isac.dch.dev.dev)) 960 960 mISDN_unregister_device(&card->isac.dch.dev); 961 961 962 962 for (i = 0; i < 2; i++) {
+3 -2
drivers/isdn/mISDN/core.c
··· 233 233 if (debug & DEBUG_CORE) 234 234 printk(KERN_DEBUG "mISDN_register %s %d\n", 235 235 dev_name(&dev->dev), dev->id); 236 + dev->dev.class = &mISDN_class; 237 + 236 238 err = create_stack(dev); 237 239 if (err) 238 240 goto error1; 239 241 240 - dev->dev.class = &mISDN_class; 241 242 dev->dev.platform_data = dev; 242 243 dev->dev.parent = parent; 243 244 dev_set_drvdata(&dev->dev, dev); ··· 250 249 251 250 error3: 252 251 delete_stack(dev); 253 - return err; 254 252 error1: 253 + put_device(&dev->dev); 255 254 return err; 256 255 257 256 }
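Editor's note: the mISDN fix sets dev.class before creating the stack and, on failure, releases the device with put_device() instead of leaking it. A sketch of the underlying driver-core rule, which applies to any struct device handed to the core:

```c
#include <linux/device.h>

/* Once device_register() has been called, the device's memory belongs
 * to the driver core: error paths must release it with put_device()
 * (which invokes the release callback), never with kfree(). */
static int register_child(struct device *dev)
{
	int err;

	err = device_register(dev);
	if (err) {
		put_device(dev);	/* not kfree(dev) */
		return err;
	}
	return 0;
}
```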
+3 -3
drivers/misc/sgi-gru/grumain.c
··· 152 152 * Optionally, build an array of chars that contain the bit numbers allocated. 153 153 */ 154 154 static unsigned long reserve_resources(unsigned long *p, int n, int mmax, 155 - char *idx) 155 + signed char *idx) 156 156 { 157 157 unsigned long bits = 0; 158 158 int i; ··· 170 170 } 171 171 172 172 unsigned long gru_reserve_cb_resources(struct gru_state *gru, int cbr_au_count, 173 - char *cbmap) 173 + signed char *cbmap) 174 174 { 175 175 return reserve_resources(&gru->gs_cbr_map, cbr_au_count, GRU_CBR_AU, 176 176 cbmap); 177 177 } 178 178 179 179 unsigned long gru_reserve_ds_resources(struct gru_state *gru, int dsr_au_count, 180 - char *dsmap) 180 + signed char *dsmap) 181 181 { 182 182 return reserve_resources(&gru->gs_dsr_map, dsr_au_count, GRU_DSR_AU, 183 183 dsmap);
+7 -7
drivers/misc/sgi-gru/grutables.h
··· 351 351 pid_t ts_tgid_owner; /* task that is using the 352 352 context - for migration */ 353 353 short ts_user_blade_id;/* user selected blade */ 354 - char ts_user_chiplet_id;/* user selected chiplet */ 354 + signed char ts_user_chiplet_id;/* user selected chiplet */ 355 355 unsigned short ts_sizeavail; /* Pagesizes in use */ 356 356 int ts_tsid; /* thread that owns the 357 357 structure */ ··· 364 364 required for context */ 365 365 unsigned char ts_cbr_au_count;/* Number of CBR resources 366 366 required for context */ 367 - char ts_cch_req_slice;/* CCH packet slice */ 368 - char ts_blade; /* If >= 0, migrate context if 367 + signed char ts_cch_req_slice;/* CCH packet slice */ 368 + signed char ts_blade; /* If >= 0, migrate context if 369 369 ref from different blade */ 370 - char ts_force_cch_reload; 371 - char ts_cbr_idx[GRU_CBR_AU];/* CBR numbers of each 370 + signed char ts_force_cch_reload; 371 + signed char ts_cbr_idx[GRU_CBR_AU];/* CBR numbers of each 372 372 allocated CB */ 373 373 int ts_data_valid; /* Indicates if ts_gdata has 374 374 valid data */ ··· 643 643 int cbr_au_count, int dsr_au_count, 644 644 unsigned char tlb_preload_count, int options, int tsid); 645 645 extern unsigned long gru_reserve_cb_resources(struct gru_state *gru, 646 - int cbr_au_count, char *cbmap); 646 + int cbr_au_count, signed char *cbmap); 647 647 extern unsigned long gru_reserve_ds_resources(struct gru_state *gru, 648 - int dsr_au_count, char *dsmap); 648 + int dsr_au_count, signed char *dsmap); 649 649 extern vm_fault_t gru_fault(struct vm_fault *vmf); 650 650 extern struct gru_mm_struct *gru_register_mmu_notifier(void); 651 651 extern void gru_drop_mmu_notifier(struct gru_mm_struct *gms);
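Editor's note: plain `char` has implementation-defined signedness (it is unsigned on arm and s390, among others), so fields that hold -1 sentinels or are compared against 0 must be `signed char`, which is exactly what this patch makes explicit. A tiny userspace demonstration of the pitfall:

```c
#include <stdio.h>

int main(void)
{
	char maybe = -1;	/* signedness is implementation-defined */
	signed char sure = -1;	/* signed on every architecture */

	/* Where plain char is unsigned (arm, s390, ...), this prints
	 * "255 -1" and the comparison below is false. */
	printf("%d %d\n", maybe, sure);
	return (maybe < 0) ? 0 : 1;	/* architecture-dependent! */
}
```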
+26 -18
drivers/mmc/core/block.c
··· 134 134 * track of the currently selected device partition. 135 135 */ 136 136 unsigned int part_curr; 137 + #define MMC_BLK_PART_INVALID UINT_MAX /* Unknown partition active */ 137 138 int area_type; 138 139 139 140 /* debugfs files (only in main mmc_blk_data) */ ··· 988 987 return ms; 989 988 } 990 989 990 + /* 991 + * Attempts to reset the card and get back to the requested partition. 992 + * Therefore any error here must result in cancelling the block layer 993 + * request; it must not be reattempted without going through the mmc_blk 994 + * partition sanity checks. 995 + */ 991 996 static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host, 992 997 int type) 993 998 { 994 999 int err; 1000 + struct mmc_blk_data *main_md = dev_get_drvdata(&host->card->dev); 995 1001 996 1002 if (md->reset_done & type) 997 1003 return -EEXIST; 998 1004 999 1005 md->reset_done |= type; 1000 1006 err = mmc_hw_reset(host->card); 1007 + /* 1008 + * A successful reset will leave the card in the main partition, but 1009 + * upon failure it might not be, so set it to MMC_BLK_PART_INVALID 1010 + * in that case. 1011 + */ 1012 + main_md->part_curr = err ? MMC_BLK_PART_INVALID : main_md->part_type; 1013 + if (err) 1014 + return err; 1001 1015 /* Ensure we switch back to the correct partition */ 1002 - if (err) { 1003 - struct mmc_blk_data *main_md = 1004 - dev_get_drvdata(&host->card->dev); 1005 - int part_err; 1006 - 1007 - main_md->part_curr = main_md->part_type; 1008 - part_err = mmc_blk_part_switch(host->card, md->part_type); 1009 - if (part_err) { 1010 - /* 1011 - * We have failed to get back into the correct 1012 - * partition, so we need to abort the whole request. 1013 - */ 1014 - return -ENODEV; 1015 - } 1016 - } 1017 - return err; 1016 + if (mmc_blk_part_switch(host->card, md->part_type)) 1017 + /* 1018 + * We have failed to get back into the correct 1019 + * partition, so we need to abort the whole request. 1020 + */ 1021 + return -ENODEV; 1022 + return 0; 1018 1023 } 1019 1024 1020 1025 static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type) ··· 1878 1871 return; 1879 1872 1880 1873 /* Reset before last retry */ 1881 - if (mqrq->retries + 1 == MMC_MAX_RETRIES) 1882 - mmc_blk_reset(md, card->host, type); 1874 + if (mqrq->retries + 1 == MMC_MAX_RETRIES && 1875 + mmc_blk_reset(md, card->host, type)) 1876 + return; 1883 1877 1884 1878 /* Command errors fail fast, so use all MMC_MAX_RETRIES */ 1885 1879 if (brq->sbc.error || brq->cmd.error)
+8
drivers/mmc/core/queue.c
··· 48 48 case REQ_OP_DRV_OUT: 49 49 case REQ_OP_DISCARD: 50 50 case REQ_OP_SECURE_ERASE: 51 + case REQ_OP_WRITE_ZEROES: 51 52 return MMC_ISSUE_SYNC; 52 53 case REQ_OP_FLUSH: 53 54 return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC; ··· 493 492 */ 494 493 if (blk_queue_quiesced(q)) 495 494 blk_mq_unquiesce_queue(q); 495 + 496 + /* 497 + * If the recovery completes the last (and only remaining) request in 498 + * the queue, and the card has been removed, we could end up here with 499 + * the recovery not quite finished yet, so cancel it. 500 + */ 501 + cancel_work_sync(&mq->recovery_work); 496 502 497 503 blk_mq_free_tag_set(&mq->tag_set); 498 504
+2 -1
drivers/mmc/core/sdio_bus.c
··· 291 291 { 292 292 struct sdio_func *func = dev_to_sdio_func(dev); 293 293 294 - sdio_free_func_cis(func); 294 + if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO)) 295 + sdio_free_func_cis(func); 295 296 296 297 kfree(func->info); 297 298 kfree(func->tmpbuf);
+2 -1
drivers/mmc/host/Kconfig
··· 1075 1075 1076 1076 config MMC_SDHCI_AM654 1077 1077 tristate "Support for the SDHCI Controller in TI's AM654 SOCs" 1078 - depends on MMC_SDHCI_PLTFM && OF && REGMAP_MMIO 1078 + depends on MMC_SDHCI_PLTFM && OF 1079 1079 select MMC_SDHCI_IO_ACCESSORS 1080 1080 select MMC_CQHCI 1081 + select REGMAP_MMIO 1081 1082 help 1082 1083 This selects the Secure Digital Host Controller Interface (SDHCI) 1083 1084 support present in TI's AM654 SOCs. The controller supports
+8 -6
drivers/mmc/host/sdhci-esdhc-imx.c
··· 1660 1660 host->mmc_host_ops.execute_tuning = usdhc_execute_tuning; 1661 1661 } 1662 1662 1663 + err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data); 1664 + if (err) 1665 + goto disable_ahb_clk; 1666 + 1663 1667 if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING) 1664 1668 sdhci_esdhc_ops.platform_execute_tuning = 1665 1669 esdhc_executing_tuning; ··· 1671 1667 if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536) 1672 1668 host->quirks |= SDHCI_QUIRK_BROKEN_ADMA; 1673 1669 1674 - if (imx_data->socdata->flags & ESDHC_FLAG_HS400) 1670 + if (host->caps & MMC_CAP_8_BIT_DATA && 1671 + imx_data->socdata->flags & ESDHC_FLAG_HS400) 1675 1672 host->mmc->caps2 |= MMC_CAP2_HS400; 1676 1673 1677 1674 if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23) 1678 1675 host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN; 1679 1676 1680 - if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) { 1677 + if (host->caps & MMC_CAP_8_BIT_DATA && 1678 + imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) { 1681 1679 host->mmc->caps2 |= MMC_CAP2_HS400_ES; 1682 1680 host->mmc_host_ops.hs400_enhanced_strobe = 1683 1681 esdhc_hs400_enhanced_strobe; ··· 1700 1694 if (err) 1701 1695 goto disable_ahb_clk; 1702 1696 } 1703 - 1704 - err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data); 1705 - if (err) 1706 - goto disable_ahb_clk; 1707 1697 1708 1698 sdhci_esdhc_imx_hwinit(host); 1709 1699
+11 -3
drivers/mmc/host/sdhci-pci-core.c
··· 914 914 dmi_match(DMI_SYS_VENDOR, "IRBIS")); 915 915 } 916 916 917 + static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot) 918 + { 919 + return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_JSL_EMMC && 920 + dmi_match(DMI_BIOS_VENDOR, "ASUSTeK COMPUTER INC."); 921 + } 922 + 917 923 static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot) 918 924 { 919 925 int ret = byt_emmc_probe_slot(slot); ··· 928 922 slot->host->mmc->caps2 |= MMC_CAP2_CQE; 929 923 930 924 if (slot->chip->pdev->device != PCI_DEVICE_ID_INTEL_GLK_EMMC) { 931 - slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES; 932 - slot->host->mmc_host_ops.hs400_enhanced_strobe = 933 - intel_hs400_enhanced_strobe; 925 + if (!jsl_broken_hs400es(slot)) { 926 + slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES; 927 + slot->host->mmc_host_ops.hs400_enhanced_strobe = 928 + intel_hs400_enhanced_strobe; 929 + } 934 930 slot->host->mmc->caps2 |= MMC_CAP2_CQE_DCMD; 935 931 } 936 932
+1 -1
drivers/mtd/mtdcore.c
··· 562 562 if (!mtd_is_partition(mtd)) 563 563 return; 564 564 parent = mtd->parent; 565 - parent_dn = dev_of_node(&parent->dev); 565 + parent_dn = of_node_get(dev_of_node(&parent->dev)); 566 566 if (!parent_dn) 567 567 return; 568 568
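Editor's note: dev_of_node() hands back a node pointer without taking a reference, so a function that finishes with of_node_put() (as this one does) must first take its own reference; of_node_get() is NULL-safe. A sketch of the balanced pattern:

```c
#include <linux/device.h>
#include <linux/of.h>

/* dev_of_node() does not take a reference; take one explicitly
 * (of_node_get() is NULL-safe) so the of_node_put() at the end of the
 * function is balanced. */
static void inspect_node(struct device *dev)
{
	struct device_node *dn = of_node_get(dev_of_node(dev));

	if (!dn)
		return;
	/* ... walk properties / children ... */
	of_node_put(dn);
}
```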
+15 -8
drivers/mtd/nand/raw/intel-nand-controller.c
··· 608 608 ret = of_property_read_u32(chip_np, "reg", &cs); 609 609 if (ret) { 610 610 dev_err(dev, "failed to get chip select: %d\n", ret); 611 - return ret; 611 + goto err_of_node_put; 612 612 } 613 613 if (cs >= MAX_CS) { 614 614 dev_err(dev, "got invalid chip select: %d\n", cs); 615 - return -EINVAL; 615 + ret = -EINVAL; 616 + goto err_of_node_put; 616 617 } 617 618 618 619 ebu_host->cs_num = cs; ··· 621 620 resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs); 622 621 ebu_host->cs[cs].chipaddr = devm_platform_ioremap_resource_byname(pdev, 623 622 resname); 624 - if (IS_ERR(ebu_host->cs[cs].chipaddr)) 625 - return PTR_ERR(ebu_host->cs[cs].chipaddr); 623 + if (IS_ERR(ebu_host->cs[cs].chipaddr)) { 624 + ret = PTR_ERR(ebu_host->cs[cs].chipaddr); 625 + goto err_of_node_put; 626 + } 626 627 627 628 ebu_host->clk = devm_clk_get(dev, NULL); 628 - if (IS_ERR(ebu_host->clk)) 629 - return dev_err_probe(dev, PTR_ERR(ebu_host->clk), 630 - "failed to get clock\n"); 629 + if (IS_ERR(ebu_host->clk)) { 630 + ret = dev_err_probe(dev, PTR_ERR(ebu_host->clk), 631 + "failed to get clock\n"); 632 + goto err_of_node_put; 633 + } 631 634 632 635 ret = clk_prepare_enable(ebu_host->clk); 633 636 if (ret) { 634 637 dev_err(dev, "failed to enable clock: %d\n", ret); 635 - return ret; 638 + goto err_of_node_put; 636 639 } 637 640 638 641 ebu_host->dma_tx = dma_request_chan(dev, "tx"); ··· 700 695 ebu_dma_cleanup(ebu_host); 701 696 err_disable_unprepare_clk: 702 697 clk_disable_unprepare(ebu_host->clk); 698 + err_of_node_put: 699 + of_node_put(chip_np); 703 700 704 701 return ret; 705 702 }
+1 -1
drivers/mtd/nand/raw/marvell_nand.c
··· 2678 2678 chip->controller = &nfc->controller; 2679 2679 nand_set_flash_node(chip, np); 2680 2680 2681 - if (!of_property_read_bool(np, "marvell,nand-keep-config")) 2681 + if (of_property_read_bool(np, "marvell,nand-keep-config")) 2682 2682 chip->options |= NAND_KEEP_TIMINGS; 2683 2683 2684 2684 mtd = nand_to_mtd(chip);
+3 -1
drivers/mtd/nand/raw/tegra_nand.c
··· 1181 1181 pm_runtime_enable(&pdev->dev); 1182 1182 err = pm_runtime_resume_and_get(&pdev->dev); 1183 1183 if (err) 1184 - return err; 1184 + goto err_dis_pm; 1185 1185 1186 1186 err = reset_control_reset(rst); 1187 1187 if (err) { ··· 1215 1215 err_put_pm: 1216 1216 pm_runtime_put_sync_suspend(ctrl->dev); 1217 1217 pm_runtime_force_suspend(ctrl->dev); 1218 + err_dis_pm: 1219 + pm_runtime_disable(&pdev->dev); 1218 1220 return err; 1219 1221 } 1220 1222
+2 -2
drivers/mtd/parsers/bcm47xxpart.c
··· 233 233 } 234 234 235 235 /* Read middle of the block */ 236 - err = mtd_read(master, offset + 0x8000, 0x4, &bytes_read, 236 + err = mtd_read(master, offset + (blocksize / 2), 0x4, &bytes_read, 237 237 (uint8_t *)buf); 238 238 if (err && !mtd_is_bitflip(err)) { 239 239 pr_err("mtd_read error while parsing (offset: 0x%X): %d\n", 240 - offset + 0x8000, err); 240 + offset + (blocksize / 2), err); 241 241 continue; 242 242 } 243 243
+3 -1
drivers/mtd/spi-nor/core.c
··· 2724 2724 */ 2725 2725 WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET, 2726 2726 "enabling reset hack; may not recover from unexpected reboots\n"); 2727 - return nor->params->set_4byte_addr_mode(nor, true); 2727 + err = nor->params->set_4byte_addr_mode(nor, true); 2728 + if (err && err != -ENOTSUPP) 2729 + return err; 2728 2730 } 2729 2731 2730 2732 return 0;
+18 -7
drivers/net/dsa/dsa_loop.c
··· 376 376 377 377 #define NUM_FIXED_PHYS (DSA_LOOP_NUM_PORTS - 2) 378 378 379 + static void dsa_loop_phydevs_unregister(void) 380 + { 381 + unsigned int i; 382 + 383 + for (i = 0; i < NUM_FIXED_PHYS; i++) 384 + if (!IS_ERR(phydevs[i])) { 385 + fixed_phy_unregister(phydevs[i]); 386 + phy_device_free(phydevs[i]); 387 + } 388 + } 389 + 379 390 static int __init dsa_loop_init(void) 380 391 { 381 392 struct fixed_phy_status status = { ··· 394 383 .speed = SPEED_100, 395 384 .duplex = DUPLEX_FULL, 396 385 }; 397 - unsigned int i; 386 + unsigned int i, ret; 398 387 399 388 for (i = 0; i < NUM_FIXED_PHYS; i++) 400 389 phydevs[i] = fixed_phy_register(PHY_POLL, &status, NULL); 401 390 402 - return mdio_driver_register(&dsa_loop_drv); 391 + ret = mdio_driver_register(&dsa_loop_drv); 392 + if (ret) 393 + dsa_loop_phydevs_unregister(); 394 + 395 + return ret; 403 396 } 404 397 module_init(dsa_loop_init); 405 398 406 399 static void __exit dsa_loop_exit(void) 407 400 { 408 - unsigned int i; 409 - 410 401 mdio_driver_unregister(&dsa_loop_drv); 411 - for (i = 0; i < NUM_FIXED_PHYS; i++) 412 - if (!IS_ERR(phydevs[i])) 413 - fixed_phy_unregister(phydevs[i]); 402 + dsa_loop_phydevs_unregister(); 414 403 } 415 404 module_exit(dsa_loop_exit); 416 405
+29 -9
drivers/net/ethernet/adi/adin1110.c
··· 1528 1528 .notifier_call = adin1110_switchdev_event, 1529 1529 }; 1530 1530 1531 - static void adin1110_unregister_notifiers(void *data) 1531 + static void adin1110_unregister_notifiers(void) 1532 1532 { 1533 1533 unregister_switchdev_blocking_notifier(&adin1110_switchdev_blocking_notifier); 1534 1534 unregister_switchdev_notifier(&adin1110_switchdev_notifier); 1535 1535 unregister_netdevice_notifier(&adin1110_netdevice_nb); 1536 1536 } 1537 1537 1538 - static int adin1110_setup_notifiers(struct adin1110_priv *priv) 1538 + static int adin1110_setup_notifiers(void) 1539 1539 { 1540 - struct device *dev = &priv->spidev->dev; 1541 1540 int ret; 1542 1541 1543 1542 ret = register_netdevice_notifier(&adin1110_netdevice_nb); ··· 1551 1552 if (ret < 0) 1552 1553 goto err_sdev; 1553 1554 1554 - return devm_add_action_or_reset(dev, adin1110_unregister_notifiers, NULL); 1555 + return 0; 1555 1556 1556 1557 err_sdev: 1557 1558 unregister_switchdev_notifier(&adin1110_switchdev_notifier); 1558 1559 1559 1560 err_netdev: 1560 1561 unregister_netdevice_notifier(&adin1110_netdevice_nb); 1562 + 1561 1563 return ret; 1562 1564 } 1563 1565 ··· 1626 1626 adin1110_irq, 1627 1627 IRQF_TRIGGER_LOW | IRQF_ONESHOT, 1628 1628 dev_name(dev), priv); 1629 - if (ret < 0) 1630 - return ret; 1631 - 1632 - ret = adin1110_setup_notifiers(priv); 1633 1629 if (ret < 0) 1634 1630 return ret; 1635 1631 ··· 1705 1709 .probe = adin1110_probe, 1706 1710 .id_table = adin1110_spi_id, 1707 1711 }; 1708 - module_spi_driver(adin1110_driver); 1712 + 1713 + static int __init adin1110_driver_init(void) 1714 + { 1715 + int ret; 1716 + 1717 + ret = adin1110_setup_notifiers(); 1718 + if (ret < 0) 1719 + return ret; 1720 + 1721 + ret = spi_register_driver(&adin1110_driver); 1722 + if (ret < 0) { 1723 + adin1110_unregister_notifiers(); 1724 + return ret; 1725 + } 1726 + 1727 + return 0; 1728 + } 1729 + 1730 + static void __exit adin1110_exit(void) 1731 + { 1732 + adin1110_unregister_notifiers(); 1733 + spi_unregister_driver(&adin1110_driver); 1734 + } 1735 + module_init(adin1110_driver_init); 1736 + module_exit(adin1110_exit); 1709 1737 1710 1738 MODULE_DESCRIPTION("ADIN1110 Network driver"); 1711 1739 MODULE_AUTHOR("Alexandru Tachici <alexandru.tachici@analog.com>");
+2 -2
drivers/net/ethernet/freescale/fec_main.c
··· 709 709 dev_kfree_skb_any(skb); 710 710 if (net_ratelimit()) 711 711 netdev_err(ndev, "Tx DMA memory map failed\n"); 712 - return NETDEV_TX_BUSY; 712 + return NETDEV_TX_OK; 713 713 } 714 714 715 715 bdp->cbd_datlen = cpu_to_fec16(size); ··· 771 771 dev_kfree_skb_any(skb); 772 772 if (net_ratelimit()) 773 773 netdev_err(ndev, "Tx DMA memory map failed\n"); 774 - return NETDEV_TX_BUSY; 774 + return NETDEV_TX_OK; 775 775 } 776 776 } 777 777
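The return-value change matters because of ndo_start_xmit() ownership rules: NETDEV_TX_BUSY asks the core to requeue the very same skb, so returning it after dev_kfree_skb_any() would make the stack retransmit freed memory. A hedged sketch of the drop path, with the hardware details elided:

#include <linux/if_ether.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                      struct net_device *ndev)
{
        /* skb_put_padto() frees the skb itself when it fails ... */
        if (skb_put_padto(skb, ETH_ZLEN))
                return NETDEV_TX_OK;    /* ... so never return TX_BUSY here */

        /* ... map the buffer, fill a descriptor, ring the doorbell ... */
        return NETDEV_TX_OK;
}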
+8 -8
drivers/net/ethernet/ibm/ibmvnic.c
··· 3007 3007 rwi = get_next_rwi(adapter); 3008 3008 3009 3009 /* 3010 - * If there is another reset queued, free the previous rwi 3011 - * and process the new reset even if previous reset failed 3012 - * (the previous reset could have failed because of a fail 3013 - * over for instance, so process the fail over). 3014 - * 3015 3010 * If there are no resets queued and the previous reset failed, 3016 3011 * the adapter would be in an undefined state. So retry the 3017 3012 * previous reset as a hard reset. 3013 + * 3014 + * Else, free the previous rwi and, if there is another reset 3015 + * queued, process the new reset even if previous reset failed 3016 + * (the previous reset could have failed because of a fail 3017 + * over for instance, so process the fail over). 3018 3018 */ 3019 - if (rwi) 3020 - kfree(tmprwi); 3021 - else if (rc) 3019 + if (!rwi && rc) 3022 3020 rwi = tmprwi; 3021 + else 3022 + kfree(tmprwi); 3023 3023 3024 3024 if (rwi && (rwi->reset_reason == VNIC_RESET_FAILOVER || 3025 3025 rwi->reset_reason == VNIC_RESET_MOBILITY || rc))
+18 -8
drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
··· 414 414 /* Get the received frame and unmap it */ 415 415 db = &rx->dcbs[rx->dcb_index].db[rx->db_index]; 416 416 page = rx->page[rx->dcb_index][rx->db_index]; 417 + 418 + dma_sync_single_for_cpu(lan966x->dev, (dma_addr_t)db->dataptr, 419 + FDMA_DCB_STATUS_BLOCKL(db->status), 420 + DMA_FROM_DEVICE); 421 + 417 422 skb = build_skb(page_address(page), PAGE_SIZE << rx->page_order); 418 423 if (unlikely(!skb)) 419 424 goto unmap_page; 420 425 421 - dma_unmap_single(lan966x->dev, (dma_addr_t)db->dataptr, 422 - FDMA_DCB_STATUS_BLOCKL(db->status), 423 - DMA_FROM_DEVICE); 424 426 skb_put(skb, FDMA_DCB_STATUS_BLOCKL(db->status)); 425 427 426 428 lan966x_ifh_get_src_port(skb->data, &src_port); ··· 430 428 431 429 if (WARN_ON(src_port >= lan966x->num_phys_ports)) 432 430 goto free_skb; 431 + 432 + dma_unmap_single_attrs(lan966x->dev, (dma_addr_t)db->dataptr, 433 + PAGE_SIZE << rx->page_order, DMA_FROM_DEVICE, 434 + DMA_ATTR_SKIP_CPU_SYNC); 433 435 434 436 skb->dev = lan966x->ports[src_port]->dev; 435 437 skb_pull(skb, IFH_LEN * sizeof(u32)); ··· 460 454 free_skb: 461 455 kfree_skb(skb); 462 456 unmap_page: 463 - dma_unmap_page(lan966x->dev, (dma_addr_t)db->dataptr, 464 - FDMA_DCB_STATUS_BLOCKL(db->status), 465 - DMA_FROM_DEVICE); 457 + dma_unmap_single_attrs(lan966x->dev, (dma_addr_t)db->dataptr, 458 + PAGE_SIZE << rx->page_order, DMA_FROM_DEVICE, 459 + DMA_ATTR_SKIP_CPU_SYNC); 466 460 __free_pages(page, rx->page_order); 467 461 468 462 return NULL; ··· 674 668 int i; 675 669 676 670 for (i = 0; i < lan966x->num_phys_ports; ++i) { 671 + struct lan966x_port *port; 677 672 int mtu; 678 673 679 - if (!lan966x->ports[i]) 674 + port = lan966x->ports[i]; 675 + if (!port) 680 676 continue; 681 677 682 - mtu = lan966x->ports[i]->dev->mtu; 678 + mtu = lan_rd(lan966x, DEV_MAC_MAXLEN_CFG(port->chip_port)); 683 679 if (mtu > max_mtu) 684 680 max_mtu = mtu; 685 681 } ··· 741 733 742 734 max_mtu = lan966x_fdma_get_max_mtu(lan966x); 743 735 max_mtu += IFH_LEN * sizeof(u32); 736 + max_mtu += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 737 + max_mtu += VLAN_HLEN * 2; 744 738 745 739 if (round_up(max_mtu, PAGE_SIZE) / PAGE_SIZE - 1 == 746 740 lan966x->rx.page_order)
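The lan966x RX path above splits DMA maintenance in two: a CPU sync covering only the bytes the hardware actually wrote, done before build_skb(), and a later unmap of the whole buffer with DMA_ATTR_SKIP_CPU_SYNC so that sync is not repeated. A condensed sketch of that split (sizes and names illustrative, not the driver's):

#include <linux/dma-mapping.h>

static void example_rx_complete(struct device *dev, dma_addr_t dma,
                                unsigned int frame_len, unsigned int buf_len)
{
        /* Make just the received frame visible to the CPU before parsing. */
        dma_sync_single_for_cpu(dev, dma, frame_len, DMA_FROM_DEVICE);

        /* ... build_skb() / inspect headers here ... */

        /* Unmap the full buffer; the CPU sync above already happened. */
        dma_unmap_single_attrs(dev, dma, buf_len, DMA_FROM_DEVICE,
                               DMA_ATTR_SKIP_CPU_SYNC);
}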
+2 -2
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
··· 386 386 int old_mtu = dev->mtu; 387 387 int err; 388 388 389 - lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(new_mtu), 389 + lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(LAN966X_HW_MTU(new_mtu)), 390 390 lan966x, DEV_MAC_MAXLEN_CFG(port->chip_port)); 391 391 dev->mtu = new_mtu; 392 392 ··· 395 395 396 396 err = lan966x_fdma_change_mtu(lan966x); 397 397 if (err) { 398 - lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(old_mtu), 398 + lan_wr(DEV_MAC_MAXLEN_CFG_MAX_LEN_SET(LAN966X_HW_MTU(old_mtu)), 399 399 lan966x, DEV_MAC_MAXLEN_CFG(port->chip_port)); 400 400 dev->mtu = old_mtu; 401 401 }
+2
drivers/net/ethernet/microchip/lan966x/lan966x_main.h
··· 26 26 #define LAN966X_BUFFER_MEMORY (160 * 1024) 27 27 #define LAN966X_BUFFER_MIN_SZ 60 28 28 29 + #define LAN966X_HW_MTU(mtu) ((mtu) + ETH_HLEN + ETH_FCS_LEN) 30 + 29 31 #define PGID_AGGR 64 30 32 #define PGID_SRC 80 31 33 #define PGID_ENTRIES 89
+15
drivers/net/ethernet/microchip/lan966x/lan966x_regs.h
··· 585 585 #define DEV_MAC_MAXLEN_CFG_MAX_LEN_GET(x)\ 586 586 FIELD_GET(DEV_MAC_MAXLEN_CFG_MAX_LEN, x) 587 587 588 + /* DEV:MAC_CFG_STATUS:MAC_TAGS_CFG */ 589 + #define DEV_MAC_TAGS_CFG(t) __REG(TARGET_DEV, t, 8, 28, 0, 1, 44, 12, 0, 1, 4) 590 + 591 + #define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA BIT(1) 592 + #define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA_SET(x)\ 593 + FIELD_PREP(DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA, x) 594 + #define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA_GET(x)\ 595 + FIELD_GET(DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA, x) 596 + 597 + #define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0) 598 + #define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA_SET(x)\ 599 + FIELD_PREP(DEV_MAC_TAGS_CFG_VLAN_AWR_ENA, x) 600 + #define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA_GET(x)\ 601 + FIELD_GET(DEV_MAC_TAGS_CFG_VLAN_AWR_ENA, x) 602 + 588 603 /* DEV:MAC_CFG_STATUS:MAC_IFG_CFG */ 589 604 #define DEV_MAC_IFG_CFG(t) __REG(TARGET_DEV, t, 8, 28, 0, 1, 44, 20, 0, 1, 4) 590 605
+6
drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
··· 169 169 ANA_VLAN_CFG_VLAN_POP_CNT, 170 170 lan966x, ANA_VLAN_CFG(port->chip_port)); 171 171 172 + lan_rmw(DEV_MAC_TAGS_CFG_VLAN_AWR_ENA_SET(port->vlan_aware) | 173 + DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA_SET(port->vlan_aware), 174 + DEV_MAC_TAGS_CFG_VLAN_AWR_ENA | 175 + DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA, 176 + lan966x, DEV_MAC_TAGS_CFG(port->chip_port)); 177 + 172 178 /* Drop frames with multicast source address */ 173 179 val = ANA_DROP_CFG_DROP_MC_SMAC_ENA_SET(1); 174 180 if (port->vlan_aware && !pvid)
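The new DEV_MAC_TAGS_CFG write is a masked read-modify-write: the SET macros build both the value and the mask from BIT()/FIELD_PREP(), so only the two VLAN-awareness bits change. The contract of a lan_rmw()-style helper, sketched with hypothetical EXAMPLE_* fields:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_VLAN_AWR_ENA     BIT(0)
#define EXAMPLE_VLAN_DBL_AWR_ENA BIT(1)

/* Touch only the bits named in mask; preserve everything else. */
static u32 example_rmw(u32 old, u32 val, u32 mask)
{
        return (old & ~mask) | (val & mask);
}

static u32 example_set_vlan_aware(u32 reg, bool aware)
{
        u32 val = FIELD_PREP(EXAMPLE_VLAN_AWR_ENA, aware) |
                  FIELD_PREP(EXAMPLE_VLAN_DBL_AWR_ENA, aware);

        return example_rmw(reg, val,
                           EXAMPLE_VLAN_AWR_ENA | EXAMPLE_VLAN_DBL_AWR_ENA);
}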
+6 -2
drivers/net/ethernet/sfc/efx.c
··· 1059 1059 1060 1060 /* Allocate and initialise a struct net_device */ 1061 1061 net_dev = alloc_etherdev_mq(sizeof(probe_data), EFX_MAX_CORE_TX_QUEUES); 1062 - if (!net_dev) 1063 - return -ENOMEM; 1062 + if (!net_dev) { 1063 + rc = -ENOMEM; 1064 + goto fail0; 1065 + } 1064 1066 probe_ptr = netdev_priv(net_dev); 1065 1067 *probe_ptr = probe_data; 1066 1068 efx->net_dev = net_dev; ··· 1134 1132 WARN_ON(rc > 0); 1135 1133 netif_dbg(efx, drv, efx->net_dev, "initialisation failed. rc=%d\n", rc); 1136 1134 free_netdev(net_dev); 1135 + fail0: 1136 + kfree(probe_data); 1137 1137 return rc; 1138 1138 } 1139 1139
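The sfc fix adds a fail0 label so the earliest failure (alloc_etherdev_mq() returning NULL) still frees probe_data, which was allocated before it. This is the usual goto ladder, where each label releases exactly what was acquired before the jump; a minimal sketch under assumed allocation sizes:

#include <linux/slab.h>

static int example_probe(void)
{
        void *probe_data, *net_dev;
        int rc;

        probe_data = kzalloc(128, GFP_KERNEL);
        if (!probe_data)
                return -ENOMEM;

        net_dev = kzalloc(256, GFP_KERNEL); /* stand-in for alloc_etherdev_mq() */
        if (!net_dev) {
                rc = -ENOMEM;
                goto fail0;                 /* probe_data must not leak */
        }

        /* ... later steps would add labels that also free net_dev ... */
        return 0;

fail0:
        kfree(probe_data);
        return rc;
}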
+2 -5
drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
··· 51 51 struct stmmac_resources res; 52 52 struct device_node *np; 53 53 int ret, i, phy_mode; 54 - bool mdio = false; 55 54 56 55 np = dev_of_node(&pdev->dev); 57 56 ··· 68 69 if (!plat) 69 70 return -ENOMEM; 70 71 72 + plat->mdio_node = of_get_child_by_name(np, "mdio"); 71 73 if (plat->mdio_node) { 72 - dev_err(&pdev->dev, "Found MDIO subnode\n"); 73 - mdio = true; 74 - } 74 + dev_info(&pdev->dev, "Found MDIO subnode\n"); 75 75 76 - if (mdio) { 77 76 plat->mdio_bus_data = devm_kzalloc(&pdev->dev, 78 77 sizeof(*plat->mdio_bus_data), 79 78 GFP_KERNEL);
+1 -1
drivers/net/ethernet/xilinx/xilinx_emaclite.c
··· 108 108 * @next_tx_buf_to_use: next Tx buffer to write to 109 109 * @next_rx_buf_to_use: next Rx buffer to read from 110 110 * @base_addr: base address of the Emaclite device 111 - * @reset_lock: lock used for synchronization 111 + * @reset_lock: lock to serialize xmit and tx_timeout execution 112 112 * @deferred_skb: holds an skb (for transmission at a later time) when the 113 113 * Tx buffer is not free 114 114 * @phy_dev: pointer to the PHY device
+1 -1
drivers/net/phy/mdio_bus.c
··· 583 583 } 584 584 585 585 for (i = 0; i < PHY_MAX_ADDR; i++) { 586 - if ((bus->phy_mask & (1 << i)) == 0) { 586 + if ((bus->phy_mask & BIT(i)) == 0) { 587 587 struct phy_device *phydev; 588 588 589 589 phydev = mdiobus_scan(bus, i);
+2 -1
drivers/net/tun.c
··· 1459 1459 int err; 1460 1460 int i; 1461 1461 1462 - if (it->nr_segs > MAX_SKB_FRAGS + 1) 1462 + if (it->nr_segs > MAX_SKB_FRAGS + 1 || 1463 + len > (ETH_MAX_MTU - NET_SKB_PAD - NET_IP_ALIGN)) 1463 1464 return ERR_PTR(-EMSGSIZE); 1464 1465 1465 1466 local_bh_disable();
+9 -1
drivers/nfc/fdp/fdp.c
··· 249 249 static int fdp_nci_send(struct nci_dev *ndev, struct sk_buff *skb) 250 250 { 251 251 struct fdp_nci_info *info = nci_get_drvdata(ndev); 252 + int ret; 252 253 253 254 if (atomic_dec_and_test(&info->data_pkt_counter)) 254 255 info->data_pkt_counter_cb(ndev); 255 256 256 - return info->phy_ops->write(info->phy, skb); 257 + ret = info->phy_ops->write(info->phy, skb); 258 + if (ret < 0) { 259 + kfree_skb(skb); 260 + return ret; 261 + } 262 + 263 + consume_skb(skb); 264 + return 0; 257 265 } 258 266 259 267 static int fdp_nci_request_firmware(struct nci_dev *ndev)
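This NFC fix, and the nfcmrvl, nxp-nci and s3fwrn5 hunks that follow, all enforce the same sk_buff ownership rule: the send path frees the skb exactly once, with kfree_skb() on error (which registers as a drop for tracing and drop monitors) and consume_skb() on success (which does not). A sketch of the rule, where example_phy_write() is a hypothetical stub that does not free the skb:

#include <linux/skbuff.h>

static int example_phy_write(struct sk_buff *skb)
{
        return 0;       /* hypothetical transport; never frees the skb */
}

static int example_send(struct sk_buff *skb)
{
        int ret = example_phy_write(skb);

        if (ret < 0) {
                kfree_skb(skb);         /* error: visible to drop monitors */
                return ret;
        }

        consume_skb(skb);               /* success: silent, non-drop free */
        return 0;
}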
+7 -2
drivers/nfc/nfcmrvl/i2c.c
··· 132 132 ret = -EREMOTEIO; 133 133 } else 134 134 ret = 0; 135 - kfree_skb(skb); 136 135 } 137 136 138 - return ret; 137 + if (ret) { 138 + kfree_skb(skb); 139 + return ret; 140 + } 141 + 142 + consume_skb(skb); 143 + return 0; 139 144 } 140 145 141 146 static void nfcmrvl_i2c_nci_update_config(struct nfcmrvl_private *priv,
+5 -2
drivers/nfc/nxp-nci/core.c
··· 80 80 return -EINVAL; 81 81 82 82 r = info->phy_ops->write(info->phy_id, skb); 83 - if (r < 0) 83 + if (r < 0) { 84 84 kfree_skb(skb); 85 + return r; 86 + } 85 87 86 - return r; 88 + consume_skb(skb); 89 + return 0; 87 90 } 88 91 89 92 static int nxp_nci_rf_pll_unlocked_ntf(struct nci_dev *ndev,
+6 -2
drivers/nfc/s3fwrn5/core.c
··· 110 110 } 111 111 112 112 ret = s3fwrn5_write(info, skb); 113 - if (ret < 0) 113 + if (ret < 0) { 114 114 kfree_skb(skb); 115 + mutex_unlock(&info->mutex); 116 + return ret; 117 + } 115 118 119 + consume_skb(skb); 116 120 mutex_unlock(&info->mutex); 117 - return ret; 121 + return 0; 118 122 } 119 123 120 124 static int s3fwrn5_nci_post_setup(struct nci_dev *ndev)
+1
drivers/nvme/host/multipath.c
··· 516 516 /* set to a default value of 512 until the disk is validated */ 517 517 blk_queue_logical_block_size(head->disk->queue, 512); 518 518 blk_set_stacking_limits(&head->disk->queue->limits); 519 + blk_queue_dma_alignment(head->disk->queue, 3); 519 520 520 521 /* we need to propagate up the VMC settings */ 521 522 if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
+11 -2
drivers/nvme/host/tcp.c
··· 387 387 { 388 388 struct scatterlist sg; 389 389 390 - sg_init_marker(&sg, 1); 390 + sg_init_table(&sg, 1); 391 391 sg_set_page(&sg, page, len, off); 392 392 ahash_request_set_crypt(hash, &sg, NULL, len); 393 393 crypto_ahash_update(hash); ··· 1141 1141 static int nvme_tcp_try_send(struct nvme_tcp_queue *queue) 1142 1142 { 1143 1143 struct nvme_tcp_request *req; 1144 + unsigned int noreclaim_flag; 1144 1145 int ret = 1; 1145 1146 1146 1147 if (!queue->request) { ··· 1151 1150 } 1152 1151 req = queue->request; 1153 1152 1153 + noreclaim_flag = memalloc_noreclaim_save(); 1154 1154 if (req->state == NVME_TCP_SEND_CMD_PDU) { 1155 1155 ret = nvme_tcp_try_send_cmd_pdu(req); 1156 1156 if (ret <= 0) 1157 1157 goto done; 1158 1158 if (!nvme_tcp_has_inline_data(req)) 1159 - return ret; 1159 + goto out; 1160 1160 } 1161 1161 1162 1162 if (req->state == NVME_TCP_SEND_H2C_PDU) { ··· 1183 1181 nvme_tcp_fail_request(queue->request); 1184 1182 nvme_tcp_done_send_req(queue); 1185 1183 } 1184 + out: 1185 + memalloc_noreclaim_restore(noreclaim_flag); 1186 1186 return ret; 1187 1187 } 1188 1188 ··· 1300 1296 struct page *page; 1301 1297 struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); 1302 1298 struct nvme_tcp_queue *queue = &ctrl->queues[qid]; 1299 + unsigned int noreclaim_flag; 1303 1300 1304 1301 if (!test_and_clear_bit(NVME_TCP_Q_ALLOCATED, &queue->flags)) 1305 1302 return; ··· 1313 1308 __page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias); 1314 1309 queue->pf_cache.va = NULL; 1315 1310 } 1311 + 1312 + noreclaim_flag = memalloc_noreclaim_save(); 1316 1313 sock_release(queue->sock); 1314 + memalloc_noreclaim_restore(noreclaim_flag); 1315 + 1317 1316 kfree(queue->pdu); 1318 1317 mutex_destroy(&queue->send_mutex); 1319 1318 mutex_destroy(&queue->queue_lock);
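The nvme-tcp hunks wrap the socket send path and sock_release() in memalloc_noreclaim_save()/restore(). That marks the task PF_MEMALLOC, so allocations made underneath will not enter direct reclaim, which could otherwise recurse back into this same I/O path under memory pressure. The scoping idiom, sketched:

#include <linux/sched/mm.h>

static void example_send_work(void)
{
        unsigned int noreclaim_flag;

        noreclaim_flag = memalloc_noreclaim_save();

        /* ... socket sends / sock_release() that may allocate ... */

        memalloc_noreclaim_restore(noreclaim_flag);
}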
+1
drivers/parisc/iosapic.c
··· 866 866 867 867 return vi->txn_irq; 868 868 } 869 + EXPORT_SYMBOL(iosapic_serial_irq); 869 870 #endif 870 871 871 872
+17 -17
drivers/parisc/pdc_stable.c
··· 14 14 * all) PA-RISC machines should have them. Anyway, for safety reasons, the 15 15 * following code can deal with just 96 bytes of Stable Storage, and all 16 16 * sizes between 96 and 192 bytes (provided they are multiple of struct 17 - * device_path size, eg: 128, 160 and 192) to provide full information. 17 + * pdc_module_path size, eg: 128, 160 and 192) to provide full information. 18 18 * One last word: there's one path we can always count on: the primary path. 19 19 * Anything above 224 bytes is used for 'osdep2' OS-dependent storage area. 20 20 * ··· 88 88 short ready; /* entry record is valid if != 0 */ 89 89 unsigned long addr; /* entry address in stable storage */ 90 90 char *name; /* entry name */ 91 - struct device_path devpath; /* device path in parisc representation */ 91 + struct pdc_module_path devpath; /* device path in parisc representation */ 92 92 struct device *dev; /* corresponding device */ 93 93 struct kobject kobj; 94 94 }; ··· 138 138 static int 139 139 pdcspath_fetch(struct pdcspath_entry *entry) 140 140 { 141 - struct device_path *devpath; 141 + struct pdc_module_path *devpath; 142 142 143 143 if (!entry) 144 144 return -EINVAL; ··· 153 153 return -EIO; 154 154 155 155 /* Find the matching device. 156 - NOTE: hardware_path overlays with device_path, so the nice cast can 156 + NOTE: hardware_path overlays with pdc_module_path, so the nice cast can 157 157 be used */ 158 158 entry->dev = hwpath_to_device((struct hardware_path *)devpath); 159 159 ··· 179 179 static void 180 180 pdcspath_store(struct pdcspath_entry *entry) 181 181 { 182 - struct device_path *devpath; 182 + struct pdc_module_path *devpath; 183 183 184 184 BUG_ON(!entry); 185 185 ··· 221 221 pdcspath_hwpath_read(struct pdcspath_entry *entry, char *buf) 222 222 { 223 223 char *out = buf; 224 - struct device_path *devpath; 224 + struct pdc_module_path *devpath; 225 225 short i; 226 226 227 227 if (!entry || !buf) ··· 236 236 return -ENODATA; 237 237 238 238 for (i = 0; i < 6; i++) { 239 - if (devpath->bc[i] >= 128) 239 + if (devpath->path.bc[i] < 0) 240 240 continue; 241 - out += sprintf(out, "%u/", (unsigned char)devpath->bc[i]); 241 + out += sprintf(out, "%d/", devpath->path.bc[i]); 242 242 } 243 - out += sprintf(out, "%u\n", (unsigned char)devpath->mod); 243 + out += sprintf(out, "%u\n", (unsigned char)devpath->path.mod); 244 244 245 245 return out - buf; 246 246 } ··· 296 296 for (i=5; ((temp = strrchr(in, '/'))) && (temp-in > 0) && (likely(i)); i--) { 297 297 hwpath.bc[i] = simple_strtoul(temp+1, NULL, 10); 298 298 in[temp-in] = '\0'; 299 - DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.bc[i]); 299 + DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.path.bc[i]); 300 300 } 301 301 302 302 /* Store the final field */ 303 303 hwpath.bc[i] = simple_strtoul(in, NULL, 10); 304 - DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.bc[i]); 304 + DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.path.bc[i]); 305 305 306 306 /* Now we check that the user isn't trying to lure us */ 307 307 if (!(dev = hwpath_to_device((struct hardware_path *)&hwpath))) { ··· 342 342 pdcspath_layer_read(struct pdcspath_entry *entry, char *buf) 343 343 { 344 344 char *out = buf; 345 - struct device_path *devpath; 345 + struct pdc_module_path *devpath; 346 346 short i; 347 347 348 348 if (!entry || !buf) ··· 547 547 pathentry = &pdcspath_entry_primary; 548 548 549 549 read_lock(&pathentry->rw_lock); 550 - out += sprintf(out, "%s\n", (pathentry->devpath.flags & knob) ? 550 + out += sprintf(out, "%s\n", (pathentry->devpath.path.flags & knob) ? 551 551 "On" : "Off"); 552 552 read_unlock(&pathentry->rw_lock); 553 553 ··· 594 594 595 595 /* print the timer value in seconds */ 596 596 read_lock(&pathentry->rw_lock); 597 - out += sprintf(out, "%u\n", (pathentry->devpath.flags & PF_TIMER) ? 598 - (1 << (pathentry->devpath.flags & PF_TIMER)) : 0); 597 + out += sprintf(out, "%u\n", (pathentry->devpath.path.flags & PF_TIMER) ? 598 + (1 << (pathentry->devpath.path.flags & PF_TIMER)) : 0); 599 599 read_unlock(&pathentry->rw_lock); 600 600 601 601 return out - buf; ··· 764 764 765 765 /* Be nice to the existing flag record */ 766 766 read_lock(&pathentry->rw_lock); 767 - flags = pathentry->devpath.flags; 767 + flags = pathentry->devpath.path.flags; 768 768 read_unlock(&pathentry->rw_lock); 769 769 770 770 DPRINTK("%s: flags before: 0x%X\n", __func__, flags); ··· 785 785 write_lock(&pathentry->rw_lock); 786 786 787 787 /* Change the path entry flags first */ 788 - pathentry->devpath.flags = flags; 788 + pathentry->devpath.path.flags = flags; 789 789 790 790 /* Now, dive in. Write back to the hardware */ 791 791 pdcspath_store(pathentry);
+14 -10
drivers/platform/loongarch/loongson-laptop.c
··· 199 199 struct key_entry ke; 200 200 struct backlight_device *bd; 201 201 202 + bd = backlight_device_get_by_type(BACKLIGHT_PLATFORM); 203 + if (bd) { 204 + loongson_laptop_backlight_update(bd) ? 205 + pr_warn("Loongson_backlight: resume brightness failed") : 206 + pr_info("Loongson_backlight: resume brightness %d\n", bd->props.brightness); 207 + } 208 + 202 209 /* 203 210 * Only if the firmware supports SW_LID event model, we can handle the 204 211 * event. This is for the consideration of development board without EC. ··· 233 226 ke.sw.code = SW_LID; 234 227 sparse_keymap_report_entry(generic_inputdev, &ke, 1, true); 235 228 } 236 - } 237 - 238 - bd = backlight_device_get_by_type(BACKLIGHT_PLATFORM); 239 - if (bd) { 240 - loongson_laptop_backlight_update(bd) ? 241 - pr_warn("Loongson_backlight: resume brightness failed") : 242 - pr_info("Loongson_backlight: resume brightness %d\n", bd->props.brightness); 243 229 } 244 230 245 231 return 0; ··· 448 448 if (ret < 0) { 449 449 pr_err("Failed to setup input device keymap\n"); 450 450 input_free_device(generic_inputdev); 451 + generic_inputdev = NULL; 451 452 452 453 return ret; 453 454 } ··· 503 502 if (ret) 504 503 return -EINVAL; 505 504 506 - if (sub_driver->init) 507 - sub_driver->init(sub_driver); 505 + if (sub_driver->init) { 506 + ret = sub_driver->init(sub_driver); 507 + if (ret) 508 + goto err_out; 509 + } 508 510 509 511 if (sub_driver->notify) { 510 512 ret = setup_acpi_notify(sub_driver); ··· 523 519 524 520 err_out: 525 521 generic_subdriver_exit(sub_driver); 526 - return (ret < 0) ? ret : 0; 522 + return ret; 527 523 } 528 524 529 525 static void generic_subdriver_exit(struct generic_sub_driver *sub_driver)
+11 -3
drivers/rtc/rtc-cmos.c
··· 1233 1233 1234 1234 static inline void rtc_wake_setup(struct device *dev) 1235 1235 { 1236 + if (acpi_disabled) 1237 + return; 1238 + 1236 1239 acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, dev); 1237 1240 /* 1238 1241 * After the RTC handler is installed, the Fixed_RTC event should ··· 1289 1286 1290 1287 use_acpi_alarm_quirks(); 1291 1288 1292 - rtc_wake_setup(dev); 1293 1289 acpi_rtc_info.wake_on = rtc_wake_on; 1294 1290 acpi_rtc_info.wake_off = rtc_wake_off; 1295 1291 ··· 1346 1344 { 1347 1345 } 1348 1346 1347 + static void rtc_wake_setup(struct device *dev) 1348 + { 1349 + } 1349 1350 #endif 1350 1351 1351 1352 #ifdef CONFIG_PNP ··· 1358 1353 static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id) 1359 1354 { 1360 1355 int irq, ret; 1356 + 1357 + cmos_wake_setup(&pnp->dev); 1361 1358 1362 1359 if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) { 1363 1360 irq = 0; ··· 1379 1372 if (ret) 1380 1373 return ret; 1381 1374 1382 - cmos_wake_setup(&pnp->dev); 1375 + rtc_wake_setup(&pnp->dev); 1383 1376 1384 1377 return 0; 1385 1378 } ··· 1468 1461 int irq, ret; 1469 1462 1470 1463 cmos_of_init(pdev); 1464 + cmos_wake_setup(&pdev->dev); 1471 1465 1472 1466 if (RTC_IOMAPPED) 1473 1467 resource = platform_get_resource(pdev, IORESOURCE_IO, 0); ··· 1482 1474 if (ret) 1483 1475 return ret; 1484 1476 1485 - cmos_wake_setup(&pdev->dev); 1477 + rtc_wake_setup(&pdev->dev); 1486 1478 1487 1479 return 0; 1488 1480 }
+2 -6
drivers/s390/cio/css.c
··· 753 753 { 754 754 struct idset *set = data; 755 755 struct subchannel *sch = to_subchannel(dev); 756 - struct ccw_device *cdev; 757 756 758 - if (sch->st == SUBCHANNEL_TYPE_IO) { 759 - cdev = sch_get_cdev(sch); 760 - if (cdev && cdev->online) 761 - idset_sch_del(set, sch->schid); 762 - } 757 + if (sch->st == SUBCHANNEL_TYPE_IO && sch->config.ena) 758 + idset_sch_del(set, sch->schid); 763 759 764 760 return 0; 765 761 }
+1 -1
drivers/s390/crypto/vfio_ap_private.h
··· 52 52 struct mutex guests_lock; /* serializes access to each KVM guest */ 53 53 struct mdev_parent parent; 54 54 struct mdev_type mdev_type; 55 - struct mdev_type *mdev_types[]; 55 + struct mdev_type *mdev_types[1]; 56 56 }; 57 57 58 58 extern struct ap_matrix_dev *matrix_dev;
+2 -2
drivers/scsi/lpfc/lpfc_bsg.c
··· 2582 2582 * 2583 2583 * This function obtains the transmit and receive ids required to send 2584 2584 * an unsolicited ct command with a payload. A special lpfc FsType and CmdRsp 2585 - * flags are used to the unsolicted response handler is able to process 2585 + * flags are used to the unsolicited response handler is able to process 2586 2586 * the ct command sent on the same port. 2587 2587 **/ 2588 2588 static int lpfcdiag_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi, ··· 2874 2874 * @len: Number of data bytes 2875 2875 * 2876 2876 * This function allocates and posts a data buffer of sufficient size to receive 2877 - * an unsolicted CT command. 2877 + * an unsolicited CT command. 2878 2878 **/ 2879 2879 static int lpfcdiag_sli3_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri, 2880 2880 size_t len)
+1 -1
drivers/scsi/lpfc/lpfc_ct.c
··· 90 90 get_job_ulpstatus(phba, piocbq)); 91 91 } 92 92 lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 93 - "0145 Ignoring unsolicted CT HBQ Size:%d " 93 + "0145 Ignoring unsolicited CT HBQ Size:%d " 94 94 "status = x%x\n", 95 95 size, get_job_ulpstatus(phba, piocbq)); 96 96 }
+8 -19
drivers/scsi/megaraid/megaraid_sas_base.c
··· 5874 5874 static 5875 5875 int megasas_get_device_list(struct megasas_instance *instance) 5876 5876 { 5877 - memset(instance->pd_list, 0, 5878 - (MEGASAS_MAX_PD * sizeof(struct megasas_pd_list))); 5879 - memset(instance->ld_ids, 0xff, MEGASAS_MAX_LD_IDS); 5880 - 5881 5877 if (instance->enable_fw_dev_list) { 5882 5878 if (megasas_host_device_list_query(instance, true)) 5883 5879 return FAILED; ··· 7216 7220 7217 7221 if (!fusion->ioc_init_request) { 7218 7222 dev_err(&pdev->dev, 7219 - "Failed to allocate PD list buffer\n"); 7223 + "Failed to allocate ioc init request\n"); 7220 7224 return -ENOMEM; 7221 7225 } 7222 7226 ··· 7435 7439 (instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0071SKINNY)) 7436 7440 instance->flag_ieee = 1; 7437 7441 7438 - megasas_dbg_lvl = 0; 7439 7442 instance->flag = 0; 7440 7443 instance->unload = 1; 7441 7444 instance->last_time = 0; ··· 8757 8762 int megasas_update_device_list(struct megasas_instance *instance, 8758 8763 int event_type) 8759 8764 { 8760 - int dcmd_ret = DCMD_SUCCESS; 8765 + int dcmd_ret; 8761 8766 8762 8767 if (instance->enable_fw_dev_list) { 8763 - dcmd_ret = megasas_host_device_list_query(instance, false); 8764 - if (dcmd_ret != DCMD_SUCCESS) 8765 - goto out; 8768 + return megasas_host_device_list_query(instance, false); 8766 8769 } else { 8767 8770 if (event_type & SCAN_PD_CHANNEL) { 8768 8771 dcmd_ret = megasas_get_pd_list(instance); 8769 - 8770 8772 if (dcmd_ret != DCMD_SUCCESS) 8771 - goto out; 8773 + return dcmd_ret; 8772 8774 } 8773 8775 8774 8776 if (event_type & SCAN_VD_CHANNEL) { 8775 8777 if (!instance->requestorId || 8776 8778 megasas_get_ld_vf_affiliation(instance, 0)) { 8777 - dcmd_ret = megasas_ld_list_query(instance, 8779 + return megasas_ld_list_query(instance, 8778 8780 MR_LD_QUERY_TYPE_EXPOSED_TO_HOST); 8779 - if (dcmd_ret != DCMD_SUCCESS) 8780 - goto out; 8781 8781 } 8782 8782 } 8783 8783 } 8784 - 8785 - out: 8786 - return dcmd_ret; 8784 + return DCMD_SUCCESS; 8787 8785 } 8788 8786 8789 8787 /** ··· 8906 8918 sdev1 = scsi_device_lookup(instance->host, 8907 8919 MEGASAS_MAX_PD_CHANNELS + 8908 8920 (ld_target_id / MEGASAS_MAX_DEV_PER_CHANNEL), 8909 - (ld_target_id - MEGASAS_MAX_DEV_PER_CHANNEL), 8921 + (ld_target_id % MEGASAS_MAX_DEV_PER_CHANNEL), 8910 8922 0); 8911 8923 if (sdev1) 8912 8924 megasas_remove_scsi_device(sdev1); ··· 9004 9016 */ 9005 9017 pr_info("megasas: %s\n", MEGASAS_VERSION); 9006 9018 9019 + megasas_dbg_lvl = 0; 9007 9020 support_poll_for_event = 2; 9008 9021 support_device_change = 1; 9009 9022 support_nvme_encapsulation = true;
+1
drivers/scsi/mpi3mr/Kconfig
··· 4 4 tristate "Broadcom MPI3 Storage Controller Device Driver" 5 5 depends on PCI && SCSI 6 6 select BLK_DEV_BSGLIB 7 + select SCSI_SAS_ATTRS 7 8 help 8 9 MPI3 based Storage & RAID Controllers Driver.
+1
drivers/scsi/pm8001/pm8001_init.c
··· 99 99 static struct scsi_host_template pm8001_sht = { 100 100 .module = THIS_MODULE, 101 101 .name = DRV_NAME, 102 + .proc_name = DRV_NAME, 102 103 .queuecommand = sas_queuecommand, 103 104 .dma_need_drain = ata_scsi_dma_need_drain, 104 105 .target_alloc = sas_target_alloc,
+27 -3
drivers/scsi/qla2xxx/qla_attr.c
··· 951 951 if (!capable(CAP_SYS_ADMIN) || off != 0 || count > DCBX_TLV_DATA_SIZE) 952 952 return 0; 953 953 954 + mutex_lock(&vha->hw->optrom_mutex); 954 955 if (ha->dcbx_tlv) 955 956 goto do_read; 956 - mutex_lock(&vha->hw->optrom_mutex); 957 957 if (qla2x00_chip_is_down(vha)) { 958 958 mutex_unlock(&vha->hw->optrom_mutex); 959 959 return 0; ··· 3330 3330 .bsg_timeout = qla24xx_bsg_timeout, 3331 3331 }; 3332 3332 3333 + static uint 3334 + qla2x00_get_host_supported_speeds(scsi_qla_host_t *vha, uint speeds) 3335 + { 3336 + uint supported_speeds = FC_PORTSPEED_UNKNOWN; 3337 + 3338 + if (speeds & FDMI_PORT_SPEED_64GB) 3339 + supported_speeds |= FC_PORTSPEED_64GBIT; 3340 + if (speeds & FDMI_PORT_SPEED_32GB) 3341 + supported_speeds |= FC_PORTSPEED_32GBIT; 3342 + if (speeds & FDMI_PORT_SPEED_16GB) 3343 + supported_speeds |= FC_PORTSPEED_16GBIT; 3344 + if (speeds & FDMI_PORT_SPEED_8GB) 3345 + supported_speeds |= FC_PORTSPEED_8GBIT; 3346 + if (speeds & FDMI_PORT_SPEED_4GB) 3347 + supported_speeds |= FC_PORTSPEED_4GBIT; 3348 + if (speeds & FDMI_PORT_SPEED_2GB) 3349 + supported_speeds |= FC_PORTSPEED_2GBIT; 3350 + if (speeds & FDMI_PORT_SPEED_1GB) 3351 + supported_speeds |= FC_PORTSPEED_1GBIT; 3352 + 3353 + return supported_speeds; 3354 + } 3355 + 3333 3356 void 3334 3357 qla2x00_init_host_attr(scsi_qla_host_t *vha) 3335 3358 { 3336 3359 struct qla_hw_data *ha = vha->hw; 3337 - u32 speeds = FC_PORTSPEED_UNKNOWN; 3360 + u32 speeds = 0, fdmi_speed = 0; 3338 3361 3339 3362 fc_host_dev_loss_tmo(vha->host) = ha->port_down_retry_count; 3340 3363 fc_host_node_name(vha->host) = wwn_to_u64(vha->node_name); ··· 3367 3344 fc_host_max_npiv_vports(vha->host) = ha->max_npiv_vports; 3368 3345 fc_host_npiv_vports_inuse(vha->host) = ha->cur_vport_count; 3369 3346 3370 - speeds = qla25xx_fdmi_port_speed_capability(ha); 3347 + fdmi_speed = qla25xx_fdmi_port_speed_capability(ha); 3348 + speeds = qla2x00_get_host_supported_speeds(vha, fdmi_speed); 3371 3349 3372 3350 fc_host_supported_speeds(vha->host) = speeds; 3373 3351 }
+19
drivers/target/target_core_device.c
··· 284 284 complete(&deve->pr_comp); 285 285 } 286 286 287 + /* 288 + * Establish UA condition on SCSI device - all LUNs 289 + */ 290 + void target_dev_ua_allocate(struct se_device *dev, u8 asc, u8 ascq) 291 + { 292 + struct se_dev_entry *se_deve; 293 + struct se_lun *lun; 294 + 295 + spin_lock(&dev->se_port_lock); 296 + list_for_each_entry(lun, &dev->dev_sep_list, lun_dev_link) { 297 + 298 + spin_lock(&lun->lun_deve_lock); 299 + list_for_each_entry(se_deve, &lun->lun_deve_list, lun_link) 300 + core_scsi3_ua_allocate(se_deve, asc, ascq); 301 + spin_unlock(&lun->lun_deve_lock); 302 + } 303 + spin_unlock(&dev->se_port_lock); 304 + } 305 + 287 306 static void 288 307 target_luns_data_has_changed(struct se_node_acl *nacl, struct se_dev_entry *new, 289 308 bool skip_new)
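target_dev_ua_allocate() walks two levels of lists, each stabilised by its own spinlock, so the unit attention reaches every registered I_T nexus on every LUN of the device. The traversal shape, reduced to hypothetical example_* types:

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_lun {
        struct list_head link;          /* node on the device's LUN list */
        struct list_head entries;       /* this LUN's registrations */
        spinlock_t lock;                /* guards 'entries' */
};

static void example_visit_all(struct list_head *luns, spinlock_t *dev_lock)
{
        struct example_lun *lun;

        spin_lock(dev_lock);                    /* pins the LUN list */
        list_for_each_entry(lun, luns, link) {
                spin_lock(&lun->lock);          /* pins this LUN's entries */
                /* ... allocate the UA for each entry here ... */
                spin_unlock(&lun->lock);
        }
        spin_unlock(dev_lock);
}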
+4 -15
drivers/target/target_core_iblock.c
··· 230 230 clear_bit(IBD_PLUGF_PLUGGED, &ib_dev_plug->flags); 231 231 } 232 232 233 - static unsigned long long iblock_emulate_read_cap_with_block_size( 234 - struct se_device *dev, 235 - struct block_device *bd, 236 - struct request_queue *q) 233 + static sector_t iblock_get_blocks(struct se_device *dev) 237 234 { 238 - u32 block_size = bdev_logical_block_size(bd); 235 + struct iblock_dev *ib_dev = IBLOCK_DEV(dev); 236 + u32 block_size = bdev_logical_block_size(ib_dev->ibd_bd); 239 237 unsigned long long blocks_long = 240 - div_u64(bdev_nr_bytes(bd), block_size) - 1; 238 + div_u64(bdev_nr_bytes(ib_dev->ibd_bd), block_size) - 1; 241 239 242 240 if (block_size == dev->dev_attrib.block_size) 243 241 return blocks_long; ··· 825 827 kfree(ibr); 826 828 fail: 827 829 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 828 - } 829 - 830 - static sector_t iblock_get_blocks(struct se_device *dev) 831 - { 832 - struct iblock_dev *ib_dev = IBLOCK_DEV(dev); 833 - struct block_device *bd = ib_dev->ibd_bd; 834 - struct request_queue *q = bdev_get_queue(bd); 835 - 836 - return iblock_emulate_read_cap_with_block_size(dev, bd, q); 837 830 } 838 831 839 832 static sector_t iblock_get_alignment_offset_lbas(struct se_device *dev)
+1
drivers/target/target_core_internal.h
··· 89 89 void target_free_device(struct se_device *); 90 90 int target_for_each_device(int (*fn)(struct se_device *dev, void *data), 91 91 void *data); 92 + void target_dev_ua_allocate(struct se_device *dev, u8 asc, u8 ascq); 92 93 93 94 /* target_core_configfs.c */ 94 95 extern struct configfs_item_operations target_core_dev_item_ops;
+26 -7
drivers/target/target_core_pr.c
··· 2956 2956 __core_scsi3_complete_pro_preempt(dev, pr_reg_n, 2957 2957 (preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : NULL, 2958 2958 type, scope, preempt_type); 2959 - 2960 - if (preempt_type == PREEMPT_AND_ABORT) 2961 - core_scsi3_release_preempt_and_abort( 2962 - &preempt_and_abort_list, pr_reg_n); 2963 2959 } 2960 + 2964 2961 spin_unlock(&dev->dev_reservation_lock); 2962 + 2963 + /* 2964 + * SPC-4 5.12.11.2.6 Preempting and aborting 2965 + * The actions described in this subclause shall be performed 2966 + * for all I_T nexuses that are registered with the non-zero 2967 + * SERVICE ACTION RESERVATION KEY value, without regard for 2968 + * whether the preempted I_T nexuses hold the persistent 2969 + * reservation. If the SERVICE ACTION RESERVATION KEY field is 2970 + * set to zero and an all registrants persistent reservation is 2971 + * present, the device server shall abort all commands for all 2972 + * registered I_T nexuses. 2973 + */ 2974 + if (preempt_type == PREEMPT_AND_ABORT) { 2975 + core_tmr_lun_reset(dev, NULL, &preempt_and_abort_list, 2976 + cmd); 2977 + core_scsi3_release_preempt_and_abort( 2978 + &preempt_and_abort_list, pr_reg_n); 2979 + } 2965 2980 2966 2981 if (pr_tmpl->pr_aptpl_active) 2967 2982 core_scsi3_update_and_write_aptpl(cmd->se_dev, true); ··· 3037 3022 if (calling_it_nexus) 3038 3023 continue; 3039 3024 3040 - if (pr_reg->pr_res_key != sa_res_key) 3025 + if (sa_res_key && pr_reg->pr_res_key != sa_res_key) 3041 3026 continue; 3042 3027 3043 3028 pr_reg_nacl = pr_reg->pr_reg_nacl; ··· 3440 3425 * transport protocols where port names are not required; 3441 3426 * d) Register the reservation key specified in the SERVICE ACTION 3442 3427 * RESERVATION KEY field; 3443 - * e) Retain the reservation key specified in the SERVICE ACTION 3444 - * RESERVATION KEY field and associated information; 3445 3428 * 3446 3429 * Also, It is not an error for a REGISTER AND MOVE service action to 3447 3430 * register an I_T nexus that is already registered with the same ··· 3461 3448 dest_pr_reg = __core_scsi3_locate_pr_reg(dev, dest_node_acl, 3462 3449 iport_ptr); 3463 3450 new_reg = 1; 3451 + } else { 3452 + /* 3453 + * e) Retain the reservation key specified in the SERVICE ACTION 3454 + * RESERVATION KEY field and associated information; 3455 + */ 3456 + dest_pr_reg->pr_res_key = sa_res_key; 3464 3457 } 3465 3458 /* 3466 3459 * f) Release the persistent reservation for the persistent reservation
+1 -2
drivers/target/target_core_transport.c
··· 3531 3531 tmr->response = (!ret) ? TMR_FUNCTION_COMPLETE : 3532 3532 TMR_FUNCTION_REJECTED; 3533 3533 if (tmr->response == TMR_FUNCTION_COMPLETE) { 3534 - target_ua_allocate_lun(cmd->se_sess->se_node_acl, 3535 - cmd->orig_fe_lun, 0x29, 3534 + target_dev_ua_allocate(dev, 0x29, 3536 3535 ASCQ_29H_BUS_DEVICE_RESET_FUNCTION_OCCURRED); 3537 3536 } 3538 3537 break;
drivers/tty/serial/8250/8250_gsc.c → drivers/tty/serial/8250/8250_parisc.c
+2 -2
drivers/tty/serial/8250/Kconfig
··· 116 116 117 117 If unsure, say N. 118 118 119 - config SERIAL_8250_GSC 119 + config SERIAL_8250_PARISC 120 120 tristate 121 - depends on SERIAL_8250 && GSC 121 + depends on SERIAL_8250 && PARISC 122 122 default SERIAL_8250 123 123 124 124 config SERIAL_8250_DMA
+1 -1
drivers/tty/serial/8250/Makefile
··· 12 12 8250_base-$(CONFIG_SERIAL_8250_DMA) += 8250_dma.o 13 13 8250_base-$(CONFIG_SERIAL_8250_DWLIB) += 8250_dwlib.o 14 14 8250_base-$(CONFIG_SERIAL_8250_FINTEK) += 8250_fintek.o 15 - obj-$(CONFIG_SERIAL_8250_GSC) += 8250_gsc.o 15 + obj-$(CONFIG_SERIAL_8250_PARISC) += 8250_parisc.o 16 16 obj-$(CONFIG_SERIAL_8250_PCI) += 8250_pci.o 17 17 obj-$(CONFIG_SERIAL_8250_EXAR) += 8250_exar.o 18 18 obj-$(CONFIG_SERIAL_8250_HP300) += 8250_hp300.o
+2 -2
drivers/ufs/core/ufshcd.c
··· 772 772 } 773 773 774 774 /** 775 - * ufshcd_utmrl_clear - Clear a bit in UTRMLCLR register 775 + * ufshcd_utmrl_clear - Clear a bit in UTMRLCLR register 776 776 * @hba: per adapter instance 777 777 * @pos: position of the bit to be cleared 778 778 */ ··· 3098 3098 3099 3099 if (ret) 3100 3100 dev_err(hba->dev, 3101 - "%s: query attribute, opcode %d, idn %d, failed with error %d after %d retries\n", 3101 + "%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n", 3102 3102 __func__, opcode, idn, ret, retries); 3103 3103 return ret; 3104 3104 }
+3 -3
drivers/ufs/core/ufshpb.c
··· 383 383 rgn = hpb->rgn_tbl + rgn_idx; 384 384 srgn = rgn->srgn_tbl + srgn_idx; 385 385 386 - /* If command type is WRITE or DISCARD, set bitmap as drity */ 386 + /* If command type is WRITE or DISCARD, set bitmap as dirty */ 387 387 if (ufshpb_is_write_or_discard(cmd)) { 388 388 ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset, 389 389 transfer_len, true); ··· 616 616 static enum rq_end_io_ret ufshpb_umap_req_compl_fn(struct request *req, 617 617 blk_status_t error) 618 618 { 619 - struct ufshpb_req *umap_req = (struct ufshpb_req *)req->end_io_data; 619 + struct ufshpb_req *umap_req = req->end_io_data; 620 620 621 621 ufshpb_put_req(umap_req->hpb, umap_req); 622 622 return RQ_END_IO_NONE; ··· 625 625 static enum rq_end_io_ret ufshpb_map_req_compl_fn(struct request *req, 626 626 blk_status_t error) 627 627 { 628 - struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data; 628 + struct ufshpb_req *map_req = req->end_io_data; 629 629 struct ufshpb_lu *hpb = map_req->hpb; 630 630 struct ufshpb_subregion *srgn; 631 631 unsigned long flags;
-1
drivers/ufs/host/ufs-qcom-ice.c
··· 118 118 host->ice_mmio = devm_ioremap_resource(dev, res); 119 119 if (IS_ERR(host->ice_mmio)) { 120 120 err = PTR_ERR(host->ice_mmio); 121 - dev_err(dev, "Failed to map ICE registers; err=%d\n", err); 122 121 return err; 123 122 } 124 123
+48 -1
drivers/usb/dwc3/core.c
··· 23 23 #include <linux/delay.h> 24 24 #include <linux/dma-mapping.h> 25 25 #include <linux/of.h> 26 + #include <linux/of_graph.h> 26 27 #include <linux/acpi.h> 27 28 #include <linux/pinctrl/consumer.h> 28 29 #include <linux/reset.h> ··· 86 85 * mode. If the controller supports DRD but the dr_mode is not 87 86 * specified or set to OTG, then set the mode to peripheral. 88 87 */ 89 - if (mode == USB_DR_MODE_OTG && 88 + if (mode == USB_DR_MODE_OTG && !dwc->edev && 90 89 (!IS_ENABLED(CONFIG_USB_ROLE_SWITCH) || 91 90 !device_property_read_bool(dwc->dev, "usb-role-switch")) && 92 91 !DWC3_VER_IS_PRIOR(DWC3, 330A)) ··· 1691 1690 } 1692 1691 } 1693 1692 1693 + static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc) 1694 + { 1695 + struct device *dev = dwc->dev; 1696 + struct device_node *np_phy; 1697 + struct extcon_dev *edev = NULL; 1698 + const char *name; 1699 + 1700 + if (device_property_read_bool(dev, "extcon")) 1701 + return extcon_get_edev_by_phandle(dev, 0); 1702 + 1703 + /* 1704 + * Device tree platforms should get extcon via phandle. 1705 + * On ACPI platforms, we get the name from a device property. 1706 + * This device property is for kernel internal use only and 1707 + * is expected to be set by the glue code. 1708 + */ 1709 + if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) 1710 + return extcon_get_extcon_dev(name); 1711 + 1712 + /* 1713 + * Try to get an extcon device from the USB PHY controller's "port" 1714 + * node. Check if it has the "port" node first, to avoid printing the 1715 + * error message from underlying code, as it's a valid case: extcon 1716 + * device (and "port" node) may be missing in case of "usb-role-switch" 1717 + * or OTG mode. 1718 + */ 1719 + np_phy = of_parse_phandle(dev->of_node, "phys", 0); 1720 + if (of_graph_is_present(np_phy)) { 1721 + struct device_node *np_conn; 1722 + 1723 + np_conn = of_graph_get_remote_node(np_phy, -1, -1); 1724 + if (np_conn) 1725 + edev = extcon_find_edev_by_node(np_conn); 1726 + of_node_put(np_conn); 1727 + } 1728 + of_node_put(np_phy); 1729 + 1730 + return edev; 1731 + } 1732 + 1694 1733 static int dwc3_probe(struct platform_device *pdev) 1695 1734 { 1696 1735 struct device *dev = &pdev->dev; ··· 1879 1838 dev_err(dwc->dev, "failed to allocate event buffers\n"); 1880 1839 ret = -ENOMEM; 1881 1840 goto err2; 1841 + } 1842 + 1843 + dwc->edev = dwc3_get_extcon(dwc); 1844 + if (IS_ERR(dwc->edev)) { 1845 + ret = dev_err_probe(dwc->dev, PTR_ERR(dwc->edev), "failed to get extcon\n"); 1846 + goto err3; 1882 1847 } 1883 1848 1884 1849 ret = dwc3_get_dr_mode(dwc);
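dwc3_get_extcon(), now hoisted into core.c, tries three sources in order: an explicit "extcon" phandle, an ACPI-set "linux,extcon-name" property, and finally the OF graph hanging off the first PHY. The refcounting in the OF-graph branch is the delicate part; a reduced sketch of just that branch:

#include <linux/of.h>
#include <linux/of_graph.h>

static struct device_node *example_find_connector(struct device_node *np)
{
        struct device_node *np_phy, *np_conn = NULL;

        np_phy = of_parse_phandle(np, "phys", 0);       /* takes a reference */
        if (of_graph_is_present(np_phy))
                np_conn = of_graph_get_remote_node(np_phy, -1, -1);
        of_node_put(np_phy);    /* safe on NULL; drops the phandle ref */

        return np_conn;         /* caller does of_node_put() when done */
}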
-50
drivers/usb/dwc3/drd.c
··· 8 8 */ 9 9 10 10 #include <linux/extcon.h> 11 - #include <linux/of_graph.h> 12 11 #include <linux/of_platform.h> 13 12 #include <linux/platform_device.h> 14 13 #include <linux/property.h> ··· 438 439 return NOTIFY_DONE; 439 440 } 440 441 441 - static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc) 442 - { 443 - struct device *dev = dwc->dev; 444 - struct device_node *np_phy; 445 - struct extcon_dev *edev = NULL; 446 - const char *name; 447 - 448 - if (device_property_read_bool(dev, "extcon")) 449 - return extcon_get_edev_by_phandle(dev, 0); 450 - 451 - /* 452 - * Device tree platforms should get extcon via phandle. 453 - * On ACPI platforms, we get the name from a device property. 454 - * This device property is for kernel internal use only and 455 - * is expected to be set by the glue code. 456 - */ 457 - if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) { 458 - edev = extcon_get_extcon_dev(name); 459 - if (!edev) 460 - return ERR_PTR(-EPROBE_DEFER); 461 - 462 - return edev; 463 - } 464 - 465 - /* 466 - * Try to get an extcon device from the USB PHY controller's "port" 467 - * node. Check if it has the "port" node first, to avoid printing the 468 - * error message from underlying code, as it's a valid case: extcon 469 - * device (and "port" node) may be missing in case of "usb-role-switch" 470 - * or OTG mode. 471 - */ 472 - np_phy = of_parse_phandle(dev->of_node, "phys", 0); 473 - if (of_graph_is_present(np_phy)) { 474 - struct device_node *np_conn; 475 - 476 - np_conn = of_graph_get_remote_node(np_phy, -1, -1); 477 - if (np_conn) 478 - edev = extcon_find_edev_by_node(np_conn); 479 - of_node_put(np_conn); 480 - } 481 - of_node_put(np_phy); 482 - 483 - return edev; 484 - } 485 - 486 442 #if IS_ENABLED(CONFIG_USB_ROLE_SWITCH) 487 443 #define ROLE_SWITCH 1 488 444 static int dwc3_usb_role_switch_set(struct usb_role_switch *sw, ··· 541 587 if (ROLE_SWITCH && 542 588 device_property_read_bool(dwc->dev, "usb-role-switch")) 543 589 return dwc3_setup_role_switch(dwc); 544 - 545 - dwc->edev = dwc3_get_extcon(dwc); 546 - if (IS_ERR(dwc->edev)) 547 - return PTR_ERR(dwc->edev); 548 590 549 591 if (dwc->edev) { 550 592 dwc->edev_nb.notifier_call = dwc3_drd_notifier;
+1 -1
drivers/usb/dwc3/dwc3-st.c
··· 251 251 /* Manage SoftReset */ 252 252 reset_control_deassert(dwc3_data->rstc_rst); 253 253 254 - child = of_get_child_by_name(node, "usb"); 254 + child = of_get_compatible_child(node, "snps,dwc3"); 255 255 if (!child) { 256 256 dev_err(&pdev->dev, "failed to find dwc3 core node\n"); 257 257 ret = -ENODEV;
+17 -3
drivers/usb/dwc3/gadget.c
··· 1292 1292 trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS; 1293 1293 } 1294 1294 1295 - /* always enable Interrupt on Missed ISOC */ 1296 - trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI; 1295 + if (!no_interrupt && !chain) 1296 + trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI; 1297 1297 break; 1298 1298 1299 1299 case USB_ENDPOINT_XFER_BULK: ··· 1698 1698 cmd |= DWC3_DEPCMD_PARAM(dep->resource_index); 1699 1699 memset(&params, 0, sizeof(params)); 1700 1700 ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params); 1701 + /* 1702 + * If the End Transfer command was timed out while the device is 1703 + * not in SETUP phase, it's possible that an incoming Setup packet 1704 + * may prevent the command's completion. Let's retry when the 1705 + * ep0state returns to EP0_SETUP_PHASE. 1706 + */ 1707 + if (ret == -ETIMEDOUT && dep->dwc->ep0state != EP0_SETUP_PHASE) { 1708 + dep->flags |= DWC3_EP_DELAY_STOP; 1709 + return 0; 1710 + } 1701 1711 WARN_ON_ONCE(ret); 1702 1712 dep->resource_index = 0; 1703 1713 ··· 3248 3238 if (event->status & DEPEVT_STATUS_SHORT && !chain) 3249 3239 return 1; 3250 3240 3241 + if ((trb->ctrl & DWC3_TRB_CTRL_ISP_IMI) && 3242 + DWC3_TRB_SIZE_TRBSTS(trb->size) == DWC3_TRBSTS_MISSED_ISOC) 3243 + return 1; 3244 + 3251 3245 if ((trb->ctrl & DWC3_TRB_CTRL_IOC) || 3252 3246 (trb->ctrl & DWC3_TRB_CTRL_LST)) 3253 3247 return 1; ··· 3733 3719 * timeout. Delay issuing the End Transfer command until the Setup TRB is 3734 3720 * prepared. 3735 3721 */ 3736 - if (dwc->ep0state != EP0_SETUP_PHASE) { 3722 + if (dwc->ep0state != EP0_SETUP_PHASE && !dwc->delayed_status) { 3737 3723 dep->flags |= DWC3_EP_DELAY_STOP; 3738 3724 return; 3739 3725 }
+5 -3
drivers/usb/gadget/function/uvc_queue.c
··· 304 304 305 305 queue->sequence = 0; 306 306 queue->buf_used = 0; 307 + queue->flags &= ~UVC_QUEUE_DROP_INCOMPLETE; 307 308 } else { 308 309 ret = vb2_streamoff(&queue->queue, queue->queue.type); 309 310 if (ret < 0) ··· 330 329 void uvcg_complete_buffer(struct uvc_video_queue *queue, 331 330 struct uvc_buffer *buf) 332 331 { 333 - if ((queue->flags & UVC_QUEUE_DROP_INCOMPLETE) && 334 - buf->length != buf->bytesused) { 335 - buf->state = UVC_BUF_STATE_QUEUED; 332 + if (queue->flags & UVC_QUEUE_DROP_INCOMPLETE) { 333 + queue->flags &= ~UVC_QUEUE_DROP_INCOMPLETE; 334 + buf->state = UVC_BUF_STATE_ERROR; 336 335 vb2_set_plane_payload(&buf->buf.vb2_buf, 0, 0); 336 + vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_ERROR); 337 337 return; 338 338 } 339 339
+18 -7
drivers/usb/gadget/function/uvc_video.c
··· 88 88 struct uvc_buffer *buf) 89 89 { 90 90 void *mem = req->buf; 91 + struct uvc_request *ureq = req->context; 91 92 int len = video->req_size; 92 93 int ret; 93 94 ··· 114 113 video->queue.buf_used = 0; 115 114 buf->state = UVC_BUF_STATE_DONE; 116 115 list_del(&buf->queue); 117 - uvcg_complete_buffer(&video->queue, buf); 118 116 video->fid ^= UVC_STREAM_FID; 117 + ureq->last_buf = buf; 119 118 120 119 video->payload_size = 0; 121 120 } 122 121 123 122 if (video->payload_size == video->max_payload_size || 123 + video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE || 124 124 buf->bytesused == video->queue.buf_used) 125 125 video->payload_size = 0; 126 126 } ··· 157 155 sg = sg_next(sg); 158 156 159 157 for_each_sg(sg, iter, ureq->sgt.nents - 1, i) { 160 - if (!len || !buf->sg || !sg_dma_len(buf->sg)) 158 + if (!len || !buf->sg || !buf->sg->length) 161 159 break; 162 160 163 - sg_left = sg_dma_len(buf->sg) - buf->offset; 161 + sg_left = buf->sg->length - buf->offset; 164 162 part = min_t(unsigned int, len, sg_left); 165 163 166 164 sg_set_page(iter, sg_page(buf->sg), part, buf->offset); ··· 182 180 req->length -= len; 183 181 video->queue.buf_used += req->length - header_len; 184 182 185 - if (buf->bytesused == video->queue.buf_used || !buf->sg) { 183 + if (buf->bytesused == video->queue.buf_used || !buf->sg || 184 + video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) { 186 185 video->queue.buf_used = 0; 187 186 buf->state = UVC_BUF_STATE_DONE; 188 187 buf->offset = 0; ··· 198 195 struct uvc_buffer *buf) 199 196 { 200 197 void *mem = req->buf; 198 + struct uvc_request *ureq = req->context; 201 199 int len = video->req_size; 202 200 int ret; 203 201 ··· 213 209 214 210 req->length = video->req_size - len; 215 211 216 - if (buf->bytesused == video->queue.buf_used) { 212 + if (buf->bytesused == video->queue.buf_used || 213 + video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) { 217 214 video->queue.buf_used = 0; 218 215 buf->state = UVC_BUF_STATE_DONE; 219 216 list_del(&buf->queue); 220 - uvcg_complete_buffer(&video->queue, buf); 221 217 video->fid ^= UVC_STREAM_FID; 218 + ureq->last_buf = buf; 222 219 } 223 220 } 224 221 ··· 258 253 259 254 switch (req->status) { 260 255 case 0: 256 + break; 257 + 258 + case -EXDEV: 259 + uvcg_dbg(&video->uvc->func, "VS request missed xfer.\n"); 260 + queue->flags |= UVC_QUEUE_DROP_INCOMPLETE; 261 261 break; 262 262 263 263 case -ESHUTDOWN: /* disconnect from host. */ ··· 441 431 442 432 /* Endpoint now owns the request */ 443 433 req = NULL; 444 - video->req_int_count++; 434 + if (buf->state != UVC_BUF_STATE_DONE) 435 + video->req_int_count++; 445 436 } 446 437 447 438 if (!req)
+1
drivers/usb/gadget/udc/aspeed-vhub/dev.c
··· 591 591 d->gadget.max_speed = USB_SPEED_HIGH; 592 592 d->gadget.speed = USB_SPEED_UNKNOWN; 593 593 d->gadget.dev.of_node = vhub->pdev->dev.of_node; 594 + d->gadget.dev.of_node_reused = true; 594 595 595 596 rc = usb_add_gadget_udc(d->port_dev, &d->gadget); 596 597 if (rc != 0)
+1
drivers/usb/gadget/udc/bdc/bdc_udc.c
··· 151 151 bdc->delayed_status = false; 152 152 bdc->reinit = reinit; 153 153 bdc->test_mode = false; 154 + usb_gadget_set_state(&bdc->gadget, USB_STATE_NOTATTACHED); 154 155 } 155 156 156 157 /* TNotify wakeup timer */
+12 -8
drivers/usb/host/xhci-mem.c
··· 889 889 if (dev->eps[i].stream_info) 890 890 xhci_free_stream_info(xhci, 891 891 dev->eps[i].stream_info); 892 - /* Endpoints on the TT/root port lists should have been removed 893 - * when usb_disable_device() was called for the device. 894 - * We can't drop them anyway, because the udev might have gone 895 - * away by this point, and we can't tell what speed it was. 892 + /* 893 + * Endpoints are normally deleted from the bandwidth list when 894 + * endpoints are dropped, before device is freed. 895 + * If host is dying or being removed then endpoints aren't 896 + * dropped cleanly, so delete the endpoint from list here. 897 + * Only applicable for hosts with software bandwidth checking. 896 898 */ 897 - if (!list_empty(&dev->eps[i].bw_endpoint_list)) 898 - xhci_warn(xhci, "Slot %u endpoint %u " 899 - "not removed from BW list!\n", 900 - slot_id, i); 899 + 900 + if (!list_empty(&dev->eps[i].bw_endpoint_list)) { 901 + list_del_init(&dev->eps[i].bw_endpoint_list); 902 + xhci_dbg(xhci, "Slot %u endpoint %u not removed from BW list!\n", 903 + slot_id, i); 904 + } 901 905 } 902 906 /* If this is a hub, free the TT(s) from the TT list */ 903 907 xhci_free_tt_info(xhci, dev, slot_id);
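The xhci-mem change swaps the bare warning for list_del_init(), which detaches the endpoint and leaves the node pointing at itself, so any later list_empty() test or repeated deletion on it stays well defined. In miniature:

#include <linux/list.h>

static void example_detach(struct list_head *node)
{
        if (!list_empty(node))
                list_del_init(node);    /* node is now self-linked: it can be
                                         * tested or deleted again safely */
}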
+15 -29
drivers/usb/host/xhci-pci.c
··· 58 58 #define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af 59 59 #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13 60 60 #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138 61 - #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e 62 - #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e 63 - #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed 64 - #define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI 0xa71e 65 - #define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI 0x7ec0 61 + #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed 66 62 67 63 #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639 68 64 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 69 65 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba 70 66 #define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb 71 67 #define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc 72 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 0x161a 73 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 0x161b 74 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 0x161d 75 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 0x161e 76 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 0x15d6 77 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 0x15d7 78 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 0x161c 79 - #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8 0x161f 80 68 81 69 #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042 82 70 #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142 ··· 246 258 xhci->quirks |= XHCI_MISSING_CAS; 247 259 248 260 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 261 + pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI) 262 + xhci->quirks |= XHCI_RESET_TO_DEFAULT; 263 + 264 + if (pdev->vendor == PCI_VENDOR_ID_INTEL && 249 265 (pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI || 250 266 pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI || 251 267 pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI || ··· 260 268 pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI || 261 269 pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI || 262 270 pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI || 263 - pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI || 264 - pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI || 265 - pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI || 266 - pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI || 267 - pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI || 268 - pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI)) 271 + pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI)) 269 272 xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; 270 273 271 274 if (pdev->vendor == PCI_VENDOR_ID_ETRON && ··· 293 306 } 294 307 295 308 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 296 - pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) 309 + pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) { 310 + /* 311 + * try to tame the ASMedia 1042 controller which reports 0.96 312 + * but appears to behave more like 1.0 313 + */ 314 + xhci->quirks |= XHCI_SPURIOUS_SUCCESS; 297 315 xhci->quirks |= XHCI_BROKEN_STREAMS; 316 + } 298 317 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 299 318 pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI) { 300 319 xhci->quirks |= XHCI_TRUST_TX_LENGTH; ··· 329 336 pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4)) 330 337 xhci->quirks |= XHCI_NO_SOFT_RETRY; 331 338 332 - if (pdev->vendor == PCI_VENDOR_ID_AMD && 333 - (pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 || 334 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 || 335 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 || 336 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 || 337 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 || 338 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 || 339 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 || 340 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8)) 339 + /* xHC spec requires PCI devices to support D3hot and D3cold */ 340 + if (xhci->hci_version >= 0x120) 341 341 xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; 342 342 343 343 if (xhci->quirks & XHCI_RESET_ON_RESUME)
+8 -2
drivers/usb/host/xhci.c
··· 810 810 811 811 spin_lock_irq(&xhci->lock); 812 812 xhci_halt(xhci); 813 - /* Workaround for spurious wakeups at shutdown with HSW */ 814 - if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 813 + 814 + /* 815 + * Workaround for spurious wakeups at shutdown with HSW, and for boot 816 + * firmware delay in ADL-P PCH if ports are left in U3 at shutdown 817 + */ 818 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP || 819 + xhci->quirks & XHCI_RESET_TO_DEFAULT) 815 820 xhci_reset(xhci, XHCI_RESET_SHORT_USEC); 821 + 816 822 spin_unlock_irq(&xhci->lock); 817 823 818 824 xhci_cleanup_msix(xhci);
+1
drivers/usb/host/xhci.h
··· 1897 1897 #define XHCI_BROKEN_D3COLD BIT_ULL(41) 1898 1898 #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42) 1899 1899 #define XHCI_SUSPEND_RESUME_CLKS BIT_ULL(43) 1900 + #define XHCI_RESET_TO_DEFAULT BIT_ULL(44) 1900 1901 1901 1902 unsigned int num_active_eps; 1902 1903 unsigned int limit_active_eps;
+1 -1
drivers/usb/misc/sisusbvga/sisusb_struct.h
··· 91 91 unsigned char VB_ExtTVYFilterIndex; 92 92 unsigned char VB_ExtTVYFilterIndexROM661; 93 93 unsigned char REFindex; 94 - char ROMMODEIDX661; 94 + signed char ROMMODEIDX661; 95 95 }; 96 96 97 97 struct SiS_Ext2 {
+29 -13
drivers/usb/typec/ucsi/ucsi.c
··· 183 183 } 184 184 EXPORT_SYMBOL_GPL(ucsi_send_command); 185 185 186 - int ucsi_resume(struct ucsi *ucsi) 187 - { 188 - u64 command; 189 - 190 - /* Restore UCSI notification enable mask after system resume */ 191 - command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy; 192 - 193 - return ucsi_send_command(ucsi, command, NULL, 0); 194 - } 195 - EXPORT_SYMBOL_GPL(ucsi_resume); 196 186 /* -------------------------------------------------------------------------- */ 197 187 198 188 struct ucsi_work { ··· 734 744 735 745 static int ucsi_check_connection(struct ucsi_connector *con) 736 746 { 747 + u8 prev_flags = con->status.flags; 737 748 u64 command; 738 749 int ret; 739 750 ··· 745 754 return ret; 746 755 } 747 756 757 + if (con->status.flags == prev_flags) 758 + return 0; 759 + 748 760 if (con->status.flags & UCSI_CONSTAT_CONNECTED) { 749 - if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) == 750 - UCSI_CONSTAT_PWR_OPMODE_PD) 751 - ucsi_partner_task(con, ucsi_check_altmodes, 30, 0); 761 + ucsi_register_partner(con); 762 + ucsi_pwr_opmode_change(con); 763 + ucsi_partner_change(con); 752 764 } else { 753 765 ucsi_partner_change(con); 754 766 ucsi_port_psy_changed(con); ··· 1269 1275 err: 1270 1276 return ret; 1271 1277 } 1278 + 1279 + int ucsi_resume(struct ucsi *ucsi) 1280 + { 1281 + struct ucsi_connector *con; 1282 + u64 command; 1283 + int ret; 1284 + 1285 + /* Restore UCSI notification enable mask after system resume */ 1286 + command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy; 1287 + ret = ucsi_send_command(ucsi, command, NULL, 0); 1288 + if (ret < 0) 1289 + return ret; 1290 + 1291 + for (con = ucsi->connector; con->port; con++) { 1292 + mutex_lock(&con->lock); 1293 + ucsi_check_connection(con); 1294 + mutex_unlock(&con->lock); 1295 + } 1296 + 1297 + return 0; 1298 + } 1299 + EXPORT_SYMBOL_GPL(ucsi_resume); 1272 1300 1273 1301 static void ucsi_init_work(struct work_struct *work) 1274 1302 {
+10
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 185 185 return 0; 186 186 } 187 187 188 + static int ucsi_acpi_resume(struct device *dev) 189 + { 190 + struct ucsi_acpi *ua = dev_get_drvdata(dev); 191 + 192 + return ucsi_resume(ua->ucsi); 193 + } 194 + 195 + static DEFINE_SIMPLE_DEV_PM_OPS(ucsi_acpi_pm_ops, NULL, ucsi_acpi_resume); 196 + 188 197 static const struct acpi_device_id ucsi_acpi_match[] = { 189 198 { "PNP0CA0", 0 }, 190 199 { }, ··· 203 194 static struct platform_driver ucsi_acpi_platform_driver = { 204 195 .driver = { 205 196 .name = "ucsi_acpi", 197 + .pm = pm_ptr(&ucsi_acpi_pm_ops), 206 198 .acpi_match_table = ACPI_PTR(ucsi_acpi_match), 207 199 }, 208 200 .probe = ucsi_acpi_probe,
+1 -4
drivers/video/aperture.c
··· 340 340 size = pci_resource_len(pdev, bar); 341 341 ret = aperture_remove_conflicting_devices(base, size, primary, name); 342 342 if (ret) 343 - break; 343 + return ret; 344 344 } 345 - 346 - if (ret) 347 - return ret; 348 345 349 346 /* 350 347 * WARNING: Apparently we must kick fbdev drivers before vgacon,
+2
drivers/video/fbdev/cyber2000fb.c
··· 1796 1796 failed_regions: 1797 1797 cyberpro_free_fb_info(cfb); 1798 1798 failed_release: 1799 + pci_disable_device(dev); 1799 1800 return err; 1800 1801 } 1801 1802 ··· 1813 1812 int_cfb_info = NULL; 1814 1813 1815 1814 pci_release_regions(dev); 1815 + pci_disable_device(dev); 1816 1816 } 1817 1817 } 1818 1818
+2 -1
drivers/video/fbdev/da8xx-fb.c
··· 1076 1076 if (par->lcd_supply) { 1077 1077 ret = regulator_disable(par->lcd_supply); 1078 1078 if (ret) 1079 - return ret; 1079 + dev_warn(&dev->dev, "Failed to disable regulator (%pe)\n", 1080 + ERR_PTR(ret)); 1080 1081 } 1081 1082 1082 1083 lcd_disable_raster(DA8XX_FRAME_WAIT);
+2 -2
drivers/video/fbdev/gbefb.c
··· 1060 1060 1061 1061 static ssize_t gbefb_show_memsize(struct device *dev, struct device_attribute *attr, char *buf) 1062 1062 { 1063 - return snprintf(buf, PAGE_SIZE, "%u\n", gbe_mem_size); 1063 + return sysfs_emit(buf, "%u\n", gbe_mem_size); 1064 1064 } 1065 1065 1066 1066 static DEVICE_ATTR(size, S_IRUGO, gbefb_show_memsize, NULL); 1067 1067 1068 1068 static ssize_t gbefb_show_rev(struct device *device, struct device_attribute *attr, char *buf) 1069 1069 { 1070 - return snprintf(buf, PAGE_SIZE, "%d\n", gbe_revision); 1070 + return sysfs_emit(buf, "%d\n", gbe_revision); 1071 1071 } 1072 1072 1073 1073 static DEVICE_ATTR(revision, S_IRUGO, gbefb_show_rev, NULL);
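The gbefb and sm501fb hunks convert sysfs show() callbacks from snprintf() to sysfs_emit(), which checks that the buffer is the page-aligned PAGE_SIZE buffer sysfs hands in and never reports more bytes than it wrote. A runnable sketch of the snprintf() pitfall the helper guards against, namely that its return value is the untruncated length rather than what actually fit:

```c
#include <stdio.h>

int main(void)
{
	char buf[8];

	/* snprintf() reports the length that *would* have been written,
	 * which can exceed the buffer size; a show() callback must not
	 * return that value to sysfs. */
	int n = snprintf(buf, sizeof(buf), "%s\n", "0123456789");

	printf("returned %d, buffer holds \"%s\"\n", n, buf);
	return 0;
}
```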
+1 -1
drivers/video/fbdev/sis/sis_accel.c
··· 202 202 * and destination blitting areas overlap and 203 203 * adapt the bitmap addresses synchronously 204 204 * if the coordinates exceed the valid range. 205 - * The the areas do not overlap, we do our 205 + * The areas do not overlap, we do our 206 206 * normal check. 207 207 */ 208 208 if((mymax - mymin) < height) {
+1 -1
drivers/video/fbdev/sis/vstruct.h
··· 148 148 unsigned char VB_ExtTVYFilterIndex; 149 149 unsigned char VB_ExtTVYFilterIndexROM661; 150 150 unsigned char REFindex; 151 - char ROMMODEIDX661; 151 + signed char ROMMODEIDX661; 152 152 }; 153 153 154 154 struct SiS_Ext2 {
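The two vstruct.h hunks matter because the signedness of plain char is implementation-defined: it is typically signed on x86 but unsigned on ARM and PowerPC, so a plain char holding a negative index such as ROMMODEIDX661 silently breaks `< 0` checks on those targets. A runnable sketch (output differs by platform, which is the point):

```c
#include <stdio.h>

int main(void)
{
	char c = -1;		/* may wrap to 255 where char is unsigned */
	signed char sc = -1;	/* always -1 */

	/* On targets where plain char is unsigned (e.g. ARM, PowerPC)
	 * the first test is false and a -1 sentinel is never seen. */
	printf("char < 0:        %s\n", c < 0 ? "true" : "false");
	printf("signed char < 0: %s\n", sc < 0 ? "true" : "false");
	return 0;
}
```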
+1 -1
drivers/video/fbdev/sm501fb.c
··· 1166 1166 ctrl = smc501_readl(info->regs + SM501_DC_CRT_CONTROL); 1167 1167 ctrl &= SM501_DC_CRT_CONTROL_SEL; 1168 1168 1169 - return snprintf(buf, PAGE_SIZE, "%s\n", ctrl ? "crt" : "panel"); 1169 + return sysfs_emit(buf, "%s\n", ctrl ? "crt" : "panel"); 1170 1170 } 1171 1171 1172 1172 /* sm501fb_crtsrc_show
+31 -26
drivers/video/fbdev/smscufx.c
··· 97 97 struct kref kref; 98 98 int fb_count; 99 99 bool virtualized; /* true when physical usb device not present */ 100 - struct delayed_work free_framebuffer_work; 101 100 atomic_t usb_active; /* 0 = update virtual buffer, but no usb traffic */ 102 101 atomic_t lost_pixels; /* 1 = a render op failed. Need screen refresh */ 103 102 u8 *edid; /* null until we read edid from hw or get from sysfs */ ··· 1116 1117 { 1117 1118 struct ufx_data *dev = container_of(kref, struct ufx_data, kref); 1118 1119 1119 - /* this function will wait for all in-flight urbs to complete */ 1120 - if (dev->urbs.count > 0) 1121 - ufx_free_urb_list(dev); 1122 - 1123 - pr_debug("freeing ufx_data %p", dev); 1124 - 1125 1120 kfree(dev); 1126 1121 } 1122 + 1123 + static void ufx_ops_destory(struct fb_info *info) 1124 + { 1125 + struct ufx_data *dev = info->par; 1126 + int node = info->node; 1127 + 1128 + /* Assume info structure is freed after this point */ 1129 + framebuffer_release(info); 1130 + 1131 + pr_debug("fb_info for /dev/fb%d has been freed", node); 1132 + 1133 + /* release reference taken by kref_init in probe() */ 1134 + kref_put(&dev->kref, ufx_free); 1135 + } 1136 + 1127 1137 1128 1138 static void ufx_release_urb_work(struct work_struct *work) 1129 1139 { ··· 1142 1134 up(&unode->dev->urbs.limit_sem); 1143 1135 } 1144 1136 1145 - static void ufx_free_framebuffer_work(struct work_struct *work) 1137 + static void ufx_free_framebuffer(struct ufx_data *dev) 1146 1138 { 1147 - struct ufx_data *dev = container_of(work, struct ufx_data, 1148 - free_framebuffer_work.work); 1149 1139 struct fb_info *info = dev->info; 1150 - int node = info->node; 1151 - 1152 - unregister_framebuffer(info); 1153 1140 1154 1141 if (info->cmap.len != 0) 1155 1142 fb_dealloc_cmap(&info->cmap); ··· 1155 1152 fb_destroy_modelist(&info->modelist); 1156 1153 1157 1154 dev->info = NULL; 1158 - 1159 - /* Assume info structure is freed after this point */ 1160 - framebuffer_release(info); 1161 - 1162 - pr_debug("fb_info for /dev/fb%d has been freed", node); 1163 1155 1164 1156 /* ref taken in probe() as part of registering framebfufer */ 1165 1157 kref_put(&dev->kref, ufx_free); ··· 1167 1169 { 1168 1170 struct ufx_data *dev = info->par; 1169 1171 1172 + mutex_lock(&disconnect_mutex); 1173 + 1170 1174 dev->fb_count--; 1171 1175 1172 1176 /* We can't free fb_info here - fbmem will touch it when we return */ 1173 1177 if (dev->virtualized && (dev->fb_count == 0)) 1174 - schedule_delayed_work(&dev->free_framebuffer_work, HZ); 1178 + ufx_free_framebuffer(dev); 1175 1179 1176 1180 if ((dev->fb_count == 0) && (info->fbdefio)) { 1177 1181 fb_deferred_io_cleanup(info); ··· 1185 1185 info->node, user, dev->fb_count); 1186 1186 1187 1187 kref_put(&dev->kref, ufx_free); 1188 + 1189 + mutex_unlock(&disconnect_mutex); 1188 1190 1189 1191 return 0; 1190 1192 } ··· 1294 1292 .fb_blank = ufx_ops_blank, 1295 1293 .fb_check_var = ufx_ops_check_var, 1296 1294 .fb_set_par = ufx_ops_set_par, 1295 + .fb_destroy = ufx_ops_destory, 1297 1296 }; 1298 1297 1299 1298 /* Assumes &info->lock held by caller ··· 1676 1673 goto destroy_modedb; 1677 1674 } 1678 1675 1679 - INIT_DELAYED_WORK(&dev->free_framebuffer_work, 1680 - ufx_free_framebuffer_work); 1681 - 1682 1676 retval = ufx_reg_read(dev, 0x3000, &id_rev); 1683 1677 check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval); 1684 1678 dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev); ··· 1748 1748 static void ufx_usb_disconnect(struct usb_interface *interface) 1749 1749 { 
1749 1749 {
1750 1750 struct ufx_data *dev; 1751 + struct fb_info *info; 1751 1752 1752 1753 mutex_lock(&disconnect_mutex); 1753 1754 1754 1755 dev = usb_get_intfdata(interface); 1756 + info = dev->info; 1755 1757 1756 1758 pr_debug("USB disconnect starting\n"); 1757 1759 ··· 1767 1765 1768 1766 /* if clients still have us open, will be freed on last close */ 1769 1767 if (dev->fb_count == 0) 1770 - schedule_delayed_work(&dev->free_framebuffer_work, 0); 1768 + ufx_free_framebuffer(dev); 1771 1769 1772 - /* release reference taken by kref_init in probe() */ 1773 - kref_put(&dev->kref, ufx_free); 1770 + /* this function will wait for all in-flight urbs to complete */ 1771 + if (dev->urbs.count > 0) 1772 + ufx_free_urb_list(dev); 1774 1773 1775 - /* consider ufx_data freed */ 1774 + pr_debug("freeing ufx_data %p", dev); 1775 + 1776 + unregister_framebuffer(info); 1776 1777 1777 1778 mutex_unlock(&disconnect_mutex); 1778 1779 }
+2 -1
drivers/video/fbdev/stifb.c
··· 1055 1055 { 1056 1056 struct stifb_info *fb = container_of(info, struct stifb_info, info); 1057 1057 1058 - if (rect->rop != ROP_COPY) 1058 + if (rect->rop != ROP_COPY || 1059 + (fb->id == S9000_ID_HCRX && fb->info.var.bits_per_pixel == 32)) 1059 1060 return cfb_fillrect(info, rect); 1060 1061 1061 1062 SETUP_HW(fb);
+4 -4
drivers/video/fbdev/xilinxfb.c
··· 376 376 return rc; 377 377 } 378 378 379 - static int xilinxfb_release(struct device *dev) 379 + static void xilinxfb_release(struct device *dev) 380 380 { 381 381 struct xilinxfb_drvdata *drvdata = dev_get_drvdata(dev); 382 382 ··· 402 402 if (!(drvdata->flags & BUS_ACCESS_FLAG)) 403 403 dcr_unmap(drvdata->dcr_host, drvdata->dcr_len); 404 404 #endif 405 - 406 - return 0; 407 405 } 408 406 409 407 /* --------------------------------------------------------------------- ··· 478 480 479 481 static int xilinxfb_of_remove(struct platform_device *op) 480 482 { 481 - return xilinxfb_release(&op->dev); 483 + xilinxfb_release(&op->dev); 484 + 485 + return 0; 482 486 } 483 487 484 488 /* Match table for of_platform binding */
+3 -1
drivers/watchdog/exar_wdt.c
··· 355 355 &priv->wdt_res, 1, 356 356 priv, sizeof(*priv)); 357 357 if (IS_ERR(n->pdev)) { 358 + int err = PTR_ERR(n->pdev); 359 + 358 360 kfree(n); 359 - return PTR_ERR(n->pdev); 361 + return err; 360 362 } 361 363 362 364 list_add_tail(&n->list, &pdev_list);
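The exar_wdt fix caches PTR_ERR(n->pdev) before kfree(n): n->pdev lives inside the allocation being freed, so reading it afterwards is a use-after-free. The same save-before-free pattern in a minimal userspace sketch (names are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

struct node { long err; };

int main(void)
{
	struct node *n = malloc(sizeof(*n));
	if (!n)
		return 1;
	n->err = -12;		/* an errno-style value, say -ENOMEM */

	long err = n->err;	/* read the field before freeing */
	free(n);
	/* 'return n->err;' here would read freed memory */

	printf("%ld\n", err);
	return 0;
}
```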
+1 -1
drivers/watchdog/sp805_wdt.c
··· 88 88 return (wdtcontrol & ENABLE_MASK) == ENABLE_MASK; 89 89 } 90 90 91 - /* This routine finds load value that will reset system in required timout */ 91 + /* This routine finds load value that will reset system in required timeout */ 92 92 static int wdt_setload(struct watchdog_device *wdd, unsigned int timeout) 93 93 { 94 94 struct sp805_wdt *wdt = watchdog_get_drvdata(wdd);
+4 -6
fs/btrfs/disk-io.c
··· 166 166 * Return 0 if the superblock checksum type matches the checksum value of that 167 167 * algorithm. Pass the raw disk superblock data. 168 168 */ 169 - static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info, 170 - char *raw_disk_sb) 169 + int btrfs_check_super_csum(struct btrfs_fs_info *fs_info, 170 + const struct btrfs_super_block *disk_sb) 171 171 { 172 - struct btrfs_super_block *disk_sb = 173 - (struct btrfs_super_block *)raw_disk_sb; 174 172 char result[BTRFS_CSUM_SIZE]; 175 173 SHASH_DESC_ON_STACK(shash, fs_info->csum_shash); 176 174 ··· 179 181 * BTRFS_SUPER_INFO_SIZE range, we expect that the unused space is 180 182 * filled with zeros and is included in the checksum. 181 183 */ 182 - crypto_shash_digest(shash, raw_disk_sb + BTRFS_CSUM_SIZE, 184 + crypto_shash_digest(shash, (const u8 *)disk_sb + BTRFS_CSUM_SIZE, 183 185 BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE, result); 184 186 185 187 if (memcmp(disk_sb->csum, result, fs_info->csum_size)) ··· 3477 3479 * We want to check superblock checksum, the type is stored inside. 3478 3480 * Pass the whole disk block of size BTRFS_SUPER_INFO_SIZE (4k). 3479 3481 */ 3480 - if (btrfs_check_super_csum(fs_info, (u8 *)disk_super)) { 3482 + if (btrfs_check_super_csum(fs_info, disk_super)) { 3481 3483 btrfs_err(fs_info, "superblock checksum mismatch"); 3482 3484 err = -EINVAL; 3483 3485 btrfs_release_disk_super(disk_super);
+2
fs/btrfs/disk-io.h
··· 42 42 void btrfs_clean_tree_block(struct extent_buffer *buf); 43 43 void btrfs_clear_oneshot_options(struct btrfs_fs_info *fs_info); 44 44 int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info); 45 + int btrfs_check_super_csum(struct btrfs_fs_info *fs_info, 46 + const struct btrfs_super_block *disk_sb); 45 47 int __cold open_ctree(struct super_block *sb, 46 48 struct btrfs_fs_devices *fs_devices, 47 49 char *options);
+1 -1
fs/btrfs/export.c
··· 58 58 } 59 59 60 60 struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid, 61 - u64 root_objectid, u32 generation, 61 + u64 root_objectid, u64 generation, 62 62 int check_generation) 63 63 { 64 64 struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+1 -1
fs/btrfs/export.h
··· 19 19 } __attribute__ ((packed)); 20 20 21 21 struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid, 22 - u64 root_objectid, u32 generation, 22 + u64 root_objectid, u64 generation, 23 23 int check_generation); 24 24 struct dentry *btrfs_get_parent(struct dentry *child); 25 25
+13 -12
fs/btrfs/extent-tree.c
··· 3295 3295 } 3296 3296 3297 3297 /* 3298 - * If this is a leaf and there are tree mod log users, we may 3299 - * have recorded mod log operations that point to this leaf. 3300 - * So we must make sure no one reuses this leaf's extent before 3301 - * mod log operations are applied to a node, otherwise after 3302 - * rewinding a node using the mod log operations we get an 3303 - * inconsistent btree, as the leaf's extent may now be used as 3304 - * a node or leaf for another different btree. 3298 + * If there are tree mod log users we may have recorded mod log 3299 + * operations for this node. If we re-allocate this node we 3300 + * could replay operations on this node that happened when it 3301 + * existed in a completely different root. For example if it 3302 + * was part of root A, then was reallocated to root B, and we 3303 + * are doing a btrfs_old_search_slot(root b), we could replay 3304 + * operations that happened when the block was part of root A, 3305 + * giving us an inconsistent view of the btree. 3306 + * 3305 3307 * We are safe from races here because at this point no other 3306 3308 * node or root points to this extent buffer, so if after this 3307 - * check a new tree mod log user joins, it will not be able to 3308 - * find a node pointing to this leaf and record operations that 3309 - * point to this leaf. 3309 + * check a new tree mod log user joins we will not have an 3310 + * existing log of operations on this node that we have to 3311 + * contend with. 3310 3312 */ 3311 - if (btrfs_header_level(buf) == 0 && 3312 - test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags)) 3313 + if (test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags)) 3313 3314 must_pin = true; 3314 3315 3315 3316 if (must_pin || btrfs_is_zoned(fs_info)) {
+11 -7
fs/btrfs/raid56.c
··· 1632 1632 int ret; 1633 1633 1634 1634 ret = alloc_rbio_parity_pages(rbio); 1635 - if (ret) { 1636 - __free_raid_bio(rbio); 1635 + if (ret) 1637 1636 return ret; 1638 - } 1639 1637 1640 1638 ret = lock_stripe_add(rbio); 1641 1639 if (ret == 0) ··· 1821 1823 */ 1822 1824 if (rbio_is_full(rbio)) { 1823 1825 ret = full_stripe_write(rbio); 1824 - if (ret) 1826 + if (ret) { 1827 + __free_raid_bio(rbio); 1825 1828 goto fail; 1829 + } 1826 1830 return; 1827 1831 } 1828 1832 ··· 1838 1838 list_add_tail(&rbio->plug_list, &plug->rbio_list); 1839 1839 } else { 1840 1840 ret = __raid56_parity_write(rbio); 1841 - if (ret) 1841 + if (ret) { 1842 + __free_raid_bio(rbio); 1842 1843 goto fail; 1844 + } 1843 1845 } 1844 1846 1845 1847 return; ··· 2744 2742 2745 2743 rbio->faila = find_logical_bio_stripe(rbio, bio); 2746 2744 if (rbio->faila == -1) { 2747 - BUG(); 2748 - kfree(rbio); 2745 + btrfs_warn_rl(fs_info, 2746 + "can not determine the failed stripe number for full stripe %llu", 2747 + bioc->raid_map[0]); 2748 + __free_raid_bio(rbio); 2749 2749 return NULL; 2750 2750 } 2751 2751
+13 -11
fs/btrfs/send.c
··· 6668 6668 /* 6669 6669 * First, process the inode as if it was deleted. 6670 6670 */ 6671 - sctx->cur_inode_gen = right_gen; 6672 - sctx->cur_inode_new = false; 6673 - sctx->cur_inode_deleted = true; 6674 - sctx->cur_inode_size = btrfs_inode_size( 6675 - sctx->right_path->nodes[0], right_ii); 6676 - sctx->cur_inode_mode = btrfs_inode_mode( 6677 - sctx->right_path->nodes[0], right_ii); 6678 - ret = process_all_refs(sctx, 6679 - BTRFS_COMPARE_TREE_DELETED); 6680 - if (ret < 0) 6681 - goto out; 6671 + if (old_nlinks > 0) { 6672 + sctx->cur_inode_gen = right_gen; 6673 + sctx->cur_inode_new = false; 6674 + sctx->cur_inode_deleted = true; 6675 + sctx->cur_inode_size = btrfs_inode_size( 6676 + sctx->right_path->nodes[0], right_ii); 6677 + sctx->cur_inode_mode = btrfs_inode_mode( 6678 + sctx->right_path->nodes[0], right_ii); 6679 + ret = process_all_refs(sctx, 6680 + BTRFS_COMPARE_TREE_DELETED); 6681 + if (ret < 0) 6682 + goto out; 6683 + } 6682 6684 6683 6685 /* 6684 6686 * Now process the inode as if it was new.
+16
fs/btrfs/super.c
··· 2555 2555 { 2556 2556 struct btrfs_fs_info *fs_info = dev->fs_info; 2557 2557 struct btrfs_super_block *sb; 2558 + u16 csum_type; 2558 2559 int ret = 0; 2559 2560 2560 2561 /* This should be called with fs still frozen. */ ··· 2569 2568 sb = btrfs_read_dev_one_super(dev->bdev, 0, true); 2570 2569 if (IS_ERR(sb)) 2571 2570 return PTR_ERR(sb); 2571 + 2572 + /* Verify the checksum. */ 2573 + csum_type = btrfs_super_csum_type(sb); 2574 + if (csum_type != btrfs_super_csum_type(fs_info->super_copy)) { 2575 + btrfs_err(fs_info, "csum type changed, has %u expect %u", 2576 + csum_type, btrfs_super_csum_type(fs_info->super_copy)); 2577 + ret = -EUCLEAN; 2578 + goto out; 2579 + } 2580 + 2581 + if (btrfs_check_super_csum(fs_info, sb)) { 2582 + btrfs_err(fs_info, "csum for on-disk super block no longer matches"); 2583 + ret = -EUCLEAN; 2584 + goto out; 2585 + } 2572 2586 2573 2587 /* Btrfs_validate_super() includes fsid check against super->fsid. */ 2574 2588 ret = btrfs_validate_super(fs_info, sb, 0);
+11 -1
fs/btrfs/volumes.c
··· 7142 7142 u64 devid; 7143 7143 u64 type; 7144 7144 u8 uuid[BTRFS_UUID_SIZE]; 7145 + int index; 7145 7146 int num_stripes; 7146 7147 int ret; 7147 7148 int i; ··· 7150 7149 logical = key->offset; 7151 7150 length = btrfs_chunk_length(leaf, chunk); 7152 7151 type = btrfs_chunk_type(leaf, chunk); 7152 + index = btrfs_bg_flags_to_raid_index(type); 7153 7153 num_stripes = btrfs_chunk_num_stripes(leaf, chunk); 7154 7154 7155 7155 #if BITS_PER_LONG == 32 ··· 7204 7202 map->io_align = btrfs_chunk_io_align(leaf, chunk); 7205 7203 map->stripe_len = btrfs_chunk_stripe_len(leaf, chunk); 7206 7204 map->type = type; 7207 - map->sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk); 7205 + /* 7206 + * We can't use the sub_stripes value, as for profiles other than 7207 + * RAID10, they may have 0 as sub_stripes for filesystems created by 7208 + * older mkfs (<v5.4). 7209 + * In that case, it can cause divide-by-zero errors later. 7210 + * Since currently sub_stripes is fixed for each profile, let's 7211 + * use the trusted value instead. 7212 + */ 7213 + map->sub_stripes = btrfs_raid_array[index].sub_stripes; 7208 7214 map->verified_stripes = 0; 7209 7215 em->orig_block_len = btrfs_calc_stripe_length(em); 7210 7216 for (i = 0; i < num_stripes; i++) {
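The volumes.c hunk stops trusting the on-disk sub_stripes field, which old mkfs versions (before v5.4) wrote as 0 for non-RAID10 profiles, and takes the fixed per-profile value instead so later divisions cannot hit zero. A sketch of the table-lookup idea, with a hypothetical profile table standing in for btrfs_raid_array:

```c
#include <stdio.h>

/* Hypothetical per-profile table standing in for btrfs_raid_array. */
struct raid_attr { const char *name; int sub_stripes; };

static const struct raid_attr raid_array[] = {
	{ "RAID0",  1 },
	{ "RAID10", 2 },
};

int main(void)
{
	int on_disk_sub_stripes = 0;	/* what an old mkfs may have written */
	int index = 0;			/* profile decoded from the chunk type */

	/* Use the fixed per-profile value, not the on-disk field, so a
	 * later 'x / sub_stripes' can never divide by zero. */
	int sub_stripes = raid_array[index].sub_stripes;

	printf("on-disk=%d trusted=%d\n", on_disk_sub_stripes, sub_stripes);
	return 0;
}
```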
+1 -1
fs/btrfs/volumes.h
··· 395 395 */ 396 396 struct btrfs_bio { 397 397 unsigned int mirror_num; 398 + struct bvec_iter iter; 398 399 399 400 /* for direct I/O */ 400 401 u64 file_offset; ··· 404 403 struct btrfs_device *device; 405 404 u8 *csum; 406 405 u8 csum_inline[BTRFS_BIO_INLINE_CSUM_SIZE]; 407 - struct bvec_iter iter; 408 406 409 407 /* End I/O information supplied to btrfs_bio_alloc */ 410 408 btrfs_bio_end_io_t end_io;
+1
fs/cifs/connect.c
··· 1584 1584 server->session_key.response = NULL; 1585 1585 server->session_key.len = 0; 1586 1586 kfree(server->hostname); 1587 + server->hostname = NULL; 1587 1588 1588 1589 task = xchg(&server->tsk, NULL); 1589 1590 if (task)
+10 -3
fs/cifs/file.c
··· 2434 2434 struct cifs_writedata * 2435 2435 cifs_writedata_alloc(unsigned int nr_pages, work_func_t complete) 2436 2436 { 2437 + struct cifs_writedata *writedata = NULL; 2437 2438 struct page **pages = 2438 2439 kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS); 2439 - if (pages) 2440 - return cifs_writedata_direct_alloc(pages, complete); 2440 + if (pages) { 2441 + writedata = cifs_writedata_direct_alloc(pages, complete); 2442 + if (!writedata) 2443 + kvfree(pages); 2444 + } 2441 2445 2442 - return NULL; 2446 + return writedata; 2443 2447 } 2444 2448 2445 2449 struct cifs_writedata * ··· 3303 3299 cifs_uncached_writev_complete); 3304 3300 if (!wdata) { 3305 3301 rc = -ENOMEM; 3302 + for (i = 0; i < nr_pages; i++) 3303 + put_page(pagevec[i]); 3304 + kvfree(pagevec); 3306 3305 add_credits_and_wake_if(server, credits, 0); 3307 3306 break; 3308 3307 }
+1 -1
fs/exec.c
··· 1012 1012 active_mm = tsk->active_mm; 1013 1013 tsk->active_mm = mm; 1014 1014 tsk->mm = mm; 1015 - lru_gen_add_mm(mm); 1016 1015 /* 1017 1016 * This prevents preemption while active_mm is being loaded and 1018 1017 * it and mm are being updated, which could cause problems for ··· 1024 1025 activate_mm(active_mm, mm); 1025 1026 if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM)) 1026 1027 local_irq_enable(); 1028 + lru_gen_add_mm(mm); 1027 1029 task_unlock(tsk); 1028 1030 lru_gen_use_mm(mm); 1029 1031 if (old_mm) {
-4
fs/ext4/super.c
··· 1741 1741 1742 1742 #define DEFAULT_JOURNAL_IOPRIO (IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 3)) 1743 1743 1744 - static const char deprecated_msg[] = 1745 - "Mount option \"%s\" will be removed by %s\n" 1746 - "Contact linux-ext4@vger.kernel.org if you think we should keep it.\n"; 1747 - 1748 1744 #define MOPT_SET 0x0001 1749 1745 #define MOPT_CLEAR 0x0002 1750 1746 #define MOPT_NOSUPPORT 0x0004
+2 -2
fs/nfs/client.c
··· 280 280 static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *data) 281 281 { 282 282 struct nfs_client *clp; 283 - const struct sockaddr *sap = data->addr; 283 + const struct sockaddr *sap = (struct sockaddr *)data->addr; 284 284 struct nfs_net *nn = net_generic(data->net, nfs_net_id); 285 285 int error; 286 286 ··· 666 666 struct rpc_timeout timeparms; 667 667 struct nfs_client_initdata cl_init = { 668 668 .hostname = ctx->nfs_server.hostname, 669 - .addr = (const struct sockaddr *)&ctx->nfs_server.address, 669 + .addr = &ctx->nfs_server._address, 670 670 .addrlen = ctx->nfs_server.addrlen, 671 671 .nfs_mod = ctx->nfs_mod, 672 672 .proto = ctx->nfs_server.protocol,
+17 -19
fs/nfs/delegation.c
··· 228 228 * 229 229 */ 230 230 void nfs_inode_reclaim_delegation(struct inode *inode, const struct cred *cred, 231 - fmode_t type, 232 - const nfs4_stateid *stateid, 231 + fmode_t type, const nfs4_stateid *stateid, 233 232 unsigned long pagemod_limit) 234 233 { 235 234 struct nfs_delegation *delegation; ··· 238 239 delegation = rcu_dereference(NFS_I(inode)->delegation); 239 240 if (delegation != NULL) { 240 241 spin_lock(&delegation->lock); 241 - if (nfs4_is_valid_delegation(delegation, 0)) { 242 - nfs4_stateid_copy(&delegation->stateid, stateid); 243 - delegation->type = type; 244 - delegation->pagemod_limit = pagemod_limit; 245 - oldcred = delegation->cred; 246 - delegation->cred = get_cred(cred); 247 - clear_bit(NFS_DELEGATION_NEED_RECLAIM, 248 - &delegation->flags); 249 - spin_unlock(&delegation->lock); 250 - rcu_read_unlock(); 251 - put_cred(oldcred); 252 - trace_nfs4_reclaim_delegation(inode, type); 253 - return; 254 - } 255 - /* We appear to have raced with a delegation return. */ 242 + nfs4_stateid_copy(&delegation->stateid, stateid); 243 + delegation->type = type; 244 + delegation->pagemod_limit = pagemod_limit; 245 + oldcred = delegation->cred; 246 + delegation->cred = get_cred(cred); 247 + clear_bit(NFS_DELEGATION_NEED_RECLAIM, &delegation->flags); 248 + if (test_and_clear_bit(NFS_DELEGATION_REVOKED, 249 + &delegation->flags)) 250 + atomic_long_inc(&nfs_active_delegations); 256 251 spin_unlock(&delegation->lock); 252 + rcu_read_unlock(); 253 + put_cred(oldcred); 254 + trace_nfs4_reclaim_delegation(inode, type); 255 + } else { 256 + rcu_read_unlock(); 257 + nfs_inode_set_delegation(inode, cred, type, stateid, 258 + pagemod_limit); 257 259 } 258 - rcu_read_unlock(); 259 - nfs_inode_set_delegation(inode, cred, type, stateid, pagemod_limit); 260 260 } 261 261 262 262 static int nfs_do_return_delegation(struct inode *inode, struct nfs_delegation *delegation, int issync)
+2 -3
fs/nfs/dir.c
··· 2489 2489 spin_unlock(&dentry->d_lock); 2490 2490 goto out; 2491 2491 } 2492 - if (dentry->d_fsdata) 2493 - /* old devname */ 2494 - kfree(dentry->d_fsdata); 2492 + /* old devname */ 2493 + kfree(dentry->d_fsdata); 2495 2494 dentry->d_fsdata = NFS_FSDATA_BLOCKED; 2496 2495 2497 2496 spin_unlock(&dentry->d_lock);
+4 -3
fs/nfs/dns_resolve.c
··· 16 16 #include "dns_resolve.h" 17 17 18 18 ssize_t nfs_dns_resolve_name(struct net *net, char *name, size_t namelen, 19 - struct sockaddr *sa, size_t salen) 19 + struct sockaddr_storage *ss, size_t salen) 20 20 { 21 + struct sockaddr *sa = (struct sockaddr *)ss; 21 22 ssize_t ret; 22 23 char *ip_addr = NULL; 23 24 int ip_len; ··· 342 341 } 343 342 344 343 ssize_t nfs_dns_resolve_name(struct net *net, char *name, 345 - size_t namelen, struct sockaddr *sa, size_t salen) 344 + size_t namelen, struct sockaddr_storage *ss, size_t salen) 346 345 { 347 346 struct nfs_dns_ent key = { 348 347 .hostname = name, ··· 355 354 ret = do_cache_lookup_wait(nn->nfs_dns_resolve, &key, &item); 356 355 if (ret == 0) { 357 356 if (salen >= item->addrlen) { 358 - memcpy(sa, &item->addr, item->addrlen); 357 + memcpy(ss, &item->addr, item->addrlen); 359 358 ret = item->addrlen; 360 359 } else 361 360 ret = -EOVERFLOW;
+1 -1
fs/nfs/dns_resolve.h
··· 32 32 #endif 33 33 34 34 extern ssize_t nfs_dns_resolve_name(struct net *net, char *name, 35 - size_t namelen, struct sockaddr *sa, size_t salen); 35 + size_t namelen, struct sockaddr_storage *sa, size_t salen); 36 36 37 37 #endif
+7 -7
fs/nfs/fs_context.c
··· 273 273 * Address family must be initialized, and address must not be 274 274 * the ANY address for that family. 275 275 */ 276 - static int nfs_verify_server_address(struct sockaddr *addr) 276 + static int nfs_verify_server_address(struct sockaddr_storage *addr) 277 277 { 278 - switch (addr->sa_family) { 278 + switch (addr->ss_family) { 279 279 case AF_INET: { 280 280 struct sockaddr_in *sa = (struct sockaddr_in *)addr; 281 281 return sa->sin_addr.s_addr != htonl(INADDR_ANY); ··· 969 969 { 970 970 struct nfs_fs_context *ctx = nfs_fc2context(fc); 971 971 struct nfs_fh *mntfh = ctx->mntfh; 972 - struct sockaddr *sap = (struct sockaddr *)&ctx->nfs_server.address; 972 + struct sockaddr_storage *sap = &ctx->nfs_server._address; 973 973 int extra_flags = NFS_MOUNT_LEGACY_INTERFACE; 974 974 int ret; 975 975 ··· 1044 1044 memcpy(sap, &data->addr, sizeof(data->addr)); 1045 1045 ctx->nfs_server.addrlen = sizeof(data->addr); 1046 1046 ctx->nfs_server.port = ntohs(data->addr.sin_port); 1047 - if (sap->sa_family != AF_INET || 1047 + if (sap->ss_family != AF_INET || 1048 1048 !nfs_verify_server_address(sap)) 1049 1049 goto out_no_address; 1050 1050 ··· 1200 1200 struct nfs4_mount_data *data) 1201 1201 { 1202 1202 struct nfs_fs_context *ctx = nfs_fc2context(fc); 1203 - struct sockaddr *sap = (struct sockaddr *)&ctx->nfs_server.address; 1203 + struct sockaddr_storage *sap = &ctx->nfs_server._address; 1204 1204 int ret; 1205 1205 char *c; 1206 1206 ··· 1314 1314 { 1315 1315 struct nfs_fs_context *ctx = nfs_fc2context(fc); 1316 1316 struct nfs_subversion *nfs_mod; 1317 - struct sockaddr *sap = (struct sockaddr *)&ctx->nfs_server.address; 1317 + struct sockaddr_storage *sap = &ctx->nfs_server._address; 1318 1318 int max_namelen = PAGE_SIZE; 1319 1319 int max_pathlen = NFS_MAXPATHLEN; 1320 1320 int port = 0; ··· 1540 1540 ctx->version = nfss->nfs_client->rpc_ops->version; 1541 1541 ctx->minorversion = nfss->nfs_client->cl_minorversion; 1542 1542 1543 - memcpy(&ctx->nfs_server.address, &nfss->nfs_client->cl_addr, 1543 + memcpy(&ctx->nfs_server._address, &nfss->nfs_client->cl_addr, 1544 1544 ctx->nfs_server.addrlen); 1545 1545 1546 1546 if (fc->net_ns != net) {
+7 -7
fs/nfs/internal.h
··· 69 69 struct nfs_client_initdata { 70 70 unsigned long init_flags; 71 71 const char *hostname; /* Hostname of the server */ 72 - const struct sockaddr *addr; /* Address of the server */ 72 + const struct sockaddr_storage *addr; /* Address of the server */ 73 73 const char *nodename; /* Hostname of the client */ 74 74 const char *ip_addr; /* IP address of the client */ 75 75 size_t addrlen; ··· 180 180 181 181 /* mount_clnt.c */ 182 182 struct nfs_mount_request { 183 - struct sockaddr *sap; 183 + struct sockaddr_storage *sap; 184 184 size_t salen; 185 185 char *hostname; 186 186 char *dirpath; ··· 223 223 extern struct nfs_server *nfs4_create_server(struct fs_context *); 224 224 extern struct nfs_server *nfs4_create_referral_server(struct fs_context *); 225 225 extern int nfs4_update_server(struct nfs_server *server, const char *hostname, 226 - struct sockaddr *sap, size_t salen, 226 + struct sockaddr_storage *sap, size_t salen, 227 227 struct net *net); 228 228 extern void nfs_free_server(struct nfs_server *server); 229 229 extern struct nfs_server *nfs_clone_server(struct nfs_server *, ··· 235 235 extern int nfs_wait_client_init_complete(const struct nfs_client *clp); 236 236 extern void nfs_mark_client_ready(struct nfs_client *clp, int state); 237 237 extern struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv, 238 - const struct sockaddr *ds_addr, 238 + const struct sockaddr_storage *ds_addr, 239 239 int ds_addrlen, int ds_proto, 240 240 unsigned int ds_timeo, 241 241 unsigned int ds_retrans, ··· 243 243 extern struct rpc_clnt *nfs4_find_or_create_ds_client(struct nfs_client *, 244 244 struct inode *); 245 245 extern struct nfs_client *nfs3_set_ds_client(struct nfs_server *mds_srv, 246 - const struct sockaddr *ds_addr, int ds_addrlen, 246 + const struct sockaddr_storage *ds_addr, int ds_addrlen, 247 247 int ds_proto, unsigned int ds_timeo, 248 248 unsigned int ds_retrans); 249 249 #ifdef CONFIG_PROC_FS ··· 894 894 * Select between a default port value and a user-specified port value. 895 895 * If a zero value is set, then autobind will be used. 896 896 */ 897 - static inline void nfs_set_port(struct sockaddr *sap, int *port, 897 + static inline void nfs_set_port(struct sockaddr_storage *sap, int *port, 898 898 const unsigned short default_port) 899 899 { 900 900 if (*port == NFS_UNSPEC_PORT) 901 901 *port = default_port; 902 902 903 - rpc_set_port(sap, *port); 903 + rpc_set_port((struct sockaddr *)sap, *port); 904 904 } 905 905 906 906 struct nfs_direct_req {
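The NFS hunks in this series consistently move from struct sockaddr, which is only 16 bytes, to struct sockaddr_storage, which is sized and aligned to hold any address family including IPv6, casting back to struct sockaddr * only at the RPC call sites. A runnable size check showing why the generic storage type is needed:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	/* struct sockaddr cannot hold a sockaddr_in6, so code that stores
	 * arbitrary addresses must use struct sockaddr_storage. */
	printf("sockaddr:         %zu bytes\n", sizeof(struct sockaddr));
	printf("sockaddr_in6:     %zu bytes\n", sizeof(struct sockaddr_in6));
	printf("sockaddr_storage: %zu bytes\n", sizeof(struct sockaddr_storage));
	return 0;
}
```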
+2 -2
fs/nfs/mount_clnt.c
··· 158 158 struct rpc_create_args args = { 159 159 .net = info->net, 160 160 .protocol = info->protocol, 161 - .address = info->sap, 161 + .address = (struct sockaddr *)info->sap, 162 162 .addrsize = info->salen, 163 163 .timeout = &mnt_timeout, 164 164 .servername = info->hostname, ··· 245 245 struct rpc_create_args args = { 246 246 .net = info->net, 247 247 .protocol = IPPROTO_UDP, 248 - .address = info->sap, 248 + .address = (struct sockaddr *)info->sap, 249 249 .addrsize = info->salen, 250 250 .timeout = &nfs_umnt_timeout, 251 251 .servername = info->hostname,
+1 -1
fs/nfs/namespace.c
··· 175 175 } 176 176 177 177 /* for submounts we want the same server; referrals will reassign */ 178 - memcpy(&ctx->nfs_server.address, &client->cl_addr, client->cl_addrlen); 178 + memcpy(&ctx->nfs_server._address, &client->cl_addr, client->cl_addrlen); 179 179 ctx->nfs_server.addrlen = client->cl_addrlen; 180 180 ctx->nfs_server.port = server->port; 181 181
+2 -2
fs/nfs/nfs3client.c
··· 78 78 * the MDS. 79 79 */ 80 80 struct nfs_client *nfs3_set_ds_client(struct nfs_server *mds_srv, 81 - const struct sockaddr *ds_addr, int ds_addrlen, 81 + const struct sockaddr_storage *ds_addr, int ds_addrlen, 82 82 int ds_proto, unsigned int ds_timeo, unsigned int ds_retrans) 83 83 { 84 84 struct rpc_timeout ds_timeout; ··· 98 98 char buf[INET6_ADDRSTRLEN + 1]; 99 99 100 100 /* fake a hostname because lockd wants it */ 101 - if (rpc_ntop(ds_addr, buf, sizeof(buf)) <= 0) 101 + if (rpc_ntop((struct sockaddr *)ds_addr, buf, sizeof(buf)) <= 0) 102 102 return ERR_PTR(-EINVAL); 103 103 cl_init.hostname = buf; 104 104
+3
fs/nfs/nfs42proc.c
··· 1093 1093 &args.seq_args, &res.seq_res, 0); 1094 1094 trace_nfs4_clone(src_inode, dst_inode, &args, status); 1095 1095 if (status == 0) { 1096 + /* a zero-length count means clone to EOF in src */ 1097 + if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE) 1098 + count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset; 1096 1099 nfs42_copy_dest_done(dst_inode, dst_offset, count); 1097 1100 status = nfs_post_op_update_inode(dst_inode, res.dst_fattr); 1098 1101 }
+1 -1
fs/nfs/nfs4_fs.h
··· 281 281 int nfs4_submount(struct fs_context *, struct nfs_server *); 282 282 int nfs4_replace_transport(struct nfs_server *server, 283 283 const struct nfs4_fs_locations *locations); 284 - size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa, 284 + size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr_storage *ss, 285 285 size_t salen, struct net *net, int port); 286 286 /* nfs4proc.c */ 287 287 extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *);
+10 -9
fs/nfs/nfs4client.c
··· 346 346 ret = nfs4_setup_slot_table(tbl, NFS4_MAX_SLOT_TABLE, 347 347 "NFSv4.0 transport Slot table"); 348 348 if (ret) { 349 + nfs4_shutdown_slot_table(tbl); 349 350 kfree(tbl); 350 351 return ret; 351 352 } ··· 890 889 */ 891 890 static int nfs4_set_client(struct nfs_server *server, 892 891 const char *hostname, 893 - const struct sockaddr *addr, 892 + const struct sockaddr_storage *addr, 894 893 const size_t addrlen, 895 894 const char *ip_addr, 896 895 int proto, const struct rpc_timeout *timeparms, ··· 925 924 __set_bit(NFS_CS_MIGRATION, &cl_init.init_flags); 926 925 if (test_bit(NFS_MIG_TSM_POSSIBLE, &server->mig_status)) 927 926 __set_bit(NFS_CS_TSM_POSSIBLE, &cl_init.init_flags); 928 - server->port = rpc_get_port(addr); 927 + server->port = rpc_get_port((struct sockaddr *)addr); 929 928 930 929 /* Allocate or find a client reference we can use */ 931 930 clp = nfs_get_client(&cl_init); ··· 961 960 * the MDS. 962 961 */ 963 962 struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv, 964 - const struct sockaddr *ds_addr, int ds_addrlen, 963 + const struct sockaddr_storage *ds_addr, int ds_addrlen, 965 964 int ds_proto, unsigned int ds_timeo, unsigned int ds_retrans, 966 965 u32 minor_version) 967 966 { ··· 981 980 }; 982 981 char buf[INET6_ADDRSTRLEN + 1]; 983 982 984 - if (rpc_ntop(ds_addr, buf, sizeof(buf)) <= 0) 983 + if (rpc_ntop((struct sockaddr *)ds_addr, buf, sizeof(buf)) <= 0) 985 984 return ERR_PTR(-EINVAL); 986 985 cl_init.hostname = buf; 987 986 ··· 1149 1148 /* Get a client record */ 1150 1149 error = nfs4_set_client(server, 1151 1150 ctx->nfs_server.hostname, 1152 - &ctx->nfs_server.address, 1151 + &ctx->nfs_server._address, 1153 1152 ctx->nfs_server.addrlen, 1154 1153 ctx->client_address, 1155 1154 ctx->nfs_server.protocol, ··· 1239 1238 rpc_set_port(&ctx->nfs_server.address, NFS_RDMA_PORT); 1240 1239 error = nfs4_set_client(server, 1241 1240 ctx->nfs_server.hostname, 1242 - &ctx->nfs_server.address, 1241 + &ctx->nfs_server._address, 1243 1242 ctx->nfs_server.addrlen, 1244 1243 parent_client->cl_ipaddr, 1245 1244 XPRT_TRANSPORT_RDMA, ··· 1255 1254 rpc_set_port(&ctx->nfs_server.address, NFS_PORT); 1256 1255 error = nfs4_set_client(server, 1257 1256 ctx->nfs_server.hostname, 1258 - &ctx->nfs_server.address, 1257 + &ctx->nfs_server._address, 1259 1258 ctx->nfs_server.addrlen, 1260 1259 parent_client->cl_ipaddr, 1261 1260 XPRT_TRANSPORT_TCP, ··· 1304 1303 * Returns zero on success, or a negative errno value. 1305 1304 */ 1306 1305 int nfs4_update_server(struct nfs_server *server, const char *hostname, 1307 - struct sockaddr *sap, size_t salen, struct net *net) 1306 + struct sockaddr_storage *sap, size_t salen, struct net *net) 1308 1307 { 1309 1308 struct nfs_client *clp = server->nfs_client; 1310 1309 struct rpc_clnt *clnt = server->client; 1311 1310 struct xprt_create xargs = { 1312 1311 .ident = clp->cl_proto, 1313 1312 .net = net, 1314 - .dstaddr = sap, 1313 + .dstaddr = (struct sockaddr *)sap, 1315 1314 .addrlen = salen, 1316 1315 .servername = hostname, 1317 1316 };
+8 -8
fs/nfs/nfs4namespace.c
··· 164 164 return 0; 165 165 } 166 166 167 - size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa, 167 + size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr_storage *ss, 168 168 size_t salen, struct net *net, int port) 169 169 { 170 + struct sockaddr *sa = (struct sockaddr *)ss; 170 171 ssize_t ret; 171 172 172 173 ret = rpc_pton(net, string, len, sa, salen); 173 174 if (ret == 0) { 174 175 ret = rpc_uaddr2sockaddr(net, string, len, sa, salen); 175 176 if (ret == 0) { 176 - ret = nfs_dns_resolve_name(net, string, len, sa, salen); 177 + ret = nfs_dns_resolve_name(net, string, len, ss, salen); 177 178 if (ret < 0) 178 179 ret = 0; 179 180 } ··· 332 331 333 332 ctx->nfs_server.addrlen = 334 333 nfs_parse_server_name(buf->data, buf->len, 335 - &ctx->nfs_server.address, 334 + &ctx->nfs_server._address, 336 335 sizeof(ctx->nfs_server._address), 337 336 fc->net_ns, 0); 338 337 if (ctx->nfs_server.addrlen == 0) ··· 484 483 char *page, char *page2, 485 484 const struct nfs4_fs_location *location) 486 485 { 487 - const size_t addr_bufsize = sizeof(struct sockaddr_storage); 488 486 struct net *net = rpc_net_ns(server->client); 489 - struct sockaddr *sap; 487 + struct sockaddr_storage *sap; 490 488 unsigned int s; 491 489 size_t salen; 492 490 int error; 493 491 494 - sap = kmalloc(addr_bufsize, GFP_KERNEL); 492 + sap = kmalloc(sizeof(*sap), GFP_KERNEL); 495 493 if (sap == NULL) 496 494 return -ENOMEM; 497 495 ··· 506 506 continue; 507 507 508 508 salen = nfs_parse_server_name(buf->data, buf->len, 509 - sap, addr_bufsize, net, 0); 509 + sap, sizeof(*sap), net, 0); 510 510 if (salen == 0) 511 511 continue; 512 - rpc_set_port(sap, NFS_PORT); 512 + rpc_set_port((struct sockaddr *)sap, NFS_PORT); 513 513 514 514 error = -ENOMEM; 515 515 hostname = kmemdup_nul(buf->data, buf->len, GFP_KERNEL);
+6 -4
fs/nfs/nfs4proc.c
··· 3951 3951 3952 3952 for (i = 0; i < location->nservers; i++) { 3953 3953 struct nfs4_string *srv_loc = &location->servers[i]; 3954 - struct sockaddr addr; 3954 + struct sockaddr_storage addr; 3955 3955 size_t addrlen; 3956 3956 struct xprt_create xprt_args = { 3957 3957 .ident = 0, ··· 3974 3974 clp->cl_net, server->port); 3975 3975 if (!addrlen) 3976 3976 return; 3977 - xprt_args.dstaddr = &addr; 3977 + xprt_args.dstaddr = (struct sockaddr *)&addr; 3978 3978 xprt_args.addrlen = addrlen; 3979 3979 servername = kmalloc(srv_loc->len + 1, GFP_KERNEL); 3980 3980 if (!servername) ··· 7138 7138 { 7139 7139 struct nfs4_lockdata *data = calldata; 7140 7140 struct nfs4_lock_state *lsp = data->lsp; 7141 + struct nfs_server *server = NFS_SERVER(d_inode(data->ctx->dentry)); 7141 7142 7142 7143 if (!nfs4_sequence_done(task, &data->res.seq_res)) 7143 7144 return; ··· 7146 7145 data->rpc_status = task->tk_status; 7147 7146 switch (task->tk_status) { 7148 7147 case 0: 7149 - renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)), 7150 - data->timestamp); 7148 + renew_lease(server, data->timestamp); 7151 7149 if (data->arg.new_lock && !data->cancelled) { 7152 7150 data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS); 7153 7151 if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0) ··· 7166 7166 if (data->arg.new_lock_owner != 0) { 7167 7167 if (!nfs4_stateid_match(&data->arg.open_stateid, 7168 7168 &lsp->ls_state->open_stateid)) 7169 + goto out_restart; 7170 + else if (nfs4_async_handle_error(task, server, lsp->ls_state, NULL) == -EAGAIN) 7169 7171 goto out_restart; 7170 7172 } else if (!nfs4_stateid_match(&data->arg.lock_stateid, 7171 7173 &lsp->ls_stateid))
+2
fs/nfs/nfs4state.c
··· 1786 1786 1787 1787 static void nfs4_state_start_reclaim_reboot(struct nfs_client *clp) 1788 1788 { 1789 + set_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state); 1789 1790 /* Mark all delegations for reclaim */ 1790 1791 nfs_delegation_mark_reclaim(clp); 1791 1792 nfs4_state_mark_reclaim_helper(clp, nfs4_state_mark_reclaim_reboot); ··· 2671 2670 if (status < 0) 2672 2671 goto out_error; 2673 2672 nfs4_state_end_reclaim_reboot(clp); 2673 + continue; 2674 2674 } 2675 2675 2676 2676 /* Detect expired delegations... */
+3 -3
fs/nfs/pnfs_nfs.c
··· 821 821 822 822 static struct nfs_client *(*get_v3_ds_connect)( 823 823 struct nfs_server *mds_srv, 824 - const struct sockaddr *ds_addr, 824 + const struct sockaddr_storage *ds_addr, 825 825 int ds_addrlen, 826 826 int ds_proto, 827 827 unsigned int ds_timeo, ··· 882 882 continue; 883 883 } 884 884 clp = get_v3_ds_connect(mds_srv, 885 - (struct sockaddr *)&da->da_addr, 885 + &da->da_addr, 886 886 da->da_addrlen, da->da_transport, 887 887 timeo, retrans); 888 888 if (IS_ERR(clp)) ··· 951 951 put_cred(xprtdata.cred); 952 952 } else { 953 953 clp = nfs4_set_ds_client(mds_srv, 954 - (struct sockaddr *)&da->da_addr, 954 + &da->da_addr, 955 955 da->da_addrlen, 956 956 da->da_transport, timeo, 957 957 retrans, minor_version);
+2 -3
fs/nfs/super.c
··· 822 822 { 823 823 struct nfs_fs_context *ctx = nfs_fc2context(fc); 824 824 struct nfs_mount_request request = { 825 - .sap = (struct sockaddr *) 826 - &ctx->mount_server.address, 825 + .sap = &ctx->mount_server._address, 827 826 .dirpath = ctx->nfs_server.export_path, 828 827 .protocol = ctx->mount_server.protocol, 829 828 .fh = root_fh, ··· 853 854 * Construct the mount server's address. 854 855 */ 855 856 if (ctx->mount_server.address.sa_family == AF_UNSPEC) { 856 - memcpy(request.sap, &ctx->nfs_server.address, 857 + memcpy(request.sap, &ctx->nfs_server._address, 857 858 ctx->nfs_server.addrlen); 858 859 ctx->mount_server.addrlen = ctx->nfs_server.addrlen; 859 860 }
+2 -3
fs/nfsd/filecache.c
··· 893 893 894 894 nf = rhashtable_walk_next(&iter); 895 895 while (!IS_ERR_OR_NULL(nf)) { 896 - if (net && nf->nf_net != net) 897 - continue; 898 - nfsd_file_unhash_and_dispose(nf, &dispose); 896 + if (!net || nf->nf_net == net) 897 + nfsd_file_unhash_and_dispose(nf, &dispose); 899 898 nf = rhashtable_walk_next(&iter); 900 899 } 901 900
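In the filecache hunk, the removed `continue` skipped the rhashtable_walk_next() advance at the bottom of the loop, so a single entry from another net would spin forever; the fix inverts the test so the iterator always advances. The shape of the bug in a minimal sketch:

```c
#include <stdio.h>

int main(void)
{
	int items[] = { 1, 2, 3, 4 };
	size_t i = 0;

	while (i < sizeof(items) / sizeof(items[0])) {
		/* Buggy shape: 'if (skip) continue;' here never reaches the
		 * advance below and loops forever on the first skipped item.
		 * Fixed shape: guard the work, advance unconditionally. */
		if (items[i] % 2 == 0)
			printf("dispose %d\n", items[i]);
		i++;	/* stands in for rhashtable_walk_next() */
	}
	return 0;
}
```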
+14 -9
fs/squashfs/file.c
··· 506 506 squashfs_i(inode)->fragment_size); 507 507 struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info; 508 508 unsigned int n, mask = (1 << (msblk->block_log - PAGE_SHIFT)) - 1; 509 + int error = buffer->error; 509 510 510 - if (buffer->error) 511 + if (error) 511 512 goto out; 512 513 513 514 expected += squashfs_i(inode)->fragment_offset; ··· 530 529 531 530 out: 532 531 squashfs_cache_put(buffer); 533 - return buffer->error; 532 + return error; 534 533 } 535 534 536 535 static void squashfs_readahead(struct readahead_control *ractl) ··· 558 557 int res, bsize; 559 558 u64 block = 0; 560 559 unsigned int expected; 560 + struct page *last_page; 561 + 562 + expected = start >> msblk->block_log == file_end ? 563 + (i_size_read(inode) & (msblk->block_size - 1)) : 564 + msblk->block_size; 565 + 566 + max_pages = (expected + PAGE_SIZE - 1) >> PAGE_SHIFT; 561 567 562 568 nr_pages = __readahead_batch(ractl, pages, max_pages); 563 569 if (!nr_pages) ··· 574 566 goto skip_pages; 575 567 576 568 index = pages[0]->index >> shift; 569 + 577 570 if ((pages[nr_pages - 1]->index >> shift) != index) 578 571 goto skip_pages; 579 - 580 - expected = index == file_end ? 581 - (i_size_read(inode) & (msblk->block_size - 1)) : 582 - msblk->block_size; 583 572 584 573 if (index == file_end && squashfs_i(inode)->fragment_block != 585 574 SQUASHFS_INVALID_BLK) { ··· 598 593 599 594 res = squashfs_read_data(inode->i_sb, block, bsize, NULL, actor); 600 595 601 - squashfs_page_actor_free(actor); 596 + last_page = squashfs_page_actor_free(actor); 602 597 603 598 if (res == expected) { 604 599 int bytes; 605 600 606 601 /* Last page (if present) may have trailing bytes not filled */ 607 602 bytes = res % PAGE_SIZE; 608 - if (pages[nr_pages - 1]->index == file_end && bytes) 609 - memzero_page(pages[nr_pages - 1], bytes, 603 + if (index == file_end && bytes && last_page) 604 + memzero_page(last_page, bytes, 610 605 PAGE_SIZE - bytes); 611 606 612 607 for (i = 0; i < nr_pages; i++) {
+3
fs/squashfs/page_actor.c
··· 71 71 (actor->next_index != actor->page[actor->next_page]->index)) { 72 72 actor->next_index++; 73 73 actor->returned_pages++; 74 + actor->last_page = NULL; 74 75 return actor->alloc_buffer ? actor->tmp_buffer : ERR_PTR(-ENOMEM); 75 76 } 76 77 77 78 actor->next_index++; 78 79 actor->returned_pages++; 80 + actor->last_page = actor->page[actor->next_page]; 79 81 return actor->pageaddr = kmap_local_page(actor->page[actor->next_page++]); 80 82 } 81 83 ··· 127 125 actor->returned_pages = 0; 128 126 actor->next_index = page[0]->index & ~((1 << (msblk->block_log - PAGE_SHIFT)) - 1); 129 127 actor->pageaddr = NULL; 128 + actor->last_page = NULL; 130 129 actor->alloc_buffer = msblk->decompressor->alloc_buffer; 131 130 actor->squashfs_first_page = direct_first_page; 132 131 actor->squashfs_next_page = direct_next_page;
+5 -1
fs/squashfs/page_actor.h
··· 16 16 void *(*squashfs_first_page)(struct squashfs_page_actor *); 17 17 void *(*squashfs_next_page)(struct squashfs_page_actor *); 18 18 void (*squashfs_finish_page)(struct squashfs_page_actor *); 19 + struct page *last_page; 19 20 int pages; 20 21 int length; 21 22 int next_page; ··· 30 29 extern struct squashfs_page_actor *squashfs_page_actor_init_special( 31 30 struct squashfs_sb_info *msblk, 32 31 struct page **page, int pages, int length); 33 - static inline void squashfs_page_actor_free(struct squashfs_page_actor *actor) 32 + static inline struct page *squashfs_page_actor_free(struct squashfs_page_actor *actor) 34 33 { 34 + struct page *last_page = actor->last_page; 35 + 35 36 kfree(actor->tmp_buffer); 36 37 kfree(actor); 38 + return last_page; 37 39 } 38 40 static inline void *squashfs_first_page(struct squashfs_page_actor *actor) 39 41 {
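The squashfs readahead fix computes the expected byte count before batching pages, so the file's final partial block only claims as many pages as it can actually fill. The same arithmetic in a runnable sketch with made-up sizes:

```c
#include <stdio.h>

int main(void)
{
	const unsigned int block_log = 17;		/* 128K blocks */
	const unsigned long block_size = 1UL << block_log;
	const unsigned long page_size = 4096;
	const unsigned long long i_size = 1000000;	/* file size in bytes */

	unsigned long long start = 917504;		/* offset being read */
	unsigned long long file_end = i_size >> block_log;

	/* Mirrors the patch: a partial amount for the file's last block,
	 * a full block for everything else. */
	unsigned long expected = (start >> block_log) == file_end ?
			(unsigned long)(i_size & (block_size - 1)) : block_size;

	/* Pages needed to hold it, rounded up. */
	unsigned long max_pages = (expected + page_size - 1) / page_size;

	printf("expected=%lu max_pages=%lu\n", expected, max_pages);
	return 0;
}
```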
+1 -1
include/asm-generic/compat.h
··· 15 15 #endif 16 16 17 17 #ifndef compat_arg_u64 18 - #ifdef CONFIG_CPU_BIG_ENDIAN 18 + #ifndef CONFIG_CPU_BIG_ENDIAN 19 19 #define compat_arg_u64(name) u32 name##_lo, u32 name##_hi 20 20 #define compat_arg_u64_dual(name) u32, name##_lo, u32, name##_hi 21 21 #else
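The compat.h fix flips an inverted #ifdef: compat 64-bit syscall arguments are split into two 32-bit halves whose register order depends on endianness, and the macro had the low/high ordering backwards for little-endian. How the halves recombine, as a runnable sketch:

```c
#include <stdio.h>
#include <stdint.h>

/* Reassemble a 64-bit argument passed as two 32-bit halves. On
 * little-endian the low half comes first, which is why the #ifdef
 * polarity in compat_arg_u64() mattered. */
static uint64_t join_le(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	printf("0x%llx\n",
	       (unsigned long long)join_le(0x89abcdefu, 0x01234567u));
	return 0;
}
```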
+2 -1
include/linux/blk-mq.h
··· 853 853 struct io_comp_batch *iob, int ioerror, 854 854 void (*complete)(struct io_comp_batch *)) 855 855 { 856 - if (!iob || (req->rq_flags & RQF_ELV) || ioerror) 856 + if (!iob || (req->rq_flags & RQF_ELV) || ioerror || 857 + (req->end_io && !blk_rq_is_passthrough(req))) 857 858 return false; 858 859 859 860 if (!iob->complete)
+2 -3
include/linux/counter.h
··· 542 542 #define DEFINE_COUNTER_ARRAY_CAPTURE(_name, _length) \ 543 543 DEFINE_COUNTER_ARRAY_U64(_name, _length) 544 544 545 - #define DEFINE_COUNTER_ARRAY_POLARITY(_name, _enums, _length) \ 546 - DEFINE_COUNTER_AVAILABLE(_name##_available, _enums); \ 545 + #define DEFINE_COUNTER_ARRAY_POLARITY(_name, _available, _length) \ 547 546 struct counter_array _name = { \ 548 547 .type = COUNTER_COMP_SIGNAL_POLARITY, \ 549 - .avail = &(_name##_available), \ 548 + .avail = &(_available), \ 550 549 .length = (_length), \ 551 550 } 552 551
+1 -1
include/linux/fb.h
··· 555 555 556 556 #elif defined(__i386__) || defined(__alpha__) || defined(__x86_64__) || \ 557 557 defined(__hppa__) || defined(__sh__) || defined(__powerpc__) || \ 558 - defined(__arm__) || defined(__aarch64__) 558 + defined(__arm__) || defined(__aarch64__) || defined(__mips__) 559 559 560 560 #define fb_readb __raw_readb 561 561 #define fb_readw __raw_readw
+15 -2
include/linux/fortify-string.h
··· 43 43 extern char *__underlying_strncat(char *p, const char *q, __kernel_size_t count) __RENAME(strncat); 44 44 extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __RENAME(strncpy); 45 45 #else 46 - #define __underlying_memchr __builtin_memchr 47 - #define __underlying_memcmp __builtin_memcmp 46 + 47 + #if defined(__SANITIZE_MEMORY__) 48 + /* 49 + * For KMSAN builds all memcpy/memset/memmove calls should be replaced by the 50 + * corresponding __msan_XXX functions. 51 + */ 52 + #include <linux/kmsan_string.h> 53 + #define __underlying_memcpy __msan_memcpy 54 + #define __underlying_memmove __msan_memmove 55 + #define __underlying_memset __msan_memset 56 + #else 48 57 #define __underlying_memcpy __builtin_memcpy 49 58 #define __underlying_memmove __builtin_memmove 50 59 #define __underlying_memset __builtin_memset 60 + #endif 61 + 62 + #define __underlying_memchr __builtin_memchr 63 + #define __underlying_memcmp __builtin_memcmp 51 64 #define __underlying_strcat __builtin_strcat 52 65 #define __underlying_strcpy __builtin_strcpy 53 66 #define __underlying_strlen __builtin_strlen
+21
include/linux/kmsan_string.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * KMSAN string functions API used in other headers. 4 + * 5 + * Copyright (C) 2022 Google LLC 6 + * Author: Alexander Potapenko <glider@google.com> 7 + * 8 + */ 9 + #ifndef _LINUX_KMSAN_STRING_H 10 + #define _LINUX_KMSAN_STRING_H 11 + 12 + /* 13 + * KMSAN overrides the default memcpy/memset/memmove implementations in the 14 + * kernel, which requires having __msan_XXX function prototypes in several other 15 + * headers. Keep them in one place instead of open-coding. 16 + */ 17 + void *__msan_memcpy(void *dst, const void *src, size_t size); 18 + void *__msan_memset(void *s, int c, size_t n); 19 + void *__msan_memmove(void *dest, const void *src, size_t len); 20 + 21 + #endif /* _LINUX_KMSAN_STRING_H */
+17 -7
include/linux/kvm_host.h
··· 1240 1240 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); 1241 1241 1242 1242 /** 1243 - * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a 1244 - * given guest physical address. 1243 + * kvm_gpc_init - initialize gfn_to_pfn_cache. 1244 + * 1245 + * @gpc: struct gfn_to_pfn_cache object. 1246 + * 1247 + * This sets up a gfn_to_pfn_cache by initializing locks. Note, the cache must 1248 + * be zero-allocated (or zeroed by the caller before init). 1249 + */ 1250 + void kvm_gpc_init(struct gfn_to_pfn_cache *gpc); 1251 + 1252 + /** 1253 + * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest 1254 + * physical address. 1245 1255 * 1246 1256 * @kvm: pointer to kvm instance. 1247 1257 * @gpc: struct gfn_to_pfn_cache object. ··· 1275 1265 * kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before 1276 1266 * accessing the target page. 1277 1267 */ 1278 - int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, 1279 - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, 1280 - gpa_t gpa, unsigned long len); 1268 + int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, 1269 + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, 1270 + gpa_t gpa, unsigned long len); 1281 1271 1282 1272 /** 1283 1273 * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache. ··· 1334 1324 void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); 1335 1325 1336 1326 /** 1337 - * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache. 1327 + * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache. 1338 1328 * 1339 1329 * @kvm: pointer to kvm instance. 1340 1330 * @gpc: struct gfn_to_pfn_cache object. ··· 1342 1332 * This removes a cache from the @kvm's list to be processed on MMU notifier 1343 1333 * invocation. 1344 1334 */ 1345 - void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); 1335 + void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc); 1346 1336 1347 1337 void kvm_sigset_activate(struct kvm_vcpu *vcpu); 1348 1338 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
+3 -3
include/linux/userfaultfd_k.h
··· 146 146 static inline bool vma_can_userfault(struct vm_area_struct *vma, 147 147 unsigned long vm_flags) 148 148 { 149 - if (vm_flags & VM_UFFD_MINOR) 150 - return is_vm_hugetlb_page(vma) || vma_is_shmem(vma); 151 - 149 + if ((vm_flags & VM_UFFD_MINOR) && 150 + (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma))) 151 + return false; 152 152 #ifndef CONFIG_PTE_MARKER_UFFD_WP 153 153 /* 154 154 * If user requested uffd-wp but not enabled pte markers for
+27 -21
include/net/netlink.h
··· 181 181 NLA_S64, 182 182 NLA_BITFIELD32, 183 183 NLA_REJECT, 184 + NLA_BE16, 185 + NLA_BE32, 184 186 __NLA_TYPE_MAX, 185 187 }; 186 188 ··· 233 231 * NLA_U32, NLA_U64, 234 232 * NLA_S8, NLA_S16, 235 233 * NLA_S32, NLA_S64, 234 + * NLA_BE16, NLA_BE32, 236 235 * NLA_MSECS Leaving the length field zero will verify the 237 236 * given type fits, using it verifies minimum length 238 237 * just like "All other" ··· 264 261 * NLA_U16, 265 262 * NLA_U32, 266 263 * NLA_U64, 264 + * NLA_BE16, 265 + * NLA_BE32, 267 266 * NLA_S8, 268 267 * NLA_S16, 269 268 * NLA_S32, ··· 322 317 u8 validation_type; 323 318 u16 len; 324 319 union { 325 - const u32 bitfield32_valid; 326 - const u32 mask; 327 - const char *reject_message; 328 - const struct nla_policy *nested_policy; 329 - struct netlink_range_validation *range; 330 - struct netlink_range_validation_signed *range_signed; 331 - struct { 332 - s16 min, max; 333 - u8 network_byte_order:1; 334 - }; 335 - int (*validate)(const struct nlattr *attr, 336 - struct netlink_ext_ack *extack); 337 - /* This entry is special, and used for the attribute at index 0 320 + /** 321 + * @strict_start_type: first attribute to validate strictly 322 + * 323 + * This entry is special, and used for the attribute at index 0 338 324 * only, and specifies special data about the policy, namely it 339 325 * specifies the "boundary type" where strict length validation 340 326 * starts for any attribute types >= this value, also, strict ··· 344 348 * was added to enforce strict validation from thereon. 345 349 */ 346 350 u16 strict_start_type; 351 + 352 + /* private: use NLA_POLICY_*() to set */ 353 + const u32 bitfield32_valid; 354 + const u32 mask; 355 + const char *reject_message; 356 + const struct nla_policy *nested_policy; 357 + struct netlink_range_validation *range; 358 + struct netlink_range_validation_signed *range_signed; 359 + struct { 360 + s16 min, max; 361 + }; 362 + int (*validate)(const struct nlattr *attr, 363 + struct netlink_ext_ack *extack); 347 364 }; 348 365 }; 349 366 ··· 378 369 (tp == NLA_U8 || tp == NLA_U16 || tp == NLA_U32 || tp == NLA_U64) 379 370 #define __NLA_IS_SINT_TYPE(tp) \ 380 371 (tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64) 372 + #define __NLA_IS_BEINT_TYPE(tp) \ 373 + (tp == NLA_BE16 || tp == NLA_BE32) 381 374 382 375 #define __NLA_ENSURE(condition) BUILD_BUG_ON_ZERO(!(condition)) 383 376 #define NLA_ENSURE_UINT_TYPE(tp) \ ··· 393 382 #define NLA_ENSURE_INT_OR_BINARY_TYPE(tp) \ 394 383 (__NLA_ENSURE(__NLA_IS_UINT_TYPE(tp) || \ 395 384 __NLA_IS_SINT_TYPE(tp) || \ 385 + __NLA_IS_BEINT_TYPE(tp) || \ 396 386 tp == NLA_MSECS || \ 397 387 tp == NLA_BINARY) + tp) 398 388 #define NLA_ENSURE_NO_VALIDATION_PTR(tp) \ ··· 401 389 tp != NLA_REJECT && \ 402 390 tp != NLA_NESTED && \ 403 391 tp != NLA_NESTED_ARRAY) + tp) 392 + #define NLA_ENSURE_BEINT_TYPE(tp) \ 393 + (__NLA_ENSURE(__NLA_IS_BEINT_TYPE(tp)) + tp) 404 394 405 395 #define NLA_POLICY_RANGE(tp, _min, _max) { \ 406 396 .type = NLA_ENSURE_INT_OR_BINARY_TYPE(tp), \ ··· 433 419 .type = NLA_ENSURE_INT_OR_BINARY_TYPE(tp), \ 434 420 .validation_type = NLA_VALIDATE_MAX, \ 435 421 .max = _max, \ 436 - .network_byte_order = 0, \ 437 - } 438 - 439 - #define NLA_POLICY_MAX_BE(tp, _max) { \ 440 - .type = NLA_ENSURE_UINT_TYPE(tp), \ 441 - .validation_type = NLA_VALIDATE_MAX, \ 442 - .max = _max, \ 443 - .network_byte_order = 1, \ 444 422 } 445 423 446 424 #define NLA_POLICY_MASK(tp, _mask) { \
+7
include/net/sock.h
··· 1889 1889 void sock_kzfree_s(struct sock *sk, void *mem, int size); 1890 1890 void sk_send_sigurg(struct sock *sk); 1891 1891 1892 + static inline void sock_replace_proto(struct sock *sk, struct proto *proto) 1893 + { 1894 + if (sk->sk_socket) 1895 + clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); 1896 + WRITE_ONCE(sk->sk_prot, proto); 1897 + } 1898 + 1892 1899 struct sockcm_cookie { 1893 1900 u64 transmit_time; 1894 1901 u32 mark;
+1
include/sound/control.h
··· 138 138 int snd_ctl_replace(struct snd_card *card, struct snd_kcontrol *kcontrol, bool add_on_replace); 139 139 int snd_ctl_remove_id(struct snd_card * card, struct snd_ctl_elem_id *id); 140 140 int snd_ctl_rename_id(struct snd_card * card, struct snd_ctl_elem_id *src_id, struct snd_ctl_elem_id *dst_id); 141 + void snd_ctl_rename(struct snd_card *card, struct snd_kcontrol *kctl, const char *name); 141 142 int snd_ctl_activate_id(struct snd_card *card, struct snd_ctl_elem_id *id, int active); 142 143 struct snd_kcontrol *snd_ctl_find_numid(struct snd_card * card, unsigned int numid); 143 144 struct snd_kcontrol *snd_ctl_find_id(struct snd_card * card, struct snd_ctl_elem_id *id);
+1
include/sound/simple_card_utils.h
··· 177 177 struct snd_pcm_hw_params *params); 178 178 void asoc_simple_parse_convert(struct device_node *np, char *prefix, 179 179 struct asoc_simple_data *data); 180 + bool asoc_simple_is_convert_required(const struct asoc_simple_data *data); 180 181 181 182 int asoc_simple_parse_routing(struct snd_soc_card *card, 182 183 char *prefix);
+2
include/uapi/drm/amdgpu_drm.h
··· 763 763 #define AMDGPU_INFO_FW_MES_KIQ 0x19 764 764 /* Subquery id: Query MES firmware version */ 765 765 #define AMDGPU_INFO_FW_MES 0x1a 766 + /* Subquery id: Query IMU firmware version */ 767 + #define AMDGPU_INFO_FW_IMU 0x1b 766 768 767 769 /* number of bytes moved for TTM migration */ 768 770 #define AMDGPU_INFO_NUM_BYTES_MOVED 0x0f
+1 -1
include/uapi/linux/perf_event.h
··· 1337 1337 #define PERF_MEM_LVLNUM_L3 0x03 /* L3 */ 1338 1338 #define PERF_MEM_LVLNUM_L4 0x04 /* L4 */ 1339 1339 /* 5-0x8 available */ 1340 - #define PERF_MEM_LVLNUM_EXTN_MEM 0x09 /* Extension memory */ 1340 + #define PERF_MEM_LVLNUM_CXL 0x09 /* CXL */ 1341 1341 #define PERF_MEM_LVLNUM_IO 0x0a /* I/O */ 1342 1342 #define PERF_MEM_LVLNUM_ANY_CACHE 0x0b /* Any cache */ 1343 1343 #define PERF_MEM_LVLNUM_LFB 0x0c /* LFB */
+5 -6
io_uring/io_uring.c
··· 1173 1173 } 1174 1174 } 1175 1175 1176 - int __io_run_local_work(struct io_ring_ctx *ctx, bool locked) 1176 + int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked) 1177 1177 { 1178 1178 struct llist_node *node; 1179 1179 struct llist_node fake; ··· 1192 1192 struct io_kiocb *req = container_of(node, struct io_kiocb, 1193 1193 io_task_work.node); 1194 1194 prefetch(container_of(next, struct io_kiocb, io_task_work.node)); 1195 - req->io_task_work.func(req, &locked); 1195 + req->io_task_work.func(req, locked); 1196 1196 ret++; 1197 1197 node = next; 1198 1198 } ··· 1208 1208 goto again; 1209 1209 } 1210 1210 1211 - if (locked) 1211 + if (*locked) 1212 1212 io_submit_flush_completions(ctx); 1213 1213 trace_io_uring_local_work_run(ctx, ret, loops); 1214 1214 return ret; ··· 1225 1225 1226 1226 __set_current_state(TASK_RUNNING); 1227 1227 locked = mutex_trylock(&ctx->uring_lock); 1228 - ret = __io_run_local_work(ctx, locked); 1228 + ret = __io_run_local_work(ctx, &locked); 1229 1229 if (locked) 1230 1230 mutex_unlock(&ctx->uring_lock); 1231 1231 ··· 1446 1446 io_task_work_pending(ctx)) { 1447 1447 u32 tail = ctx->cached_cq_tail; 1448 1448 1449 - if (!llist_empty(&ctx->work_llist)) 1450 - __io_run_local_work(ctx, true); 1449 + (void) io_run_local_work_locked(ctx); 1451 1450 1452 1451 if (task_work_pending(current) || 1453 1452 wq_list_empty(&ctx->iopoll_list)) {
+11 -2
io_uring/io_uring.h
··· 27 27 struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx, bool overflow); 28 28 bool io_req_cqe_overflow(struct io_kiocb *req); 29 29 int io_run_task_work_sig(struct io_ring_ctx *ctx); 30 - int __io_run_local_work(struct io_ring_ctx *ctx, bool locked); 30 + int __io_run_local_work(struct io_ring_ctx *ctx, bool *locked); 31 31 int io_run_local_work(struct io_ring_ctx *ctx); 32 32 void io_req_complete_failed(struct io_kiocb *req, s32 res); 33 33 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags); ··· 277 277 278 278 static inline int io_run_local_work_locked(struct io_ring_ctx *ctx) 279 279 { 280 + bool locked; 281 + int ret; 282 + 280 283 if (llist_empty(&ctx->work_llist)) 281 284 return 0; 282 - return __io_run_local_work(ctx, true); 285 + 286 + locked = true; 287 + ret = __io_run_local_work(ctx, &locked); 288 + /* shouldn't happen! */ 289 + if (WARN_ON_ONCE(!locked)) 290 + mutex_lock(&ctx->uring_lock); 291 + return ret; 283 292 } 284 293 285 294 static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
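Passing the lock state by pointer lets the task-work callbacks run by __io_run_local_work() drop and retake uring_lock while the caller still observes the final state; the WARN_ON_ONCE() re-lock above preserves io_run_local_work_locked()'s hold-the-lock invariant even if a callback misbehaves. The caller contract, condensed from the io_uring.c hunk (ctx as there):

	bool locked = mutex_trylock(&ctx->uring_lock);
	int ret = __io_run_local_work(ctx, &locked);

	/* callbacks may have toggled the state: trust 'locked' now */
	if (locked)
		mutex_unlock(&ctx->uring_lock);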
+2 -2
ipc/msg.c
··· 1329 1329 #ifdef CONFIG_IPC_NS 1330 1330 void msg_exit_ns(struct ipc_namespace *ns) 1331 1331 { 1332 - percpu_counter_destroy(&ns->percpu_msg_bytes); 1333 - percpu_counter_destroy(&ns->percpu_msg_hdrs); 1334 1332 free_ipcs(ns, &msg_ids(ns), freeque); 1335 1333 idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr); 1336 1334 rhashtable_destroy(&ns->ids[IPC_MSG_IDS].key_ht); 1335 + percpu_counter_destroy(&ns->percpu_msg_bytes); 1336 + percpu_counter_destroy(&ns->percpu_msg_hdrs); 1337 1337 } 1338 1338 #endif 1339 1339
+1
kernel/events/core.c
··· 9846 9846 9847 9847 perf_sample_data_init(&data, 0, 0); 9848 9848 data.raw = &raw; 9849 + data.sample_flags |= PERF_SAMPLE_RAW; 9849 9850 9850 9851 perf_trace_buf_update(record, event_type); 9851 9852
+1 -1
kernel/power/hibernate.c
··· 645 645 int error; 646 646 647 647 if (hibernation_mode == HIBERNATION_SUSPEND) { 648 - error = suspend_devices_and_enter(PM_SUSPEND_MEM); 648 + error = suspend_devices_and_enter(mem_sleep_current); 649 649 if (error) { 650 650 hibernation_mode = hibernation_ops ? 651 651 HIBERNATION_PLATFORM :
+2 -1
lib/Kconfig.debug
··· 400 400 default 1536 if (!64BIT && XTENSA) 401 401 default 1024 if !64BIT 402 402 default 2048 if 64BIT 403 + default 0 if KMSAN 403 404 help 404 - Tell gcc to warn at build time for stack frames larger than this. 405 + Tell the compiler to warn at build time for stack frames larger than this. 405 406 Setting this too low will cause a lot of warnings. 406 407 Setting it to 0 disables the warning. 407 408
+2 -2
lib/maple_tree.c
··· 2903 2903 unsigned long max, min; 2904 2904 unsigned long prev_max, prev_min; 2905 2905 2906 - last = next = mas->node; 2907 - prev_min = min = mas->min; 2906 + next = mas->node; 2907 + min = mas->min; 2908 2908 max = mas->max; 2909 2909 do { 2910 2910 offset = 0;
+15 -26
lib/nlattr.c
··· 124 124 range->max = U8_MAX; 125 125 break; 126 126 case NLA_U16: 127 + case NLA_BE16: 127 128 case NLA_BINARY: 128 129 range->max = U16_MAX; 129 130 break; 130 131 case NLA_U32: 132 + case NLA_BE32: 131 133 range->max = U32_MAX; 132 134 break; 133 135 case NLA_U64: ··· 161 159 } 162 160 } 163 161 164 - static u64 nla_get_attr_bo(const struct nla_policy *pt, 165 - const struct nlattr *nla) 166 - { 167 - switch (pt->type) { 168 - case NLA_U16: 169 - if (pt->network_byte_order) 170 - return ntohs(nla_get_be16(nla)); 171 - 172 - return nla_get_u16(nla); 173 - case NLA_U32: 174 - if (pt->network_byte_order) 175 - return ntohl(nla_get_be32(nla)); 176 - 177 - return nla_get_u32(nla); 178 - case NLA_U64: 179 - if (pt->network_byte_order) 180 - return be64_to_cpu(nla_get_be64(nla)); 181 - 182 - return nla_get_u64(nla); 183 - } 184 - 185 - WARN_ON_ONCE(1); 186 - return 0; 187 - } 188 - 189 162 static int nla_validate_range_unsigned(const struct nla_policy *pt, 190 163 const struct nlattr *nla, 191 164 struct netlink_ext_ack *extack, ··· 174 197 value = nla_get_u8(nla); 175 198 break; 176 199 case NLA_U16: 200 + value = nla_get_u16(nla); 201 + break; 177 202 case NLA_U32: 203 + value = nla_get_u32(nla); 204 + break; 178 205 case NLA_U64: 179 - value = nla_get_attr_bo(pt, nla); 206 + value = nla_get_u64(nla); 180 207 break; 181 208 case NLA_MSECS: 182 209 value = nla_get_u64(nla); 183 210 break; 184 211 case NLA_BINARY: 185 212 value = nla_len(nla); 213 + break; 214 + case NLA_BE16: 215 + value = ntohs(nla_get_be16(nla)); 216 + break; 217 + case NLA_BE32: 218 + value = ntohl(nla_get_be32(nla)); 186 219 break; 187 220 default: 188 221 return -EINVAL; ··· 321 334 case NLA_U64: 322 335 case NLA_MSECS: 323 336 case NLA_BINARY: 337 + case NLA_BE16: 338 + case NLA_BE32: 324 339 return nla_validate_range_unsigned(pt, nla, extack, validate); 325 340 case NLA_S8: 326 341 case NLA_S16:
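With NLA_BE16/NLA_BE32 promoted to first-class policy types, byte order becomes part of the attribute type rather than a network_byte_order side flag, and range validation converts via ntohs()/ntohl() before comparing, as the hunk above shows. A hypothetical policy entry (EXAMPLE_ATTR_PORT and its enum are invented for illustration; compare the nft_payload conversion later in this commit):

	static const struct nla_policy example_policy[EXAMPLE_ATTR_MAX + 1] = {
		/* big-endian u16 on the wire, checked as <= 1023 in host order */
		[EXAMPLE_ATTR_PORT] = NLA_POLICY_MAX(NLA_BE16, 1023),
	};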
+1 -1
mm/huge_memory.c
··· 2462 2462 * Fix up and warn once if private is unexpectedly set. 2463 2463 */ 2464 2464 if (!folio_test_swapcache(page_folio(head))) { 2465 - VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, head); 2465 + VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, page_tail); 2466 2466 page_tail->private = 0; 2467 2467 } 2468 2468
+42 -19
mm/kmemleak.c
··· 1461 1461 } 1462 1462 1463 1463 /* 1464 + * Conditionally call resched() in a object iteration loop while making sure 1465 + * that the given object won't go away without RCU read lock by performing a 1466 + * get_object() if !pinned. 1467 + * 1468 + * Return: false if can't do a cond_resched() due to get_object() failure 1469 + * true otherwise 1470 + */ 1471 + static bool kmemleak_cond_resched(struct kmemleak_object *object, bool pinned) 1472 + { 1473 + if (!pinned && !get_object(object)) 1474 + return false; 1475 + 1476 + rcu_read_unlock(); 1477 + cond_resched(); 1478 + rcu_read_lock(); 1479 + if (!pinned) 1480 + put_object(object); 1481 + return true; 1482 + } 1483 + 1484 + /* 1464 1485 * Scan data sections and all the referenced memory blocks allocated via the 1465 1486 * kernel's standard allocators. This function must be called with the 1466 1487 * scan_mutex held. ··· 1492 1471 struct zone *zone; 1493 1472 int __maybe_unused i; 1494 1473 int new_leaks = 0; 1495 - int loop1_cnt = 0; 1474 + int loop_cnt = 0; 1496 1475 1497 1476 jiffies_last_scan = jiffies; 1498 1477 ··· 1501 1480 list_for_each_entry_rcu(object, &object_list, object_list) { 1502 1481 bool obj_pinned = false; 1503 1482 1504 - loop1_cnt++; 1505 1483 raw_spin_lock_irq(&object->lock); 1506 1484 #ifdef DEBUG 1507 1485 /* ··· 1534 1514 raw_spin_unlock_irq(&object->lock); 1535 1515 1536 1516 /* 1537 - * Do a cond_resched() to avoid soft lockup every 64k objects. 1538 - * Make sure a reference has been taken so that the object 1539 - * won't go away without RCU read lock. 1517 + * Do a cond_resched() every 64k objects to avoid soft lockup. 1540 1518 */ 1541 - if (!(loop1_cnt & 0xffff)) { 1542 - if (!obj_pinned && !get_object(object)) { 1543 - /* Try the next object instead */ 1544 - loop1_cnt--; 1545 - continue; 1546 - } 1547 - 1548 - rcu_read_unlock(); 1549 - cond_resched(); 1550 - rcu_read_lock(); 1551 - 1552 - if (!obj_pinned) 1553 - put_object(object); 1554 - } 1519 + if (!(++loop_cnt & 0xffff) && 1520 + !kmemleak_cond_resched(object, obj_pinned)) 1521 + loop_cnt--; /* Try again on next object */ 1555 1522 } 1556 1523 rcu_read_unlock(); 1557 1524 ··· 1605 1598 * scan and color them gray until the next scan. 1606 1599 */ 1607 1600 rcu_read_lock(); 1601 + loop_cnt = 0; 1608 1602 list_for_each_entry_rcu(object, &object_list, object_list) { 1603 + /* 1604 + * Do a cond_resched() every 64k objects to avoid soft lockup. 1605 + */ 1606 + if (!(++loop_cnt & 0xffff) && 1607 + !kmemleak_cond_resched(object, false)) 1608 + loop_cnt--; /* Try again on next object */ 1609 + 1609 1610 /* 1610 1611 * This is racy but we can save the overhead of lock/unlock 1611 1612 * calls. The missed objects, if any, should be caught in ··· 1647 1632 * Scanning result reporting. 1648 1633 */ 1649 1634 rcu_read_lock(); 1635 + loop_cnt = 0; 1650 1636 list_for_each_entry_rcu(object, &object_list, object_list) { 1637 + /* 1638 + * Do a cond_resched() every 64k objects to avoid soft lockup. 1639 + */ 1640 + if (!(++loop_cnt & 0xffff) && 1641 + !kmemleak_cond_resched(object, false)) 1642 + loop_cnt--; /* Try again on next object */ 1643 + 1651 1644 /* 1652 1645 * This is racy but we can save the overhead of lock/unlock 1653 1646 * calls. The missed objects, if any, should be caught in
+1
mm/kmsan/instrumentation.c
··· 14 14 15 15 #include "kmsan.h" 16 16 #include <linux/gfp.h> 17 + #include <linux/kmsan_string.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/uaccess.h> 19 20
+1
mm/kmsan/shadow.c
··· 167 167 __memcpy(origin_ptr_for(dst), origin_ptr_for(src), PAGE_SIZE); 168 168 kmsan_leave_runtime(); 169 169 } 170 + EXPORT_SYMBOL(kmsan_copy_page_meta); 170 171 171 172 void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) 172 173 {
+11 -1
mm/madvise.c
··· 813 813 if (start & ~huge_page_mask(hstate_vma(vma))) 814 814 return false; 815 815 816 - *end = ALIGN(*end, huge_page_size(hstate_vma(vma))); 816 + /* 817 + * Madvise callers expect the length to be rounded up to PAGE_SIZE 818 + * boundaries, and may be unaware that this VMA uses huge pages. 819 + * Avoid unexpected data loss by rounding down the number of 820 + * huge pages freed. 821 + */ 822 + *end = ALIGN_DOWN(*end, huge_page_size(hstate_vma(vma))); 823 + 817 824 return true; 818 825 } 819 826 ··· 834 827 *prev = vma; 835 828 if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior)) 836 829 return -EINVAL; 830 + 831 + if (start == end) 832 + return 0; 837 833 838 834 if (!userfaultfd_remove(vma, start, end)) { 839 835 *prev = NULL; /* mmap_lock has been dropped, prev is stale */
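A worked example of the rounding change with 2 MiB hugetlb pages: an end address at the 3 MiB mark used to be rounded up to 4 MiB, discarding a second huge page the caller never named, while rounding down keeps the free inside the requested range; the new start == end check then swallows requests that round down to nothing. Values below are illustrative:

	/* huge_page_size() == SZ_2M in this example */
	unsigned long end = 3 * SZ_1M;
	unsigned long up = ALIGN(end, SZ_2M);		/* old: 4 MiB, one huge page too many */
	unsigned long down = ALIGN_DOWN(end, SZ_2M);	/* new: 2 MiB, never past 'end' */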
+4 -4
mm/memory-tiers.c
··· 131 131 kfree(tier); 132 132 } 133 133 134 - static ssize_t nodes_show(struct device *dev, 135 - struct device_attribute *attr, char *buf) 134 + static ssize_t nodelist_show(struct device *dev, 135 + struct device_attribute *attr, char *buf) 136 136 { 137 137 int ret; 138 138 nodemask_t nmask; ··· 143 143 mutex_unlock(&memory_tier_lock); 144 144 return ret; 145 145 } 146 - static DEVICE_ATTR_RO(nodes); 146 + static DEVICE_ATTR_RO(nodelist); 147 147 148 148 static struct attribute *memtier_dev_attrs[] = { 149 - &dev_attr_nodes.attr, 149 + &dev_attr_nodelist.attr, 150 150 NULL 151 151 }; 152 152
+7
mm/migrate.c
··· 1582 1582 */ 1583 1583 list_splice(&ret_pages, from); 1584 1584 1585 + /* 1586 + * Return 0 in case all subpages of fail-to-migrate THPs are 1587 + * migrated successfully. 1588 + */ 1589 + if (list_empty(from)) 1590 + rc = 0; 1591 + 1585 1592 count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded); 1586 1593 count_vm_events(PGMIGRATE_FAIL, nr_failed_pages); 1587 1594 count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
+3
mm/mmap.c
··· 2852 2852 if (next->vm_flags != vma->vm_flags) 2853 2853 goto out; 2854 2854 2855 + if (start + size <= next->vm_end) 2856 + break; 2857 + 2855 2858 prev = next; 2856 2859 } 2857 2860
+1
mm/page_alloc.c
··· 807 807 808 808 p->mapping = TAIL_MAPPING; 809 809 set_compound_head(p, head); 810 + set_page_private(p, 0); 810 811 } 811 812 812 813 void prep_compound_page(struct page *page, unsigned int order)
+1 -1
mm/page_isolation.c
··· 330 330 zone->zone_start_pfn); 331 331 332 332 if (skip_isolation) { 333 - int mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock)); 333 + int mt __maybe_unused = get_pageblock_migratetype(pfn_to_page(isolate_pageblock)); 334 334 335 335 VM_BUG_ON(!is_migrate_isolate(mt)); 336 336 } else {
+17
mm/shmem.c
··· 2424 2424 2425 2425 if (!zeropage) { /* COPY */ 2426 2426 page_kaddr = kmap_local_folio(folio, 0); 2427 + /* 2428 + * The read mmap_lock is held here. Despite the 2429 + * mmap_lock being read recursive a deadlock is still 2430 + * possible if a writer has taken a lock. For example: 2431 + * 2432 + * process A thread 1 takes read lock on own mmap_lock 2433 + * process A thread 2 calls mmap, blocks taking write lock 2434 + * process B thread 1 takes page fault, read lock on own mmap lock 2435 + * process B thread 2 calls mmap, blocks taking write lock 2436 + * process A thread 1 blocks taking read lock on process B 2437 + * process B thread 1 blocks taking read lock on process A 2438 + * 2439 + * Disable page faults to prevent potential deadlock 2440 + * and retry the copy outside the mmap_lock. 2441 + */ 2442 + pagefault_disable(); 2427 2443 ret = copy_from_user(page_kaddr, 2428 2444 (const void __user *)src_addr, 2429 2445 PAGE_SIZE); 2446 + pagefault_enable(); 2430 2447 kunmap_local(page_kaddr); 2431 2448 2432 2449 /* fallback to copy_from_user outside mmap_lock */
+21 -4
mm/userfaultfd.c
··· 157 157 if (!page) 158 158 goto out; 159 159 160 - page_kaddr = kmap_atomic(page); 160 + page_kaddr = kmap_local_page(page); 161 + /* 162 + * The read mmap_lock is held here. Despite the 163 + * mmap_lock being read recursive a deadlock is still 164 + * possible if a writer has taken a lock. For example: 165 + * 166 + * process A thread 1 takes read lock on own mmap_lock 167 + * process A thread 2 calls mmap, blocks taking write lock 168 + * process B thread 1 takes page fault, read lock on own mmap lock 169 + * process B thread 2 calls mmap, blocks taking write lock 170 + * process A thread 1 blocks taking read lock on process B 171 + * process B thread 1 blocks taking read lock on process A 172 + * 173 + * Disable page faults to prevent potential deadlock 174 + * and retry the copy outside the mmap_lock. 175 + */ 176 + pagefault_disable(); 161 177 ret = copy_from_user(page_kaddr, 162 178 (const void __user *) src_addr, 163 179 PAGE_SIZE); 164 - kunmap_atomic(page_kaddr); 180 + pagefault_enable(); 181 + kunmap_local(page_kaddr); 165 182 166 183 /* fallback to copy_from_user outside mmap_lock */ 167 184 if (unlikely(ret)) { ··· 663 646 mmap_read_unlock(dst_mm); 664 647 BUG_ON(!page); 665 648 666 - page_kaddr = kmap(page); 649 + page_kaddr = kmap_local_page(page); 667 650 err = copy_from_user(page_kaddr, 668 651 (const void __user *) src_addr, 669 652 PAGE_SIZE); 670 - kunmap(page); 653 + kunmap_local(page_kaddr); 671 654 if (unlikely(err)) { 672 655 err = -EFAULT; 673 656 goto out;
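Both conversions above (shmem and userfaultfd) turn on the same detail: kmap_atomic() implicitly disables page faults and preemption, while kmap_local_page() does not, so the fault avoidance now has to be explicit. The resulting idiom, condensed from the hunks ('page' and 'src_addr' as there):

	unsigned long ret;
	void *kaddr = kmap_local_page(page);

	pagefault_disable();	/* kmap_local_page() no longer implies this */
	ret = copy_from_user(kaddr, (const void __user *)src_addr, PAGE_SIZE);
	pagefault_enable();
	kunmap_local(kaddr);

	if (unlikely(ret)) {
		/* would have faulted: redo the copy with mmap_lock dropped */
	}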
+12 -6
net/bluetooth/hci_conn.c
··· 1067 1067 hdev->acl_cnt += conn->sent; 1068 1068 } else { 1069 1069 struct hci_conn *acl = conn->link; 1070 + 1070 1071 if (acl) { 1071 1072 acl->link = NULL; 1072 1073 hci_conn_drop(acl); 1074 + } 1075 + 1076 + /* Unacked ISO frames */ 1077 + if (conn->type == ISO_LINK) { 1078 + if (hdev->iso_pkts) 1079 + hdev->iso_cnt += conn->sent; 1080 + else if (hdev->le_pkts) 1081 + hdev->le_cnt += conn->sent; 1082 + else 1083 + hdev->acl_cnt += conn->sent; 1073 1084 } 1074 1085 } 1075 1086 ··· 1772 1761 if (!cis) 1773 1762 return ERR_PTR(-ENOMEM); 1774 1763 cis->cleanup = cis_cleanup; 1764 + cis->dst_type = dst_type; 1775 1765 } 1776 1766 1777 1767 if (cis->state == BT_CONNECTED) ··· 2151 2139 { 2152 2140 struct hci_conn *le; 2153 2141 struct hci_conn *cis; 2154 - 2155 - /* Convert from ISO socket address type to HCI address type */ 2156 - if (dst_type == BDADDR_LE_PUBLIC) 2157 - dst_type = ADDR_LE_DEV_PUBLIC; 2158 - else 2159 - dst_type = ADDR_LE_DEV_RANDOM; 2160 2142 2161 2143 if (hci_dev_test_flag(hdev, HCI_ADVERTISING)) 2162 2144 le = hci_connect_le(hdev, dst, dst_type, false,
+12 -2
net/bluetooth/iso.c
··· 235 235 return err; 236 236 } 237 237 238 + static inline u8 le_addr_type(u8 bdaddr_type) 239 + { 240 + if (bdaddr_type == BDADDR_LE_PUBLIC) 241 + return ADDR_LE_DEV_PUBLIC; 242 + else 243 + return ADDR_LE_DEV_RANDOM; 244 + } 245 + 238 246 static int iso_connect_bis(struct sock *sk) 239 247 { 240 248 struct iso_conn *conn; ··· 336 328 /* Just bind if DEFER_SETUP has been set */ 337 329 if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) { 338 330 hcon = hci_bind_cis(hdev, &iso_pi(sk)->dst, 339 - iso_pi(sk)->dst_type, &iso_pi(sk)->qos); 331 + le_addr_type(iso_pi(sk)->dst_type), 332 + &iso_pi(sk)->qos); 340 333 if (IS_ERR(hcon)) { 341 334 err = PTR_ERR(hcon); 342 335 goto done; 343 336 } 344 337 } else { 345 338 hcon = hci_connect_cis(hdev, &iso_pi(sk)->dst, 346 - iso_pi(sk)->dst_type, &iso_pi(sk)->qos); 339 + le_addr_type(iso_pi(sk)->dst_type), 340 + &iso_pi(sk)->qos); 347 341 if (IS_ERR(hcon)) { 348 342 err = PTR_ERR(hcon); 349 343 goto done;
+73 -13
net/bluetooth/l2cap_core.c
··· 1990 1990 if (link_type == LE_LINK && c->src_type == BDADDR_BREDR)
1991 1991 continue;
1992 1992
1993 - if (c->psm == psm) {
1993 + if (c->chan_type != L2CAP_CHAN_FIXED && c->psm == psm) {
1994 1994 int src_match, dst_match;
1995 1995 int src_any, dst_any;
1996 1996
··· 3764 3764 l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
3765 3765 sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
3766 3766
3767 - if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
3767 + if (remote_efs &&
3768 + test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
3768 3769 chan->remote_id = efs.id;
3769 3770 chan->remote_stype = efs.stype;
3770 3771 chan->remote_msdu = le16_to_cpu(efs.msdu);
··· 5814 5813 BT_DBG("psm 0x%2.2x scid 0x%4.4x mtu %u mps %u", __le16_to_cpu(psm),
5815 5814 scid, mtu, mps);
5816 5815
5816 + /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
5817 + * page 1059:
5818 + *
5819 + * Valid range: 0x0001-0x00ff
5820 + *
5821 + * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
5822 + */
5823 + if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
5824 + result = L2CAP_CR_LE_BAD_PSM;
5825 + chan = NULL;
5826 + goto response;
5827 + }
5828 +
5817 5829 /* Check if we have socket listening on psm */
5818 5830 pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
5819 5831 &conn->hcon->dst, LE_LINK);
··· 6014 6000 }
6015 6001
6016 6002 psm = req->psm;
6003 +
6004 + /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
6005 + * page 1059:
6006 + *
6007 + * Valid range: 0x0001-0x00ff
6008 + *
6009 + * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
6010 + */
6011 + if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
6012 + result = L2CAP_CR_LE_BAD_PSM;
6013 + goto response;
6014 + }
6017 6015
6018 6016 BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);
6019 6017
··· 6911 6885 struct l2cap_ctrl *control,
6912 6886 struct sk_buff *skb, u8 event)
6913 6887 {
6888 + struct l2cap_ctrl local_control;
6914 6889 int err = 0;
6915 6890 bool skb_in_use = false;
6916 6891
··· 6936 6909 chan->buffer_seq = chan->expected_tx_seq;
6937 6910 skb_in_use = true;
6938 6911
6912 + /* l2cap_reassemble_sdu may free skb, hence invalidate
6913 + * control, so make a copy in advance to use it after
6914 + * l2cap_reassemble_sdu returns and to avoid the race
6915 + * condition, for example:
6916 + *
6917 + * The current thread calls:
6918 + * l2cap_reassemble_sdu
6919 + * chan->ops->recv == l2cap_sock_recv_cb
6920 + * __sock_queue_rcv_skb
6921 + * Another thread calls:
6922 + * bt_sock_recvmsg
6923 + * skb_recv_datagram
6924 + * skb_free_datagram
6925 + * Then the current thread tries to access control, but
6926 + * it was freed by skb_free_datagram.
6927 + */ 6928 + local_control = *control; 6939 6929 err = l2cap_reassemble_sdu(chan, skb, control); 6940 6930 if (err) 6941 6931 break; 6942 6932 6943 - if (control->final) { 6933 + if (local_control.final) { 6944 6934 if (!test_and_clear_bit(CONN_REJ_ACT, 6945 6935 &chan->conn_state)) { 6946 - control->final = 0; 6947 - l2cap_retransmit_all(chan, control); 6936 + local_control.final = 0; 6937 + l2cap_retransmit_all(chan, &local_control); 6948 6938 l2cap_ertm_send(chan); 6949 6939 } 6950 6940 } ··· 7341 7297 static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control, 7342 7298 struct sk_buff *skb) 7343 7299 { 7300 + /* l2cap_reassemble_sdu may free skb, hence invalidate control, so store 7301 + * the txseq field in advance to use it after l2cap_reassemble_sdu 7302 + * returns and to avoid the race condition, for example: 7303 + * 7304 + * The current thread calls: 7305 + * l2cap_reassemble_sdu 7306 + * chan->ops->recv == l2cap_sock_recv_cb 7307 + * __sock_queue_rcv_skb 7308 + * Another thread calls: 7309 + * bt_sock_recvmsg 7310 + * skb_recv_datagram 7311 + * skb_free_datagram 7312 + * Then the current thread tries to access control, but it was freed by 7313 + * skb_free_datagram. 7314 + */ 7315 + u16 txseq = control->txseq; 7316 + 7344 7317 BT_DBG("chan %p, control %p, skb %p, state %d", chan, control, skb, 7345 7318 chan->rx_state); 7346 7319 7347 - if (l2cap_classify_txseq(chan, control->txseq) == 7348 - L2CAP_TXSEQ_EXPECTED) { 7320 + if (l2cap_classify_txseq(chan, txseq) == L2CAP_TXSEQ_EXPECTED) { 7349 7321 l2cap_pass_to_tx(chan, control); 7350 7322 7351 7323 BT_DBG("buffer_seq %u->%u", chan->buffer_seq, ··· 7384 7324 } 7385 7325 } 7386 7326 7387 - chan->last_acked_seq = control->txseq; 7388 - chan->expected_tx_seq = __next_seq(chan, control->txseq); 7327 + chan->last_acked_seq = txseq; 7328 + chan->expected_tx_seq = __next_seq(chan, txseq); 7389 7329 7390 7330 return 0; 7391 7331 } ··· 7641 7581 return; 7642 7582 } 7643 7583 7584 + l2cap_chan_hold(chan); 7644 7585 l2cap_chan_lock(chan); 7645 7586 } else { 7646 7587 BT_DBG("unknown cid 0x%4.4x", cid); ··· 8487 8426 * expected length. 8488 8427 */ 8489 8428 if (skb->len < L2CAP_LEN_SIZE) { 8490 - if (l2cap_recv_frag(conn, skb, conn->mtu) < 0) 8491 - goto drop; 8492 - return; 8429 + l2cap_recv_frag(conn, skb, conn->mtu); 8430 + break; 8493 8431 } 8494 8432 8495 8433 len = get_unaligned_le16(skb->data) + L2CAP_HDR_SIZE; ··· 8532 8472 8533 8473 /* Header still could not be read just continue */ 8534 8474 if (conn->rx_skb->len < L2CAP_LEN_SIZE) 8535 - return; 8475 + break; 8536 8476 } 8537 8477 8538 8478 if (skb->len > conn->rx_len) {
+1 -1
net/bridge/br_netlink.c
··· 1332 1332 1333 1333 if (data[IFLA_BR_FDB_FLUSH]) { 1334 1334 struct net_bridge_fdb_flush_desc desc = { 1335 - .flags_mask = BR_FDB_STATIC 1335 + .flags_mask = BIT(BR_FDB_STATIC) 1336 1336 }; 1337 1337 1338 1338 br_fdb_flush(br, &desc);
+1 -1
net/bridge/br_sysfs_br.c
··· 345 345 struct netlink_ext_ack *extack) 346 346 { 347 347 struct net_bridge_fdb_flush_desc desc = { 348 - .flags_mask = BR_FDB_STATIC 348 + .flags_mask = BIT(BR_FDB_STATIC) 349 349 }; 350 350 351 351 br_fdb_flush(br, &desc);
+1 -1
net/core/neighbour.c
··· 409 409 write_lock_bh(&tbl->lock); 410 410 neigh_flush_dev(tbl, dev, skip_perm); 411 411 pneigh_ifdown_and_unlock(tbl, dev); 412 - pneigh_queue_purge(&tbl->proxy_queue, dev_net(dev)); 412 + pneigh_queue_purge(&tbl->proxy_queue, dev ? dev_net(dev) : NULL); 413 413 if (skb_queue_empty_lockless(&tbl->proxy_queue)) 414 414 del_timer_sync(&tbl->proxy_timer); 415 415 return 0;
+10 -3
net/dsa/dsa2.c
··· 1409 1409 static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master, 1410 1410 const char *user_protocol) 1411 1411 { 1412 + const struct dsa_device_ops *tag_ops = NULL; 1412 1413 struct dsa_switch *ds = dp->ds; 1413 1414 struct dsa_switch_tree *dst = ds->dst; 1414 - const struct dsa_device_ops *tag_ops; 1415 1415 enum dsa_tag_protocol default_proto; 1416 1416 1417 1417 /* Find out which protocol the switch would prefer. */ ··· 1434 1434 } 1435 1435 1436 1436 tag_ops = dsa_find_tagger_by_name(user_protocol); 1437 - } else { 1438 - tag_ops = dsa_tag_driver_get(default_proto); 1437 + if (IS_ERR(tag_ops)) { 1438 + dev_warn(ds->dev, 1439 + "Failed to find a tagging driver for protocol %s, using default\n", 1440 + user_protocol); 1441 + tag_ops = NULL; 1442 + } 1439 1443 } 1444 + 1445 + if (!tag_ops) 1446 + tag_ops = dsa_tag_driver_get(default_proto); 1440 1447 1441 1448 if (IS_ERR(tag_ops)) { 1442 1449 if (PTR_ERR(tag_ops) == -ENOPROTOOPT)
+2
net/ipv4/af_inet.c
··· 754 754 (TCPF_ESTABLISHED | TCPF_SYN_RECV | 755 755 TCPF_CLOSE_WAIT | TCPF_CLOSE))); 756 756 757 + if (test_bit(SOCK_SUPPORT_ZC, &sock->flags)) 758 + set_bit(SOCK_SUPPORT_ZC, &newsock->flags); 757 759 sock_graft(sk2, newsock); 758 760 759 761 newsock->state = SS_CONNECTED;
+2 -2
net/ipv4/tcp_bpf.c
··· 607 607 } else { 608 608 sk->sk_write_space = psock->saved_write_space; 609 609 /* Pairs with lockless read in sk_clone_lock() */ 610 - WRITE_ONCE(sk->sk_prot, psock->sk_proto); 610 + sock_replace_proto(sk, psock->sk_proto); 611 611 } 612 612 return 0; 613 613 } ··· 620 620 } 621 621 622 622 /* Pairs with lockless read in sk_clone_lock() */ 623 - WRITE_ONCE(sk->sk_prot, &tcp_bpf_prots[family][config]); 623 + sock_replace_proto(sk, &tcp_bpf_prots[family][config]); 624 624 return 0; 625 625 } 626 626 EXPORT_SYMBOL_GPL(tcp_bpf_update_proto);
+3
net/ipv4/tcp_ulp.c
··· 136 136 if (icsk->icsk_ulp_ops) 137 137 goto out_err; 138 138 139 + if (sk->sk_socket) 140 + clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); 141 + 139 142 err = ulp_ops->init(sk); 140 143 if (err) 141 144 goto out_err;
+2 -2
net/ipv4/udp_bpf.c
··· 141 141 142 142 if (restore) { 143 143 sk->sk_write_space = psock->saved_write_space; 144 - WRITE_ONCE(sk->sk_prot, psock->sk_proto); 144 + sock_replace_proto(sk, psock->sk_proto); 145 145 return 0; 146 146 } 147 147 148 148 if (sk->sk_family == AF_INET6) 149 149 udp_bpf_check_v6_needs_rebuild(psock->sk_proto); 150 150 151 - WRITE_ONCE(sk->sk_prot, &udp_bpf_prots[family]); 151 + sock_replace_proto(sk, &udp_bpf_prots[family]); 152 152 return 0; 153 153 } 154 154 EXPORT_SYMBOL_GPL(udp_bpf_update_proto);
+10 -4
net/ipv6/route.c
··· 6555 6555 static int __net_init ip6_route_net_init_late(struct net *net) 6556 6556 { 6557 6557 #ifdef CONFIG_PROC_FS 6558 - proc_create_net("ipv6_route", 0, net->proc_net, &ipv6_route_seq_ops, 6559 - sizeof(struct ipv6_route_iter)); 6560 - proc_create_net_single("rt6_stats", 0444, net->proc_net, 6561 - rt6_stats_seq_show, NULL); 6558 + if (!proc_create_net("ipv6_route", 0, net->proc_net, 6559 + &ipv6_route_seq_ops, 6560 + sizeof(struct ipv6_route_iter))) 6561 + return -ENOMEM; 6562 + 6563 + if (!proc_create_net_single("rt6_stats", 0444, net->proc_net, 6564 + rt6_stats_seq_show, NULL)) { 6565 + remove_proc_entry("ipv6_route", net->proc_net); 6566 + return -ENOMEM; 6567 + } 6562 6568 #endif 6563 6569 return 0; 6564 6570 }
+1
net/ipv6/udp.c
··· 66 66 { 67 67 udp_lib_init_sock(sk); 68 68 sk->sk_destruct = udpv6_destruct_sock; 69 + set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); 69 70 return 0; 70 71 } 71 72
+6 -24
net/netfilter/ipset/ip_set_hash_gen.h
··· 42 42 #define AHASH_MAX_SIZE (6 * AHASH_INIT_SIZE) 43 43 /* Max muber of elements in the array block when tuned */ 44 44 #define AHASH_MAX_TUNED 64 45 - 46 45 #define AHASH_MAX(h) ((h)->bucketsize) 47 - 48 - /* Max number of elements can be tuned */ 49 - #ifdef IP_SET_HASH_WITH_MULTI 50 - static u8 51 - tune_bucketsize(u8 curr, u32 multi) 52 - { 53 - u32 n; 54 - 55 - if (multi < curr) 56 - return curr; 57 - 58 - n = curr + AHASH_INIT_SIZE; 59 - /* Currently, at listing one hash bucket must fit into a message. 60 - * Therefore we have a hard limit here. 61 - */ 62 - return n > curr && n <= AHASH_MAX_TUNED ? n : curr; 63 - } 64 - #define TUNE_BUCKETSIZE(h, multi) \ 65 - ((h)->bucketsize = tune_bucketsize((h)->bucketsize, multi)) 66 - #else 67 - #define TUNE_BUCKETSIZE(h, multi) 68 - #endif 69 46 70 47 /* A hash bucket */ 71 48 struct hbucket { ··· 913 936 goto set_full; 914 937 /* Create a new slot */ 915 938 if (n->pos >= n->size) { 916 - TUNE_BUCKETSIZE(h, multi); 939 + #ifdef IP_SET_HASH_WITH_MULTI 940 + if (h->bucketsize >= AHASH_MAX_TUNED) 941 + goto set_full; 942 + else if (h->bucketsize < multi) 943 + h->bucketsize += AHASH_INIT_SIZE; 944 + #endif 917 945 if (n->size >= AHASH_MAX(h)) { 918 946 /* Trigger rehashing */ 919 947 mtype_data_next(&h->next, d);
+8 -2
net/netfilter/ipvs/ip_vs_app.c
··· 599 599 int __net_init ip_vs_app_net_init(struct netns_ipvs *ipvs) 600 600 { 601 601 INIT_LIST_HEAD(&ipvs->app_list); 602 - proc_create_net("ip_vs_app", 0, ipvs->net->proc_net, &ip_vs_app_seq_ops, 603 - sizeof(struct seq_net_private)); 602 + #ifdef CONFIG_PROC_FS 603 + if (!proc_create_net("ip_vs_app", 0, ipvs->net->proc_net, 604 + &ip_vs_app_seq_ops, 605 + sizeof(struct seq_net_private))) 606 + return -ENOMEM; 607 + #endif 604 608 return 0; 605 609 } 606 610 607 611 void __net_exit ip_vs_app_net_cleanup(struct netns_ipvs *ipvs) 608 612 { 609 613 unregister_ip_vs_app(ipvs, NULL /* all */); 614 + #ifdef CONFIG_PROC_FS 610 615 remove_proc_entry("ip_vs_app", ipvs->net->proc_net); 616 + #endif 611 617 }
+23 -7
net/netfilter/ipvs/ip_vs_conn.c
··· 1265 1265 * The drop rate array needs tuning for real environments. 1266 1266 * Called from timer bh only => no locking 1267 1267 */ 1268 - static const char todrop_rate[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8}; 1269 - static char todrop_counter[9] = {0}; 1268 + static const signed char todrop_rate[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8}; 1269 + static signed char todrop_counter[9] = {0}; 1270 1270 int i; 1271 1271 1272 1272 /* if the conn entry hasn't lasted for 60 seconds, don't drop it. ··· 1447 1447 { 1448 1448 atomic_set(&ipvs->conn_count, 0); 1449 1449 1450 - proc_create_net("ip_vs_conn", 0, ipvs->net->proc_net, 1451 - &ip_vs_conn_seq_ops, sizeof(struct ip_vs_iter_state)); 1452 - proc_create_net("ip_vs_conn_sync", 0, ipvs->net->proc_net, 1453 - &ip_vs_conn_sync_seq_ops, 1454 - sizeof(struct ip_vs_iter_state)); 1450 + #ifdef CONFIG_PROC_FS 1451 + if (!proc_create_net("ip_vs_conn", 0, ipvs->net->proc_net, 1452 + &ip_vs_conn_seq_ops, 1453 + sizeof(struct ip_vs_iter_state))) 1454 + goto err_conn; 1455 + 1456 + if (!proc_create_net("ip_vs_conn_sync", 0, ipvs->net->proc_net, 1457 + &ip_vs_conn_sync_seq_ops, 1458 + sizeof(struct ip_vs_iter_state))) 1459 + goto err_conn_sync; 1460 + #endif 1461 + 1455 1462 return 0; 1463 + 1464 + #ifdef CONFIG_PROC_FS 1465 + err_conn_sync: 1466 + remove_proc_entry("ip_vs_conn", ipvs->net->proc_net); 1467 + err_conn: 1468 + return -ENOMEM; 1469 + #endif 1456 1470 } 1457 1471 1458 1472 void __net_exit ip_vs_conn_net_cleanup(struct netns_ipvs *ipvs) 1459 1473 { 1460 1474 /* flush all the connection entries first */ 1461 1475 ip_vs_conn_flush(ipvs); 1476 + #ifdef CONFIG_PROC_FS 1462 1477 remove_proc_entry("ip_vs_conn", ipvs->net->proc_net); 1463 1478 remove_proc_entry("ip_vs_conn_sync", ipvs->net->proc_net); 1479 + #endif 1464 1480 } 1465 1481 1466 1482 int __init ip_vs_conn_init(void)
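The char-to-signed-char changes in this commit (here and in the rme9652/hdsp hunks further down) all fix the same portability trap: the signedness of plain char is implementation-defined, and on arm, s390 and powerpc it is unsigned, so -1 stored in a plain char reads back as 255 and below-zero checks never fire. A minimal illustration, with behavior noted for a plain-char-is-unsigned target:

	char plain = -1;		/* reads back as 255 on arm/s390/powerpc */
	signed char always = -1;	/* -1 on every architecture */

	if (plain < 0) {
		/* never reached where plain char is unsigned */
	}
	if (always < 0) {
		/* reached everywhere */
	}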
+10 -1
net/netfilter/nf_nat_core.c
··· 1152 1152 WARN_ON(nf_nat_hook != NULL); 1153 1153 RCU_INIT_POINTER(nf_nat_hook, &nat_hook); 1154 1154 1155 - return register_nf_nat_bpf(); 1155 + ret = register_nf_nat_bpf(); 1156 + if (ret < 0) { 1157 + RCU_INIT_POINTER(nf_nat_hook, NULL); 1158 + nf_ct_helper_expectfn_unregister(&follow_master_nat); 1159 + synchronize_net(); 1160 + unregister_pernet_subsys(&nat_net_ops); 1161 + kvfree(nf_nat_bysource); 1162 + } 1163 + 1164 + return ret; 1156 1165 } 1157 1166 1158 1167 static void __exit nf_nat_cleanup(void)
+5 -3
net/netfilter/nf_tables_api.c
··· 8502 8502 nf_tables_chain_destroy(&trans->ctx); 8503 8503 break; 8504 8504 case NFT_MSG_DELRULE: 8505 - if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) 8506 - nft_flow_rule_destroy(nft_trans_flow_rule(trans)); 8507 - 8508 8505 nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans)); 8509 8506 break; 8510 8507 case NFT_MSG_DELSET: ··· 9007 9010 nft_rule_expr_deactivate(&trans->ctx, 9008 9011 nft_trans_rule(trans), 9009 9012 NFT_TRANS_COMMIT); 9013 + 9014 + if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) 9015 + nft_flow_rule_destroy(nft_trans_flow_rule(trans)); 9010 9016 break; 9011 9017 case NFT_MSG_NEWSET: 9012 9018 nft_clear(net, nft_trans_set(trans)); ··· 10067 10067 nft_net = nft_pernet(net); 10068 10068 deleted = 0; 10069 10069 mutex_lock(&nft_net->commit_mutex); 10070 + if (!list_empty(&nf_tables_destroy_list)) 10071 + rcu_barrier(); 10070 10072 again: 10071 10073 list_for_each_entry(table, &nft_net->tables, list) { 10072 10074 if (nft_table_has_owner(table) &&
+3 -3
net/netfilter/nft_payload.c
··· 208 208 [NFTA_PAYLOAD_SREG] = { .type = NLA_U32 }, 209 209 [NFTA_PAYLOAD_DREG] = { .type = NLA_U32 }, 210 210 [NFTA_PAYLOAD_BASE] = { .type = NLA_U32 }, 211 - [NFTA_PAYLOAD_OFFSET] = NLA_POLICY_MAX_BE(NLA_U32, 255), 212 - [NFTA_PAYLOAD_LEN] = NLA_POLICY_MAX_BE(NLA_U32, 255), 211 + [NFTA_PAYLOAD_OFFSET] = NLA_POLICY_MAX(NLA_BE32, 255), 212 + [NFTA_PAYLOAD_LEN] = NLA_POLICY_MAX(NLA_BE32, 255), 213 213 [NFTA_PAYLOAD_CSUM_TYPE] = { .type = NLA_U32 }, 214 - [NFTA_PAYLOAD_CSUM_OFFSET] = NLA_POLICY_MAX_BE(NLA_U32, 255), 214 + [NFTA_PAYLOAD_CSUM_OFFSET] = NLA_POLICY_MAX(NLA_BE32, 255), 215 215 [NFTA_PAYLOAD_CSUM_FLAGS] = { .type = NLA_U32 }, 216 216 }; 217 217
+1
net/openvswitch/datapath.c
··· 2544 2544 .parallel_ops = true, 2545 2545 .small_ops = dp_vport_genl_ops, 2546 2546 .n_small_ops = ARRAY_SIZE(dp_vport_genl_ops), 2547 + .resv_start_op = OVS_VPORT_CMD_SET + 1, 2547 2548 .mcgrps = &ovs_dp_vport_multicast_group, 2548 2549 .n_mcgrps = 1, 2549 2550 .module = THIS_MODULE,
+3
net/rose/rose_link.c
··· 236 236 unsigned char *dptr; 237 237 int len; 238 238 239 + if (!neigh->dev) 240 + return; 241 + 239 242 len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 3; 240 243 241 244 if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
+3 -1
net/sched/sch_red.c
··· 72 72 { 73 73 struct red_sched_data *q = qdisc_priv(sch); 74 74 struct Qdisc *child = q->qdisc; 75 + unsigned int len; 75 76 int ret; 76 77 77 78 q->vars.qavg = red_calc_qavg(&q->parms, ··· 127 126 break; 128 127 } 129 128 129 + len = qdisc_pkt_len(skb); 130 130 ret = qdisc_enqueue(skb, child, to_free); 131 131 if (likely(ret == NET_XMIT_SUCCESS)) { 132 - qdisc_qstats_backlog_inc(sch, skb); 132 + sch->qstats.backlog += len; 133 133 sch->q.qlen++; 134 134 } else if (net_xmit_drop_count(ret)) { 135 135 q->stats.pdrop++;
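The reordering matters because qdisc_enqueue() hands the skb to the child qdisc, which may free it; the old success path then called qdisc_qstats_backlog_inc(sch, skb), dereferencing a possibly-freed skb. Snapshotting the length first keeps the stats update free of skb accesses; condensed from the hunk above:

	unsigned int len = qdisc_pkt_len(skb);	/* read before the skb changes hands */
	int ret = qdisc_enqueue(skb, child, to_free);

	if (likely(ret == NET_XMIT_SUCCESS)) {
		sch->qstats.backlog += len;	/* no skb dereference after enqueue */
		sch->q.qlen++;
	}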
+4 -2
net/smc/af_smc.c
··· 3380 3380 3381 3381 rc = register_pernet_subsys(&smc_net_stat_ops); 3382 3382 if (rc) 3383 - return rc; 3383 + goto out_pernet_subsys; 3384 3384 3385 3385 smc_ism_init(); 3386 3386 smc_clc_init(); 3387 3387 3388 3388 rc = smc_nl_init(); 3389 3389 if (rc) 3390 - goto out_pernet_subsys; 3390 + goto out_pernet_subsys_stat; 3391 3391 3392 3392 rc = smc_pnet_init(); 3393 3393 if (rc) ··· 3480 3480 smc_pnet_exit(); 3481 3481 out_nl: 3482 3482 smc_nl_exit(); 3483 + out_pernet_subsys_stat: 3484 + unregister_pernet_subsys(&smc_net_stat_ops); 3483 3485 out_pernet_subsys: 3484 3486 unregister_pernet_subsys(&smc_net_ops); 3485 3487
+1 -1
net/sunrpc/auth_gss/auth_gss.c
··· 1989 1989 goto unwrap_failed; 1990 1990 mic.len = len; 1991 1991 mic.data = kmalloc(len, GFP_KERNEL); 1992 - if (!mic.data) 1992 + if (ZERO_OR_NULL_PTR(mic.data)) 1993 1993 goto unwrap_failed; 1994 1994 if (read_bytes_from_xdr_buf(rcv_buf, offset, mic.data, mic.len)) 1995 1995 goto unwrap_failed;
+10 -2
net/sunrpc/sysfs.c
··· 518 518 struct net *net) 519 519 { 520 520 struct rpc_sysfs_client *rpc_client; 521 + struct rpc_sysfs_xprt_switch *xswitch = 522 + (struct rpc_sysfs_xprt_switch *)xprt_switch->xps_sysfs; 523 + 524 + if (!xswitch) 525 + return; 521 526 522 527 rpc_client = rpc_sysfs_client_alloc(rpc_sunrpc_client_kobj, 523 528 net, clnt->cl_clid); 524 529 if (rpc_client) { 525 530 char name[] = "switch"; 526 - struct rpc_sysfs_xprt_switch *xswitch = 527 - (struct rpc_sysfs_xprt_switch *)xprt_switch->xps_sysfs; 528 531 int ret; 529 532 530 533 clnt->cl_sysfs = rpc_client; ··· 561 558 rpc_xprt_switch->xprt_switch = xprt_switch; 562 559 rpc_xprt_switch->xprt = xprt; 563 560 kobject_uevent(&rpc_xprt_switch->kobject, KOBJ_ADD); 561 + } else { 562 + xprt_switch->xps_sysfs = NULL; 564 563 } 565 564 } 566 565 ··· 573 568 struct rpc_sysfs_xprt *rpc_xprt; 574 569 struct rpc_sysfs_xprt_switch *switch_obj = 575 570 (struct rpc_sysfs_xprt_switch *)xprt_switch->xps_sysfs; 571 + 572 + if (!switch_obj) 573 + return; 576 574 577 575 rpc_xprt = rpc_sysfs_xprt_alloc(&switch_obj->kobject, xprt, gfp_flags); 578 576 if (rpc_xprt) {
+4 -4
net/unix/unix_bpf.c
··· 145 145 146 146 if (restore) { 147 147 sk->sk_write_space = psock->saved_write_space; 148 - WRITE_ONCE(sk->sk_prot, psock->sk_proto); 148 + sock_replace_proto(sk, psock->sk_proto); 149 149 return 0; 150 150 } 151 151 152 152 unix_dgram_bpf_check_needs_rebuild(psock->sk_proto); 153 - WRITE_ONCE(sk->sk_prot, &unix_dgram_bpf_prot); 153 + sock_replace_proto(sk, &unix_dgram_bpf_prot); 154 154 return 0; 155 155 } 156 156 ··· 158 158 { 159 159 if (restore) { 160 160 sk->sk_write_space = psock->saved_write_space; 161 - WRITE_ONCE(sk->sk_prot, psock->sk_proto); 161 + sock_replace_proto(sk, psock->sk_proto); 162 162 return 0; 163 163 } 164 164 165 165 unix_stream_bpf_check_needs_rebuild(psock->sk_proto); 166 - WRITE_ONCE(sk->sk_prot, &unix_stream_bpf_prot); 166 + sock_replace_proto(sk, &unix_stream_bpf_prot); 167 167 return 0; 168 168 } 169 169
+4 -3
net/vmw_vsock/af_vsock.c
··· 1905 1905 err = 0; 1906 1906 transport = vsk->transport; 1907 1907 1908 - while ((data = vsock_connectible_has_data(vsk)) == 0) { 1908 + while (1) { 1909 1909 prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE); 1910 + data = vsock_connectible_has_data(vsk); 1911 + if (data != 0) 1912 + break; 1910 1913 1911 1914 if (sk->sk_err != 0 || 1912 1915 (sk->sk_shutdown & RCV_SHUTDOWN) || ··· 2094 2091 struct vsock_sock *vsk; 2095 2092 const struct vsock_transport *transport; 2096 2093 int err; 2097 - 2098 - DEFINE_WAIT(wait); 2099 2094 2100 2095 sk = sock->sk; 2101 2096 vsk = vsock_sk(sk);
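Re-checking the condition only after prepare_to_wait() closes a lost-wakeup window: in the old while ((data = vsock_connectible_has_data(vsk)) == 0) form, a wakeup arriving between the check and the enqueue on the wait queue was missed, and the task could sleep with data already pending. The safe shape, condensed from the hunk (error, shutdown and timeout handling elided):

	while (1) {
		prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE);
		data = vsock_connectible_has_data(vsk);
		if (data != 0)
			break;	/* condition evaluated only while queued */

		/* ...bail out on sk_err/shutdown/timeout, else schedule()... */
	}
	finish_wait(sk_sleep(sk), wait);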
+4 -2
security/commoncap.c
··· 401 401 &tmpbuf, size, GFP_NOFS); 402 402 dput(dentry); 403 403 404 - if (ret < 0 || !tmpbuf) 405 - return ret; 404 + if (ret < 0 || !tmpbuf) { 405 + size = ret; 406 + goto out_free; 407 + } 406 408 407 409 fs_ns = inode->i_sb->s_user_ns; 408 410 cap = (struct vfs_cap_data *) tmpbuf;
+6 -1
sound/aoa/soundbus/i2sbus/core.c
··· 147 147 return rc; 148 148 } 149 149 150 + /* Returns 1 if added, 0 for otherwise; don't return a negative value! */ 150 151 /* FIXME: look at device node refcounting */ 151 152 static int i2sbus_add_dev(struct macio_dev *macio, 152 153 struct i2sbus_control *control, ··· 214 213 * either as the second one in that case is just a modem. */ 215 214 if (!ok) { 216 215 kfree(dev); 217 - return -ENODEV; 216 + return 0; 218 217 } 219 218 220 219 mutex_init(&dev->lock); ··· 303 302 304 303 if (soundbus_add_one(&dev->sound)) { 305 304 printk(KERN_DEBUG "i2sbus: device registration error!\n"); 305 + if (dev->sound.ofdev.dev.kobj.state_initialized) { 306 + soundbus_dev_put(&dev->sound); 307 + return 0; 308 + } 306 309 goto err; 307 310 } 308 311
+23
sound/core/control.c
··· 753 753 } 754 754 EXPORT_SYMBOL(snd_ctl_rename_id); 755 755 756 + /** 757 + * snd_ctl_rename - rename the control on the card 758 + * @card: the card instance 759 + * @kctl: the control to rename 760 + * @name: the new name 761 + * 762 + * Renames the specified control on the card to the new name. 763 + * 764 + * Make sure to take the control write lock - down_write(&card->controls_rwsem). 765 + */ 766 + void snd_ctl_rename(struct snd_card *card, struct snd_kcontrol *kctl, 767 + const char *name) 768 + { 769 + remove_hash_entries(card, kctl); 770 + 771 + if (strscpy(kctl->id.name, name, sizeof(kctl->id.name)) < 0) 772 + pr_warn("ALSA: Renamed control new name '%s' truncated to '%s'\n", 773 + name, kctl->id.name); 774 + 775 + add_hash_entries(card, kctl); 776 + } 777 + EXPORT_SYMBOL(snd_ctl_rename); 778 + 756 779 #ifndef CONFIG_SND_CTL_FAST_LOOKUP 757 780 static struct snd_kcontrol * 758 781 snd_ctl_find_numid_slow(struct snd_card *card, unsigned int numid)
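snd_ctl_rename() exists because a bare strcpy() into kctl->id.name (as the ac97, ca0106, emu10k1 and realtek call sites below used to do) leaves the fast-lookup hash pointing at the old name; the helper brackets the strscpy() with remove_hash_entries()/add_hash_entries(). A usage sketch per the kerneldoc above ('card' and 'id' are assumed to identify an existing control):

	struct snd_kcontrol *kctl;

	down_write(&card->controls_rwsem);
	kctl = snd_ctl_find_id(card, &id);
	if (kctl)
		snd_ctl_rename(card, kctl, "Headphone Playback Switch");
	up_write(&card->controls_rwsem);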
+25 -8
sound/pci/ac97/ac97_codec.c
··· 2009 2009 err = device_register(&ac97->dev); 2010 2010 if (err < 0) { 2011 2011 ac97_err(ac97, "Can't register ac97 bus\n"); 2012 + put_device(&ac97->dev); 2012 2013 ac97->dev.bus = NULL; 2013 2014 return err; 2014 2015 } ··· 2656 2655 */ 2657 2656 static void set_ctl_name(char *dst, const char *src, const char *suffix) 2658 2657 { 2659 - if (suffix) 2660 - sprintf(dst, "%s %s", src, suffix); 2661 - else 2662 - strcpy(dst, src); 2663 - } 2658 + const size_t msize = SNDRV_CTL_ELEM_ID_NAME_MAXLEN; 2659 + 2660 + if (suffix) { 2661 + if (snprintf(dst, msize, "%s %s", src, suffix) >= msize) 2662 + pr_warn("ALSA: AC97 control name '%s %s' truncated to '%s'\n", 2663 + src, suffix, dst); 2664 + } else { 2665 + if (strscpy(dst, src, msize) < 0) 2666 + pr_warn("ALSA: AC97 control name '%s' truncated to '%s'\n", 2667 + src, dst); 2668 + } 2669 + } 2664 2670 2665 2671 /* remove the control with the given name and optional suffix */ 2666 2672 static int snd_ac97_remove_ctl(struct snd_ac97 *ac97, const char *name, ··· 2694 2686 const char *dst, const char *suffix) 2695 2687 { 2696 2688 struct snd_kcontrol *kctl = ctl_find(ac97, src, suffix); 2689 + char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN]; 2690 + 2697 2691 if (kctl) { 2698 - set_ctl_name(kctl->id.name, dst, suffix); 2692 + set_ctl_name(name, dst, suffix); 2693 + snd_ctl_rename(ac97->bus->card, kctl, name); 2699 2694 return 0; 2700 2695 } 2701 2696 return -ENOENT; ··· 2717 2706 const char *s2, const char *suffix) 2718 2707 { 2719 2708 struct snd_kcontrol *kctl1, *kctl2; 2709 + char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN]; 2710 + 2720 2711 kctl1 = ctl_find(ac97, s1, suffix); 2721 2712 kctl2 = ctl_find(ac97, s2, suffix); 2722 2713 if (kctl1 && kctl2) { 2723 - set_ctl_name(kctl1->id.name, s2, suffix); 2724 - set_ctl_name(kctl2->id.name, s1, suffix); 2714 + set_ctl_name(name, s2, suffix); 2715 + snd_ctl_rename(ac97->bus->card, kctl1, name); 2716 + 2717 + set_ctl_name(name, s1, suffix); 2718 + snd_ctl_rename(ac97->bus->card, kctl2, name); 2719 + 2725 2720 return 0; 2726 2721 } 2727 2722 return -ENOENT;
+3 -3
sound/pci/au88x0/au88x0.h
··· 141 141 #ifndef CHIP_AU8810 142 142 stream_t dma_wt[NR_WT]; 143 143 wt_voice_t wt_voice[NR_WT]; /* WT register cache. */ 144 - char mixwt[(NR_WT / NR_WTPB) * 6]; /* WT mixin objects */ 144 + s8 mixwt[(NR_WT / NR_WTPB) * 6]; /* WT mixin objects */ 145 145 #endif 146 146 147 147 /* Global resources */ ··· 235 235 static void vortex_connect_default(vortex_t * vortex, int en); 236 236 static int vortex_adb_allocroute(vortex_t * vortex, int dma, int nr_ch, 237 237 int dir, int type, int subdev); 238 - static char vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out, 239 - int restype); 238 + static int vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out, 239 + int restype); 240 240 #ifndef CHIP_AU8810 241 241 static int vortex_wt_allocroute(vortex_t * vortex, int dma, int nr_ch); 242 242 static void vortex_wt_connect(vortex_t * vortex, int en);
+1 -1
sound/pci/au88x0/au88x0_core.c
··· 1998 1998 out: Mean checkout if != 0. Else mean Checkin resource. 1999 1999 restype: Indicates type of resource to be checked in or out. 2000 2000 */ 2001 - static char 2001 + static int 2002 2002 vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out, int restype) 2003 2003 { 2004 2004 int i, qty = resnum[restype], resinuse = 0;
+1 -1
sound/pci/ca0106/ca0106_mixer.c
··· 720 720 { 721 721 struct snd_kcontrol *kctl = ctl_find(card, src); 722 722 if (kctl) { 723 - strcpy(kctl->id.name, dst); 723 + snd_ctl_rename(card, kctl, dst); 724 724 return 0; 725 725 } 726 726 return -ENOENT;
+1 -1
sound/pci/emu10k1/emumixer.c
··· 1767 1767 { 1768 1768 struct snd_kcontrol *kctl = ctl_find(card, src); 1769 1769 if (kctl) { 1770 - strcpy(kctl->id.name, dst); 1770 + snd_ctl_rename(card, kctl, dst); 1771 1771 return 0; 1772 1772 } 1773 1773 return -ENOENT;
+5 -7
sound/pci/hda/patch_realtek.c
··· 2142 2142 2143 2143 kctl = snd_hda_find_mixer_ctl(codec, oldname); 2144 2144 if (kctl) 2145 - strcpy(kctl->id.name, newname); 2145 + snd_ctl_rename(codec->card, kctl, newname); 2146 2146 } 2147 2147 2148 2148 static void alc1220_fixup_gb_dual_codecs(struct hda_codec *codec, ··· 6654 6654 { 6655 6655 struct hda_codec *cdc = dev_to_hda_codec(dev); 6656 6656 struct alc_spec *spec = cdc->spec; 6657 - int ret; 6658 6657 6659 - ret = component_bind_all(dev, spec->comps); 6660 - if (ret) 6661 - return ret; 6662 - 6663 - return 0; 6658 + return component_bind_all(dev, spec->comps); 6664 6659 } 6665 6660 6666 6661 static void comp_unbind(struct device *dev) ··· 9323 9328 SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST), 9324 9329 SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED), 9325 9330 SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED), 9331 + SND_PCI_QUIRK(0x103c, 0x896d, "HP ZBook Firefly 16 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9326 9332 SND_PCI_QUIRK(0x103c, 0x896e, "HP EliteBook x360 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9327 9333 SND_PCI_QUIRK(0x103c, 0x8971, "HP EliteBook 830 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9328 9334 SND_PCI_QUIRK(0x103c, 0x8972, "HP EliteBook 840 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ··· 9342 9346 SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED), 9343 9347 SND_PCI_QUIRK(0x103c, 0x89ac, "HP EliteBook 640 G9", ALC236_FIXUP_HP_GPIO_LED), 9344 9348 SND_PCI_QUIRK(0x103c, 0x89ae, "HP EliteBook 650 G9", ALC236_FIXUP_HP_GPIO_LED), 9349 + SND_PCI_QUIRK(0x103c, 0x89c0, "HP ZBook Power 15.6 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9345 9350 SND_PCI_QUIRK(0x103c, 0x89c3, "Zbook Studio G9", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 9346 9351 SND_PCI_QUIRK(0x103c, 0x89c6, "Zbook Fury 17 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9347 9352 SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), ··· 9397 9400 SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC), 9398 9401 SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401), 9399 9402 SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE), 9403 + SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402", ALC245_FIXUP_CS35L41_SPI_2), 9400 9404 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), 9401 9405 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), 9402 9406 SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
+13 -13
sound/pci/rme9652/hdsp.c
··· 433 433 struct snd_rawmidi *rmidi;
434 434 struct snd_rawmidi_substream *input;
435 435 struct snd_rawmidi_substream *output;
436 - char istimer; /* timer in use */
436 + signed char istimer; /* timer in use */
437 437 struct timer_list timer;
438 438 spinlock_t lock;
439 439 int pending;
··· 480 480 pid_t playback_pid;
481 481 int running;
482 482 int system_sample_rate;
483 - const char *channel_map;
483 + const signed char *channel_map;
484 484 int dev;
485 485 int irq;
486 486 unsigned long port;
··· 502 502 where the data for that channel can be read/written from/to.
503 503 */
504 504
505 - static const char channel_map_df_ss[HDSP_MAX_CHANNELS] = {
505 + static const signed char channel_map_df_ss[HDSP_MAX_CHANNELS] = {
506 506 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
507 507 18, 19, 20, 21, 22, 23, 24, 25
508 508 };
··· 517 517 -1, -1, -1, -1, -1, -1, -1, -1
518 518 };
519 519
520 - static const char channel_map_ds[HDSP_MAX_CHANNELS] = {
520 + static const signed char channel_map_ds[HDSP_MAX_CHANNELS] = {
521 521 /* ADAT channels are remapped */
522 522 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23,
523 523 /* channels 12 and 13 are S/PDIF */
··· 526 526 -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
527 527 };
528 528
529 - static const char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
529 + static const signed char channel_map_H9632_ss[HDSP_MAX_CHANNELS] = {
530 530 /* ADAT channels */
531 531 0, 1, 2, 3, 4, 5, 6, 7,
532 532 /* SPDIF */
··· 540 540 -1, -1
541 541 };
542 542
543 - static const char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
543 + static const signed char channel_map_H9632_ds[HDSP_MAX_CHANNELS] = {
544 544 /* ADAT */
545 545 1, 3, 5, 7,
546 546 /* SPDIF */
··· 554 554 -1, -1, -1, -1, -1, -1
555 555 };
556 556
557 - static const char channel_map_H9632_qs[HDSP_MAX_CHANNELS] = {
557 + static const signed char channel_map_H9632_qs[HDSP_MAX_CHANNELS] = {
558 558 /* ADAT is disabled in this mode */
559 559 /* SPDIF */
560 560 8, 9,
··· 3939 3939 return hdsp_hw_pointer(hdsp);
3940 3940 }
3941 3941
3942 - static char *hdsp_channel_buffer_location(struct hdsp *hdsp,
3942 + static signed char *hdsp_channel_buffer_location(struct hdsp *hdsp,
3943 3943 int stream,
3944 3944 int channel)
3945 3945
··· 3964 3964 void __user *src, unsigned long count)
3965 3965 {
3966 3966 struct hdsp *hdsp = snd_pcm_substream_chip(substream);
3967 - char *channel_buf;
3967 + signed char *channel_buf;
3968 3968
3969 3969 if (snd_BUG_ON(pos + count > HDSP_CHANNEL_BUFFER_BYTES))
3970 3970 return -EINVAL;
··· 3982 3982 void *src, unsigned long count)
3983 3983 {
3984 3984 struct hdsp *hdsp = snd_pcm_substream_chip(substream);
3985 - char *channel_buf;
3985 + signed char *channel_buf;
3986 3986
3987 3987 channel_buf = hdsp_channel_buffer_location(hdsp, substream->pstr->stream, channel);
3988 3988 if (snd_BUG_ON(!channel_buf))
··· 3996 3996 void __user *dst, unsigned long count)
3997 3997 {
3998 3998 struct hdsp *hdsp = snd_pcm_substream_chip(substream);
3999 - char *channel_buf;
3999 + signed char *channel_buf;
4000 4000
4001 4001 if (snd_BUG_ON(pos + count > HDSP_CHANNEL_BUFFER_BYTES))
4002 4002 return -EINVAL;
··· 4014 4014 void *dst, unsigned long count)
4015 4015 {
4016 4016 struct hdsp *hdsp = snd_pcm_substream_chip(substream);
4017 - char *channel_buf;
4017 + signed char *channel_buf;
4018 4018
4019 4019 channel_buf = hdsp_channel_buffer_location(hdsp, substream->pstr->stream, channel);
4020 4020 if (snd_BUG_ON(!channel_buf))
··· 4028 4028 unsigned long count)
4029 4029 {
4030 4030 struct hdsp *hdsp = snd_pcm_substream_chip(substream); 4031 - char *channel_buf; 4031 + signed char *channel_buf; 4032 4032 4033 4033 channel_buf = hdsp_channel_buffer_location (hdsp, substream->pstr->stream, channel); 4034 4034 if (snd_BUG_ON(!channel_buf))
+11 -11
sound/pci/rme9652/rme9652.c
··· 230 230 int last_spdif_sample_rate; /* so that we can catch externally ... */ 231 231 int last_adat_sample_rate; /* ... induced rate changes */ 232 232 233 - const char *channel_map; 233 + const signed char *channel_map; 234 234 235 235 struct snd_card *card; 236 236 struct snd_pcm *pcm; ··· 247 247 where the data for that channel can be read/written from/to. 248 248 */ 249 249 250 - static const char channel_map_9652_ss[26] = { 250 + static const signed char channel_map_9652_ss[26] = { 251 251 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 252 252 18, 19, 20, 21, 22, 23, 24, 25 253 253 }; 254 254 255 - static const char channel_map_9636_ss[26] = { 255 + static const signed char channel_map_9636_ss[26] = { 256 256 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 257 257 /* channels 16 and 17 are S/PDIF */ 258 258 24, 25, ··· 260 260 -1, -1, -1, -1, -1, -1, -1, -1 261 261 }; 262 262 263 - static const char channel_map_9652_ds[26] = { 263 + static const signed char channel_map_9652_ds[26] = { 264 264 /* ADAT channels are remapped */ 265 265 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 266 266 /* channels 12 and 13 are S/PDIF */ ··· 269 269 -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 270 270 }; 271 271 272 - static const char channel_map_9636_ds[26] = { 272 + static const signed char channel_map_9636_ds[26] = { 273 273 /* ADAT channels are remapped */ 274 274 1, 3, 5, 7, 9, 11, 13, 15, 275 275 /* channels 8 and 9 are S/PDIF */ ··· 1819 1819 return rme9652_hw_pointer(rme9652); 1820 1820 } 1821 1821 1822 - static char *rme9652_channel_buffer_location(struct snd_rme9652 *rme9652, 1822 + static signed char *rme9652_channel_buffer_location(struct snd_rme9652 *rme9652, 1823 1823 int stream, 1824 1824 int channel) 1825 1825 ··· 1847 1847 void __user *src, unsigned long count) 1848 1848 { 1849 1849 struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream); 1850 - char *channel_buf; 1850 + signed char *channel_buf; 1851 1851 1852 1852 if (snd_BUG_ON(pos + count > RME9652_CHANNEL_BUFFER_BYTES)) 1853 1853 return -EINVAL; ··· 1867 1867 void *src, unsigned long count) 1868 1868 { 1869 1869 struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream); 1870 - char *channel_buf; 1870 + signed char *channel_buf; 1871 1871 1872 1872 channel_buf = rme9652_channel_buffer_location(rme9652, 1873 1873 substream->pstr->stream, ··· 1883 1883 void __user *dst, unsigned long count) 1884 1884 { 1885 1885 struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream); 1886 - char *channel_buf; 1886 + signed char *channel_buf; 1887 1887 1888 1888 if (snd_BUG_ON(pos + count > RME9652_CHANNEL_BUFFER_BYTES)) 1889 1889 return -EINVAL; ··· 1903 1903 void *dst, unsigned long count) 1904 1904 { 1905 1905 struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream); 1906 - char *channel_buf; 1906 + signed char *channel_buf; 1907 1907 1908 1908 channel_buf = rme9652_channel_buffer_location(rme9652, 1909 1909 substream->pstr->stream, ··· 1919 1919 unsigned long count) 1920 1920 { 1921 1921 struct snd_rme9652 *rme9652 = snd_pcm_substream_chip(substream); 1922 - char *channel_buf; 1922 + signed char *channel_buf; 1923 1923 1924 1924 channel_buf = rme9652_channel_buffer_location (rme9652, 1925 1925 substream->pstr->stream,
+21
sound/soc/amd/yc/acp6x-mach.c
··· 49 49 .driver_data = &acp6x_card, 50 50 .matches = { 51 51 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 52 + DMI_MATCH(DMI_PRODUCT_NAME, "21D0"), 53 + } 54 + }, 55 + { 56 + .driver_data = &acp6x_card, 57 + .matches = { 58 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 59 + DMI_MATCH(DMI_PRODUCT_NAME, "21D0"), 60 + } 61 + }, 62 + { 63 + .driver_data = &acp6x_card, 64 + .matches = { 65 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 66 + DMI_MATCH(DMI_PRODUCT_NAME, "21D1"), 67 + } 68 + }, 69 + { 70 + .driver_data = &acp6x_card, 71 + .matches = { 72 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 52 73 DMI_MATCH(DMI_PRODUCT_NAME, "21D2"), 53 74 } 54 75 },
+1
sound/soc/codecs/Kconfig
··· 1629 1629 config SND_SOC_TLV320ADC3XXX 1630 1630 tristate "Texas Instruments TLV320ADC3001/3101 audio ADC" 1631 1631 depends on I2C 1632 + depends on GPIOLIB 1632 1633 help 1633 1634 Enable support for Texas Instruments TLV320ADC3001 and TLV320ADC3101 1634 1635 ADCs.
+1 -1
sound/soc/codecs/cx2072x.h
··· 177 177 #define CX2072X_PLBK_DRC_PARM_LEN 9 178 178 #define CX2072X_CLASSD_AMP_LEN 6 179 179 180 - /* DAI interfae type */ 180 + /* DAI interface type */ 181 181 #define CX2072X_DAI_HIFI 1 182 182 #define CX2072X_DAI_DSP 2 183 183 #define CX2072X_DAI_DSP_PWM 3 /* 4 ch, including mic and AEC */
+19 -15
sound/soc/codecs/jz4725b.c
··· 136 136 #define REG_CGR3_GO1L_OFFSET 0 137 137 #define REG_CGR3_GO1L_MASK (0x1f << REG_CGR3_GO1L_OFFSET) 138 138 139 + #define REG_CGR10_GIL_OFFSET 0 140 + #define REG_CGR10_GIR_OFFSET 4 141 + 139 142 struct jz_icdc { 140 143 struct regmap *regmap; 141 144 void __iomem *base; 142 145 struct clk *clk; 143 146 }; 144 147 145 - static const SNDRV_CTL_TLVD_DECLARE_DB_LINEAR(jz4725b_dac_tlv, -2250, 0); 146 - static const SNDRV_CTL_TLVD_DECLARE_DB_LINEAR(jz4725b_line_tlv, -1500, 600); 148 + static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(jz4725b_adc_tlv, 0, 150, 0); 149 + static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(jz4725b_dac_tlv, -2250, 150, 0); 147 150 148 151 static const struct snd_kcontrol_new jz4725b_codec_controls[] = { 149 152 SOC_DOUBLE_TLV("Master Playback Volume", ··· 154 151 REG_CGR1_GODL_OFFSET, 155 152 REG_CGR1_GODR_OFFSET, 156 153 0xf, 1, jz4725b_dac_tlv), 157 - SOC_DOUBLE_R_TLV("Master Capture Volume", 158 - JZ4725B_CODEC_REG_CGR3, 159 - JZ4725B_CODEC_REG_CGR2, 160 - REG_CGR2_GO1R_OFFSET, 161 - 0x1f, 1, jz4725b_line_tlv), 154 + SOC_DOUBLE_TLV("Master Capture Volume", 155 + JZ4725B_CODEC_REG_CGR10, 156 + REG_CGR10_GIL_OFFSET, 157 + REG_CGR10_GIR_OFFSET, 158 + 0xf, 0, jz4725b_adc_tlv), 162 159 163 160 SOC_SINGLE("Master Playback Switch", JZ4725B_CODEC_REG_CR1, 164 161 REG_CR1_DAC_MUTE_OFFSET, 1, 1), ··· 183 180 jz4725b_codec_adc_src_texts, 184 181 jz4725b_codec_adc_src_values); 185 182 static const struct snd_kcontrol_new jz4725b_codec_adc_src_ctrl = 186 - SOC_DAPM_ENUM("Route", jz4725b_codec_adc_src_enum); 183 + SOC_DAPM_ENUM("ADC Source Capture Route", jz4725b_codec_adc_src_enum); 187 184 188 185 static const struct snd_kcontrol_new jz4725b_codec_mixer_controls[] = { 189 186 SOC_DAPM_SINGLE("Line In Bypass", JZ4725B_CODEC_REG_CR1, ··· 228 225 SND_SOC_DAPM_ADC("ADC", "Capture", 229 226 JZ4725B_CODEC_REG_PMR1, REG_PMR1_SB_ADC_OFFSET, 1), 230 227 231 - SND_SOC_DAPM_MUX("ADC Source", SND_SOC_NOPM, 0, 0, 228 + SND_SOC_DAPM_MUX("ADC Source Capture Route", SND_SOC_NOPM, 0, 0, 232 229 &jz4725b_codec_adc_src_ctrl), 233 230 234 231 /* Mixer */ ··· 239 236 SND_SOC_DAPM_MIXER("DAC to Mixer", JZ4725B_CODEC_REG_CR1, 240 237 REG_CR1_DACSEL_OFFSET, 0, NULL, 0), 241 238 242 - SND_SOC_DAPM_MIXER("Line In", SND_SOC_NOPM, 0, 0, NULL, 0), 239 + SND_SOC_DAPM_MIXER("Line In", JZ4725B_CODEC_REG_PMR1, 240 + REG_PMR1_SB_LIN_OFFSET, 1, NULL, 0), 243 241 SND_SOC_DAPM_MIXER("HP Out", JZ4725B_CODEC_REG_CR1, 244 242 REG_CR1_HP_DIS_OFFSET, 1, NULL, 0), 245 243 ··· 287 283 {"Mixer", NULL, "DAC to Mixer"}, 288 284 289 285 {"Mixer to ADC", NULL, "Mixer"}, 290 - {"ADC Source", "Mixer", "Mixer to ADC"}, 291 - {"ADC Source", "Line In", "Line In"}, 292 - {"ADC Source", "Mic 1", "Mic 1"}, 293 - {"ADC Source", "Mic 2", "Mic 2"}, 294 - {"ADC", NULL, "ADC Source"}, 286 + {"ADC Source Capture Route", "Mixer", "Mixer to ADC"}, 287 + {"ADC Source Capture Route", "Line In", "Line In"}, 288 + {"ADC Source Capture Route", "Mic 1", "Mic 1"}, 289 + {"ADC Source Capture Route", "Mic 2", "Mic 2"}, 290 + {"ADC", NULL, "ADC Source Capture Route"}, 295 291 296 292 {"Out Stage", NULL, "Mixer"}, 297 293 {"HP Out", NULL, "Out Stage"},
+4 -4
sound/soc/codecs/mt6660.c
··· 503 503 dev_err(chip->dev, "read chip revision fail\n"); 504 504 goto probe_fail; 505 505 } 506 + pm_runtime_set_active(chip->dev); 507 + pm_runtime_enable(chip->dev); 506 508 507 509 ret = devm_snd_soc_register_component(chip->dev, 508 510 &mt6660_component_driver, 509 511 &mt6660_codec_dai, 1); 510 - if (!ret) { 511 - pm_runtime_set_active(chip->dev); 512 - pm_runtime_enable(chip->dev); 513 - } 512 + if (ret) 513 + pm_runtime_disable(chip->dev); 514 514 515 515 return ret; 516 516
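The reorder above gives the probe path the usual runtime-PM shape: mark the device active and enable runtime PM before registering the component, then unwind with pm_runtime_disable() if registration fails, so the enable count never ends up unbalanced. A condensed sketch of that shape, with hypothetical my_component_driver/my_dai standing in for the mt6660 ones:

#include <linux/pm_runtime.h>
#include <sound/soc.h>

static const struct snd_soc_component_driver my_component_driver = { };
static struct snd_soc_dai_driver my_dai = { .name = "my-dai" };

static int my_probe(struct device *dev)
{
	int ret;

	pm_runtime_set_active(dev);	/* hardware is powered after init */
	pm_runtime_enable(dev);

	ret = devm_snd_soc_register_component(dev, &my_component_driver,
					      &my_dai, 1);
	if (ret)
		pm_runtime_disable(dev);	/* keep enable/disable balanced */

	return ret;
}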
+11 -9
sound/soc/codecs/rt1019.c
··· 391 391 unsigned int rx_mask, int slots, int slot_width) 392 392 { 393 393 struct snd_soc_component *component = dai->component; 394 - unsigned int val = 0, rx_slotnum; 394 + unsigned int cn = 0, cl = 0, rx_slotnum; 395 395 int ret = 0, first_bit; 396 396 397 397 switch (slots) { 398 398 case 4: 399 - val |= RT1019_I2S_TX_4CH; 399 + cn = RT1019_I2S_TX_4CH; 400 400 break; 401 401 case 6: 402 - val |= RT1019_I2S_TX_6CH; 402 + cn = RT1019_I2S_TX_6CH; 403 403 break; 404 404 case 8: 405 - val |= RT1019_I2S_TX_8CH; 405 + cn = RT1019_I2S_TX_8CH; 406 406 break; 407 407 case 2: 408 408 break; ··· 412 412 413 413 switch (slot_width) { 414 414 case 20: 415 - val |= RT1019_I2S_DL_20; 415 + cl = RT1019_TDM_CL_20; 416 416 break; 417 417 case 24: 418 - val |= RT1019_I2S_DL_24; 418 + cl = RT1019_TDM_CL_24; 419 419 break; 420 420 case 32: 421 - val |= RT1019_I2S_DL_32; 421 + cl = RT1019_TDM_CL_32; 422 422 break; 423 423 case 8: 424 - val |= RT1019_I2S_DL_8; 424 + cl = RT1019_TDM_CL_8; 425 425 break; 426 426 case 16: 427 427 break; ··· 470 470 goto _set_tdm_err_; 471 471 } 472 472 473 + snd_soc_component_update_bits(component, RT1019_TDM_1, 474 + RT1019_TDM_CL_MASK, cl); 473 475 snd_soc_component_update_bits(component, RT1019_TDM_2, 474 - RT1019_I2S_CH_TX_MASK | RT1019_I2S_DF_MASK, val); 476 + RT1019_I2S_CH_TX_MASK, cn); 475 477 476 478 _set_tdm_err_: 477 479 return ret;
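Splitting cl and cn apart works because snd_soc_component_update_bits() is a read-modify-write helper: only the bits covered by the mask argument are replaced. The slot width and the channel count live in different registers, so each needs its own masked write, as the hunk does:

/* read reg, clear the masked bits, OR in val, write back */
snd_soc_component_update_bits(component, RT1019_TDM_1,
			      RT1019_TDM_CL_MASK, cl);	/* slot width */
snd_soc_component_update_bits(component, RT1019_TDM_2,
			      RT1019_I2S_CH_TX_MASK, cn);	/* channel count */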
+6
sound/soc/codecs/rt1019.h
··· 95 95 #define RT1019_TDM_BCLK_MASK (0x1 << 6) 96 96 #define RT1019_TDM_BCLK_NORM (0x0 << 6) 97 97 #define RT1019_TDM_BCLK_INV (0x1 << 6) 98 + #define RT1019_TDM_CL_MASK (0x7) 99 + #define RT1019_TDM_CL_8 (0x4) 100 + #define RT1019_TDM_CL_32 (0x3) 101 + #define RT1019_TDM_CL_24 (0x2) 102 + #define RT1019_TDM_CL_20 (0x1) 103 + #define RT1019_TDM_CL_16 (0x0) 98 104 99 105 /* 0x0401 TDM Control-2 */ 100 106 #define RT1019_I2S_CH_TX_MASK (0x3 << 6)
+14 -3
sound/soc/codecs/rt1308-sdw.c
··· 50 50 case 0x3008: 51 51 case 0x300a: 52 52 case 0xc000: 53 + case 0xc710: 53 54 case 0xc860 ... 0xc863: 54 55 case 0xc870 ... 0xc873: 55 56 return true; ··· 201 200 { 202 201 struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(dev); 203 202 int ret = 0; 203 + unsigned int tmp; 204 204 205 205 if (rt1308->hw_init) 206 206 return 0; ··· 233 231 /* sw reset */ 234 232 regmap_write(rt1308->regmap, RT1308_SDW_RESET, 0); 235 233 234 + regmap_read(rt1308->regmap, 0xc710, &tmp); 235 + rt1308->hw_ver = tmp; 236 + dev_dbg(dev, "%s, hw_ver=0x%x\n", __func__, rt1308->hw_ver); 237 + 236 238 /* initial settings */ 237 239 regmap_write(rt1308->regmap, 0xc103, 0xc0); 238 240 regmap_write(rt1308->regmap, 0xc030, 0x17); ··· 252 246 regmap_write(rt1308->regmap, 0xc062, 0x05); 253 247 regmap_write(rt1308->regmap, 0xc171, 0x07); 254 248 regmap_write(rt1308->regmap, 0xc173, 0x0d); 255 - regmap_write(rt1308->regmap, 0xc311, 0x7f); 256 - regmap_write(rt1308->regmap, 0xc900, 0x90); 249 + if (rt1308->hw_ver == RT1308_VER_C) { 250 + regmap_write(rt1308->regmap, 0xc311, 0x7f); 251 + regmap_write(rt1308->regmap, 0xc300, 0x09); 252 + } else { 253 + regmap_write(rt1308->regmap, 0xc311, 0x4f); 254 + regmap_write(rt1308->regmap, 0xc300, 0x0b); 255 + } 256 + regmap_write(rt1308->regmap, 0xc900, 0x5a); 257 257 regmap_write(rt1308->regmap, 0xc1a0, 0x84); 258 258 regmap_write(rt1308->regmap, 0xc1a1, 0x01); 259 259 regmap_write(rt1308->regmap, 0xc360, 0x78); ··· 269 257 regmap_write(rt1308->regmap, 0xc070, 0x00); 270 258 regmap_write(rt1308->regmap, 0xc100, 0xd7); 271 259 regmap_write(rt1308->regmap, 0xc101, 0xd7); 272 - regmap_write(rt1308->regmap, 0xc300, 0x09); 273 260 274 261 if (rt1308->first_hw_init) { 275 262 regcache_cache_bypass(rt1308->regmap, false);
+3
sound/soc/codecs/rt1308-sdw.h
··· 139 139 { 0x3005, 0x23 }, 140 140 { 0x3008, 0x02 }, 141 141 { 0x300a, 0x00 }, 142 + { 0xc000 | (RT1308_DATA_PATH << 4), 0x00 }, 142 143 { 0xc003 | (RT1308_DAC_SET << 4), 0x00 }, 143 144 { 0xc000 | (RT1308_POWER << 4), 0x00 }, 144 145 { 0xc001 | (RT1308_POWER << 4), 0x00 }, 145 146 { 0xc002 | (RT1308_POWER << 4), 0x00 }, 147 + { 0xc000 | (RT1308_POWER_STATUS << 4), 0x00 }, 146 148 }; 147 149 148 150 #define RT1308_SDW_OFFSET 0xc000 ··· 165 163 bool first_hw_init; 166 164 int rx_mask; 167 165 int slots; 166 + int hw_ver; 168 167 }; 169 168 170 169 struct sdw_stream_data {
+5
sound/soc/codecs/rt1308.h
··· 286 286 RT1308_AIFS 287 287 }; 288 288 289 + enum rt1308_hw_ver { 290 + RT1308_VER_C = 2, 291 + RT1308_VER_D 292 + }; 293 + 289 294 #endif /* end of _RT1308_H_ */
+13 -2
sound/soc/codecs/rt5682s.c
··· 1981 1981 unsigned int rx_mask, int slots, int slot_width) 1982 1982 { 1983 1983 struct snd_soc_component *component = dai->component; 1984 - unsigned int cl, val = 0; 1984 + unsigned int cl, val = 0, tx_slotnum; 1985 1985 1986 1986 if (tx_mask || rx_mask) 1987 1987 snd_soc_component_update_bits(component, ··· 1989 1989 else 1990 1990 snd_soc_component_update_bits(component, 1991 1991 RT5682S_TDM_ADDA_CTRL_2, RT5682S_TDM_EN, 0); 1992 + 1993 + /* Tx slot configuration */ 1994 + tx_slotnum = hweight_long(tx_mask); 1995 + if (tx_slotnum) { 1996 + if (tx_slotnum > slots) { 1997 + dev_err(component->dev, "Invalid or oversized Tx slots.\n"); 1998 + return -EINVAL; 1999 + } 2000 + val |= (tx_slotnum - 1) << RT5682S_TDM_ADC_DL_SFT; 2001 + } 1992 2002 1993 2003 switch (slots) { 1994 2004 case 4: ··· 2020 2010 } 2021 2011 2022 2012 snd_soc_component_update_bits(component, RT5682S_TDM_CTRL, 2023 - RT5682S_TDM_TX_CH_MASK | RT5682S_TDM_RX_CH_MASK, val); 2013 + RT5682S_TDM_TX_CH_MASK | RT5682S_TDM_RX_CH_MASK | 2014 + RT5682S_TDM_ADC_DL_MASK, val); 2024 2015 2025 2016 switch (slot_width) { 2026 2017 case 8:
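The new Tx handling derives the slot count from the caller's bitmap with hweight_long(), a population count, and rejects a count larger than the configured number of slots. In isolation:

#include <linux/bitops.h>
#include <linux/errno.h>

static int example_tx_check(unsigned long tx_mask, int slots)
{
	/* hweight_long() counts set bits: 0x3 (slots 0 and 1) yields 2 */
	unsigned int tx_slotnum = hweight_long(tx_mask);

	return tx_slotnum && tx_slotnum <= slots ? 0 : -EINVAL;
}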
+1
sound/soc/codecs/rt5682s.h
··· 899 899 #define RT5682S_TDM_RX_CH_8 (0x3 << 8) 900 900 #define RT5682S_TDM_ADC_LCA_MASK (0x7 << 4) 901 901 #define RT5682S_TDM_ADC_LCA_SFT 4 902 + #define RT5682S_TDM_ADC_DL_MASK (0x3 << 0) 902 903 #define RT5682S_TDM_ADC_DL_SFT 0 903 904 904 905 /* TDM control 2 (0x007a) */
+1 -1
sound/soc/codecs/tlv320adc3xxx.c
··· 1449 1449 .of_match_table = tlv320adc3xxx_of_match, 1450 1450 }, 1451 1451 .probe_new = adc3xxx_i2c_probe, 1452 - .remove = adc3xxx_i2c_remove, 1452 + .remove = __exit_p(adc3xxx_i2c_remove), 1453 1453 .id_table = adc3xxx_i2c_id, 1454 1454 }; 1455 1455
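Wrapping the remove callback in __exit_p() pairs with an __exit-annotated function: built-in kernels discard __exit sections, so taking the function's address must be avoided there. The macro, paraphrased from include/linux/init.h, is simply:

#ifdef MODULE
#define __exit_p(x)	x	/* modules keep their exit code */
#else
#define __exit_p(x)	NULL	/* built-in: __exit code is discarded */
#endif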
+4 -3
sound/soc/codecs/wm5102.c
··· 2099 2099 regmap_update_bits(arizona->regmap, wm5102_digital_vu[i], 2100 2100 WM5102_DIG_VU, WM5102_DIG_VU); 2101 2101 2102 + pm_runtime_enable(&pdev->dev); 2103 + pm_runtime_idle(&pdev->dev); 2104 + 2102 2105 ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, 2103 2106 "ADSP2 Compressed IRQ", wm5102_adsp2_irq, 2104 2107 wm5102); ··· 2134 2131 goto err_spk_irqs; 2135 2132 } 2136 2133 2137 - pm_runtime_enable(&pdev->dev); 2138 - pm_runtime_idle(&pdev->dev); 2139 - 2140 2134 return ret; 2141 2135 2142 2136 err_spk_irqs: ··· 2142 2142 arizona_set_irq_wake(arizona, ARIZONA_IRQ_DSP_IRQ1, 0); 2143 2143 arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, wm5102); 2144 2144 err_jack_codec_dev: 2145 + pm_runtime_disable(&pdev->dev); 2145 2146 arizona_jack_codec_dev_remove(&wm5102->core); 2146 2147 2147 2148 return ret;
+4 -3
sound/soc/codecs/wm5110.c
··· 2457 2457 regmap_update_bits(arizona->regmap, wm5110_digital_vu[i], 2458 2458 WM5110_DIG_VU, WM5110_DIG_VU); 2459 2459 2460 + pm_runtime_enable(&pdev->dev); 2461 + pm_runtime_idle(&pdev->dev); 2462 + 2460 2463 ret = arizona_request_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, 2461 2464 "ADSP2 Compressed IRQ", wm5110_adsp2_irq, 2462 2465 wm5110); ··· 2492 2489 goto err_spk_irqs; 2493 2490 } 2494 2491 2495 - pm_runtime_enable(&pdev->dev); 2496 - pm_runtime_idle(&pdev->dev); 2497 - 2498 2492 return ret; 2499 2493 2500 2494 err_spk_irqs: ··· 2500 2500 arizona_set_irq_wake(arizona, ARIZONA_IRQ_DSP_IRQ1, 0); 2501 2501 arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, wm5110); 2502 2502 err_jack_codec_dev: 2503 + pm_runtime_disable(&pdev->dev); 2503 2504 arizona_jack_codec_dev_remove(&wm5110->core); 2504 2505 2505 2506 return ret;
+52 -2
sound/soc/codecs/wm8962.c
··· 1840 1840 4, 1, 0, inmix_tlv), 1841 1841 }; 1842 1842 1843 + static int tp_event(struct snd_soc_dapm_widget *w, 1844 + struct snd_kcontrol *kcontrol, int event) 1845 + { 1846 + int ret, reg, val, mask; 1847 + struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); 1848 + 1849 + ret = pm_runtime_resume_and_get(component->dev); 1850 + if (ret < 0) { 1851 + dev_err(component->dev, "Failed to resume device: %d\n", ret); 1852 + return ret; 1853 + } 1854 + 1855 + reg = WM8962_ADDITIONAL_CONTROL_4; 1856 + 1857 + if (!strcmp(w->name, "TEMP_HP")) { 1858 + mask = WM8962_TEMP_ENA_HP_MASK; 1859 + val = WM8962_TEMP_ENA_HP; 1860 + } else if (!strcmp(w->name, "TEMP_SPK")) { 1861 + mask = WM8962_TEMP_ENA_SPK_MASK; 1862 + val = WM8962_TEMP_ENA_SPK; 1863 + } else { 1864 + pm_runtime_put(component->dev); 1865 + return -EINVAL; 1866 + } 1867 + 1868 + switch (event) { 1869 + case SND_SOC_DAPM_POST_PMD: 1870 + val = 0; 1871 + fallthrough; 1872 + case SND_SOC_DAPM_POST_PMU: 1873 + ret = snd_soc_component_update_bits(component, reg, mask, val); 1874 + break; 1875 + default: 1876 + WARN(1, "Invalid event %d\n", event); 1877 + pm_runtime_put(component->dev); 1878 + return -EINVAL; 1879 + } 1880 + 1881 + pm_runtime_put(component->dev); 1882 + 1883 + return 0; 1884 + } 1885 + 1843 1886 static int cp_event(struct snd_soc_dapm_widget *w, 1844 1887 struct snd_kcontrol *kcontrol, int event) 1845 1888 { ··· 2183 2140 SND_SOC_DAPM_SUPPLY_S("DSP2", 1, WM8962_DSP2_POWER_MANAGEMENT, 2184 2141 WM8962_DSP2_ENA_SHIFT, 0, dsp2_event, 2185 2142 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), 2186 - SND_SOC_DAPM_SUPPLY("TEMP_HP", WM8962_ADDITIONAL_CONTROL_4, 2, 0, NULL, 0), 2187 - SND_SOC_DAPM_SUPPLY("TEMP_SPK", WM8962_ADDITIONAL_CONTROL_4, 1, 0, NULL, 0), 2143 + SND_SOC_DAPM_SUPPLY("TEMP_HP", SND_SOC_NOPM, 0, 0, tp_event, 2144 + SND_SOC_DAPM_POST_PMU|SND_SOC_DAPM_POST_PMD), 2145 + SND_SOC_DAPM_SUPPLY("TEMP_SPK", SND_SOC_NOPM, 0, 0, tp_event, 2146 + SND_SOC_DAPM_POST_PMU|SND_SOC_DAPM_POST_PMD), 2188 2147 2189 2148 SND_SOC_DAPM_MIXER("INPGAL", WM8962_LEFT_INPUT_PGA_CONTROL, 4, 0, 2190 2149 inpgal, ARRAY_SIZE(inpgal)), ··· 3807 3762 &soc_component_dev_wm8962, &wm8962_dai, 1); 3808 3763 if (ret < 0) 3809 3764 goto err_pm_runtime; 3765 + 3766 + regmap_update_bits(wm8962->regmap, WM8962_ADDITIONAL_CONTROL_4, 3767 + WM8962_TEMP_ENA_HP_MASK, 0); 3768 + regmap_update_bits(wm8962->regmap, WM8962_ADDITIONAL_CONTROL_4, 3769 + WM8962_TEMP_ENA_SPK_MASK, 0); 3810 3770 3811 3771 regcache_cache_only(wm8962->regmap, true); 3812 3772
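TEMP_HP and TEMP_SPK become SND_SOC_NOPM widgets with an event callback so that the write to ADDITIONAL_CONTROL_4 happens while the device is runtime-resumed, instead of going through a register-backed widget that may touch a suspended device. The bracketing idiom tp_event() is built on, distilled into a hypothetical helper:

#include <linux/pm_runtime.h>
#include <sound/soc.h>

static int touch_hw(struct snd_soc_component *c, unsigned int reg,
		    unsigned int mask, unsigned int val)
{
	int ret = pm_runtime_resume_and_get(c->dev);	/* takes a usage ref */

	if (ret < 0)
		return ret;	/* no reference held on failure */

	ret = snd_soc_component_update_bits(c, reg, mask, val);
	pm_runtime_put(c->dev);	/* drop the reference when done */
	return ret;
}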
+4 -3
sound/soc/codecs/wm8997.c
··· 1161 1161 regmap_update_bits(arizona->regmap, wm8997_digital_vu[i], 1162 1162 WM8997_DIG_VU, WM8997_DIG_VU); 1163 1163 1164 + pm_runtime_enable(&pdev->dev); 1165 + pm_runtime_idle(&pdev->dev); 1166 + 1164 1167 arizona_init_common(arizona); 1165 1168 1166 1169 ret = arizona_init_vol_limit(arizona); ··· 1182 1179 goto err_spk_irqs; 1183 1180 } 1184 1181 1185 - pm_runtime_enable(&pdev->dev); 1186 - pm_runtime_idle(&pdev->dev); 1187 - 1188 1182 return ret; 1189 1183 1190 1184 err_spk_irqs: 1191 1185 arizona_free_spk_irqs(arizona); 1192 1186 err_jack_codec_dev: 1187 + pm_runtime_disable(&pdev->dev); 1193 1188 arizona_jack_codec_dev_remove(&wm8997->core); 1194 1189 1195 1190 return ret;
+1 -1
sound/soc/generic/audio-graph-card.c
··· 417 417 * or has convert-xxx property 418 418 */ 419 419 if ((of_get_child_count(codec_port) > 1) || 420 - (adata->convert_rate || adata->convert_channels)) 420 + asoc_simple_is_convert_required(adata)) 421 421 return true; 422 422 423 423 return false;
+15
sound/soc/generic/simple-card-utils.c
··· 85 85 } 86 86 EXPORT_SYMBOL_GPL(asoc_simple_parse_convert); 87 87 88 + /** 89 + * asoc_simple_is_convert_required() - Query if HW param conversion was requested 90 + * @data: Link data. 91 + * 92 + * Returns true if any HW param conversion was requested for this DAI link with 93 + * any "convert-xxx" properties. 94 + */ 95 + bool asoc_simple_is_convert_required(const struct asoc_simple_data *data) 96 + { 97 + return data->convert_rate || 98 + data->convert_channels || 99 + data->convert_sample_format; 100 + } 101 + EXPORT_SYMBOL_GPL(asoc_simple_is_convert_required); 102 + 88 103 int asoc_simple_parse_daifmt(struct device *dev, 89 104 struct device_node *node, 90 105 struct device_node *codec,
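The point of the helper is to give the simple-card style drivers a single predicate for "does this link request any conversion", including the newer convert_sample_format field that the open-coded checks missed. Caller side, as the audio-graph and simple-card hunks in this merge use it:

/* a link that converts rate, channels or sample format needs to be
 * handled as a DPCM front-end/back-end pair */
if (asoc_simple_is_convert_required(adata))
	return true;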
+1 -2
sound/soc/generic/simple-card.c
··· 393 393 * or has convert-xxx property 394 394 */ 395 395 if (dpcm_selectable && 396 - (num > 2 || 397 - adata.convert_rate || adata.convert_channels)) { 396 + (num > 2 || asoc_simple_is_convert_required(&adata))) { 398 397 /* 399 398 * np 400 399 * |1(CPU)|0(Codec) li->cpu
+12
sound/soc/intel/boards/sof_rt5682.c
··· 223 223 SOF_RT5682_SSP_AMP(2) | 224 224 SOF_RT5682_NUM_HDMIDEV(4)), 225 225 }, 226 + { 227 + .callback = sof_rt5682_quirk_cb, 228 + .matches = { 229 + DMI_MATCH(DMI_PRODUCT_FAMILY, "Google_Rex"), 230 + }, 231 + .driver_data = (void *)(SOF_RT5682_MCLK_EN | 232 + SOF_RT5682_SSP_CODEC(2) | 233 + SOF_SPEAKER_AMP_PRESENT | 234 + SOF_RT5682_SSP_AMP(0) | 235 + SOF_RT5682_NUM_HDMIDEV(4) 236 + ), 237 + }, 226 238 {} 227 239 }; 228 240
+11
sound/soc/intel/boards/sof_sdw.c
··· 202 202 SOF_SDW_PCH_DMIC | 203 203 RT711_JD1), 204 204 }, 205 + { 206 + /* NUC15 LAPBC710 skews */ 207 + .callback = sof_sdw_quirk_cb, 208 + .matches = { 209 + DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"), 210 + DMI_MATCH(DMI_BOARD_NAME, "LAPBC710"), 211 + }, 212 + .driver_data = (void *)(SOF_SDW_TGL_HDMI | 213 + SOF_SDW_PCH_DMIC | 214 + RT711_JD1), 215 + }, 205 216 /* TigerLake-SDCA devices */ 206 217 { 207 218 .callback = sof_sdw_quirk_cb,
+1 -7
sound/soc/intel/skylake/skl.c
··· 689 689 690 690 #endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */ 691 691 692 - static void skl_codec_device_exit(struct device *dev) 693 - { 694 - snd_hdac_device_exit(dev_to_hdac_dev(dev)); 695 - } 696 - 697 692 static struct hda_codec *skl_codec_device_init(struct hdac_bus *bus, int addr) 698 693 { 699 694 struct hda_codec *codec; ··· 701 706 } 702 707 703 708 codec->core.type = HDA_DEV_ASOC; 704 - codec->core.dev.release = skl_codec_device_exit; 705 709 706 710 ret = snd_hdac_device_register(&codec->core); 707 711 if (ret) { 708 712 dev_err(bus->dev, "failed to register hdac device\n"); 709 - snd_hdac_device_exit(&codec->core); 713 + put_device(&codec->core.dev); 710 714 return ERR_PTR(ret); 711 715 } 712 716
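Dropping the local release callback and using put_device() on the registration failure path follows the driver-model rule that an initialized struct device is disposed of through its refcount, with release() doing the freeing. The generic shape of that rule, sketched with a hypothetical my_dev type:

#include <linux/device.h>
#include <linux/slab.h>

struct my_dev {
	struct device dev;
};

static void my_dev_release(struct device *dev)
{
	kfree(container_of(dev, struct my_dev, dev));
}

static struct my_dev *my_dev_create(void)
{
	struct my_dev *md = kzalloc(sizeof(*md), GFP_KERNEL);

	if (!md)
		return NULL;

	device_initialize(&md->dev);
	md->dev.release = my_dev_release;

	if (device_add(&md->dev)) {
		put_device(&md->dev);	/* release() frees md for us */
		return NULL;
	}
	return md;
}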
+1
sound/soc/qcom/Kconfig
··· 187 187 config SND_SOC_SC7180 188 188 tristate "SoC Machine driver for SC7180 boards" 189 189 depends on I2C && GPIOLIB 190 + depends on SOUNDWIRE || SOUNDWIRE=n 190 191 select SND_SOC_QCOM_COMMON 191 192 select SND_SOC_LPASS_SC7180 192 193 select SND_SOC_MAX98357A
+10
sound/soc/qcom/lpass-cpu.c
··· 782 782 return true; 783 783 if (reg == LPASS_HDMI_TX_LEGACY_ADDR(v)) 784 784 return true; 785 + if (reg == LPASS_HDMI_TX_VBIT_CTL_ADDR(v)) 786 + return true; 787 + if (reg == LPASS_HDMI_TX_PARITY_ADDR(v)) 788 + return true; 785 789 786 790 for (i = 0; i < v->hdmi_rdma_channels; ++i) { 787 791 if (reg == LPAIF_HDMI_RDMACURR_REG(v, i)) 792 + return true; 793 + if (reg == LPASS_HDMI_TX_DMA_ADDR(v, i)) 794 + return true; 795 + if (reg == LPASS_HDMI_TX_CH_LSB_ADDR(v, i)) 796 + return true; 797 + if (reg == LPASS_HDMI_TX_CH_MSB_ADDR(v, i)) 788 798 return true; 789 799 } 790 800 return false;
+4 -2
sound/soc/soc-component.c
··· 1213 1213 int i; 1214 1214 1215 1215 for_each_rtd_components(rtd, i, component) { 1216 - int ret = pm_runtime_resume_and_get(component->dev); 1217 - if (ret < 0 && ret != -EACCES) 1216 + int ret = pm_runtime_get_sync(component->dev); 1217 + if (ret < 0 && ret != -EACCES) { 1218 + pm_runtime_put_noidle(component->dev); 1218 1219 return soc_component_ret(component, ret); 1220 + } 1219 1221 /* mark stream if succeeded */ 1220 1222 soc_component_mark_push(component, stream, pm); 1221 1223 }
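The helper swap is about what happens to the usage count on failure: pm_runtime_resume_and_get() drops it when the resume fails, while pm_runtime_get_sync() always leaves it incremented. Because this path wants to carry on when the error is -EACCES, it takes the reference unconditionally and only gives it back, with pm_runtime_put_noidle(), on a real error:

#include <linux/pm_runtime.h>

static int example_get(struct device *dev)
{
	int ret = pm_runtime_get_sync(dev);	/* count is +1 even on error */

	if (ret < 0 && ret != -EACCES) {
		pm_runtime_put_noidle(dev);	/* undo the increment */
		return ret;
	}
	/* success or -EACCES: reference held, a later put must balance it */
	return 0;
}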
+1 -7
sound/soc/sof/intel/hda-codec.c
··· 109 109 #define is_generic_config(x) 0 110 110 #endif 111 111 112 - static void hda_codec_device_exit(struct device *dev) 113 - { 114 - snd_hdac_device_exit(dev_to_hdac_dev(dev)); 115 - } 116 - 117 112 static struct hda_codec *hda_codec_device_init(struct hdac_bus *bus, int addr, int type) 118 113 { 119 114 struct hda_codec *codec; ··· 121 126 } 122 127 123 128 codec->core.type = type; 124 - codec->core.dev.release = hda_codec_device_exit; 125 129 126 130 ret = snd_hdac_device_register(&codec->core); 127 131 if (ret) { 128 132 dev_err(bus->dev, "failed to register hdac device\n"); 129 - snd_hdac_device_exit(&codec->core); 133 + put_device(&codec->core.dev); 130 134 return ERR_PTR(ret); 131 135 } 132 136
+1 -1
sound/soc/sof/intel/pci-mtl.c
··· 38 38 [SOF_INTEL_IPC4] = "intel/sof-ace-tplg", 39 39 }, 40 40 .default_fw_filename = { 41 - [SOF_INTEL_IPC4] = "dsp_basefw.bin", 41 + [SOF_INTEL_IPC4] = "sof-mtl.ri", 42 42 }, 43 43 .nocodec_tplg_filename = "sof-mtl-nocodec.tplg", 44 44 .ops = &sof_mtl_ops,
+29 -1
sound/soc/sof/intel/pci-tgl.c
··· 159 159 .ops_init = sof_tgl_ops_init, 160 160 }; 161 161 162 + static const struct sof_dev_desc adl_n_desc = { 163 + .machines = snd_soc_acpi_intel_adl_machines, 164 + .alt_machines = snd_soc_acpi_intel_adl_sdw_machines, 165 + .use_acpi_target_states = true, 166 + .resindex_lpe_base = 0, 167 + .resindex_pcicfg_base = -1, 168 + .resindex_imr_base = -1, 169 + .irqindex_host_ipc = -1, 170 + .chip_info = &tgl_chip_info, 171 + .ipc_supported_mask = BIT(SOF_IPC) | BIT(SOF_INTEL_IPC4), 172 + .ipc_default = SOF_IPC, 173 + .default_fw_path = { 174 + [SOF_IPC] = "intel/sof", 175 + [SOF_INTEL_IPC4] = "intel/avs/adl-n", 176 + }, 177 + .default_tplg_path = { 178 + [SOF_IPC] = "intel/sof-tplg", 179 + [SOF_INTEL_IPC4] = "intel/avs-tplg", 180 + }, 181 + .default_fw_filename = { 182 + [SOF_IPC] = "sof-adl-n.ri", 183 + [SOF_INTEL_IPC4] = "dsp_basefw.bin", 184 + }, 185 + .nocodec_tplg_filename = "sof-adl-nocodec.tplg", 186 + .ops = &sof_tgl_ops, 187 + .ops_init = sof_tgl_ops_init, 188 + }; 189 + 162 190 static const struct sof_dev_desc rpls_desc = { 163 191 .machines = snd_soc_acpi_intel_rpl_machines, 164 192 .alt_machines = snd_soc_acpi_intel_rpl_sdw_machines, ··· 274 246 { PCI_DEVICE(0x8086, 0x51cf), /* RPL-PX */ 275 247 .driver_data = (unsigned long)&rpl_desc}, 276 248 { PCI_DEVICE(0x8086, 0x54c8), /* ADL-N */ 277 - .driver_data = (unsigned long)&adl_desc}, 249 + .driver_data = (unsigned long)&adl_n_desc}, 278 250 { 0, } 279 251 }; 280 252 MODULE_DEVICE_TABLE(pci, sof_pci_ids);
+18 -2
sound/soc/sof/ipc4-mtrace.c
··· 108 108 int id; 109 109 u32 slot_offset; 110 110 void *log_buffer; 111 + struct mutex buffer_lock; /* for log_buffer alloc/free */ 111 112 u32 host_read_ptr; 112 113 u32 dsp_write_ptr; 113 114 /* pos update IPC arrived before the slot offset is known, queried */ ··· 129 128 struct sof_mtrace_core_data *core_data = inode->i_private; 130 129 int ret; 131 130 131 + mutex_lock(&core_data->buffer_lock); 132 + 133 + if (core_data->log_buffer) { 134 + ret = -EBUSY; 135 + goto out; 136 + } 137 + 132 138 ret = debugfs_file_get(file->f_path.dentry); 133 139 if (unlikely(ret)) 134 - return ret; 140 + goto out; 135 141 136 142 core_data->log_buffer = kmalloc(SOF_MTRACE_SLOT_SIZE, GFP_KERNEL); 137 143 if (!core_data->log_buffer) { 138 144 debugfs_file_put(file->f_path.dentry); 139 - return -ENOMEM; 145 + ret = -ENOMEM; 146 + goto out; 140 147 } 141 148 142 149 ret = simple_open(inode, file); ··· 152 143 kfree(core_data->log_buffer); 153 144 debugfs_file_put(file->f_path.dentry); 154 145 } 146 + 147 + out: 148 + mutex_unlock(&core_data->buffer_lock); 155 149 156 150 return ret; 157 151 } ··· 292 280 293 281 debugfs_file_put(file->f_path.dentry); 294 282 283 + mutex_lock(&core_data->buffer_lock); 295 284 kfree(core_data->log_buffer); 285 + core_data->log_buffer = NULL; 286 + mutex_unlock(&core_data->buffer_lock); 296 287 297 288 return 0; 298 289 } ··· 578 563 struct sof_mtrace_core_data *core_data = &priv->cores[i]; 579 564 580 565 init_waitqueue_head(&core_data->trace_sleep); 566 + mutex_init(&core_data->buffer_lock); 581 567 core_data->sdev = sdev; 582 568 core_data->id = i; 583 569 }
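The new buffer_lock makes the "already allocated?" test and the allocation atomic with respect to a concurrent open or release of the same debugfs file, and a second opener now gets -EBUSY instead of the two racing on log_buffer. The core of the pattern, with stand-in names buf/lock for log_buffer/buffer_lock:

mutex_lock(&lock);
if (buf) {
	ret = -EBUSY;	/* file already open elsewhere */
	goto out;
}
buf = kmalloc(SOF_MTRACE_SLOT_SIZE, GFP_KERNEL);
if (!buf)
	ret = -ENOMEM;
out:
mutex_unlock(&lock);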
+1 -6
sound/synth/emux/emux.c
··· 126 126 */ 127 127 int snd_emux_free(struct snd_emux *emu) 128 128 { 129 - unsigned long flags; 130 - 131 129 if (! emu) 132 130 return -EINVAL; 133 131 134 - spin_lock_irqsave(&emu->voice_lock, flags); 135 - if (emu->timer_active) 136 - del_timer(&emu->tlist); 137 - spin_unlock_irqrestore(&emu->voice_lock, flags); 132 + del_timer_sync(&emu->tlist); 138 133 139 134 snd_emux_proc_free(emu); 140 135 snd_emux_delete_virmidi(emu);
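del_timer() only removes a timer that has not fired yet; del_timer_sync() additionally waits for a handler already running on another CPU to finish. On a teardown path that is about to free the data the handler touches, the _sync variant is required, and it makes the old voice_lock/timer_active dance unnecessary. The general rule, with a hypothetical my_ctx:

#include <linux/timer.h>
#include <linux/slab.h>

struct my_ctx {
	struct timer_list timer;
	/* ... data the timer handler dereferences ... */
};

static void my_ctx_free(struct my_ctx *ctx)
{
	del_timer_sync(&ctx->timer);	/* handler cannot be mid-flight now */
	kfree(ctx);			/* safe: nothing references ctx */
}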
+2
sound/usb/implicit.c
··· 47 47 static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = { 48 48 /* Fixed EP */ 49 49 /* FIXME: check the availability of generic matching */ 50 + IMPLICIT_FB_FIXED_DEV(0x0763, 0x2030, 0x81, 3), /* M-Audio Fast Track C400 */ 51 + IMPLICIT_FB_FIXED_DEV(0x0763, 0x2031, 0x81, 3), /* M-Audio Fast Track C600 */ 50 52 IMPLICIT_FB_FIXED_DEV(0x0763, 0x2080, 0x81, 2), /* M-Audio FastTrack Ultra */ 51 53 IMPLICIT_FB_FIXED_DEV(0x0763, 0x2081, 0x81, 2), /* M-Audio FastTrack Ultra */ 52 54 IMPLICIT_FB_FIXED_DEV(0x2466, 0x8010, 0x81, 2), /* Fractal Audio Axe-Fx III */
+1 -1
sound/usb/mixer.c
··· 1631 1631 if (!found) 1632 1632 return; 1633 1633 1634 - strscpy(kctl->id.name, "Headphone", sizeof(kctl->id.name)); 1634 + snd_ctl_rename(card, kctl, "Headphone"); 1635 1635 } 1636 1636 1637 1637 static const struct usb_feature_control_info *get_feature_control_info(int control)
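Writing kctl->id.name directly bypasses the ALSA core, which also keeps internal fast-lookup state for control names; snd_ctl_rename() updates the name and that state together under the card's locking. Usage is just:

/* rename through the core so control lookups stay consistent */
snd_ctl_rename(card, kctl, "Headphone");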
+4
tools/iio/iio_utils.c
··· 547 547 { 548 548 int count = 0; 549 549 550 + /* It takes a digit to represent zero */ 551 + if (!num) 552 + return 1; 553 + 550 554 while (num != 0) { 551 555 num /= 10; 552 556 count++;
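Without the early return, the while loop body never runs for num == 0 and the helper reports a width of zero, even though printing "0" takes one character. Reconstructed as a whole (the function name here is assumed; the body matches the hunk):

static int calc_digits(int num)
{
	int count = 0;

	if (!num)	/* "0" still occupies one digit */
		return 1;

	while (num != 0) {
		num /= 10;
		count++;
	}
	return count;
}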
+10 -7
tools/include/nolibc/string.h
··· 19 19 int memcmp(const void *s1, const void *s2, size_t n) 20 20 { 21 21 size_t ofs = 0; 22 - char c1 = 0; 22 + int c1 = 0; 23 23 24 - while (ofs < n && !(c1 = ((char *)s1)[ofs] - ((char *)s2)[ofs])) { 24 + while (ofs < n && !(c1 = ((unsigned char *)s1)[ofs] - ((unsigned char *)s2)[ofs])) { 25 25 ofs++; 26 26 } 27 27 return c1; ··· 125 125 } 126 126 127 127 /* this function is only used with arguments that are not constants or when 128 - * it's not known because optimizations are disabled. 128 + * it's not known because optimizations are disabled. Note that gcc 12 129 + * recognizes an strlen() pattern and replaces it with a jump to strlen(), 130 + * thus itself, hence the asm() statement below that's meant to disable this 131 + * confusing practice. 129 132 */ 130 133 static __attribute__((unused)) 131 - size_t nolibc_strlen(const char *str) 134 + size_t strlen(const char *str) 132 135 { 133 136 size_t len; 134 137 135 - for (len = 0; str[len]; len++); 138 + for (len = 0; str[len]; len++) 139 + asm(""); 136 140 return len; 137 141 } 138 142 ··· 144 140 * the two branches, then will rely on an external definition of strlen(). 145 141 */ 146 142 #if defined(__OPTIMIZE__) 143 + #define nolibc_strlen(x) strlen(x) 147 144 #define strlen(str) ({ \ 148 145 __builtin_constant_p((str)) ? \ 149 146 __builtin_strlen((str)) : \ 150 147 nolibc_strlen((str)); \ 151 148 }) 152 - #else 153 - #define strlen(str) nolibc_strlen((str)) 154 149 #endif 155 150 156 151 static __attribute__((unused))
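The memcmp() change matters for bytes with the top bit set: the C standard requires the comparison to be performed on unsigned char values, and the running difference must be kept in an int so it cannot be truncated back into a char. Concretely:

/* With plain (signed) char on most ABIs, 0xc0 vs 0x40 computes
 * (-64) - 64 = -128, i.e. "less than", although 0xc0 > 0x40.
 * As unsigned char the difference is 192 - 64 = 128 > 0. */
int wrong = (char)0xc0 - (char)0x40;			/* typically -128 */
int right = (unsigned char)0xc0 - (unsigned char)0x40;	/* 128 */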
+6 -6
tools/power/pm-graph/README
··· 6 6 |_| |___/ |_| 7 7 8 8 pm-graph: suspend/resume/boot timing analysis tools 9 - Version: 5.9 9 + Version: 5.10 10 10 Author: Todd Brandt <todd.e.brandt@intel.com> 11 - Home Page: https://01.org/pm-graph 11 + Home Page: https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/overview.html 12 12 13 13 Report bugs/issues at bugzilla.kernel.org Tools/pm-graph 14 14 - https://bugzilla.kernel.org/buglist.cgi?component=pm-graph&product=Tools 15 15 16 16 Full documentation available online & in man pages 17 17 - Getting Started: 18 - https://01.org/pm-graph/documentation/getting-started 18 + https://www.intel.com/content/www/us/en/developer/articles/technical/usage.html 19 19 20 - - Config File Format: 21 - https://01.org/pm-graph/documentation/3-config-file-format 20 + - Feature Summary: 21 + https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/features.html 22 22 23 23 - upstream version in git: 24 - https://github.com/intel/pm-graph/ 24 + git clone https://github.com/intel/pm-graph/ 25 25 26 26 Table of Contents 27 27 - Overview
+3
tools/power/pm-graph/sleepgraph.8
··· 78 78 If a wifi connection is available, check that it reconnects after resume. Include 79 79 the reconnect time in the total resume time calculation and treat wifi timeouts 80 80 as resume failures. 81 + .TP 82 + \fB-wifitrace\fR 83 + Trace through the wifi reconnect time and include it in the timeline. 81 84 82 85 .SS "advanced" 83 86 .TP
+109 -116
tools/power/pm-graph/sleepgraph.py
··· 86 86 # store system values and test parameters 87 87 class SystemValues: 88 88 title = 'SleepGraph' 89 - version = '5.9' 89 + version = '5.10' 90 90 ansi = False 91 91 rs = 0 92 92 display = '' ··· 100 100 ftracelog = False 101 101 acpidebug = True 102 102 tstat = True 103 + wifitrace = False 103 104 mindevlen = 0.0001 104 105 mincglen = 0.0 105 106 cgphase = '' ··· 125 124 epath = '/sys/kernel/debug/tracing/events/power/' 126 125 pmdpath = '/sys/power/pm_debug_messages' 127 126 s0ixpath = '/sys/module/intel_pmc_core/parameters/warn_on_s0ix_failures' 127 + s0ixres = '/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us' 128 128 acpipath='/sys/module/acpi/parameters/debug_level' 129 129 traceevents = [ 130 130 'suspend_resume', ··· 182 180 tmstart = 'SUSPEND START %Y%m%d-%H:%M:%S.%f' 183 181 tmend = 'RESUME COMPLETE %Y%m%d-%H:%M:%S.%f' 184 182 tracefuncs = { 183 + 'async_synchronize_full': {}, 185 184 'sys_sync': {}, 186 185 'ksys_sync': {}, 187 186 '__pm_notifier_call_chain': {}, ··· 307 304 [2, 'suspendstats', 'sh', '-c', 'grep -v invalid /sys/power/suspend_stats/*'], 308 305 [2, 'cpuidle', 'sh', '-c', 'grep -v invalid /sys/devices/system/cpu/cpu*/cpuidle/state*/s2idle/*'], 309 306 [2, 'battery', 'sh', '-c', 'grep -v invalid /sys/class/power_supply/*/*'], 307 + [2, 'thermal', 'sh', '-c', 'grep . /sys/class/thermal/thermal_zone*/temp'], 310 308 ] 311 309 cgblacklist = [] 312 310 kprobes = dict() ··· 781 777 return 782 778 if not quiet: 783 779 sysvals.printSystemInfo(False) 784 - pprint('INITIALIZING FTRACE...') 780 + pprint('INITIALIZING FTRACE') 785 781 # turn trace off 786 782 self.fsetVal('0', 'tracing_on') 787 783 self.cleanupFtrace() ··· 845 841 for name in self.dev_tracefuncs: 846 842 self.defaultKprobe(name, self.dev_tracefuncs[name]) 847 843 if not quiet: 848 - pprint('INITIALIZING KPROBES...') 844 + pprint('INITIALIZING KPROBES') 849 845 self.addKprobes(self.verbose) 850 846 if(self.usetraceevents): 851 847 # turn trace events on ··· 1137 1133 self.cfgdef[file] = fp.read().strip() 1138 1134 fp.write(value) 1139 1135 fp.close() 1136 + def s0ixSupport(self): 1137 + if not os.path.exists(self.s0ixres) or not os.path.exists(self.mempowerfile): 1138 + return False 1139 + fp = open(sysvals.mempowerfile, 'r') 1140 + data = fp.read().strip() 1141 + fp.close() 1142 + if '[s2idle]' in data: 1143 + return True 1144 + return False 1140 1145 def haveTurbostat(self): 1141 1146 if not self.tstat: 1142 1147 return False ··· 1159 1146 self.vprint(out) 1160 1147 return True 1161 1148 return False 1162 - def turbostat(self): 1149 + def turbostat(self, s0ixready): 1163 1150 cmd = self.getExec('turbostat') 1164 1151 rawout = keyline = valline = '' 1165 1152 fullcmd = '%s -q -S echo freeze > %s' % (cmd, self.powerfile) ··· 1186 1173 for key in keyline: 1187 1174 idx = keyline.index(key) 1188 1175 val = valline[idx] 1176 + if key == 'SYS%LPI' and not s0ixready and re.match('^[0\.]*$', val): 1177 + continue 1189 1178 out.append('%s=%s' % (key, val)) 1190 1179 return '|'.join(out) 1191 1180 def netfixon(self, net='both'): ··· 1198 1183 out = ascii(fp.read()).strip() 1199 1184 fp.close() 1200 1185 return out 1201 - def wifiRepair(self): 1202 - out = self.netfixon('wifi') 1203 - if not out or 'error' in out.lower(): 1204 - return '' 1205 - m = re.match('WIFI \S* ONLINE (?P<action>\S*)', out) 1206 - if not m: 1207 - return 'dead' 1208 - return m.group('action') 1209 1186 def wifiDetails(self, dev): 1210 1187 try: 1211 1188 info = open('/sys/class/net/%s/device/uevent' % dev, 
'r').read().strip() ··· 1227 1220 return '%s reconnected %.2f' % \ 1228 1221 (self.wifiDetails(dev), max(0, time.time() - start)) 1229 1222 time.sleep(0.01) 1230 - if self.netfix: 1231 - res = self.wifiRepair() 1232 - if res: 1233 - timeout = max(0, time.time() - start) 1234 - return '%s %s %d' % (self.wifiDetails(dev), res, timeout) 1235 1223 return '%s timeout %d' % (self.wifiDetails(dev), timeout) 1236 1224 def errorSummary(self, errinfo, msg): 1237 1225 found = False ··· 1348 1346 for i in self.rslist: 1349 1347 self.setVal(self.rstgt, i) 1350 1348 pprint('runtime suspend settings restored on %d devices' % len(self.rslist)) 1349 + def start(self, pm): 1350 + if self.useftrace: 1351 + self.dlog('start ftrace tracing') 1352 + self.fsetVal('1', 'tracing_on') 1353 + if self.useprocmon: 1354 + self.dlog('start the process monitor') 1355 + pm.start() 1356 + def stop(self, pm): 1357 + if self.useftrace: 1358 + if self.useprocmon: 1359 + self.dlog('stop the process monitor') 1360 + pm.stop() 1361 + self.dlog('stop ftrace tracing') 1362 + self.fsetVal('0', 'tracing_on') 1351 1363 1352 1364 sysvals = SystemValues() 1353 1365 switchvalues = ['enable', 'disable', 'on', 'off', 'true', 'false', '1', '0'] ··· 1659 1643 ubiquitous = False 1660 1644 if kprobename in dtf and 'ub' in dtf[kprobename]: 1661 1645 ubiquitous = True 1662 - title = cdata+' '+rdata 1663 - mstr = '\(.*\) *(?P<args>.*) *\((?P<caller>.*)\+.* arg1=(?P<ret>.*)' 1664 - m = re.match(mstr, title) 1665 - if m: 1666 - c = m.group('caller') 1667 - a = m.group('args').strip() 1668 - r = m.group('ret') 1646 + mc = re.match('\(.*\) *(?P<args>.*)', cdata) 1647 + mr = re.match('\((?P<caller>\S*).* arg1=(?P<ret>.*)', rdata) 1648 + if mc and mr: 1649 + c = mr.group('caller').split('+')[0] 1650 + a = mc.group('args').strip() 1651 + r = mr.group('ret') 1669 1652 if len(r) > 6: 1670 1653 r = '' 1671 1654 else: 1672 1655 r = 'ret=%s ' % r 1673 1656 if ubiquitous and c in dtf and 'ub' in dtf[c]: 1674 1657 return False 1658 + else: 1659 + return False 1675 1660 color = sysvals.kprobeColor(kprobename) 1676 1661 e = DevFunction(displayname, a, c, r, start, end, ubiquitous, proc, pid, color) 1677 1662 tgtdev['src'].append(e) ··· 1789 1772 e.time = self.trimTimeVal(e.time, t0, dT, left) 1790 1773 e.end = self.trimTimeVal(e.end, t0, dT, left) 1791 1774 e.length = e.end - e.time 1775 + if('cpuexec' in d): 1776 + cpuexec = dict() 1777 + for e in d['cpuexec']: 1778 + c0, cN = e 1779 + c0 = self.trimTimeVal(c0, t0, dT, left) 1780 + cN = self.trimTimeVal(cN, t0, dT, left) 1781 + cpuexec[(c0, cN)] = d['cpuexec'][e] 1782 + d['cpuexec'] = cpuexec 1792 1783 for dir in ['suspend', 'resume']: 1793 1784 list = [] 1794 1785 for e in self.errorinfo[dir]: ··· 2111 2086 return d 2112 2087 def addProcessUsageEvent(self, name, times): 2113 2088 # get the start and end times for this process 2114 - maxC = 0 2115 - tlast = 0 2116 - start = -1 2117 - end = -1 2089 + cpuexec = dict() 2090 + tlast = start = end = -1 2118 2091 for t in sorted(times): 2119 - if tlast == 0: 2092 + if tlast < 0: 2120 2093 tlast = t 2121 2094 continue 2122 - if name in self.pstl[t]: 2123 - if start == -1 or tlast < start: 2095 + if name in self.pstl[t] and self.pstl[t][name] > 0: 2096 + if start < 0: 2124 2097 start = tlast 2125 - if end == -1 or t > end: 2126 - end = t 2098 + end, key = t, (tlast, t) 2099 + maxj = (t - tlast) * 1024.0 2100 + cpuexec[key] = min(1.0, float(self.pstl[t][name]) / maxj) 2127 2101 tlast = t 2128 - if start == -1 or end == -1: 2129 - return 0 2102 + if start < 0 or end < 
0: 2103 + return 2130 2104 # add a new action for this process and get the object 2131 2105 out = self.newActionGlobal(name, start, end, -3) 2132 - if not out: 2133 - return 0 2134 - phase, devname = out 2135 - dev = self.dmesg[phase]['list'][devname] 2136 - # get the cpu exec data 2137 - tlast = 0 2138 - clast = 0 2139 - cpuexec = dict() 2140 - for t in sorted(times): 2141 - if tlast == 0 or t <= start or t > end: 2142 - tlast = t 2143 - continue 2144 - list = self.pstl[t] 2145 - c = 0 2146 - if name in list: 2147 - c = list[name] 2148 - if c > maxC: 2149 - maxC = c 2150 - if c != clast: 2151 - key = (tlast, t) 2152 - cpuexec[key] = c 2153 - tlast = t 2154 - clast = c 2155 - dev['cpuexec'] = cpuexec 2156 - return maxC 2106 + if out: 2107 + phase, devname = out 2108 + dev = self.dmesg[phase]['list'][devname] 2109 + dev['cpuexec'] = cpuexec 2157 2110 def createProcessUsageEvents(self): 2158 - # get an array of process names 2159 - proclist = [] 2111 + # get an array of process names and times 2112 + proclist = {'sus': dict(), 'res': dict()} 2113 + tdata = {'sus': [], 'res': []} 2160 2114 for t in sorted(self.pstl): 2161 - pslist = self.pstl[t] 2162 - for ps in sorted(pslist): 2163 - if ps not in proclist: 2164 - proclist.append(ps) 2165 - # get a list of data points for suspend and resume 2166 - tsus = [] 2167 - tres = [] 2168 - for t in sorted(self.pstl): 2169 - if t < self.tSuspended: 2170 - tsus.append(t) 2171 - else: 2172 - tres.append(t) 2115 + dir = 'sus' if t < self.tSuspended else 'res' 2116 + for ps in sorted(self.pstl[t]): 2117 + if ps not in proclist[dir]: 2118 + proclist[dir][ps] = 0 2119 + tdata[dir].append(t) 2173 2120 # process the events for suspend and resume 2174 - if len(proclist) > 0: 2121 + if len(proclist['sus']) > 0 or len(proclist['res']) > 0: 2175 2122 sysvals.vprint('Process Execution:') 2176 - for ps in proclist: 2177 - c = self.addProcessUsageEvent(ps, tsus) 2178 - if c > 0: 2179 - sysvals.vprint('%25s (sus): %d' % (ps, c)) 2180 - c = self.addProcessUsageEvent(ps, tres) 2181 - if c > 0: 2182 - sysvals.vprint('%25s (res): %d' % (ps, c)) 2123 + for dir in ['sus', 'res']: 2124 + for ps in sorted(proclist[dir]): 2125 + self.addProcessUsageEvent(ps, tdata[dir]) 2183 2126 def handleEndMarker(self, time, msg=''): 2184 2127 dm = self.dmesg 2185 2128 self.setEnd(time, msg) ··· 3211 3218 # markers, and/or kprobes required for primary parsing. 
3212 3219 def doesTraceLogHaveTraceEvents(): 3213 3220 kpcheck = ['_cal: (', '_ret: ('] 3214 - techeck = ['suspend_resume', 'device_pm_callback'] 3221 + techeck = ['suspend_resume', 'device_pm_callback', 'tracing_mark_write'] 3215 3222 tmcheck = ['SUSPEND START', 'RESUME COMPLETE'] 3216 3223 sysvals.usekprobes = False 3217 3224 fp = sysvals.openlog(sysvals.ftracefile, 'r') ··· 3234 3241 check.remove(i) 3235 3242 tmcheck = check 3236 3243 fp.close() 3237 - sysvals.usetraceevents = True if len(techeck) < 2 else False 3244 + sysvals.usetraceevents = True if len(techeck) < 3 else False 3238 3245 sysvals.usetracemarkers = True if len(tmcheck) == 0 else False 3239 3246 3240 3247 # Function: appendIncompleteTraceLog ··· 3449 3456 continue 3450 3457 # process cpu exec line 3451 3458 if t.type == 'tracing_mark_write': 3459 + if t.name == 'CMD COMPLETE' and data.tKernRes == 0: 3460 + data.tKernRes = t.time 3452 3461 m = re.match(tp.procexecfmt, t.name) 3453 3462 if(m): 3454 3463 parts, msg = 1, m.group('ps') ··· 3668 3673 continue 3669 3674 e = next((x for x in reversed(tp.ktemp[key]) if x['end'] < 0), 0) 3670 3675 if not e: 3676 + continue 3677 + if (t.time - e['begin']) * 1000 < sysvals.mindevlen: 3678 + tp.ktemp[key].pop() 3671 3679 continue 3672 3680 e['end'] = t.time 3673 3681 e['rdata'] = kprobedata ··· 4211 4213 fmt = '<n>(%.3f ms @ '+sv.timeformat+')</n>' 4212 4214 flen = fmt % (line.length*1000, line.time) 4213 4215 if line.isLeaf(): 4216 + if line.length * 1000 < sv.mincglen: 4217 + continue 4214 4218 hf.write(html_func_leaf.format(line.name, flen)) 4215 4219 elif line.freturn: 4216 4220 hf.write(html_func_end) ··· 4827 4827 if('cpuexec' in dev): 4828 4828 for t in sorted(dev['cpuexec']): 4829 4829 start, end = t 4830 - j = float(dev['cpuexec'][t]) / 5 4831 - if j > 1.0: 4832 - j = 1.0 4833 4830 height = '%.3f' % (rowheight/3) 4834 4831 top = '%.3f' % (rowtop + devtl.scaleH + 2*rowheight/3) 4835 4832 left = '%f' % (((start-m0)*100)/mTotal) 4836 4833 width = '%f' % ((end-start)*100/mTotal) 4837 - color = 'rgba(255, 0, 0, %f)' % j 4834 + color = 'rgba(255, 0, 0, %f)' % dev['cpuexec'][t] 4838 4835 devtl.html += \ 4839 4836 html_cpuexec.format(left, top, height, width, color) 4840 4837 if('src' not in dev): ··· 5450 5453 call('sync', shell=True) 5451 5454 sv.dlog('read dmesg') 5452 5455 sv.initdmesg() 5453 - # start ftrace 5454 - if sv.useftrace: 5455 - if not quiet: 5456 - pprint('START TRACING') 5457 - sv.dlog('start ftrace tracing') 5458 - sv.fsetVal('1', 'tracing_on') 5459 - if sv.useprocmon: 5460 - sv.dlog('start the process monitor') 5461 - pm.start() 5462 - sv.dlog('run the cmdinfo list before') 5456 + sv.dlog('cmdinfo before') 5463 5457 sv.cmdinfo(True) 5458 + sv.start(pm) 5464 5459 # execute however many s/r runs requested 5465 5460 for count in range(1,sv.execcount+1): 5466 5461 # x2delay in between test runs ··· 5489 5500 if res != 0: 5490 5501 tdata['error'] = 'cmd returned %d' % res 5491 5502 else: 5503 + s0ixready = sv.s0ixSupport() 5492 5504 mode = sv.suspendmode 5493 5505 if sv.memmode and os.path.exists(sv.mempowerfile): 5494 5506 mode = 'mem' ··· 5499 5509 sv.testVal(sv.diskpowerfile, 'radio', sv.diskmode) 5500 5510 if sv.acpidebug: 5501 5511 sv.testVal(sv.acpipath, 'acpi', '0xe') 5502 - if mode == 'freeze' and sv.haveTurbostat(): 5512 + if ((mode == 'freeze') or (sv.memmode == 's2idle')) \ 5513 + and sv.haveTurbostat(): 5503 5514 # execution will pause here 5504 - turbo = sv.turbostat() 5515 + turbo = sv.turbostat(s0ixready) 5505 5516 if turbo: 5506 5517 tdata['turbo'] = 
turbo 5507 5518 else: ··· 5513 5522 pf.close() 5514 5523 except Exception as e: 5515 5524 tdata['error'] = str(e) 5516 - sv.dlog('system returned from resume') 5525 + sv.fsetVal('CMD COMPLETE', 'trace_marker') 5526 + sv.dlog('system returned') 5517 5527 # reset everything 5518 5528 sv.testVal('restoreall') 5519 5529 if(sv.rtcwake): ··· 5527 5535 sv.fsetVal('WAIT END', 'trace_marker') 5528 5536 # return from suspend 5529 5537 pprint('RESUME COMPLETE') 5530 - sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker') 5538 + if(count < sv.execcount): 5539 + sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker') 5540 + elif(not sv.wifitrace): 5541 + sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker') 5542 + sv.stop(pm) 5531 5543 if sv.wifi and wifi: 5532 5544 tdata['wifi'] = sv.pollWifi(wifi) 5533 5545 sv.dlog('wifi check, %s' % tdata['wifi']) 5534 - if sv.netfix: 5535 - netfixout = sv.netfixon('wired') 5536 - elif sv.netfix: 5537 - netfixout = sv.netfixon() 5538 - if sv.netfix and netfixout: 5539 - tdata['netfix'] = netfixout 5546 + if(count == sv.execcount and sv.wifitrace): 5547 + sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker') 5548 + sv.stop(pm) 5549 + if sv.netfix: 5550 + tdata['netfix'] = sv.netfixon() 5540 5551 sv.dlog('netfix, %s' % tdata['netfix']) 5541 5552 if(sv.suspendmode == 'mem' or sv.suspendmode == 'command'): 5542 5553 sv.dlog('read the ACPI FPDT') 5543 5554 tdata['fw'] = getFPDT(False) 5544 5555 testdata.append(tdata) 5545 - sv.dlog('run the cmdinfo list after') 5556 + sv.dlog('cmdinfo after') 5546 5557 cmdafter = sv.cmdinfo(False) 5547 - # stop ftrace 5548 - if sv.useftrace: 5549 - if sv.useprocmon: 5550 - sv.dlog('stop the process monitor') 5551 - pm.stop() 5552 - sv.fsetVal('0', 'tracing_on') 5553 5558 # grab a copy of the dmesg output 5554 5559 if not quiet: 5555 5560 pprint('CAPTURING DMESG') 5556 - sysvals.dlog('EXECUTION TRACE END') 5557 5561 sv.getdmesg(testdata) 5558 5562 # grab a copy of the ftrace output 5559 5563 if sv.useftrace: ··· 6338 6350 if not m: 6339 6351 continue 6340 6352 name, time, phase = m.group('n'), m.group('t'), m.group('p') 6353 + if name == 'async_synchronize_full': 6354 + continue 6341 6355 if ' async' in name or ' sync' in name: 6342 6356 name = ' '.join(name.split(' ')[:-1]) 6343 6357 if phase.startswith('suspend'): ··· 6691 6701 ' -skiphtml Run the test and capture the trace logs, but skip the timeline (default: disabled)\n'\ 6692 6702 ' -result fn Export a results table to a text file for parsing.\n'\ 6693 6703 ' -wifi If a wifi connection is available, check that it reconnects after resume.\n'\ 6704 + ' -wifitrace Trace kernel execution through wifi reconnect.\n'\ 6694 6705 ' -netfix Use netfix to reset the network in the event it fails to resume.\n'\ 6695 6706 ' [testprep]\n'\ 6696 6707 ' -sync Sync the filesystems before starting the test\n'\ ··· 6819 6828 sysvals.sync = True 6820 6829 elif(arg == '-wifi'): 6821 6830 sysvals.wifi = True 6831 + elif(arg == '-wifitrace'): 6832 + sysvals.wifitrace = True 6822 6833 elif(arg == '-netfix'): 6823 6834 sysvals.netfix = True 6824 6835 elif(arg == '-gzip'):
+141 -1
tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
··· 15 15 #include <time.h> 16 16 #include <sched.h> 17 17 #include <signal.h> 18 + #include <pthread.h> 18 19 19 20 #include <sys/eventfd.h> 21 + 22 + /* Defined in include/linux/kvm_types.h */ 23 + #define GPA_INVALID (~(ulong)0) 20 24 21 25 #define SHINFO_REGION_GVA 0xc0000000ULL 22 26 #define SHINFO_REGION_GPA 0xc0000000ULL ··· 47 43 #define XEN_HYPERCALL_MSR 0x40000000 48 44 49 45 #define MIN_STEAL_TIME 50000 46 + 47 + #define SHINFO_RACE_TIMEOUT 2 /* seconds */ 50 48 51 49 #define __HYPERVISOR_set_timer_op 15 52 50 #define __HYPERVISOR_sched_op 29 ··· 132 126 struct kvm_irq_routing_entry entries[2]; 133 127 } irq_routes; 134 128 135 - bool guest_saw_irq; 129 + static volatile bool guest_saw_irq; 136 130 137 131 static void evtchn_handler(struct ex_regs *regs) 138 132 { ··· 154 148 static void guest_code(void) 155 149 { 156 150 struct vcpu_runstate_info *rs = (void *)RUNSTATE_VADDR; 151 + int i; 157 152 158 153 __asm__ __volatile__( 159 154 "sti\n" ··· 332 325 guest_wait_for_irq(); 333 326 334 327 GUEST_SYNC(21); 328 + /* Racing host ioctls */ 329 + 330 + guest_wait_for_irq(); 331 + 332 + GUEST_SYNC(22); 333 + /* Racing vmcall against host ioctl */ 334 + 335 + ports[0] = 0; 336 + 337 + p = (struct sched_poll) { 338 + .ports = ports, 339 + .nr_ports = 1, 340 + .timeout = 0 341 + }; 342 + 343 + wait_for_timer: 344 + /* 345 + * Poll for a timer wake event while the worker thread is mucking with 346 + * the shared info. KVM XEN drops timer IRQs if the shared info is 347 + * invalid when the timer expires. Arbitrarily poll 100 times before 348 + * giving up and asking the VMM to re-arm the timer. 100 polls should 349 + * consume enough time to beat on KVM without taking too long if the 350 + * timer IRQ is dropped due to an invalid event channel. 351 + */ 352 + for (i = 0; i < 100 && !guest_saw_irq; i++) 353 + asm volatile("vmcall" 354 + : "=a" (rax) 355 + : "a" (__HYPERVISOR_sched_op), 356 + "D" (SCHEDOP_poll), 357 + "S" (&p) 358 + : "memory"); 359 + 360 + /* 361 + * Re-send the timer IRQ if it was (likely) dropped due to the timer 362 + * expiring while the event channel was invalid. 
363 + */ 364 + if (!guest_saw_irq) { 365 + GUEST_SYNC(23); 366 + goto wait_for_timer; 367 + } 368 + guest_saw_irq = false; 369 + 370 + GUEST_SYNC(24); 335 371 } 336 372 337 373 static int cmp_timespec(struct timespec *a, struct timespec *b) ··· 402 352 TEST_FAIL("IRQ delivery timed out"); 403 353 } 404 354 355 + static void *juggle_shinfo_state(void *arg) 356 + { 357 + struct kvm_vm *vm = (struct kvm_vm *)arg; 358 + 359 + struct kvm_xen_hvm_attr cache_init = { 360 + .type = KVM_XEN_ATTR_TYPE_SHARED_INFO, 361 + .u.shared_info.gfn = SHINFO_REGION_GPA / PAGE_SIZE 362 + }; 363 + 364 + struct kvm_xen_hvm_attr cache_destroy = { 365 + .type = KVM_XEN_ATTR_TYPE_SHARED_INFO, 366 + .u.shared_info.gfn = GPA_INVALID 367 + }; 368 + 369 + for (;;) { 370 + __vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_init); 371 + __vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &cache_destroy); 372 + pthread_testcancel(); 373 + }; 374 + 375 + return NULL; 376 + } 377 + 405 378 int main(int argc, char *argv[]) 406 379 { 407 380 struct timespec min_ts, max_ts, vm_ts; 408 381 struct kvm_vm *vm; 382 + pthread_t thread; 409 383 bool verbose; 384 + int ret; 410 385 411 386 verbose = argc > 1 && (!strncmp(argv[1], "-v", 3) || 412 387 !strncmp(argv[1], "--verbose", 10)); ··· 860 785 case 21: 861 786 TEST_ASSERT(!evtchn_irq_expected, 862 787 "Expected event channel IRQ but it didn't happen"); 788 + alarm(0); 789 + 790 + if (verbose) 791 + printf("Testing shinfo lock corruption (KVM_XEN_HVM_EVTCHN_SEND)\n"); 792 + 793 + ret = pthread_create(&thread, NULL, &juggle_shinfo_state, (void *)vm); 794 + TEST_ASSERT(ret == 0, "pthread_create() failed: %s", strerror(ret)); 795 + 796 + struct kvm_irq_routing_xen_evtchn uxe = { 797 + .port = 1, 798 + .vcpu = vcpu->id, 799 + .priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL 800 + }; 801 + 802 + evtchn_irq_expected = true; 803 + for (time_t t = time(NULL) + SHINFO_RACE_TIMEOUT; time(NULL) < t;) 804 + __vm_ioctl(vm, KVM_XEN_HVM_EVTCHN_SEND, &uxe); 805 + break; 806 + 807 + case 22: 808 + TEST_ASSERT(!evtchn_irq_expected, 809 + "Expected event channel IRQ but it didn't happen"); 810 + 811 + if (verbose) 812 + printf("Testing shinfo lock corruption (SCHEDOP_poll)\n"); 813 + 814 + shinfo->evtchn_pending[0] = 1; 815 + 816 + evtchn_irq_expected = true; 817 + tmr.u.timer.expires_ns = rs->state_entry_time + 818 + SHINFO_RACE_TIMEOUT * 1000000000ULL; 819 + vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr); 820 + break; 821 + 822 + case 23: 823 + /* 824 + * Optional and possibly repeated sync point. 825 + * Injecting the timer IRQ may fail if the 826 + * shinfo is invalid when the timer expires. 827 + * If the timer has expired but the IRQ hasn't 828 + * been delivered, rearm the timer and retry. 829 + */ 830 + vcpu_ioctl(vcpu, KVM_XEN_VCPU_GET_ATTR, &tmr); 831 + 832 + /* Resume the guest if the timer is still pending. */ 833 + if (tmr.u.timer.expires_ns) 834 + break; 835 + 836 + /* All done if the IRQ was delivered. */ 837 + if (!evtchn_irq_expected) 838 + break; 839 + 840 + tmr.u.timer.expires_ns = rs->state_entry_time + 841 + SHINFO_RACE_TIMEOUT * 1000000000ULL; 842 + vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &tmr); 843 + break; 844 + case 24: 845 + TEST_ASSERT(!evtchn_irq_expected, 846 + "Expected event channel IRQ but it didn't happen"); 847 + 848 + ret = pthread_cancel(thread); 849 + TEST_ASSERT(ret == 0, "pthread_cancel() failed: %s", strerror(ret)); 850 + 851 + ret = pthread_join(thread, 0); 852 + TEST_ASSERT(ret == 0, "pthread_join() failed: %s", strerror(ret)); 863 853 goto done; 864 854 865 855 case 0x20:
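juggle_shinfo_state() leans on pthread cancellation points: the worker loops over the two KVM_XEN_HVM_SET_ATTR ioctls forever and inserts an explicit pthread_testcancel(), so the main thread's pthread_cancel()/pthread_join() pair stops it at a well-defined boundary rather than mid-ioctl. Stripped to the pattern (do_work() is a stand-in for the ioctl pair):

#include <pthread.h>

static void *worker(void *arg)
{
	for (;;) {
		do_work(arg);		/* the racy operation under test */
		pthread_testcancel();	/* sole cancellation point in the loop */
	}
	return NULL;
}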
+6 -7
virt/kvm/kvm_main.c
··· 5409 5409 int (*get)(void *, u64 *), int (*set)(void *, u64), 5410 5410 const char *fmt) 5411 5411 { 5412 + int ret; 5412 5413 struct kvm_stat_data *stat_data = (struct kvm_stat_data *) 5413 5414 inode->i_private; 5414 5415 ··· 5421 5420 if (!kvm_get_kvm_safe(stat_data->kvm)) 5422 5421 return -ENOENT; 5423 5422 5424 - if (simple_attr_open(inode, file, get, 5425 - kvm_stats_debugfs_mode(stat_data->desc) & 0222 5426 - ? set : NULL, 5427 - fmt)) { 5423 + ret = simple_attr_open(inode, file, get, 5424 + kvm_stats_debugfs_mode(stat_data->desc) & 0222 5425 + ? set : NULL, fmt); 5426 + if (ret) 5428 5427 kvm_put_kvm(stat_data->kvm); 5429 - return -ENOMEM; 5430 - } 5431 5428 5432 - return 0; 5429 + return ret; 5433 5430 } 5434 5431 5435 5432 static int kvm_debugfs_release(struct inode *inode, struct file *file)
+46 -16
virt/kvm/pfncache.c
··· 81 81 { 82 82 struct kvm_memslots *slots = kvm_memslots(kvm); 83 83 84 + if (!gpc->active) 85 + return false; 86 + 84 87 if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE) 85 88 return false; 86 89 ··· 243 240 { 244 241 struct kvm_memslots *slots = kvm_memslots(kvm); 245 242 unsigned long page_offset = gpa & ~PAGE_MASK; 246 - kvm_pfn_t old_pfn, new_pfn; 243 + bool unmap_old = false; 247 244 unsigned long old_uhva; 245 + kvm_pfn_t old_pfn; 248 246 void *old_khva; 249 - int ret = 0; 247 + int ret; 250 248 251 249 /* 252 250 * If must fit within a single page. The 'len' argument is ··· 264 260 mutex_lock(&gpc->refresh_lock); 265 261 266 262 write_lock_irq(&gpc->lock); 263 + 264 + if (!gpc->active) { 265 + ret = -EINVAL; 266 + goto out_unlock; 267 + } 267 268 268 269 old_pfn = gpc->pfn; 269 270 old_khva = gpc->khva - offset_in_page(gpc->khva); ··· 300 291 /* If the HVA→PFN mapping was already valid, don't unmap it. */ 301 292 old_pfn = KVM_PFN_ERR_FAULT; 302 293 old_khva = NULL; 294 + ret = 0; 303 295 } 304 296 305 297 out: ··· 315 305 gpc->khva = NULL; 316 306 } 317 307 318 - /* Snapshot the new pfn before dropping the lock! */ 319 - new_pfn = gpc->pfn; 308 + /* Detect a pfn change before dropping the lock! */ 309 + unmap_old = (old_pfn != gpc->pfn); 320 310 311 + out_unlock: 321 312 write_unlock_irq(&gpc->lock); 322 313 323 314 mutex_unlock(&gpc->refresh_lock); 324 315 325 - if (old_pfn != new_pfn) 316 + if (unmap_old) 326 317 gpc_unmap_khva(kvm, old_pfn, old_khva); 327 318 328 319 return ret; ··· 357 346 } 358 347 EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap); 359 348 349 + void kvm_gpc_init(struct gfn_to_pfn_cache *gpc) 350 + { 351 + rwlock_init(&gpc->lock); 352 + mutex_init(&gpc->refresh_lock); 353 + } 354 + EXPORT_SYMBOL_GPL(kvm_gpc_init); 360 355 361 - int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, 362 - struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, 363 - gpa_t gpa, unsigned long len) 356 + int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, 357 + struct kvm_vcpu *vcpu, enum pfn_cache_usage usage, 358 + gpa_t gpa, unsigned long len) 364 359 { 365 360 WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage); 366 361 367 362 if (!gpc->active) { 368 - rwlock_init(&gpc->lock); 369 - mutex_init(&gpc->refresh_lock); 370 - 371 363 gpc->khva = NULL; 372 364 gpc->pfn = KVM_PFN_ERR_FAULT; 373 365 gpc->uhva = KVM_HVA_ERR_BAD; 374 366 gpc->vcpu = vcpu; 375 367 gpc->usage = usage; 376 368 gpc->valid = false; 377 - gpc->active = true; 378 369 379 370 spin_lock(&kvm->gpc_lock); 380 371 list_add(&gpc->list, &kvm->gpc_list); 381 372 spin_unlock(&kvm->gpc_lock); 373 + 374 + /* 375 + * Activate the cache after adding it to the list, a concurrent 376 + * refresh must not establish a mapping until the cache is 377 + * reachable by mmu_notifier events. 378 + */ 379 + write_lock_irq(&gpc->lock); 380 + gpc->active = true; 381 + write_unlock_irq(&gpc->lock); 382 382 } 383 383 return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len); 384 384 } 385 - EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_init); 385 + EXPORT_SYMBOL_GPL(kvm_gpc_activate); 386 386 387 - void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) 387 + void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc) 388 388 { 389 389 if (gpc->active) { 390 + /* 391 + * Deactivate the cache before removing it from the list, KVM 392 + * must stall mmu_notifier events until all users go away, i.e. 393 + * until gpc->lock is dropped and refresh is guaranteed to fail. 
394 + */ 395 + write_lock_irq(&gpc->lock); 396 + gpc->active = false; 397 + write_unlock_irq(&gpc->lock); 398 + 390 399 spin_lock(&kvm->gpc_lock); 391 400 list_del(&gpc->list); 392 401 spin_unlock(&kvm->gpc_lock); 393 402 394 403 kvm_gfn_to_pfn_cache_unmap(kvm, gpc); 395 - gpc->active = false; 396 404 } 397 405 } 398 - EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_destroy); 406 + EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
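The renamed kvm_gpc_activate()/kvm_gpc_deactivate() pair also establishes an ordering contract: the active flag is flipped under the write side of gpc->lock, after the cache joins kvm->gpc_list on activation and before teardown proceeds on deactivation. Every user therefore re-checks the flag under the lock, as the refresh path above now does:

write_lock_irq(&gpc->lock);
if (!gpc->active) {
	write_unlock_irq(&gpc->lock);
	return -EINVAL;		/* raced with kvm_gpc_deactivate() */
}
/* ... the mapping may be used or refreshed while the lock is held ... */
write_unlock_irq(&gpc->lock);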