Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: Merge up fixes from mainline

There are several things here that will really help my CI.

+3486 -2096
+3
.mailmap
···
 Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>
 John Crispin <john@phrozen.org> <blogic@openwrt.org>
+John Fastabend <john.fastabend@gmail.com> <john.r.fastabend@intel.com>
 John Keeping <john@keeping.me.uk> <john@metanate.com>
 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
 John Stultz <johnstul@us.ibm.com>
···
 Sedat Dilek <sedat.dilek@gmail.com> <sedat.dilek@credativ.de>
 Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com>
 Shannon Nelson <shannon.nelson@amd.com> <snelson@pensando.io>
+Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@intel.com>
+Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@oracle.com>
 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com>
 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
 Shuah Khan <shuah@kernel.org> <shuah.khan@hp.com>
+3 -3
Documentation/devicetree/bindings/hwmon/moortec,mr75203.yaml
···
 G coefficient for temperature equation.
 Default for series 5 = 60000
 Default for series 6 = 57400
-multipleOf: 1000
+multipleOf: 100
 minimum: 1000
 $ref: /schemas/types.yaml#/definitions/uint32

···
 H coefficient for temperature equation.
 Default for series 5 = 200000
 Default for series 6 = 249400
-multipleOf: 1000
+multipleOf: 100
 minimum: 1000
 $ref: /schemas/types.yaml#/definitions/uint32

···
 J coefficient for temperature equation.
 Default for series 5 = -100
 Default for series 6 = 0
-multipleOf: 1000
+multipleOf: 100
 maximum: 0
 $ref: /schemas/types.yaml#/definitions/int32

-19
Documentation/devicetree/bindings/serial/cavium-uart.txt
···
-* Universal Asynchronous Receiver/Transmitter (UART)
-
-- compatible: "cavium,octeon-3860-uart"
-
-  Compatibility with all cn3XXX, cn5XXX and cn6XXX SOCs.
-
-- reg: The base address of the UART register bank.
-
-- interrupts: A single interrupt specifier.
-
-- current-speed: Optional, the current bit rate in bits per second.
-
-Example:
-	uart1: serial@1180000000c00 {
-		compatible = "cavium,octeon-3860-uart","ns16550";
-		reg = <0x11800 0x00000c00 0x0 0x400>;
-		current-speed = <115200>;
-		interrupts = <0 35>;
-	};
-28
Documentation/devicetree/bindings/serial/nxp,lpc1850-uart.txt
···
-* NXP LPC1850 UART
-
-Required properties:
-- compatible	: "nxp,lpc1850-uart", "ns16550a".
-- reg		: offset and length of the register set for the device.
-- interrupts	: should contain uart interrupt.
-- clocks	: phandle to the input clocks.
-- clock-names	: required elements: "uartclk", "reg".
-
-Optional properties:
-- dmas		: Two or more DMA channel specifiers following the
-		  convention outlined in bindings/dma/dma.txt
-- dma-names	: Names for the dma channels, if present. There must
-		  be at least one channel named "tx" for transmit
-		  and named "rx" for receive.
-
-Since it's also possible to also use the of_serial.c driver all
-parameters from 8250.txt also apply but are optional.
-
-Example:
-uart0: serial@40081000 {
-	compatible = "nxp,lpc1850-uart", "ns16550a";
-	reg = <0x40081000 0x1000>;
-	reg-shift = <2>;
-	interrupts = <24>;
-	clocks = <&ccu2 CLK_APB0_UART0>, <&ccu1 CLK_CPU_UART0>;
-	clock-names = "uartclk", "reg";
-};
+1 -1
Documentation/devicetree/bindings/sound/google,sc7180-trogdor.yaml
···
 title: Google SC7180-Trogdor ASoC sound card driver

 maintainers:
-  - Rohit kumar <rohitkr@codeaurora.org>
+  - Rohit kumar <quic_rohkumar@quicinc.com>
   - Cheng-Yi Chiang <cychiang@chromium.org>

 description:
+1 -1
Documentation/devicetree/bindings/sound/qcom,lpass-cpu.yaml
···

 maintainers:
   - Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
-  - Rohit kumar <rohitkr@codeaurora.org>
+  - Rohit kumar <quic_rohkumar@quicinc.com>

 description: |
   Qualcomm Technologies Inc. SOC Low-Power Audio SubSystem (LPASS) that consist
+2
MAINTAINERS
···
 L:	asahi@lists.linux.dev
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Maintained
+F:	Documentation/devicetree/bindings/sound/adi,ssm3515.yaml
 F:	Documentation/devicetree/bindings/sound/apple,*
 F:	sound/soc/apple/*
 F:	sound/soc/codecs/cs42l83-i2c.c
+F:	sound/soc/codecs/ssm3515.c

 ARM/APPLE MACHINE SUPPORT
 M:	Hector Martin <marcan@marcan.st>
+18 -6
Makefile
···
 VERSION = 6
 PATCHLEVEL = 5
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Hurr durr I'ma ninja sloth

 # *DOCUMENTATION*
···
 		  $(USERINCLUDE)

 KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
-KBUILD_CFLAGS := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
-		 -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
-		 -Werror=implicit-function-declaration -Werror=implicit-int \
-		 -Werror=return-type -Wno-format-security -funsigned-char \
-		 -std=gnu11
+
+KBUILD_CFLAGS :=
+KBUILD_CFLAGS += -std=gnu11
+KBUILD_CFLAGS += -fshort-wchar
+KBUILD_CFLAGS += -funsigned-char
+KBUILD_CFLAGS += -fno-common
+KBUILD_CFLAGS += -fno-PIE
+KBUILD_CFLAGS += -fno-strict-aliasing
+KBUILD_CFLAGS += -Wall
+KBUILD_CFLAGS += -Wundef
+KBUILD_CFLAGS += -Werror=implicit-function-declaration
+KBUILD_CFLAGS += -Werror=implicit-int
+KBUILD_CFLAGS += -Werror=return-type
+KBUILD_CFLAGS += -Werror=strict-prototypes
+KBUILD_CFLAGS += -Wno-format-security
+KBUILD_CFLAGS += -Wno-trigraphs
+
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_RUSTFLAGS := $(rust_common_flags) \
 			--target=$(objtree)/scripts/target.json \
+2
arch/arm64/include/asm/kvm_host.h
···
 #define DBG_SS_ACTIVE_PENDING	__vcpu_single_flag(sflags, BIT(5))
 /* PMUSERENR for the guest EL0 is on physical CPU */
 #define PMUSERENR_ON_CPU	__vcpu_single_flag(sflags, BIT(6))
+/* WFI instruction trapped */
+#define IN_WFI			__vcpu_single_flag(sflags, BIT(7))


 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
+9 -17
arch/arm64/include/asm/kvm_pgtable.h
···
 kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);

 /**
- * kvm_pgtable_stage2_mkold() - Clear the access flag in a page-table entry.
+ * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
+ *					    flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @size:	Size of the address range to visit.
+ * @mkold:	True if the access flag should be cleared.
  *
  * The offset of @addr within a page is ignored.
  *
- * If there is a valid, leaf page-table entry used to translate @addr, then
- * clear the access flag in that entry.
+ * Tests and conditionally clears the access flag for every valid, leaf
+ * page-table entry used to translate the range [@addr, @addr + @size).
  *
  * Note that it is the caller's responsibility to invalidate the TLB after
  * calling this function to ensure that the updated permissions are visible
  * to the CPUs.
  *
- * Return: The old page-table entry prior to clearing the flag, 0 on failure.
+ * Return: True if any of the visited PTEs had the access flag set.
  */
-kvm_pte_t kvm_pgtable_stage2_mkold(struct kvm_pgtable *pgt, u64 addr);
+bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
+					 u64 size, bool mkold);

 /**
  * kvm_pgtable_stage2_relax_perms() - Relax the permissions enforced by a
···
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
				   enum kvm_pgtable_prot prot);
-
-/**
- * kvm_pgtable_stage2_is_young() - Test whether a page-table entry has the
- *				   access flag set.
- * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
- * @addr:	Intermediate physical address to identify the page-table entry.
- *
- * The offset of @addr within a page is ignored.
- *
- * Return: True if the page-table entry has the access flag set, false otherwise.
- */
-bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);

 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
+1
arch/arm64/include/asm/virt.h
···

 void __hyp_set_vectors(phys_addr_t phys_vector_base);
 void __hyp_reset_vectors(void);
+bool is_kvm_arm_initialised(void);

 DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);

+25 -8
arch/arm64/kernel/fpsimd.c
···
 int vec_set_vector_length(struct task_struct *task, enum vec_type type,
			  unsigned long vl, unsigned long flags)
 {
+	bool free_sme = false;
+
	if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
				     PR_SVE_SET_VL_ONEXEC))
		return -EINVAL;
···
		task->thread.fp_type = FP_STATE_FPSIMD;
	}

-	if (system_supports_sme() && type == ARM64_VEC_SME) {
-		task->thread.svcr &= ~(SVCR_SM_MASK |
-				       SVCR_ZA_MASK);
-		clear_thread_flag(TIF_SME);
+	if (system_supports_sme()) {
+		if (type == ARM64_VEC_SME ||
+		    !(task->thread.svcr & (SVCR_SM_MASK | SVCR_ZA_MASK))) {
+			/*
+			 * We are changing the SME VL or weren't using
+			 * SME anyway, discard the state and force a
+			 * reallocation.
+			 */
+			task->thread.svcr &= ~(SVCR_SM_MASK |
+					       SVCR_ZA_MASK);
+			clear_thread_flag(TIF_SME);
+			free_sme = true;
+		}
	}

	if (task == current)
		put_cpu_fpsimd_context();

	/*
-	 * Force reallocation of task SVE and SME state to the correct
-	 * size on next use:
+	 * Free the changed states if they are not in use, SME will be
+	 * reallocated to the correct size on next use and we just
+	 * allocate SVE now in case it is needed for use in streaming
+	 * mode.
	 */
-	sve_free(task);
-	if (system_supports_sme() && type == ARM64_VEC_SME)
+	if (system_supports_sve()) {
+		sve_free(task);
+		sve_alloc(task, true);
+	}
+
+	if (free_sme)
		sme_free(task);

	task_set_vl(task, type, vl);
+4
arch/arm64/kernel/vdso/vgettimeofday.c
···
 *
 */

+int __kernel_clock_gettime(clockid_t clock, struct __kernel_timespec *ts);
+int __kernel_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz);
+int __kernel_clock_getres(clockid_t clock_id, struct __kernel_timespec *res);
+
 int __kernel_clock_gettime(clockid_t clock,
			    struct __kernel_timespec *ts)
 {
+3 -3
arch/arm64/kvm/arch_timer.c
···
	assign_clear_set_bit(tpt, CNTHCTL_EL1PCEN << 10, set, clr);
	assign_clear_set_bit(tpc, CNTHCTL_EL1PCTEN << 10, set, clr);

-	/* This only happens on VHE, so use the CNTKCTL_EL1 accessor */
-	sysreg_clear_set(cntkctl_el1, clr, set);
+	/* This only happens on VHE, so use the CNTHCTL_EL2 accessor. */
+	sysreg_clear_set(cnthctl_el2, clr, set);
 }

 void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
···
 void kvm_timer_init_vhe(void)
 {
	if (cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF))
-		sysreg_clear_set(cntkctl_el1, 0, CNTHCTL_ECV);
+		sysreg_clear_set(cnthctl_el2, 0, CNTHCTL_ECV);
 }

 int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+24 -4
arch/arm64/kvm/arm.c
···

 DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);

-static bool vgic_present;
+static bool vgic_present, kvm_arm_initialised;

 static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
 DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+
+bool is_kvm_arm_initialised(void)
+{
+	return kvm_arm_initialised;
+}

 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 {
···
	 */
	preempt_disable();
	kvm_vgic_vmcr_sync(vcpu);
-	vgic_v4_put(vcpu, true);
+	vcpu_set_flag(vcpu, IN_WFI);
+	vgic_v4_put(vcpu);
	preempt_enable();

	kvm_vcpu_halt(vcpu);
	vcpu_clear_flag(vcpu, IN_WFIT);

	preempt_disable();
+	vcpu_clear_flag(vcpu, IN_WFI);
	vgic_v4_load(vcpu);
	preempt_enable();
 }
···
	if (kvm_check_request(KVM_REQ_RELOAD_GICv4, vcpu)) {
		/* The distributor enable bits were changed */
		preempt_disable();
-		vgic_v4_put(vcpu, false);
+		vgic_v4_put(vcpu);
		vgic_v4_load(vcpu);
		preempt_enable();
	}
···

 int kvm_arch_hardware_enable(void)
 {
-	int was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
+	int was_enabled;

+	/*
+	 * Most calls to this function are made with migration
+	 * disabled, but not with preemption disabled. The former is
+	 * enough to ensure correctness, but most of the helpers
+	 * expect the later and will throw a tantrum otherwise.
+	 */
+	preempt_disable();
+
+	was_enabled = __this_cpu_read(kvm_arm_hardware_enabled);
	_kvm_arch_hardware_enable(NULL);

	if (!was_enabled) {
		kvm_vgic_cpu_up();
		kvm_timer_cpu_up();
	}
+
+	preempt_enable();

	return 0;
 }
···
	err = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
	if (err)
		goto out_subs;
+
+	kvm_arm_initialised = true;

	return 0;

+8
arch/arm64/kvm/hyp/hyp-entry.S
···
	esb
	stp	x0, x1, [sp, #-16]!
 662:
+	/*
+	 * spectre vectors __bp_harden_hyp_vecs generate br instructions at runtime
+	 * that jump at offset 8 at __kvm_hyp_vector.
+	 * As hyp .text is guarded section, it needs bti j.
+	 */
+	bti j
	b	\target

 check_preamble_length 661b, 662b
···
	nop
	stp	x0, x1, [sp, #-16]!
 662:
+	/* Check valid_vect */
+	bti j
	b	\target

 check_preamble_length 661b, 662b
+10
arch/arm64/kvm/hyp/nvhe/host.S
···

	ret
 SYM_CODE_END(__kvm_hyp_host_forward_smc)
+
+/*
+ * kvm_host_psci_cpu_entry is called through br instruction, which requires
+ * bti j instruction as compilers (gcc and llvm) doesn't insert bti j for external
+ * functions, but bti c instead.
+ */
+SYM_CODE_START(kvm_host_psci_cpu_entry)
+	bti j
+	b __kvm_host_psci_cpu_entry
+SYM_CODE_END(kvm_host_psci_cpu_entry)
+1 -1
arch/arm64/kvm/hyp/nvhe/psci-relay.c
···
			  __hyp_pa(init_params), 0);
 }

-asmlinkage void __noreturn kvm_host_psci_cpu_entry(bool is_cpu_on)
+asmlinkage void __noreturn __kvm_host_psci_cpu_entry(bool is_cpu_on)
 {
	struct psci_boot_args *boot_args;
	struct kvm_cpu_context *host_ctxt;
+38 -9
arch/arm64/kvm/hyp/pgtable.c
···
	return pte;
 }

-kvm_pte_t kvm_pgtable_stage2_mkold(struct kvm_pgtable *pgt, u64 addr)
+struct stage2_age_data {
+	bool	mkold;
+	bool	young;
+};
+
+static int stage2_age_walker(const struct kvm_pgtable_visit_ctx *ctx,
+			     enum kvm_pgtable_walk_flags visit)
 {
-	kvm_pte_t pte = 0;
-	stage2_update_leaf_attrs(pgt, addr, 1, 0, KVM_PTE_LEAF_ATTR_LO_S2_AF,
-				 &pte, NULL, 0);
+	kvm_pte_t new = ctx->old & ~KVM_PTE_LEAF_ATTR_LO_S2_AF;
+	struct stage2_age_data *data = ctx->arg;
+
+	if (!kvm_pte_valid(ctx->old) || new == ctx->old)
+		return 0;
+
+	data->young = true;
+
+	/*
+	 * stage2_age_walker() is always called while holding the MMU lock for
+	 * write, so this will always succeed. Nonetheless, this deliberately
+	 * follows the race detection pattern of the other stage-2 walkers in
+	 * case the locking mechanics of the MMU notifiers is ever changed.
+	 */
+	if (data->mkold && !stage2_try_set_pte(ctx, new))
+		return -EAGAIN;
+
	/*
	 * "But where's the TLBI?!", you scream.
	 * "Over in the core code", I sigh.
	 *
	 * See the '->clear_flush_young()' callback on the KVM mmu notifier.
	 */
-	return pte;
+	return 0;
 }

-bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr)
+bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
+					 u64 size, bool mkold)
 {
-	kvm_pte_t pte = 0;
-	stage2_update_leaf_attrs(pgt, addr, 1, 0, 0, &pte, NULL, 0);
-	return pte & KVM_PTE_LEAF_ATTR_LO_S2_AF;
+	struct stage2_age_data data = {
+		.mkold		= mkold,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb		= stage2_age_walker,
+		.arg		= &data,
+		.flags		= KVM_PGTABLE_WALK_LEAF,
+	};
+
+	WARN_ON(kvm_pgtable_walk(pgt, addr, size, &walker));
+	return data.young;
 }

 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
+8 -10
arch/arm64/kvm/mmu.c
···
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
	u64 size = (range->end - range->start) << PAGE_SHIFT;
-	kvm_pte_t kpte;
-	pte_t pte;

	if (!kvm->arch.mmu.pgt)
		return false;

-	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
-
-	kpte = kvm_pgtable_stage2_mkold(kvm->arch.mmu.pgt,
-					range->start << PAGE_SHIFT);
-	pte = __pte(kpte);
-	return pte_valid(pte) && pte_young(pte);
+	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+						   range->start << PAGE_SHIFT,
+						   size, true);
 }

 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
+	u64 size = (range->end - range->start) << PAGE_SHIFT;
+
	if (!kvm->arch.mmu.pgt)
		return false;

-	return kvm_pgtable_stage2_is_young(kvm->arch.mmu.pgt,
-					   range->start << PAGE_SHIFT);
+	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+						   range->start << PAGE_SHIFT,
+						   size, false);
 }

 phys_addr_t kvm_mmu_get_httbr(void)
+1 -1
arch/arm64/kvm/pkvm.c
···
 {
	int ret;

-	if (!is_protected_kvm_enabled())
+	if (!is_protected_kvm_enabled() || !is_kvm_arm_initialised())
		return 0;

	/*
+21 -21
arch/arm64/kvm/sys_regs.c
···

	if (p->is_write) {
		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
-		__vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
		kvm_vcpu_pmu_restore_guest(vcpu);
	} else {
		p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
···
	{ SYS_DESC(SYS_DBGWCRn_EL1(n)),					\
	  trap_wcr, reset_wcr, 0, 0, get_wcr, set_wcr }

-#define PMU_SYS_REG(r)						\
-	SYS_DESC(r), .reset = reset_pmu_reg, .visibility = pmu_visibility
+#define PMU_SYS_REG(name)						\
+	SYS_DESC(SYS_##name), .reset = reset_pmu_reg,			\
+	.visibility = pmu_visibility

 /* Macro to expand the PMEVCNTRn_EL0 register */
 #define PMU_PMEVCNTR_EL0(n)						\
-	{ PMU_SYS_REG(SYS_PMEVCNTRn_EL0(n)),				\
+	{ PMU_SYS_REG(PMEVCNTRn_EL0(n)),				\
	  .reset = reset_pmevcntr, .get_user = get_pmu_evcntr,		\
	  .access = access_pmu_evcntr, .reg = (PMEVCNTR0_EL0 + n), }

 /* Macro to expand the PMEVTYPERn_EL0 register */
 #define PMU_PMEVTYPER_EL0(n)						\
-	{ PMU_SYS_REG(SYS_PMEVTYPERn_EL0(n)),				\
+	{ PMU_SYS_REG(PMEVTYPERn_EL0(n)),				\
	  .reset = reset_pmevtyper,					\
	  .access = access_pmu_evtyper, .reg = (PMEVTYPER0_EL0 + n), }
···
	{ SYS_DESC(SYS_PMBSR_EL1), undef_access },
	/* PMBIDR_EL1 is not trapped */

-	{ PMU_SYS_REG(SYS_PMINTENSET_EL1),
+	{ PMU_SYS_REG(PMINTENSET_EL1),
	  .access = access_pminten, .reg = PMINTENSET_EL1 },
-	{ PMU_SYS_REG(SYS_PMINTENCLR_EL1),
+	{ PMU_SYS_REG(PMINTENCLR_EL1),
	  .access = access_pminten, .reg = PMINTENSET_EL1 },
	{ SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi },
···
	{ SYS_DESC(SYS_CTR_EL0), access_ctr },
	{ SYS_DESC(SYS_SVCR), undef_access },

-	{ PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr,
+	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
	  .reset = reset_pmcr, .reg = PMCR_EL0 },
-	{ PMU_SYS_REG(SYS_PMCNTENSET_EL0),
+	{ PMU_SYS_REG(PMCNTENSET_EL0),
	  .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
-	{ PMU_SYS_REG(SYS_PMCNTENCLR_EL0),
+	{ PMU_SYS_REG(PMCNTENCLR_EL0),
	  .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
-	{ PMU_SYS_REG(SYS_PMOVSCLR_EL0),
+	{ PMU_SYS_REG(PMOVSCLR_EL0),
	  .access = access_pmovs, .reg = PMOVSSET_EL0 },
	/*
	 * PM_SWINC_EL0 is exposed to userspace as RAZ/WI, as it was
	 * previously (and pointlessly) advertised in the past...
	 */
-	{ PMU_SYS_REG(SYS_PMSWINC_EL0),
+	{ PMU_SYS_REG(PMSWINC_EL0),
	  .get_user = get_raz_reg, .set_user = set_wi_reg,
	  .access = access_pmswinc, .reset = NULL },
-	{ PMU_SYS_REG(SYS_PMSELR_EL0),
+	{ PMU_SYS_REG(PMSELR_EL0),
	  .access = access_pmselr, .reset = reset_pmselr, .reg = PMSELR_EL0 },
-	{ PMU_SYS_REG(SYS_PMCEID0_EL0),
+	{ PMU_SYS_REG(PMCEID0_EL0),
	  .access = access_pmceid, .reset = NULL },
-	{ PMU_SYS_REG(SYS_PMCEID1_EL0),
+	{ PMU_SYS_REG(PMCEID1_EL0),
	  .access = access_pmceid, .reset = NULL },
-	{ PMU_SYS_REG(SYS_PMCCNTR_EL0),
+	{ PMU_SYS_REG(PMCCNTR_EL0),
	  .access = access_pmu_evcntr, .reset = reset_unknown,
	  .reg = PMCCNTR_EL0, .get_user = get_pmu_evcntr},
-	{ PMU_SYS_REG(SYS_PMXEVTYPER_EL0),
+	{ PMU_SYS_REG(PMXEVTYPER_EL0),
	  .access = access_pmu_evtyper, .reset = NULL },
-	{ PMU_SYS_REG(SYS_PMXEVCNTR_EL0),
+	{ PMU_SYS_REG(PMXEVCNTR_EL0),
	  .access = access_pmu_evcntr, .reset = NULL },
	/*
	 * PMUSERENR_EL0 resets as unknown in 64bit mode while it resets as zero
	 * in 32bit mode. Here we choose to reset it as zero for consistency.
	 */
-	{ PMU_SYS_REG(SYS_PMUSERENR_EL0), .access = access_pmuserenr,
+	{ PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr,
	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
-	{ PMU_SYS_REG(SYS_PMOVSSET_EL0),
+	{ PMU_SYS_REG(PMOVSSET_EL0),
	  .access = access_pmovs, .reg = PMOVSSET_EL0 },

	{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
···
	 * PMCCFILTR_EL0 resets as unknown in 64bit mode while it resets as zero
	 * in 32bit mode. Here we choose to reset it as zero for consistency.
	 */
-	{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
+	{ PMU_SYS_REG(PMCCFILTR_EL0), .access = access_pmu_evtyper,
	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },

	EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
+1 -1
arch/arm64/kvm/vgic/vgic-v3.c
···
 {
	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;

-	WARN_ON(vgic_v4_put(vcpu, false));
+	WARN_ON(vgic_v4_put(vcpu));

	vgic_v3_vmcr_sync(vcpu);

+5 -2
arch/arm64/kvm/vgic/vgic-v4.c
···
	its_vm->vpes = NULL;
 }

-int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db)
+int vgic_v4_put(struct kvm_vcpu *vcpu)
 {
	struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;

	if (!vgic_supports_direct_msis(vcpu->kvm) || !vpe->resident)
		return 0;

-	return its_make_vpe_non_resident(vpe, need_db);
+	return its_make_vpe_non_resident(vpe, !!vcpu_get_flag(vcpu, IN_WFI));
 }

 int vgic_v4_load(struct kvm_vcpu *vcpu)
···
	int err;

	if (!vgic_supports_direct_msis(vcpu->kvm) || vpe->resident)
+		return 0;
+
+	if (vcpu_get_flag(vcpu, IN_WFI))
		return 0;

	/*
+3 -1
arch/arm64/mm/trans_pgd.c
···
 #include <linux/bug.h>
 #include <linux/mm.h>
 #include <linux/mmzone.h>
+#include <linux/kfence.h>

 static void *trans_alloc(struct trans_pgd_info *info)
 {
···
	 * the temporary mappings we use during restore.
	 */
	set_pte(dst_ptep, pte_mkwrite(pte));
-	} else if (debug_pagealloc_enabled() && !pte_none(pte)) {
+	} else if ((debug_pagealloc_enabled() ||
+		   is_kfence_address((void *)addr)) && !pte_none(pte)) {
	/*
	 * debug_pagealloc will removed the PTE_VALID bit if
	 * the page isn't in use by the resume kernel. It may have
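For context, the is_kfence_address() test used above is, in rough sketch form, just a range comparison against the contiguous KFENCE object pool. The real helper lives in include/linux/kfence.h; __kfence_pool and KFENCE_POOL_SIZE are its internals, shown here only for illustration:

/* Rough sketch of is_kfence_address(): KFENCE serves all guarded
 * objects from one contiguous pool, so membership is an unsigned
 * range comparison. The real helper also tolerates a NULL pool
 * when KFENCE is disabled at boot. */
static inline bool is_kfence_address_sketch(const void *addr)
{
	return (unsigned long)((const char *)addr - __kfence_pool) <
	       KFENCE_POOL_SIZE;
}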
+7 -1
arch/arm64/net/bpf_jit_comp.c
···
 *
 */

-	emit_bti(A64_BTI_C, ctx);
+	/* bpf function may be invoked by 3 instruction types:
+	 * 1. bl, attached via freplace to bpf prog via short jump
+	 * 2. br, attached via freplace to bpf prog via long jump
+	 * 3. blr, working as a function pointer, used by emit_call.
+	 * So BTI_JC should used here to support both br and blr.
+	 */
+	emit_bti(A64_BTI_JC, ctx);

	emit(A64_MOV(1, A64_R(9), A64_LR), ctx);
	emit(A64_NOP, ctx);
+6 -6
arch/arm64/tools/sysreg
···
 EndSysreg

 SysregFields HFGxTR_EL2
-Field	63	nAMIAIR2_EL1
+Field	63	nAMAIR2_EL1
 Field	62	nMAIR2_EL1
 Field	61	nS2POR_EL1
 Field	60	nPOR_EL1
···
 Res0	51
 Field	50	nACCDATA_EL1
 Field	49	ERXADDR_EL1
-Field	48	EXRPFGCDN_EL1
-Field	47	EXPFGCTL_EL1
-Field	46	EXPFGF_EL1
+Field	48	ERXPFGCDN_EL1
+Field	47	ERXPFGCTL_EL1
+Field	46	ERXPFGF_EL1
 Field	45	ERXMISCn_EL1
 Field	44	ERXSTATUS_EL1
 Field	43	ERXCTLR_EL1
···
 Field	34	TPIDRRO_EL0
 Field	33	TPIDR_EL1
 Field	32	TCR_EL1
-Field	31	SCTXNUM_EL0
-Field	30	SCTXNUM_EL1
+Field	31	SCXTNUM_EL0
+Field	30	SCXTNUM_EL1
 Field	29	SCTLR_EL1
 Field	28	REVIDR_EL1
 Field	27	PAR_EL1
+1 -1
arch/ia64/kernel/sys_ia64.c
···
	info.low_limit = addr;
	info.high_limit = TASK_SIZE;
	info.align_mask = align_mask;
-	info.align_offset = 0;
+	info.align_offset = pgoff << PAGE_SHIFT;
	return vm_unmapped_area(&info);
 }

+10 -5
arch/parisc/kernel/sys_parisc.c
···
 #include <linux/elf-randomize.h>

 /*
- * Construct an artificial page offset for the mapping based on the physical
+ * Construct an artificial page offset for the mapping based on the virtual
  * address of the kernel file mapping variable.
+ * If filp is zero the calculated pgoff value aliases the memory of the given
+ * address. This is useful for io_uring where the mapping shall alias a kernel
+ * address and a userspace adress where both the kernel and the userspace
+ * access the same memory region.
  */
-#define GET_FILP_PGOFF(filp)		\
-	(filp ? (((unsigned long) filp->f_mapping) >> 8)	\
-	 & ((SHM_COLOUR-1) >> PAGE_SHIFT) : 0UL)
+#define GET_FILP_PGOFF(filp, addr)		\
+	((filp ? (((unsigned long) filp->f_mapping) >> 8)	\
+	 & ((SHM_COLOUR-1) >> PAGE_SHIFT) : 0UL)	\
+	  + (addr >> PAGE_SHIFT))

 static unsigned long shared_align_offset(unsigned long filp_pgoff,
					 unsigned long pgoff)
···
		do_color_align = 0;
	if (filp || (flags & MAP_SHARED))
		do_color_align = 1;
-	filp_pgoff = GET_FILP_PGOFF(filp);
+	filp_pgoff = GET_FILP_PGOFF(filp, addr);

	if (flags & MAP_FIXED) {
		/* Even MAP_FIXED mappings must reside within TASK_SIZE */
+3
arch/powerpc/crypto/.gitignore
···
+# SPDX-License-Identifier: GPL-2.0-only
+aesp10-ppc.S
+ghashp10-ppc.S
+13 -56
arch/powerpc/include/asm/bug.h
···
 #ifdef __KERNEL__

 #include <asm/asm-compat.h>
-#include <asm/extable.h>

 #ifdef CONFIG_BUG

 #ifdef __ASSEMBLY__
 #include <asm/asm-offsets.h>
 #ifdef CONFIG_DEBUG_BUGVERBOSE
-.macro __EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
	 .section __bug_table,"aw"
 5001:	 .4byte \addr - .
	 .4byte 5002f - .
···
	 .previous
	 .endm
 #else
-.macro __EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
	 .section __bug_table,"aw"
 5001:	 .4byte \addr - .
	 .short \flags
···
	 .previous
	 .endm
 #endif /* verbose */
-
-.macro EMIT_WARN_ENTRY addr,file,line,flags
-	EX_TABLE(\addr,\addr+4)
-	__EMIT_BUG_ENTRY \addr,\file,\line,\flags
-.endm
-
-.macro EMIT_BUG_ENTRY addr,file,line,flags
-	.if \flags & 1 /* BUGFLAG_WARNING */
-	.err /* Use EMIT_WARN_ENTRY for warnings */
-	.endif
-	__EMIT_BUG_ENTRY \addr,\file,\line,\flags
-.endm

 #else /* !__ASSEMBLY__ */
 /* _EMIT_BUG_ENTRY expects args %0,%1,%2,%3 to be FILE, LINE, flags and
···
	  "i" (sizeof(struct bug_entry)),	\
	  ##__VA_ARGS__)

-#define WARN_ENTRY(insn, flags, label, ...)		\
-	asm_volatile_goto(				\
-		"1:	" insn "\n"			\
-		EX_TABLE(1b, %l[label])			\
-		_EMIT_BUG_ENTRY				\
-		: : "i" (__FILE__), "i" (__LINE__),	\
-		  "i" (flags),				\
-		  "i" (sizeof(struct bug_entry)),	\
-		  ##__VA_ARGS__ : : label)
-
 /*
  * BUG_ON() and WARN_ON() do their best to cooperate with compile-time
  * optimisations. However depending on the complexity of the condition
···
 } while (0)
 #define HAVE_ARCH_BUG

-#define __WARN_FLAGS(flags) do {				\
-	__label__ __label_warn_on;				\
-								\
-	WARN_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags), __label_warn_on);	\
-	barrier_before_unreachable();				\
-	__builtin_unreachable();				\
-								\
-__label_warn_on:						\
-	break;							\
-} while (0)
+#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags))

 #ifdef CONFIG_PPC64
 #define BUG_ON(x) do {						\
···
 } while (0)

 #define WARN_ON(x) ({						\
-	bool __ret_warn_on = false;				\
-	do {							\
-		if (__builtin_constant_p((x))) {		\
-			if (!(x))				\
-				break;				\
+	int __ret_warn_on = !!(x);				\
+	if (__builtin_constant_p(__ret_warn_on)) {		\
+		if (__ret_warn_on)				\
			__WARN();				\
-			__ret_warn_on = true;			\
-		} else {					\
-			__label__ __label_warn_on;		\
-							\
-			WARN_ENTRY(PPC_TLNEI " %4, 0",	\
-				   BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN),	\
-				   __label_warn_on,		\
-				   "r" ((__force long)(x)));	\
-			break;					\
-__label_warn_on:						\
-			__ret_warn_on = true;			\
-		}						\
-	} while (0);						\
+	} else {						\
+		BUG_ENTRY(PPC_TLNEI " %4, 0",			\
+			  BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN),	\
+			  "r" (__ret_warn_on));			\
+	}							\
	unlikely(__ret_warn_on);				\
 })
···
 #ifdef __ASSEMBLY__
 .macro EMIT_BUG_ENTRY addr,file,line,flags
 .endm
-.macro EMIT_WARN_ENTRY addr,file,line,flags
-.endm
 #else /* !__ASSEMBLY__ */
 #define _EMIT_BUG_ENTRY
-#define _EMIT_WARN_ENTRY
 #endif
 #endif /* CONFIG_BUG */
+
+#define EMIT_WARN_ENTRY EMIT_BUG_ENTRY

 #include <asm-generic/bug.h>

-6
arch/powerpc/include/asm/elf.h
···

 /*
  * This is used to ensure we don't load something for the wrong architecture.
- *
- * 64le only supports ELFv2 64-bit binaries (64be supports v1 and v2).
  */
-#if defined(CONFIG_PPC64) && defined(CONFIG_CPU_LITTLE_ENDIAN)
-#define elf_check_arch(x) (((x)->e_machine == ELF_ARCH) && \
-			   (((x)->e_flags & 0x3) == 0x2))
-#else
 #define elf_check_arch(x) ((x)->e_machine == ELF_ARCH)
-#endif
 #define compat_elf_check_arch(x) ((x)->e_machine == EM_PPC)

 #define CORE_DUMP_USE_REGSET
+1 -5
arch/powerpc/include/asm/thread_info.h
···
 #define clear_tsk_compat_task(tsk) do { } while (0)
 #endif

-#ifdef CONFIG_PPC64
-#ifdef CONFIG_CPU_BIG_ENDIAN
+#if defined(CONFIG_PPC64)
 #define is_elf2_task() (test_thread_flag(TIF_ELF2ABI))
-#else
-#define is_elf2_task() (1)
-#endif
 #else
 #define is_elf2_task() (0)
 #endif
+2 -7
arch/powerpc/kernel/traps.c
···

	if (!(regs->msr & MSR_PR) &&  /* not user-mode */
	    report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN) {
-		const struct exception_table_entry *entry;
-
-		entry = search_exception_tables(bugaddr);
-		if (entry) {
-			regs_set_return_ip(regs, extable_fixup(entry) + regs->nip - bugaddr);
-			return;
-		}
+		regs_add_return_ip(regs, 4);
+		return;
	}

	if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE) && user_mode(regs)) {
+1
arch/powerpc/mm/kasan/Makefile
···
 # SPDX-License-Identifier: GPL-2.0

 KASAN_SANITIZE := n
+KCOV_INSTRUMENT := n

 obj-$(CONFIG_PPC32) += init_32.o
 obj-$(CONFIG_PPC_8xx) += 8xx.o
+2 -4
arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
···
	return ret;
 }

-static int mpc512x_lpbfifo_remove(struct platform_device *pdev)
+static void mpc512x_lpbfifo_remove(struct platform_device *pdev)
 {
	unsigned long flags;
	struct dma_device *dma_dev = lpbfifo.chan->device;
···
	free_irq(lpbfifo.irq, &pdev->dev);
	irq_dispose_mapping(lpbfifo.irq);
	dma_release_channel(lpbfifo.chan);
-
-	return 0;
 }

 static const struct of_device_id mpc512x_lpbfifo_match[] = {
···

 static struct platform_driver mpc512x_lpbfifo_driver = {
	.probe = mpc512x_lpbfifo_probe,
-	.remove = mpc512x_lpbfifo_remove,
+	.remove_new = mpc512x_lpbfifo_remove,
	.driver = {
		.name = DRV_NAME,
		.of_match_table = mpc512x_lpbfifo_match,
+7 -2
arch/powerpc/platforms/pseries/vas.c
···
	}

	task_ref = &win->vas_win.task_ref;
+	/*
+	 * VAS mmap (coproc_mmap()) and its fault handler
+	 * (vas_mmap_fault()) are called after holding mmap lock.
+	 * So hold mmap mutex after mmap_lock to avoid deadlock.
+	 */
+	mmap_write_lock(task_ref->mm);
	mutex_lock(&task_ref->mmap_mutex);
	vma = task_ref->vma;
	/*
···
	 */
	win->vas_win.status |= flag;

-	mmap_write_lock(task_ref->mm);
	/*
	 * vma is set in the original mapping. But this mapping
	 * is done with mmap() after the window is opened with ioctl.
···
	if (vma)
		zap_vma_pages(vma);

-	mmap_write_unlock(task_ref->mm);
	mutex_unlock(&task_ref->mmap_mutex);
+	mmap_write_unlock(task_ref->mm);
	/*
	 * Close VAS window in the hypervisor, but do not
	 * free vas_window struct since it may be reused
+1 -1
arch/s390/crypto/paes_s390.c
···
 {
	if (kb->key && kb->key != kb->keybuf
	    && kb->keylen > sizeof(kb->keybuf)) {
-		kfree(kb->key);
+		kfree_sensitive(kb->key);
		kb->key = NULL;
	}
 }
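The kfree() to kfree_sensitive() switch keeps clear-key material from surviving in recycled slab memory. kfree_sensitive() behaves roughly like the following sketch of its mm/slab_common.c implementation:

/* Approximate behaviour of kfree_sensitive(): wipe the full usable
 * size of the allocation before freeing it. memzero_explicit() is
 * needed because a plain memset() right before kfree() can be
 * optimised away as a dead store. */
static void kfree_sensitive_sketch(const void *p)
{
	size_t ks = ksize(p);	/* usable size; 0 for NULL */

	if (ks)
		memzero_explicit((void *)p, ks);
	kfree(p);
}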
+6 -2
arch/s390/kvm/pv.c
···
	u16 _rc, _rrc;
	int cc = 0;

-	/* Make sure the counter does not reach 0 before calling s390_uv_destroy_range */
-	atomic_inc(&kvm->mm->context.protected_count);
+	/*
+	 * Nothing to do if the counter was already 0. Otherwise make sure
+	 * the counter does not reach 0 before calling s390_uv_destroy_range.
+	 */
+	if (!atomic_inc_not_zero(&kvm->mm->context.protected_count))
+		return 0;

	*rc = 1;
	/* If the current VM is protected, destroy it */
+2
arch/s390/mm/fault.c
···
	vma_end_read(vma);
	if (!(fault & VM_FAULT_RETRY)) {
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		if (likely(!(fault & VM_FAULT_ERROR)))
+			fault = 0;
		goto out;
	}
	count_vm_vma_lock_event(VMA_LOCK_RETRY);
+1
arch/s390/mm/gmap.c
···
	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
	if (!page)
		return -ENOMEM;
+	page->index = 0;
	table = page_to_virt(page);
	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));

+1 -2
block/blk-core.c
···
 {
	if (!list_empty(&plug->cb_list))
		flush_plug_callbacks(plug, from_schedule);
-	if (!rq_list_empty(plug->mq_list))
-		blk_mq_flush_plug_list(plug, from_schedule);
+	blk_mq_flush_plug_list(plug, from_schedule);
	/*
	 * Unconditionally flush out cached requests, even if the unplug
	 * event came from schedule. Since we know hold references to the
+4
block/blk-iocost.c
···
	u64 seek_pages = 0;
	u64 cost = 0;

+	/* Can't calculate cost for empty bio */
+	if (!bio->bi_iter.bi_size)
+		goto out;
+
	switch (bio_op(bio)) {
	case REQ_OP_READ:
		coef_seqio	= ioc->params.lcoefs[LCOEF_RSEQIO];
+8 -1
block/blk-mq.c
···
 {
	struct request *rq;

-	if (rq_list_empty(plug->mq_list))
+	/*
+	 * We may have been called recursively midway through handling
+	 * plug->mq_list via a schedule() in the driver's queue_rq() callback.
+	 * To avoid mq_list changing under our feet, clear rq_count early and
+	 * bail out specifically if rq_count is 0 rather than checking
+	 * whether the mq_list is empty.
+	 */
+	if (plug->rq_count == 0)
		return;
	plug->rq_count = 0;

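The comment describes a general reentrancy guard: consume the pending count before walking the list, so a recursive call observes "nothing pending" and returns. A minimal standalone sketch, with hypothetical req/plug types standing in for the real block-layer structures:

struct req {
	struct req *next;
};

struct plug {
	struct req *list;
	unsigned int rq_count;
};

static void dispatch(struct req *rq);	/* may block and re-enter flush_plug() */

static void flush_plug(struct plug *plug)
{
	/* A recursive caller sees rq_count == 0 and returns, even
	 * though the list below is still being consumed. */
	if (plug->rq_count == 0)
		return;
	plug->rq_count = 0;

	while (plug->list) {
		struct req *rq = plug->list;

		plug->list = rq->next;
		dispatch(rq);
	}
}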
+9
drivers/accel/habanalabs/common/habanalabs.h
···
 {
 }

+static inline int hl_debugfs_device_init(struct hl_device *hdev)
+{
+	return 0;
+}
+
+static inline void hl_debugfs_device_fini(struct hl_device *hdev)
+{
+}
+
 static inline void hl_debugfs_add_device(struct hl_device *hdev)
 {
 }
+25 -14
drivers/accel/qaic/qaic_control.c
···
 #include <linux/mm.h>
 #include <linux/moduleparam.h>
 #include <linux/mutex.h>
+#include <linux/overflow.h>
 #include <linux/pci.h>
 #include <linux/scatterlist.h>
 #include <linux/types.h>
···
	if (in_trans->hdr.len % 8 != 0)
		return -EINVAL;

-	if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_EXT_MSG_LENGTH)
+	if (size_add(msg_hdr_len, in_trans->hdr.len) > QAIC_MANAGE_EXT_MSG_LENGTH)
		return -ENOSPC;

	trans_wrapper = add_wrapper(wrappers,
···
	}

	ret = get_user_pages_fast(xfer_start_addr, nr_pages, 0, page_list);
-	if (ret < 0 || ret != nr_pages) {
-		ret = -EFAULT;
+	if (ret < 0)
		goto free_page_list;
+	if (ret != nr_pages) {
+		nr_pages = ret;
+		ret = -EFAULT;
+		goto put_pages;
	}

	sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
···
	msg = &wrapper->msg;
	msg_hdr_len = le32_to_cpu(msg->hdr.len);

-	if (msg_hdr_len > (UINT_MAX - QAIC_MANAGE_EXT_MSG_LENGTH))
-		return -EINVAL;
-
	/* There should be enough space to hold at least one ASP entry. */
-	if (msg_hdr_len + sizeof(*out_trans) + sizeof(struct wire_addr_size_pair) >
+	if (size_add(msg_hdr_len, sizeof(*out_trans) + sizeof(struct wire_addr_size_pair)) >
	    QAIC_MANAGE_EXT_MSG_LENGTH)
		return -ENOMEM;

···
	msg = &wrapper->msg;
	msg_hdr_len = le32_to_cpu(msg->hdr.len);

-	if (msg_hdr_len + sizeof(*out_trans) > QAIC_MANAGE_MAX_MSG_LENGTH)
+	if (size_add(msg_hdr_len, sizeof(*out_trans)) > QAIC_MANAGE_MAX_MSG_LENGTH)
		return -ENOSPC;

	if (!in_trans->queue_size)
···
	msg = &wrapper->msg;
	msg_hdr_len = le32_to_cpu(msg->hdr.len);

-	if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_MAX_MSG_LENGTH)
+	if (size_add(msg_hdr_len, in_trans->hdr.len) > QAIC_MANAGE_MAX_MSG_LENGTH)
		return -ENOSPC;

	trans_wrapper = add_wrapper(wrappers, sizeof(*trans_wrapper));
···
	int ret;
	int i;

-	if (!user_msg->count) {
+	if (!user_msg->count ||
+	    user_msg->len < sizeof(*trans_hdr)) {
		ret = -EINVAL;
		goto out;
	}
···
	}

	for (i = 0; i < user_msg->count; ++i) {
-		if (user_len >= user_msg->len) {
+		if (user_len > user_msg->len - sizeof(*trans_hdr)) {
			ret = -EINVAL;
			break;
		}
		trans_hdr = (struct qaic_manage_trans_hdr *)(user_msg->data + user_len);
-		if (user_len + trans_hdr->len > user_msg->len) {
+		if (trans_hdr->len < sizeof(trans_hdr) ||
+		    size_add(user_len, trans_hdr->len) > user_msg->len) {
			ret = -EINVAL;
			break;
		}
···
	int ret;
	int i;

-	if (msg_hdr_len > QAIC_MANAGE_MAX_MSG_LENGTH)
+	if (msg_hdr_len < sizeof(*trans_hdr) ||
+	    msg_hdr_len > QAIC_MANAGE_MAX_MSG_LENGTH)
		return -EINVAL;

	user_msg->len = 0;
	user_msg->count = le32_to_cpu(msg->hdr.count);

	for (i = 0; i < user_msg->count; ++i) {
+		u32 hdr_len;
+
+		if (msg_len > msg_hdr_len - sizeof(*trans_hdr))
+			return -EINVAL;
+
		trans_hdr = (struct wire_trans_hdr *)(msg->data + msg_len);
-		if (msg_len + le32_to_cpu(trans_hdr->len) > msg_hdr_len)
+		hdr_len = le32_to_cpu(trans_hdr->len);
+		if (hdr_len < sizeof(*trans_hdr) ||
+		    size_add(msg_len, hdr_len) > msg_hdr_len)
			return -EINVAL;

		switch (le32_to_cpu(trans_hdr->type)) {
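These bounds checks lean on size_add() from <linux/overflow.h>, which saturates rather than wraps, so an attacker-supplied length can no longer overflow the sum back under the limit. Roughly:

/* Approximation of size_add(): clamp to SIZE_MAX on overflow so a
 * subsequent "> LIMIT" bounds check fails closed instead of passing
 * with a wrapped-around small value. */
static inline size_t size_add_sketch(size_t a, size_t b)
{
	size_t sum;

	if (__builtin_add_overflow(a, b, &sum))
		return SIZE_MAX;
	return sum;
}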
+2
drivers/ata/pata_parport/aten.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("ATEN EH-100 parallel port IDE adapter protocol driver");
 module_pata_parport_driver(aten);
+2
drivers/ata/pata_parport/bpck.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("MicroSolutions BACKPACK parallel port IDE adapter protocol driver");
 module_pata_parport_driver(bpck);
+2 -1
drivers/ata/pata_parport/bpck6.c
···

 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Micro Solutions Inc.");
-MODULE_DESCRIPTION("BACKPACK Protocol module, compatible with PARIDE");
+MODULE_DESCRIPTION("Micro Solutions BACKPACK parallel port IDE adapter "
+		   "(version 6 drives) protocol driver");
 module_pata_parport_driver(bpck6);
+2
drivers/ata/pata_parport/comm.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("DataStor Commuter parallel port IDE adapter protocol driver");
 module_pata_parport_driver(comm);
+2
drivers/ata/pata_parport/dstr.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("DataStor EP2000 parallel port IDE adapter protocol driver");
 module_pata_parport_driver(dstr);
+3
drivers/ata/pata_parport/epat.c
···
 }

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Shuttle Technologies EPAT parallel port IDE adapter "
+		   "protocol driver");
 module_init(epat_init)
 module_exit(epat_exit)
+3
drivers/ata/pata_parport/epia.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Shuttle Technologies EPIA parallel port IDE adapter "
+		   "protocol driver");
 module_pata_parport_driver(epia);
+3
drivers/ata/pata_parport/fit2.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Fidelity International Technology parallel port IDE adapter"
+		   "(older models) protocol driver");
 module_pata_parport_driver(fit2);
+3
drivers/ata/pata_parport/fit3.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Fidelity International Technology parallel port IDE adapter"
+		   "(newer models) protocol driver");
 module_pata_parport_driver(fit3);
+2
drivers/ata/pata_parport/friq.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Freecom IQ parallel port IDE adapter protocol driver");
 module_pata_parport_driver(friq);
+2
drivers/ata/pata_parport/frpw.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Freecom Power parallel port IDE adapter protocol driver");
 module_pata_parport_driver(frpw);
+3
drivers/ata/pata_parport/kbic.c
···
 }

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("KingByte Information Systems KBIC-951A and KBIC-971A "
+		   "parallel port IDE adapter protocol driver");
 module_init(kbic_init)
 module_exit(kbic_exit)
+2
drivers/ata/pata_parport/ktti.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("KT Technology parallel port IDE adapter protocol driver");
 module_pata_parport_driver(ktti);
+2
drivers/ata/pata_parport/on20.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Onspec 90c20 parallel port IDE adapter protocol driver");
 module_pata_parport_driver(on20);
+2
drivers/ata/pata_parport/on26.c
···
 };

 MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Grant R. Guenther <grant@torque.net>");
+MODULE_DESCRIPTION("Onspec 90c26 parallel port IDE adapter protocol driver");
 module_pata_parport_driver(on26);
+4
drivers/base/regmap/regcache-rbtree.c
···
	unsigned int start, end;
	int ret;

+	map->async = true;
+
	rbtree_ctx = map->cache;
	for (node = rb_first(&rbtree_ctx->root); node; node = rb_next(node)) {
		rbnode = rb_entry(node, struct regcache_rbtree_node, node);
···
		if (ret != 0)
			return ret;
	}

+	map->async = false;

	return regmap_async_complete(map);
 }
-3
drivers/base/regmap/regcache.c
···
	if (!map->cache_dirty)
		goto out;

-	map->async = true;
-
	/* Apply any patch first */
	map->cache_bypass = true;
	for (i = 0; i < map->patch_regs; i++) {
···

 out:
	/* Restore the bypass state */
-	map->async = false;
	map->cache_bypass = bypass;
	map->no_sync_defaults = false;
	map->unlock(map->lock_arg);
+4 -4
drivers/base/regmap/regmap-i2c.c
···
 static const struct regmap_bus regmap_i2c_smbus_i2c_block = {
	.write = regmap_i2c_smbus_i2c_write,
	.read = regmap_i2c_smbus_i2c_read,
-	.max_raw_read = I2C_SMBUS_BLOCK_MAX,
-	.max_raw_write = I2C_SMBUS_BLOCK_MAX,
+	.max_raw_read = I2C_SMBUS_BLOCK_MAX - 1,
+	.max_raw_write = I2C_SMBUS_BLOCK_MAX - 1,
 };

 static int regmap_i2c_smbus_i2c_write_reg16(void *context, const void *data,
···
 static const struct regmap_bus regmap_i2c_smbus_i2c_block_reg16 = {
	.write = regmap_i2c_smbus_i2c_write_reg16,
	.read = regmap_i2c_smbus_i2c_read_reg16,
-	.max_raw_read = I2C_SMBUS_BLOCK_MAX,
-	.max_raw_write = I2C_SMBUS_BLOCK_MAX,
+	.max_raw_read = I2C_SMBUS_BLOCK_MAX - 2,
+	.max_raw_write = I2C_SMBUS_BLOCK_MAX - 2,
 };

 static const struct regmap_bus *regmap_get_i2c_bus(struct i2c_client *i2c,
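The new limits fall out of SMBus framing: the register address travels inside the same I2C_SMBUS_BLOCK_MAX-byte block as the payload, so the usable payload shrinks by the register width. A worked example of the arithmetic, assuming the standard I2C_SMBUS_BLOCK_MAX of 32 from <uapi/linux/i2c.h>:

/*  8-bit register devices: 32 - 1 = 31 payload bytes per transfer
 * 16-bit register devices: 32 - 2 = 30 payload bytes per transfer */
static_assert(I2C_SMBUS_BLOCK_MAX - 1 == 31);
static_assert(I2C_SMBUS_BLOCK_MAX - 2 == 30);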
+5
drivers/base/regmap/regmap-kunit.c
···
	int i;
	struct reg_default *defaults;

+	config->disable_locking = config->cache_type == REGCACHE_RBTREE ||
+					config->cache_type == REGCACHE_MAPLE;
+
	buf = kmalloc(size, GFP_KERNEL);
	if (!buf)
		return ERR_PTR(-ENOMEM);
···

	config->cache_type = test_type->cache_type;
	config->val_format_endian = test_type->val_endian;
+	config->disable_locking = config->cache_type == REGCACHE_RBTREE ||
+					config->cache_type == REGCACHE_MAPLE;

	buf = kmalloc(size, GFP_KERNEL);
	if (!buf)
+1 -1
drivers/base/regmap/regmap-spi-avmm.c
···
	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
	.max_raw_read = SPI_AVMM_VAL_SIZE * MAX_READ_CNT,
-	.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
+	.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
	.free_context = spi_avmm_bridge_ctx_free,
 };

+2 -4
drivers/base/regmap/regmap.c
···
	size_t val_count = val_len / val_bytes;
	size_t chunk_count, chunk_bytes;
	size_t chunk_regs = val_count;
-	size_t max_data = map->max_raw_write - map->format.reg_bytes -
-			map->format.pad_bytes;
	int ret, i;

	if (!val_count)
···

	if (map->use_single_write)
		chunk_regs = 1;
-	else if (map->max_raw_write && val_len > max_data)
-		chunk_regs = max_data / val_bytes;
+	else if (map->max_raw_write && val_len > map->max_raw_write)
+		chunk_regs = map->max_raw_write / val_bytes;

	chunk_count = val_count / chunk_regs;
	chunk_bytes = chunk_regs * val_bytes;
+38 -2
drivers/block/loop.c
···
 /*
  * If max_loop is specified, create that many devices upfront.
  * This also becomes a hard limit. If max_loop is not specified,
+ * the default isn't a hard limit (as before commit 85c50197716c
+ * changed the default value from 0 for max_loop=0 reasons), just
  * create CONFIG_BLK_DEV_LOOP_MIN_COUNT loop devices at module
  * init time. Loop devices can be requested on-demand with the
  * /dev/loop-control interface, or be instantiated by accessing
  * a 'dead' device node.
  */
 static int max_loop = CONFIG_BLK_DEV_LOOP_MIN_COUNT;
-module_param(max_loop, int, 0444);
+
+#ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
+static bool max_loop_specified;
+
+static int max_loop_param_set_int(const char *val,
+				  const struct kernel_param *kp)
+{
+	int ret;
+
+	ret = param_set_int(val, kp);
+	if (ret < 0)
+		return ret;
+
+	max_loop_specified = true;
+	return 0;
+}
+
+static const struct kernel_param_ops max_loop_param_ops = {
+	.set = max_loop_param_set_int,
+	.get = param_get_int,
+};
+
+module_param_cb(max_loop, &max_loop_param_ops, &max_loop, 0444);
 MODULE_PARM_DESC(max_loop, "Maximum number of loop devices");
+#else
+module_param(max_loop, int, 0444);
+MODULE_PARM_DESC(max_loop, "Initial number of loop devices");
+#endif
+
 module_param(max_part, int, 0444);
 MODULE_PARM_DESC(max_part, "Maximum number of partitions per loop device");
···
	put_disk(lo->lo_disk);
 }

+#ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
 static void loop_probe(dev_t dev)
 {
	int idx = MINOR(dev) >> part_shift;

-	if (max_loop && idx >= max_loop)
+	if (max_loop_specified && max_loop && idx >= max_loop)
		return;
	loop_add(idx);
 }
+#else
+#define loop_probe NULL
+#endif /* !CONFIG_BLOCK_LEGACY_AUTOLOAD */

 static int loop_control_remove(int idx)
 {
···
 static int __init max_loop_setup(char *str)
 {
	max_loop = simple_strtol(str, NULL, 0);
+#ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
+	max_loop_specified = true;
+#endif
	return 1;
 }

+1
drivers/bluetooth/btusb.c
···
	BT_DBG("intf %p id %p", intf, id);

	if ((id->driver_info & BTUSB_IFNUM_2) &&
+	    (intf->cur_altsetting->desc.bInterfaceNumber != 0) &&
	    (intf->cur_altsetting->desc.bInterfaceNumber != 2))
		return -ENODEV;

+7
drivers/char/tpm/tpm-chip.c
···
 *	6.x.y.z series: 6.0.18.6 +
 *	3.x.y.z series: 3.57.y.5 +
 */
+#ifdef CONFIG_X86
 static bool tpm_amd_is_rng_defective(struct tpm_chip *chip)
 {
	u32 val1, val2;
···

	return true;
 }
+#else
+static inline bool tpm_amd_is_rng_defective(struct tpm_chip *chip)
+{
+	return false;
+}
+#endif /* CONFIG_X86 */

 static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 {
+11 -8
drivers/char/tpm/tpm_crb.c
···
	u32 rsp_size;
	int ret;

-	INIT_LIST_HEAD(&acpi_resource_list);
-	ret = acpi_dev_get_resources(device, &acpi_resource_list,
-				     crb_check_resource, iores_array);
-	if (ret < 0)
-		return ret;
-	acpi_dev_free_resource_list(&acpi_resource_list);
-
-	/* Pluton doesn't appear to define ACPI memory regions */
+	/*
+	 * Pluton sometimes does not define ACPI memory regions.
+	 * Mapping is then done in crb_map_pluton
+	 */
	if (priv->sm != ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON) {
+		INIT_LIST_HEAD(&acpi_resource_list);
+		ret = acpi_dev_get_resources(device, &acpi_resource_list,
+					     crb_check_resource, iores_array);
+		if (ret < 0)
+			return ret;
+		acpi_dev_free_resource_list(&acpi_resource_list);
+
		if (resource_type(iores_array) != IORESOURCE_MEM) {
			dev_err(dev, FW_BUG "TPM2 ACPI table does not define a memory resource\n");
			return -EINVAL;
+25
drivers/char/tpm/tpm_tis.c
···
 static const struct dmi_system_id tpm_tis_dmi_table[] = {
	{
		.callback = tpm_tis_disable_irq,
+		.ident = "Framework Laptop (12th Gen Intel Core)",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"),
+		},
+	},
+	{
+		.callback = tpm_tis_disable_irq,
+		.ident = "Framework Laptop (13th Gen Intel Core)",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"),
+		},
+	},
+	{
+		.callback = tpm_tis_disable_irq,
		.ident = "ThinkPad T490s",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
···
	},
	{
		.callback = tpm_tis_disable_irq,
+		.ident = "ThinkPad L590",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L590"),
+		},
+	},
+	{
+		.callback = tpm_tis_disable_irq,
		.ident = "UPX-TGL",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "UPX-TGL"),
		},
	},
	{}
+88 -15
drivers/char/tpm/tpm_tis_core.c
··· 24 24 #include <linux/wait.h> 25 25 #include <linux/acpi.h> 26 26 #include <linux/freezer.h> 27 + #include <linux/dmi.h> 27 28 #include "tpm.h" 28 29 #include "tpm_tis_core.h" 30 + 31 + #define TPM_TIS_MAX_UNHANDLED_IRQS 1000 29 32 30 33 static void tpm_tis_clkrun_enable(struct tpm_chip *chip, bool value); 31 34 ··· 471 468 return rc; 472 469 } 473 470 474 - static void disable_interrupts(struct tpm_chip *chip) 471 + static void __tpm_tis_disable_interrupts(struct tpm_chip *chip) 475 472 { 476 473 struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 477 - u32 intmask; 478 - int rc; 474 + u32 int_mask = 0; 475 + 476 + tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &int_mask); 477 + int_mask &= ~TPM_GLOBAL_INT_ENABLE; 478 + tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), int_mask); 479 + 480 + chip->flags &= ~TPM_CHIP_FLAG_IRQ; 481 + } 482 + 483 + static void tpm_tis_disable_interrupts(struct tpm_chip *chip) 484 + { 485 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 479 486 480 487 if (priv->irq == 0) 481 488 return; 482 489 483 - rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask); 484 - if (rc < 0) 485 - intmask = 0; 486 - 487 - intmask &= ~TPM_GLOBAL_INT_ENABLE; 488 - rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); 490 + __tpm_tis_disable_interrupts(chip); 489 491 490 492 devm_free_irq(chip->dev.parent, priv->irq, chip); 491 493 priv->irq = 0; 492 - chip->flags &= ~TPM_CHIP_FLAG_IRQ; 493 494 } 494 495 495 496 /* ··· 559 552 if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags)) 560 553 tpm_msleep(1); 561 554 if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags)) 562 - disable_interrupts(chip); 555 + tpm_tis_disable_interrupts(chip); 563 556 set_bit(TPM_TIS_IRQ_TESTED, &priv->flags); 564 557 return rc; 565 558 } ··· 759 752 return status == TPM_STS_COMMAND_READY; 760 753 } 761 754 755 + static irqreturn_t tpm_tis_revert_interrupts(struct tpm_chip *chip) 756 + { 757 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 758 + const char *product; 759 + const char *vendor; 760 + 761 + dev_warn(&chip->dev, FW_BUG 762 + "TPM interrupt storm detected, polling instead\n"); 763 + 764 + vendor = dmi_get_system_info(DMI_SYS_VENDOR); 765 + product = dmi_get_system_info(DMI_PRODUCT_VERSION); 766 + 767 + if (vendor && product) { 768 + dev_info(&chip->dev, 769 + "Consider adding the following entry to tpm_tis_dmi_table:\n"); 770 + dev_info(&chip->dev, "\tDMI_SYS_VENDOR: %s\n", vendor); 771 + dev_info(&chip->dev, "\tDMI_PRODUCT_VERSION: %s\n", product); 772 + } 773 + 774 + if (tpm_tis_request_locality(chip, 0) != 0) 775 + return IRQ_NONE; 776 + 777 + __tpm_tis_disable_interrupts(chip); 778 + tpm_tis_relinquish_locality(chip, 0); 779 + 780 + schedule_work(&priv->free_irq_work); 781 + 782 + return IRQ_HANDLED; 783 + } 784 + 785 + static irqreturn_t tpm_tis_update_unhandled_irqs(struct tpm_chip *chip) 786 + { 787 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 788 + irqreturn_t irqret = IRQ_HANDLED; 789 + 790 + if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) 791 + return IRQ_HANDLED; 792 + 793 + if (time_after(jiffies, priv->last_unhandled_irq + HZ/10)) 794 + priv->unhandled_irqs = 1; 795 + else 796 + priv->unhandled_irqs++; 797 + 798 + priv->last_unhandled_irq = jiffies; 799 + 800 + if (priv->unhandled_irqs > TPM_TIS_MAX_UNHANDLED_IRQS) 801 + irqret = tpm_tis_revert_interrupts(chip); 802 + 803 + return irqret; 804 + } 805 + 762 806 static irqreturn_t tis_int_handler(int dummy, void *dev_id) 763 807 { 764 808 struct tpm_chip *chip = dev_id; 
··· 819 761 820 762 rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt); 821 763 if (rc < 0) 822 - return IRQ_NONE; 764 + goto err; 823 765 824 766 if (interrupt == 0) 825 - return IRQ_NONE; 767 + goto err; 826 768 827 769 set_bit(TPM_TIS_IRQ_TESTED, &priv->flags); 828 770 if (interrupt & TPM_INTF_DATA_AVAIL_INT) ··· 838 780 rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), interrupt); 839 781 tpm_tis_relinquish_locality(chip, 0); 840 782 if (rc < 0) 841 - return IRQ_NONE; 783 + goto err; 842 784 843 785 tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt); 844 786 return IRQ_HANDLED; 787 + 788 + err: 789 + return tpm_tis_update_unhandled_irqs(chip); 845 790 } 846 791 847 792 static void tpm_tis_gen_interrupt(struct tpm_chip *chip) ··· 865 804 chip->flags &= ~TPM_CHIP_FLAG_IRQ; 866 805 } 867 806 807 + static void tpm_tis_free_irq_func(struct work_struct *work) 808 + { 809 + struct tpm_tis_data *priv = container_of(work, typeof(*priv), free_irq_work); 810 + struct tpm_chip *chip = priv->chip; 811 + 812 + devm_free_irq(chip->dev.parent, priv->irq, chip); 813 + priv->irq = 0; 814 + } 815 + 868 816 /* Register the IRQ and issue a command that will cause an interrupt. If an 869 817 * irq is seen then leave the chip setup for IRQ operation, otherwise reverse 870 818 * everything and leave in polling mode. Returns 0 on success. ··· 886 816 int rc; 887 817 u32 int_status; 888 818 819 + INIT_WORK(&priv->free_irq_work, tpm_tis_free_irq_func); 889 820 890 821 rc = devm_request_threaded_irq(chip->dev.parent, irq, NULL, 891 822 tis_int_handler, IRQF_ONESHOT | flags, ··· 989 918 interrupt = 0; 990 919 991 920 tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt); 921 + flush_work(&priv->free_irq_work); 992 922 993 923 tpm_tis_clkrun_enable(chip, false); 994 924 ··· 1093 1021 chip->timeout_b = msecs_to_jiffies(TIS_TIMEOUT_B_MAX); 1094 1022 chip->timeout_c = msecs_to_jiffies(TIS_TIMEOUT_C_MAX); 1095 1023 chip->timeout_d = msecs_to_jiffies(TIS_TIMEOUT_D_MAX); 1024 + priv->chip = chip; 1096 1025 priv->timeout_min = TPM_TIMEOUT_USECS_MIN; 1097 1026 priv->timeout_max = TPM_TIMEOUT_USECS_MAX; 1098 1027 priv->phy_ops = phy_ops; ··· 1252 1179 rc = tpm_tis_request_locality(chip, 0); 1253 1180 if (rc < 0) 1254 1181 goto out_err; 1255 - disable_interrupts(chip); 1182 + tpm_tis_disable_interrupts(chip); 1256 1183 tpm_tis_relinquish_locality(chip, 0); 1257 1184 } 1258 1185 }
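The heart of the new tpm_tis_core.c handling is a windowed counter: every interrupt the handler cannot attribute to the TPM either restarts or increments unhandled_irqs, depending on whether more than HZ/10 (100 ms) has passed since the previous one, and crossing TPM_TIS_MAX_UNHANDLED_IRQS reverts the chip to polling. Freeing the IRQ is deferred to free_irq_work because devm_free_irq() waits for handlers to finish and so cannot be called from the interrupt thread it would be freeing. A minimal userspace sketch of the same burst detection (the millisecond test harness is hypothetical):

#include <stdbool.h>
#include <stdio.h>

#define MAX_UNHANDLED 1000 /* mirrors TPM_TIS_MAX_UNHANDLED_IRQS */
#define WINDOW_MS 100      /* mirrors HZ/10 */

static unsigned long last_unhandled_ms;
static unsigned int unhandled;

/* Returns true once a burst of unhandled events exceeds the threshold. */
static bool unhandled_event(unsigned long now_ms)
{
        if (now_ms - last_unhandled_ms > WINDOW_MS)
                unhandled = 1;  /* quiet period passed: restart the count */
        else
                unhandled++;    /* still inside the burst */
        last_unhandled_ms = now_ms;
        return unhandled > MAX_UNHANDLED;
}

int main(void)
{
        bool storm = false;

        /* 2000 spurious events 1 ms apart must trip the detector. */
        for (unsigned long t = 0; t < 2000 && !storm; t++)
                storm = unhandled_event(t);
        printf("storm detected: %s\n", storm ? "yes" : "no");
        return 0;
}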
+4
drivers/char/tpm/tpm_tis_core.h
··· 91 91 }; 92 92 93 93 struct tpm_tis_data { 94 + struct tpm_chip *chip; 94 95 u16 manufacturer_id; 95 96 struct mutex locality_count_mutex; 96 97 unsigned int locality_count; 97 98 int locality; 98 99 int irq; 100 + struct work_struct free_irq_work; 101 + unsigned long last_unhandled_irq; 102 + unsigned int unhandled_irqs; 99 103 unsigned int int_mask; 100 104 unsigned long flags; 101 105 void __iomem *ilb_base_addr;
+36 -21
drivers/char/tpm/tpm_tis_i2c.c
··· 189 189 int ret; 190 190 191 191 for (i = 0; i < TPM_RETRY; i++) { 192 - /* write register */ 193 - msg.len = sizeof(reg); 194 - msg.buf = &reg; 195 - msg.flags = 0; 196 - ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg); 197 - if (ret < 0) 198 - return ret; 192 + u16 read = 0; 199 193 200 - /* read data */ 201 - msg.buf = result; 202 - msg.len = len; 203 - msg.flags = I2C_M_RD; 204 - ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg); 205 - if (ret < 0) 206 - return ret; 194 + while (read < len) { 195 + /* write register */ 196 + msg.len = sizeof(reg); 197 + msg.buf = &reg; 198 + msg.flags = 0; 199 + ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg); 200 + if (ret < 0) 201 + return ret; 202 + 203 + /* read data */ 204 + msg.buf = result + read; 205 + msg.len = len - read; 206 + msg.flags = I2C_M_RD; 207 + if (msg.len > I2C_SMBUS_BLOCK_MAX) 208 + msg.len = I2C_SMBUS_BLOCK_MAX; 209 + ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg); 210 + if (ret < 0) 211 + return ret; 212 + read += msg.len; 213 + } 207 214 208 215 ret = tpm_tis_i2c_sanity_check_read(reg, len, result); 209 216 if (ret == 0) ··· 230 223 struct i2c_msg msg = { .addr = phy->i2c_client->addr }; 231 224 u8 reg = tpm_tis_i2c_address_to_register(addr); 232 225 int ret; 226 + u16 wrote = 0; 233 227 234 228 if (len > TPM_BUFSIZE - 1) 235 229 return -EIO; 236 230 237 - /* write register and data in one go */ 238 231 phy->io_buf[0] = reg; 239 - memcpy(phy->io_buf + sizeof(reg), value, len); 240 - 241 - msg.len = sizeof(reg) + len; 242 232 msg.buf = phy->io_buf; 243 - ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg); 244 - if (ret < 0) 245 - return ret; 233 + while (wrote < len) { 234 + /* write register and data in one go */ 235 + msg.len = sizeof(reg) + len - wrote; 236 + if (msg.len > I2C_SMBUS_BLOCK_MAX) 237 + msg.len = I2C_SMBUS_BLOCK_MAX; 238 + 239 + memcpy(phy->io_buf + sizeof(reg), value + wrote, 240 + msg.len - sizeof(reg)); 241 + 242 + ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg); 243 + if (ret < 0) 244 + return ret; 245 + wrote += msg.len - sizeof(reg); 246 + } 246 247 247 248 return 0; 248 249 }
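Both the read and write paths in tpm_tis_i2c.c now move at most I2C_SMBUS_BLOCK_MAX (32) bytes per transaction and loop until the whole buffer has been transferred; controllers restricted to SMBus block sizes would otherwise truncate large TPM transfers. The write loop additionally reserves one byte per chunk for the register address, which is why each pass copies only msg.len - sizeof(reg) bytes of payload. A standalone sketch of the chunking arithmetic, with a stub xfer() standing in for the bus transaction:

#include <stdio.h>

#define BLOCK_MAX 32 /* mirrors I2C_SMBUS_BLOCK_MAX */

/* Stub transport: a real bus would reject anything above BLOCK_MAX. */
static int xfer(unsigned char *buf, unsigned int len)
{
        return len > BLOCK_MAX ? -1 : 0;
}

/* Read `len` bytes in BLOCK_MAX-sized slices, as the patch does. */
static int chunked_read(unsigned char *dst, unsigned int len)
{
        unsigned int done = 0;

        while (done < len) {
                unsigned int n = len - done;

                if (n > BLOCK_MAX)
                        n = BLOCK_MAX;
                if (xfer(dst + done, n) < 0)
                        return -1;
                done += n;
        }
        return 0;
}

int main(void)
{
        unsigned char buf[100];

        printf("chunked read: %s\n", chunked_read(buf, sizeof(buf)) ? "fail" : "ok");
        return 0;
}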
+8
drivers/char/tpm/tpm_tis_spi_main.c
··· 136 136 } 137 137 138 138 exit: 139 + if (ret < 0) { 140 + /* Deactivate chip select */ 141 + memset(&spi_xfer, 0, sizeof(spi_xfer)); 142 + spi_message_init(&m); 143 + spi_message_add_tail(&spi_xfer, &m); 144 + spi_sync_locked(phy->spi_device, &m); 145 + } 146 + 139 147 spi_bus_unlock(phy->spi_device->master); 140 148 return ret; 141 149 }
+7 -23
drivers/char/tpm/tpm_vtpm_proxy.c
··· 683 683 .fops = &vtpmx_fops, 684 684 }; 685 685 686 - static int vtpmx_init(void) 687 - { 688 - return misc_register(&vtpmx_miscdev); 689 - } 690 - 691 - static void vtpmx_cleanup(void) 692 - { 693 - misc_deregister(&vtpmx_miscdev); 694 - } 695 - 696 686 static int __init vtpm_module_init(void) 697 687 { 698 688 int rc; 699 689 700 - rc = vtpmx_init(); 701 - if (rc) { 702 - pr_err("couldn't create vtpmx device\n"); 703 - return rc; 704 - } 705 - 706 690 workqueue = create_workqueue("tpm-vtpm"); 707 691 if (!workqueue) { 708 692 pr_err("couldn't create workqueue\n"); 709 - rc = -ENOMEM; 710 - goto err_vtpmx_cleanup; 693 + return -ENOMEM; 711 694 } 712 695 713 - return 0; 714 - 715 - err_vtpmx_cleanup: 716 - vtpmx_cleanup(); 696 + rc = misc_register(&vtpmx_miscdev); 697 + if (rc) { 698 + pr_err("couldn't create vtpmx device\n"); 699 + destroy_workqueue(workqueue); 700 + } 717 701 718 702 return rc; 719 703 } ··· 705 721 static void __exit vtpm_module_exit(void) 706 722 { 707 723 destroy_workqueue(workqueue); 708 - vtpmx_cleanup(); 724 + misc_deregister(&vtpmx_miscdev); 709 725 } 710 726 711 727 module_init(vtpm_module_init);
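The vtpm_proxy rework also removes an ordering hazard: the misc device is now registered only after the workqueue exists, so an open of /dev/vtpmx cannot race with a not-yet-created workqueue, and a late registration failure unwinds the workqueue. A generic sketch of this acquire-then-expose idiom (function names hypothetical):

#include <stdio.h>

static int setup_queue(void) { puts("queue created"); return 0; }
static void destroy_queue(void) { puts("queue destroyed"); }
static int register_dev(void) { puts("device visible"); return -1; /* simulate failure */ }

/* Acquire internal resources first, expose the user interface last. */
static int init_sketch(void)
{
        int rc = setup_queue();

        if (rc)
                return rc;

        rc = register_dev();
        if (rc)
                destroy_queue(); /* unwind what the interface depends on */
        return rc;
}

int main(void)
{
        printf("init: %d\n", init_sketch());
        return 0;
}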
+9 -4
drivers/dma-buf/dma-resv.c
··· 571 571 dma_resv_for_each_fence_unlocked(&cursor, fence) { 572 572 573 573 if (dma_resv_iter_is_restarted(&cursor)) { 574 + struct dma_fence **new_fences; 574 575 unsigned int count; 575 576 576 577 while (*num_fences) ··· 580 579 count = cursor.num_fences + 1; 581 580 582 581 /* Eventually re-allocate the array */ 583 - *fences = krealloc_array(*fences, count, 584 - sizeof(void *), 585 - GFP_KERNEL); 586 - if (count && !*fences) { 582 + new_fences = krealloc_array(*fences, count, 583 + sizeof(void *), 584 + GFP_KERNEL); 585 + if (count && !new_fences) { 586 + kfree(*fences); 587 + *fences = NULL; 588 + *num_fences = 0; 587 589 dma_resv_iter_end(&cursor); 588 590 return -ENOMEM; 589 591 } 592 + *fences = new_fences; 590 593 } 591 594 592 595 (*fences)[(*num_fences)++] = dma_fence_get(fence);
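The dma-resv change fixes the classic realloc() pitfall: assigning the result straight back to the only pointer leaks the original block when growth fails. The patch keeps the old array reachable until the new allocation is known good, and on failure frees it and zeroes the caller's pointers so nothing can be double-freed later. The same rule in plain C:

#include <stdlib.h>

/* Grow *arr to `count` elements without leaking on failure. */
static int grow(int **arr, unsigned int *num, unsigned int count)
{
        int *tmp = realloc(*arr, count * sizeof(**arr));

        if (count && !tmp) {
                free(*arr);  /* the old block is still ours to release */
                *arr = NULL; /* leave the caller a consistent state */
                *num = 0;
                return -1;   /* -ENOMEM in the kernel version */
        }
        *arr = tmp;
        return 0;
}

int main(void)
{
        int *a = NULL;
        unsigned int n = 0;

        if (grow(&a, &n, 16) == 0)
                a[n++] = 42;
        free(a);
        return 0;
}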
+15 -11
drivers/gpio/gpio-mvebu.c
··· 874 874 875 875 spin_lock_init(&mvpwm->lock); 876 876 877 - return pwmchip_add(&mvpwm->chip); 877 + return devm_pwmchip_add(dev, &mvpwm->chip); 878 878 } 879 879 880 880 #ifdef CONFIG_DEBUG_FS ··· 1112 1112 return 0; 1113 1113 } 1114 1114 1115 + static void mvebu_gpio_remove_irq_domain(void *data) 1116 + { 1117 + struct irq_domain *domain = data; 1118 + 1119 + irq_domain_remove(domain); 1120 + } 1121 + 1115 1122 static int mvebu_gpio_probe(struct platform_device *pdev) 1116 1123 { 1117 1124 struct mvebu_gpio_chip *mvchip; ··· 1250 1243 if (!mvchip->domain) { 1251 1244 dev_err(&pdev->dev, "couldn't allocate irq domain %s (DT).\n", 1252 1245 mvchip->chip.label); 1253 - err = -ENODEV; 1254 - goto err_pwm; 1246 + return -ENODEV; 1255 1247 } 1248 + 1249 + err = devm_add_action_or_reset(&pdev->dev, mvebu_gpio_remove_irq_domain, 1250 + mvchip->domain); 1251 + if (err) 1252 + return err; 1256 1253 1257 1254 err = irq_alloc_domain_generic_chips( 1258 1255 mvchip->domain, ngpios, 2, np->name, handle_level_irq, ··· 1264 1253 if (err) { 1265 1254 dev_err(&pdev->dev, "couldn't allocate irq chips %s (DT).\n", 1266 1255 mvchip->chip.label); 1267 - goto err_domain; 1256 + return err; 1268 1257 } 1269 1258 1270 1259 /* ··· 1304 1293 } 1305 1294 1306 1295 return 0; 1307 - 1308 - err_domain: 1309 - irq_domain_remove(mvchip->domain); 1310 - err_pwm: 1311 - pwmchip_remove(&mvchip->mvpwm->chip); 1312 - 1313 - return err; 1314 1296 } 1315 1297 1316 1298 static struct platform_driver mvebu_gpio_driver = {
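gpio-mvebu deletes its hand-rolled error labels by handing every resource to the device-managed layer: devm_pwmchip_add() for the PWM chip and a devm_add_action_or_reset() callback for the IRQ domain. The devm core then runs the registered cleanups in reverse order on probe failure or unbind, which is exactly what the removed err_domain/err_pwm labels did by hand. A userspace sketch of that cleanup-stack idea (the kernel APIs named above are real; this harness is not):

#include <stdio.h>

#define MAX_ACTIONS 8

static void (*actions[MAX_ACTIONS])(void *);
static void *action_data[MAX_ACTIONS];
static int n_actions;

/* Register a cleanup; on failure run it immediately ("_or_reset"). */
static int add_action(void (*fn)(void *), void *data)
{
        if (n_actions == MAX_ACTIONS) {
                fn(data);
                return -1;
        }
        actions[n_actions] = fn;
        action_data[n_actions] = data;
        n_actions++;
        return 0;
}

/* What the devm core does at unbind: newest cleanup first. */
static void release_all(void)
{
        while (n_actions) {
                n_actions--;
                actions[n_actions](action_data[n_actions]);
        }
}

static void free_res(void *name) { printf("freeing %s\n", (char *)name); }

int main(void)
{
        add_action(free_res, "pwm chip");
        add_action(free_res, "irq domain");
        release_all(); /* prints "irq domain" first, then "pwm chip" */
        return 0;
}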
+3 -3
drivers/gpio/gpio-tps68470.c
··· 91 91 struct tps68470_gpio_data *tps68470_gpio = gpiochip_get_data(gc); 92 92 struct regmap *regmap = tps68470_gpio->tps68470_regmap; 93 93 94 + /* Set the initial value */ 95 + tps68470_gpio_set(gc, offset, value); 96 + 94 97 /* rest are always outputs */ 95 98 if (offset >= TPS68470_N_REGULAR_GPIO) 96 99 return 0; 97 - 98 - /* Set the initial value */ 99 - tps68470_gpio_set(gc, offset, value); 100 100 101 101 return regmap_update_bits(regmap, TPS68470_GPIO_CTL_REG_A(offset), 102 102 TPS68470_GPIO_MODE_MASK,
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1709 1709 alloc_flags |= (flags & KFD_IOC_ALLOC_MEM_FLAGS_PUBLIC) ? 1710 1710 AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED : 0; 1711 1711 } 1712 - xcp_id = fpriv->xcp_id == ~0 ? 0 : fpriv->xcp_id; 1712 + xcp_id = fpriv->xcp_id == AMDGPU_XCP_NO_PARTITION ? 1713 + 0 : fpriv->xcp_id; 1713 1714 } else if (flags & KFD_IOC_ALLOC_MEM_FLAGS_GTT) { 1714 1715 domain = alloc_domain = AMDGPU_GEM_DOMAIN_GTT; 1715 1716 alloc_flags = 0;
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 1229 1229 pasid = 0; 1230 1230 } 1231 1231 1232 - r = amdgpu_vm_init(adev, &fpriv->vm); 1232 + r = amdgpu_xcp_open_device(adev, fpriv, file_priv); 1233 1233 if (r) 1234 1234 goto error_pasid; 1235 1235 1236 - r = amdgpu_xcp_open_device(adev, fpriv, file_priv); 1236 + r = amdgpu_vm_init(adev, &fpriv->vm, fpriv->xcp_id); 1237 1237 if (r) 1238 - goto error_vm; 1238 + goto error_pasid; 1239 1239 1240 1240 r = amdgpu_vm_set_pasid(adev, &fpriv->vm, pasid); 1241 1241 if (r)
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
··· 1382 1382 goto error_pasid; 1383 1383 } 1384 1384 1385 - r = amdgpu_vm_init(adev, vm); 1385 + r = amdgpu_vm_init(adev, vm, -1); 1386 1386 if (r) { 1387 1387 DRM_ERROR("failed to initialize vm\n"); 1388 1388 goto error_pasid;
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
··· 55 55 DRM_WARN("%s: vblank timer overrun\n", __func__); 56 56 57 57 ret = drm_crtc_handle_vblank(crtc); 58 + /* Don't queue timer again when vblank is disabled. */ 58 59 if (!ret) 59 - DRM_ERROR("amdgpu_vkms failure on handling vblank"); 60 + return HRTIMER_NORESTART; 60 61 61 62 return HRTIMER_RESTART; 62 63 } ··· 82 81 { 83 82 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 84 83 85 - hrtimer_cancel(&amdgpu_crtc->vblank_timer); 84 + hrtimer_try_to_cancel(&amdgpu_crtc->vblank_timer); 86 85 } 87 86 88 87 static bool amdgpu_vkms_get_vblank_timestamp(struct drm_crtc *crtc,
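Two related fixes in amdgpu_vkms.c: the timer callback no longer logs an error and re-arms forever when drm_crtc_handle_vblank() reports that vblank is off, it simply stops; and the disable path uses hrtimer_try_to_cancel(), since it can be reached from the timer callback itself, where the sleeping hrtimer_cancel() would wait on its own completion. A userspace sketch of the restart decision:

#include <stdio.h>

enum restart { NORESTART, RESTART };

static int vblank_enabled = 1;

/* Mirrors the handler: stop re-arming once the consumer is gone. */
static enum restart timer_fn(void)
{
        if (!vblank_enabled)
                return NORESTART;
        puts("vblank event");
        return RESTART;
}

int main(void)
{
        for (int tick = 0; tick < 5; tick++) {
                if (tick == 2)
                        vblank_enabled = 0; /* vblank switched off mid-stream */
                if (timer_fn() == NORESTART)
                        break;
        }
        return 0;
}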
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 2121 2121 * 2122 2122 * @adev: amdgpu_device pointer 2123 2123 * @vm: requested vm 2124 + * @xcp_id: GPU partition selection id 2124 2125 * 2125 2126 * Init @vm fields. 2126 2127 * 2127 2128 * Returns: 2128 2129 * 0 for success, error for failure. 2129 2130 */ 2130 - int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm) 2131 + int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, int32_t xcp_id) 2131 2132 { 2132 2133 struct amdgpu_bo *root_bo; 2133 2134 struct amdgpu_bo_vm *root; ··· 2178 2177 vm->evicting = false; 2179 2178 2180 2179 r = amdgpu_vm_pt_create(adev, vm, adev->vm_manager.root_level, 2181 - false, &root); 2180 + false, &root, xcp_id); 2182 2181 if (r) 2183 2182 goto error_free_delayed; 2184 2183 root_bo = &root->bo;
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
··· 392 392 u32 pasid); 393 393 394 394 long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout); 395 - int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm); 395 + int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, int32_t xcp_id); 396 396 int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm); 397 397 void amdgpu_vm_release_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm); 398 398 void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm); ··· 475 475 int amdgpu_vm_pt_clear(struct amdgpu_device *adev, struct amdgpu_vm *vm, 476 476 struct amdgpu_bo_vm *vmbo, bool immediate); 477 477 int amdgpu_vm_pt_create(struct amdgpu_device *adev, struct amdgpu_vm *vm, 478 - int level, bool immediate, struct amdgpu_bo_vm **vmbo); 478 + int level, bool immediate, struct amdgpu_bo_vm **vmbo, 479 + int32_t xcp_id); 479 480 void amdgpu_vm_pt_free_root(struct amdgpu_device *adev, struct amdgpu_vm *vm); 480 481 bool amdgpu_vm_pt_is_root_clean(struct amdgpu_device *adev, 481 482 struct amdgpu_vm *vm);
+7 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
··· 498 498 * @level: the page table level 499 499 * @immediate: use a immediate update 500 500 * @vmbo: pointer to the buffer object pointer 501 + * @xcp_id: GPU partition id 501 502 */ 502 503 int amdgpu_vm_pt_create(struct amdgpu_device *adev, struct amdgpu_vm *vm, 503 - int level, bool immediate, struct amdgpu_bo_vm **vmbo) 504 + int level, bool immediate, struct amdgpu_bo_vm **vmbo, 505 + int32_t xcp_id) 504 506 { 505 - struct amdgpu_fpriv *fpriv = container_of(vm, struct amdgpu_fpriv, vm); 506 507 struct amdgpu_bo_param bp; 507 508 struct amdgpu_bo *bo; 508 509 struct dma_resv *resv; ··· 536 535 537 536 bp.type = ttm_bo_type_kernel; 538 537 bp.no_wait_gpu = immediate; 539 - bp.xcp_id_plus1 = fpriv->xcp_id == ~0 ? 0 : fpriv->xcp_id + 1; 538 + bp.xcp_id_plus1 = xcp_id + 1; 540 539 541 540 if (vm->root.bo) 542 541 bp.resv = vm->root.bo->tbo.base.resv; ··· 562 561 bp.type = ttm_bo_type_kernel; 563 562 bp.resv = bo->tbo.base.resv; 564 563 bp.bo_ptr_size = sizeof(struct amdgpu_bo); 565 - bp.xcp_id_plus1 = fpriv->xcp_id == ~0 ? 0 : fpriv->xcp_id + 1; 564 + bp.xcp_id_plus1 = xcp_id + 1; 566 565 567 566 r = amdgpu_bo_create(adev, &bp, &(*vmbo)->shadow); 568 567 ··· 607 606 return 0; 608 607 609 608 amdgpu_vm_eviction_unlock(vm); 610 - r = amdgpu_vm_pt_create(adev, vm, cursor->level, immediate, &pt); 609 + r = amdgpu_vm_pt_create(adev, vm, cursor->level, immediate, &pt, 610 + vm->root.bo->xcp_id); 611 611 amdgpu_vm_eviction_lock(vm); 612 612 if (r) 613 613 return r;
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.c
··· 363 363 if (!adev->xcp_mgr) 364 364 return 0; 365 365 366 - fpriv->xcp_id = ~0; 366 + fpriv->xcp_id = AMDGPU_XCP_NO_PARTITION; 367 367 for (i = 0; i < MAX_XCP; ++i) { 368 368 if (!adev->xcp_mgr->xcp[i].ddev) 369 369 break; ··· 381 381 } 382 382 } 383 383 384 - fpriv->vm.mem_id = fpriv->xcp_id == ~0 ? -1 : 384 + fpriv->vm.mem_id = fpriv->xcp_id == AMDGPU_XCP_NO_PARTITION ? -1 : 385 385 adev->xcp_mgr->xcp[fpriv->xcp_id].mem_id; 386 386 return 0; 387 387 }
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_xcp.h
··· 37 37 #define AMDGPU_XCP_FL_NONE 0 38 38 #define AMDGPU_XCP_FL_LOCKED (1 << 0) 39 39 40 + #define AMDGPU_XCP_NO_PARTITION (~0) 41 + 40 42 struct amdgpu_fpriv; 41 43 42 44 enum AMDGPU_XCP_IP_BLOCK {
+2 -2
drivers/gpu/drm/amd/amdgpu/aqua_vanjaram_reg_init.c
··· 68 68 enum AMDGPU_XCP_IP_BLOCK ip_blk; 69 69 uint32_t inst_mask; 70 70 71 - ring->xcp_id = ~0; 71 + ring->xcp_id = AMDGPU_XCP_NO_PARTITION; 72 72 if (adev->xcp_mgr->mode == AMDGPU_XCP_MODE_NONE) 73 73 return; 74 74 ··· 177 177 u32 sel_xcp_id; 178 178 int i; 179 179 180 - if (fpriv->xcp_id == ~0) { 180 + if (fpriv->xcp_id == AMDGPU_XCP_NO_PARTITION) { 181 181 u32 least_ref_cnt = ~0; 182 182 183 183 fpriv->xcp_id = 0;
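The amdgpu hunks above replace a bare ~0 sentinel with the named AMDGPU_XCP_NO_PARTITION. Besides readability, ~0 is a plain int (-1), so whether a comparison against it succeeds depends on the width and signedness of the other operand; a single parenthesized constant keeps every call site consistent. The type trap in miniature:

#include <stdio.h>
#include <stdint.h>

#define NO_PARTITION (~0) /* same shape as AMDGPU_XCP_NO_PARTITION */

int main(void)
{
        uint32_t id32 = ~0; /* 0xffffffff */
        uint16_t id16 = ~0; /* 0xffff */

        /* -1 converts to 0xffffffff next to a u32: matches. */
        printf("u32 == ~0: %d\n", id32 == NO_PARTITION);
        /* a u16 promotes to int 0xffff, while ~0 is -1: never matches. */
        printf("u16 == ~0: %d\n", id16 == NO_PARTITION);
        return 0;
}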
+1
drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
··· 49 49 MODULE_FIRMWARE("amdgpu/psp_13_0_11_toc.bin"); 50 50 MODULE_FIRMWARE("amdgpu/psp_13_0_11_ta.bin"); 51 51 MODULE_FIRMWARE("amdgpu/psp_13_0_6_sos.bin"); 52 + MODULE_FIRMWARE("amdgpu/psp_13_0_6_ta.bin"); 52 53 53 54 /* For large FW files the time to complete can be very long */ 54 55 #define USBC_PD_POLLING_LIMIT_S 240
+103 -153
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 424 424 425 425 spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags); 426 426 427 - if (amdgpu_crtc->pflip_status != AMDGPU_FLIP_SUBMITTED){ 428 - DC_LOG_PFLIP("amdgpu_crtc->pflip_status = %d !=AMDGPU_FLIP_SUBMITTED(%d) on crtc:%d[%p] \n", 429 - amdgpu_crtc->pflip_status, 430 - AMDGPU_FLIP_SUBMITTED, 431 - amdgpu_crtc->crtc_id, 432 - amdgpu_crtc); 427 + if (amdgpu_crtc->pflip_status != AMDGPU_FLIP_SUBMITTED) { 428 + DC_LOG_PFLIP("amdgpu_crtc->pflip_status = %d !=AMDGPU_FLIP_SUBMITTED(%d) on crtc:%d[%p]\n", 429 + amdgpu_crtc->pflip_status, 430 + AMDGPU_FLIP_SUBMITTED, 431 + amdgpu_crtc->crtc_id, 432 + amdgpu_crtc); 433 433 spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags); 434 434 return; 435 435 } ··· 883 883 } 884 884 885 885 /* Prototypes of private functions */ 886 - static int dm_early_init(void* handle); 886 + static int dm_early_init(void *handle); 887 887 888 888 /* Allocate memory for FBC compressed data */ 889 889 static void amdgpu_dm_fbc_init(struct drm_connector *connector) ··· 1282 1282 pa_config->system_aperture.start_addr = (uint64_t)logical_addr_low << 18; 1283 1283 pa_config->system_aperture.end_addr = (uint64_t)logical_addr_high << 18; 1284 1284 1285 - pa_config->system_aperture.agp_base = (uint64_t)agp_base << 24 ; 1285 + pa_config->system_aperture.agp_base = (uint64_t)agp_base << 24; 1286 1286 pa_config->system_aperture.agp_bot = (uint64_t)agp_bot << 24; 1287 1287 pa_config->system_aperture.agp_top = (uint64_t)agp_top << 24; 1288 1288 ··· 1347 1347 if (amdgpu_in_reset(adev)) 1348 1348 goto skip; 1349 1349 1350 + if (offload_work->data.bytes.device_service_irq.bits.UP_REQ_MSG_RDY || 1351 + offload_work->data.bytes.device_service_irq.bits.DOWN_REP_MSG_RDY) { 1352 + dm_handle_mst_sideband_msg_ready_event(&aconnector->mst_mgr, DOWN_OR_UP_MSG_RDY_EVENT); 1353 + spin_lock_irqsave(&offload_work->offload_wq->offload_lock, flags); 1354 + offload_work->offload_wq->is_handling_mst_msg_rdy_event = false; 1355 + spin_unlock_irqrestore(&offload_work->offload_wq->offload_lock, flags); 1356 + goto skip; 1357 + } 1358 + 1350 1359 mutex_lock(&adev->dm.dc_lock); 1351 1360 if (offload_work->data.bytes.device_service_irq.bits.AUTOMATED_TEST) { 1352 1361 dc_link_dp_handle_automated_test(dc_link); ··· 1374 1365 DP_TEST_RESPONSE, 1375 1366 &test_response.raw, 1376 1367 sizeof(test_response)); 1377 - } 1378 - else if ((dc_link->connector_signal != SIGNAL_TYPE_EDP) && 1368 + } else if ((dc_link->connector_signal != SIGNAL_TYPE_EDP) && 1379 1369 dc_link_check_link_loss_status(dc_link, &offload_work->data) && 1380 1370 dc_link_dp_allow_hpd_rx_irq(dc_link)) { 1381 1371 /* offload_work->data is from handle_hpd_rx_irq-> ··· 1562 1554 mutex_init(&adev->dm.dc_lock); 1563 1555 mutex_init(&adev->dm.audio_lock); 1564 1556 1565 - if(amdgpu_dm_irq_init(adev)) { 1557 + if (amdgpu_dm_irq_init(adev)) { 1566 1558 DRM_ERROR("amdgpu: failed to initialize DM IRQ support.\n"); 1567 1559 goto error; 1568 1560 } ··· 1704 1696 if (amdgpu_dc_debug_mask & DC_DISABLE_STUTTER) 1705 1697 adev->dm.dc->debug.disable_stutter = true; 1706 1698 1707 - if (amdgpu_dc_debug_mask & DC_DISABLE_DSC) { 1699 + if (amdgpu_dc_debug_mask & DC_DISABLE_DSC) 1708 1700 adev->dm.dc->debug.disable_dsc = true; 1709 - } 1710 1701 1711 1702 if (amdgpu_dc_debug_mask & DC_DISABLE_CLOCK_GATING) 1712 1703 adev->dm.dc->debug.disable_clock_gate = true; ··· 1949 1942 mutex_destroy(&adev->dm.audio_lock); 1950 1943 mutex_destroy(&adev->dm.dc_lock); 1951 1944 mutex_destroy(&adev->dm.dpia_aux_lock); 1952 - 1953 - return; 1954 1945 } 1955 
1946 1956 1947 static int load_dmcu_fw(struct amdgpu_device *adev) ··· 1957 1952 int r; 1958 1953 const struct dmcu_firmware_header_v1_0 *hdr; 1959 1954 1960 - switch(adev->asic_type) { 1955 + switch (adev->asic_type) { 1961 1956 #if defined(CONFIG_DRM_AMD_DC_SI) 1962 1957 case CHIP_TAHITI: 1963 1958 case CHIP_PITCAIRN: ··· 2714 2709 struct dc_scaling_info scaling_infos[MAX_SURFACES]; 2715 2710 struct dc_flip_addrs flip_addrs[MAX_SURFACES]; 2716 2711 struct dc_stream_update stream_update; 2717 - } * bundle; 2712 + } *bundle; 2718 2713 int k, m; 2719 2714 2720 2715 bundle = kzalloc(sizeof(*bundle), GFP_KERNEL); ··· 2744 2739 2745 2740 cleanup: 2746 2741 kfree(bundle); 2747 - 2748 - return; 2749 2742 } 2750 2743 2751 2744 static int dm_resume(void *handle) ··· 2957 2954 .set_powergating_state = dm_set_powergating_state, 2958 2955 }; 2959 2956 2960 - const struct amdgpu_ip_block_version dm_ip_block = 2961 - { 2957 + const struct amdgpu_ip_block_version dm_ip_block = { 2962 2958 .type = AMD_IP_BLOCK_TYPE_DCE, 2963 2959 .major = 1, 2964 2960 .minor = 0, ··· 3002 3000 caps->ext_caps = &aconnector->dc_link->dpcd_sink_ext_caps; 3003 3001 caps->aux_support = false; 3004 3002 3005 - if (caps->ext_caps->bits.oled == 1 /*|| 3006 - caps->ext_caps->bits.sdr_aux_backlight_control == 1 || 3007 - caps->ext_caps->bits.hdr_aux_backlight_control == 1*/) 3003 + if (caps->ext_caps->bits.oled == 1 3004 + /* 3005 + * || 3006 + * caps->ext_caps->bits.sdr_aux_backlight_control == 1 || 3007 + * caps->ext_caps->bits.hdr_aux_backlight_control == 1 3008 + */) 3008 3009 caps->aux_support = true; 3009 3010 3010 3011 if (amdgpu_backlight == 0) ··· 3241 3236 3242 3237 } 3243 3238 3244 - static void dm_handle_mst_sideband_msg(struct amdgpu_dm_connector *aconnector) 3245 - { 3246 - u8 esi[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = { 0 }; 3247 - u8 dret; 3248 - bool new_irq_handled = false; 3249 - int dpcd_addr; 3250 - int dpcd_bytes_to_read; 3251 - 3252 - const int max_process_count = 30; 3253 - int process_count = 0; 3254 - 3255 - const struct dc_link_status *link_status = dc_link_get_status(aconnector->dc_link); 3256 - 3257 - if (link_status->dpcd_caps->dpcd_rev.raw < 0x12) { 3258 - dpcd_bytes_to_read = DP_LANE0_1_STATUS - DP_SINK_COUNT; 3259 - /* DPCD 0x200 - 0x201 for downstream IRQ */ 3260 - dpcd_addr = DP_SINK_COUNT; 3261 - } else { 3262 - dpcd_bytes_to_read = DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI; 3263 - /* DPCD 0x2002 - 0x2005 for downstream IRQ */ 3264 - dpcd_addr = DP_SINK_COUNT_ESI; 3265 - } 3266 - 3267 - dret = drm_dp_dpcd_read( 3268 - &aconnector->dm_dp_aux.aux, 3269 - dpcd_addr, 3270 - esi, 3271 - dpcd_bytes_to_read); 3272 - 3273 - while (dret == dpcd_bytes_to_read && 3274 - process_count < max_process_count) { 3275 - u8 ack[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = {}; 3276 - u8 retry; 3277 - dret = 0; 3278 - 3279 - process_count++; 3280 - 3281 - DRM_DEBUG_DRIVER("ESI %02x %02x %02x\n", esi[0], esi[1], esi[2]); 3282 - /* handle HPD short pulse irq */ 3283 - if (aconnector->mst_mgr.mst_state) 3284 - drm_dp_mst_hpd_irq_handle_event(&aconnector->mst_mgr, 3285 - esi, 3286 - ack, 3287 - &new_irq_handled); 3288 - 3289 - if (new_irq_handled) { 3290 - /* ACK at DPCD to notify down stream */ 3291 - for (retry = 0; retry < 3; retry++) { 3292 - ssize_t wret; 3293 - 3294 - wret = drm_dp_dpcd_writeb(&aconnector->dm_dp_aux.aux, 3295 - dpcd_addr + 1, 3296 - ack[1]); 3297 - if (wret == 1) 3298 - break; 3299 - } 3300 - 3301 - if (retry == 3) { 3302 - DRM_ERROR("Failed to ack MST event.\n"); 3303 - return; 3304 - } 3305 - 
3306 - drm_dp_mst_hpd_irq_send_new_request(&aconnector->mst_mgr); 3307 - /* check if there is new irq to be handled */ 3308 - dret = drm_dp_dpcd_read( 3309 - &aconnector->dm_dp_aux.aux, 3310 - dpcd_addr, 3311 - esi, 3312 - dpcd_bytes_to_read); 3313 - 3314 - new_irq_handled = false; 3315 - } else { 3316 - break; 3317 - } 3318 - } 3319 - 3320 - if (process_count == max_process_count) 3321 - DRM_DEBUG_DRIVER("Loop exceeded max iterations\n"); 3322 - } 3323 - 3324 3239 static void schedule_hpd_rx_offload_work(struct hpd_rx_irq_offload_work_queue *offload_wq, 3325 3240 union hpd_irq_data hpd_irq_data) 3326 3241 { ··· 3302 3377 if (dc_link_dp_allow_hpd_rx_irq(dc_link)) { 3303 3378 if (hpd_irq_data.bytes.device_service_irq.bits.UP_REQ_MSG_RDY || 3304 3379 hpd_irq_data.bytes.device_service_irq.bits.DOWN_REP_MSG_RDY) { 3305 - dm_handle_mst_sideband_msg(aconnector); 3380 + bool skip = false; 3381 + 3382 + /* 3383 + * DOWN_REP_MSG_RDY is also handled by polling method 3384 + * mgr->cbs->poll_hpd_irq() 3385 + */ 3386 + spin_lock(&offload_wq->offload_lock); 3387 + skip = offload_wq->is_handling_mst_msg_rdy_event; 3388 + 3389 + if (!skip) 3390 + offload_wq->is_handling_mst_msg_rdy_event = true; 3391 + 3392 + spin_unlock(&offload_wq->offload_lock); 3393 + 3394 + if (!skip) 3395 + schedule_hpd_rx_offload_work(offload_wq, hpd_irq_data); 3396 + 3306 3397 goto out; 3307 3398 } 3308 3399 ··· 3409 3468 aconnector = to_amdgpu_dm_connector(connector); 3410 3469 dc_link = aconnector->dc_link; 3411 3470 3412 - if (DC_IRQ_SOURCE_INVALID != dc_link->irq_source_hpd) { 3471 + if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { 3413 3472 int_params.int_context = INTERRUPT_LOW_IRQ_CONTEXT; 3414 3473 int_params.irq_source = dc_link->irq_source_hpd; 3415 3474 ··· 3418 3477 (void *) aconnector); 3419 3478 } 3420 3479 3421 - if (DC_IRQ_SOURCE_INVALID != dc_link->irq_source_hpd_rx) { 3480 + if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { 3422 3481 3423 3482 /* Also register for DP short pulse (hpd_rx). */ 3424 3483 int_params.int_context = INTERRUPT_LOW_IRQ_CONTEXT; ··· 3427 3486 amdgpu_dm_irq_register_interrupt(adev, &int_params, 3428 3487 handle_hpd_rx_irq, 3429 3488 (void *) aconnector); 3430 - 3431 - if (adev->dm.hpd_rx_offload_wq) 3432 - adev->dm.hpd_rx_offload_wq[dc_link->link_index].aconnector = 3433 - aconnector; 3434 3489 } 3490 + 3491 + if (adev->dm.hpd_rx_offload_wq) 3492 + adev->dm.hpd_rx_offload_wq[connector->index].aconnector = 3493 + aconnector; 3435 3494 } 3436 3495 } 3437 3496 ··· 3444 3503 struct dc_interrupt_params int_params = {0}; 3445 3504 int r; 3446 3505 int i; 3447 - unsigned client_id = AMDGPU_IRQ_CLIENTID_LEGACY; 3506 + unsigned int client_id = AMDGPU_IRQ_CLIENTID_LEGACY; 3448 3507 3449 3508 int_params.requested_polarity = INTERRUPT_POLARITY_DEFAULT; 3450 3509 int_params.current_polarity = INTERRUPT_POLARITY_DEFAULT; ··· 3458 3517 * Base driver will call amdgpu_dm_irq_handler() for ALL interrupts 3459 3518 * coming from DC hardware. 3460 3519 * amdgpu_dm_irq_handler() will re-direct the interrupt to DC 3461 - * for acknowledging and handling. */ 3520 + * for acknowledging and handling. 
3521 + */ 3462 3522 3463 3523 /* Use VBLANK interrupt */ 3464 3524 for (i = 0; i < adev->mode_info.num_crtc; i++) { 3465 - r = amdgpu_irq_add_id(adev, client_id, i+1 , &adev->crtc_irq); 3525 + r = amdgpu_irq_add_id(adev, client_id, i + 1, &adev->crtc_irq); 3466 3526 if (r) { 3467 3527 DRM_ERROR("Failed to add crtc irq id!\n"); 3468 3528 return r; ··· 3471 3529 3472 3530 int_params.int_context = INTERRUPT_HIGH_IRQ_CONTEXT; 3473 3531 int_params.irq_source = 3474 - dc_interrupt_to_irq_source(dc, i+1 , 0); 3532 + dc_interrupt_to_irq_source(dc, i + 1, 0); 3475 3533 3476 3534 c_irq_params = &adev->dm.vblank_params[int_params.irq_source - DC_IRQ_SOURCE_VBLANK1]; 3477 3535 ··· 3527 3585 struct dc_interrupt_params int_params = {0}; 3528 3586 int r; 3529 3587 int i; 3530 - unsigned client_id = AMDGPU_IRQ_CLIENTID_LEGACY; 3588 + unsigned int client_id = AMDGPU_IRQ_CLIENTID_LEGACY; 3531 3589 3532 3590 if (adev->family >= AMDGPU_FAMILY_AI) 3533 3591 client_id = SOC15_IH_CLIENTID_DCE; ··· 3544 3602 * Base driver will call amdgpu_dm_irq_handler() for ALL interrupts 3545 3603 * coming from DC hardware. 3546 3604 * amdgpu_dm_irq_handler() will re-direct the interrupt to DC 3547 - * for acknowledging and handling. */ 3605 + * for acknowledging and handling. 3606 + */ 3548 3607 3549 3608 /* Use VBLANK interrupt */ 3550 3609 for (i = VISLANDS30_IV_SRCID_D1_VERTICAL_INTERRUPT0; i <= VISLANDS30_IV_SRCID_D6_VERTICAL_INTERRUPT0; i++) { ··· 3992 4049 } 3993 4050 3994 4051 static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps, 3995 - unsigned *min, unsigned *max) 4052 + unsigned int *min, unsigned int *max) 3996 4053 { 3997 4054 if (!caps) 3998 4055 return 0; ··· 4012 4069 static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *caps, 4013 4070 uint32_t brightness) 4014 4071 { 4015 - unsigned min, max; 4072 + unsigned int min, max; 4016 4073 4017 4074 if (!get_brightness_range(caps, &min, &max)) 4018 4075 return brightness; ··· 4025 4082 static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps, 4026 4083 uint32_t brightness) 4027 4084 { 4028 - unsigned min, max; 4085 + unsigned int min, max; 4029 4086 4030 4087 if (!get_brightness_range(caps, &min, &max)) 4031 4088 return brightness; ··· 4505 4562 static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm) 4506 4563 { 4507 4564 drm_atomic_private_obj_fini(&dm->atomic_obj); 4508 - return; 4509 4565 } 4510 4566 4511 4567 /****************************************************************************** ··· 5336 5394 { 5337 5395 enum dc_color_depth depth = timing_out->display_color_depth; 5338 5396 int normalized_clk; 5397 + 5339 5398 do { 5340 5399 normalized_clk = timing_out->pix_clk_100hz / 10; 5341 5400 /* YCbCr 4:2:0 requires additional adjustment of 1/2 */ ··· 5552 5609 { 5553 5610 struct dc_sink_init_data sink_init_data = { 0 }; 5554 5611 struct dc_sink *sink = NULL; 5612 + 5555 5613 sink_init_data.link = aconnector->dc_link; 5556 5614 sink_init_data.sink_signal = aconnector->dc_link->connector_signal; 5557 5615 ··· 5676 5732 return &aconnector->freesync_vid_base; 5677 5733 5678 5734 /* Find the preferred mode */ 5679 - list_for_each_entry (m, list_head, head) { 5735 + list_for_each_entry(m, list_head, head) { 5680 5736 if (m->type & DRM_MODE_TYPE_PREFERRED) { 5681 5737 m_pref = m; 5682 5738 break; ··· 5700 5756 * For some monitors, preferred mode is not the mode with highest 5701 5757 * supported refresh rate. 
5702 5758 */ 5703 - list_for_each_entry (m, list_head, head) { 5759 + list_for_each_entry(m, list_head, head) { 5704 5760 current_refresh = drm_mode_vrefresh(m); 5705 5761 5706 5762 if (m->hdisplay == m_pref->hdisplay && ··· 5972 6028 * This may not be an error, the use case is when we have no 5973 6029 * usermode calls to reset and set mode upon hotplug. In this 5974 6030 * case, we call set mode ourselves to restore the previous mode 5975 - * and the modelist may not be filled in in time. 6031 + * and the modelist may not be filled in time. 5976 6032 */ 5977 6033 DRM_DEBUG_DRIVER("No preferred mode found\n"); 5978 6034 } else { ··· 5995 6051 drm_mode_set_crtcinfo(&mode, 0); 5996 6052 5997 6053 /* 5998 - * If scaling is enabled and refresh rate didn't change 5999 - * we copy the vic and polarities of the old timings 6000 - */ 6054 + * If scaling is enabled and refresh rate didn't change 6055 + * we copy the vic and polarities of the old timings 6056 + */ 6001 6057 if (!scale || mode_refresh != preferred_refresh) 6002 6058 fill_stream_properties_from_drm_display_mode( 6003 6059 stream, &mode, &aconnector->base, con_state, NULL, ··· 6761 6817 6762 6818 if (!state->duplicated) { 6763 6819 int max_bpc = conn_state->max_requested_bpc; 6820 + 6764 6821 is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) && 6765 6822 aconnector->force_yuv420_output; 6766 6823 color_depth = convert_color_depth_from_display_info(connector, ··· 7080 7135 { 7081 7136 struct drm_display_mode *m; 7082 7137 7083 - list_for_each_entry (m, &aconnector->base.probed_modes, head) { 7138 + list_for_each_entry(m, &aconnector->base.probed_modes, head) { 7084 7139 if (drm_mode_equal(m, mode)) 7085 7140 return true; 7086 7141 } ··· 7240 7295 aconnector->as_type = ADAPTIVE_SYNC_TYPE_NONE; 7241 7296 memset(&aconnector->vsdb_info, 0, sizeof(aconnector->vsdb_info)); 7242 7297 mutex_init(&aconnector->hpd_lock); 7298 + mutex_init(&aconnector->handle_mst_msg_ready); 7243 7299 7244 7300 /* 7245 7301 * configure support HPD hot plug connector_>polled default value is 0 ··· 7400 7454 7401 7455 link->priv = aconnector; 7402 7456 7403 - DRM_DEBUG_DRIVER("%s()\n", __func__); 7404 7457 7405 7458 i2c = create_i2c(link->ddc, link->link_index, &res); 7406 7459 if (!i2c) { ··· 8070 8125 * Only allow immediate flips for fast updates that don't 8071 8126 * change memory domain, FB pitch, DCC state, rotation or 8072 8127 * mirroring. 8128 + * 8129 + * dm_crtc_helper_atomic_check() only accepts async flips with 8130 + * fast updates. 8073 8131 */ 8132 + if (crtc->state->async_flip && 8133 + acrtc_state->update_type != UPDATE_TYPE_FAST) 8134 + drm_warn_once(state->dev, 8135 + "[PLANE:%d:%s] async flip with non-fast update\n", 8136 + plane->base.id, plane->name); 8074 8137 bundle->flip_addrs[planes_count].flip_immediate = 8075 8138 crtc->state->async_flip && 8076 8139 acrtc_state->update_type == UPDATE_TYPE_FAST && ··· 8121 8168 * DRI3/Present extension with defined target_msc. 
8122 8169 */ 8123 8170 last_flip_vblank = amdgpu_get_vblank_counter_kms(pcrtc); 8124 - } 8125 - else { 8171 + } else { 8126 8172 /* For variable refresh rate mode only: 8127 8173 * Get vblank of last completed flip to avoid > 1 vrr 8128 8174 * flips per video frame by use of throttling, but allow ··· 8454 8502 dc_resource_state_copy_construct_current(dm->dc, dc_state); 8455 8503 } 8456 8504 8457 - for_each_oldnew_crtc_in_state (state, crtc, old_crtc_state, 8458 - new_crtc_state, i) { 8505 + for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, 8506 + new_crtc_state, i) { 8459 8507 struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc); 8460 8508 8461 8509 dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); ··· 8478 8526 dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); 8479 8527 8480 8528 drm_dbg_state(state->dev, 8481 - "amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, " 8482 - "planes_changed:%d, mode_changed:%d,active_changed:%d," 8483 - "connectors_changed:%d\n", 8529 + "amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, planes_changed:%d, mode_changed:%d,active_changed:%d,connectors_changed:%d\n", 8484 8530 acrtc->crtc_id, 8485 8531 new_crtc_state->enable, 8486 8532 new_crtc_state->active, ··· 9054 9104 &commit->flip_done, 10*HZ); 9055 9105 9056 9106 if (ret == 0) 9057 - DRM_ERROR("[CRTC:%d:%s] hw_done or flip_done " 9058 - "timed out\n", crtc->base.id, crtc->name); 9107 + DRM_ERROR("[CRTC:%d:%s] hw_done or flip_done timed out\n", 9108 + crtc->base.id, crtc->name); 9059 9109 9060 9110 drm_crtc_commit_put(commit); 9061 9111 } ··· 9140 9190 return false; 9141 9191 } 9142 9192 9143 - static void set_freesync_fixed_config(struct dm_crtc_state *dm_new_crtc_state) { 9193 + static void set_freesync_fixed_config(struct dm_crtc_state *dm_new_crtc_state) 9194 + { 9144 9195 u64 num, den, res; 9145 9196 struct drm_crtc_state *new_crtc_state = &dm_new_crtc_state->base; 9146 9197 ··· 9263 9312 goto skip_modeset; 9264 9313 9265 9314 drm_dbg_state(state->dev, 9266 - "amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, " 9267 - "planes_changed:%d, mode_changed:%d,active_changed:%d," 9268 - "connectors_changed:%d\n", 9315 + "amdgpu_crtc id:%d crtc_state_flags: enable:%d, active:%d, planes_changed:%d, mode_changed:%d,active_changed:%d,connectors_changed:%d\n", 9269 9316 acrtc->crtc_id, 9270 9317 new_crtc_state->enable, 9271 9318 new_crtc_state->active, ··· 9292 9343 old_crtc_state)) { 9293 9344 new_crtc_state->mode_changed = false; 9294 9345 DRM_DEBUG_DRIVER( 9295 - "Mode change not required for front porch change, " 9296 - "setting mode_changed to %d", 9346 + "Mode change not required for front porch change, setting mode_changed to %d", 9297 9347 new_crtc_state->mode_changed); 9298 9348 9299 9349 set_freesync_fixed_config(dm_new_crtc_state); ··· 9304 9356 struct drm_display_mode *high_mode; 9305 9357 9306 9358 high_mode = get_highest_refresh_rate_mode(aconnector, false); 9307 - if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) { 9359 + if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) 9308 9360 set_freesync_fixed_config(dm_new_crtc_state); 9309 - } 9310 9361 } 9311 9362 9312 9363 ret = dm_atomic_get_state(state, &dm_state); ··· 9473 9526 */ 9474 9527 for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) { 9475 9528 struct amdgpu_framebuffer *old_afb, *new_afb; 9529 + 9476 9530 if (other->type == DRM_PLANE_TYPE_CURSOR) 9477 9531 continue; 9478 9532 ··· 9572 9624 } 9573 9625 9574 9626 /* Core DRM takes care of checking FB modifiers, so we 
only need to 9575 - * check tiling flags when the FB doesn't have a modifier. */ 9627 + * check tiling flags when the FB doesn't have a modifier. 9628 + */ 9576 9629 if (!(fb->flags & DRM_MODE_FB_MODIFIERS)) { 9577 9630 if (adev->family < AMDGPU_FAMILY_AI) { 9578 9631 linear = AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_2D_TILED_THIN1 && 9579 - AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_1D_TILED_THIN1 && 9632 + AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_1D_TILED_THIN1 && 9580 9633 AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE) == 0; 9581 9634 } else { 9582 9635 linear = AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0; ··· 9799 9850 /* On DCE and DCN there is no dedicated hardware cursor plane. We get a 9800 9851 * cursor per pipe but it's going to inherit the scaling and 9801 9852 * positioning from the underlying pipe. Check the cursor plane's 9802 - * blending properties match the underlying planes'. */ 9853 + * blending properties match the underlying planes'. 9854 + */ 9803 9855 9804 9856 new_cursor_state = drm_atomic_get_new_plane_state(state, cursor); 9805 - if (!new_cursor_state || !new_cursor_state->fb) { 9857 + if (!new_cursor_state || !new_cursor_state->fb) 9806 9858 return 0; 9807 - } 9808 9859 9809 9860 dm_get_oriented_plane_size(new_cursor_state, &cursor_src_w, &cursor_src_h); 9810 9861 cursor_scale_w = new_cursor_state->crtc_w * 1000 / cursor_src_w; ··· 9849 9900 struct drm_connector_state *conn_state, *old_conn_state; 9850 9901 struct amdgpu_dm_connector *aconnector = NULL; 9851 9902 int i; 9903 + 9852 9904 for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) { 9853 9905 if (!conn_state->crtc) 9854 9906 conn_state = old_conn_state; ··· 10284 10334 } 10285 10335 10286 10336 /* Store the overall update type for use later in atomic check. */ 10287 - for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) { 10337 + for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) { 10288 10338 struct dm_crtc_state *dm_new_crtc_state = 10289 10339 to_dm_crtc_state(new_crtc_state); 10290 10340 ··· 10306 10356 else if (ret == -EINTR || ret == -EAGAIN || ret == -ERESTARTSYS) 10307 10357 DRM_DEBUG_DRIVER("Atomic check stopped due to signal.\n"); 10308 10358 else 10309 - DRM_DEBUG_DRIVER("Atomic check failed with err: %d \n", ret); 10359 + DRM_DEBUG_DRIVER("Atomic check failed with err: %d\n", ret); 10310 10360 10311 10361 trace_amdgpu_dm_atomic_check_finish(state, ret); 10312 10362
+7
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 195 195 */ 196 196 bool is_handling_link_loss; 197 197 /** 198 + * @is_handling_mst_msg_rdy_event: Used to prevent inserting mst message 199 + * ready event when we're already handling mst message ready event 200 + */ 201 + bool is_handling_mst_msg_rdy_event; 202 + /** 198 203 * @aconnector: The aconnector that this work queue is attached to 199 204 */ 200 205 struct amdgpu_dm_connector *aconnector; ··· 643 638 struct drm_dp_mst_port *mst_output_port; 644 639 struct amdgpu_dm_connector *mst_root; 645 640 struct drm_dp_aux *dsc_aux; 641 + struct mutex handle_mst_msg_ready; 642 + 646 643 /* TODO see if we can merge with ddc_bus or make a dm_connector */ 647 644 struct amdgpu_i2c_adapter *i2c; 648 645
+12
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 398 398 return -EINVAL; 399 399 } 400 400 401 + /* 402 + * Only allow async flips for fast updates that don't change the FB 403 + * pitch, the DCC state, rotation, etc. 404 + */ 405 + if (crtc_state->async_flip && 406 + dm_crtc_state->update_type != UPDATE_TYPE_FAST) { 407 + drm_dbg_atomic(crtc->dev, 408 + "[CRTC:%d:%s] async flips are only supported for fast updates\n", 409 + crtc->base.id, crtc->name); 410 + return -EINVAL; 411 + } 412 + 401 413 /* In some use cases, like reset, no stream is attached */ 402 414 if (!dm_crtc_state->stream) 403 415 return 0;
+110
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 619 619 return connector; 620 620 } 621 621 622 + void dm_handle_mst_sideband_msg_ready_event( 623 + struct drm_dp_mst_topology_mgr *mgr, 624 + enum mst_msg_ready_type msg_rdy_type) 625 + { 626 + uint8_t esi[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = { 0 }; 627 + uint8_t dret; 628 + bool new_irq_handled = false; 629 + int dpcd_addr; 630 + uint8_t dpcd_bytes_to_read; 631 + const uint8_t max_process_count = 30; 632 + uint8_t process_count = 0; 633 + u8 retry; 634 + struct amdgpu_dm_connector *aconnector = 635 + container_of(mgr, struct amdgpu_dm_connector, mst_mgr); 636 + 637 + 638 + const struct dc_link_status *link_status = dc_link_get_status(aconnector->dc_link); 639 + 640 + if (link_status->dpcd_caps->dpcd_rev.raw < 0x12) { 641 + dpcd_bytes_to_read = DP_LANE0_1_STATUS - DP_SINK_COUNT; 642 + /* DPCD 0x200 - 0x201 for downstream IRQ */ 643 + dpcd_addr = DP_SINK_COUNT; 644 + } else { 645 + dpcd_bytes_to_read = DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI; 646 + /* DPCD 0x2002 - 0x2005 for downstream IRQ */ 647 + dpcd_addr = DP_SINK_COUNT_ESI; 648 + } 649 + 650 + mutex_lock(&aconnector->handle_mst_msg_ready); 651 + 652 + while (process_count < max_process_count) { 653 + u8 ack[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = {}; 654 + 655 + process_count++; 656 + 657 + dret = drm_dp_dpcd_read( 658 + &aconnector->dm_dp_aux.aux, 659 + dpcd_addr, 660 + esi, 661 + dpcd_bytes_to_read); 662 + 663 + if (dret != dpcd_bytes_to_read) { 664 + DRM_DEBUG_KMS("DPCD read and acked number is not as expected!"); 665 + break; 666 + } 667 + 668 + DRM_DEBUG_DRIVER("ESI %02x %02x %02x\n", esi[0], esi[1], esi[2]); 669 + 670 + switch (msg_rdy_type) { 671 + case DOWN_REP_MSG_RDY_EVENT: 672 + /* Only handle DOWN_REP_MSG_RDY case*/ 673 + esi[1] &= DP_DOWN_REP_MSG_RDY; 674 + break; 675 + case UP_REQ_MSG_RDY_EVENT: 676 + /* Only handle UP_REQ_MSG_RDY case*/ 677 + esi[1] &= DP_UP_REQ_MSG_RDY; 678 + break; 679 + default: 680 + /* Handle both cases*/ 681 + esi[1] &= (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY); 682 + break; 683 + } 684 + 685 + if (!esi[1]) 686 + break; 687 + 688 + /* handle MST irq */ 689 + if (aconnector->mst_mgr.mst_state) 690 + drm_dp_mst_hpd_irq_handle_event(&aconnector->mst_mgr, 691 + esi, 692 + ack, 693 + &new_irq_handled); 694 + 695 + if (new_irq_handled) { 696 + /* ACK at DPCD to notify down stream */ 697 + for (retry = 0; retry < 3; retry++) { 698 + ssize_t wret; 699 + 700 + wret = drm_dp_dpcd_writeb(&aconnector->dm_dp_aux.aux, 701 + dpcd_addr + 1, 702 + ack[1]); 703 + if (wret == 1) 704 + break; 705 + } 706 + 707 + if (retry == 3) { 708 + DRM_ERROR("Failed to ack MST event.\n"); 709 + return; 710 + } 711 + 712 + drm_dp_mst_hpd_irq_send_new_request(&aconnector->mst_mgr); 713 + 714 + new_irq_handled = false; 715 + } else { 716 + break; 717 + } 718 + } 719 + 720 + mutex_unlock(&aconnector->handle_mst_msg_ready); 721 + 722 + if (process_count == max_process_count) 723 + DRM_DEBUG_DRIVER("Loop exceeded max iterations\n"); 724 + } 725 + 726 + static void dm_handle_mst_down_rep_msg_ready(struct drm_dp_mst_topology_mgr *mgr) 727 + { 728 + dm_handle_mst_sideband_msg_ready_event(mgr, DOWN_REP_MSG_RDY_EVENT); 729 + } 730 + 622 731 static const struct drm_dp_mst_topology_cbs dm_mst_cbs = { 623 732 .add_connector = dm_dp_add_mst_connector, 733 + .poll_hpd_irq = dm_handle_mst_down_rep_msg_ready, 624 734 }; 625 735 626 736 void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
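The sideband handler moved here from amdgpu_dm.c and gained two things: a mutex so the HPD IRQ path and the new .poll_hpd_irq callback cannot run the loop concurrently, and an mst_msg_ready_type selector so each caller services only the ready bits it owns instead of stealing the other path's events. A reduced sketch of the bit selection (the two mask values are stand-ins for DP_DOWN_REP_MSG_RDY and DP_UP_REQ_MSG_RDY):

#include <stdio.h>

#define DOWN_REP_RDY 0x10
#define UP_REQ_RDY   0x20

enum msg_ready_type { DOWN_REP, UP_REQ, DOWN_OR_UP };

/* Keep only the ready bits this caller is responsible for. */
static unsigned char select_events(unsigned char esi1, enum msg_ready_type t)
{
        switch (t) {
        case DOWN_REP:
                return esi1 & DOWN_REP_RDY;
        case UP_REQ:
                return esi1 & UP_REQ_RDY;
        default:
                return esi1 & (DOWN_REP_RDY | UP_REQ_RDY);
        }
}

int main(void)
{
        unsigned char esi1 = DOWN_REP_RDY | UP_REQ_RDY;

        printf("poll path handles: 0x%02x\n", select_events(esi1, DOWN_REP));
        printf("irq path handles:  0x%02x\n", select_events(esi1, DOWN_OR_UP));
        return 0;
}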
+11
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
··· 49 49 #define PBN_FEC_OVERHEAD_MULTIPLIER_8B_10B 1031 50 50 #define PBN_FEC_OVERHEAD_MULTIPLIER_128B_132B 1000 51 51 52 + enum mst_msg_ready_type { 53 + NONE_MSG_RDY_EVENT = 0, 54 + DOWN_REP_MSG_RDY_EVENT = 1, 55 + UP_REQ_MSG_RDY_EVENT = 2, 56 + DOWN_OR_UP_MSG_RDY_EVENT = 3 57 + }; 58 + 52 59 struct amdgpu_display_manager; 53 60 struct amdgpu_dm_connector; 54 61 ··· 67 60 68 61 void 69 62 dm_dp_create_fake_mst_encoders(struct amdgpu_device *adev); 63 + 64 + void dm_handle_mst_sideband_msg_ready_event( 65 + struct drm_dp_mst_topology_mgr *mgr, 66 + enum mst_msg_ready_type msg_rdy_type); 70 67 71 68 struct dsc_mst_fairness_vars { 72 69 int pbn;
+5
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
··· 87 87 stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK || 88 88 stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK) 89 89 tmds_present = true; 90 + 91 + /* Checking stream / link detection ensuring that PHY is active*/ 92 + if (dc_is_dp_signal(stream->signal) && !stream->dpms_off) 93 + display_count++; 94 + 90 95 } 91 96 92 97 for (i = 0; i < dc->link_count; i++) {
+2 -1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 3278 3278 if (pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst]) { 3279 3279 struct hubp *hubp = get_hubp_by_inst(res_pool, mpcc_inst); 3280 3280 3281 - if (pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg)) 3281 + if (pipe_ctx->stream_res.tg && 3282 + pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg)) 3282 3283 res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst); 3283 3284 pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false; 3284 3285 hubp->funcs->set_blank(hubp, true);
+2 -2
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c
··· 215 215 optc1->opp_count = 1; 216 216 } 217 217 218 - static void optc3_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_cnt, 218 + void optc3_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_cnt, 219 219 struct dc_crtc_timing *timing) 220 220 { 221 221 struct optc *optc1 = DCN10TG_FROM_TG(optc); ··· 293 293 OTG_DRR_TIMING_DBUF_UPDATE_MODE, mode); 294 294 } 295 295 296 - static void optc3_wait_drr_doublebuffer_pending_clear(struct timing_generator *optc) 296 + void optc3_wait_drr_doublebuffer_pending_clear(struct timing_generator *optc) 297 297 { 298 298 struct optc *optc1 = DCN10TG_FROM_TG(optc); 299 299
+3
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
··· 351 351 352 352 void optc3_set_odm_bypass(struct timing_generator *optc, 353 353 const struct dc_crtc_timing *dc_crtc_timing); 354 + void optc3_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_cnt, 355 + struct dc_crtc_timing *timing); 356 + void optc3_wait_drr_doublebuffer_pending_clear(struct timing_generator *optc); 354 357 void optc3_tg_init(struct timing_generator *optc); 355 358 void optc3_set_vtotal_min_max(struct timing_generator *optc, int vtotal_min, int vtotal_max); 356 359 #endif /* __DC_OPTC_DCN30_H__ */
+2 -1
drivers/gpu/drm/amd/display/dc/dcn301/Makefile
··· 11 11 # Makefile for dcn30. 12 12 13 13 DCN301 = dcn301_init.o dcn301_resource.o dcn301_dccg.o \ 14 - dcn301_dio_link_encoder.o dcn301_hwseq.o dcn301_panel_cntl.o dcn301_hubbub.o 14 + dcn301_dio_link_encoder.o dcn301_hwseq.o dcn301_panel_cntl.o dcn301_hubbub.o \ 15 + dcn301_optc.o 15 16 16 17 AMD_DAL_DCN301 = $(addprefix $(AMDDALPATH)/dc/dcn301/,$(DCN301)) 17 18
+185
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_optc.c
··· 1 + /* 2 + * Copyright 2020 Advanced Micro Devices, Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + * 22 + * Authors: AMD 23 + * 24 + */ 25 + 26 + #include "reg_helper.h" 27 + #include "dcn301_optc.h" 28 + #include "dc.h" 29 + #include "dcn_calc_math.h" 30 + #include "dc_dmub_srv.h" 31 + 32 + #include "dml/dcn30/dcn30_fpu.h" 33 + #include "dc_trace.h" 34 + 35 + #define REG(reg)\ 36 + optc1->tg_regs->reg 37 + 38 + #define CTX \ 39 + optc1->base.ctx 40 + 41 + #undef FN 42 + #define FN(reg_name, field_name) \ 43 + optc1->tg_shift->field_name, optc1->tg_mask->field_name 44 + 45 + 46 + /** 47 + * optc301_set_drr() - Program dynamic refresh rate registers m_OTGx_OTG_V_TOTAL_*. 48 + * 49 + * @optc: timing_generator instance. 50 + * @params: parameters used for Dynamic Refresh Rate. 
51 + */ 52 + void optc301_set_drr( 53 + struct timing_generator *optc, 54 + const struct drr_params *params) 55 + { 56 + struct optc *optc1 = DCN10TG_FROM_TG(optc); 57 + 58 + if (params != NULL && 59 + params->vertical_total_max > 0 && 60 + params->vertical_total_min > 0) { 61 + 62 + if (params->vertical_total_mid != 0) { 63 + 64 + REG_SET(OTG_V_TOTAL_MID, 0, 65 + OTG_V_TOTAL_MID, params->vertical_total_mid - 1); 66 + 67 + REG_UPDATE_2(OTG_V_TOTAL_CONTROL, 68 + OTG_VTOTAL_MID_REPLACING_MAX_EN, 1, 69 + OTG_VTOTAL_MID_FRAME_NUM, 70 + (uint8_t)params->vertical_total_mid_frame_num); 71 + 72 + } 73 + 74 + optc->funcs->set_vtotal_min_max(optc, params->vertical_total_min - 1, params->vertical_total_max - 1); 75 + 76 + REG_UPDATE_5(OTG_V_TOTAL_CONTROL, 77 + OTG_V_TOTAL_MIN_SEL, 1, 78 + OTG_V_TOTAL_MAX_SEL, 1, 79 + OTG_FORCE_LOCK_ON_EVENT, 0, 80 + OTG_SET_V_TOTAL_MIN_MASK_EN, 0, 81 + OTG_SET_V_TOTAL_MIN_MASK, 0); 82 + // Setup manual flow control for EOF via TRIG_A 83 + optc->funcs->setup_manual_trigger(optc); 84 + 85 + } else { 86 + REG_UPDATE_4(OTG_V_TOTAL_CONTROL, 87 + OTG_SET_V_TOTAL_MIN_MASK, 0, 88 + OTG_V_TOTAL_MIN_SEL, 0, 89 + OTG_V_TOTAL_MAX_SEL, 0, 90 + OTG_FORCE_LOCK_ON_EVENT, 0); 91 + 92 + optc->funcs->set_vtotal_min_max(optc, 0, 0); 93 + } 94 + } 95 + 96 + 97 + void optc301_setup_manual_trigger(struct timing_generator *optc) 98 + { 99 + struct optc *optc1 = DCN10TG_FROM_TG(optc); 100 + 101 + REG_SET_8(OTG_TRIGA_CNTL, 0, 102 + OTG_TRIGA_SOURCE_SELECT, 21, 103 + OTG_TRIGA_SOURCE_PIPE_SELECT, optc->inst, 104 + OTG_TRIGA_RISING_EDGE_DETECT_CNTL, 1, 105 + OTG_TRIGA_FALLING_EDGE_DETECT_CNTL, 0, 106 + OTG_TRIGA_POLARITY_SELECT, 0, 107 + OTG_TRIGA_FREQUENCY_SELECT, 0, 108 + OTG_TRIGA_DELAY, 0, 109 + OTG_TRIGA_CLEAR, 1); 110 + } 111 + 112 + static struct timing_generator_funcs dcn30_tg_funcs = { 113 + .validate_timing = optc1_validate_timing, 114 + .program_timing = optc1_program_timing, 115 + .setup_vertical_interrupt0 = optc1_setup_vertical_interrupt0, 116 + .setup_vertical_interrupt1 = optc1_setup_vertical_interrupt1, 117 + .setup_vertical_interrupt2 = optc1_setup_vertical_interrupt2, 118 + .program_global_sync = optc1_program_global_sync, 119 + .enable_crtc = optc2_enable_crtc, 120 + .disable_crtc = optc1_disable_crtc, 121 + /* used by enable_timing_synchronization. Not need for FPGA */ 122 + .is_counter_moving = optc1_is_counter_moving, 123 + .get_position = optc1_get_position, 124 + .get_frame_count = optc1_get_vblank_counter, 125 + .get_scanoutpos = optc1_get_crtc_scanoutpos, 126 + .get_otg_active_size = optc1_get_otg_active_size, 127 + .set_early_control = optc1_set_early_control, 128 + /* used by enable_timing_synchronization. 
Not need for FPGA */ 129 + .wait_for_state = optc1_wait_for_state, 130 + .set_blank_color = optc3_program_blank_color, 131 + .did_triggered_reset_occur = optc1_did_triggered_reset_occur, 132 + .triplebuffer_lock = optc3_triplebuffer_lock, 133 + .triplebuffer_unlock = optc2_triplebuffer_unlock, 134 + .enable_reset_trigger = optc1_enable_reset_trigger, 135 + .enable_crtc_reset = optc1_enable_crtc_reset, 136 + .disable_reset_trigger = optc1_disable_reset_trigger, 137 + .lock = optc3_lock, 138 + .unlock = optc1_unlock, 139 + .lock_doublebuffer_enable = optc3_lock_doublebuffer_enable, 140 + .lock_doublebuffer_disable = optc3_lock_doublebuffer_disable, 141 + .enable_optc_clock = optc1_enable_optc_clock, 142 + .set_drr = optc301_set_drr, 143 + .get_last_used_drr_vtotal = optc2_get_last_used_drr_vtotal, 144 + .set_vtotal_min_max = optc3_set_vtotal_min_max, 145 + .set_static_screen_control = optc1_set_static_screen_control, 146 + .program_stereo = optc1_program_stereo, 147 + .is_stereo_left_eye = optc1_is_stereo_left_eye, 148 + .tg_init = optc3_tg_init, 149 + .is_tg_enabled = optc1_is_tg_enabled, 150 + .is_optc_underflow_occurred = optc1_is_optc_underflow_occurred, 151 + .clear_optc_underflow = optc1_clear_optc_underflow, 152 + .setup_global_swap_lock = NULL, 153 + .get_crc = optc1_get_crc, 154 + .configure_crc = optc2_configure_crc, 155 + .set_dsc_config = optc3_set_dsc_config, 156 + .get_dsc_status = optc2_get_dsc_status, 157 + .set_dwb_source = NULL, 158 + .set_odm_bypass = optc3_set_odm_bypass, 159 + .set_odm_combine = optc3_set_odm_combine, 160 + .get_optc_source = optc2_get_optc_source, 161 + .set_out_mux = optc3_set_out_mux, 162 + .set_drr_trigger_window = optc3_set_drr_trigger_window, 163 + .set_vtotal_change_limit = optc3_set_vtotal_change_limit, 164 + .set_gsl = optc2_set_gsl, 165 + .set_gsl_source_select = optc2_set_gsl_source_select, 166 + .set_vtg_params = optc1_set_vtg_params, 167 + .program_manual_trigger = optc2_program_manual_trigger, 168 + .setup_manual_trigger = optc301_setup_manual_trigger, 169 + .get_hw_timing = optc1_get_hw_timing, 170 + .wait_drr_doublebuffer_pending_clear = optc3_wait_drr_doublebuffer_pending_clear, 171 + }; 172 + 173 + void dcn301_timing_generator_init(struct optc *optc1) 174 + { 175 + optc1->base.funcs = &dcn30_tg_funcs; 176 + 177 + optc1->max_h_total = optc1->tg_mask->OTG_H_TOTAL + 1; 178 + optc1->max_v_total = optc1->tg_mask->OTG_V_TOTAL + 1; 179 + 180 + optc1->min_h_blank = 32; 181 + optc1->min_v_blank = 3; 182 + optc1->min_v_blank_interlace = 5; 183 + optc1->min_h_sync_width = 4; 184 + optc1->min_v_sync_width = 1; 185 + }
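The new dcn301_optc.c is almost entirely the DCN30 function-pointer table with two hooks swapped: .set_drr and .setup_manual_trigger point at the optc301_* variants, and every other slot reuses the existing optc1/optc2/optc3 implementations, which is why the preceding change un-statics optc3_set_odm_combine() and optc3_wait_drr_doublebuffer_pending_clear(). The override pattern in miniature:

#include <stdio.h>

struct tg_funcs {
        void (*set_drr)(void);
        void (*lock)(void);
};

static void dcn30_set_drr(void) { puts("dcn30 set_drr"); }
static void dcn30_lock(void)    { puts("dcn30 lock"); }
static void dcn301_set_drr(void) { puts("dcn301 set_drr"); }

/* A variant table overrides only the hooks whose behavior differs. */
static const struct tg_funcs dcn30_funcs  = { dcn30_set_drr, dcn30_lock };
static const struct tg_funcs dcn301_funcs = { dcn301_set_drr, dcn30_lock };

int main(void)
{
        const struct tg_funcs *tg = &dcn30_funcs;

        tg->set_drr();  /* dcn30 behavior */
        tg = &dcn301_funcs;
        tg->set_drr();  /* dcn301-specific DRR programming */
        tg->lock();     /* shared dcn30 behavior */
        return 0;
}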
+36
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_optc.h
··· 1 + /* 2 + * Copyright 2020 Advanced Micro Devices, Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + * 22 + * Authors: AMD 23 + * 24 + */ 25 + 26 + #ifndef __DC_OPTC_DCN301_H__ 27 + #define __DC_OPTC_DCN301_H__ 28 + 29 + #include "dcn20/dcn20_optc.h" 30 + #include "dcn30/dcn30_optc.h" 31 + 32 + void dcn301_timing_generator_init(struct optc *optc1); 33 + void optc301_setup_manual_trigger(struct timing_generator *optc); 34 + void optc301_set_drr(struct timing_generator *optc, const struct drr_params *params); 35 + 36 + #endif /* __DC_OPTC_DCN301_H__ */
+2 -2
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
··· 42 42 #include "dcn30/dcn30_hubp.h" 43 43 #include "irq/dcn30/irq_service_dcn30.h" 44 44 #include "dcn30/dcn30_dpp.h" 45 - #include "dcn30/dcn30_optc.h" 45 + #include "dcn301/dcn301_optc.h" 46 46 #include "dcn20/dcn20_hwseq.h" 47 47 #include "dcn30/dcn30_hwseq.h" 48 48 #include "dce110/dce110_hw_sequencer.h" ··· 855 855 tgn10->tg_shift = &optc_shift; 856 856 tgn10->tg_mask = &optc_mask; 857 857 858 - dcn30_timing_generator_init(tgn10); 858 + dcn301_timing_generator_init(tgn10); 859 859 860 860 return &tgn10->base; 861 861 }
+1 -1
drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
··· 65 65 .timing_trace = false, 66 66 .clock_trace = true, 67 67 .disable_pplib_clock_request = true, 68 - .pipe_split_policy = MPC_SPLIT_DYNAMIC, 68 + .pipe_split_policy = MPC_SPLIT_AVOID, 69 69 .force_single_disp_pipe_split = false, 70 70 .disable_dcc = DCC_ENABLE, 71 71 .vsr_support = true,
+5 -1
drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
··· 295 295 pipe = &res_ctx->pipe_ctx[i]; 296 296 timing = &pipe->stream->timing; 297 297 298 - pipes[pipe_cnt].pipe.dest.vtotal = pipe->stream->adjust.v_total_min; 298 + if (pipe->stream->adjust.v_total_min != 0) 299 + pipes[pipe_cnt].pipe.dest.vtotal = pipe->stream->adjust.v_total_min; 300 + else 301 + pipes[pipe_cnt].pipe.dest.vtotal = timing->v_total; 302 + 299 303 pipes[pipe_cnt].pipe.dest.vblank_nom = timing->v_total - pipes[pipe_cnt].pipe.dest.vactive; 300 304 pipes[pipe_cnt].pipe.dest.vblank_nom = min(pipes[pipe_cnt].pipe.dest.vblank_nom, dcn3_14_ip.VBlankNomDefaultUS); 301 305 pipes[pipe_cnt].pipe.dest.vblank_nom = max(pipes[pipe_cnt].pipe.dest.vblank_nom, timing->v_sync_width);
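The dcn314 hunk above hardens the DML population path: vtotal now falls back to the timing's nominal v_total whenever no DRR minimum (adjust.v_total_min) has been programmed, and vblank_nom is clamped from above by the default cap and from below by the sync width. A standalone sketch of that pick-and-clamp logic, with illustrative values rather than the driver's:

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
        unsigned int v_total = 1125, v_active = 1080, v_sync_width = 5;
        unsigned int vblank_cap = 668;    /* stand-in for VBlankNomDefaultUS */
        unsigned int v_total_min = 0;     /* no DRR minimum programmed */

        /* fall back to the nominal total when DRR is not in use */
        unsigned int vtotal = v_total_min ? v_total_min : v_total;

        /* clamp nominal blanking between the sync width and the cap */
        unsigned int vblank_nom = v_total - v_active;

        vblank_nom = MIN(vblank_nom, vblank_cap);
        vblank_nom = MAX(vblank_nom, v_sync_width);

        printf("vtotal=%u vblank_nom=%u\n", vtotal, vblank_nom);
        return 0;
}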
+2 -12
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
··· 1798 1798 return result; 1799 1799 } 1800 1800 1801 - static bool intel_core_rkl_chk(void) 1802 - { 1803 - #if IS_ENABLED(CONFIG_X86_64) 1804 - struct cpuinfo_x86 *c = &cpu_data(0); 1805 - 1806 - return (c->x86 == 6 && c->x86_model == INTEL_FAM6_ROCKETLAKE); 1807 - #else 1808 - return false; 1809 - #endif 1810 - } 1811 - 1812 1801 static void smu7_init_dpm_defaults(struct pp_hwmgr *hwmgr) 1813 1802 { 1814 1803 struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); ··· 1824 1835 data->mclk_dpm_key_disabled = hwmgr->feature_mask & PP_MCLK_DPM_MASK ? false : true; 1825 1836 data->sclk_dpm_key_disabled = hwmgr->feature_mask & PP_SCLK_DPM_MASK ? false : true; 1826 1837 data->pcie_dpm_key_disabled = 1827 - intel_core_rkl_chk() || !(hwmgr->feature_mask & PP_PCIE_DPM_MASK); 1838 + !amdgpu_device_pcie_dynamic_switching_supported() || 1839 + !(hwmgr->feature_mask & PP_PCIE_DPM_MASK); 1828 1840 /* need to set voltage control types before EVV patching */ 1829 1841 data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE; 1830 1842 data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;
+6 -2
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 1927 1927 *size = 4; 1928 1928 break; 1929 1929 case AMDGPU_PP_SENSOR_GFX_MCLK: 1930 - ret = sienna_cichlid_get_current_clk_freq_by_table(smu, SMU_UCLK, (uint32_t *)data); 1930 + ret = sienna_cichlid_get_smu_metrics_data(smu, 1931 + METRICS_CURR_UCLK, 1932 + (uint32_t *)data); 1931 1933 *(uint32_t *)data *= 100; 1932 1934 *size = 4; 1933 1935 break; 1934 1936 case AMDGPU_PP_SENSOR_GFX_SCLK: 1935 - ret = sienna_cichlid_get_current_clk_freq_by_table(smu, SMU_GFXCLK, (uint32_t *)data); 1937 + ret = sienna_cichlid_get_smu_metrics_data(smu, 1938 + METRICS_AVERAGE_GFXCLK, 1939 + (uint32_t *)data); 1936 1940 *(uint32_t *)data *= 100; 1937 1941 *size = 4; 1938 1942 break;
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 949 949 break; 950 950 case AMDGPU_PP_SENSOR_GFX_MCLK: 951 951 ret = smu_v13_0_7_get_smu_metrics_data(smu, 952 - METRICS_AVERAGE_UCLK, 952 + METRICS_CURR_UCLK, 953 953 (uint32_t *)data); 954 954 *(uint32_t *)data *= 100; 955 955 *size = 4;
+10 -1
drivers/gpu/drm/drm_atomic.c
··· 140 140 if (!state->planes) 141 141 goto fail; 142 142 143 + /* 144 + * Because drm_atomic_state can be committed asynchronously we need our 145 + * own reference and cannot rely on the on implied by drm_file in the 146 + * ioctl call. 147 + */ 148 + drm_dev_get(dev); 143 149 state->dev = dev; 144 150 145 151 drm_dbg_atomic(dev, "Allocated atomic state %p\n", state); ··· 305 299 void __drm_atomic_state_free(struct kref *ref) 306 300 { 307 301 struct drm_atomic_state *state = container_of(ref, typeof(*state), ref); 308 - struct drm_mode_config *config = &state->dev->mode_config; 302 + struct drm_device *dev = state->dev; 303 + struct drm_mode_config *config = &dev->mode_config; 309 304 310 305 drm_atomic_state_clear(state); 311 306 ··· 318 311 drm_atomic_state_default_release(state); 319 312 kfree(state); 320 313 } 314 + 315 + drm_dev_put(dev); 321 316 } 322 317 EXPORT_SYMBOL(__drm_atomic_state_free); 323 318
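The drm_atomic.c change pairs a drm_dev_get() taken when the state is allocated with a drm_dev_put() in the kref release path, and __drm_atomic_state_free() now caches state->dev up front because the put may be the last reference. A toy sketch of that own-a-reference pattern (simplified types, not the DRM API):

#include <stdio.h>
#include <stdlib.h>

/* toy refcounted device; stands in for struct drm_device */
struct dev { int refs; };

static void dev_get(struct dev *d) { d->refs++; }
static void dev_put(struct dev *d)
{
        if (--d->refs == 0) {
                printf("device freed\n");
                free(d);
        }
}

struct state { struct dev *dev; };

/* allocation takes its own device reference ... */
static struct state *state_alloc(struct dev *d)
{
        struct state *s = malloc(sizeof(*s));

        dev_get(d);
        s->dev = d;
        return s;
}

/* ... and the release drops it only after the state is gone, mirroring
 * how __drm_atomic_state_free() caches state->dev before freeing */
static void state_free(struct state *s)
{
        struct dev *d = s->dev;    /* keep dev; s is about to vanish */

        free(s);
        dev_put(d);
}

int main(void)
{
        struct dev *d = malloc(sizeof(*d));
        struct state *s;

        d->refs = 1;
        s = state_alloc(d);
        dev_put(d);        /* the caller's reference goes away early ... */
        state_free(s);     /* ... but the state's own ref kept dev alive */
        return 0;
}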
+6
drivers/gpu/drm/drm_client_modeset.c
··· 311 311 can_clone = true; 312 312 dmt_mode = drm_mode_find_dmt(dev, 1024, 768, 60, false); 313 313 314 + if (!dmt_mode) 315 + goto fail; 316 + 314 317 for (i = 0; i < connector_count; i++) { 315 318 if (!enabled[i]) 316 319 continue; ··· 329 326 if (!modes[i]) 330 327 can_clone = false; 331 328 } 329 + kfree(dmt_mode); 332 330 333 331 if (can_clone) { 334 332 DRM_DEBUG_KMS("can clone using 1024x768\n"); 335 333 return true; 336 334 } 335 + fail: 337 336 DRM_INFO("kms: can't enable cloning when we probably wanted to.\n"); 338 337 return false; 339 338 } ··· 867 862 break; 868 863 } 869 864 865 + kfree(modeset->mode); 870 866 modeset->mode = drm_mode_duplicate(dev, mode); 871 867 drm_connector_get(connector); 872 868 modeset->connectors[modeset->num_connectors++] = connector;
+5
drivers/gpu/drm/i915/Makefile
··· 23 23 subdir-ccflags-y += $(call cc-disable-warning, frame-address) 24 24 subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror 25 25 26 + # Fine grained warnings disable 27 + CFLAGS_i915_pci.o = $(call cc-disable-warning, override-init) 28 + CFLAGS_display/intel_display_device.o = $(call cc-disable-warning, override-init) 29 + CFLAGS_display/intel_fbdev.o = $(call cc-disable-warning, override-init) 30 + 26 31 subdir-ccflags-y += -I$(srctree)/$(src) 27 32 28 33 # Please keep these build lists sorted!
-5
drivers/gpu/drm/i915/display/intel_display_device.c
··· 16 16 #include "intel_display_reg_defs.h" 17 17 #include "intel_fbc.h" 18 18 19 - __diag_push(); 20 - __diag_ignore_all("-Woverride-init", "Allow overriding inherited members"); 21 - 22 19 static const struct intel_display_device_info no_display = {}; 23 20 24 21 #define PIPE_A_OFFSET 0x70000 ··· 661 664 BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | 662 665 BIT(TRANSCODER_C) | BIT(TRANSCODER_D), 663 666 }; 664 - 665 - __diag_pop(); 666 667 667 668 #undef INTEL_VGA_DEVICE 668 669 #undef INTEL_QUANTA_VGA_DEVICE
-5
drivers/gpu/drm/i915/display/intel_fbdev.c
··· 135 135 return i915_gem_fb_mmap(obj, vma); 136 136 } 137 137 138 - __diag_push(); 139 - __diag_ignore_all("-Woverride-init", "Allow overriding the default ops"); 140 - 141 138 static const struct fb_ops intelfb_ops = { 142 139 .owner = THIS_MODULE, 143 140 __FB_DEFAULT_DEFERRED_OPS_RDWR(intel_fbdev), ··· 145 148 __FB_DEFAULT_DEFERRED_OPS_DRAW(intel_fbdev), 146 149 .fb_mmap = intel_fbdev_mmap, 147 150 }; 148 - 149 - __diag_pop(); 150 151 151 152 static int intelfb_alloc(struct drm_fb_helper *helper, 152 153 struct drm_fb_helper_surface_size *sizes)
-5
drivers/gpu/drm/i915/i915_pci.c
··· 38 38 #include "i915_reg.h" 39 39 #include "intel_pci_config.h" 40 40 41 - __diag_push(); 42 - __diag_ignore_all("-Woverride-init", "Allow overriding inherited members"); 43 - 44 41 #define PLATFORM(x) .platform = (x) 45 42 #define GEN(x) \ 46 43 .__runtime.graphics.ip.ver = (x), \ ··· 842 845 }; 843 846 844 847 #undef PLATFORM 845 - 846 - __diag_pop(); 847 848 848 849 /* 849 850 * Make sure any device matches here are from most specific to most
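The three i915 hunks above plus the Makefile hunk trade scattered __diag_push()/__diag_ignore_all() pairs for per-object cc-disable-warning flags. The warning at issue fires whenever a designated initializer overwrites an earlier one, which these device-info tables do on purpose when a macro supplies defaults; a minimal reproducer, assuming GCC:

/* gcc -Woverride-init -c reproducer.c
 * -> warning: initialized field overwritten [-Woverride-init] */
struct info {
        int ver;
        int pipes;
};

#define GEN9_DEFAULTS .ver = 9, .pipes = 3

const struct info skl_info = {
        GEN9_DEFAULTS,
        .pipes = 4,     /* deliberately overrides .pipes = 3 from the macro */
};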
+1
drivers/gpu/drm/i915/i915_perf.c
··· 4431 4431 static const struct i915_range xehp_oa_b_counters[] = { 4432 4432 { .start = 0xdc48, .end = 0xdc48 }, /* OAA_ENABLE_REG */ 4433 4433 { .start = 0xdd00, .end = 0xdd48 }, /* OAG_LCE0_0 - OAA_LENABLE_REG */ 4434 + {} 4434 4435 }; 4435 4436 4436 4437 static const struct i915_range gen7_oa_mux_regs[] = {
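The one-line i915_perf.c fix adds the empty terminator that walkers of these register-range tables rely on; without it, a lookup can run past the end of xehp_oa_b_counters. The sentinel idiom in miniature (invented table walker, not i915's):

#include <stdio.h>

struct range { unsigned int start, end; };

static const struct range table[] = {
        { .start = 0xdc48, .end = 0xdc48 },
        { .start = 0xdd00, .end = 0xdd48 },
        {}      /* sentinel: all-zero entry terminates the walk */
};

static int in_table(unsigned int reg)
{
        for (const struct range *r = table; r->start || r->end; r++)
                if (reg >= r->start && reg <= r->end)
                        return 1;
        return 0;
}

int main(void)
{
        printf("0xdd10 valid: %d\n", in_table(0xdd10));
        printf("0x1234 valid: %d\n", in_table(0x1234));
        return 0;
}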
+4
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 1877 1877 nvif_outp_dtor(&nv_encoder->outp); 1878 1878 1879 1879 drm_encoder_cleanup(encoder); 1880 + 1881 + mutex_destroy(&nv_encoder->dp.hpd_irq_lock); 1880 1882 kfree(encoder); 1881 1883 } 1882 1884 ··· 1922 1920 nv_encoder->dcb = dcbe; 1923 1921 nv_encoder->i2c = ddc; 1924 1922 nv_encoder->aux = aux; 1923 + 1924 + mutex_init(&nv_encoder->dp.hpd_irq_lock); 1925 1925 1926 1926 encoder = to_drm_encoder(nv_encoder); 1927 1927 encoder->possible_crtcs = dcbe->heads;
+2 -2
drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
··· 16 16 const struct nvkm_i2c_bus_func *func; 17 17 struct nvkm_i2c_pad *pad; 18 18 #define NVKM_I2C_BUS_CCB(n) /* 'n' is ccb index */ (n) 19 - #define NVKM_I2C_BUS_EXT(n) /* 'n' is dcb external encoder type */ ((n) + 0x100) 19 + #define NVKM_I2C_BUS_EXT(n) /* 'n' is dcb external encoder type */ ((n) + 0x10) 20 20 #define NVKM_I2C_BUS_PRI /* ccb primary comm. port */ -1 21 21 #define NVKM_I2C_BUS_SEC /* ccb secondary comm. port */ -2 22 22 int id; ··· 38 38 const struct nvkm_i2c_aux_func *func; 39 39 struct nvkm_i2c_pad *pad; 40 40 #define NVKM_I2C_AUX_CCB(n) /* 'n' is ccb index */ (n) 41 - #define NVKM_I2C_AUX_EXT(n) /* 'n' is dcb external encoder type */ ((n) + 0x100) 41 + #define NVKM_I2C_AUX_EXT(n) /* 'n' is dcb external encoder type */ ((n) + 0x10) 42 42 int id; 43 43 44 44 struct mutex mutex;
+18 -9
drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c
··· 81 81 return -ENOSYS; 82 82 83 83 list_for_each_entry(outp, &conn->disp->outps, head) { 84 - if (outp->info.connector == conn->index && outp->dp.aux) { 85 - if (args->v0.types & NVIF_CONN_EVENT_V0_PLUG ) bits |= NVKM_I2C_PLUG; 86 - if (args->v0.types & NVIF_CONN_EVENT_V0_UNPLUG) bits |= NVKM_I2C_UNPLUG; 87 - if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ ) bits |= NVKM_I2C_IRQ; 84 + if (outp->info.connector == conn->index) 85 + break; 86 + } 88 87 89 - return nvkm_uevent_add(uevent, &device->i2c->event, outp->dp.aux->id, bits, 90 - nvkm_uconn_uevent_aux); 91 - } 88 + if (&outp->head == &conn->disp->outps) 89 + return -EINVAL; 90 + 91 + if (outp->dp.aux && !outp->info.location) { 92 + if (args->v0.types & NVIF_CONN_EVENT_V0_PLUG ) bits |= NVKM_I2C_PLUG; 93 + if (args->v0.types & NVIF_CONN_EVENT_V0_UNPLUG) bits |= NVKM_I2C_UNPLUG; 94 + if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ ) bits |= NVKM_I2C_IRQ; 95 + 96 + return nvkm_uevent_add(uevent, &device->i2c->event, outp->dp.aux->id, bits, 97 + nvkm_uconn_uevent_aux); 92 98 } 93 99 94 100 if (args->v0.types & NVIF_CONN_EVENT_V0_PLUG ) bits |= NVKM_GPIO_HI; 95 101 if (args->v0.types & NVIF_CONN_EVENT_V0_UNPLUG) bits |= NVKM_GPIO_LO; 96 - if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ) 97 - return -EINVAL; 102 + if (args->v0.types & NVIF_CONN_EVENT_V0_IRQ) { 103 + /* TODO: support DP IRQ on ANX9805 and remove this hack. */ 104 + if (!outp->info.location) 105 + return -EINVAL; 106 + } 98 107 99 108 return nvkm_uevent_add(uevent, &device->gpio->event, conn->info.hpd, bits, 100 109 nvkm_uconn_uevent_gpio);
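The uconn.c rework leans on a property of list_for_each_entry(): if the loop finishes without hitting break, the cursor ends up as the container_of() of the list head itself, so `&outp->head == &conn->disp->outps` is the conventional "no match found" test. A compact userspace imitation of the idiom (GNU C, toy types):

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry(pos, head, member)                          \
        for (pos = container_of((head)->next, typeof(*pos), member);   \
             &pos->member != (head);                                   \
             pos = container_of(pos->member.next, typeof(*pos), member))

struct outp { int connector; struct list_head head; };

int main(void)
{
        struct list_head outps = { &outps, &outps };
        struct outp a = { .connector = 0 }, b = { .connector = 1 };
        struct outp *outp;

        /* splice a and b onto the circular list: outps -> a -> b -> outps */
        a.head = (struct list_head){ .next = &b.head, .prev = &outps };
        b.head = (struct list_head){ .next = &outps, .prev = &a.head };
        outps.next = &a.head;
        outps.prev = &b.head;

        list_for_each_entry(outp, &outps, head)
                if (outp->connector == 7)       /* no such connector */
                        break;

        if (&outp->head == &outps)
                printf("not found: cursor parked on the list head\n");
        return 0;
}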
+9 -2
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
··· 260 260 { 261 261 struct nvkm_bios *bios = device->bios; 262 262 struct nvkm_i2c *i2c; 263 + struct nvkm_i2c_aux *aux; 263 264 struct dcb_i2c_entry ccbE; 264 265 struct dcb_output dcbE; 265 266 u8 ver, hdr; 266 - int ret, i; 267 + int ret, i, ids; 267 268 268 269 if (!(i2c = *pi2c = kzalloc(sizeof(*i2c), GFP_KERNEL))) 269 270 return -ENOMEM; ··· 407 406 } 408 407 } 409 408 410 - return nvkm_event_init(&nvkm_i2c_intr_func, &i2c->subdev, 4, i, &i2c->event); 409 + ids = 0; 410 + list_for_each_entry(aux, &i2c->aux, head) 411 + ids = max(ids, aux->id + 1); 412 + if (!ids) 413 + return 0; 414 + 415 + return nvkm_event_init(&nvkm_i2c_intr_func, &i2c->subdev, 4, ids, &i2c->event); 411 416 }
+1 -171
drivers/idle/intel_idle.c
··· 199 199 return __intel_idle(dev, drv, index); 200 200 } 201 201 202 - static __always_inline int __intel_idle_hlt(struct cpuidle_device *dev, 203 - struct cpuidle_driver *drv, int index) 204 - { 205 - raw_safe_halt(); 206 - raw_local_irq_disable(); 207 - return index; 208 - } 209 - 210 - /** 211 - * intel_idle_hlt - Ask the processor to enter the given idle state using hlt. 212 - * @dev: cpuidle device of the target CPU. 213 - * @drv: cpuidle driver (assumed to point to intel_idle_driver). 214 - * @index: Target idle state index. 215 - * 216 - * Use the HLT instruction to notify the processor that the CPU represented by 217 - * @dev is idle and it can try to enter the idle state corresponding to @index. 218 - * 219 - * Must be called under local_irq_disable(). 220 - */ 221 - static __cpuidle int intel_idle_hlt(struct cpuidle_device *dev, 222 - struct cpuidle_driver *drv, int index) 223 - { 224 - return __intel_idle_hlt(dev, drv, index); 225 - } 226 - 227 - static __cpuidle int intel_idle_hlt_irq_on(struct cpuidle_device *dev, 228 - struct cpuidle_driver *drv, int index) 229 - { 230 - int ret; 231 - 232 - raw_local_irq_enable(); 233 - ret = __intel_idle_hlt(dev, drv, index); 234 - raw_local_irq_disable(); 235 - 236 - return ret; 237 - } 238 - 239 202 /** 240 203 * intel_idle_s2idle - Ask the processor to enter the given idle state. 241 204 * @dev: cpuidle device of the target CPU. ··· 1242 1279 .enter = NULL } 1243 1280 }; 1244 1281 1245 - static struct cpuidle_state vmguest_cstates[] __initdata = { 1246 - { 1247 - .name = "C1", 1248 - .desc = "HLT", 1249 - .flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_IRQ_ENABLE, 1250 - .exit_latency = 5, 1251 - .target_residency = 10, 1252 - .enter = &intel_idle_hlt, }, 1253 - { 1254 - .name = "C1L", 1255 - .desc = "Long HLT", 1256 - .flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_TLB_FLUSHED, 1257 - .exit_latency = 5, 1258 - .target_residency = 200, 1259 - .enter = &intel_idle_hlt, }, 1260 - { 1261 - .enter = NULL } 1262 - }; 1263 - 1264 1282 static const struct idle_cpu idle_cpu_nehalem __initconst = { 1265 1283 .state_table = nehalem_cstates, 1266 1284 .auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE, ··· 1841 1897 1842 1898 static void state_update_enter_method(struct cpuidle_state *state, int cstate) 1843 1899 { 1844 - if (state->enter == intel_idle_hlt) { 1845 - if (force_irq_on) { 1846 - pr_info("forced intel_idle_irq for state %d\n", cstate); 1847 - state->enter = intel_idle_hlt_irq_on; 1848 - } 1849 - return; 1850 - } 1851 - if (state->enter == intel_idle_hlt_irq_on) 1852 - return; /* no update scenarios */ 1853 - 1854 1900 if (state->flags & CPUIDLE_FLAG_INIT_XSTATE) { 1855 1901 /* 1856 1902 * Combining with XSTATE with IBRS or IRQ_ENABLE flags ··· 1872 1938 pr_info("forced intel_idle_irq for state %d\n", cstate); 1873 1939 state->enter = intel_idle_irq; 1874 1940 } 1875 - } 1876 - 1877 - /* 1878 - * For mwait based states, we want to verify the cpuid data to see if the state 1879 - * is actually supported by this specific CPU. 1880 - * For non-mwait based states, this check should be skipped. 1881 - */ 1882 - static bool should_verify_mwait(struct cpuidle_state *state) 1883 - { 1884 - if (state->enter == intel_idle_hlt) 1885 - return false; 1886 - if (state->enter == intel_idle_hlt_irq_on) 1887 - return false; 1888 - 1889 - return true; 1890 1941 } 1891 1942 1892 1943 static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv) ··· 1922 2003 } 1923 2004 1924 2005 mwait_hint = flg2MWAIT(cpuidle_state_table[cstate].flags); 1925 - if (should_verify_mwait(&cpuidle_state_table[cstate]) && !intel_idle_verify_cstate(mwait_hint)) 2006 + if (!intel_idle_verify_cstate(mwait_hint)) 1926 2007 continue; 1927 2008 1928 2009 /* Structure copy. */ ··· 2056 2137 cpuidle_unregister_device(per_cpu_ptr(intel_idle_cpuidle_devices, i)); 2057 2138 } 2058 2139 2059 - /* 2060 - * Match up the latency and break even point of the bare metal (cpu based) 2061 - * states with the deepest VM available state. 2062 - * 2063 - * We only want to do this for the deepest state, the ones that has 2064 - * the TLB_FLUSHED flag set on the . 2065 - * 2066 - * All our short idle states are dominated by vmexit/vmenter latencies, 2067 - * not the underlying hardware latencies so we keep our values for these. 2068 - */ 2069 - static void __init matchup_vm_state_with_baremetal(void) 2070 - { 2071 - int cstate; 2072 - 2073 - for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) { 2074 - int matching_cstate; 2075 - 2076 - if (intel_idle_max_cstate_reached(cstate)) 2077 - break; 2078 - 2079 - if (!cpuidle_state_table[cstate].enter) 2080 - break; 2081 - 2082 - if (!(cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_TLB_FLUSHED)) 2083 - continue; 2084 - 2085 - for (matching_cstate = 0; matching_cstate < CPUIDLE_STATE_MAX; ++matching_cstate) { 2086 - if (!icpu->state_table[matching_cstate].enter) 2087 - break; 2088 - if (icpu->state_table[matching_cstate].exit_latency > cpuidle_state_table[cstate].exit_latency) { 2089 - cpuidle_state_table[cstate].exit_latency = icpu->state_table[matching_cstate].exit_latency; 2090 - cpuidle_state_table[cstate].target_residency = icpu->state_table[matching_cstate].target_residency; 2091 - } 2092 - } 2093 - 2094 - } 2095 - } 2096 - 2097 - 2098 - static int __init intel_idle_vminit(const struct x86_cpu_id *id) 2099 - { 2100 - int retval; 2101 - 2102 - cpuidle_state_table = vmguest_cstates; 2103 - 2104 - icpu = (const struct idle_cpu *)id->driver_data; 2105 - 2106 - pr_debug("v" INTEL_IDLE_VERSION " model 0x%X\n", 2107 - boot_cpu_data.x86_model); 2108 - 2109 - intel_idle_cpuidle_devices = alloc_percpu(struct cpuidle_device); 2110 - if (!intel_idle_cpuidle_devices) 2111 - return -ENOMEM; 2112 - 2113 - /* 2114 - * We don't know exactly what the host will do when we go idle, but as a worst estimate 2115 - * we can assume that the exit latency of the deepest host state will be hit for our 2116 - * deep (long duration) guest idle state. 2117 - * The same logic applies to the break even point for the long duration guest idle state. 2118 - * So lets copy these two properties from the table we found for the host CPU type. 2119 - */ 2120 - matchup_vm_state_with_baremetal(); 2121 - 2122 - intel_idle_cpuidle_driver_init(&intel_idle_driver); 2123 - 2124 - retval = cpuidle_register_driver(&intel_idle_driver); 2125 - if (retval) { 2126 - struct cpuidle_driver *drv = cpuidle_get_driver(); 2127 - printk(KERN_DEBUG pr_fmt("intel_idle yielding to %s\n"), 2128 - drv ? drv->name : "none"); 2129 - goto init_driver_fail; 2130 - } 2131 - 2132 - retval = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "idle/intel:online", 2133 - intel_idle_cpu_online, NULL); 2134 - if (retval < 0) 2135 - goto hp_setup_fail; 2136 - 2137 - return 0; 2138 - hp_setup_fail: 2139 - intel_idle_cpuidle_devices_uninit(); 2140 - cpuidle_unregister_driver(&intel_idle_driver); 2141 - init_driver_fail: 2142 - free_percpu(intel_idle_cpuidle_devices); 2143 - return retval; 2144 - } 2145 - 2146 2140 static int __init intel_idle_init(void) 2147 2141 { 2148 2142 const struct x86_cpu_id *id; ··· 2074 2242 id = x86_match_cpu(intel_idle_ids); 2075 2243 if (id) { 2076 2244 if (!boot_cpu_has(X86_FEATURE_MWAIT)) { 2077 - if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) 2078 - return intel_idle_vminit(id); 2079 2245 pr_debug("Please enable MWAIT in BIOS SETUP\n"); 2080 2246 return -ENODEV; 2081 2247 }
+8 -2
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 227 227 __mcp251xfd_chip_set_mode(const struct mcp251xfd_priv *priv, 228 228 const u8 mode_req, bool nowait) 229 229 { 230 + const struct can_bittiming *bt = &priv->can.bittiming; 231 + unsigned long timeout_us = MCP251XFD_POLL_TIMEOUT_US; 230 232 u32 con = 0, con_reqop, osc = 0; 231 233 u8 mode; 232 234 int err; ··· 248 246 if (mode_req == MCP251XFD_REG_CON_MODE_SLEEP || nowait) 249 247 return 0; 250 248 249 + if (bt->bitrate) 250 + timeout_us = max_t(unsigned long, timeout_us, 251 + MCP251XFD_FRAME_LEN_MAX_BITS * USEC_PER_SEC / 252 + bt->bitrate); 253 + 251 254 err = regmap_read_poll_timeout(priv->map_reg, MCP251XFD_REG_CON, con, 252 255 !mcp251xfd_reg_invalid(con) && 253 256 FIELD_GET(MCP251XFD_REG_CON_OPMOD_MASK, 254 257 con) == mode_req, 255 - MCP251XFD_POLL_SLEEP_US, 256 - MCP251XFD_POLL_TIMEOUT_US); 258 + MCP251XFD_POLL_SLEEP_US, timeout_us); 257 259 if (err != -ETIMEDOUT && err != -EBADMSG) 258 260 return err; 259 261
+1
drivers/net/can/spi/mcp251xfd/mcp251xfd.h
··· 387 387 #define MCP251XFD_OSC_STAB_TIMEOUT_US (10 * MCP251XFD_OSC_STAB_SLEEP_US) 388 388 #define MCP251XFD_POLL_SLEEP_US (10) 389 389 #define MCP251XFD_POLL_TIMEOUT_US (USEC_PER_MSEC) 390 + #define MCP251XFD_FRAME_LEN_MAX_BITS (736) 390 391 391 392 /* Misc */ 392 393 #define MCP251XFD_NAPI_WEIGHT 32
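Together the two mcp251xfd hunks scale the mode-change poll timeout with the configured bitrate: MCP251XFD_FRAME_LEN_MAX_BITS (736, roughly the longest CAN-FD frame on the wire) divided by the bitrate bounds how long a pending frame can delay the mode switch, with the old 1 ms value kept as a floor. The arithmetic, worked standalone:

#include <stdio.h>

#define USEC_PER_SEC            1000000UL
#define FRAME_LEN_MAX_BITS      736UL
#define POLL_TIMEOUT_US         1000UL  /* the old fixed timeout: 1 ms */

#define max_t(t, a, b) ((t)(a) > (t)(b) ? (t)(a) : (t)(b))

int main(void)
{
        unsigned long bitrates[] = { 1000000, 500000, 125000 };

        for (unsigned int i = 0; i < sizeof(bitrates) / sizeof(bitrates[0]); i++) {
                unsigned long us = max_t(unsigned long, POLL_TIMEOUT_US,
                                         FRAME_LEN_MAX_BITS * USEC_PER_SEC /
                                         bitrates[i]);

                printf("%7lu bit/s -> timeout %lu us\n", bitrates[i], us);
        }
        return 0;
}

At 125 kbit/s the computed ceiling is 5888 us, which shows why the fixed 1 ms timeout could expire spuriously at low bitrates.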
+74 -56
drivers/net/can/usb/gs_usb.c
··· 303 303 struct can_bittiming_const bt_const, data_bt_const; 304 304 unsigned int channel; /* channel number */ 305 305 306 - /* time counter for hardware timestamps */ 307 - struct cyclecounter cc; 308 - struct timecounter tc; 309 - spinlock_t tc_lock; /* spinlock to guard access tc->cycle_last */ 310 - struct delayed_work timestamp; 311 - 312 306 u32 feature; 313 307 unsigned int hf_size_tx; 314 308 ··· 319 325 struct gs_can *canch[GS_MAX_INTF]; 320 326 struct usb_anchor rx_submitted; 321 327 struct usb_device *udev; 328 + 329 + /* time counter for hardware timestamps */ 330 + struct cyclecounter cc; 331 + struct timecounter tc; 332 + spinlock_t tc_lock; /* spinlock to guard access tc->cycle_last */ 333 + struct delayed_work timestamp; 334 + 322 335 unsigned int hf_size_rx; 323 336 u8 active_channels; 324 337 }; ··· 389 388 GFP_KERNEL); 390 389 } 391 390 392 - static inline int gs_usb_get_timestamp(const struct gs_can *dev, 391 + static inline int gs_usb_get_timestamp(const struct gs_usb *parent, 393 392 u32 *timestamp_p) 394 393 { 395 394 __le32 timestamp; 396 395 int rc; 397 396 398 - rc = usb_control_msg_recv(dev->udev, 0, GS_USB_BREQ_TIMESTAMP, 397 + rc = usb_control_msg_recv(parent->udev, 0, GS_USB_BREQ_TIMESTAMP, 399 398 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE, 400 - dev->channel, 0, 399 + 0, 0, 401 400 &timestamp, sizeof(timestamp), 402 401 USB_CTRL_GET_TIMEOUT, 403 402 GFP_KERNEL); ··· 411 410 412 411 static u64 gs_usb_timestamp_read(const struct cyclecounter *cc) __must_hold(&dev->tc_lock) 413 412 { 414 - struct gs_can *dev = container_of(cc, struct gs_can, cc); 413 + struct gs_usb *parent = container_of(cc, struct gs_usb, cc); 415 414 u32 timestamp = 0; 416 415 int err; 417 416 418 - lockdep_assert_held(&dev->tc_lock); 417 + lockdep_assert_held(&parent->tc_lock); 419 418 420 419 /* drop lock for synchronous USB transfer */ 421 - spin_unlock_bh(&dev->tc_lock); 422 - err = gs_usb_get_timestamp(dev, &timestamp); 423 - spin_lock_bh(&dev->tc_lock); 420 + spin_unlock_bh(&parent->tc_lock); 421 + err = gs_usb_get_timestamp(parent, &timestamp); 422 + spin_lock_bh(&parent->tc_lock); 424 423 if (err) 425 - netdev_err(dev->netdev, 426 - "Error %d while reading timestamp. HW timestamps may be inaccurate.", 427 - err); 424 + dev_err(&parent->udev->dev, 425 + "Error %d while reading timestamp. HW timestamps may be inaccurate.", 426 + err); 428 427 429 428 return timestamp; 430 429 } ··· 432 431 static void gs_usb_timestamp_work(struct work_struct *work) 433 432 { 434 433 struct delayed_work *delayed_work = to_delayed_work(work); 435 - struct gs_can *dev; 434 + struct gs_usb *parent; 436 435 437 - dev = container_of(delayed_work, struct gs_can, timestamp); 438 - spin_lock_bh(&dev->tc_lock); 439 - timecounter_read(&dev->tc); 440 - spin_unlock_bh(&dev->tc_lock); 436 + parent = container_of(delayed_work, struct gs_usb, timestamp); 437 + spin_lock_bh(&parent->tc_lock); 438 + timecounter_read(&parent->tc); 439 + spin_unlock_bh(&parent->tc_lock); 441 440 442 - schedule_delayed_work(&dev->timestamp, 441 + schedule_delayed_work(&parent->timestamp, 443 442 GS_USB_TIMESTAMP_WORK_DELAY_SEC * HZ); 444 443 } 445 444 ··· 447 446 struct sk_buff *skb, u32 timestamp) 448 447 { 449 448 struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb); 449 + struct gs_usb *parent = dev->parent; 450 450 u64 ns; 451 451 452 - spin_lock_bh(&dev->tc_lock); 453 - ns = timecounter_cyc2time(&dev->tc, timestamp); 454 - spin_unlock_bh(&dev->tc_lock); 452 + spin_lock_bh(&parent->tc_lock); 453 + ns = timecounter_cyc2time(&parent->tc, timestamp); 454 + spin_unlock_bh(&parent->tc_lock); 455 455 456 456 hwtstamps->hwtstamp = ns_to_ktime(ns); 457 457 } 458 458 459 - static void gs_usb_timestamp_init(struct gs_can *dev) 459 + static void gs_usb_timestamp_init(struct gs_usb *parent) 460 460 { 461 - struct cyclecounter *cc = &dev->cc; 461 + struct cyclecounter *cc = &parent->cc; 462 462 463 463 cc->read = gs_usb_timestamp_read; 464 464 cc->mask = CYCLECOUNTER_MASK(32); 465 465 cc->shift = 32 - bits_per(NSEC_PER_SEC / GS_USB_TIMESTAMP_TIMER_HZ); 466 466 cc->mult = clocksource_hz2mult(GS_USB_TIMESTAMP_TIMER_HZ, cc->shift); 467 467 468 - spin_lock_init(&dev->tc_lock); 469 - spin_lock_bh(&dev->tc_lock); 470 - timecounter_init(&dev->tc, &dev->cc, ktime_get_real_ns()); 471 - spin_unlock_bh(&dev->tc_lock); 468 + spin_lock_init(&parent->tc_lock); 469 + spin_lock_bh(&parent->tc_lock); 470 + timecounter_init(&parent->tc, &parent->cc, ktime_get_real_ns()); 471 + spin_unlock_bh(&parent->tc_lock); 472 472 473 - INIT_DELAYED_WORK(&dev->timestamp, gs_usb_timestamp_work); 474 - schedule_delayed_work(&dev->timestamp, 473 + INIT_DELAYED_WORK(&parent->timestamp, gs_usb_timestamp_work); 474 + schedule_delayed_work(&parent->timestamp, 475 475 GS_USB_TIMESTAMP_WORK_DELAY_SEC * HZ); 476 476 } 477 477 478 - static void gs_usb_timestamp_stop(struct gs_can *dev) 478 + static void gs_usb_timestamp_stop(struct gs_usb *parent) 479 479 { 480 - cancel_delayed_work_sync(&dev->timestamp); 480 + cancel_delayed_work_sync(&parent->timestamp); 481 481 } 482 482 483 483 static void gs_update_state(struct gs_can *dev, struct can_frame *cf) ··· 561 559 562 560 if (!netif_device_present(netdev)) 563 561 return; 562 + 563 + if (!netif_running(netdev)) 564 + goto resubmit_urb; 564 565 565 566 if (hf->echo_id == -1) { /* normal rx */ 566 567 if (hf->flags & GS_CAN_FLAG_FD) { ··· 838 833 .mode = cpu_to_le32(GS_CAN_MODE_START), 839 834 }; 840 835 struct gs_host_frame *hf; 836 + struct urb *urb = NULL; 841 837 u32 ctrlmode; 842 838 u32 flags = 0; 843 839 int rc, i; ··· 861 855 } 862 856 863 857 if (!parent->active_channels) { 858 + if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 859 + gs_usb_timestamp_init(parent); 860 + 864 861 for (i = 0; i < GS_MAX_RX_URBS; i++) { 865 - struct urb *urb; 866 862 u8 *buf; 867 863 868 864 /* alloc rx urb */ 869 865 urb = usb_alloc_urb(0, GFP_KERNEL); 870 - if (!urb) 871 - return -ENOMEM; 866 + if (!urb) { 867 + rc = -ENOMEM; 868 + goto out_usb_kill_anchored_urbs; 869 + } 872 870 873 871 /* alloc rx buffer */ 874 872 buf = kmalloc(dev->parent->hf_size_rx, ··· 880 870 if (!buf) { 881 871 netdev_err(netdev, 882 872 "No memory left for USB buffer\n"); 883 - usb_free_urb(urb); 884 - return -ENOMEM; 873 + rc = -ENOMEM; 874 + goto out_usb_free_urb; 885 875 } 886 876 887 877 /* fill, anchor, and submit rx urb */ ··· 904 894 netdev_err(netdev, 905 895 "usb_submit failed (err=%d)\n", rc); 906 896 907 - usb_unanchor_urb(urb); 908 - usb_free_urb(urb); 909 - break; 897 + goto out_usb_unanchor_urb; 910 898 } 911 899 912 900 /* Drop reference, ··· 934 926 flags |= GS_CAN_MODE_FD; 935 927 936 928 /* if hardware supports timestamps, enable it */ 937 - if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) { 929 + if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 938 930 flags |= GS_CAN_MODE_HW_TIMESTAMP; 939 - 940 - /* start polling timestamp */ 941 - gs_usb_timestamp_init(dev); 942 - } 943 931 944 932 /* finally start device */ 945 933 dev->can.state = CAN_STATE_ERROR_ACTIVE; ··· 946 942 GFP_KERNEL); 947 943 if (rc) { 948 944 netdev_err(netdev, "Couldn't start device (err=%d)\n", rc); 949 - if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 950 - gs_usb_timestamp_stop(dev); 951 945 dev->can.state = CAN_STATE_STOPPED; 952 - return rc; 946 + 947 + goto out_usb_kill_anchored_urbs; 953 948 } 954 949 955 950 parent->active_channels++; ··· 956 953 netif_start_queue(netdev); 957 954 958 955 return 0; 956 + 957 + out_usb_unanchor_urb: 958 + usb_unanchor_urb(urb); 959 + out_usb_free_urb: 960 + usb_free_urb(urb); 961 + out_usb_kill_anchored_urbs: 962 + if (!parent->active_channels) { 963 + usb_kill_anchored_urbs(&dev->tx_submitted); 964 + 965 + if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 966 + gs_usb_timestamp_stop(parent); 967 + } 968 + 969 + close_candev(netdev); 970 + 971 + return rc; 959 972 } 960 973 961 974 static int gs_usb_get_state(const struct net_device *netdev, ··· 1017 998 1018 999 netif_stop_queue(netdev); 1019 1000 1020 - /* stop polling timestamp */ 1021 - if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 1022 - gs_usb_timestamp_stop(dev); 1023 - 1024 1001 /* Stop polling */ 1025 1002 parent->active_channels--; 1026 1003 if (!parent->active_channels) { 1027 1004 usb_kill_anchored_urbs(&parent->rx_submitted); 1005 + 1006 + if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP) 1007 + gs_usb_timestamp_stop(parent); 1028 1008 } 1029 1009 1030 1010 /* Stop sending URBs */
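Besides the error-path unwinding, the gs_usb diff moves the cyclecounter/timecounter from gs_can into the shared gs_usb, giving one timestamp clock per USB device instead of one per channel. The mult/shift pair programmed in gs_usb_timestamp_init() is plain fixed-point: ns = (ticks * mult) >> shift. A standalone sketch of that setup, assuming a 1 MHz device timer (the real rate is GS_USB_TIMESTAMP_TIMER_HZ):

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC    1000000000ULL
#define TIMER_HZ        1000000U        /* assumed 1 MHz device timer */

static unsigned int bits_per(unsigned long n)   /* fls()-style helper */
{
        unsigned int bits = 0;

        while (n) {
                bits++;
                n >>= 1;
        }
        return bits;
}

int main(void)
{
        unsigned int shift = 32 - bits_per(NSEC_PER_SEC / TIMER_HZ);
        uint32_t mult = (uint32_t)((NSEC_PER_SEC << shift) / TIMER_HZ);

        /* convert a 1500-tick delta (1.5 ms at 1 MHz) to nanoseconds */
        uint64_t delta = 1500;
        uint64_t ns = (delta * mult) >> shift;

        printf("shift=%u mult=%u -> %llu ns\n", shift, (unsigned)mult,
               (unsigned long long)ns);
        return 0;
}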
+7 -1
drivers/net/dsa/microchip/ksz8795.c
··· 506 506 (data_hi & masks[STATIC_MAC_TABLE_FWD_PORTS]) >> 507 507 shifts[STATIC_MAC_FWD_PORTS]; 508 508 alu->is_override = (data_hi & masks[STATIC_MAC_TABLE_OVERRIDE]) ? 1 : 0; 509 - data_hi >>= 1; 509 + 510 + /* KSZ8795 family switches have STATIC_MAC_TABLE_USE_FID and 511 + * STATIC_MAC_TABLE_FID definitions off by 1 when doing read on the 512 + * static MAC table compared to doing write. 513 + */ 514 + if (ksz_is_ksz87xx(dev)) 515 + data_hi >>= 1; 510 516 alu->is_static = true; 511 517 alu->is_use_fid = (data_hi & masks[STATIC_MAC_TABLE_USE_FID]) ? 1 : 0; 512 518 alu->fid = (data_hi & masks[STATIC_MAC_TABLE_FID]) >>
+4 -4
drivers/net/dsa/microchip/ksz_common.c
··· 331 331 [STATIC_MAC_TABLE_VALID] = BIT(21), 332 332 [STATIC_MAC_TABLE_USE_FID] = BIT(23), 333 333 [STATIC_MAC_TABLE_FID] = GENMASK(30, 24), 334 - [STATIC_MAC_TABLE_OVERRIDE] = BIT(26), 335 - [STATIC_MAC_TABLE_FWD_PORTS] = GENMASK(24, 20), 334 + [STATIC_MAC_TABLE_OVERRIDE] = BIT(22), 335 + [STATIC_MAC_TABLE_FWD_PORTS] = GENMASK(20, 16), 336 336 [DYNAMIC_MAC_TABLE_ENTRIES_H] = GENMASK(6, 0), 337 - [DYNAMIC_MAC_TABLE_MAC_EMPTY] = BIT(8), 337 + [DYNAMIC_MAC_TABLE_MAC_EMPTY] = BIT(7), 338 338 [DYNAMIC_MAC_TABLE_NOT_READY] = BIT(7), 339 339 [DYNAMIC_MAC_TABLE_ENTRIES] = GENMASK(31, 29), 340 - [DYNAMIC_MAC_TABLE_FID] = GENMASK(26, 20), 340 + [DYNAMIC_MAC_TABLE_FID] = GENMASK(22, 16), 341 341 [DYNAMIC_MAC_TABLE_SRC_PORT] = GENMASK(26, 24), 342 342 [DYNAMIC_MAC_TABLE_TIMESTAMP] = GENMASK(28, 27), 343 343 [P_MII_TX_FLOW_CTRL] = BIT(5),
+7
drivers/net/dsa/microchip/ksz_common.h
··· 601 601 mutex_unlock(mtx); 602 602 } 603 603 604 + static inline bool ksz_is_ksz87xx(struct ksz_device *dev) 605 + { 606 + return dev->chip_id == KSZ8795_CHIP_ID || 607 + dev->chip_id == KSZ8794_CHIP_ID || 608 + dev->chip_id == KSZ8765_CHIP_ID; 609 + } 610 + 604 611 static inline bool ksz_is_ksz88x3(struct ksz_device *dev) 605 612 { 606 613 return dev->chip_id == KSZ8830_CHIP_ID;
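These three microchip hunks fix the KSZ8795-family MAC-table layout twice over: ksz_common.c corrects several mask and field positions outright, and ksz8795.c compensates for a quirk where the USE_FID/FID fields come back one bit higher on static-table reads than the write-side definitions, gated by the new ksz_is_ksz87xx() helper. A sketch of the read-side realignment against a synthetic register word:

#include <stdio.h>
#include <stdint.h>

#define BIT(n)          (1UL << (n))
#define GENMASK(h, l)   ((((1UL << ((h) - (l) + 1)) - 1)) << (l))

#define STATIC_MAC_TABLE_OVERRIDE       BIT(22)
#define STATIC_MAC_TABLE_USE_FID        BIT(23)
#define STATIC_MAC_TABLE_FID            GENMASK(30, 24)

int main(void)
{
        /* synthetic read-back: override set, use_fid set, fid = 0x15,
         * with USE_FID/FID sitting one bit above their definitions, as
         * KSZ87xx parts return them on static MAC table reads */
        uint64_t data_hi = STATIC_MAC_TABLE_OVERRIDE |
                           (STATIC_MAC_TABLE_USE_FID << 1) |
                           ((uint64_t)0x15 << 25);
        int is_ksz87xx = 1;     /* what ksz_is_ksz87xx(dev) would report */

        int is_override = !!(data_hi & STATIC_MAC_TABLE_OVERRIDE);

        if (is_ksz87xx)
                data_hi >>= 1;  /* realign the read layout to the masks */

        int use_fid = !!(data_hi & STATIC_MAC_TABLE_USE_FID);
        unsigned int fid = (data_hi & STATIC_MAC_TABLE_FID) >> 24;

        printf("override=%d use_fid=%d fid=0x%x\n", is_override, use_fid, fid);
        return 0;
}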
+7
drivers/net/dsa/mv88e6xxx/chip.c
··· 109 109 usleep_range(1000, 2000); 110 110 } 111 111 112 + err = mv88e6xxx_read(chip, addr, reg, &data); 113 + if (err) 114 + return err; 115 + 116 + if ((data & mask) == val) 117 + return 0; 118 + 112 119 dev_err(chip->dev, "Timeout while waiting for switch\n"); 113 120 return -ETIMEDOUT; 114 121 }
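The mv88e6xxx addition closes a classic polling race: usleep_range() can oversleep under load, so the loop can exhaust its budget even though the condition became true during the final sleep; one re-read after the loop turns that spurious -ETIMEDOUT into success. The pattern in miniature, with a toy register that reads busy three times:

#include <stdio.h>

#define ETIMEDOUT 110

/* toy register: reads as "busy" three times, then "ready" */
static int read_ready(int *reads)
{
        return ++(*reads) >= 4;
}

static int wait_ready(void)
{
        int reads = 0;

        for (int i = 0; i < 3; i++) {   /* budget runs out one read early */
                if (read_ready(&reads))
                        return 0;
                /* the driver sleeps here; oversleeping eats the budget */
        }

        /* final check: the condition may have turned true during the
         * last sleep, so re-read before declaring a timeout */
        if (read_ready(&reads))
                return 0;

        return -ETIMEDOUT;
}

int main(void)
{
        printf("wait_ready() = %d\n", wait_ready());    /* 0, not -110 */
        return 0;
}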
+2 -2
drivers/net/dsa/qca/ar9331.c
··· 1002 1002 .val_bits = 32, 1003 1003 .reg_stride = 4, 1004 1004 .max_register = AR9331_SW_REG_PAGE, 1005 + .use_single_read = true, 1006 + .use_single_write = true, 1005 1007 1006 1008 .ranges = ar9331_regmap_range, 1007 1009 .num_ranges = ARRAY_SIZE(ar9331_regmap_range), ··· 1020 1018 .val_format_endian_default = REGMAP_ENDIAN_NATIVE, 1021 1019 .read = ar9331_mdio_read, 1022 1020 .write = ar9331_sw_bus_write, 1023 - .max_raw_read = 4, 1024 - .max_raw_write = 4, 1025 1021 }; 1026 1022 1027 1023 static int ar9331_sw_probe(struct mdio_device *mdiodev)
-5
drivers/net/ethernet/brocade/bna/bnad_debugfs.c
··· 512 512 if (!bnad->port_debugfs_root) { 513 513 bnad->port_debugfs_root = 514 514 debugfs_create_dir(name, bna_debugfs_root); 515 - if (!bnad->port_debugfs_root) { 516 - netdev_warn(bnad->netdev, 517 - "debugfs root dir creation failed\n"); 518 - return; 519 - } 520 515 521 516 atomic_inc(&bna_debugfs_port_count); 522 517
+5 -1
drivers/net/ethernet/intel/iavf/iavf.h
··· 255 255 struct workqueue_struct *wq; 256 256 struct work_struct reset_task; 257 257 struct work_struct adminq_task; 258 + struct work_struct finish_config; 258 259 struct delayed_work client_task; 259 260 wait_queue_head_t down_waitqueue; 261 + wait_queue_head_t reset_waitqueue; 260 262 wait_queue_head_t vc_waitqueue; 261 263 struct iavf_q_vector *q_vectors; 262 264 struct list_head vlan_filter_list; ··· 520 518 void iavf_down(struct iavf_adapter *adapter); 521 519 int iavf_process_config(struct iavf_adapter *adapter); 522 520 int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter); 523 - void iavf_schedule_reset(struct iavf_adapter *adapter); 521 + void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags); 524 522 void iavf_schedule_request_stats(struct iavf_adapter *adapter); 523 + void iavf_schedule_finish_config(struct iavf_adapter *adapter); 525 524 void iavf_reset(struct iavf_adapter *adapter); 526 525 void iavf_set_ethtool_ops(struct net_device *netdev); 527 526 void iavf_update_stats(struct iavf_adapter *adapter); ··· 585 582 void iavf_del_adv_rss_cfg(struct iavf_adapter *adapter); 586 583 struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter, 587 584 const u8 *macaddr); 585 + int iavf_wait_for_reset(struct iavf_adapter *adapter); 588 586 #endif /* _IAVF_H_ */
+18 -21
drivers/net/ethernet/intel/iavf/iavf_ethtool.c
··· 484 484 { 485 485 struct iavf_adapter *adapter = netdev_priv(netdev); 486 486 u32 orig_flags, new_flags, changed_flags; 487 + int ret = 0; 487 488 u32 i; 488 489 489 490 orig_flags = READ_ONCE(adapter->flags); ··· 532 531 /* issue a reset to force legacy-rx change to take effect */ 533 532 if (changed_flags & IAVF_FLAG_LEGACY_RX) { 534 533 if (netif_running(netdev)) { 535 - adapter->flags |= IAVF_FLAG_RESET_NEEDED; 536 - queue_work(adapter->wq, &adapter->reset_task); 534 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 535 + ret = iavf_wait_for_reset(adapter); 536 + if (ret) 537 + netdev_warn(netdev, "Changing private flags timeout or interrupted waiting for reset"); 537 538 } 538 539 } 539 540 540 - return 0; 541 + return ret; 541 542 } 542 543 543 544 /** ··· 630 627 { 631 628 struct iavf_adapter *adapter = netdev_priv(netdev); 632 629 u32 new_rx_count, new_tx_count; 630 + int ret = 0; 633 631 634 632 if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending)) 635 633 return -EINVAL; ··· 675 671 } 676 672 677 673 if (netif_running(netdev)) { 678 - adapter->flags |= IAVF_FLAG_RESET_NEEDED; 679 - queue_work(adapter->wq, &adapter->reset_task); 674 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 675 + ret = iavf_wait_for_reset(adapter); 676 + if (ret) 677 + netdev_warn(netdev, "Changing ring parameters timeout or interrupted waiting for reset"); 680 678 } 681 679 682 - return 0; 680 + return ret; 683 681 } 684 682 685 683 /** ··· 1836 1830 { 1837 1831 struct iavf_adapter *adapter = netdev_priv(netdev); 1838 1832 u32 num_req = ch->combined_count; 1839 - int i; 1833 + int ret = 0; 1840 1834 1841 1835 if ((adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) && 1842 1836 adapter->num_tc) { ··· 1858 1852 1859 1853 adapter->num_req_queues = num_req; 1860 1854 adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED; 1861 - iavf_schedule_reset(adapter); 1855 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 1862 1856 1863 - /* wait for the reset is done */ 1864 - for (i = 0; i < IAVF_RESET_WAIT_COMPLETE_COUNT; i++) { 1865 - msleep(IAVF_RESET_WAIT_MS); 1866 - if (adapter->flags & IAVF_FLAG_RESET_PENDING) 1867 - continue; 1868 - break; 1869 - } 1870 - if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) { 1871 - adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; 1872 - adapter->num_active_queues = num_req; 1873 - return -EOPNOTSUPP; 1874 - } 1857 + ret = iavf_wait_for_reset(adapter); 1858 + if (ret) 1859 + netdev_warn(netdev, "Changing channel count timeout or interrupted waiting for reset"); 1875 1860 1876 - return 0; 1861 + return ret; 1877 1862 } 1878 1863 1879 1864 /**
+150 -87
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 167 167 } 168 168 169 169 /** 170 + * iavf_is_reset_in_progress - Check if a reset is in progress 171 + * @adapter: board private structure 172 + */ 173 + static bool iavf_is_reset_in_progress(struct iavf_adapter *adapter) 174 + { 175 + if (adapter->state == __IAVF_RESETTING || 176 + adapter->flags & (IAVF_FLAG_RESET_PENDING | 177 + IAVF_FLAG_RESET_NEEDED)) 178 + return true; 179 + 180 + return false; 181 + } 182 + 183 + /** 184 + * iavf_wait_for_reset - Wait for reset to finish. 185 + * @adapter: board private structure 186 + * 187 + * Returns 0 if reset finished successfully, negative on timeout or interrupt. 188 + */ 189 + int iavf_wait_for_reset(struct iavf_adapter *adapter) 190 + { 191 + int ret = wait_event_interruptible_timeout(adapter->reset_waitqueue, 192 + !iavf_is_reset_in_progress(adapter), 193 + msecs_to_jiffies(5000)); 194 + 195 + /* If ret < 0 then it means wait was interrupted. 196 + * If ret == 0 then it means we got a timeout while waiting 197 + * for reset to finish. 198 + * If ret > 0 it means reset has finished. 199 + */ 200 + if (ret > 0) 201 + return 0; 202 + else if (ret < 0) 203 + return -EINTR; 204 + else 205 + return -EBUSY; 206 + } 207 + 208 + /** 170 209 * iavf_allocate_dma_mem_d - OS specific memory alloc for shared code 171 210 * @hw: pointer to the HW structure 172 211 * @mem: ptr to mem struct to fill out ··· 301 262 /** 302 263 * iavf_schedule_reset - Set the flags and schedule a reset event 303 264 * @adapter: board private structure 265 + * @flags: IAVF_FLAG_RESET_PENDING or IAVF_FLAG_RESET_NEEDED 304 266 **/ 305 - void iavf_schedule_reset(struct iavf_adapter *adapter) 267 + void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags) 306 268 { 307 - if (!(adapter->flags & 308 - (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED))) { 309 - adapter->flags |= IAVF_FLAG_RESET_NEEDED; 269 + if (!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section) && 270 + !(adapter->flags & 271 + (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED))) { 272 + adapter->flags |= flags; 310 273 queue_work(adapter->wq, &adapter->reset_task); 311 274 } 312 275 } ··· 336 295 struct iavf_adapter *adapter = netdev_priv(netdev); 337 296 338 297 adapter->tx_timeout_count++; 339 - iavf_schedule_reset(adapter); 298 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 340 299 } 341 300 342 301 /** ··· 1692 1651 adapter->msix_entries[vector].entry = vector; 1693 1652 1694 1653 err = iavf_acquire_msix_vectors(adapter, v_budget); 1654 + if (!err) 1655 + iavf_schedule_finish_config(adapter); 1695 1656 1696 1657 out: 1697 - netif_set_real_num_rx_queues(adapter->netdev, pairs); 1698 - netif_set_real_num_tx_queues(adapter->netdev, pairs); 1699 1658 return err; 1700 1659 } 1701 1660 ··· 1869 1828 static void iavf_free_q_vectors(struct iavf_adapter *adapter) 1870 1829 { 1871 1830 int q_idx, num_q_vectors; 1872 - int napi_vectors; 1873 1831 1874 1832 if (!adapter->q_vectors) 1875 1833 return; 1876 1834 1877 1835 num_q_vectors = adapter->num_msix_vectors - NONQ_VECS; 1878 - napi_vectors = adapter->num_active_queues; 1879 1836 1880 1837 for (q_idx = 0; q_idx < num_q_vectors; q_idx++) { 1881 1838 struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx]; 1882 1839 1883 - if (q_idx < napi_vectors) 1884 - netif_napi_del(&q_vector->napi); 1840 + netif_napi_del(&q_vector->napi); 1885 1841 } 1886 1842 kfree(adapter->q_vectors); 1887 1843 adapter->q_vectors = NULL; ··· 1915 1877 goto err_alloc_queues; 1916 1878 } 1917 1879 1918 - rtnl_lock(); 1919 1880 err = iavf_set_interrupt_capability(adapter); 1920 - rtnl_unlock(); 1921 1881 if (err) { 1922 1882 dev_err(&adapter->pdev->dev, 1923 1883 "Unable to setup interrupt capabilities\n"); ··· 1968 1932 /** 1969 1933 * iavf_reinit_interrupt_scheme - Reallocate queues and vectors 1970 1934 * @adapter: board private structure 1935 + * @running: true if adapter->state == __IAVF_RUNNING 1971 1936 * 1972 1937 * Returns 0 on success, negative on failure 1973 1938 **/ 1974 - static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter) 1939 + static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter, bool running) 1975 1940 { 1976 1941 struct net_device *netdev = adapter->netdev; 1977 1942 int err; 1978 1943 1979 - if (netif_running(netdev)) 1944 + if (running) 1980 1945 iavf_free_traffic_irqs(adapter); 1981 1946 iavf_free_misc_irq(adapter); 1982 1947 iavf_reset_interrupt_capability(adapter); ··· 1999 1962 iavf_map_rings_to_vectors(adapter); 2000 1963 err: 2001 1964 return err; 1965 + } 1966 + 1967 + /** 1968 + * iavf_finish_config - do all netdev work that needs RTNL 1969 + * @work: our work_struct 1970 + * 1971 + * Do work that needs both RTNL and crit_lock. 1972 + **/ 1973 + static void iavf_finish_config(struct work_struct *work) 1974 + { 1975 + struct iavf_adapter *adapter; 1976 + int pairs, err; 1977 + 1978 + adapter = container_of(work, struct iavf_adapter, finish_config); 1979 + 1980 + /* Always take RTNL first to prevent circular lock dependency */ 1981 + rtnl_lock(); 1982 + mutex_lock(&adapter->crit_lock); 1983 + 1984 + if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) && 1985 + adapter->netdev_registered && 1986 + !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) { 1987 + netdev_update_features(adapter->netdev); 1988 + adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES; 1989 + } 1990 + 1991 + switch (adapter->state) { 1992 + case __IAVF_DOWN: 1993 + if (!adapter->netdev_registered) { 1994 + err = register_netdevice(adapter->netdev); 1995 + if (err) { 1996 + dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n", 1997 + err); 1998 + 1999 + /* go back and try again.*/ 2000 + iavf_free_rss(adapter); 2001 + iavf_free_misc_irq(adapter); 2002 + iavf_reset_interrupt_capability(adapter); 2003 + iavf_change_state(adapter, 2004 + __IAVF_INIT_CONFIG_ADAPTER); 2005 + goto out; 2006 + } 2007 + adapter->netdev_registered = true; 2008 + } 2009 + 2010 + /* Set the real number of queues when reset occurs while 2011 + * state == __IAVF_DOWN 2012 + */ 2013 + fallthrough; 2014 + case __IAVF_RUNNING: 2015 + pairs = adapter->num_active_queues; 2016 + netif_set_real_num_rx_queues(adapter->netdev, pairs); 2017 + netif_set_real_num_tx_queues(adapter->netdev, pairs); 2018 + break; 2019 + 2020 + default: 2021 + break; 2022 + } 2023 + 2024 + out: 2025 + mutex_unlock(&adapter->crit_lock); 2026 + rtnl_unlock(); 2027 + } 2028 + 2029 + /** 2030 + * iavf_schedule_finish_config - Set the flags and schedule a reset event 2031 + * @adapter: board private structure 2032 + **/ 2033 + void iavf_schedule_finish_config(struct iavf_adapter *adapter) 2034 + { 2035 + if (!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) 2036 + queue_work(adapter->wq, &adapter->finish_config); 2002 2037 } 2003 2038 2004 2039 /** ··· 2480 2371 adapter->vsi_res->num_queue_pairs); 2481 2372 adapter->flags |= IAVF_FLAG_REINIT_MSIX_NEEDED; 2482 2373 adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; 2483 - iavf_schedule_reset(adapter); 2374 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 2485 2376 return -EAGAIN; 2486 2377 } ··· 2710 2601 2711 2602 netif_carrier_off(netdev); 2712 2603 adapter->link_up = false; 2713 - 2714 - /* set the semaphore to prevent any callbacks after device registration 2715 - * up to time when state of driver will be set to __IAVF_DOWN 2716 - */ 2717 - rtnl_lock(); 2718 - if (!adapter->netdev_registered) { 2719 - err = register_netdevice(netdev); 2720 - if (err) { 2721 - rtnl_unlock(); 2722 - goto err_register; 2723 - } 2724 - } 2725 - 2726 - adapter->netdev_registered = true; 2727 - 2728 2604 netif_tx_stop_all_queues(netdev); 2605 + 2729 2606 if (CLIENT_ALLOWED(adapter)) { 2730 2607 err = iavf_lan_add_device(adapter); 2731 2608 if (err) ··· 2724 2629 2725 2630 iavf_change_state(adapter, __IAVF_DOWN); 2726 2631 set_bit(__IAVF_VSI_DOWN, adapter->vsi.state); 2727 - rtnl_unlock(); 2728 2632 2729 2633 iavf_misc_irq_enable(adapter); 2730 2634 wake_up(&adapter->down_waitqueue); ··· 2743 2649 /* request initial VLAN offload settings */ 2744 2650 iavf_set_vlan_offload_features(adapter, 0, netdev->features); 2745 2651 2652 + iavf_schedule_finish_config(adapter); 2746 2653 return; 2654 + 2747 2655 err_mem: 2748 2656 iavf_free_rss(adapter); 2749 - err_register: 2750 2657 iavf_free_misc_irq(adapter); 2751 2658 err_sw_init: 2752 2659 iavf_reset_interrupt_capability(adapter); ··· 2774 2679 goto restart_watchdog; 2775 2680 } 2776 2681 2777 - if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) && 2778 - adapter->netdev_registered && 2779 - !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section) && 2780 - rtnl_trylock()) { 2781 - netdev_update_features(adapter->netdev); 2782 - rtnl_unlock(); 2783 - adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES; 2784 - } 2785 - 2786 2682 if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) 2787 2683 iavf_change_state(adapter, __IAVF_COMM_FAILED); 2788 - 2789 - if (adapter->flags & IAVF_FLAG_RESET_NEEDED) { 2790 - adapter->aq_required = 0; 2791 - adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2792 - mutex_unlock(&adapter->crit_lock); 2793 - queue_work(adapter->wq, &adapter->reset_task); 2794 - return; 2795 - } 2796 2684 2797 2685 switch (adapter->state) { 2798 2686 case __IAVF_STARTUP: ··· 2904 2826 /* check for hw reset */ 2905 2827 reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK; 2906 2828 if (!reg_val) { 2907 - adapter->flags |= IAVF_FLAG_RESET_PENDING; 2908 2829 adapter->aq_required = 0; 2909 2830 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2910 2831 dev_err(&adapter->pdev->dev, "Hardware reset detected\n"); 2911 - queue_work(adapter->wq, &adapter->reset_task); 2832 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_PENDING); 2912 2833 mutex_unlock(&adapter->crit_lock); 2913 2834 queue_delayed_work(adapter->wq, 2914 2835 &adapter->watchdog_task, HZ * 2); ··· 3017 2940 int i = 0, err; 3018 2941 bool running; 3019 2942 3020 - /* Detach interface to avoid subsequent NDO callbacks */ 3021 - rtnl_lock(); 3022 - netif_device_detach(netdev); 3023 - rtnl_unlock(); 3024 - 3025 2943 /* When device is being removed it doesn't make sense to run the reset 3026 2944 * task, just return in such a case. 3027 2945 */ ··· 3024 2952 if (adapter->state != __IAVF_REMOVE) 3025 2953 queue_work(adapter->wq, &adapter->reset_task); 3026 2954 3027 - goto reset_finish; 2955 + return; 3028 2956 } 3029 2957 3030 2958 while (!mutex_trylock(&adapter->client_lock)) ··· 3082 3010 iavf_disable_vf(adapter); 3083 3011 mutex_unlock(&adapter->client_lock); 3084 3012 mutex_unlock(&adapter->crit_lock); 3085 - if (netif_running(netdev)) { 3086 - rtnl_lock(); 3087 - dev_close(netdev); 3088 - rtnl_unlock(); 3089 - } 3090 3013 return; /* Do not attempt to reinit. It's dead, Jim. */ 3091 3014 } 3092 3015 ··· 3123 3056 3124 3057 if ((adapter->flags & IAVF_FLAG_REINIT_MSIX_NEEDED) || 3125 3058 (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED)) { 3126 - err = iavf_reinit_interrupt_scheme(adapter); 3059 + err = iavf_reinit_interrupt_scheme(adapter, running); 3127 3060 if (err) 3128 3061 goto reset_err; 3129 3062 } ··· 3218 3151 3219 3152 adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; 3220 3153 3154 + wake_up(&adapter->reset_waitqueue); 3221 3155 mutex_unlock(&adapter->client_lock); 3222 3156 mutex_unlock(&adapter->crit_lock); 3223 3157 3224 - goto reset_finish; 3158 + return; 3225 3159 reset_err: 3226 3160 if (running) { 3227 3161 set_bit(__IAVF_VSI_DOWN, adapter->vsi.state); ··· 3232 3164 3233 3165 mutex_unlock(&adapter->client_lock); 3234 3166 mutex_unlock(&adapter->crit_lock); 3235 - 3236 - if (netif_running(netdev)) { 3237 - /* Close device to ensure that Tx queues will not be started 3238 - * during netif_device_attach() at the end of the reset task. 3239 - */ 3240 - rtnl_lock(); 3241 - dev_close(netdev); 3242 - rtnl_unlock(); 3243 - } 3244 - 3245 3167 dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n"); 3246 - reset_finish: 3247 - rtnl_lock(); 3248 - netif_device_attach(netdev); 3249 - rtnl_unlock(); 3250 3168 } 3251 3169 3252 3170 /** ··· 3281 3227 } while (pending); 3282 3228 mutex_unlock(&adapter->crit_lock); 3283 3229 3284 - if ((adapter->flags & 3285 - (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) || 3286 - adapter->state == __IAVF_RESETTING) 3230 + if (iavf_is_reset_in_progress(adapter)) 3287 3231 goto freedom; 3288 3232 3289 3233 /* check for error indications */ ··· 4367 4315 static int iavf_change_mtu(struct net_device *netdev, int new_mtu) 4368 4316 { 4369 4317 struct iavf_adapter *adapter = netdev_priv(netdev); 4318 + int ret = 0; 4370 4319 4371 4320 netdev_dbg(netdev, "changing MTU from %d to %d\n", 4372 4321 netdev->mtu, new_mtu); ··· 4378 4325 } 4379 4326 4380 4327 if (netif_running(netdev)) { 4381 - adapter->flags |= IAVF_FLAG_RESET_NEEDED; 4382 - queue_work(adapter->wq, &adapter->reset_task); 4328 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); 4329 + ret = iavf_wait_for_reset(adapter); 4330 + if (ret < 0) 4331 + netdev_warn(netdev, "MTU change interrupted waiting for reset"); 4332 + else if (ret) 4333 + netdev_warn(netdev, "MTU change timed out waiting for reset"); 4383 4334 } 4384 4335 4385 - return 0; 4336 + return ret; 4386 4337 } 4387 4338 4388 4339 #define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ ··· 4979 4922 4980 4923 INIT_WORK(&adapter->reset_task, iavf_reset_task); 4981 4924 INIT_WORK(&adapter->adminq_task, iavf_adminq_task); 4925 + INIT_WORK(&adapter->finish_config, iavf_finish_config); 4982 4926 INIT_DELAYED_WORK(&adapter->watchdog_task, iavf_watchdog_task); 4983 4927 INIT_DELAYED_WORK(&adapter->client_task, iavf_client_task); 4984 4928 queue_delayed_work(adapter->wq, &adapter->watchdog_task, ··· 4987 4929 4988 4930 /* Setup the wait queue for indicating transition to down status */ 4989 4931 init_waitqueue_head(&adapter->down_waitqueue); 4932 + 4933 + /* Setup the wait queue for indicating transition to running state */ 4934 + init_waitqueue_head(&adapter->reset_waitqueue); 4990 4935 4991 4936 /* Setup the wait queue for indicating virtchannel events */ 4992 4937 init_waitqueue_head(&adapter->vc_waitqueue); ··· 5122 5061 usleep_range(500, 1000); 5123 5062 } 5124 5063 cancel_delayed_work_sync(&adapter->watchdog_task); 5064 + cancel_work_sync(&adapter->finish_config); 5125 5065 5066 + rtnl_lock(); 5126 5067 if (adapter->netdev_registered) { 5127 - rtnl_lock(); 5128 5068 unregister_netdevice(netdev); 5129 5069 adapter->netdev_registered = false; 5130 - rtnl_unlock(); 5131 5070 } 5071 + rtnl_unlock(); 5072 + 5132 5073 if (CLIENT_ALLOWED(adapter)) { 5133 5074 err = iavf_lan_del_device(adapter); 5134 5075 if (err)
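The iavf rework replaces open-coded flag pokes and msleep() polling with a reset waitqueue: iavf_wait_for_reset() folds the three possible wait_event_interruptible_timeout() outcomes (positive: condition met, zero: timeout, negative: signal) into 0 / -EBUSY / -EINTR, which the ethtool and MTU paths now propagate. The mapping as a standalone helper (userspace stand-in):

#include <stdio.h>
#include <errno.h>

/* wait_event_interruptible_timeout() returns:
 *   > 0  condition became true (remaining jiffies)
 *   == 0 timed out
 *   < 0  interrupted by a signal
 * iavf_wait_for_reset() folds that into 0 / -EBUSY / -EINTR. */
static int map_wait_result(long ret)
{
        if (ret > 0)
                return 0;
        return ret < 0 ? -EINTR : -EBUSY;
}

int main(void)
{
        printf("done: %d, timeout: %d, signal: %d\n",
               map_wait_result(250), map_wait_result(0), map_wait_result(-1));
        return 0;
}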
+3 -2
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 1961 1961 case VIRTCHNL_EVENT_RESET_IMPENDING: 1962 1962 dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n"); 1963 1963 if (!(adapter->flags & IAVF_FLAG_RESET_PENDING)) { 1964 - adapter->flags |= IAVF_FLAG_RESET_PENDING; 1965 1964 dev_info(&adapter->pdev->dev, "Scheduling reset task\n"); 1966 - queue_work(adapter->wq, &adapter->reset_task); 1965 + iavf_schedule_reset(adapter, IAVF_FLAG_RESET_PENDING); 1967 1966 } 1968 1967 break; 1969 1968 default: ··· 2236 2237 2237 2238 iavf_process_config(adapter); 2238 2239 adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES; 2240 + iavf_schedule_finish_config(adapter); 2239 2241 2240 2242 iavf_set_queue_vlan_tag_loc(adapter); 2241 2243 ··· 2285 2285 case VIRTCHNL_OP_ENABLE_QUEUES: 2286 2286 /* enable transmits */ 2287 2287 iavf_irq_enable(adapter, true); 2288 + wake_up(&adapter->reset_waitqueue); 2288 2289 adapter->flags &= ~IAVF_FLAG_QUEUES_DISABLED; 2289 2290 break; 2290 2291 case VIRTCHNL_OP_DISABLE_QUEUES:
+2
drivers/net/ethernet/intel/ice/ice_base.c
··· 800 800 801 801 ice_for_each_q_vector(vsi, v_idx) 802 802 ice_free_q_vector(vsi, v_idx); 803 + 804 + vsi->num_q_vectors = 0; 803 805 } 804 806 805 807 /**
+11 -2
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 2681 2681 2682 2682 ring->rx_max_pending = ICE_MAX_NUM_DESC; 2683 2683 ring->tx_max_pending = ICE_MAX_NUM_DESC; 2684 - ring->rx_pending = vsi->rx_rings[0]->count; 2685 - ring->tx_pending = vsi->tx_rings[0]->count; 2684 + if (vsi->tx_rings && vsi->rx_rings) { 2685 + ring->rx_pending = vsi->rx_rings[0]->count; 2686 + ring->tx_pending = vsi->tx_rings[0]->count; 2687 + } else { 2688 + ring->rx_pending = 0; 2689 + ring->tx_pending = 0; 2690 + } 2686 2691 2687 2692 /* Rx mini and jumbo rings are not supported */ 2688 2693 ring->rx_mini_max_pending = 0; ··· 2720 2715 ICE_REQ_DESC_MULTIPLE); 2721 2716 return -EINVAL; 2722 2717 } 2718 + 2719 + /* Return if there is no rings (device is reloading) */ 2720 + if (!vsi->tx_rings || !vsi->rx_rings) 2721 + return -EBUSY; 2723 2722 2724 2723 new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE); 2725 2724 if (new_tx_cnt != ring->tx_pending)
-27
drivers/net/ethernet/intel/ice/ice_lib.c
··· 2972 2972 return -ENODEV; 2973 2973 pf = vsi->back; 2974 2974 2975 - /* do not unregister while driver is in the reset recovery pending 2976 - * state. Since reset/rebuild happens through PF service task workqueue, 2977 - * it's not a good idea to unregister netdev that is associated to the 2978 - * PF that is running the work queue items currently. This is done to 2979 - * avoid check_flush_dependency() warning on this wq 2980 - */ 2981 - if (vsi->netdev && !ice_is_reset_in_progress(pf->state) && 2982 - (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state))) { 2983 - unregister_netdev(vsi->netdev); 2984 - clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state); 2985 - } 2986 - 2987 - if (vsi->type == ICE_VSI_PF) 2988 - ice_devlink_destroy_pf_port(pf); 2989 - 2990 2975 if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) 2991 2976 ice_rss_clean(vsi); 2992 2977 2993 2978 ice_vsi_close(vsi); 2994 2979 ice_vsi_decfg(vsi); 2995 - 2996 - if (vsi->netdev) { 2997 - if (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state)) { 2998 - unregister_netdev(vsi->netdev); 2999 - clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state); 3000 - } 3001 - if (test_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state)) { 3002 - free_netdev(vsi->netdev); 3003 - vsi->netdev = NULL; 3004 - clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); 3005 - } 3006 - } 3007 2980 3008 2981 /* retain SW VSI data structure since it is needed to unregister and 3009 2982 * free VSI netdev when PF is not in reset recovery pending state,\
+8 -2
drivers/net/ethernet/intel/ice/ice_main.c
··· 4430 4430 if (err) 4431 4431 return err; 4432 4432 4433 - rtnl_lock(); 4434 4433 err = ice_vsi_open(vsi); 4435 - rtnl_unlock(); 4434 + if (err) 4435 + ice_fltr_remove_all(vsi); 4436 4436 4437 4437 return err; 4438 4438 } ··· 4895 4895 params = ice_vsi_to_params(vsi); 4896 4896 params.flags = ICE_VSI_FLAG_INIT; 4897 4897 4898 + rtnl_lock(); 4898 4899 err = ice_vsi_cfg(vsi, &params); 4899 4900 if (err) 4900 4901 goto err_vsi_cfg; ··· 4903 4902 err = ice_start_eth(ice_get_main_vsi(pf)); 4904 4903 if (err) 4905 4904 goto err_start_eth; 4905 + rtnl_unlock(); 4906 4906 4907 4907 err = ice_init_rdma(pf); 4908 4908 if (err) ··· 4918 4916 4919 4917 err_init_rdma: 4920 4918 ice_vsi_close(ice_get_main_vsi(pf)); 4919 + rtnl_lock(); 4921 4920 err_start_eth: 4922 4921 ice_vsi_decfg(ice_get_main_vsi(pf)); 4923 4922 err_vsi_cfg: 4923 + rtnl_unlock(); 4924 4924 ice_deinit_dev(pf); 4925 4925 return err; 4926 4926 } ··· 4935 4931 { 4936 4932 ice_deinit_features(pf); 4937 4933 ice_deinit_rdma(pf); 4934 + rtnl_lock(); 4938 4935 ice_stop_eth(ice_get_main_vsi(pf)); 4939 4936 ice_vsi_decfg(ice_get_main_vsi(pf)); 4937 + rtnl_unlock(); 4940 4938 ice_deinit_dev(pf); 4941 4939 } 4942 4940
+2 -2
drivers/net/ethernet/intel/igc/igc_main.c
··· 2828 2828 struct netdev_queue *nq = txring_txq(ring); 2829 2829 union igc_adv_tx_desc *tx_desc = NULL; 2830 2830 int cpu = smp_processor_id(); 2831 - u16 ntu = ring->next_to_use; 2832 2831 struct xdp_desc xdp_desc; 2833 - u16 budget; 2832 + u16 budget, ntu; 2834 2833 2835 2834 if (!netif_carrier_ok(ring->netdev)) 2836 2835 return; ··· 2839 2840 /* Avoid transmit queue timeout since we share it with the slow path */ 2840 2841 txq_trans_cond_update(nq); 2841 2842 2843 + ntu = ring->next_to_use; 2842 2844 budget = igc_desc_unused(ring); 2843 2845 2844 2846 while (xsk_tx_peek_desc(pool, &xdp_desc) && budget--) {
+100 -37
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
··· 4 4 * Copyright (C) 2022 Marvell. 5 5 */ 6 6 7 + #include <crypto/skcipher.h> 7 8 #include <linux/rtnetlink.h> 8 9 #include <linux/bitfield.h> 9 10 #include "otx2_common.h" ··· 42 41 #define MCS_TCI_SCB 0x10 /* epon */ 43 42 #define MCS_TCI_E 0x08 /* encryption */ 44 43 #define MCS_TCI_C 0x04 /* changed text */ 44 + 45 + #define CN10K_MAX_HASH_LEN 16 46 + #define CN10K_MAX_SAK_LEN 32 47 + 48 + static int cn10k_ecb_aes_encrypt(struct otx2_nic *pfvf, u8 *sak, 49 + u16 sak_len, u8 *hash) 50 + { 51 + u8 data[CN10K_MAX_HASH_LEN] = { 0 }; 52 + struct skcipher_request *req = NULL; 53 + struct scatterlist sg_src, sg_dst; 54 + struct crypto_skcipher *tfm; 55 + DECLARE_CRYPTO_WAIT(wait); 56 + int err; 57 + 58 + tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0); 59 + if (IS_ERR(tfm)) { 60 + dev_err(pfvf->dev, "failed to allocate transform for ecb-aes\n"); 61 + return PTR_ERR(tfm); 62 + } 63 + 64 + req = skcipher_request_alloc(tfm, GFP_KERNEL); 65 + if (!req) { 66 + dev_err(pfvf->dev, "failed to allocate request for skcipher\n"); 67 + err = -ENOMEM; 68 + goto free_tfm; 69 + } 70 + 71 + err = crypto_skcipher_setkey(tfm, sak, sak_len); 72 + if (err) { 73 + dev_err(pfvf->dev, "failed to set key for skcipher\n"); 74 + goto free_req; 75 + } 76 + 77 + /* build sg list */ 78 + sg_init_one(&sg_src, data, CN10K_MAX_HASH_LEN); 79 + sg_init_one(&sg_dst, hash, CN10K_MAX_HASH_LEN); 80 + 81 + skcipher_request_set_callback(req, 0, crypto_req_done, &wait); 82 + skcipher_request_set_crypt(req, &sg_src, &sg_dst, 83 + CN10K_MAX_HASH_LEN, NULL); 84 + 85 + err = crypto_skcipher_encrypt(req); 86 + err = crypto_wait_req(err, &wait); 87 + 88 + free_req: 89 + skcipher_request_free(req); 90 + free_tfm: 91 + crypto_free_skcipher(tfm); 92 + return err; 93 + } 45 94 46 95 static struct cn10k_mcs_txsc *cn10k_mcs_get_txsc(struct cn10k_mcs_cfg *cfg, 47 96 struct macsec_secy *secy) ··· 381 330 return ret; 382 331 } 383 332 333 + static int cn10k_mcs_write_keys(struct otx2_nic *pfvf, 334 + struct macsec_secy *secy, 335 + struct mcs_sa_plcy_write_req *req, 336 + u8 *sak, u8 *salt, ssci_t ssci) 337 + { 338 + u8 hash_rev[CN10K_MAX_HASH_LEN]; 339 + u8 sak_rev[CN10K_MAX_SAK_LEN]; 340 + u8 salt_rev[MACSEC_SALT_LEN]; 341 + u8 hash[CN10K_MAX_HASH_LEN]; 342 + u32 ssci_63_32; 343 + int err, i; 344 + 345 + err = cn10k_ecb_aes_encrypt(pfvf, sak, secy->key_len, hash); 346 + if (err) { 347 + dev_err(pfvf->dev, "Generating hash using ECB(AES) failed\n"); 348 + return err; 349 + } 350 + 351 + for (i = 0; i < secy->key_len; i++) 352 + sak_rev[i] = sak[secy->key_len - 1 - i]; 353 + 354 + for (i = 0; i < CN10K_MAX_HASH_LEN; i++) 355 + hash_rev[i] = hash[CN10K_MAX_HASH_LEN - 1 - i]; 356 + 357 + for (i = 0; i < MACSEC_SALT_LEN; i++) 358 + salt_rev[i] = salt[MACSEC_SALT_LEN - 1 - i]; 359 + 360 + ssci_63_32 = (__force u32)cpu_to_be32((__force u32)ssci); 361 + 362 + memcpy(&req->plcy[0][0], sak_rev, secy->key_len); 363 + memcpy(&req->plcy[0][4], hash_rev, CN10K_MAX_HASH_LEN); 364 + memcpy(&req->plcy[0][6], salt_rev, MACSEC_SALT_LEN); 365 + req->plcy[0][7] |= (u64)ssci_63_32 << 32; 366 + 367 + return 0; 368 + } 369 + 384 370 static int cn10k_mcs_write_rx_sa_plcy(struct otx2_nic *pfvf, 385 371 struct macsec_secy *secy, 386 372 struct cn10k_mcs_rxsc *rxsc, 387 373 u8 assoc_num, bool sa_in_use) 388 374 { 389 - unsigned char *src = rxsc->sa_key[assoc_num]; 390 375 struct mcs_sa_plcy_write_req *plcy_req; 391 - u8 *salt_p = rxsc->salt[assoc_num]; 376 + u8 *sak = rxsc->sa_key[assoc_num]; 377 + u8 *salt = rxsc->salt[assoc_num]; 392 378 struct mcs_rx_sc_sa_map 
*map_req; 393 379 struct mbox *mbox = &pfvf->mbox; 394 - u64 ssci_salt_95_64 = 0; 395 - u8 reg, key_len; 396 - u64 salt_63_0; 397 380 int ret; 398 381 399 382 mutex_lock(&mbox->lock); ··· 445 360 goto fail; 446 361 } 447 362 448 - for (reg = 0, key_len = 0; key_len < secy->key_len; key_len += 8) { 449 - memcpy((u8 *)&plcy_req->plcy[0][reg], 450 - (src + reg * 8), 8); 451 - reg++; 452 - } 453 - 454 - if (secy->xpn) { 455 - memcpy((u8 *)&salt_63_0, salt_p, 8); 456 - memcpy((u8 *)&ssci_salt_95_64, salt_p + 8, 4); 457 - ssci_salt_95_64 |= (__force u64)rxsc->ssci[assoc_num] << 32; 458 - 459 - plcy_req->plcy[0][6] = salt_63_0; 460 - plcy_req->plcy[0][7] = ssci_salt_95_64; 461 - } 363 + ret = cn10k_mcs_write_keys(pfvf, secy, plcy_req, sak, 364 + salt, rxsc->ssci[assoc_num]); 365 + if (ret) 366 + goto fail; 462 367 463 368 plcy_req->sa_index[0] = rxsc->hw_sa_id[assoc_num]; 464 369 plcy_req->sa_cnt = 1; ··· 661 586 struct cn10k_mcs_txsc *txsc, 662 587 u8 assoc_num) 663 588 { 664 - unsigned char *src = txsc->sa_key[assoc_num]; 665 589 struct mcs_sa_plcy_write_req *plcy_req; 666 - u8 *salt_p = txsc->salt[assoc_num]; 590 + u8 *sak = txsc->sa_key[assoc_num]; 591 + u8 *salt = txsc->salt[assoc_num]; 667 592 struct mbox *mbox = &pfvf->mbox; 668 - u64 ssci_salt_95_64 = 0; 669 - u8 reg, key_len; 670 - u64 salt_63_0; 671 593 int ret; 672 594 673 595 mutex_lock(&mbox->lock); ··· 675 603 goto fail; 676 604 } 677 605 678 - for (reg = 0, key_len = 0; key_len < secy->key_len; key_len += 8) { 679 - memcpy((u8 *)&plcy_req->plcy[0][reg], (src + reg * 8), 8); 680 - reg++; 681 - } 682 - 683 - if (secy->xpn) { 684 - memcpy((u8 *)&salt_63_0, salt_p, 8); 685 - memcpy((u8 *)&ssci_salt_95_64, salt_p + 8, 4); 686 - ssci_salt_95_64 |= (__force u64)txsc->ssci[assoc_num] << 32; 687 - 688 - plcy_req->plcy[0][6] = salt_63_0; 689 - plcy_req->plcy[0][7] = ssci_salt_95_64; 690 - } 606 + ret = cn10k_mcs_write_keys(pfvf, secy, plcy_req, sak, 607 + salt, txsc->ssci[assoc_num]); 608 + if (ret) 609 + goto fail; 691 610 692 611 plcy_req->plcy[0][8] = assoc_num; 693 612 plcy_req->sa_index[0] = txsc->hw_sa_id[assoc_num];
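
The new cn10k_ecb_aes_encrypt() above follows the kernel crypto API's synchronous-wait idiom, and the zeroed 16-byte input it encrypts is consistent with deriving the AES-GCM hash subkey H = AES_K(0^128) that the MCS hardware needs alongside the byte-reversed SAK. A reduced sketch of the idiom, with illustrative names, assuming nothing beyond the standard skcipher API:

  #include <crypto/skcipher.h>
  #include <linux/scatterlist.h>

  /* One-shot synchronous ECB(AES) over a single 16-byte block:
   * DECLARE_CRYPTO_WAIT() + crypto_req_done + crypto_wait_req()
   * turn the async skcipher API into a blocking call. */
  static int ecb_aes_one_block(const u8 *key, unsigned int keylen,
                               const u8 *in, u8 *out)
  {
          struct crypto_skcipher *tfm;
          struct skcipher_request *req;
          struct scatterlist src, dst;
          DECLARE_CRYPTO_WAIT(wait);
          int err;

          tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
          if (IS_ERR(tfm))
                  return PTR_ERR(tfm);

          req = skcipher_request_alloc(tfm, GFP_KERNEL);
          if (!req) {
                  err = -ENOMEM;
                  goto free_tfm;
          }

          err = crypto_skcipher_setkey(tfm, key, keylen);
          if (err)
                  goto free_req;

          sg_init_one(&src, in, 16);
          sg_init_one(&dst, out, 16);
          skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
          skcipher_request_set_crypt(req, &src, &dst, 16, NULL);

          /* crypto_wait_req() sleeps on -EINPROGRESS/-EBUSY */
          err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

  free_req:
          skcipher_request_free(req);
  free_tfm:
          crypto_free_skcipher(tfm);
          return err;
  }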
+3 -2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 1454 1454 if (err) 1455 1455 goto err_free_npa_lf; 1456 1456 1457 - /* Enable backpressure */ 1458 - otx2_nix_config_bp(pf, true); 1457 + /* Enable backpressure for CGX mapped PF/VFs */ 1458 + if (!is_otx2_lbkvf(pf->pdev)) 1459 + otx2_nix_config_bp(pf, true); 1459 1460 1460 1461 /* Init Auras and pools used by NIX RQ, for free buffer ptrs */ 1461 1462 err = otx2_rq_aura_pool_init(pf);
+11 -18
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 3846 3846 return 0; 3847 3847 } 3848 3848 3849 - static int __init mtk_init(struct net_device *dev) 3850 - { 3851 - struct mtk_mac *mac = netdev_priv(dev); 3852 - struct mtk_eth *eth = mac->hw; 3853 - int ret; 3854 - 3855 - ret = of_get_ethdev_address(mac->of_node, dev); 3856 - if (ret) { 3857 - /* If the mac address is invalid, use random mac address */ 3858 - eth_hw_addr_random(dev); 3859 - dev_err(eth->dev, "generated random MAC address %pM\n", 3860 - dev->dev_addr); 3861 - } 3862 - 3863 - return 0; 3864 - } 3865 - 3866 3849 static void mtk_uninit(struct net_device *dev) 3867 3850 { 3868 3851 struct mtk_mac *mac = netdev_priv(dev); ··· 4261 4278 }; 4262 4279 4263 4280 static const struct net_device_ops mtk_netdev_ops = { 4264 - .ndo_init = mtk_init, 4265 4281 .ndo_uninit = mtk_uninit, 4266 4282 .ndo_open = mtk_open, 4267 4283 .ndo_stop = mtk_stop, ··· 4321 4339 mac->id = id; 4322 4340 mac->hw = eth; 4323 4341 mac->of_node = np; 4342 + 4343 + err = of_get_ethdev_address(mac->of_node, eth->netdev[id]); 4344 + if (err == -EPROBE_DEFER) 4345 + return err; 4346 + 4347 + if (err) { 4348 + /* If the mac address is invalid, use random mac address */ 4349 + eth_hw_addr_random(eth->netdev[id]); 4350 + dev_err(eth->dev, "generated random MAC address %pM\n", 4351 + eth->netdev[id]->dev_addr); 4352 + } 4324 4353 4325 4354 memset(mac->hwlro_ip, 0, sizeof(mac->hwlro_ip)); 4326 4355 mac->hwlro_ip_cnt = 0;
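
Moving the MAC address read out of ndo_init and into mtk_add_mac() lets the driver return -EPROBE_DEFER when the address comes from a provider that has not probed yet (for example an NVMEM cell), instead of permanently falling back to a random address. The pattern, sketched with an illustrative helper name:

  #include <linux/etherdevice.h>
  #include <linux/of_net.h>

  /* Only -EPROBE_DEFER is propagated (the device will be probed again
   * later); any other failure means "no usable address", so fall back
   * to a random locally-administered MAC. */
  static int hypothetical_setup_mac(struct device_node *np,
                                    struct net_device *ndev)
  {
          int err = of_get_ethdev_address(np, ndev);

          if (err == -EPROBE_DEFER)
                  return err;
          if (err)
                  eth_hw_addr_random(ndev);
          return 0;
  }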
+1 -1
drivers/net/ethernet/mediatek/mtk_ppe_debugfs.c
··· 98 98 99 99 acct = mtk_foe_entry_get_mib(ppe, i, NULL); 100 100 101 - type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1); 101 + type = mtk_get_ib1_pkt_type(ppe->eth, entry->ib1); 102 102 seq_printf(m, "%05x %s %7s", i, 103 103 mtk_foe_entry_state_str(state), 104 104 mtk_foe_pkt_type_str(type));
+2 -1
drivers/net/ethernet/mscc/ocelot_fdma.c
··· 368 368 if (unlikely(!ndev)) 369 369 return false; 370 370 371 - pskb_trim(skb, skb->len - ETH_FCS_LEN); 371 + if (pskb_trim(skb, skb->len - ETH_FCS_LEN)) 372 + return false; 372 373 373 374 skb->dev = ndev; 374 375 skb->protocol = eth_type_trans(skb, skb->dev);
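
pskb_trim() is not infallible: on a cloned skb it may have to reallocate the data area and can return -ENOMEM, so the result has to be checked before the skb is used further (the emac hunk below adds the same check). A minimal sketch:

  #include <linux/if_ether.h>
  #include <linux/skbuff.h>

  /* Strip the trailing FCS; on allocation failure the caller must drop
   * the frame rather than process a half-trimmed skb. strip_fcs() is
   * an illustrative helper, not a kernel API. */
  static bool strip_fcs(struct sk_buff *skb)
  {
          if (pskb_trim(skb, skb->len - ETH_FCS_LEN))
                  return false;
          return true;
  }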
+5 -2
drivers/net/ethernet/qualcomm/emac/emac-mac.c
··· 1260 1260 if (skb->protocol == htons(ETH_P_IP)) { 1261 1261 u32 pkt_len = ((unsigned char *)ip_hdr(skb) - skb->data) 1262 1262 + ntohs(ip_hdr(skb)->tot_len); 1263 - if (skb->len > pkt_len) 1264 - pskb_trim(skb, pkt_len); 1263 + if (skb->len > pkt_len) { 1264 + ret = pskb_trim(skb, pkt_len); 1265 + if (unlikely(ret)) 1266 + return ret; 1267 + } 1265 1268 } 1266 1269 1267 1270 hdr_len = skb_tcp_all_headers(skb);
+34 -11
drivers/net/ethernet/realtek/r8169_main.c
··· 623 623 int cfg9346_usage_count; 624 624 625 625 unsigned supports_gmii:1; 626 + unsigned aspm_manageable:1; 626 627 dma_addr_t counters_phys_addr; 627 628 struct rtl8169_counters *counters; 628 629 struct rtl8169_tc_offsets tc_offset; ··· 2747 2746 if (tp->mac_version < RTL_GIGA_MAC_VER_32) 2748 2747 return; 2749 2748 2750 - if (enable) { 2749 + /* Don't enable ASPM in the chip if OS can't control ASPM */ 2750 + if (enable && tp->aspm_manageable) { 2751 + /* On these chip versions ASPM can even harm 2752 + * bus communication of other PCI devices. 2753 + */ 2754 + if (tp->mac_version == RTL_GIGA_MAC_VER_42 || 2755 + tp->mac_version == RTL_GIGA_MAC_VER_43) 2756 + return; 2757 + 2751 2758 rtl_mod_config5(tp, 0, ASPM_en); 2752 2759 rtl_mod_config2(tp, 0, ClkReqEn); 2753 2760 ··· 4523 4514 } 4524 4515 4525 4516 if (napi_schedule_prep(&tp->napi)) { 4526 - rtl_unlock_config_regs(tp); 4527 - rtl_hw_aspm_clkreq_enable(tp, false); 4528 - rtl_lock_config_regs(tp); 4529 - 4530 4517 rtl_irq_disable(tp); 4531 4518 __napi_schedule(&tp->napi); 4532 4519 } ··· 4582 4577 4583 4578 work_done = rtl_rx(dev, tp, budget); 4584 4579 4585 - if (work_done < budget && napi_complete_done(napi, work_done)) { 4580 + if (work_done < budget && napi_complete_done(napi, work_done)) 4586 4581 rtl_irq_enable(tp); 4587 - 4588 - rtl_unlock_config_regs(tp); 4589 - rtl_hw_aspm_clkreq_enable(tp, true); 4590 - rtl_lock_config_regs(tp); 4591 - } 4592 4582 4593 4583 return work_done; 4594 4584 } ··· 5158 5158 rtl_rar_set(tp, mac_addr); 5159 5159 } 5160 5160 5161 + /* register is set if system vendor successfully tested ASPM 1.2 */ 5162 + static bool rtl_aspm_is_safe(struct rtl8169_private *tp) 5163 + { 5164 + if (tp->mac_version >= RTL_GIGA_MAC_VER_61 && 5165 + r8168_mac_ocp_read(tp, 0xc0b2) & 0xf) 5166 + return true; 5167 + 5168 + return false; 5169 + } 5170 + 5161 5171 static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 5162 5172 { 5163 5173 struct rtl8169_private *tp; ··· 5236 5226 "unknown chip XID %03x, contact r8169 maintainers (see MAINTAINERS file)\n", 5237 5227 xid); 5238 5228 tp->mac_version = chipset; 5229 + 5230 + /* Disable ASPM L1 as that cause random device stop working 5231 + * problems as well as full system hangs for some PCIe devices users. 5232 + * Chips from RTL8168h partially have issues with L1.2, but seem 5233 + * to work fine with L1 and L1.1. 5234 + */ 5235 + if (rtl_aspm_is_safe(tp)) 5236 + rc = 0; 5237 + else if (tp->mac_version >= RTL_GIGA_MAC_VER_46) 5238 + rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); 5239 + else 5240 + rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1); 5241 + tp->aspm_manageable = !rc; 5239 5242 5240 5243 tp->dash_type = rtl_check_dash(tp); 5241 5244
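
The r8169 change records, once at probe time, whether the kernel actually owns ASPM control: if pci_disable_link_state() fails (typically because platform firmware retained control), tp->aspm_manageable stays false and the chip-side ASPM enable is never touched. A sketch of that gating, assuming only that pci_disable_link_state() returns 0 on success:

  #include <linux/pci.h>

  /* Probe-time check: if the OS cannot disable a link state it does
   * not control ASPM, so chip-level ASPM features must stay off.
   * aspm_is_manageable() is an illustrative name. */
  static bool aspm_is_manageable(struct pci_dev *pdev)
  {
          return pci_disable_link_state(pdev, PCIE_LINK_STATE_L1) == 0;
  }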
+19 -5
drivers/net/ethernet/ti/cpsw_ale.c
··· 106 106 107 107 static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits) 108 108 { 109 - int idx; 109 + int idx, idx2; 110 + u32 hi_val = 0; 110 111 111 112 idx = start / 32; 113 + idx2 = (start + bits - 1) / 32; 114 + /* Check if bits to be fetched exceed a word */ 115 + if (idx != idx2) { 116 + idx2 = 2 - idx2; /* flip */ 117 + hi_val = ale_entry[idx2] << ((idx2 * 32) - start); 118 + } 112 119 start -= idx * 32; 113 120 idx = 2 - idx; /* flip */ 114 - return (ale_entry[idx] >> start) & BITMASK(bits); 121 + return (hi_val + (ale_entry[idx] >> start)) & BITMASK(bits); 115 122 } 116 123 117 124 static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits, 118 125 u32 value) 119 126 { 120 - int idx; 127 + int idx, idx2; 121 128 122 129 value &= BITMASK(bits); 123 - idx = start / 32; 130 + idx = start / 32; 131 + idx2 = (start + bits - 1) / 32; 132 + /* Check if bits to be set exceed a word */ 133 + if (idx != idx2) { 134 + idx2 = 2 - idx2; /* flip */ 135 + ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32))); 136 + ale_entry[idx2] |= (value >> ((idx2 * 32) - start)); 137 + } 124 138 start -= idx * 32; 125 - idx = 2 - idx; /* flip */ 139 + idx = 2 - idx; /* flip */ 126 140 ale_entry[idx] &= ~(BITMASK(bits) << start); 127 141 ale_entry[idx] |= (value << start); 128 142 }
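
The old cpsw_ale helpers silently truncated any field whose first and last bit fell into different 32-bit words of the three-word ALE entry; the fix stitches the two halves together. An equivalent standalone read helper (illustrative, using GENMASK() in place of the file-local BITMASK(); the "2 - idx" flip reflects word 0 holding the most significant bits):

  #include <linux/bits.h>
  #include <linux/types.h>

  static u32 ale_get_field(const u32 *ale_entry, u32 start, u32 bits)
  {
          u32 lo = 2 - start / 32;                /* word with the LSBs */
          u32 hi = 2 - (start + bits - 1) / 32;   /* word with the MSBs */
          u32 shift = start % 32;
          u32 val = ale_entry[lo] >> shift;

          if (hi != lo)   /* field straddles a word boundary */
                  val |= ale_entry[hi] << (32 - shift);

          return val & GENMASK(bits - 1, 0);
  }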
-1
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 1511 1511 psrtype = WX_RDB_PL_CFG_L4HDR | 1512 1512 WX_RDB_PL_CFG_L3HDR | 1513 1513 WX_RDB_PL_CFG_L2HDR | 1514 - WX_RDB_PL_CFG_TUN_TUNHDR | 1515 1514 WX_RDB_PL_CFG_TUN_TUNHDR; 1516 1515 wr32(wx, WX_RDB_PL_CFG(0), psrtype); 1517 1516
+14 -7
drivers/net/phy/phy_device.c
··· 3451 3451 { 3452 3452 int rc; 3453 3453 3454 + ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops); 3455 + 3454 3456 rc = mdio_bus_init(); 3455 3457 if (rc) 3456 - return rc; 3458 + goto err_ethtool_phy_ops; 3457 3459 3458 - ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops); 3459 3460 features_init(); 3460 3461 3461 3462 rc = phy_driver_register(&genphy_c45_driver, THIS_MODULE); 3462 3463 if (rc) 3463 - goto err_c45; 3464 + goto err_mdio_bus; 3464 3465 3465 3466 rc = phy_driver_register(&genphy_driver, THIS_MODULE); 3466 - if (rc) { 3467 - phy_driver_unregister(&genphy_c45_driver); 3467 + if (rc) 3468 + goto err_c45; 3469 + 3470 + return 0; 3471 + 3468 3472 err_c45: 3469 - mdio_bus_exit(); 3470 - } 3473 + phy_driver_unregister(&genphy_c45_driver); 3474 + err_mdio_bus: 3475 + mdio_bus_exit(); 3476 + err_ethtool_phy_ops: 3477 + ethtool_set_ethtool_phy_ops(NULL); 3471 3478 3472 3479 return rc; 3473 3480 }
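
phy_init() is reshaped into the canonical goto unwind ladder, so every failure path now undoes exactly the steps that already succeeded, including resetting the ethtool PHY ops that are now installed first. The general shape, with hypothetical step/undo names:

  int step_a(void);
  int step_b(void);
  int step_c(void);
  void undo_a(void);
  void undo_b(void);

  /* Each label undoes one successful step, in reverse order, so a
   * failure at step N leaves the system exactly as it was found. */
  static int subsystem_init(void)
  {
          int rc;

          rc = step_a();
          if (rc)
                  return rc;

          rc = step_b();
          if (rc)
                  goto err_a;

          rc = step_c();
          if (rc)
                  goto err_b;

          return 0;

  err_b:
          undo_b();
  err_a:
          undo_a();
          return rc;
  }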
+6
drivers/net/usb/usbnet.c
··· 1775 1775 } else if (!info->in || !info->out) 1776 1776 status = usbnet_get_endpoints (dev, udev); 1777 1777 else { 1778 + u8 ep_addrs[3] = { 1779 + info->in + USB_DIR_IN, info->out + USB_DIR_OUT, 0 1780 + }; 1781 + 1778 1782 dev->in = usb_rcvbulkpipe (xdev, info->in); 1779 1783 dev->out = usb_sndbulkpipe (xdev, info->out); 1780 1784 if (!(info->flags & FLAG_NO_SETINT)) ··· 1788 1784 else 1789 1785 status = 0; 1790 1786 1787 + if (status == 0 && !usb_check_bulk_endpoints(udev, ep_addrs)) 1788 + status = -EINVAL; 1791 1789 } 1792 1790 if (status >= 0 && dev->status) 1793 1791 status = init_status (dev, udev);
+6 -6
drivers/net/vrf.c
··· 664 664 skb->protocol = htons(ETH_P_IPV6); 665 665 skb->dev = dev; 666 666 667 - rcu_read_lock_bh(); 667 + rcu_read_lock(); 668 668 nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr); 669 669 neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop); 670 670 if (unlikely(!neigh)) ··· 672 672 if (!IS_ERR(neigh)) { 673 673 sock_confirm_neigh(skb, neigh); 674 674 ret = neigh_output(neigh, skb, false); 675 - rcu_read_unlock_bh(); 675 + rcu_read_unlock(); 676 676 return ret; 677 677 } 678 - rcu_read_unlock_bh(); 678 + rcu_read_unlock(); 679 679 680 680 IP6_INC_STATS(dev_net(dst->dev), 681 681 ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES); ··· 889 889 } 890 890 } 891 891 892 - rcu_read_lock_bh(); 892 + rcu_read_lock(); 893 893 894 894 neigh = ip_neigh_for_gw(rt, skb, &is_v6gw); 895 895 if (!IS_ERR(neigh)) { ··· 898 898 sock_confirm_neigh(skb, neigh); 899 899 /* if crossing protocols, can not use the cached header */ 900 900 ret = neigh_output(neigh, skb, is_v6gw); 901 - rcu_read_unlock_bh(); 901 + rcu_read_unlock(); 902 902 return ret; 903 903 } 904 904 905 - rcu_read_unlock_bh(); 905 + rcu_read_unlock(); 906 906 vrf_tx_error(skb->dev, skb); 907 907 return -EINVAL; 908 908 }
+1 -1
drivers/of/Kconfig
··· 55 55 56 56 config OF_EARLY_FLATTREE 57 57 bool 58 - select DMA_DECLARE_COHERENT if HAS_DMA 58 + select DMA_DECLARE_COHERENT if HAS_DMA && HAS_IOMEM 59 59 select OF_FLATTREE 60 60 61 61 config OF_PROMTREE
+1 -1
drivers/of/platform.c
··· 552 552 if (!of_get_property(node, "linux,opened", NULL) || 553 553 !of_get_property(node, "linux,boot-display", NULL)) 554 554 continue; 555 - dev = of_platform_device_create(node, "of-display.0", NULL); 555 + dev = of_platform_device_create(node, "of-display", NULL); 556 556 of_node_put(node); 557 557 if (WARN_ON(!dev)) 558 558 return -ENOMEM;
+3
drivers/regulator/da9063-regulator.c
··· 778 778 const struct notification_limit *uv_l = &constr->under_voltage_limits; 779 779 const struct notification_limit *ov_l = &constr->over_voltage_limits; 780 780 781 + if (!config->init_data) /* No config in DT, pointers will be invalid */ 782 + return 0; 783 + 781 784 /* make sure that only one severity is used to clarify if unchanged, enabled or disabled */ 782 785 if ((!!uv_l->prot + !!uv_l->err + !!uv_l->warn) > 1) { 783 786 dev_err(config->dev, "%s: at most one voltage monitoring severity allowed!\n",
+23 -10
drivers/s390/crypto/zcrypt_msgtype6.c
··· 1101 1101 struct ica_xcRB *xcrb, 1102 1102 struct ap_message *ap_msg) 1103 1103 { 1104 - int rc; 1105 1104 struct response_type *rtype = ap_msg->private; 1106 1105 struct { 1107 1106 struct type6_hdr hdr; 1108 1107 struct CPRBX cprbx; 1109 1108 /* ... more data blocks ... */ 1110 1109 } __packed * msg = ap_msg->msg; 1110 + unsigned int max_payload_size; 1111 + int rc, delta; 1111 1112 1112 - /* 1113 - * Set the queue's reply buffer length minus 128 byte padding 1114 - * as reply limit for the card firmware. 1115 - */ 1116 - msg->hdr.fromcardlen1 = min_t(unsigned int, msg->hdr.fromcardlen1, 1117 - zq->reply.bufsize - 128); 1118 - if (msg->hdr.fromcardlen2) 1119 - msg->hdr.fromcardlen2 = 1120 - zq->reply.bufsize - msg->hdr.fromcardlen1 - 128; 1113 + /* calculate maximum payload for this card and msg type */ 1114 + max_payload_size = zq->reply.bufsize - sizeof(struct type86_fmt2_msg); 1115 + 1116 + /* limit each of the two from fields to the maximum payload size */ 1117 + msg->hdr.fromcardlen1 = min(msg->hdr.fromcardlen1, max_payload_size); 1118 + msg->hdr.fromcardlen2 = min(msg->hdr.fromcardlen2, max_payload_size); 1119 + 1120 + /* calculate delta if the sum of both exceeds max payload size */ 1121 + delta = msg->hdr.fromcardlen1 + msg->hdr.fromcardlen2 1122 + - max_payload_size; 1123 + if (delta > 0) { 1124 + /* 1125 + * Sum exceeds maximum payload size, prune fromcardlen1 1126 + * (always trust fromcardlen2) 1127 + */ 1128 + if (delta > msg->hdr.fromcardlen1) { 1129 + rc = -EINVAL; 1130 + goto out; 1131 + } 1132 + msg->hdr.fromcardlen1 -= delta; 1133 + } 1121 1134 1122 1135 init_completion(&rtype->work); 1123 1136 rc = ap_queue_message(zq->queue, ap_msg);
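
The rewritten zcrypt logic first clamps each reply length to the payload budget (the queue's reply buffer minus the type86 header), then shaves any remaining overshoot off fromcardlen1 alone; after the individual clamps the delta can never exceed fromcardlen1, so the -EINVAL branch is purely defensive. The arithmetic, distilled into an illustrative helper:

  #include <linux/minmax.h>
  #include <linux/types.h>

  /* Clamp both lengths to the budget individually, then shave the
   * overshoot off len1 only ("always trust len2"). After the min()
   * clamps, sum - budget <= len1 is guaranteed. */
  static void clamp_reply_lens(u32 *len1, u32 *len2, u32 budget)
  {
          u32 sum;

          *len1 = min(*len1, budget);
          *len2 = min(*len2, budget);

          sum = *len1 + *len2;
          if (sum > budget)
                  *len1 -= sum - budget;
  }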
+6 -6
drivers/video/console/sticon.c
··· 156 156 return false; 157 157 } 158 158 159 - static int sticon_set_def_font(int unit, struct console_font *op) 159 + static void sticon_set_def_font(int unit) 160 160 { 161 161 if (font_data[unit] != STI_DEF_FONT) { 162 162 if (--FNTREFCOUNT(font_data[unit]) == 0) { ··· 165 165 } 166 166 font_data[unit] = STI_DEF_FONT; 167 167 } 168 - 169 - return 0; 170 168 } 171 169 172 170 static int sticon_set_font(struct vc_data *vc, struct console_font *op, ··· 244 246 vc->vc_video_erase_char, font_data[vc->vc_num]); 245 247 246 248 /* delete old font in case it is a user font */ 247 - sticon_set_def_font(unit, NULL); 249 + sticon_set_def_font(unit); 248 250 249 251 FNTREFCOUNT(cooked_font)++; 250 252 font_data[unit] = cooked_font; ··· 262 264 263 265 static int sticon_font_default(struct vc_data *vc, struct console_font *op, char *name) 264 266 { 265 - return sticon_set_def_font(vc->vc_num, op); 267 + sticon_set_def_font(vc->vc_num); 268 + 269 + return 0; 266 270 } 267 271 268 272 static int sticon_font_set(struct vc_data *vc, struct console_font *font, ··· 297 297 298 298 /* free memory used by user font */ 299 299 for (i = 0; i < MAX_NR_CONSOLES; i++) 300 - sticon_set_def_font(i, NULL); 300 + sticon_set_def_font(i); 301 301 } 302 302 303 303 static void sticon_clear(struct vc_data *conp, int sy, int sx, int height,
+28 -46
drivers/video/console/vgacon.c
··· 65 65 * Interface used by the world 66 66 */ 67 67 68 - static const char *vgacon_startup(void); 69 - static void vgacon_init(struct vc_data *c, int init); 70 - static void vgacon_deinit(struct vc_data *c); 71 - static void vgacon_cursor(struct vc_data *c, int mode); 72 - static int vgacon_switch(struct vc_data *c); 73 - static int vgacon_blank(struct vc_data *c, int blank, int mode_switch); 74 - static void vgacon_scrolldelta(struct vc_data *c, int lines); 75 68 static int vgacon_set_origin(struct vc_data *c); 76 - static void vgacon_save_screen(struct vc_data *c); 77 - static void vgacon_invert_region(struct vc_data *c, u16 * p, int count); 69 + 78 70 static struct uni_pagedict *vgacon_uni_pagedir; 79 71 static int vgacon_refcount; 80 72 ··· 134 142 write_vga(12, (c->vc_visible_origin - vga_vram_base) / 2); 135 143 } 136 144 137 - static void vgacon_restore_screen(struct vc_data *c) 138 - { 139 - if (c->vc_origin != c->vc_visible_origin) 140 - vgacon_scrolldelta(c, 0); 141 - } 142 - 143 145 static void vgacon_scrolldelta(struct vc_data *c, int lines) 144 146 { 145 147 vc_scrolldelta_helper(c, lines, vga_rolled_over, (void *)vga_vram_base, 146 148 vga_vram_size); 147 149 vga_set_mem_top(c); 150 + } 151 + 152 + static void vgacon_restore_screen(struct vc_data *c) 153 + { 154 + if (c->vc_origin != c->vc_visible_origin) 155 + vgacon_scrolldelta(c, 0); 148 156 } 149 157 150 158 static const char *vgacon_startup(void) ··· 437 445 } 438 446 } 439 447 440 - static void vgacon_set_cursor_size(int xpos, int from, int to) 448 + static void vgacon_set_cursor_size(int from, int to) 441 449 { 442 450 unsigned long flags; 443 451 int curs, cure; ··· 470 478 471 479 static void vgacon_cursor(struct vc_data *c, int mode) 472 480 { 481 + unsigned int c_height; 482 + 473 483 if (c->vc_mode != KD_TEXT) 474 484 return; 475 485 476 486 vgacon_restore_screen(c); 477 487 488 + c_height = c->vc_cell_height; 489 + 478 490 switch (mode) { 479 491 case CM_ERASE: 480 492 write_vga(14, (c->vc_pos - vga_vram_base) / 2); 481 493 if (vga_video_type >= VIDEO_TYPE_VGAC) 482 - vgacon_set_cursor_size(c->state.x, 31, 30); 494 + vgacon_set_cursor_size(31, 30); 483 495 else 484 - vgacon_set_cursor_size(c->state.x, 31, 31); 496 + vgacon_set_cursor_size(31, 31); 485 497 break; 486 498 487 499 case CM_MOVE: ··· 493 497 write_vga(14, (c->vc_pos - vga_vram_base) / 2); 494 498 switch (CUR_SIZE(c->vc_cursor_type)) { 495 499 case CUR_UNDERLINE: 496 - vgacon_set_cursor_size(c->state.x, 497 - c->vc_cell_height - 498 - (c->vc_cell_height < 499 - 10 ? 2 : 3), 500 - c->vc_cell_height - 501 - (c->vc_cell_height < 502 - 10 ? 1 : 2)); 500 + vgacon_set_cursor_size(c_height - 501 + (c_height < 10 ? 2 : 3), 502 + c_height - 503 + (c_height < 10 ? 1 : 2)); 503 504 break; 504 505 case CUR_TWO_THIRDS: 505 - vgacon_set_cursor_size(c->state.x, 506 - c->vc_cell_height / 3, 507 - c->vc_cell_height - 508 - (c->vc_cell_height < 509 - 10 ? 1 : 2)); 506 + vgacon_set_cursor_size(c_height / 3, c_height - 507 + (c_height < 10 ? 1 : 2)); 510 508 break; 511 509 case CUR_LOWER_THIRD: 512 - vgacon_set_cursor_size(c->state.x, 513 - (c->vc_cell_height * 2) / 3, 514 - c->vc_cell_height - 515 - (c->vc_cell_height < 516 - 10 ? 1 : 2)); 510 + vgacon_set_cursor_size(c_height * 2 / 3, c_height - 511 + (c_height < 10 ? 1 : 2)); 517 512 break; 518 513 case CUR_LOWER_HALF: 519 - vgacon_set_cursor_size(c->state.x, 520 - c->vc_cell_height / 2, 521 - c->vc_cell_height - 522 - (c->vc_cell_height < 523 - 10 ? 
1 : 2)); 514 + vgacon_set_cursor_size(c_height / 2, c_height - 515 + (c_height < 10 ? 1 : 2)); 524 516 break; 525 517 case CUR_NONE: 526 518 if (vga_video_type >= VIDEO_TYPE_VGAC) 527 - vgacon_set_cursor_size(c->state.x, 31, 30); 519 + vgacon_set_cursor_size(31, 30); 528 520 else 529 - vgacon_set_cursor_size(c->state.x, 31, 31); 521 + vgacon_set_cursor_size(31, 31); 530 522 break; 531 523 default: 532 - vgacon_set_cursor_size(c->state.x, 1, 533 - c->vc_cell_height); 524 + vgacon_set_cursor_size(1, c_height); 534 525 break; 535 526 } 536 527 break; 537 528 } 538 529 } 539 530 540 - static int vgacon_doresize(struct vc_data *c, 531 + static void vgacon_doresize(struct vc_data *c, 541 532 unsigned int width, unsigned int height) 542 533 { 543 534 unsigned long flags; ··· 583 600 } 584 601 585 602 raw_spin_unlock_irqrestore(&vga_lock, flags); 586 - return 0; 587 603 } 588 604 589 605 static int vgacon_switch(struct vc_data *c)
+3
drivers/video/fbdev/au1200fb.c
··· 1732 1732 1733 1733 /* Now hook interrupt too */ 1734 1734 irq = platform_get_irq(dev, 0); 1735 + if (irq < 0) 1736 + return irq; 1737 + 1735 1738 ret = request_irq(irq, au1200fb_handle_irq, 1736 1739 IRQF_SHARED, "lcd", (void *)dev); 1737 1740 if (ret) {
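
platform_get_irq() returns a negative errno on failure (including -EPROBE_DEFER), so the value must be checked before it reaches request_irq(); the au1200fb hunk adds exactly that. Sketched as a probe fragment with an illustrative helper:

  #include <linux/interrupt.h>
  #include <linux/platform_device.h>

  /* Claim the first IRQ of a platform device; a negative value from
   * platform_get_irq() (e.g. -EPROBE_DEFER) is propagated as-is. */
  static int hypothetical_claim_irq(struct platform_device *pdev,
                                    irq_handler_t handler)
  {
          int irq = platform_get_irq(pdev, 0);

          if (irq < 0)
                  return irq;

          return devm_request_irq(&pdev->dev, irq, handler, IRQF_SHARED,
                                  "lcd", pdev);
  }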
+2 -1
drivers/video/fbdev/bw2.c
··· 17 17 #include <linux/init.h> 18 18 #include <linux/fb.h> 19 19 #include <linux/mm.h> 20 - #include <linux/of_device.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 21 22 22 23 #include <asm/io.h> 23 24 #include <asm/fbio.h>
+2 -1
drivers/video/fbdev/cg14.c
··· 17 17 #include <linux/fb.h> 18 18 #include <linux/mm.h> 19 19 #include <linux/uaccess.h> 20 - #include <linux/of_device.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 21 22 22 23 #include <asm/io.h> 23 24 #include <asm/fbio.h>
+2 -1
drivers/video/fbdev/cg3.c
··· 17 17 #include <linux/init.h> 18 18 #include <linux/fb.h> 19 19 #include <linux/mm.h> 20 - #include <linux/of_device.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 21 22 22 23 #include <asm/io.h> 23 24 #include <asm/fbio.h>
+2 -1
drivers/video/fbdev/cg6.c
··· 17 17 #include <linux/init.h> 18 18 #include <linux/fb.h> 19 19 #include <linux/mm.h> 20 - #include <linux/of_device.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 21 22 22 23 #include <asm/io.h> 23 24 #include <asm/fbio.h>
+3 -4
drivers/video/fbdev/core/fbcon.c
··· 1612 1612 } 1613 1613 } 1614 1614 1615 - static void fbcon_redraw(struct vc_data *vc, struct fbcon_display *p, 1616 - int line, int count, int offset) 1615 + static void fbcon_redraw(struct vc_data *vc, int line, int count, int offset) 1617 1616 { 1618 1617 unsigned short *d = (unsigned short *) 1619 1618 (vc->vc_origin + vc->vc_size_row * line); ··· 1826 1827 1827 1828 case SCROLL_REDRAW: 1828 1829 redraw_up: 1829 - fbcon_redraw(vc, p, t, b - t - count, 1830 + fbcon_redraw(vc, t, b - t - count, 1830 1831 count * vc->vc_cols); 1831 1832 fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1832 1833 scr_memsetw((unsigned short *) (vc->vc_origin + ··· 1912 1913 1913 1914 case SCROLL_REDRAW: 1914 1915 redraw_down: 1915 - fbcon_redraw(vc, p, b - 1, b - t - count, 1916 + fbcon_redraw(vc, b - 1, b - t - count, 1916 1917 -count * vc->vc_cols); 1917 1918 fbcon_clear(vc, t, 0, count, vc->vc_cols); 1918 1919 scr_memsetw((unsigned short *) (vc->vc_origin +
+3 -1
drivers/video/fbdev/ep93xx-fb.c
··· 548 548 } 549 549 550 550 ep93xxfb_set_par(info); 551 - clk_prepare_enable(fbi->clk); 551 + err = clk_prepare_enable(fbi->clk); 552 + if (err) 553 + goto failed_check; 552 554 553 555 err = register_framebuffer(info); 554 556 if (err)
+2 -1
drivers/video/fbdev/ffb.c
··· 16 16 #include <linux/fb.h> 17 17 #include <linux/mm.h> 18 18 #include <linux/timer.h> 19 - #include <linux/of_device.h> 19 + #include <linux/of.h> 20 + #include <linux/platform_device.h> 20 21 21 22 #include <asm/io.h> 22 23 #include <asm/upa.h>
+1 -2
drivers/video/fbdev/grvga.c
··· 12 12 13 13 #include <linux/platform_device.h> 14 14 #include <linux/dma-mapping.h> 15 - #include <linux/of_platform.h> 16 - #include <linux/of_device.h> 15 + #include <linux/of.h> 17 16 #include <linux/module.h> 18 17 #include <linux/kernel.h> 19 18 #include <linux/string.h>
+18 -30
drivers/video/fbdev/imxfb.c
··· 613 613 if (var->hsync_len < 1 || var->hsync_len > 64) 614 614 printk(KERN_ERR "%s: invalid hsync_len %d\n", 615 615 info->fix.id, var->hsync_len); 616 - if (var->left_margin > 255) 616 + if (var->left_margin < 3 || var->left_margin > 255) 617 617 printk(KERN_ERR "%s: invalid left_margin %d\n", 618 618 info->fix.id, var->left_margin); 619 - if (var->right_margin > 255) 619 + if (var->right_margin < 1 || var->right_margin > 255) 620 620 printk(KERN_ERR "%s: invalid right_margin %d\n", 621 621 info->fix.id, var->right_margin); 622 622 if (var->yres < 1 || var->yres > ymax_mask) ··· 673 673 674 674 pr_debug("%s\n",__func__); 675 675 676 - info->pseudo_palette = kmalloc_array(16, sizeof(u32), GFP_KERNEL); 676 + info->pseudo_palette = devm_kmalloc_array(&pdev->dev, 16, 677 + sizeof(u32), GFP_KERNEL); 677 678 if (!info->pseudo_palette) 678 679 return -ENOMEM; 679 680 ··· 869 868 struct imxfb_info *fbi; 870 869 struct lcd_device *lcd; 871 870 struct fb_info *info; 872 - struct resource *res; 873 871 struct imx_fb_videomode *m; 874 872 const struct of_device_id *of_id; 875 873 struct device_node *display_np; ··· 884 884 of_id = of_match_device(imxfb_of_dev_id, &pdev->dev); 885 885 if (of_id) 886 886 pdev->id_entry = of_id->data; 887 - 888 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 889 - if (!res) 890 - return -ENODEV; 891 887 892 888 info = framebuffer_alloc(sizeof(struct imxfb_info), &pdev->dev); 893 889 if (!info) ··· 903 907 if (!display_np) { 904 908 dev_err(&pdev->dev, "No display defined in devicetree\n"); 905 909 ret = -EINVAL; 906 - goto failed_of_parse; 910 + goto failed_init; 907 911 } 908 912 909 913 /* ··· 917 921 if (!fbi->mode) { 918 922 ret = -ENOMEM; 919 923 of_node_put(display_np); 920 - goto failed_of_parse; 924 + goto failed_init; 921 925 } 922 926 923 927 ret = imxfb_of_read_mode(&pdev->dev, display_np, fbi->mode); 924 928 of_node_put(display_np); 925 929 if (ret) 926 - goto failed_of_parse; 930 + goto failed_init; 927 931 928 932 /* Calculate maximum bytes used per pixel. 
In most cases this should 929 933 * be the same as m->bpp/8 */ ··· 936 940 fbi->clk_ipg = devm_clk_get(&pdev->dev, "ipg"); 937 941 if (IS_ERR(fbi->clk_ipg)) { 938 942 ret = PTR_ERR(fbi->clk_ipg); 939 - goto failed_getclock; 943 + goto failed_init; 940 944 } 941 945 942 946 /* ··· 951 955 */ 952 956 ret = clk_prepare_enable(fbi->clk_ipg); 953 957 if (ret) 954 - goto failed_getclock; 958 + goto failed_init; 955 959 clk_disable_unprepare(fbi->clk_ipg); 956 960 957 961 fbi->clk_ahb = devm_clk_get(&pdev->dev, "ahb"); 958 962 if (IS_ERR(fbi->clk_ahb)) { 959 963 ret = PTR_ERR(fbi->clk_ahb); 960 - goto failed_getclock; 964 + goto failed_init; 961 965 } 962 966 963 967 fbi->clk_per = devm_clk_get(&pdev->dev, "per"); 964 968 if (IS_ERR(fbi->clk_per)) { 965 969 ret = PTR_ERR(fbi->clk_per); 966 - goto failed_getclock; 970 + goto failed_init; 967 971 } 968 972 969 - fbi->regs = devm_ioremap_resource(&pdev->dev, res); 973 + fbi->regs = devm_platform_ioremap_resource(pdev, 0); 970 974 if (IS_ERR(fbi->regs)) { 971 975 ret = PTR_ERR(fbi->regs); 972 - goto failed_ioremap; 976 + goto failed_init; 973 977 } 974 978 975 979 fbi->map_size = PAGE_ALIGN(info->fix.smem_len); ··· 978 982 if (!info->screen_buffer) { 979 983 dev_err(&pdev->dev, "Failed to allocate video RAM\n"); 980 984 ret = -ENOMEM; 981 - goto failed_map; 985 + goto failed_init; 982 986 } 983 987 984 988 info->fix.smem_start = fbi->map_dma; ··· 1030 1034 1031 1035 failed_lcd: 1032 1036 unregister_framebuffer(info); 1033 - 1034 1037 failed_register: 1035 1038 fb_dealloc_cmap(&info->cmap); 1036 1039 failed_cmap: 1037 1040 dma_free_wc(&pdev->dev, fbi->map_size, info->screen_buffer, 1038 1041 fbi->map_dma); 1039 - failed_map: 1040 - failed_ioremap: 1041 - failed_getclock: 1042 - release_mem_region(res->start, resource_size(res)); 1043 - failed_of_parse: 1044 - kfree(info->pseudo_palette); 1045 1042 failed_init: 1046 1043 framebuffer_release(info); 1047 1044 return ret; ··· 1051 1062 fb_dealloc_cmap(&info->cmap); 1052 1063 dma_free_wc(&pdev->dev, fbi->map_size, info->screen_buffer, 1053 1064 fbi->map_dma); 1054 - kfree(info->pseudo_palette); 1055 1065 framebuffer_release(info); 1056 1066 } 1057 1067 1058 - static int __maybe_unused imxfb_suspend(struct device *dev) 1068 + static int imxfb_suspend(struct device *dev) 1059 1069 { 1060 1070 struct fb_info *info = dev_get_drvdata(dev); 1061 1071 struct imxfb_info *fbi = info->par; ··· 1064 1076 return 0; 1065 1077 } 1066 1078 1067 - static int __maybe_unused imxfb_resume(struct device *dev) 1079 + static int imxfb_resume(struct device *dev) 1068 1080 { 1069 1081 struct fb_info *info = dev_get_drvdata(dev); 1070 1082 struct imxfb_info *fbi = info->par; ··· 1074 1086 return 0; 1075 1087 } 1076 1088 1077 - static SIMPLE_DEV_PM_OPS(imxfb_pm_ops, imxfb_suspend, imxfb_resume); 1089 + static DEFINE_SIMPLE_DEV_PM_OPS(imxfb_pm_ops, imxfb_suspend, imxfb_resume); 1078 1090 1079 1091 static struct platform_driver imxfb_driver = { 1080 1092 .driver = { 1081 1093 .name = DRIVER_NAME, 1082 1094 .of_match_table = imxfb_of_dev_id, 1083 - .pm = &imxfb_pm_ops, 1095 + .pm = pm_sleep_ptr(&imxfb_pm_ops), 1084 1096 }, 1085 1097 .probe = imxfb_probe, 1086 1098 .remove_new = imxfb_remove,
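
Most of the imxfb churn comes from converting to device-managed resources (devm_kmalloc_array(), devm_platform_ioremap_resource()), which is why five error labels collapse into a single failed_init: devm allocations are released automatically on probe failure and on detach. The conversion pattern, sketched (hypothetical_probe() is illustrative):

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  /* Neither allocation needs a matching free in the error path or in
   * remove(): the devm core releases both when probe fails or the
   * device is unbound. */
  static int hypothetical_probe(struct platform_device *pdev)
  {
          void __iomem *regs;
          u32 *palette;

          palette = devm_kmalloc_array(&pdev->dev, 16, sizeof(u32),
                                       GFP_KERNEL);
          if (!palette)
                  return -ENOMEM;

          regs = devm_platform_ioremap_resource(pdev, 0);
          if (IS_ERR(regs))
                  return PTR_ERR(regs);   /* palette freed automatically */

          return 0;
  }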
+5 -5
drivers/video/fbdev/kyro/STG4000InitDevice.c
··· 83 83 static u32 InitSDRAMRegisters(volatile STG4000REG __iomem *pSTGReg, 84 84 u32 dwSubSysID, u32 dwRevID) 85 85 { 86 - u32 adwSDRAMArgCfg0[] = { 0xa0, 0x80, 0xa0, 0xa0, 0xa0 }; 87 - u32 adwSDRAMCfg1[] = { 0x8732, 0x8732, 0xa732, 0xa732, 0x8732 }; 88 - u32 adwSDRAMCfg2[] = { 0x87d2, 0x87d2, 0xa7d2, 0x87d2, 0xa7d2 }; 89 - u32 adwSDRAMRsh[] = { 36, 39, 40 }; 90 - u32 adwChipSpeed[] = { 110, 120, 125 }; 86 + static const u8 adwSDRAMArgCfg0[] = { 0xa0, 0x80, 0xa0, 0xa0, 0xa0 }; 87 + static const u16 adwSDRAMCfg1[] = { 0x8732, 0x8732, 0xa732, 0xa732, 0x8732 }; 88 + static const u16 adwSDRAMCfg2[] = { 0x87d2, 0x87d2, 0xa7d2, 0x87d2, 0xa7d2 }; 89 + static const u8 adwSDRAMRsh[] = { 36, 39, 40 }; 90 + static const u8 adwChipSpeed[] = { 110, 120, 125 }; 91 91 u32 dwMemTypeIdx; 92 92 u32 dwChipSpeedIdx; 93 93
+2 -1
drivers/video/fbdev/leo.c
··· 16 16 #include <linux/init.h> 17 17 #include <linux/fb.h> 18 18 #include <linux/mm.h> 19 - #include <linux/of_device.h> 20 19 #include <linux/io.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 21 22 22 23 #include <asm/fbio.h> 23 24
+1 -3
drivers/video/fbdev/mb862xx/mb862xxfb_accel.c
··· 15 15 #include <linux/module.h> 16 16 #include <linux/pci.h> 17 17 #include <linux/slab.h> 18 - #if defined(CONFIG_OF) 19 - #include <linux/of_platform.h> 20 - #endif 18 + 21 19 #include "mb862xxfb.h" 22 20 #include "mb862xx_reg.h" 23 21 #include "mb862xxfb_accel.h"
+3 -3
drivers/video/fbdev/mb862xx/mb862xxfbdrv.c
··· 18 18 #include <linux/init.h> 19 19 #include <linux/interrupt.h> 20 20 #include <linux/pci.h> 21 - #if defined(CONFIG_OF) 21 + #include <linux/of.h> 22 22 #include <linux/of_address.h> 23 23 #include <linux/of_irq.h> 24 - #include <linux/of_platform.h> 25 - #endif 24 + #include <linux/platform_device.h> 25 + 26 26 #include "mb862xxfb.h" 27 27 #include "mb862xx_reg.h" 28 28
+1 -1
drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
··· 15 15 #include <linux/gpio/consumer.h> 16 16 #include <linux/interrupt.h> 17 17 #include <linux/jiffies.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/module.h> 19 20 #include <linux/platform_device.h> 20 21 #include <linux/sched/signal.h> 21 22 #include <linux/slab.h> 22 23 #include <linux/workqueue.h> 23 - #include <linux/of_device.h> 24 24 25 25 #include <video/omapfb_dss.h> 26 26 #include <video/mipi_display.h>
+2 -1
drivers/video/fbdev/p9100.c
··· 15 15 #include <linux/init.h> 16 16 #include <linux/fb.h> 17 17 #include <linux/mm.h> 18 - #include <linux/of_device.h> 18 + #include <linux/of.h> 19 + #include <linux/platform_device.h> 19 20 20 21 #include <asm/io.h> 21 22 #include <asm/fbio.h>
+2 -2
drivers/video/fbdev/platinumfb.c
··· 30 30 #include <linux/fb.h> 31 31 #include <linux/init.h> 32 32 #include <linux/nvram.h> 33 + #include <linux/of.h> 33 34 #include <linux/of_address.h> 34 - #include <linux/of_device.h> 35 - #include <linux/of_platform.h> 35 + #include <linux/platform_device.h> 36 36 37 37 #include "macmodes.h" 38 38 #include "platinumfb.h"
+1 -1
drivers/video/fbdev/sbuslib.c
··· 11 11 #include <linux/fb.h> 12 12 #include <linux/mm.h> 13 13 #include <linux/uaccess.h> 14 - #include <linux/of_device.h> 14 + #include <linux/of.h> 15 15 16 16 #include <asm/fbio.h> 17 17
+2 -1
drivers/video/fbdev/sunxvr1000.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/fb.h> 10 10 #include <linux/init.h> 11 - #include <linux/of_device.h> 11 + #include <linux/of.h> 12 + #include <linux/platform_device.h> 12 13 13 14 struct gfb_info { 14 15 struct fb_info *info;
+1 -1
drivers/video/fbdev/sunxvr2500.c
··· 10 10 #include <linux/fb.h> 11 11 #include <linux/pci.h> 12 12 #include <linux/init.h> 13 - #include <linux/of_device.h> 13 + #include <linux/of.h> 14 14 15 15 #include <asm/io.h> 16 16
+1 -1
drivers/video/fbdev/sunxvr500.c
··· 10 10 #include <linux/fb.h> 11 11 #include <linux/pci.h> 12 12 #include <linux/init.h> 13 - #include <linux/of_device.h> 13 + #include <linux/of.h> 14 14 15 15 #include <asm/io.h> 16 16
+2 -1
drivers/video/fbdev/tcx.c
··· 17 17 #include <linux/init.h> 18 18 #include <linux/fb.h> 19 19 #include <linux/mm.h> 20 - #include <linux/of_device.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 21 22 22 23 #include <asm/io.h> 23 24 #include <asm/fbio.h>
+2 -3
drivers/video/fbdev/xilinxfb.c
··· 24 24 #include <linux/module.h> 25 25 #include <linux/kernel.h> 26 26 #include <linux/errno.h> 27 + #include <linux/platform_device.h> 27 28 #include <linux/string.h> 28 29 #include <linux/mm.h> 29 30 #include <linux/fb.h> 30 31 #include <linux/init.h> 31 32 #include <linux/dma-mapping.h> 32 - #include <linux/of_device.h> 33 - #include <linux/of_platform.h> 34 - #include <linux/of_address.h> 33 + #include <linux/of.h> 35 34 #include <linux/io.h> 36 35 #include <linux/slab.h> 37 36
+12 -2
fs/btrfs/block-group.c
··· 1640 1640 { 1641 1641 struct btrfs_fs_info *fs_info = bg->fs_info; 1642 1642 1643 - trace_btrfs_add_unused_block_group(bg); 1644 1643 spin_lock(&fs_info->unused_bgs_lock); 1645 1644 if (list_empty(&bg->bg_list)) { 1646 1645 btrfs_get_block_group(bg); 1646 + trace_btrfs_add_unused_block_group(bg); 1647 1647 list_add_tail(&bg->bg_list, &fs_info->unused_bgs); 1648 - } else { 1648 + } else if (!test_bit(BLOCK_GROUP_FLAG_NEW, &bg->runtime_flags)) { 1649 1649 /* Pull out the block group from the reclaim_bgs list. */ 1650 + trace_btrfs_add_unused_block_group(bg); 1650 1651 list_move_tail(&bg->bg_list, &fs_info->unused_bgs); 1651 1652 } 1652 1653 spin_unlock(&fs_info->unused_bgs_lock); ··· 2088 2087 2089 2088 /* Shouldn't have super stripes in sequential zones */ 2090 2089 if (zoned && nr) { 2090 + kfree(logical); 2091 2091 btrfs_err(fs_info, 2092 2092 "zoned: block group %llu must not contain super block", 2093 2093 cache->start); ··· 2670 2668 next: 2671 2669 btrfs_delayed_refs_rsv_release(fs_info, 1); 2672 2670 list_del_init(&block_group->bg_list); 2671 + clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags); 2673 2672 } 2674 2673 btrfs_trans_release_chunk_metadata(trans); 2675 2674 } ··· 2709 2706 cache = btrfs_create_block_group_cache(fs_info, chunk_offset); 2710 2707 if (!cache) 2711 2708 return ERR_PTR(-ENOMEM); 2709 + 2710 + /* 2711 + * Mark it as new before adding it to the rbtree of block groups or any 2712 + * list, so that no other task finds it and calls btrfs_mark_bg_unused() 2713 + * before the new flag is set. 2714 + */ 2715 + set_bit(BLOCK_GROUP_FLAG_NEW, &cache->runtime_flags); 2712 2716 2713 2717 cache->length = size; 2714 2718 set_free_space_tree_thresholds(cache);
+5
fs/btrfs/block-group.h
··· 70 70 BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, 71 71 /* Indicate that the block group is placed on a sequential zone */ 72 72 BLOCK_GROUP_FLAG_SEQUENTIAL_ZONE, 73 + /* 74 + * Indicate that block group is in the list of new block groups of a 75 + * transaction. 76 + */ 77 + BLOCK_GROUP_FLAG_NEW, 73 78 }; 74 79 75 80 enum btrfs_caching_type {
+52 -25
fs/btrfs/inode.c
··· 3482 3482 void btrfs_add_delayed_iput(struct btrfs_inode *inode) 3483 3483 { 3484 3484 struct btrfs_fs_info *fs_info = inode->root->fs_info; 3485 + unsigned long flags; 3485 3486 3486 3487 if (atomic_add_unless(&inode->vfs_inode.i_count, -1, 1)) 3487 3488 return; 3488 3489 3489 3490 atomic_inc(&fs_info->nr_delayed_iputs); 3490 - spin_lock(&fs_info->delayed_iput_lock); 3491 + /* 3492 + * Need to be irq safe here because we can be called from either an irq 3493 + * context (see bio.c and btrfs_put_ordered_extent()) or a non-irq 3494 + * context. 3495 + */ 3496 + spin_lock_irqsave(&fs_info->delayed_iput_lock, flags); 3491 3497 ASSERT(list_empty(&inode->delayed_iput)); 3492 3498 list_add_tail(&inode->delayed_iput, &fs_info->delayed_iputs); 3493 - spin_unlock(&fs_info->delayed_iput_lock); 3499 + spin_unlock_irqrestore(&fs_info->delayed_iput_lock, flags); 3494 3500 if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags)) 3495 3501 wake_up_process(fs_info->cleaner_kthread); 3496 3502 } ··· 3505 3499 struct btrfs_inode *inode) 3506 3500 { 3507 3501 list_del_init(&inode->delayed_iput); 3508 - spin_unlock(&fs_info->delayed_iput_lock); 3502 + spin_unlock_irq(&fs_info->delayed_iput_lock); 3509 3503 iput(&inode->vfs_inode); 3510 3504 if (atomic_dec_and_test(&fs_info->nr_delayed_iputs)) 3511 3505 wake_up(&fs_info->delayed_iputs_wait); 3512 - spin_lock(&fs_info->delayed_iput_lock); 3506 + spin_lock_irq(&fs_info->delayed_iput_lock); 3513 3507 } 3514 3508 3515 3509 static void btrfs_run_delayed_iput(struct btrfs_fs_info *fs_info, 3516 3510 struct btrfs_inode *inode) 3517 3511 { 3518 3512 if (!list_empty(&inode->delayed_iput)) { 3519 - spin_lock(&fs_info->delayed_iput_lock); 3513 + spin_lock_irq(&fs_info->delayed_iput_lock); 3520 3514 if (!list_empty(&inode->delayed_iput)) 3521 3515 run_delayed_iput_locked(fs_info, inode); 3522 - spin_unlock(&fs_info->delayed_iput_lock); 3516 + spin_unlock_irq(&fs_info->delayed_iput_lock); 3523 3517 } 3524 3518 } 3525 3519 3526 3520 void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info) 3527 3521 { 3528 - 3529 - spin_lock(&fs_info->delayed_iput_lock); 3522 + /* 3523 + * btrfs_put_ordered_extent() can run in irq context (see bio.c), which 3524 + * calls btrfs_add_delayed_iput() and that needs to lock 3525 + * fs_info->delayed_iput_lock. So we need to disable irqs here to 3526 + * prevent a deadlock. 
3527 + */ 3528 + spin_lock_irq(&fs_info->delayed_iput_lock); 3530 3529 while (!list_empty(&fs_info->delayed_iputs)) { 3531 3530 struct btrfs_inode *inode; 3532 3531 3533 3532 inode = list_first_entry(&fs_info->delayed_iputs, 3534 3533 struct btrfs_inode, delayed_iput); 3535 3534 run_delayed_iput_locked(fs_info, inode); 3536 - cond_resched_lock(&fs_info->delayed_iput_lock); 3535 + if (need_resched()) { 3536 + spin_unlock_irq(&fs_info->delayed_iput_lock); 3537 + cond_resched(); 3538 + spin_lock_irq(&fs_info->delayed_iput_lock); 3539 + } 3537 3540 } 3538 - spin_unlock(&fs_info->delayed_iput_lock); 3541 + spin_unlock_irq(&fs_info->delayed_iput_lock); 3539 3542 } 3540 3543 3541 3544 /* ··· 3674 3659 found_key.type = BTRFS_INODE_ITEM_KEY; 3675 3660 found_key.offset = 0; 3676 3661 inode = btrfs_iget(fs_info->sb, last_objectid, root); 3677 - ret = PTR_ERR_OR_ZERO(inode); 3678 - if (ret && ret != -ENOENT) 3679 - goto out; 3662 + if (IS_ERR(inode)) { 3663 + ret = PTR_ERR(inode); 3664 + inode = NULL; 3665 + if (ret != -ENOENT) 3666 + goto out; 3667 + } 3680 3668 3681 - if (ret == -ENOENT && root == fs_info->tree_root) { 3669 + if (!inode && root == fs_info->tree_root) { 3682 3670 struct btrfs_root *dead_root; 3683 3671 int is_dead_root = 0; 3684 3672 ··· 3742 3724 * deleted but wasn't. The inode number may have been reused, 3743 3725 * but either way, we can delete the orphan item. 3744 3726 */ 3745 - if (ret == -ENOENT || inode->i_nlink) { 3746 - if (!ret) { 3727 + if (!inode || inode->i_nlink) { 3728 + if (inode) { 3747 3729 ret = btrfs_drop_verity_items(BTRFS_I(inode)); 3748 3730 iput(inode); 3731 + inode = NULL; 3749 3732 if (ret) 3750 3733 goto out; 3751 3734 } 3752 3735 trans = btrfs_start_transaction(root, 1); 3753 3736 if (IS_ERR(trans)) { 3754 3737 ret = PTR_ERR(trans); 3755 - iput(inode); 3756 3738 goto out; 3757 3739 } 3758 3740 btrfs_debug(fs_info, "auto deleting %Lu", ··· 3760 3742 ret = btrfs_del_orphan_item(trans, root, 3761 3743 found_key.objectid); 3762 3744 btrfs_end_transaction(trans); 3763 - if (ret) { 3764 - iput(inode); 3745 + if (ret) 3765 3746 goto out; 3766 - } 3767 3747 continue; 3768 3748 } 3769 3749 ··· 4863 4847 ret = -ENOMEM; 4864 4848 goto out; 4865 4849 } 4866 - ret = set_page_extent_mapped(page); 4867 - if (ret < 0) 4868 - goto out_unlock; 4869 4850 4870 4851 if (!PageUptodate(page)) { 4871 4852 ret = btrfs_read_folio(NULL, page_folio(page)); ··· 4877 4864 goto out_unlock; 4878 4865 } 4879 4866 } 4867 + 4868 + /* 4869 + * We unlock the page after the io is completed and then re-lock it 4870 + * above. release_folio() could have come in between that and cleared 4871 + * PagePrivate(), but left the page in the mapping. Set the page mapped 4872 + * here to make sure it's properly set for the subpage stuff. 4873 + */ 4874 + ret = set_page_extent_mapped(page); 4875 + if (ret < 0) 4876 + goto out_unlock; 4877 + 4880 4878 wait_on_page_writeback(page); 4881 4879 4882 4880 lock_extent(io_tree, block_start, block_end, &cached_state); ··· 7873 7849 7874 7850 ret = btrfs_extract_ordered_extent(bbio, dio_data->ordered); 7875 7851 if (ret) { 7876 - bbio->bio.bi_status = errno_to_blk_status(ret); 7877 - btrfs_dio_end_io(bbio); 7852 + btrfs_finish_ordered_extent(dio_data->ordered, NULL, 7853 + file_offset, dip->bytes, 7854 + !ret); 7855 + bio->bi_status = errno_to_blk_status(ret); 7856 + iomap_dio_bio_end_io(bio); 7878 7857 return; 7879 7858 } 7880 7859 }
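
The rule the btrfs_add_delayed_iput() fix enforces: a spinlock that can be taken from irq context must be held with irqs disabled everywhere, otherwise an interrupt arriving while the lock is held in process context deadlocks on that CPU; the same reasoning replaces cond_resched_lock() with an explicit unlock/resched/relock. A generic sketch of the pattern (names are illustrative):

  #include <linux/list.h>
  #include <linux/spinlock.h>

  static LIST_HEAD(pending);
  static DEFINE_SPINLOCK(pending_lock);

  /* May be called from irq context: must save/restore irq state. */
  static void queue_item(struct list_head *item)
  {
          unsigned long flags;

          spin_lock_irqsave(&pending_lock, flags);
          list_add_tail(item, &pending);
          spin_unlock_irqrestore(&pending_lock, flags);
  }

  /* Process context only, so the plain _irq variants are enough. */
  static void drain_items(void)
  {
          spin_lock_irq(&pending_lock);
          while (!list_empty(&pending)) {
                  struct list_head *item = pending.next;

                  list_del_init(item);
                  /* drop the lock (re-enabling irqs) to process item */
                  spin_unlock_irq(&pending_lock);
                  spin_lock_irq(&pending_lock);
          }
          spin_unlock_irq(&pending_lock);
  }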
+1
fs/btrfs/qgroup.c
··· 4445 4445 ulist_free(entry->old_roots); 4446 4446 kfree(entry); 4447 4447 } 4448 + *root = RB_ROOT; 4448 4449 }
+3 -8
fs/btrfs/raid56.c
··· 71 71 static void index_rbio_pages(struct btrfs_raid_bio *rbio); 72 72 static int alloc_rbio_pages(struct btrfs_raid_bio *rbio); 73 73 74 - static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check); 74 + static int finish_parity_scrub(struct btrfs_raid_bio *rbio); 75 75 static void scrub_rbio_work_locked(struct work_struct *work); 76 76 77 77 static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio) ··· 2404 2404 return 0; 2405 2405 } 2406 2406 2407 - static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check) 2407 + static int finish_parity_scrub(struct btrfs_raid_bio *rbio) 2408 2408 { 2409 2409 struct btrfs_io_context *bioc = rbio->bioc; 2410 2410 const u32 sectorsize = bioc->fs_info->sectorsize; ··· 2444 2444 * it. 2445 2445 */ 2446 2446 clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags); 2447 - 2448 - if (!need_check) 2449 - goto writeback; 2450 2447 2451 2448 p_sector.page = alloc_page(GFP_NOFS); 2452 2449 if (!p_sector.page) ··· 2513 2516 q_sector.page = NULL; 2514 2517 } 2515 2518 2516 - writeback: 2517 2519 /* 2518 2520 * time to start writing. Make bios for everything from the 2519 2521 * higher layers (the bio_list in our rbio) and our p/q. Ignore ··· 2695 2699 2696 2700 static void scrub_rbio(struct btrfs_raid_bio *rbio) 2697 2701 { 2698 - bool need_check = false; 2699 2702 int sector_nr; 2700 2703 int ret; 2701 2704 ··· 2717 2722 * We have every sector properly prepared. Can finish the scrub 2718 2723 * and writeback the good content. 2719 2724 */ 2720 - ret = finish_parity_scrub(rbio, need_check); 2725 + ret = finish_parity_scrub(rbio); 2721 2726 wait_event(rbio->io_wait, atomic_read(&rbio->stripes_pending) == 0); 2722 2727 for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) { 2723 2728 int found_errors;
+6 -11
fs/btrfs/volumes.c
··· 4078 4078 return has_single_bit_set(flags); 4079 4079 } 4080 4080 4081 - static inline int balance_need_close(struct btrfs_fs_info *fs_info) 4082 - { 4083 - /* cancel requested || normal exit path */ 4084 - return atomic_read(&fs_info->balance_cancel_req) || 4085 - (atomic_read(&fs_info->balance_pause_req) == 0 && 4086 - atomic_read(&fs_info->balance_cancel_req) == 0); 4087 - } 4088 - 4089 4081 /* 4090 4082 * Validate target profile against allowed profiles and return true if it's OK. 4091 4083 * Otherwise print the error message and return false. ··· 4267 4275 u64 num_devices; 4268 4276 unsigned seq; 4269 4277 bool reducing_redundancy; 4278 + bool paused = false; 4270 4279 int i; 4271 4280 4272 4281 if (btrfs_fs_closing(fs_info) || ··· 4398 4405 if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req)) { 4399 4406 btrfs_info(fs_info, "balance: paused"); 4400 4407 btrfs_exclop_balance(fs_info, BTRFS_EXCLOP_BALANCE_PAUSED); 4408 + paused = true; 4401 4409 } 4402 4410 /* 4403 4411 * Balance can be canceled by: ··· 4427 4433 btrfs_update_ioctl_balance_args(fs_info, bargs); 4428 4434 } 4429 4435 4430 - if ((ret && ret != -ECANCELED && ret != -ENOSPC) || 4431 - balance_need_close(fs_info)) { 4436 + /* We didn't pause, we can clean everything up. */ 4437 + if (!paused) { 4432 4438 reset_balance_state(fs_info); 4433 4439 btrfs_exclop_finish(fs_info); 4434 4440 } ··· 6398 6404 (op == BTRFS_MAP_READ || !dev_replace_is_ongoing || 6399 6405 !dev_replace->tgtdev)) { 6400 6406 set_io_stripe(smap, map, stripe_index, stripe_offset, stripe_nr); 6401 - *mirror_num_ret = mirror_num; 6407 + if (mirror_num_ret) 6408 + *mirror_num_ret = mirror_num; 6402 6409 *bioc_ret = NULL; 6403 6410 ret = 0; 6404 6411 goto out;
+143 -35
fs/ext4/mballoc.c
··· 1006 1006 * fls() instead since we need to know the actual length while modifying 1007 1007 * goal length. 1008 1008 */ 1009 - order = fls(ac->ac_g_ex.fe_len); 1009 + order = fls(ac->ac_g_ex.fe_len) - 1; 1010 1010 min_order = order - sbi->s_mb_best_avail_max_trim_order; 1011 1011 if (min_order < 0) 1012 1012 min_order = 0; 1013 - 1014 - if (1 << min_order < ac->ac_o_ex.fe_len) 1015 - min_order = fls(ac->ac_o_ex.fe_len) + 1; 1016 1013 1017 1014 if (sbi->s_stripe > 0) { 1018 1015 /* ··· 1018 1021 */ 1019 1022 num_stripe_clusters = EXT4_NUM_B2C(sbi, sbi->s_stripe); 1020 1023 if (1 << min_order < num_stripe_clusters) 1021 - min_order = fls(num_stripe_clusters); 1024 + /* 1025 + * We consider 1 order less because later we round 1026 + * up the goal len to num_stripe_clusters 1027 + */ 1028 + min_order = fls(num_stripe_clusters) - 1; 1022 1029 } 1030 + 1031 + if (1 << min_order < ac->ac_o_ex.fe_len) 1032 + min_order = fls(ac->ac_o_ex.fe_len); 1023 1033 1024 1034 for (i = order; i >= min_order; i--) { 1025 1035 int frag_order; ··· 4765 4761 int order, i; 4766 4762 struct ext4_inode_info *ei = EXT4_I(ac->ac_inode); 4767 4763 struct ext4_locality_group *lg; 4768 - struct ext4_prealloc_space *tmp_pa, *cpa = NULL; 4769 - ext4_lblk_t tmp_pa_start, tmp_pa_end; 4764 + struct ext4_prealloc_space *tmp_pa = NULL, *cpa = NULL; 4765 + loff_t tmp_pa_end; 4770 4766 struct rb_node *iter; 4771 4767 ext4_fsblk_t goal_block; 4772 4768 ··· 4774 4770 if (!(ac->ac_flags & EXT4_MB_HINT_DATA)) 4775 4771 return false; 4776 4772 4777 - /* first, try per-file preallocation */ 4773 + /* 4774 + * first, try per-file preallocation by searching the inode pa rbtree. 4775 + * 4776 + * Here, we can't do a direct traversal of the tree because 4777 + * ext4_mb_discard_group_preallocation() can paralelly mark the pa 4778 + * deleted and that can cause direct traversal to skip some entries. 4779 + */ 4778 4780 read_lock(&ei->i_prealloc_lock); 4781 + 4782 + if (RB_EMPTY_ROOT(&ei->i_prealloc_node)) { 4783 + goto try_group_pa; 4784 + } 4785 + 4786 + /* 4787 + * Step 1: Find a pa with logical start immediately adjacent to the 4788 + * original logical start. This could be on the left or right. 4789 + * 4790 + * (tmp_pa->pa_lstart never changes so we can skip locking for it). 4791 + */ 4779 4792 for (iter = ei->i_prealloc_node.rb_node; iter; 4780 4793 iter = ext4_mb_pa_rb_next_iter(ac->ac_o_ex.fe_logical, 4781 - tmp_pa_start, iter)) { 4794 + tmp_pa->pa_lstart, iter)) { 4782 4795 tmp_pa = rb_entry(iter, struct ext4_prealloc_space, 4783 4796 pa_node.inode_node); 4797 + } 4784 4798 4785 - /* all fields in this condition don't change, 4786 - * so we can skip locking for them */ 4787 - tmp_pa_start = tmp_pa->pa_lstart; 4788 - tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len); 4799 + /* 4800 + * Step 2: The adjacent pa might be to the right of logical start, find 4801 + * the left adjacent pa. 
After this step we'd have a valid tmp_pa whose 4802 + * logical start is towards the left of original request's logical start 4803 + */ 4804 + if (tmp_pa->pa_lstart > ac->ac_o_ex.fe_logical) { 4805 + struct rb_node *tmp; 4806 + tmp = rb_prev(&tmp_pa->pa_node.inode_node); 4789 4807 4790 - /* original request start doesn't lie in this PA */ 4791 - if (ac->ac_o_ex.fe_logical < tmp_pa_start || 4792 - ac->ac_o_ex.fe_logical >= tmp_pa_end) 4793 - continue; 4794 - 4795 - /* non-extent files can't have physical blocks past 2^32 */ 4796 - if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) && 4797 - (tmp_pa->pa_pstart + EXT4_C2B(sbi, tmp_pa->pa_len) > 4798 - EXT4_MAX_BLOCK_FILE_PHYS)) { 4808 + if (tmp) { 4809 + tmp_pa = rb_entry(tmp, struct ext4_prealloc_space, 4810 + pa_node.inode_node); 4811 + } else { 4799 4812 /* 4800 - * Since PAs don't overlap, we won't find any 4801 - * other PA to satisfy this. 4813 + * If there is no adjacent pa to the left then finding 4814 + * an overlapping pa is not possible hence stop searching 4815 + * inode pa tree 4816 + */ 4817 + goto try_group_pa; 4818 + } 4819 + } 4820 + 4821 + BUG_ON(!(tmp_pa && tmp_pa->pa_lstart <= ac->ac_o_ex.fe_logical)); 4822 + 4823 + /* 4824 + * Step 3: If the left adjacent pa is deleted, keep moving left to find 4825 + * the first non deleted adjacent pa. After this step we should have a 4826 + * valid tmp_pa which is guaranteed to be non deleted. 4827 + */ 4828 + for (iter = &tmp_pa->pa_node.inode_node;; iter = rb_prev(iter)) { 4829 + if (!iter) { 4830 + /* 4831 + * no non deleted left adjacent pa, so stop searching 4832 + * inode pa tree 4833 + */ 4834 + goto try_group_pa; 4835 + } 4836 + tmp_pa = rb_entry(iter, struct ext4_prealloc_space, 4837 + pa_node.inode_node); 4838 + spin_lock(&tmp_pa->pa_lock); 4839 + if (tmp_pa->pa_deleted == 0) { 4840 + /* 4841 + * We will keep holding the pa_lock from 4842 + * this point on because we don't want group discard 4843 + * to delete this pa underneath us. Since group 4844 + * discard is anyways an ENOSPC operation it 4845 + * should be okay for it to wait a few more cycles. 4802 4846 */ 4803 4847 break; 4804 - } 4805 - 4806 - /* found preallocated blocks, use them */ 4807 - spin_lock(&tmp_pa->pa_lock); 4808 - if (tmp_pa->pa_deleted == 0 && tmp_pa->pa_free && 4809 - likely(ext4_mb_pa_goal_check(ac, tmp_pa))) { 4810 - atomic_inc(&tmp_pa->pa_count); 4811 - ext4_mb_use_inode_pa(ac, tmp_pa); 4848 + } else { 4812 4849 spin_unlock(&tmp_pa->pa_lock); 4813 - read_unlock(&ei->i_prealloc_lock); 4814 - return true; 4815 4850 } 4816 - spin_unlock(&tmp_pa->pa_lock); 4817 4851 } 4852 + 4853 + BUG_ON(!(tmp_pa && tmp_pa->pa_lstart <= ac->ac_o_ex.fe_logical)); 4854 + BUG_ON(tmp_pa->pa_deleted == 1); 4855 + 4856 + /* 4857 + * Step 4: We now have the non deleted left adjacent pa. Only this 4858 + * pa can possibly satisfy the request hence check if it overlaps 4859 + * original logical start and stop searching if it doesn't. 4860 + */ 4861 + tmp_pa_end = (loff_t)tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len); 4862 + 4863 + if (ac->ac_o_ex.fe_logical >= tmp_pa_end) { 4864 + spin_unlock(&tmp_pa->pa_lock); 4865 + goto try_group_pa; 4866 + } 4867 + 4868 + /* non-extent files can't have physical blocks past 2^32 */ 4869 + if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) && 4870 + (tmp_pa->pa_pstart + EXT4_C2B(sbi, tmp_pa->pa_len) > 4871 + EXT4_MAX_BLOCK_FILE_PHYS)) { 4872 + /* 4873 + * Since PAs don't overlap, we won't find any other PA to 4874 + * satisfy this. 
4875 + */ 4876 + spin_unlock(&tmp_pa->pa_lock); 4877 + goto try_group_pa; 4878 + } 4879 + 4880 + if (tmp_pa->pa_free && likely(ext4_mb_pa_goal_check(ac, tmp_pa))) { 4881 + atomic_inc(&tmp_pa->pa_count); 4882 + ext4_mb_use_inode_pa(ac, tmp_pa); 4883 + spin_unlock(&tmp_pa->pa_lock); 4884 + read_unlock(&ei->i_prealloc_lock); 4885 + return true; 4886 + } else { 4887 + /* 4888 + * We found a valid overlapping pa but couldn't use it because 4889 + * it had no free blocks. This should ideally never happen 4890 + * because: 4891 + * 4892 + * 1. When a new inode pa is added to rbtree it must have 4893 + * pa_free > 0 since otherwise we won't actually need 4894 + * preallocation. 4895 + * 4896 + * 2. An inode pa that is in the rbtree can only have it's 4897 + * pa_free become zero when another thread calls: 4898 + * ext4_mb_new_blocks 4899 + * ext4_mb_use_preallocated 4900 + * ext4_mb_use_inode_pa 4901 + * 4902 + * 3. Further, after the above calls make pa_free == 0, we will 4903 + * immediately remove it from the rbtree in: 4904 + * ext4_mb_new_blocks 4905 + * ext4_mb_release_context 4906 + * ext4_mb_put_pa 4907 + * 4908 + * 4. Since the pa_free becoming 0 and pa_free getting removed 4909 + * from tree both happen in ext4_mb_new_blocks, which is always 4910 + * called with i_data_sem held for data allocations, we can be 4911 + * sure that another process will never see a pa in rbtree with 4912 + * pa_free == 0. 4913 + */ 4914 + WARN_ON_ONCE(tmp_pa->pa_free == 0); 4915 + } 4916 + spin_unlock(&tmp_pa->pa_lock); 4917 + try_group_pa: 4818 4918 read_unlock(&ei->i_prealloc_lock); 4819 4919 4820 4920 /* can we use group allocation? */
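
The mballoc off-by-ones all come from fls() being 1-based: fls(n) is the position of the highest set bit, so the order of an allocation length is fls(len) - 1 (fls(1) == 1 gives order 0, fls(8) == 4 gives order 3, and fls(9) == 4 still gives order 3, rounding down). Sketched as an illustrative helper, not an ext4 function:

  #include <linux/bitops.h>

  /* Largest order with (1 << order) <= len; this is the corrected
   * starting order for the best-avail goal-length trimming above. */
  static int goal_order(unsigned int len)
  {
          return len ? fls(len) - 1 : 0;
  }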
+14
fs/ext4/xattr.c
··· 1782 1782 memmove(here, (void *)here + size, 1783 1783 (void *)last - (void *)here + sizeof(__u32)); 1784 1784 memset(last, 0, size); 1785 + 1786 + /* 1787 + * Update i_inline_off - moved ibody region might contain 1788 + * system.data attribute. Handling a failure here won't 1789 + * cause other complications for setting an xattr. 1790 + */ 1791 + if (!is_block && ext4_has_inline_data(inode)) { 1792 + ret = ext4_find_inline_data_nolock(inode); 1793 + if (ret) { 1794 + ext4_warning_inode(inode, 1795 + "unable to update i_inline_off"); 1796 + goto out; 1797 + } 1798 + } 1785 1799 } else if (s->not_found) { 1786 1800 /* Insert new name. */ 1787 1801 size_t size = EXT4_XATTR_LEN(name_len);
+1 -3
fs/fuse/dir.c
··· 258 258 spin_unlock(&fi->lock); 259 259 } 260 260 kfree(forget); 261 - if (ret == -ENOMEM) 261 + if (ret == -ENOMEM || ret == -EINTR) 262 262 goto out; 263 263 if (ret || fuse_invalid_attr(&outarg.attr) || 264 264 fuse_stale_inode(inode, outarg.generation, &outarg.attr)) ··· 395 395 goto out_put_forget; 396 396 397 397 err = -EIO; 398 - if (!outarg->nodeid) 399 - goto out_put_forget; 400 398 if (fuse_invalid_attr(&outarg->attr)) 401 399 goto out_put_forget; 402 400
+6 -2
fs/fuse/inode.c
··· 1134 1134 process_init_limits(fc, arg); 1135 1135 1136 1136 if (arg->minor >= 6) { 1137 - u64 flags = arg->flags | (u64) arg->flags2 << 32; 1137 + u64 flags = arg->flags; 1138 + 1139 + if (flags & FUSE_INIT_EXT) 1140 + flags |= (u64) arg->flags2 << 32; 1138 1141 1139 1142 ra_pages = arg->max_readahead / PAGE_SIZE; 1140 1143 if (flags & FUSE_ASYNC_READ) ··· 1257 1254 FUSE_ABORT_ERROR | FUSE_MAX_PAGES | FUSE_CACHE_SYMLINKS | 1258 1255 FUSE_NO_OPENDIR_SUPPORT | FUSE_EXPLICIT_INVAL_DATA | 1259 1256 FUSE_HANDLE_KILLPRIV_V2 | FUSE_SETXATTR_EXT | FUSE_INIT_EXT | 1260 - FUSE_SECURITY_CTX | FUSE_CREATE_SUPP_GROUP; 1257 + FUSE_SECURITY_CTX | FUSE_CREATE_SUPP_GROUP | 1258 + FUSE_HAS_EXPIRE_ONLY; 1261 1259 #ifdef CONFIG_FUSE_DAX 1262 1260 if (fm->fc->dax) 1263 1261 flags |= FUSE_MAP_ALIGNMENT;
+13 -8
fs/fuse/ioctl.c
··· 9 9 #include <linux/compat.h> 10 10 #include <linux/fileattr.h> 11 11 12 - static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args) 12 + static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args, 13 + struct fuse_ioctl_out *outarg) 13 14 { 14 - ssize_t ret = fuse_simple_request(fm, args); 15 + ssize_t ret; 16 + 17 + args->out_args[0].size = sizeof(*outarg); 18 + args->out_args[0].value = outarg; 19 + 20 + ret = fuse_simple_request(fm, args); 15 21 16 22 /* Translate ENOSYS, which shouldn't be returned from fs */ 17 23 if (ret == -ENOSYS) 18 24 ret = -ENOTTY; 25 + 26 + if (ret >= 0 && outarg->result == -ENOSYS) 27 + outarg->result = -ENOTTY; 19 28 20 29 return ret; 21 30 } ··· 273 264 } 274 265 275 266 ap.args.out_numargs = 2; 276 - ap.args.out_args[0].size = sizeof(outarg); 277 - ap.args.out_args[0].value = &outarg; 278 267 ap.args.out_args[1].size = out_size; 279 268 ap.args.out_pages = true; 280 269 ap.args.out_argvar = true; 281 270 282 - transferred = fuse_send_ioctl(fm, &ap.args); 271 + transferred = fuse_send_ioctl(fm, &ap.args, &outarg); 283 272 err = transferred; 284 273 if (transferred < 0) 285 274 goto out; ··· 406 399 args.in_args[1].size = inarg.in_size; 407 400 args.in_args[1].value = ptr; 408 401 args.out_numargs = 2; 409 - args.out_args[0].size = sizeof(outarg); 410 - args.out_args[0].value = &outarg; 411 402 args.out_args[1].size = inarg.out_size; 412 403 args.out_args[1].value = ptr; 413 404 414 - err = fuse_send_ioctl(fm, &args); 405 + err = fuse_send_ioctl(fm, &args, &outarg); 415 406 if (!err) { 416 407 if (outarg.result < 0) 417 408 err = outarg.result;
+2 -2
fs/iomap/buffered-io.c
··· 872 872 while ((ret = iomap_iter(&iter, ops)) > 0) 873 873 iter.processed = iomap_write_iter(&iter, i); 874 874 875 - if (unlikely(ret < 0)) 875 + if (unlikely(iter.pos == iocb->ki_pos)) 876 876 return ret; 877 877 ret = iter.pos - iocb->ki_pos; 878 - iocb->ki_pos += ret; 878 + iocb->ki_pos = iter.pos; 879 879 return ret; 880 880 } 881 881 EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
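The return-value convention this iomap fix adopts is the classic short-write rule, sketched below in plain C: if the iterator made any progress, report the bytes written and advance the file position; only when nothing was written does the error surface. The iter type and helper are hypothetical.

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>

struct iter { off_t start; off_t pos; };

static ssize_t finish_write(const struct iter *it, ssize_t err)
{
    if (it->pos == it->start)
        return err;             /* zero progress: surface the error */
    return it->pos - it->start; /* partial progress beats the error */
}

int main(void)
{
    struct iter a = { .start = 100, .pos = 100 };   /* failed outright */
    struct iter b = { .start = 100, .pos = 4196 };  /* short write */

    printf("%zd %zd\n", finish_write(&a, -ENOMEM),  /* -12 */
                        finish_write(&b, -ENOMEM)); /* 4096 */
    return 0;
}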
+100 -189
fs/jbd2/checkpoint.c
··· 27 27 * 28 28 * Called with j_list_lock held. 29 29 */ 30 - static inline void __buffer_unlink_first(struct journal_head *jh) 30 + static inline void __buffer_unlink(struct journal_head *jh) 31 31 { 32 32 transaction_t *transaction = jh->b_cp_transaction; 33 33 ··· 38 38 if (transaction->t_checkpoint_list == jh) 39 39 transaction->t_checkpoint_list = NULL; 40 40 } 41 - } 42 - 43 - /* 44 - * Unlink a buffer from a transaction checkpoint(io) list. 45 - * 46 - * Called with j_list_lock held. 47 - */ 48 - static inline void __buffer_unlink(struct journal_head *jh) 49 - { 50 - transaction_t *transaction = jh->b_cp_transaction; 51 - 52 - __buffer_unlink_first(jh); 53 - if (transaction->t_checkpoint_io_list == jh) { 54 - transaction->t_checkpoint_io_list = jh->b_cpnext; 55 - if (transaction->t_checkpoint_io_list == jh) 56 - transaction->t_checkpoint_io_list = NULL; 57 - } 58 - } 59 - 60 - /* 61 - * Move a buffer from the checkpoint list to the checkpoint io list 62 - * 63 - * Called with j_list_lock held 64 - */ 65 - static inline void __buffer_relink_io(struct journal_head *jh) 66 - { 67 - transaction_t *transaction = jh->b_cp_transaction; 68 - 69 - __buffer_unlink_first(jh); 70 - 71 - if (!transaction->t_checkpoint_io_list) { 72 - jh->b_cpnext = jh->b_cpprev = jh; 73 - } else { 74 - jh->b_cpnext = transaction->t_checkpoint_io_list; 75 - jh->b_cpprev = transaction->t_checkpoint_io_list->b_cpprev; 76 - jh->b_cpprev->b_cpnext = jh; 77 - jh->b_cpnext->b_cpprev = jh; 78 - } 79 - transaction->t_checkpoint_io_list = jh; 80 41 } 81 42 82 43 /* ··· 144 183 struct buffer_head *bh = journal->j_chkpt_bhs[i]; 145 184 BUFFER_TRACE(bh, "brelse"); 146 185 __brelse(bh); 186 + journal->j_chkpt_bhs[i] = NULL; 147 187 } 148 188 *batch_count = 0; 149 189 } ··· 204 242 jh = transaction->t_checkpoint_list; 205 243 bh = jh2bh(jh); 206 244 207 - if (buffer_locked(bh)) { 208 - get_bh(bh); 209 - spin_unlock(&journal->j_list_lock); 210 - wait_on_buffer(bh); 211 - /* the journal_head may have gone by now */ 212 - BUFFER_TRACE(bh, "brelse"); 213 - __brelse(bh); 214 - goto retry; 215 - } 216 245 if (jh->b_transaction != NULL) { 217 246 transaction_t *t = jh->b_transaction; 218 247 tid_t tid = t->t_tid; ··· 238 285 spin_lock(&journal->j_list_lock); 239 286 goto restart; 240 287 } 241 - if (!buffer_dirty(bh)) { 288 + if (!trylock_buffer(bh)) { 289 + /* 290 + * The buffer is locked, it may be writing back, or 291 + * flushing out in the last couple of cycles, or 292 + * re-adding into a new transaction, need to check 293 + * it again until it's unlocked. 294 + */ 295 + get_bh(bh); 296 + spin_unlock(&journal->j_list_lock); 297 + wait_on_buffer(bh); 298 + /* the journal_head may have gone by now */ 299 + BUFFER_TRACE(bh, "brelse"); 300 + __brelse(bh); 301 + goto retry; 302 + } else if (!buffer_dirty(bh)) { 303 + unlock_buffer(bh); 242 304 BUFFER_TRACE(bh, "remove from checkpoint"); 243 - if (__jbd2_journal_remove_checkpoint(jh)) 244 - /* The transaction was released; we're done */ 305 + /* 306 + * If the transaction was released or the checkpoint 307 + * list was empty, we're done. 308 + */ 309 + if (__jbd2_journal_remove_checkpoint(jh) || 310 + !transaction->t_checkpoint_list) 245 311 goto out; 246 - continue; 312 + } else { 313 + unlock_buffer(bh); 314 + /* 315 + * We are about to write the buffer, it could be 316 + * raced by some other transaction shrink or buffer 317 + * re-log logic once we release the j_list_lock, 318 + * leave it on the checkpoint list and check status 319 + * again to make sure it's clean. 
320 + */ 321 + BUFFER_TRACE(bh, "queue"); 322 + get_bh(bh); 323 + J_ASSERT_BH(bh, !buffer_jwrite(bh)); 324 + journal->j_chkpt_bhs[batch_count++] = bh; 325 + transaction->t_chp_stats.cs_written++; 326 + transaction->t_checkpoint_list = jh->b_cpnext; 247 327 } 248 - /* 249 - * Important: we are about to write the buffer, and 250 - * possibly block, while still holding the journal 251 - * lock. We cannot afford to let the transaction 252 - * logic start messing around with this buffer before 253 - * we write it to disk, as that would break 254 - * recoverability. 255 - */ 256 - BUFFER_TRACE(bh, "queue"); 257 - get_bh(bh); 258 - J_ASSERT_BH(bh, !buffer_jwrite(bh)); 259 - journal->j_chkpt_bhs[batch_count++] = bh; 260 - __buffer_relink_io(jh); 261 - transaction->t_chp_stats.cs_written++; 328 + 262 329 if ((batch_count == JBD2_NR_BATCH) || 263 - need_resched() || 264 - spin_needbreak(&journal->j_list_lock)) 330 + need_resched() || spin_needbreak(&journal->j_list_lock) || 331 + jh2bh(transaction->t_checkpoint_list) == journal->j_chkpt_bhs[0]) 265 332 goto unlock_and_flush; 266 333 } 267 334 ··· 295 322 goto restart; 296 323 } 297 324 298 - /* 299 - * Now we issued all of the transaction's buffers, let's deal 300 - * with the buffers that are out for I/O. 301 - */ 302 - restart2: 303 - /* Did somebody clean up the transaction in the meanwhile? */ 304 - if (journal->j_checkpoint_transactions != transaction || 305 - transaction->t_tid != this_tid) 306 - goto out; 307 - 308 - while (transaction->t_checkpoint_io_list) { 309 - jh = transaction->t_checkpoint_io_list; 310 - bh = jh2bh(jh); 311 - if (buffer_locked(bh)) { 312 - get_bh(bh); 313 - spin_unlock(&journal->j_list_lock); 314 - wait_on_buffer(bh); 315 - /* the journal_head may have gone by now */ 316 - BUFFER_TRACE(bh, "brelse"); 317 - __brelse(bh); 318 - spin_lock(&journal->j_list_lock); 319 - goto restart2; 320 - } 321 - 322 - /* 323 - * Now in whatever state the buffer currently is, we 324 - * know that it has been written out and so we can 325 - * drop it from the list 326 - */ 327 - if (__jbd2_journal_remove_checkpoint(jh)) 328 - break; 329 - } 330 325 out: 331 326 spin_unlock(&journal->j_list_lock); 332 327 result = jbd2_cleanup_journal_tail(journal); ··· 350 409 /* Checkpoint list management */ 351 410 352 411 /* 353 - * journal_clean_one_cp_list 412 + * journal_shrink_one_cp_list 354 413 * 355 - * Find all the written-back checkpoint buffers in the given list and 356 - * release them. If 'destroy' is set, clean all buffers unconditionally. 414 + * Find all the written-back checkpoint buffers in the given list 415 + * and try to release them. If the whole transaction is released, set 416 + * the 'released' parameter. Return the number of released checkpointed 417 + * buffers. 357 418 * 358 419 * Called with j_list_lock held. 359 - * Returns 1 if we freed the transaction, 0 otherwise. 
360 420 */ 361 - static int journal_clean_one_cp_list(struct journal_head *jh, bool destroy) 421 + static unsigned long journal_shrink_one_cp_list(struct journal_head *jh, 422 + bool destroy, bool *released) 362 423 { 363 424 struct journal_head *last_jh; 364 425 struct journal_head *next_jh = jh; 426 + unsigned long nr_freed = 0; 427 + int ret; 365 428 429 + *released = false; 366 430 if (!jh) 367 431 return 0; 368 432 ··· 376 430 jh = next_jh; 377 431 next_jh = jh->b_cpnext; 378 432 379 - if (!destroy && __cp_buffer_busy(jh)) 380 - return 0; 381 - 382 - if (__jbd2_journal_remove_checkpoint(jh)) 383 - return 1; 384 - /* 385 - * This function only frees up some memory 386 - * if possible so we dont have an obligation 387 - * to finish processing. Bail out if preemption 388 - * requested: 389 - */ 390 - if (need_resched()) 391 - return 0; 392 - } while (jh != last_jh); 393 - 394 - return 0; 395 - } 396 - 397 - /* 398 - * journal_shrink_one_cp_list 399 - * 400 - * Find 'nr_to_scan' written-back checkpoint buffers in the given list 401 - * and try to release them. If the whole transaction is released, set 402 - * the 'released' parameter. Return the number of released checkpointed 403 - * buffers. 404 - * 405 - * Called with j_list_lock held. 406 - */ 407 - static unsigned long journal_shrink_one_cp_list(struct journal_head *jh, 408 - unsigned long *nr_to_scan, 409 - bool *released) 410 - { 411 - struct journal_head *last_jh; 412 - struct journal_head *next_jh = jh; 413 - unsigned long nr_freed = 0; 414 - int ret; 415 - 416 - if (!jh || *nr_to_scan == 0) 417 - return 0; 418 - 419 - last_jh = jh->b_cpprev; 420 - do { 421 - jh = next_jh; 422 - next_jh = jh->b_cpnext; 423 - 424 - (*nr_to_scan)--; 425 - if (__cp_buffer_busy(jh)) 426 - continue; 433 + if (destroy) { 434 + ret = __jbd2_journal_remove_checkpoint(jh); 435 + } else { 436 + ret = jbd2_journal_try_remove_checkpoint(jh); 437 + if (ret < 0) 438 + continue; 439 + } 427 440 428 441 nr_freed++; 429 - ret = __jbd2_journal_remove_checkpoint(jh); 430 442 if (ret) { 431 443 *released = true; 432 444 break; ··· 392 488 393 489 if (need_resched()) 394 490 break; 395 - } while (jh != last_jh && *nr_to_scan); 491 + } while (jh != last_jh); 396 492 397 493 return nr_freed; 398 494 } ··· 410 506 unsigned long *nr_to_scan) 411 507 { 412 508 transaction_t *transaction, *last_transaction, *next_transaction; 413 - bool released; 509 + bool __maybe_unused released; 414 510 tid_t first_tid = 0, last_tid = 0, next_tid = 0; 415 511 tid_t tid = 0; 416 512 unsigned long nr_freed = 0; 417 - unsigned long nr_scanned = *nr_to_scan; 513 + unsigned long freed; 418 514 419 515 again: 420 516 spin_lock(&journal->j_list_lock); ··· 443 539 transaction = next_transaction; 444 540 next_transaction = transaction->t_cpnext; 445 541 tid = transaction->t_tid; 446 - released = false; 447 542 448 - nr_freed += journal_shrink_one_cp_list(transaction->t_checkpoint_list, 449 - nr_to_scan, &released); 450 - if (*nr_to_scan == 0) 451 - break; 452 - if (need_resched() || spin_needbreak(&journal->j_list_lock)) 453 - break; 454 - if (released) 455 - continue; 456 - 457 - nr_freed += journal_shrink_one_cp_list(transaction->t_checkpoint_io_list, 458 - nr_to_scan, &released); 543 + freed = journal_shrink_one_cp_list(transaction->t_checkpoint_list, 544 + false, &released); 545 + nr_freed += freed; 546 + (*nr_to_scan) -= min(*nr_to_scan, freed); 459 547 if (*nr_to_scan == 0) 460 548 break; 461 549 if (need_resched() || spin_needbreak(&journal->j_list_lock)) ··· 468 572 if (*nr_to_scan && 
next_tid) 469 573 goto again; 470 574 out: 471 - nr_scanned -= *nr_to_scan; 472 575 trace_jbd2_shrink_checkpoint_list(journal, first_tid, tid, last_tid, 473 - nr_freed, nr_scanned, next_tid); 576 + nr_freed, next_tid); 474 577 475 578 return nr_freed; 476 579 } ··· 485 590 void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy) 486 591 { 487 592 transaction_t *transaction, *last_transaction, *next_transaction; 488 - int ret; 593 + bool released; 489 594 490 595 transaction = journal->j_checkpoint_transactions; 491 596 if (!transaction) ··· 496 601 do { 497 602 transaction = next_transaction; 498 603 next_transaction = transaction->t_cpnext; 499 - ret = journal_clean_one_cp_list(transaction->t_checkpoint_list, 500 - destroy); 604 + journal_shrink_one_cp_list(transaction->t_checkpoint_list, 605 + destroy, &released); 501 606 /* 502 607 * This function only frees up some memory if possible so we 503 608 * dont have an obligation to finish processing. Bail out if ··· 505 610 */ 506 611 if (need_resched()) 507 612 return; 508 - if (ret) 509 - continue; 510 - /* 511 - * It is essential that we are as careful as in the case of 512 - * t_checkpoint_list with removing the buffer from the list as 513 - * we can possibly see not yet submitted buffers on io_list 514 - */ 515 - ret = journal_clean_one_cp_list(transaction-> 516 - t_checkpoint_io_list, destroy); 517 - if (need_resched()) 518 - return; 519 613 /* 520 614 * Stop scanning if we couldn't free the transaction. This 521 615 * avoids pointless scanning of transactions which still 522 616 * weren't checkpointed. 523 617 */ 524 - if (!ret) 618 + if (!released) 525 619 return; 526 620 } while (transaction != last_transaction); 527 621 } ··· 589 705 jbd2_journal_put_journal_head(jh); 590 706 591 707 /* Is this transaction empty? */ 592 - if (transaction->t_checkpoint_list || transaction->t_checkpoint_io_list) 708 + if (transaction->t_checkpoint_list) 593 709 return 0; 594 710 595 711 /* ··· 618 734 __jbd2_journal_drop_transaction(journal, transaction); 619 735 jbd2_journal_free_transaction(transaction); 620 736 return 1; 737 + } 738 + 739 + /* 740 + * Check the checkpoint buffer and try to remove it from the checkpoint 741 + * list if it's clean. Returns -EBUSY if it is not clean, returns 1 if 742 + * it frees the transaction, 0 otherwise. 743 + * 744 + * This function is called with j_list_lock held. 745 + */ 746 + int jbd2_journal_try_remove_checkpoint(struct journal_head *jh) 747 + { 748 + struct buffer_head *bh = jh2bh(jh); 749 + 750 + if (!trylock_buffer(bh)) 751 + return -EBUSY; 752 + if (buffer_dirty(bh)) { 753 + unlock_buffer(bh); 754 + return -EBUSY; 755 + } 756 + unlock_buffer(bh); 757 + 758 + /* 759 + * Buffer is clean and the IO has finished (we held the buffer 760 + * lock) so the checkpoint is done. We can safely remove the 761 + * buffer from this transaction. 762 + */ 763 + JBUFFER_TRACE(jh, "remove from checkpoint list"); 764 + return __jbd2_journal_remove_checkpoint(jh); 621 765 } 622 766 623 767 /* ··· 709 797 J_ASSERT(transaction->t_forget == NULL); 710 798 J_ASSERT(transaction->t_shadow_list == NULL); 711 799 J_ASSERT(transaction->t_checkpoint_list == NULL); 712 - J_ASSERT(transaction->t_checkpoint_io_list == NULL); 713 800 J_ASSERT(atomic_read(&transaction->t_updates) == 0); 714 801 J_ASSERT(journal->j_committing_transaction != transaction); 715 802 J_ASSERT(journal->j_running_transaction != transaction);
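The heart of the checkpoint rework is the new jbd2_journal_try_remove_checkpoint() helper: take the buffer lock non-blockingly, and only treat the buffer as droppable if it is both unlocked and clean. A compilable userspace approximation, with a pthread mutex standing in for the buffer lock:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;       /* stand-in for the buffer lock */
    bool dirty;                 /* stand-in for buffer_dirty() */
} buffer_t;

/* -EBUSY: locked (I/O in flight) or still dirty, caller must wait or
 * write it back; 0: clean and idle, safe to unlink the checkpoint. */
static int try_remove_checkpoint(buffer_t *bh)
{
    if (pthread_mutex_trylock(&bh->lock) != 0)
        return -EBUSY;
    if (bh->dirty) {
        pthread_mutex_unlock(&bh->lock);
        return -EBUSY;
    }
    pthread_mutex_unlock(&bh->lock);
    /* ...unlink from the checkpoint list here... */
    return 0;
}

int main(void)
{
    buffer_t bh = { PTHREAD_MUTEX_INITIALIZER, true };

    return try_remove_checkpoint(&bh) == -EBUSY ? 0 : 1;
}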
+1 -2
fs/jbd2/commit.c
··· 1141 1141 spin_lock(&journal->j_list_lock); 1142 1142 commit_transaction->t_state = T_FINISHED; 1143 1143 /* Check if the transaction can be dropped now that we are finished */ 1144 - if (commit_transaction->t_checkpoint_list == NULL && 1145 - commit_transaction->t_checkpoint_io_list == NULL) { 1144 + if (commit_transaction->t_checkpoint_list == NULL) { 1146 1145 __jbd2_journal_drop_transaction(journal, commit_transaction); 1147 1146 jbd2_journal_free_transaction(commit_transaction); 1148 1147 }
+8 -32
fs/jbd2/transaction.c
··· 1784 1784 * Otherwise, if the buffer has been written to disk, 1785 1785 * it is safe to remove the checkpoint and drop it. 1786 1786 */ 1787 - if (!buffer_dirty(bh)) { 1788 - __jbd2_journal_remove_checkpoint(jh); 1787 + if (jbd2_journal_try_remove_checkpoint(jh) >= 0) { 1789 1788 spin_unlock(&journal->j_list_lock); 1790 1789 goto drop; 1791 1790 } ··· 2099 2100 __brelse(bh); 2100 2101 } 2101 2102 2102 - /* 2103 - * Called from jbd2_journal_try_to_free_buffers(). 2104 - * 2105 - * Called under jh->b_state_lock 2106 - */ 2107 - static void 2108 - __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh) 2109 - { 2110 - struct journal_head *jh; 2111 - 2112 - jh = bh2jh(bh); 2113 - 2114 - if (buffer_locked(bh) || buffer_dirty(bh)) 2115 - goto out; 2116 - 2117 - if (jh->b_next_transaction != NULL || jh->b_transaction != NULL) 2118 - goto out; 2119 - 2120 - spin_lock(&journal->j_list_lock); 2121 - if (jh->b_cp_transaction != NULL) { 2122 - /* written-back checkpointed metadata buffer */ 2123 - JBUFFER_TRACE(jh, "remove from checkpoint list"); 2124 - __jbd2_journal_remove_checkpoint(jh); 2125 - } 2126 - spin_unlock(&journal->j_list_lock); 2127 - out: 2128 - return; 2129 - } 2130 - 2131 2103 /** 2132 2104 * jbd2_journal_try_to_free_buffers() - try to free page buffers. 2133 2105 * @journal: journal for operation ··· 2156 2186 continue; 2157 2187 2158 2188 spin_lock(&jh->b_state_lock); 2159 - __journal_try_to_free_buffer(journal, bh); 2189 + if (!jh->b_transaction && !jh->b_next_transaction) { 2190 + spin_lock(&journal->j_list_lock); 2191 + /* Remove written-back checkpointed metadata buffer */ 2192 + if (jh->b_cp_transaction != NULL) 2193 + jbd2_journal_try_remove_checkpoint(jh); 2194 + spin_unlock(&journal->j_list_lock); 2195 + } 2160 2196 spin_unlock(&jh->b_state_lock); 2161 2197 jbd2_journal_put_journal_head(jh); 2162 2198 if (buffer_jbd(bh))
+2 -2
fs/smb/client/cifsfs.h
··· 159 159 #endif /* CONFIG_CIFS_NFSD_EXPORT */ 160 160 161 161 /* when changing internal version - update following two lines at same time */ 162 - #define SMB3_PRODUCT_BUILD 43 163 - #define CIFS_VERSION "2.43" 162 + #define SMB3_PRODUCT_BUILD 44 163 + #define CIFS_VERSION "2.44" 164 164 #endif /* _CIFSFS_H */
+13 -4
fs/smb/client/ioctl.c
··· 433 433 * Dump encryption keys. This is an old ioctl that only 434 434 * handles AES-128-{CCM,GCM}. 435 435 */ 436 - if (pSMBFile == NULL) 437 - break; 438 436 if (!capable(CAP_SYS_ADMIN)) { 439 437 rc = -EACCES; 440 438 break; 441 439 } 442 440 443 - tcon = tlink_tcon(pSMBFile->tlink); 441 + cifs_sb = CIFS_SB(inode->i_sb); 442 + tlink = cifs_sb_tlink(cifs_sb); 443 + if (IS_ERR(tlink)) { 444 + rc = PTR_ERR(tlink); 445 + break; 446 + } 447 + tcon = tlink_tcon(tlink); 444 448 if (!smb3_encryption_required(tcon)) { 445 449 rc = -EOPNOTSUPP; 450 + cifs_put_tlink(tlink); 446 451 break; 447 452 } 448 453 pkey_inf.cipher_type = ··· 464 459 rc = -EFAULT; 465 460 else 466 461 rc = 0; 462 + cifs_put_tlink(tlink); 467 463 break; 468 464 case CIFS_DUMP_FULL_KEY: 469 465 /* ··· 476 470 rc = -EACCES; 477 471 break; 478 472 } 479 - tcon = tlink_tcon(pSMBFile->tlink); 473 + cifs_sb = CIFS_SB(inode->i_sb); 474 + tlink = cifs_sb_tlink(cifs_sb); 475 + tcon = tlink_tcon(tlink); 480 476 rc = cifs_dump_full_key(tcon, (void __user *)arg); 477 + cifs_put_tlink(tlink); 481 478 break; 482 479 case CIFS_IOC_NOTIFY: 483 480 if (!S_ISDIR(inode->i_mode)) {
+66 -9
fs/xfs/libxfs/xfs_da_format.h
··· 591 591 uint8_t valuelen; /* actual length of value (no NULL) */ 592 592 uint8_t flags; /* flags bits (see xfs_attr_leaf.h) */ 593 593 uint8_t nameval[]; /* name & value bytes concatenated */ 594 - } list[1]; /* variable sized array */ 594 + } list[]; /* variable sized array */ 595 595 }; 596 596 597 597 typedef struct xfs_attr_leaf_map { /* RLE map of free bytes */ ··· 620 620 typedef struct xfs_attr_leaf_name_local { 621 621 __be16 valuelen; /* number of bytes in value */ 622 622 __u8 namelen; /* length of name bytes */ 623 - __u8 nameval[1]; /* name/value bytes */ 623 + /* 624 + * In Linux 6.5 this flex array was converted from nameval[1] to 625 + * nameval[]. Be very careful here about extra padding at the end; 626 + * see xfs_attr_leaf_entsize_local() for details. 627 + */ 628 + __u8 nameval[]; /* name/value bytes */ 624 629 } xfs_attr_leaf_name_local_t; 625 630 626 631 typedef struct xfs_attr_leaf_name_remote { 627 632 __be32 valueblk; /* block number of value bytes */ 628 633 __be32 valuelen; /* number of bytes in value */ 629 634 __u8 namelen; /* length of name bytes */ 630 - __u8 name[1]; /* name bytes */ 635 + /* 636 + * In Linux 6.5 this flex array was converted from name[1] to name[]. 637 + * Be very careful here about extra padding at the end; see 638 + * xfs_attr_leaf_entsize_remote() for details. 639 + */ 640 + __u8 name[]; /* name bytes */ 631 641 } xfs_attr_leaf_name_remote_t; 632 642 633 643 typedef struct xfs_attr_leafblock { 634 644 xfs_attr_leaf_hdr_t hdr; /* constant-structure header block */ 635 - xfs_attr_leaf_entry_t entries[1]; /* sorted on key, not name */ 645 + xfs_attr_leaf_entry_t entries[]; /* sorted on key, not name */ 636 646 /* 637 647 * The rest of the block contains the following structures after the 638 648 * leaf entries, growing from the bottom up. The variables are never ··· 674 664 675 665 struct xfs_attr3_leafblock { 676 666 struct xfs_attr3_leaf_hdr hdr; 677 - struct xfs_attr_leaf_entry entries[1]; 667 + struct xfs_attr_leaf_entry entries[]; 678 668 679 669 /* 680 670 * The rest of the block contains the following structures after the ··· 757 747 */ 758 748 static inline int xfs_attr_leaf_entsize_remote(int nlen) 759 749 { 760 - return round_up(sizeof(struct xfs_attr_leaf_name_remote) - 1 + 761 - nlen, XFS_ATTR_LEAF_NAME_ALIGN); 750 + /* 751 + * Prior to Linux 6.5, struct xfs_attr_leaf_name_remote ended with 752 + * name[1], which was used as a flexarray. The layout of this struct 753 + * is 9 bytes of fixed-length fields followed by a __u8 flex array at 754 + * offset 9. 755 + * 756 + * On most architectures, struct xfs_attr_leaf_name_remote had two 757 + * bytes of implicit padding at the end of the struct to make the 758 + * struct length 12. After converting name[1] to name[], there are 759 + * three implicit padding bytes and the struct size remains 12. 760 + * However, there are compiler configurations that do not add implicit 761 + * padding at all (m68k) and have been broken for years. 762 + * 763 + * This entsize computation historically added (the xattr name length) 764 + * to (the padded struct length - 1) and rounded that sum up to the 765 + * nearest multiple of 4 (NAME_ALIGN). IOWs, round_up(11 + nlen, 4). 766 + * This is encoded in the ondisk format, so we cannot change this. 767 + * 768 + * Compute the entsize from offsetof of the flexarray and manually 769 + * adding bytes for the implicit padding. 
770 + */ 771 + const size_t remotesize = 772 + offsetof(struct xfs_attr_leaf_name_remote, name) + 2; 773 + 774 + return round_up(remotesize + nlen, XFS_ATTR_LEAF_NAME_ALIGN); 762 775 } 763 776 764 777 static inline int xfs_attr_leaf_entsize_local(int nlen, int vlen) 765 778 { 766 - return round_up(sizeof(struct xfs_attr_leaf_name_local) - 1 + 767 - nlen + vlen, XFS_ATTR_LEAF_NAME_ALIGN); 779 + /* 780 + * Prior to Linux 6.5, struct xfs_attr_leaf_name_local ended with 781 + * nameval[1], which was used as a flexarray. The layout of this 782 + * struct is 3 bytes of fixed-length fields followed by a __u8 flex 783 + * array at offset 3. 784 + * 785 + * struct xfs_attr_leaf_name_local had zero bytes of implicit padding 786 + * at the end of the struct to make the struct length 4. On most 787 + * architectures, after converting nameval[1] to nameval[], there is 788 + * one implicit padding byte and the struct size remains 4. However, 789 + * there are compiler configurations that do not add implicit padding 790 + * at all (m68k) and would break. 791 + * 792 + * This entsize computation historically added (the xattr name and 793 + * value length) to (the padded struct length - 1) and rounded that sum 794 + * up to the nearest multiple of 4 (NAME_ALIGN). IOWs, the formula is 795 + * round_up(3 + nlen + vlen, 4). This is encoded in the ondisk format, 796 + * so we cannot change this. 797 + * 798 + * Compute the entsize from offsetof of the flexarray and manually 799 + * adding bytes for the implicit padding. 800 + */ 801 + const size_t localsize = 802 + offsetof(struct xfs_attr_leaf_name_local, nameval); 803 + 804 + return round_up(localsize + nlen + vlen, XFS_ATTR_LEAF_NAME_ALIGN); 768 805 } 769 806 770 807 static inline int xfs_attr_leaf_entsize_local_max(int bsize)
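The long comments above boil down to one arithmetic rule that is baked into the on-disk format. A worked example in portable C, mirroring the local-name case (struct layout copied from the header, macro names simplified):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct name_local {                 /* mirrors xfs_attr_leaf_name_local */
    uint16_t valuelen;
    uint8_t  namelen;
    uint8_t  nameval[];             /* flex array at offset 3 */
};

#define NAME_ALIGN 4
#define round_up(x, a) ((((x) + (a) - 1) / (a)) * (a))

static int entsize_local(int nlen, int vlen)
{
    /* offsetof() is 3 regardless of trailing padding, so this is the
     * historical round_up(3 + nlen + vlen, 4); the old
     * sizeof(...) - 1 form broke on compilers that pad differently. */
    return round_up(offsetof(struct name_local, nameval) + nlen + vlen,
                    NAME_ALIGN);
}

int main(void)
{
    /* a 6-byte name with a 5-byte value: 3 + 6 + 5 = 14, rounds to 16 */
    printf("%d\n", entsize_local(6, 5));
    return 0;
}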
+2 -2
fs/xfs/libxfs/xfs_fs.h
··· 592 592 struct xfs_attrlist { 593 593 __s32 al_count; /* number of entries in attrlist */ 594 594 __s32 al_more; /* T/F: more attrs (do call again) */ 595 - __s32 al_offset[1]; /* byte offsets of attrs [var-sized] */ 595 + __s32 al_offset[]; /* byte offsets of attrs [var-sized] */ 596 596 }; 597 597 598 598 struct xfs_attrlist_ent { /* data from attr_list() */ 599 599 __u32 a_valuelen; /* number bytes in value of attr */ 600 - char a_name[1]; /* attr name (NULL terminated) */ 600 + char a_name[]; /* attr name (NULL terminated) */ 601 601 }; 602 602 603 603 typedef struct xfs_fsop_attrlist_handlereq {
+3 -2
fs/xfs/xfs_ondisk.h
··· 56 56 57 57 /* dir/attr trees */ 58 58 XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_leaf_hdr, 80); 59 - XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_leafblock, 88); 59 + XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_leafblock, 80); 60 60 XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_rmt_hdr, 56); 61 61 XFS_CHECK_STRUCT_SIZE(struct xfs_da3_blkinfo, 56); 62 62 XFS_CHECK_STRUCT_SIZE(struct xfs_da3_intnode, 64); ··· 88 88 XFS_CHECK_OFFSET(xfs_attr_leaf_name_remote_t, valuelen, 4); 89 89 XFS_CHECK_OFFSET(xfs_attr_leaf_name_remote_t, namelen, 8); 90 90 XFS_CHECK_OFFSET(xfs_attr_leaf_name_remote_t, name, 9); 91 - XFS_CHECK_STRUCT_SIZE(xfs_attr_leafblock_t, 40); 91 + XFS_CHECK_STRUCT_SIZE(xfs_attr_leafblock_t, 32); 92 + XFS_CHECK_STRUCT_SIZE(struct xfs_attr_shortform, 4); 92 93 XFS_CHECK_OFFSET(struct xfs_attr_shortform, hdr.totsize, 0); 93 94 XFS_CHECK_OFFSET(struct xfs_attr_shortform, hdr.count, 2); 94 95 XFS_CHECK_OFFSET(struct xfs_attr_shortform, list[0].namelen, 4);
+1 -1
include/kvm/arm_vgic.h
··· 431 431 432 432 int vgic_v4_load(struct kvm_vcpu *vcpu); 433 433 void vgic_v4_commit(struct kvm_vcpu *vcpu); 434 - int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db); 434 + int vgic_v4_put(struct kvm_vcpu *vcpu); 435 435 436 436 /* CPU HP callbacks */ 437 437 void kvm_vgic_cpu_up(void);
-2
include/linux/blk-mq.h
··· 397 397 */ 398 398 struct blk_mq_tags *sched_tags; 399 399 400 - /** @queued: Number of queued requests. */ 401 - unsigned long queued; 402 400 /** @run: Number of dispatched requests. */ 403 401 unsigned long run; 404 402
+1 -6
include/linux/jbd2.h
··· 614 614 struct journal_head *t_checkpoint_list; 615 615 616 616 /* 617 - * Doubly-linked circular list of all buffers submitted for IO while 618 - * checkpointing. [j_list_lock] 619 - */ 620 - struct journal_head *t_checkpoint_io_list; 621 - 622 - /* 623 617 * Doubly-linked circular list of metadata buffers being 624 618 * shadowed by log IO. The IO buffers on the iobuf list and 625 619 * the shadow buffers on this list match each other one for ··· 1443 1449 void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy); 1444 1450 unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, unsigned long *nr_to_scan); 1445 1451 int __jbd2_journal_remove_checkpoint(struct journal_head *); 1452 + int jbd2_journal_try_remove_checkpoint(struct journal_head *jh); 1446 1453 void jbd2_journal_destroy_checkpoint(journal_t *journal); 1447 1454 void __jbd2_journal_insert_checkpoint(struct journal_head *, transaction_t *); 1448 1455
+1 -1
include/linux/tcp.h
··· 513 513 struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue; 514 514 int somaxconn = READ_ONCE(sock_net(sk)->core.sysctl_somaxconn); 515 515 516 - queue->fastopenq.max_qlen = min_t(unsigned int, backlog, somaxconn); 516 + WRITE_ONCE(queue->fastopenq.max_qlen, min_t(unsigned int, backlog, somaxconn)); 517 517 } 518 518 519 519 static inline void tcp_move_syn(struct tcp_sock *tp,
+5 -2
include/net/bluetooth/hci_core.h
··· 593 593 const char *fw_info; 594 594 struct dentry *debugfs; 595 595 596 - #ifdef CONFIG_DEV_COREDUMP 597 596 struct hci_devcoredump dump; 598 - #endif 599 597 600 598 struct device dev; 601 599 ··· 820 822 821 823 struct hci_conn *conn; 822 824 bool explicit_connect; 825 + /* Accessed without hdev->lock: */ 823 826 hci_conn_flags_t flags; 824 827 u8 privacy_mode; 825 828 }; ··· 1572 1573 bdaddr_t *addr, u8 addr_type); 1573 1574 void hci_conn_params_del(struct hci_dev *hdev, bdaddr_t *addr, u8 addr_type); 1574 1575 void hci_conn_params_clear_disabled(struct hci_dev *hdev); 1576 + void hci_conn_params_free(struct hci_conn_params *param); 1575 1577 1578 + void hci_pend_le_list_del_init(struct hci_conn_params *param); 1579 + void hci_pend_le_list_add(struct hci_conn_params *param, 1580 + struct list_head *list); 1576 1581 struct hci_conn_params *hci_pend_le_action_lookup(struct list_head *list, 1577 1582 bdaddr_t *addr, 1578 1583 u8 addr_type);
+1 -1
include/net/bonding.h
··· 277 277 unsigned short vlan_id; 278 278 }; 279 279 280 - /** 280 + /* 281 281 * Returns NULL if the net_device does not belong to any of the bond's slaves 282 282 * 283 283 * Caller must hold bond lock for read
+2 -1
include/net/cfg802154.h
··· 170 170 } 171 171 172 172 /** 173 - * @WPAN_PHY_FLAG_TRANSMIT_POWER: Indicates that transceiver will support 173 + * enum wpan_phy_flags - WPAN PHY state flags 174 + * @WPAN_PHY_FLAG_TXPOWER: Indicates that transceiver will support 174 175 * transmit power setting. 175 176 * @WPAN_PHY_FLAG_CCA_ED_LEVEL: Indicates that transceiver will support cca ed 176 177 * level setting.
+2 -2
include/net/codel.h
··· 145 145 * @maxpacket: largest packet we've seen so far 146 146 * @drop_count: temp count of dropped packets in dequeue() 147 147 * @drop_len: bytes of dropped packets in dequeue() 148 - * ecn_mark: number of packets we ECN marked instead of dropping 149 - * ce_mark: number of packets CE marked because sojourn time was above ce_threshold 148 + * @ecn_mark: number of packets we ECN marked instead of dropping 149 + * @ce_mark: number of packets CE marked because sojourn time was above ce_threshold 150 150 */ 151 151 struct codel_stats { 152 152 u32 maxpacket;
+16 -12
include/net/devlink.h
··· 221 221 /** 222 222 * struct devlink_dpipe_header - dpipe header object 223 223 * @name: header name 224 - * @id: index, global/local detrmined by global bit 224 + * @id: index, global/local determined by global bit 225 225 * @fields: fields 226 226 * @fields_count: number of fields 227 227 * @global: indicates if header is shared like most protocol header ··· 241 241 * @header_index: header index (packets can have several headers of same 242 242 * type like in case of tunnels) 243 243 * @header: header 244 - * @fieled_id: field index 244 + * @field_id: field index 245 245 */ 246 246 struct devlink_dpipe_match { 247 247 enum devlink_dpipe_match_type type; ··· 256 256 * @header_index: header index (packets can have several headers of same 257 257 * type like in case of tunnels) 258 258 * @header: header 259 - * @fieled_id: field index 259 + * @field_id: field index 260 260 */ 261 261 struct devlink_dpipe_action { 262 262 enum devlink_dpipe_action_type type; ··· 292 292 * struct devlink_dpipe_entry - table entry object 293 293 * @index: index of the entry in the table 294 294 * @match_values: match values 295 - * @matche_values_count: count of matches tuples 295 + * @match_values_count: count of matches tuples 296 296 * @action_values: actions values 297 297 * @action_values_count: count of actions values 298 298 * @counter: value of counter ··· 342 342 */ 343 343 struct devlink_dpipe_table { 344 344 void *priv; 345 + /* private: */ 345 346 struct list_head list; 347 + /* public: */ 346 348 const char *name; 347 349 bool counters_enabled; 348 350 bool counter_control_extern; ··· 357 355 358 356 /** 359 357 * struct devlink_dpipe_table_ops - dpipe_table ops 360 - * @actions_dump - dumps all tables actions 361 - * @matches_dump - dumps all tables matches 362 - * @entries_dump - dumps all active entries in the table 363 - * @counters_set_update - when changing the counter status hardware sync 358 + * @actions_dump: dumps all tables actions 359 + * @matches_dump: dumps all tables matches 360 + * @entries_dump: dumps all active entries in the table 361 + * @counters_set_update: when changing the counter status hardware sync 364 362 * maybe needed to allocate/free counter related 365 363 * resources 366 - * @size_get - get size 364 + * @size_get: get size 367 365 */ 368 366 struct devlink_dpipe_table_ops { 369 367 int (*actions_dump)(void *priv, struct sk_buff *skb); ··· 376 374 377 375 /** 378 376 * struct devlink_dpipe_headers - dpipe headers 379 - * @headers - header array can be shared (global bit) or driver specific 380 - * @headers_count - count of headers 377 + * @headers: header array can be shared (global bit) or driver specific 378 + * @headers_count: count of headers 381 379 */ 382 380 struct devlink_dpipe_headers { 383 381 struct devlink_dpipe_header **headers; ··· 389 387 * @size_min: minimum size which can be set 390 388 * @size_max: maximum size which can be set 391 389 * @size_granularity: size granularity 392 - * @size_unit: resource's basic unit 390 + * @unit: resource's basic unit 393 391 */ 394 392 struct devlink_resource_size_params { 395 393 u64 size_min; ··· 459 457 460 458 /** 461 459 * struct devlink_param - devlink configuration parameter data 460 + * @id: devlink parameter id number 462 461 * @name: name of the parameter 463 462 * @generic: indicates if the parameter is generic or driver specific 464 463 * @type: parameter type ··· 635 632 * struct devlink_flash_update_params - Flash Update parameters 636 633 * @fw: pointer to the firmware data to update from 637 634 
* @component: the flash component to update 635 + * @overwrite_mask: which types of flash update are supported (may be %0) 638 636 * 639 637 * With the exception of fw, drivers must opt-in to parameters by 640 638 * setting the appropriate bit in the supported_flash_update_params field in
+1 -1
include/net/inet_frag.h
··· 29 29 }; 30 30 31 31 /** 32 - * fragment queue flags 32 + * enum: fragment queue flags 33 33 * 34 34 * @INET_FRAG_FIRST_IN: first fragment has arrived 35 35 * @INET_FRAG_LAST_IN: final fragment has arrived
+1 -1
include/net/llc_conn.h
··· 111 111 void llc_conn_resend_i_pdu_as_rsp(struct sock *sk, u8 nr, u8 first_f_bit); 112 112 int llc_conn_remove_acked_pdus(struct sock *conn, u8 nr, u16 *how_many_unacked); 113 113 struct sock *llc_lookup_established(struct llc_sap *sap, struct llc_addr *daddr, 114 - struct llc_addr *laddr); 114 + struct llc_addr *laddr, const struct net *net); 115 115 void llc_sap_add_socket(struct llc_sap *sap, struct sock *sk); 116 116 void llc_sap_remove_socket(struct llc_sap *sap, struct sock *sk); 117 117
+4 -2
include/net/llc_pdu.h
··· 269 269 /** 270 270 * llc_pdu_decode_da - extracts dest address of input frame 271 271 * @skb: input skb that destination address must be extracted from it 272 - * @sa: pointer to destination address (6 byte array). 272 + * @da: pointer to destination address (6 byte array). 273 273 * 274 274 * This function extracts destination address(MAC) of input frame. 275 275 */ ··· 321 321 322 322 /** 323 323 * llc_pdu_init_as_test_cmd - sets PDU as TEST 324 - * @skb - Address of the skb to build 324 + * @skb: Address of the skb to build 325 325 * 326 326 * Sets a PDU as TEST 327 327 */ ··· 369 369 /** 370 370 * llc_pdu_init_as_xid_cmd - sets bytes 3, 4 & 5 of LLC header as XID 371 371 * @skb: input skb that header must be set into it. 372 + * @svcs_supported: The class of the LLC (I or II) 373 + * @rx_window: The size of the receive window of the LLC 372 374 * 373 375 * This function sets third,fourth,fifth and sixth bytes of LLC header as 374 376 * a XID PDU.
+1 -1
include/net/nsh.h
··· 192 192 193 193 /** 194 194 * struct nsh_md1_ctx - Keeps track of NSH context data 195 - * @nshc<1-4>: NSH Contexts. 195 + * @context: NSH Contexts. 196 196 */ 197 197 struct nsh_md1_ctx { 198 198 __be32 context[4];
+1 -1
include/net/pie.h
··· 17 17 /** 18 18 * struct pie_params - contains pie parameters 19 19 * @target: target delay in pschedtime 20 - * @tudpate: interval at which drop probability is calculated 20 + * @tupdate: interval at which drop probability is calculated 21 21 * @limit: total number of packets that can be in the queue 22 22 * @alpha: parameter to control drop probability 23 23 * @beta: parameter to control drop probability
+1 -1
include/net/rsi_91x.h
··· 1 - /** 1 + /* 2 2 * Copyright (c) 2017 Redpine Signals Inc. 3 3 * 4 4 * Permission to use, copy, modify, and/or distribute this software for any
+24 -7
include/net/tcp.h
··· 1509 1509 static inline int keepalive_intvl_when(const struct tcp_sock *tp) 1510 1510 { 1511 1511 struct net *net = sock_net((struct sock *)tp); 1512 + int val; 1512 1513 1513 - return tp->keepalive_intvl ? : 1514 - READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl); 1514 + /* Paired with WRITE_ONCE() in tcp_sock_set_keepintvl() 1515 + * and do_tcp_setsockopt(). 1516 + */ 1517 + val = READ_ONCE(tp->keepalive_intvl); 1518 + 1519 + return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl); 1515 1520 } 1516 1521 1517 1522 static inline int keepalive_time_when(const struct tcp_sock *tp) 1518 1523 { 1519 1524 struct net *net = sock_net((struct sock *)tp); 1525 + int val; 1520 1526 1521 - return tp->keepalive_time ? : 1522 - READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time); 1527 + /* Paired with WRITE_ONCE() in tcp_sock_set_keepidle_locked() */ 1528 + val = READ_ONCE(tp->keepalive_time); 1529 + 1530 + return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time); 1523 1531 } 1524 1532 1525 1533 static inline int keepalive_probes(const struct tcp_sock *tp) 1526 1534 { 1527 1535 struct net *net = sock_net((struct sock *)tp); 1536 + int val; 1528 1537 1529 - return tp->keepalive_probes ? : 1530 - READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes); 1538 + /* Paired with WRITE_ONCE() in tcp_sock_set_keepcnt() 1539 + * and do_tcp_setsockopt(). 1540 + */ 1541 + val = READ_ONCE(tp->keepalive_probes); 1542 + 1543 + return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes); 1531 1544 } 1532 1545 1533 1546 static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp) ··· 2061 2048 static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp) 2062 2049 { 2063 2050 struct net *net = sock_net((struct sock *)tp); 2064 - return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat); 2051 + u32 val; 2052 + 2053 + val = READ_ONCE(tp->notsent_lowat); 2054 + 2055 + return val ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat); 2065 2056 } 2066 2057 2067 2058 bool tcp_stream_memory_free(const struct sock *sk, int wake);
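The keepalive changes are all one pattern: a per-socket value that may be written locklessly must be loaded exactly once, then tested and used, so the test and the use cannot observe different values. A userspace model with C11 relaxed atomics standing in for READ_ONCE/WRITE_ONCE:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int keepalive_intvl;             /* per-socket, 0 = unset */
static _Atomic int sysctl_keepalive_intvl = 75; /* namespace default */

static int keepalive_intvl_when(void)
{
    /* One load: with the field read twice (test, then use), a
     * concurrent writer could slip in between, which is the race
     * the kernel hunk closes. */
    int val = atomic_load_explicit(&keepalive_intvl,
                                   memory_order_relaxed);

    return val ? val : atomic_load_explicit(&sysctl_keepalive_intvl,
                                            memory_order_relaxed);
}

int main(void)
{
    printf("%d\n", keepalive_intvl_when());     /* 75: fallback */
    atomic_store_explicit(&keepalive_intvl, 30, memory_order_relaxed);
    printf("%d\n", keepalive_intvl_when());     /* 30: override */
    return 0;
}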
+4 -8
include/trace/events/jbd2.h
··· 462 462 TRACE_EVENT(jbd2_shrink_checkpoint_list, 463 463 464 464 TP_PROTO(journal_t *journal, tid_t first_tid, tid_t tid, tid_t last_tid, 465 - unsigned long nr_freed, unsigned long nr_scanned, 466 - tid_t next_tid), 465 + unsigned long nr_freed, tid_t next_tid), 467 466 468 - TP_ARGS(journal, first_tid, tid, last_tid, nr_freed, 469 - nr_scanned, next_tid), 467 + TP_ARGS(journal, first_tid, tid, last_tid, nr_freed, next_tid), 470 468 471 469 TP_STRUCT__entry( 472 470 __field(dev_t, dev) ··· 472 474 __field(tid_t, tid) 473 475 __field(tid_t, last_tid) 474 476 __field(unsigned long, nr_freed) 475 - __field(unsigned long, nr_scanned) 476 477 __field(tid_t, next_tid) 477 478 ), 478 479 ··· 481 484 __entry->tid = tid; 482 485 __entry->last_tid = last_tid; 483 486 __entry->nr_freed = nr_freed; 484 - __entry->nr_scanned = nr_scanned; 485 487 __entry->next_tid = next_tid; 486 488 ), 487 489 488 490 TP_printk("dev %d,%d shrink transaction %u-%u(%u) freed %lu " 489 - "scanned %lu next transaction %u", 491 + "next transaction %u", 490 492 MAJOR(__entry->dev), MINOR(__entry->dev), 491 493 __entry->first_tid, __entry->tid, __entry->last_tid, 492 - __entry->nr_freed, __entry->nr_scanned, __entry->next_tid) 494 + __entry->nr_freed, __entry->next_tid) 493 495 ); 494 496 495 497 #endif /* _TRACE_JBD2_H */
+3
include/uapi/linux/fuse.h
··· 206 206 * - add extension header 207 207 * - add FUSE_EXT_GROUPS 208 208 * - add FUSE_CREATE_SUPP_GROUP 209 + * - add FUSE_HAS_EXPIRE_ONLY 209 210 */ 210 211 211 212 #ifndef _LINUX_FUSE_H ··· 370 369 * FUSE_HAS_INODE_DAX: use per inode DAX 371 370 * FUSE_CREATE_SUPP_GROUP: add supplementary group info to create, mkdir, 372 371 * symlink and mknod (single group that matches parent) 372 + * FUSE_HAS_EXPIRE_ONLY: kernel supports expiry-only entry invalidation 373 373 */ 374 374 #define FUSE_ASYNC_READ (1 << 0) 375 375 #define FUSE_POSIX_LOCKS (1 << 1) ··· 408 406 #define FUSE_SECURITY_CTX (1ULL << 32) 409 407 #define FUSE_HAS_INODE_DAX (1ULL << 33) 410 408 #define FUSE_CREATE_SUPP_GROUP (1ULL << 34) 409 + #define FUSE_HAS_EXPIRE_ONLY (1ULL << 35) 411 410 412 411 /** 413 412 * CUSE INIT request/reply flags
+27 -27
io_uring/io_uring.c
··· 1948 1948 ret = io_issue_sqe(req, issue_flags); 1949 1949 if (ret != -EAGAIN) 1950 1950 break; 1951 + 1952 + /* 1953 + * If REQ_F_NOWAIT is set, then don't wait or retry with 1954 + * poll. -EAGAIN is final for that case. 1955 + */ 1956 + if (req->flags & REQ_F_NOWAIT) 1957 + break; 1958 + 1951 1959 /* 1952 1960 * We can get EAGAIN for iopolled IO even though we're 1953 1961 * forcing a sync submission from here, since we can't ··· 3437 3429 unsigned long addr, unsigned long len, 3438 3430 unsigned long pgoff, unsigned long flags) 3439 3431 { 3440 - const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags); 3441 - struct vm_unmapped_area_info info; 3442 3432 void *ptr; 3443 3433 3444 3434 /* ··· 3451 3445 if (IS_ERR(ptr)) 3452 3446 return -ENOMEM; 3453 3447 3454 - info.flags = VM_UNMAPPED_AREA_TOPDOWN; 3455 - info.length = len; 3456 - info.low_limit = max(PAGE_SIZE, mmap_min_addr); 3457 - info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base); 3458 - #ifdef SHM_COLOUR 3459 - info.align_mask = PAGE_MASK & (SHM_COLOUR - 1UL); 3460 - #else 3461 - info.align_mask = PAGE_MASK & (SHMLBA - 1UL); 3462 - #endif 3463 - info.align_offset = (unsigned long) ptr; 3464 - 3465 3448 /* 3466 - * A failed mmap() very likely causes application failure, 3467 - * so fall back to the bottom-up function here. This scenario 3468 - * can happen with large stack limits and large mmap() 3469 - * allocations. 3449 + * Some architectures have strong cache aliasing requirements. 3450 + * For such architectures we need a coherent mapping which aliases 3451 + * kernel memory *and* userspace memory. To achieve that: 3452 + * - use a NULL file pointer to reference physical memory, and 3453 + * - use the kernel virtual address of the shared io_uring context 3454 + * (instead of the userspace-provided address, which has to be 0UL 3455 + * anyway). 3456 + * For architectures without such aliasing requirements, the 3457 + * architecture will return any suitable mapping because addr is 0. 3470 3458 */ 3471 - addr = vm_unmapped_area(&info); 3472 - if (offset_in_page(addr)) { 3473 - info.flags = 0; 3474 - info.low_limit = TASK_UNMAPPED_BASE; 3475 - info.high_limit = mmap_end; 3476 - addr = vm_unmapped_area(&info); 3477 - } 3478 - 3479 - return addr; 3459 + filp = NULL; 3460 + flags |= MAP_SHARED; 3461 + pgoff = 0; /* has been translated to ptr above */ 3462 + #ifdef SHM_COLOUR 3463 + addr = (uintptr_t) ptr; 3464 + #else 3465 + addr = 0UL; 3466 + #endif 3467 + return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags); 3480 3468 } 3481 3469 3482 3470 #else /* !CONFIG_MMU */ ··· 3870 3870 ctx->syscall_iopoll = 1; 3871 3871 3872 3872 ctx->compat = in_compat_syscall(); 3873 - if (!capable(CAP_IPC_LOCK)) 3873 + if (!ns_capable_noaudit(&init_user_ns, CAP_IPC_LOCK)) 3874 3874 ctx->user = get_uid(current_user()); 3875 3875 3876 3876 /*
+25 -7
kernel/bpf/verifier.c
··· 5573 5573 * Since recursion is prevented by check_cfg() this algorithm 5574 5574 * only needs a local stack of MAX_CALL_FRAMES to remember callsites 5575 5575 */ 5576 - static int check_max_stack_depth(struct bpf_verifier_env *env) 5576 + static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx) 5577 5577 { 5578 - int depth = 0, frame = 0, idx = 0, i = 0, subprog_end; 5579 5578 struct bpf_subprog_info *subprog = env->subprog_info; 5580 5579 struct bpf_insn *insn = env->prog->insnsi; 5580 + int depth = 0, frame = 0, i, subprog_end; 5581 5581 bool tail_call_reachable = false; 5582 5582 int ret_insn[MAX_CALL_FRAMES]; 5583 5583 int ret_prog[MAX_CALL_FRAMES]; 5584 5584 int j; 5585 5585 5586 + i = subprog[idx].start; 5586 5587 process_func: 5587 5588 /* protect against potential stack overflow that might happen when 5588 5589 * bpf2bpf calls get combined with tailcalls. Limit the caller's stack ··· 5622 5621 continue_func: 5623 5622 subprog_end = subprog[idx + 1].start; 5624 5623 for (; i < subprog_end; i++) { 5625 - int next_insn; 5624 + int next_insn, sidx; 5626 5625 5627 5626 if (!bpf_pseudo_call(insn + i) && !bpf_pseudo_func(insn + i)) 5628 5627 continue; ··· 5632 5631 5633 5632 /* find the callee */ 5634 5633 next_insn = i + insn[i].imm + 1; 5635 - idx = find_subprog(env, next_insn); 5636 - if (idx < 0) { 5634 + sidx = find_subprog(env, next_insn); 5635 + if (sidx < 0) { 5637 5636 WARN_ONCE(1, "verifier bug. No program starts at insn %d\n", 5638 5637 next_insn); 5639 5638 return -EFAULT; 5640 5639 } 5641 - if (subprog[idx].is_async_cb) { 5642 - if (subprog[idx].has_tail_call) { 5640 + if (subprog[sidx].is_async_cb) { 5641 + if (subprog[sidx].has_tail_call) { 5643 5642 verbose(env, "verifier bug. subprog has tail_call and async cb\n"); 5644 5643 return -EFAULT; 5645 5644 } ··· 5648 5647 continue; 5649 5648 } 5650 5649 i = next_insn; 5650 + idx = sidx; 5651 5651 5652 5652 if (subprog[idx].has_tail_call) 5653 5653 tail_call_reachable = true; ··· 5682 5680 i = ret_insn[frame]; 5683 5681 idx = ret_prog[frame]; 5684 5682 goto continue_func; 5683 + } 5684 + 5685 + static int check_max_stack_depth(struct bpf_verifier_env *env) 5686 + { 5687 + struct bpf_subprog_info *si = env->subprog_info; 5688 + int ret; 5689 + 5690 + for (int i = 0; i < env->subprog_cnt; i++) { 5691 + if (!i || si[i].is_async_cb) { 5692 + ret = check_max_stack_depth_subprog(env, i); 5693 + if (ret < 0) 5694 + return ret; 5695 + } 5696 + continue; 5697 + } 5698 + return 0; 5685 5699 } 5686 5700 5687 5701 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+5 -5
kernel/sys.c
··· 2535 2535 else 2536 2536 return -EINVAL; 2537 2537 break; 2538 - case PR_GET_AUXV: 2539 - if (arg4 || arg5) 2540 - return -EINVAL; 2541 - error = prctl_get_auxv((void __user *)arg2, arg3); 2542 - break; 2543 2538 default: 2544 2539 return -EINVAL; 2545 2540 } ··· 2688 2693 break; 2689 2694 case PR_SET_VMA: 2690 2695 error = prctl_set_vma(arg2, arg3, arg4, arg5); 2696 + break; 2697 + case PR_GET_AUXV: 2698 + if (arg4 || arg5) 2699 + return -EINVAL; 2700 + error = prctl_get_auxv((void __user *)arg2, arg3); 2691 2701 break; 2692 2702 #ifdef CONFIG_KSM 2693 2703 case PR_SET_MEMORY_MERGE:
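The prctl hunk is a pure case-hoisting fix, and the bug class is worth a minimal reproduction: a case label pasted inside a nested switch binds to the inner switch's controlling expression, so the new option is matched against the wrong variable. All names below are invented.

#include <stdio.h>

enum { OP_OLD = 1, OP_OTHER = 2, OP_NEW = 3 };

static int dispatch(int op, int arg)
{
    switch (op) {
    case OP_OLD:
        switch (arg) {
        case 0:
            return 10;
        case OP_NEW:    /* buggy placement: compares against 'arg',
                         * so dispatch(OP_NEW, ...) never gets here */
            return 30;
        }
        return -1;
    case OP_OTHER:
        return 20;
    /* the fix hoists 'case OP_NEW:' to this outer switch */
    }
    return -22;         /* -EINVAL, what callers actually saw */
}

int main(void)
{
    printf("%d\n", dispatch(OP_NEW, 0));    /* -22 until hoisted */
    return 0;
}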
+13 -1
kernel/trace/ring_buffer.c
··· 536 536 unsigned flags; 537 537 int cpus; 538 538 atomic_t record_disabled; 539 + atomic_t resizing; 539 540 cpumask_var_t cpumask; 540 541 541 542 struct lock_class_key *reader_lock_key; ··· 2168 2167 2169 2168 /* prevent another thread from changing buffer sizes */ 2170 2169 mutex_lock(&buffer->mutex); 2171 - 2170 + atomic_inc(&buffer->resizing); 2172 2171 2173 2172 if (cpu_id == RING_BUFFER_ALL_CPUS) { 2174 2173 /* ··· 2323 2322 atomic_dec(&buffer->record_disabled); 2324 2323 } 2325 2324 2325 + atomic_dec(&buffer->resizing); 2326 2326 mutex_unlock(&buffer->mutex); 2327 2327 return 0; 2328 2328 ··· 2344 2342 } 2345 2343 } 2346 2344 out_err_unlock: 2345 + atomic_dec(&buffer->resizing); 2347 2346 mutex_unlock(&buffer->mutex); 2348 2347 return err; 2349 2348 } ··· 5542 5539 if (local_read(&cpu_buffer_a->committing)) 5543 5540 goto out_dec; 5544 5541 if (local_read(&cpu_buffer_b->committing)) 5542 + goto out_dec; 5543 + 5544 + /* 5545 + * When resize is in progress, we cannot swap it because 5546 + * it will mess the state of the cpu buffer. 5547 + */ 5548 + if (atomic_read(&buffer_a->resizing)) 5549 + goto out_dec; 5550 + if (atomic_read(&buffer_b->resizing)) 5545 5551 goto out_dec; 5546 5552 5547 5553 buffer_a->buffers[cpu] = cpu_buffer_b;
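The ring-buffer fix follows a simple guard pattern: bracket the resize window with an atomic counter and make the swap path bail out while any resize is in flight. A stripped-down model (the kernel additionally holds buffer->mutex around the counter updates):

#include <errno.h>
#include <stdatomic.h>

static atomic_int resizing;

static void resize_begin(void) { atomic_fetch_add(&resizing, 1); }
static void resize_end(void)   { atomic_fetch_sub(&resizing, 1); }

static int try_swap(void)
{
    if (atomic_load(&resizing) > 0)
        return -EBUSY;          /* never swap mid-resize */
    /* ...swap the per-CPU buffers here... */
    return 0;
}

int main(void)
{
    resize_begin();
    int busy = try_swap();      /* -EBUSY */
    resize_end();
    return busy == -EBUSY && try_swap() == 0 ? 0 : 1;
}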
+2 -1
kernel/trace/trace.c
··· 1928 1928 * place on this CPU. We fail to record, but we reset 1929 1929 * the max trace buffer (no one writes directly to it) 1930 1930 * and flag that it failed. 1931 + * Another reason is resize is in progress. 1931 1932 */ 1932 1933 trace_array_printk_buf(tr->max_buffer.buffer, _THIS_IP_, 1933 - "Failed to swap buffers due to commit in progress\n"); 1934 + "Failed to swap buffers due to commit or resize in progress\n"); 1934 1935 } 1935 1936 1936 1937 WARN_ON_ONCE(ret && ret != -EAGAIN && ret != -EBUSY);
+2 -1
kernel/trace/trace_events_hist.c
··· 6668 6668 goto out_unreg; 6669 6669 6670 6670 if (has_hist_vars(hist_data) || hist_data->n_var_refs) { 6671 - if (save_hist_vars(hist_data)) 6671 + ret = save_hist_vars(hist_data); 6672 + if (ret) 6672 6673 goto out_unreg; 6673 6674 } 6674 6675
-4
kernel/trace/tracing_map.h
··· 272 272 extern u64 tracing_map_read_var(struct tracing_map_elt *elt, unsigned int i); 273 273 extern u64 tracing_map_read_var_once(struct tracing_map_elt *elt, unsigned int i); 274 274 275 - extern void tracing_map_set_field_descr(struct tracing_map *map, 276 - unsigned int i, 277 - unsigned int key_offset, 278 - tracing_map_cmp_fn_t cmp_fn); 279 275 extern int 280 276 tracing_map_sort_entries(struct tracing_map *map, 281 277 struct tracing_map_sort_key *sort_keys,
+2 -1
lib/maple_tree.c
··· 3692 3692 mas->offset = slot; 3693 3693 pivots[slot] = mas->last; 3694 3694 if (mas->last != ULONG_MAX) 3695 - slot++; 3695 + pivots[++slot] = ULONG_MAX; 3696 + 3696 3697 mas->depth = 1; 3697 3698 mas_set_height(mas); 3698 3699 ma_set_meta(node, maple_leaf_64, 0, slot);
+7 -8
lib/sbitmap.c
··· 550 550 551 551 static void __sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr) 552 552 { 553 - int i, wake_index; 553 + int i, wake_index, woken; 554 554 555 555 if (!atomic_read(&sbq->ws_active)) 556 556 return; ··· 567 567 */ 568 568 wake_index = sbq_index_inc(wake_index); 569 569 570 - /* 571 - * It is sufficient to wake up at least one waiter to 572 - * guarantee forward progress. 573 - */ 574 - if (waitqueue_active(&ws->wait) && 575 - wake_up_nr(&ws->wait, nr)) 576 - break; 570 + if (waitqueue_active(&ws->wait)) { 571 + woken = wake_up_nr(&ws->wait, nr); 572 + if (woken == nr) 573 + break; 574 + nr -= woken; 575 + } 577 576 } 578 577 579 578 if (wake_index != atomic_read(&sbq->wake_index))
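The sbitmap change hinges on wake_up_nr() reporting how many waiters it actually woke: instead of stopping at the first active queue, the remainder rolls over to the next one until all nr wakeups are delivered. A self-contained model with an array standing in for the wait queues:

#include <stdio.h>

#define NR_QUEUES 4

static int waiters[NR_QUEUES] = { 1, 0, 3, 2 };

/* Stand-in for wake_up_nr(): wakes up to n waiters on queue i,
 * returns how many it really woke. */
static int wake_queue(int i, int n)
{
    int woken = waiters[i] < n ? waiters[i] : n;

    waiters[i] -= woken;
    return woken;
}

static void wake_up_nr_queues(int start, int nr)
{
    /* The fix: subtract what was actually woken and keep scanning,
     * rather than breaking after the first non-empty queue. */
    for (int i = 0; i < NR_QUEUES && nr > 0; i++)
        nr -= wake_queue((start + i) % NR_QUEUES, nr);
}

int main(void)
{
    wake_up_nr_queues(0, 4);
    for (int i = 0; i < NR_QUEUES; i++)
        printf("%d ", waiters[i]);  /* 0 0 0 2: all 4 wakeups landed */
    printf("\n");
    return 0;
}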
+4 -1
lib/test_maple_tree.c
··· 1898 1898 725}; 1899 1899 static const unsigned long level2_32[] = { 1747, 2000, 1750, 1755, 1900 1900 1760, 1765}; 1901 + unsigned long last_index; 1901 1902 1902 1903 if (MAPLE_32BIT) { 1903 1904 nr_entries = 500; 1904 1905 level2 = level2_32; 1906 + last_index = 0x138e; 1905 1907 } else { 1906 1908 nr_entries = 200; 1907 1909 level2 = level2_64; 1910 + last_index = 0x7d6; 1908 1911 } 1909 1912 1910 1913 for (i = 0; i <= nr_entries; i++) ··· 2014 2011 2015 2012 val = mas_next(&mas, ULONG_MAX); 2016 2013 MT_BUG_ON(mt, val != NULL); 2017 - MT_BUG_ON(mt, mas.index != 0x7d6); 2014 + MT_BUG_ON(mt, mas.index != last_index); 2018 2015 MT_BUG_ON(mt, mas.last != ULONG_MAX); 2019 2016 2020 2017 val = mas_prev(&mas, 0);
+5 -4
mm/mlock.c
··· 477 477 { 478 478 unsigned long nstart, end, tmp; 479 479 struct vm_area_struct *vma, *prev; 480 - int error; 481 480 VMA_ITERATOR(vmi, current->mm, start); 482 481 483 482 VM_BUG_ON(offset_in_page(start)); ··· 497 498 nstart = start; 498 499 tmp = vma->vm_start; 499 500 for_each_vma_range(vmi, vma, end) { 501 + int error; 500 502 vm_flags_t newflags; 501 503 502 504 if (vma->vm_start != tmp) ··· 511 511 tmp = end; 512 512 error = mlock_fixup(&vmi, vma, &prev, nstart, tmp, newflags); 513 513 if (error) 514 - break; 514 + return error; 515 + tmp = vma_iter_end(&vmi); 515 516 nstart = tmp; 516 517 } 517 518 518 - if (vma_iter_end(&vmi) < end) 519 + if (tmp < end) 519 520 return -ENOMEM; 520 521 521 - return error; 522 + return 0; 522 523 } 523 524 524 525 /*
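The mlock loop now returns the first error immediately and keeps tmp in step with the iterator so the trailing gap check is reliable. The coverage logic itself is easy to model: walk consecutive ranges, fail on the first hole, and verify the walk reached end. Types and the helper below are illustrative.

#include <errno.h>
#include <stdio.h>

struct range { unsigned long start, end; };

/* Returns 0 only if [start, end) is fully covered by consecutive
 * ranges, mirroring the tmp bookkeeping in the fixed loop. */
static int apply_locked(const struct range *r, int n,
                        unsigned long start, unsigned long end)
{
    unsigned long tmp = start;

    for (int i = 0; i < n && r[i].start < end; i++) {
        if (r[i].start != tmp)
            return -ENOMEM;     /* hole before this range */
        /* ...flag update would go here; its error returns now,
         * instead of leaking out after the loop... */
        tmp = r[i].end < end ? r[i].end : end;
    }
    return tmp < end ? -ENOMEM : 0;     /* hole at the tail */
}

int main(void)
{
    struct range full[] = { { 0, 4096 }, { 4096, 8192 } };
    struct range gap[]  = { { 0, 4096 }, { 8192, 12288 } };

    printf("%d %d\n", apply_locked(full, 2, 0, 8192),   /*   0 */
                      apply_locked(gap, 2, 0, 12288));  /* -12 */
    return 0;
}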
+7 -7
net/bluetooth/hci_conn.c
··· 118 118 */ 119 119 params->explicit_connect = false; 120 120 121 - list_del_init(&params->action); 121 + hci_pend_le_list_del_init(params); 122 122 123 123 switch (params->auto_connect) { 124 124 case HCI_AUTO_CONN_EXPLICIT: ··· 127 127 return; 128 128 case HCI_AUTO_CONN_DIRECT: 129 129 case HCI_AUTO_CONN_ALWAYS: 130 - list_add(&params->action, &hdev->pend_le_conns); 130 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 131 131 break; 132 132 case HCI_AUTO_CONN_REPORT: 133 - list_add(&params->action, &hdev->pend_le_reports); 133 + hci_pend_le_list_add(params, &hdev->pend_le_reports); 134 134 break; 135 135 default: 136 136 break; ··· 1426 1426 if (params->auto_connect == HCI_AUTO_CONN_DISABLED || 1427 1427 params->auto_connect == HCI_AUTO_CONN_REPORT || 1428 1428 params->auto_connect == HCI_AUTO_CONN_EXPLICIT) { 1429 - list_del_init(&params->action); 1430 - list_add(&params->action, &hdev->pend_le_conns); 1429 + hci_pend_le_list_del_init(params); 1430 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 1431 1431 } 1432 1432 1433 1433 params->explicit_connect = true; ··· 1684 1684 if (!link) { 1685 1685 hci_conn_drop(acl); 1686 1686 hci_conn_drop(sco); 1687 - return NULL; 1687 + return ERR_PTR(-ENOLINK); 1688 1688 } 1689 1689 1690 1690 sco->setting = setting; ··· 2254 2254 if (!link) { 2255 2255 hci_conn_drop(le); 2256 2256 hci_conn_drop(cis); 2257 - return NULL; 2257 + return ERR_PTR(-ENOLINK); 2258 2258 } 2259 2259 2260 2260 /* If LE is already connected and CIS handle is already set proceed to
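The two return-value hunks switch from NULL to the kernel's ERR_PTR convention so callers learn why link creation failed. The convention itself fits in a few lines of userspace C (this mirrors the shape of include/linux/err.h, simplified):

#include <errno.h>
#include <stdio.h>

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
    /* the top 4095 addresses are reserved for encoded errnos */
    return (unsigned long)p >= (unsigned long)-4095;
}

static void *connect_link(int have_link)
{
    if (!have_link)
        return ERR_PTR(-ENOLINK);   /* was: return NULL */
    return "conn";
}

int main(void)
{
    void *c = connect_link(0);

    if (IS_ERR(c))
        printf("failed: %ld\n", PTR_ERR(c));    /* failed: -67 */
    return 0;
}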
+34 -8
net/bluetooth/hci_core.c
··· 1972 1972 struct adv_monitor *monitor) 1973 1973 { 1974 1974 int status = 0; 1975 + int handle; 1975 1976 1976 1977 switch (hci_get_adv_monitor_offload_ext(hdev)) { 1977 1978 case HCI_ADV_MONITOR_EXT_NONE: /* also goes here when powered off */ ··· 1981 1980 goto free_monitor; 1982 1981 1983 1982 case HCI_ADV_MONITOR_EXT_MSFT: 1983 + handle = monitor->handle; 1984 1984 status = msft_remove_monitor(hdev, monitor); 1985 1985 bt_dev_dbg(hdev, "%s remove monitor %d msft status %d", 1986 - hdev->name, monitor->handle, status); 1986 + hdev->name, handle, status); 1987 1987 break; 1988 1988 } 1989 1989 ··· 2251 2249 return NULL; 2252 2250 } 2253 2251 2254 - /* This function requires the caller holds hdev->lock */ 2252 + /* This function requires the caller holds hdev->lock or rcu_read_lock */ 2255 2253 struct hci_conn_params *hci_pend_le_action_lookup(struct list_head *list, 2256 2254 bdaddr_t *addr, u8 addr_type) 2257 2255 { 2258 2256 struct hci_conn_params *param; 2259 2257 2260 - list_for_each_entry(param, list, action) { 2258 + rcu_read_lock(); 2259 + 2260 + list_for_each_entry_rcu(param, list, action) { 2261 2261 if (bacmp(&param->addr, addr) == 0 && 2262 - param->addr_type == addr_type) 2262 + param->addr_type == addr_type) { 2263 + rcu_read_unlock(); 2263 2264 return param; 2265 + } 2264 2266 } 2265 2267 2268 + rcu_read_unlock(); 2269 + 2266 2270 return NULL; 2271 + } 2272 + 2273 + /* This function requires the caller holds hdev->lock */ 2274 + void hci_pend_le_list_del_init(struct hci_conn_params *param) 2275 + { 2276 + if (list_empty(&param->action)) 2277 + return; 2278 + 2279 + list_del_rcu(&param->action); 2280 + synchronize_rcu(); 2281 + INIT_LIST_HEAD(&param->action); 2282 + } 2283 + 2284 + /* This function requires the caller holds hdev->lock */ 2285 + void hci_pend_le_list_add(struct hci_conn_params *param, 2286 + struct list_head *list) 2287 + { 2288 + list_add_rcu(&param->action, list); 2267 2289 } 2268 2290 2269 2291 /* This function requires the caller holds hdev->lock */ ··· 2323 2297 return params; 2324 2298 } 2325 2299 2326 - static void hci_conn_params_free(struct hci_conn_params *params) 2300 + void hci_conn_params_free(struct hci_conn_params *params) 2327 2301 { 2302 + hci_pend_le_list_del_init(params); 2303 + 2328 2304 if (params->conn) { 2329 2305 hci_conn_drop(params->conn); 2330 2306 hci_conn_put(params->conn); 2331 2307 } 2332 2308 2333 - list_del(&params->action); 2334 2309 list_del(&params->list); 2335 2310 kfree(params); 2336 2311 } ··· 2369 2342 continue; 2370 2343 } 2371 2344 2372 - list_del(&params->list); 2373 - kfree(params); 2345 + hci_conn_params_free(params); 2374 2346 } 2375 2347 2376 2348 BT_DBG("All LE disabled connection parameters were removed");
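The new pend-list helpers encode one idempotence trick: after detaching, re-initialize the node to the empty state so a second delete is a harmless no-op. A single-threaded model with the RCU primitives stubbed out (only the list discipline is real here):

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *n, struct list_head *head)
{
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

/* Shape of hci_pend_le_list_del_init(); in the kernel this is
 * list_del_rcu() plus synchronize_rcu() before the re-init. */
static void pend_list_del_init(struct list_head *entry)
{
    if (list_empty(entry))
        return;                 /* already detached: no-op */
    list_del(entry);
    INIT_LIST_HEAD(entry);      /* makes the next call a no-op */
}

int main(void)
{
    struct list_head head, e;

    INIT_LIST_HEAD(&head);
    INIT_LIST_HEAD(&e);
    list_add(&e, &head);
    pend_list_del_init(&e);
    pend_list_del_init(&e);     /* safe double-delete */
    printf("%d\n", list_empty(&head));  /* 1 */
    return 0;
}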
+9 -6
net/bluetooth/hci_event.c
··· 1564 1564 1565 1565 params = hci_conn_params_lookup(hdev, &cp->bdaddr, cp->bdaddr_type); 1566 1566 if (params) 1567 - params->privacy_mode = cp->mode; 1567 + WRITE_ONCE(params->privacy_mode, cp->mode); 1568 1568 1569 1569 hci_dev_unlock(hdev); 1570 1570 ··· 2784 2784 hci_enable_advertising(hdev); 2785 2785 } 2786 2786 2787 + /* Inform sockets conn is gone before we delete it */ 2788 + hci_disconn_cfm(conn, HCI_ERROR_UNSPECIFIED); 2789 + 2787 2790 goto done; 2788 2791 } 2789 2792 ··· 2807 2804 2808 2805 case HCI_AUTO_CONN_DIRECT: 2809 2806 case HCI_AUTO_CONN_ALWAYS: 2810 - list_del_init(&params->action); 2811 - list_add(&params->action, &hdev->pend_le_conns); 2807 + hci_pend_le_list_del_init(params); 2808 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 2812 2809 break; 2813 2810 2814 2811 default: ··· 3426 3423 3427 3424 case HCI_AUTO_CONN_DIRECT: 3428 3425 case HCI_AUTO_CONN_ALWAYS: 3429 - list_del_init(&params->action); 3430 - list_add(&params->action, &hdev->pend_le_conns); 3426 + hci_pend_le_list_del_init(params); 3427 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 3431 3428 hci_update_passive_scan(hdev); 3432 3429 break; 3433 3430 ··· 5965 5962 params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst, 5966 5963 conn->dst_type); 5967 5964 if (params) { 5968 - list_del_init(&params->action); 5965 + hci_pend_le_list_del_init(params); 5969 5966 if (params->conn) { 5970 5967 hci_conn_drop(params->conn); 5971 5968 hci_conn_put(params->conn);
+108 -13
net/bluetooth/hci_sync.c
··· 2160 2160 return 0; 2161 2161 } 2162 2162 2163 + struct conn_params { 2164 + bdaddr_t addr; 2165 + u8 addr_type; 2166 + hci_conn_flags_t flags; 2167 + u8 privacy_mode; 2168 + }; 2169 + 2163 2170 /* Adds connection to resolve list if needed. 2164 2171 * Setting params to NULL programs local hdev->irk 2165 2172 */ 2166 2173 static int hci_le_add_resolve_list_sync(struct hci_dev *hdev, 2167 - struct hci_conn_params *params) 2174 + struct conn_params *params) 2168 2175 { 2169 2176 struct hci_cp_le_add_to_resolv_list cp; 2170 2177 struct smp_irk *irk; 2171 2178 struct bdaddr_list_with_irk *entry; 2179 + struct hci_conn_params *p; 2172 2180 2173 2181 if (!use_ll_privacy(hdev)) 2174 2182 return 0; ··· 2211 2203 /* Default privacy mode is always Network */ 2212 2204 params->privacy_mode = HCI_NETWORK_PRIVACY; 2213 2205 2206 + rcu_read_lock(); 2207 + p = hci_pend_le_action_lookup(&hdev->pend_le_conns, 2208 + &params->addr, params->addr_type); 2209 + if (!p) 2210 + p = hci_pend_le_action_lookup(&hdev->pend_le_reports, 2211 + &params->addr, params->addr_type); 2212 + if (p) 2213 + WRITE_ONCE(p->privacy_mode, HCI_NETWORK_PRIVACY); 2214 + rcu_read_unlock(); 2215 + 2214 2216 done: 2215 2217 if (hci_dev_test_flag(hdev, HCI_PRIVACY)) 2216 2218 memcpy(cp.local_irk, hdev->irk, 16); ··· 2233 2215 2234 2216 /* Set Device Privacy Mode. */ 2235 2217 static int hci_le_set_privacy_mode_sync(struct hci_dev *hdev, 2236 - struct hci_conn_params *params) 2218 + struct conn_params *params) 2237 2219 { 2238 2220 struct hci_cp_le_set_privacy_mode cp; 2239 2221 struct smp_irk *irk; ··· 2258 2240 bacpy(&cp.bdaddr, &irk->bdaddr); 2259 2241 cp.mode = HCI_DEVICE_PRIVACY; 2260 2242 2243 + /* Note: params->privacy_mode is not updated since it is a copy */ 2244 + 2261 2245 return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_PRIVACY_MODE, 2262 2246 sizeof(cp), &cp, HCI_CMD_TIMEOUT); 2263 2247 } ··· 2269 2249 * properly set the privacy mode. 2270 2250 */ 2271 2251 static int hci_le_add_accept_list_sync(struct hci_dev *hdev, 2272 - struct hci_conn_params *params, 2252 + struct conn_params *params, 2273 2253 u8 *num_entries) 2274 2254 { 2275 2255 struct hci_cp_le_add_to_accept_list cp; ··· 2467 2447 return __hci_cmd_sync_sk(hdev, opcode, 0, NULL, 0, HCI_CMD_TIMEOUT, sk); 2468 2448 } 2469 2449 2450 + static struct conn_params *conn_params_copy(struct list_head *list, size_t *n) 2451 + { 2452 + struct hci_conn_params *params; 2453 + struct conn_params *p; 2454 + size_t i; 2455 + 2456 + rcu_read_lock(); 2457 + 2458 + i = 0; 2459 + list_for_each_entry_rcu(params, list, action) 2460 + ++i; 2461 + *n = i; 2462 + 2463 + rcu_read_unlock(); 2464 + 2465 + p = kvcalloc(*n, sizeof(struct conn_params), GFP_KERNEL); 2466 + if (!p) 2467 + return NULL; 2468 + 2469 + rcu_read_lock(); 2470 + 2471 + i = 0; 2472 + list_for_each_entry_rcu(params, list, action) { 2473 + /* Racing adds are handled in next scan update */ 2474 + if (i >= *n) 2475 + break; 2476 + 2477 + /* No hdev->lock, but: addr, addr_type are immutable. 2478 + * privacy_mode is only written by us or in 2479 + * hci_cc_le_set_privacy_mode that we wait for. 2480 + * We should be idempotent so MGMT updating flags 2481 + * while we are processing is OK. 
2482 + */ 2483 + bacpy(&p[i].addr, &params->addr); 2484 + p[i].addr_type = params->addr_type; 2485 + p[i].flags = READ_ONCE(params->flags); 2486 + p[i].privacy_mode = READ_ONCE(params->privacy_mode); 2487 + ++i; 2488 + } 2489 + 2490 + rcu_read_unlock(); 2491 + 2492 + *n = i; 2493 + return p; 2494 + } 2495 + 2470 2496 /* Device must not be scanning when updating the accept list. 2471 2497 * 2472 2498 * Update is done using the following sequence: ··· 2532 2466 */ 2533 2467 static u8 hci_update_accept_list_sync(struct hci_dev *hdev) 2534 2468 { 2535 - struct hci_conn_params *params; 2469 + struct conn_params *params; 2536 2470 struct bdaddr_list *b, *t; 2537 2471 u8 num_entries = 0; 2538 2472 bool pend_conn, pend_report; 2539 2473 u8 filter_policy; 2474 + size_t i, n; 2540 2475 int err; 2541 2476 2542 2477 /* Pause advertising if resolving list can be used as controllers ··· 2571 2504 if (hci_conn_hash_lookup_le(hdev, &b->bdaddr, b->bdaddr_type)) 2572 2505 continue; 2573 2506 2507 + /* Pointers not dereferenced, no locks needed */ 2574 2508 pend_conn = hci_pend_le_action_lookup(&hdev->pend_le_conns, 2575 2509 &b->bdaddr, 2576 2510 b->bdaddr_type); ··· 2600 2532 * available accept list entries in the controller, then 2601 2533 * just abort and return filer policy value to not use the 2602 2534 * accept list. 2535 + * 2536 + * The list and params may be mutated while we wait for events, 2537 + * so make a copy and iterate it. 2603 2538 */ 2604 - list_for_each_entry(params, &hdev->pend_le_conns, action) { 2605 - err = hci_le_add_accept_list_sync(hdev, params, &num_entries); 2606 - if (err) 2607 - goto done; 2539 + 2540 + params = conn_params_copy(&hdev->pend_le_conns, &n); 2541 + if (!params) { 2542 + err = -ENOMEM; 2543 + goto done; 2608 2544 } 2545 + 2546 + for (i = 0; i < n; ++i) { 2547 + err = hci_le_add_accept_list_sync(hdev, &params[i], 2548 + &num_entries); 2549 + if (err) { 2550 + kvfree(params); 2551 + goto done; 2552 + } 2553 + } 2554 + 2555 + kvfree(params); 2609 2556 2610 2557 /* After adding all new pending connections, walk through 2611 2558 * the list of pending reports and also add these to the 2612 2559 * accept list if there is still space. Abort if space runs out. 2613 2560 */ 2614 - list_for_each_entry(params, &hdev->pend_le_reports, action) { 2615 - err = hci_le_add_accept_list_sync(hdev, params, &num_entries); 2616 - if (err) 2617 - goto done; 2561 + 2562 + params = conn_params_copy(&hdev->pend_le_reports, &n); 2563 + if (!params) { 2564 + err = -ENOMEM; 2565 + goto done; 2618 2566 } 2567 + 2568 + for (i = 0; i < n; ++i) { 2569 + err = hci_le_add_accept_list_sync(hdev, &params[i], 2570 + &num_entries); 2571 + if (err) { 2572 + kvfree(params); 2573 + goto done; 2574 + } 2575 + } 2576 + 2577 + kvfree(params); 2619 2578 2620 2579 /* Use the allowlist unless the following conditions are all true: 2621 2580 * - We are not currently suspending ··· 4932 4837 struct hci_conn_params *p; 4933 4838 4934 4839 list_for_each_entry(p, &hdev->le_conn_params, list) { 4840 + hci_pend_le_list_del_init(p); 4935 4841 if (p->conn) { 4936 4842 hci_conn_drop(p->conn); 4937 4843 hci_conn_put(p->conn); 4938 4844 p->conn = NULL; 4939 4845 } 4940 - list_del_init(&p->action); 4941 4846 } 4942 4847 4943 4848 BT_DBG("All LE pending actions cleared");
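The key piece of the hci_sync.c rework is conn_params_copy(): hci_le_add_accept_list_sync() sleeps waiting on controller events, so the pending lists can mutate mid-walk; the fix snapshots them into a flat kvcalloc() array first and iterates that. The snapshot pattern, condensed with generic names in place of hci_conn_params:

        #include <linux/list.h>
        #include <linux/rcupdate.h>
        #include <linux/slab.h>         /* kvcalloc(), kvfree() */

        struct entry {
                struct list_head node;
                int val;
        };

        struct snap {
                int val;
        };

        static struct snap *snapshot(struct list_head *list, size_t *n)
        {
                struct entry *e;
                struct snap *s;
                size_t i = 0;

                rcu_read_lock();
                list_for_each_entry_rcu(e, list, node)
                        ++i;
                rcu_read_unlock();

                s = kvcalloc(i, sizeof(*s), GFP_KERNEL);        /* may sleep */
                if (!s)
                        return NULL;

                *n = i;
                i = 0;
                rcu_read_lock();
                list_for_each_entry_rcu(e, list, node) {
                        if (i >= *n)    /* racing adds: next scan update gets them */
                                break;
                        s[i++].val = READ_ONCE(e->val);
                }
                rcu_read_unlock();

                *n = i;
                return s;
        }

The caller walks the array with no locks held and kvfree()s it afterwards, which is exactly what hci_update_accept_list_sync() now does for both pending lists.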
+32 -23
net/bluetooth/iso.c
··· 123 123 { 124 124 struct iso_conn *conn = hcon->iso_data; 125 125 126 - if (conn) 126 + if (conn) { 127 + if (!conn->hcon) 128 + conn->hcon = hcon; 127 129 return conn; 130 + } 128 131 129 132 conn = kzalloc(sizeof(*conn), GFP_KERNEL); 130 133 if (!conn) ··· 303 300 goto unlock; 304 301 } 305 302 306 - hci_dev_unlock(hdev); 307 - hci_dev_put(hdev); 303 + lock_sock(sk); 308 304 309 305 err = iso_chan_add(conn, sk, NULL); 310 - if (err) 311 - return err; 312 - 313 - lock_sock(sk); 306 + if (err) { 307 + release_sock(sk); 308 + goto unlock; 309 + } 314 310 315 311 /* Update source addr of the socket */ 316 312 bacpy(&iso_pi(sk)->src, &hcon->src); ··· 323 321 } 324 322 325 323 release_sock(sk); 326 - return err; 327 324 328 325 unlock: 329 326 hci_dev_unlock(hdev); ··· 390 389 goto unlock; 391 390 } 392 391 393 - hci_dev_unlock(hdev); 394 - hci_dev_put(hdev); 392 + lock_sock(sk); 395 393 396 394 err = iso_chan_add(conn, sk, NULL); 397 - if (err) 398 - return err; 399 - 400 - lock_sock(sk); 395 + if (err) { 396 + release_sock(sk); 397 + goto unlock; 398 + } 401 399 402 400 /* Update source addr of the socket */ 403 401 bacpy(&iso_pi(sk)->src, &hcon->src); ··· 413 413 } 414 414 415 415 release_sock(sk); 416 - return err; 417 416 418 417 unlock: 419 418 hci_dev_unlock(hdev); ··· 1071 1072 size_t len) 1072 1073 { 1073 1074 struct sock *sk = sock->sk; 1074 - struct iso_conn *conn = iso_pi(sk)->conn; 1075 1075 struct sk_buff *skb, **frag; 1076 + size_t mtu; 1076 1077 int err; 1077 1078 1078 1079 BT_DBG("sock %p, sk %p", sock, sk); ··· 1084 1085 if (msg->msg_flags & MSG_OOB) 1085 1086 return -EOPNOTSUPP; 1086 1087 1087 - if (sk->sk_state != BT_CONNECTED) 1088 - return -ENOTCONN; 1088 + lock_sock(sk); 1089 1089 1090 - skb = bt_skb_sendmsg(sk, msg, len, conn->hcon->hdev->iso_mtu, 1091 - HCI_ISO_DATA_HDR_SIZE, 0); 1090 + if (sk->sk_state != BT_CONNECTED) { 1091 + release_sock(sk); 1092 + return -ENOTCONN; 1093 + } 1094 + 1095 + mtu = iso_pi(sk)->conn->hcon->hdev->iso_mtu; 1096 + 1097 + release_sock(sk); 1098 + 1099 + skb = bt_skb_sendmsg(sk, msg, len, mtu, HCI_ISO_DATA_HDR_SIZE, 0); 1092 1100 if (IS_ERR(skb)) 1093 1101 return PTR_ERR(skb); 1094 1102 ··· 1108 1102 while (len) { 1109 1103 struct sk_buff *tmp; 1110 1104 1111 - tmp = bt_skb_sendmsg(sk, msg, len, conn->hcon->hdev->iso_mtu, 1112 - 0, 0); 1105 + tmp = bt_skb_sendmsg(sk, msg, len, mtu, 0, 0); 1113 1106 if (IS_ERR(tmp)) { 1114 1107 kfree_skb(skb); 1115 1108 return PTR_ERR(tmp); ··· 1163 1158 BT_DBG("sk %p", sk); 1164 1159 1165 1160 if (test_and_clear_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) { 1161 + lock_sock(sk); 1166 1162 switch (sk->sk_state) { 1167 1163 case BT_CONNECT2: 1168 - lock_sock(sk); 1169 1164 iso_conn_defer_accept(pi->conn->hcon); 1170 1165 sk->sk_state = BT_CONFIG; 1171 1166 release_sock(sk); 1172 1167 return 0; 1173 1168 case BT_CONNECT: 1169 + release_sock(sk); 1174 1170 return iso_connect_cis(sk); 1171 + default: 1172 + release_sock(sk); 1173 + break; 1175 1174 } 1176 1175 } 1177 1176
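Two recurring shapes in the iso.c hunk: iso_conn_add() heals conn->hcon when an old iso_conn object is reused for a new hcon, and iso_sock_sendmsg() now samples the MTU under lock_sock() instead of chasing conn->hcon->hdev with no lock held. The latter is the usual copy-under-lock-then-sleep pattern; a small sketch where current_mtu() and do_build() are hypothetical stand-ins for the pi chain and bt_skb_sendmsg():

        static int send_with_snapshot(struct sock *sk, struct msghdr *msg,
                                      size_t len)
        {
                size_t mtu;

                lock_sock(sk);
                if (sk->sk_state != BT_CONNECTED) {
                        release_sock(sk);
                        return -ENOTCONN;
                }
                mtu = current_mtu(sk);          /* hypothetical accessor */
                release_sock(sk);

                /* may sleep; must not hold the sock lock */
                return do_build(sk, msg, len, mtu);     /* hypothetical */
        }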
+12 -16
net/bluetooth/mgmt.c
··· 1297 1297 /* Needed for AUTO_OFF case where might not "really" 1298 1298 * have been powered off. 1299 1299 */ 1300 - list_del_init(&p->action); 1300 + hci_pend_le_list_del_init(p); 1301 1301 1302 1302 switch (p->auto_connect) { 1303 1303 case HCI_AUTO_CONN_DIRECT: 1304 1304 case HCI_AUTO_CONN_ALWAYS: 1305 - list_add(&p->action, &hdev->pend_le_conns); 1305 + hci_pend_le_list_add(p, &hdev->pend_le_conns); 1306 1306 break; 1307 1307 case HCI_AUTO_CONN_REPORT: 1308 - list_add(&p->action, &hdev->pend_le_reports); 1308 + hci_pend_le_list_add(p, &hdev->pend_le_reports); 1309 1309 break; 1310 1310 default: 1311 1311 break; ··· 5169 5169 goto unlock; 5170 5170 } 5171 5171 5172 - params->flags = current_flags; 5172 + WRITE_ONCE(params->flags, current_flags); 5173 5173 status = MGMT_STATUS_SUCCESS; 5174 5174 5175 5175 /* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY ··· 7285 7285 7286 7286 bt_dev_dbg(hdev, "err %d", err); 7287 7287 7288 - memcpy(&rp.addr, &cp->addr.bdaddr, sizeof(rp.addr)); 7288 + memcpy(&rp.addr, &cp->addr, sizeof(rp.addr)); 7289 7289 7290 7290 status = mgmt_status(err); 7291 7291 if (status == MGMT_STATUS_SUCCESS) { ··· 7580 7580 if (params->auto_connect == auto_connect) 7581 7581 return 0; 7582 7582 7583 - list_del_init(&params->action); 7583 + hci_pend_le_list_del_init(params); 7584 7584 7585 7585 switch (auto_connect) { 7586 7586 case HCI_AUTO_CONN_DISABLED: ··· 7589 7589 * connect to device, keep connecting. 7590 7590 */ 7591 7591 if (params->explicit_connect) 7592 - list_add(&params->action, &hdev->pend_le_conns); 7592 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 7593 7593 break; 7594 7594 case HCI_AUTO_CONN_REPORT: 7595 7595 if (params->explicit_connect) 7596 - list_add(&params->action, &hdev->pend_le_conns); 7596 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 7597 7597 else 7598 - list_add(&params->action, &hdev->pend_le_reports); 7598 + hci_pend_le_list_add(params, &hdev->pend_le_reports); 7599 7599 break; 7600 7600 case HCI_AUTO_CONN_DIRECT: 7601 7601 case HCI_AUTO_CONN_ALWAYS: 7602 7602 if (!is_connected(hdev, addr, addr_type)) 7603 - list_add(&params->action, &hdev->pend_le_conns); 7603 + hci_pend_le_list_add(params, &hdev->pend_le_conns); 7604 7604 break; 7605 7605 } 7606 7606 ··· 7823 7823 goto unlock; 7824 7824 } 7825 7825 7826 - list_del(&params->action); 7827 - list_del(&params->list); 7828 - kfree(params); 7826 + hci_conn_params_free(params); 7829 7827 7830 7828 device_removed(sk, hdev, &cp->addr.bdaddr, cp->addr.type); 7831 7829 } else { ··· 7854 7856 p->auto_connect = HCI_AUTO_CONN_EXPLICIT; 7855 7857 continue; 7856 7858 } 7857 - list_del(&p->action); 7858 - list_del(&p->list); 7859 - kfree(p); 7859 + hci_conn_params_free(p); 7860 7860 } 7861 7861 7862 7862 bt_dev_dbg(hdev, "All LE connection parameters were removed");
+12 -11
net/bluetooth/sco.c
··· 126 126 struct hci_dev *hdev = hcon->hdev; 127 127 struct sco_conn *conn = hcon->sco_data; 128 128 129 - if (conn) 129 + if (conn) { 130 + if (!conn->hcon) 131 + conn->hcon = hcon; 130 132 return conn; 133 + } 131 134 132 135 conn = kzalloc(sizeof(struct sco_conn), GFP_KERNEL); 133 136 if (!conn) ··· 271 268 goto unlock; 272 269 } 273 270 274 - hci_dev_unlock(hdev); 275 - hci_dev_put(hdev); 276 - 277 271 conn = sco_conn_add(hcon); 278 272 if (!conn) { 279 273 hci_conn_drop(hcon); 280 - return -ENOMEM; 274 + err = -ENOMEM; 275 + goto unlock; 281 276 } 282 277 283 - err = sco_chan_add(conn, sk, NULL); 284 - if (err) 285 - return err; 286 - 287 278 lock_sock(sk); 279 + 280 + err = sco_chan_add(conn, sk, NULL); 281 + if (err) { 282 + release_sock(sk); 283 + goto unlock; 284 + } 288 285 289 286 /* Update source addr of the socket */ 290 287 bacpy(&sco_pi(sk)->src, &hcon->src); ··· 298 295 } 299 296 300 297 release_sock(sk); 301 - 302 - return err; 303 298 304 299 unlock: 305 300 hci_dev_unlock(hdev);
+6 -6
net/can/bcm.c
··· 1526 1526 1527 1527 lock_sock(sk); 1528 1528 1529 + #if IS_ENABLED(CONFIG_PROC_FS) 1530 + /* remove procfs entry */ 1531 + if (net->can.bcmproc_dir && bo->bcm_proc_read) 1532 + remove_proc_entry(bo->procname, net->can.bcmproc_dir); 1533 + #endif /* CONFIG_PROC_FS */ 1534 + 1529 1535 list_for_each_entry_safe(op, next, &bo->tx_ops, list) 1530 1536 bcm_remove_op(op); 1531 1537 ··· 1566 1560 1567 1561 list_for_each_entry_safe(op, next, &bo->rx_ops, list) 1568 1562 bcm_remove_op(op); 1569 - 1570 - #if IS_ENABLED(CONFIG_PROC_FS) 1571 - /* remove procfs entry */ 1572 - if (net->can.bcmproc_dir && bo->bcm_proc_read) 1573 - remove_proc_entry(bo->procname, net->can.bcmproc_dir); 1574 - #endif /* CONFIG_PROC_FS */ 1575 1563 1576 1564 /* remove device reference */ 1577 1565 if (bo->bound) {
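The bcm.c change is pure teardown ordering: the /proc entry's show routine walks the op lists, so the entry has to be unpublished before those lists are freed. remove_proc_entry() does not return while readers are still inside the entry, which is what makes the move sufficient. The general rule, with an illustrative struct:

        /* Unpublish the interface first, then free what it reads. */
        static void subsys_release(struct subsys *s)    /* illustrative type */
        {
        #if IS_ENABLED(CONFIG_PROC_FS)
                if (s->proc_dir && s->proc_entry)
                        remove_proc_entry(s->procname, s->proc_dir);
        #endif
                /* no reader can be in the show routine past this point */
                free_op_lists(s);       /* illustrative teardown */
        }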
+24 -33
net/can/raw.c
··· 84 84 struct sock sk; 85 85 int bound; 86 86 int ifindex; 87 + struct net_device *dev; 87 88 struct list_head notifier; 88 89 int loopback; 89 90 int recv_own_msgs; ··· 278 277 if (!net_eq(dev_net(dev), sock_net(sk))) 279 278 return; 280 279 281 - if (ro->ifindex != dev->ifindex) 280 + if (ro->dev != dev) 282 281 return; 283 282 284 283 switch (msg) { ··· 293 292 294 293 ro->ifindex = 0; 295 294 ro->bound = 0; 295 + ro->dev = NULL; 296 296 ro->count = 0; 297 297 release_sock(sk); 298 298 ··· 339 337 340 338 ro->bound = 0; 341 339 ro->ifindex = 0; 340 + ro->dev = NULL; 342 341 343 342 /* set default filter to single entry dfilter */ 344 343 ro->dfilter.can_id = 0; ··· 388 385 389 386 lock_sock(sk); 390 387 388 + rtnl_lock(); 391 389 /* remove current filters & unregister */ 392 390 if (ro->bound) { 393 - if (ro->ifindex) { 394 - struct net_device *dev; 395 - 396 - dev = dev_get_by_index(sock_net(sk), ro->ifindex); 397 - if (dev) { 398 - raw_disable_allfilters(dev_net(dev), dev, sk); 399 - dev_put(dev); 400 - } 401 - } else { 391 + if (ro->dev) 392 + raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk); 393 + else 402 394 raw_disable_allfilters(sock_net(sk), NULL, sk); 403 - } 404 395 } 405 396 406 397 if (ro->count > 1) ··· 402 405 403 406 ro->ifindex = 0; 404 407 ro->bound = 0; 408 + ro->dev = NULL; 405 409 ro->count = 0; 406 410 free_percpu(ro->uniq); 411 + rtnl_unlock(); 407 412 408 413 sock_orphan(sk); 409 414 sock->sk = NULL; ··· 421 422 struct sockaddr_can *addr = (struct sockaddr_can *)uaddr; 422 423 struct sock *sk = sock->sk; 423 424 struct raw_sock *ro = raw_sk(sk); 425 + struct net_device *dev = NULL; 424 426 int ifindex; 425 427 int err = 0; 426 428 int notify_enetdown = 0; ··· 431 431 if (addr->can_family != AF_CAN) 432 432 return -EINVAL; 433 433 434 + rtnl_lock(); 434 435 lock_sock(sk); 435 436 436 437 if (ro->bound && addr->can_ifindex == ro->ifindex) 437 438 goto out; 438 439 439 440 if (addr->can_ifindex) { 440 - struct net_device *dev; 441 - 442 441 dev = dev_get_by_index(sock_net(sk), addr->can_ifindex); 443 442 if (!dev) { 444 443 err = -ENODEV; ··· 466 467 if (!err) { 467 468 if (ro->bound) { 468 469 /* unregister old filters */ 469 - if (ro->ifindex) { 470 - struct net_device *dev; 471 - 472 - dev = dev_get_by_index(sock_net(sk), 473 - ro->ifindex); 474 - if (dev) { 475 - raw_disable_allfilters(dev_net(dev), 476 - dev, sk); 477 - dev_put(dev); 478 - } 479 - } else { 470 + if (ro->dev) 471 + raw_disable_allfilters(dev_net(ro->dev), 472 + ro->dev, sk); 473 + else 480 474 raw_disable_allfilters(sock_net(sk), NULL, sk); 481 - } 482 475 } 483 476 ro->ifindex = ifindex; 484 477 ro->bound = 1; 478 + ro->dev = dev; 485 479 } 486 480 487 481 out: 488 482 release_sock(sk); 483 + rtnl_unlock(); 489 484 490 485 if (notify_enetdown) { 491 486 sk->sk_err = ENETDOWN; ··· 546 553 rtnl_lock(); 547 554 lock_sock(sk); 548 555 549 - if (ro->bound && ro->ifindex) { 550 - dev = dev_get_by_index(sock_net(sk), ro->ifindex); 551 - if (!dev) { 556 + dev = ro->dev; 557 + if (ro->bound && dev) { 558 + if (dev->reg_state != NETREG_REGISTERED) { 552 559 if (count > 1) 553 560 kfree(filter); 554 561 err = -ENODEV; ··· 589 596 ro->count = count; 590 597 591 598 out_fil: 592 - dev_put(dev); 593 599 release_sock(sk); 594 600 rtnl_unlock(); 595 601 ··· 606 614 rtnl_lock(); 607 615 lock_sock(sk); 608 616 609 - if (ro->bound && ro->ifindex) { 610 - dev = dev_get_by_index(sock_net(sk), ro->ifindex); 611 - if (!dev) { 617 + dev = ro->dev; 618 + if (ro->bound && dev) { 619 + if (dev->reg_state != 
NETREG_REGISTERED) { 612 620 err = -ENODEV; 613 621 goto out_err; 614 622 } ··· 632 640 ro->err_mask = err_mask; 633 641 634 642 out_err: 635 - dev_put(dev); 636 643 release_sock(sk); 637 644 rtnl_unlock(); 638 645
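The raw.c rework caches the bound struct net_device in the socket and takes rtnl_lock() in bind, release and the filter sockopts. The netdevice notifier already runs under RTNL, so the cached pointer cannot change underneath any of these paths, and unregistration is detected by looking at dev->reg_state instead of re-resolving the ifindex. The invariant in one sketch function, reusing the patch's field names:

        static int raw_update_filters(struct sock *sk, struct raw_sock *ro)
        {
                struct net_device *dev;
                int err = 0;

                rtnl_lock();
                lock_sock(sk);

                dev = ro->dev;          /* stable: all writers hold RTNL too */
                if (ro->bound && dev && dev->reg_state != NETREG_REGISTERED) {
                        err = -ENODEV;  /* device is on its way out */
                        goto out;
                }
                /* ... install or replace filters on dev ... */
        out:
                release_sock(sk);
                rtnl_unlock();
                return err;
        }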
+1 -1
net/ipv4/esp4.c
··· 1132 1132 err = crypto_aead_setkey(aead, key, keylen); 1133 1133 1134 1134 free_key: 1135 - kfree(key); 1135 + kfree_sensitive(key); 1136 1136 1137 1137 error: 1138 1138 return err;
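The esp4.c one-liner frees AEAD keying material with kfree_sensitive(), which is memzero_explicit() followed by kfree(), so the key bytes cannot linger in recycled heap memory. A simplified sketch of the pattern (the real code assembles an authenc parameter blob first):

        #include <crypto/aead.h>
        #include <linux/slab.h>

        static int set_aead_key(struct crypto_aead *aead, const u8 *src,
                                unsigned int keylen)
        {
                u8 *key = kmemdup(src, keylen, GFP_KERNEL);
                int err;

                if (!key)
                        return -ENOMEM;

                err = crypto_aead_setkey(aead, key, keylen);
                kfree_sensitive(key);   /* wipe before free: key material */
                return err;
        }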
+1 -1
net/ipv4/inet_connection_sock.c
··· 1019 1019 1020 1020 icsk = inet_csk(sk_listener); 1021 1021 net = sock_net(sk_listener); 1022 - max_syn_ack_retries = icsk->icsk_syn_retries ? : 1022 + max_syn_ack_retries = READ_ONCE(icsk->icsk_syn_retries) ? : 1023 1023 READ_ONCE(net->ipv4.sysctl_tcp_synack_retries); 1024 1024 /* Normally all the openreqs are young and become mature 1025 1025 * (i.e. converted to established socket) for first timeout.
+2 -15
net/ipv4/inet_hashtables.c
··· 650 650 spin_lock(lock); 651 651 if (osk) { 652 652 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash); 653 - ret = sk_hashed(osk); 654 - if (ret) { 655 - /* Before deleting the node, we insert a new one to make 656 - * sure that the look-up-sk process would not miss either 657 - * of them and that at least one node would exist in ehash 658 - * table all the time. Otherwise there's a tiny chance 659 - * that lookup process could find nothing in ehash table. 660 - */ 661 - __sk_nulls_add_node_tail_rcu(sk, list); 662 - sk_nulls_del_node_init_rcu(osk); 663 - } 664 - goto unlock; 665 - } 666 - if (found_dup_sk) { 653 + ret = sk_nulls_del_node_init_rcu(osk); 654 + } else if (found_dup_sk) { 667 655 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list); 668 656 if (*found_dup_sk) 669 657 ret = false; ··· 660 672 if (ret) 661 673 __sk_nulls_add_node_rcu(sk, list); 662 674 663 - unlock: 664 675 spin_unlock(lock); 665 676 666 677 return ret;
+4 -4
net/ipv4/inet_timewait_sock.c
··· 88 88 } 89 89 EXPORT_SYMBOL_GPL(inet_twsk_put); 90 90 91 - static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw, 92 - struct hlist_nulls_head *list) 91 + static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw, 92 + struct hlist_nulls_head *list) 93 93 { 94 - hlist_nulls_add_tail_rcu(&tw->tw_node, list); 94 + hlist_nulls_add_head_rcu(&tw->tw_node, list); 95 95 } 96 96 97 97 static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw, ··· 144 144 145 145 spin_lock(lock); 146 146 147 - inet_twsk_add_node_tail_rcu(tw, &ehead->chain); 147 + inet_twsk_add_node_rcu(tw, &ehead->chain); 148 148 149 149 /* Step 3: Remove SK from hash chain */ 150 150 if (__sk_nulls_del_node_init_rcu(sk))
+4 -2
net/ipv4/ip_gre.c
··· 548 548 goto err_free_skb; 549 549 550 550 if (skb->len > dev->mtu + dev->hard_header_len) { 551 - pskb_trim(skb, dev->mtu + dev->hard_header_len); 551 + if (pskb_trim(skb, dev->mtu + dev->hard_header_len)) 552 + goto err_free_skb; 552 553 truncate = true; 553 554 } 554 555 ··· 690 689 goto free_skb; 691 690 692 691 if (skb->len > dev->mtu + dev->hard_header_len) { 693 - pskb_trim(skb, dev->mtu + dev->hard_header_len); 692 + if (pskb_trim(skb, dev->mtu + dev->hard_header_len)) 693 + goto free_skb; 694 694 truncate = true; 695 695 } 696 696
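Both GRE hunks (this one and the ip6_gre.c change below) stop ignoring pskb_trim()'s return value: on a nonlinear skb the trim may have to reallocate and can fail with -ENOMEM, after which transmitting with truncate set would misdescribe the packet. The checked form:

        if (skb->len > dev->mtu + dev->hard_header_len) {
                if (pskb_trim(skb, dev->mtu + dev->hard_header_len))
                        goto err_free_skb;      /* drop instead of sending */
                truncate = true;
        }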
+30 -27
net/ipv4/tcp.c
··· 3291 3291 return -EINVAL; 3292 3292 3293 3293 lock_sock(sk); 3294 - inet_csk(sk)->icsk_syn_retries = val; 3294 + WRITE_ONCE(inet_csk(sk)->icsk_syn_retries, val); 3295 3295 release_sock(sk); 3296 3296 return 0; 3297 3297 } ··· 3300 3300 void tcp_sock_set_user_timeout(struct sock *sk, u32 val) 3301 3301 { 3302 3302 lock_sock(sk); 3303 - inet_csk(sk)->icsk_user_timeout = val; 3303 + WRITE_ONCE(inet_csk(sk)->icsk_user_timeout, val); 3304 3304 release_sock(sk); 3305 3305 } 3306 3306 EXPORT_SYMBOL(tcp_sock_set_user_timeout); ··· 3312 3312 if (val < 1 || val > MAX_TCP_KEEPIDLE) 3313 3313 return -EINVAL; 3314 3314 3315 - tp->keepalive_time = val * HZ; 3315 + /* Paired with WRITE_ONCE() in keepalive_time_when() */ 3316 + WRITE_ONCE(tp->keepalive_time, val * HZ); 3316 3317 if (sock_flag(sk, SOCK_KEEPOPEN) && 3317 3318 !((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) { 3318 3319 u32 elapsed = keepalive_time_elapsed(tp); ··· 3345 3344 return -EINVAL; 3346 3345 3347 3346 lock_sock(sk); 3348 - tcp_sk(sk)->keepalive_intvl = val * HZ; 3347 + WRITE_ONCE(tcp_sk(sk)->keepalive_intvl, val * HZ); 3349 3348 release_sock(sk); 3350 3349 return 0; 3351 3350 } ··· 3357 3356 return -EINVAL; 3358 3357 3359 3358 lock_sock(sk); 3360 - tcp_sk(sk)->keepalive_probes = val; 3359 + /* Paired with READ_ONCE() in keepalive_probes() */ 3360 + WRITE_ONCE(tcp_sk(sk)->keepalive_probes, val); 3361 3361 release_sock(sk); 3362 3362 return 0; 3363 3363 } ··· 3560 3558 if (val < 1 || val > MAX_TCP_KEEPINTVL) 3561 3559 err = -EINVAL; 3562 3560 else 3563 - tp->keepalive_intvl = val * HZ; 3561 + WRITE_ONCE(tp->keepalive_intvl, val * HZ); 3564 3562 break; 3565 3563 case TCP_KEEPCNT: 3566 3564 if (val < 1 || val > MAX_TCP_KEEPCNT) 3567 3565 err = -EINVAL; 3568 3566 else 3569 - tp->keepalive_probes = val; 3567 + WRITE_ONCE(tp->keepalive_probes, val); 3570 3568 break; 3571 3569 case TCP_SYNCNT: 3572 3570 if (val < 1 || val > MAX_TCP_SYNCNT) 3573 3571 err = -EINVAL; 3574 3572 else 3575 - icsk->icsk_syn_retries = val; 3573 + WRITE_ONCE(icsk->icsk_syn_retries, val); 3576 3574 break; 3577 3575 3578 3576 case TCP_SAVE_SYN: ··· 3585 3583 3586 3584 case TCP_LINGER2: 3587 3585 if (val < 0) 3588 - tp->linger2 = -1; 3586 + WRITE_ONCE(tp->linger2, -1); 3589 3587 else if (val > TCP_FIN_TIMEOUT_MAX / HZ) 3590 - tp->linger2 = TCP_FIN_TIMEOUT_MAX; 3588 + WRITE_ONCE(tp->linger2, TCP_FIN_TIMEOUT_MAX); 3591 3589 else 3592 - tp->linger2 = val * HZ; 3590 + WRITE_ONCE(tp->linger2, val * HZ); 3593 3591 break; 3594 3592 3595 3593 case TCP_DEFER_ACCEPT: 3596 3594 /* Translate value in seconds to number of retransmits */ 3597 - icsk->icsk_accept_queue.rskq_defer_accept = 3598 - secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ, 3599 - TCP_RTO_MAX / HZ); 3595 + WRITE_ONCE(icsk->icsk_accept_queue.rskq_defer_accept, 3596 + secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ, 3597 + TCP_RTO_MAX / HZ)); 3600 3598 break; 3601 3599 3602 3600 case TCP_WINDOW_CLAMP: ··· 3620 3618 if (val < 0) 3621 3619 err = -EINVAL; 3622 3620 else 3623 - icsk->icsk_user_timeout = val; 3621 + WRITE_ONCE(icsk->icsk_user_timeout, val); 3624 3622 break; 3625 3623 3626 3624 case TCP_FASTOPEN: ··· 3658 3656 if (!tp->repair) 3659 3657 err = -EPERM; 3660 3658 else 3661 - tp->tsoffset = val - tcp_time_stamp_raw(); 3659 + WRITE_ONCE(tp->tsoffset, val - tcp_time_stamp_raw()); 3662 3660 break; 3663 3661 case TCP_REPAIR_WINDOW: 3664 3662 err = tcp_repair_set_window(tp, optval, optlen); 3665 3663 break; 3666 3664 case TCP_NOTSENT_LOWAT: 3667 - tp->notsent_lowat = val; 3665 + WRITE_ONCE(tp->notsent_lowat, val); 
3668 3666 sk->sk_write_space(sk); 3669 3667 break; 3670 3668 case TCP_INQ: ··· 3676 3674 case TCP_TX_DELAY: 3677 3675 if (val) 3678 3676 tcp_enable_tx_delay(); 3679 - tp->tcp_tx_delay = val; 3677 + WRITE_ONCE(tp->tcp_tx_delay, val); 3680 3678 break; 3681 3679 default: 3682 3680 err = -ENOPROTOOPT; ··· 3993 3991 val = keepalive_probes(tp); 3994 3992 break; 3995 3993 case TCP_SYNCNT: 3996 - val = icsk->icsk_syn_retries ? : 3994 + val = READ_ONCE(icsk->icsk_syn_retries) ? : 3997 3995 READ_ONCE(net->ipv4.sysctl_tcp_syn_retries); 3998 3996 break; 3999 3997 case TCP_LINGER2: 4000 - val = tp->linger2; 3998 + val = READ_ONCE(tp->linger2); 4001 3999 if (val >= 0) 4002 4000 val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ; 4003 4001 break; 4004 4002 case TCP_DEFER_ACCEPT: 4005 - val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept, 4006 - TCP_TIMEOUT_INIT / HZ, TCP_RTO_MAX / HZ); 4003 + val = READ_ONCE(icsk->icsk_accept_queue.rskq_defer_accept); 4004 + val = retrans_to_secs(val, TCP_TIMEOUT_INIT / HZ, 4005 + TCP_RTO_MAX / HZ); 4007 4006 break; 4008 4007 case TCP_WINDOW_CLAMP: 4009 4008 val = tp->window_clamp; ··· 4141 4138 break; 4142 4139 4143 4140 case TCP_USER_TIMEOUT: 4144 - val = icsk->icsk_user_timeout; 4141 + val = READ_ONCE(icsk->icsk_user_timeout); 4145 4142 break; 4146 4143 4147 4144 case TCP_FASTOPEN: 4148 - val = icsk->icsk_accept_queue.fastopenq.max_qlen; 4145 + val = READ_ONCE(icsk->icsk_accept_queue.fastopenq.max_qlen); 4149 4146 break; 4150 4147 4151 4148 case TCP_FASTOPEN_CONNECT: ··· 4157 4154 break; 4158 4155 4159 4156 case TCP_TX_DELAY: 4160 - val = tp->tcp_tx_delay; 4157 + val = READ_ONCE(tp->tcp_tx_delay); 4161 4158 break; 4162 4159 4163 4160 case TCP_TIMESTAMP: 4164 - val = tcp_time_stamp_raw() + tp->tsoffset; 4161 + val = tcp_time_stamp_raw() + READ_ONCE(tp->tsoffset); 4165 4162 break; 4166 4163 case TCP_NOTSENT_LOWAT: 4167 - val = tp->notsent_lowat; 4164 + val = READ_ONCE(tp->notsent_lowat); 4168 4165 break; 4169 4166 case TCP_INQ: 4170 4167 val = tp->recvmsg_inq;
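The big tcp.c hunk is one idea applied many times: fields written by setsockopt() under the socket lock are also read with no lock (getsockopt() from another thread, timers, request-socket handling), so both sides gain WRITE_ONCE()/READ_ONCE() to make those races well-defined and keep KCSAN quiet. The discipline in miniature, with an illustrative struct standing in for tcp_sock:

        struct opts {
                int linger2;
        };

        static void set_linger2(struct sock *sk, struct opts *o, int secs)
        {
                lock_sock(sk);
                WRITE_ONCE(o->linger2, secs * HZ);      /* paired with READ_ONCE() */
                release_sock(sk);
        }

        static int get_linger2(const struct opts *o)
        {
                return READ_ONCE(o->linger2);           /* lockless reader */
        }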
+4 -2
net/ipv4/tcp_fastopen.c
··· 296 296 static bool tcp_fastopen_queue_check(struct sock *sk) 297 297 { 298 298 struct fastopen_queue *fastopenq; 299 + int max_qlen; 299 300 300 301 /* Make sure the listener has enabled fastopen, and we don't 301 302 * exceed the max # of pending TFO requests allowed before trying ··· 309 308 * temporarily vs a server not supporting Fast Open at all. 310 309 */ 311 310 fastopenq = &inet_csk(sk)->icsk_accept_queue.fastopenq; 312 - if (fastopenq->max_qlen == 0) 311 + max_qlen = READ_ONCE(fastopenq->max_qlen); 312 + if (max_qlen == 0) 313 313 return false; 314 314 315 - if (fastopenq->qlen >= fastopenq->max_qlen) { 315 + if (fastopenq->qlen >= max_qlen) { 316 316 struct request_sock *req1; 317 317 spin_lock(&fastopenq->lock); 318 318 req1 = fastopenq->rskq_rst_head;
+6 -4
net/ipv4/tcp_ipv4.c
··· 307 307 inet->inet_daddr, 308 308 inet->inet_sport, 309 309 usin->sin_port)); 310 - tp->tsoffset = secure_tcp_ts_off(net, inet->inet_saddr, 311 - inet->inet_daddr); 310 + WRITE_ONCE(tp->tsoffset, 311 + secure_tcp_ts_off(net, inet->inet_saddr, 312 + inet->inet_daddr)); 312 313 } 313 314 314 315 inet->inet_id = get_random_u16(); ··· 989 988 tcp_rsk(req)->rcv_nxt, 990 989 req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale, 991 990 tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, 992 - req->ts_recent, 991 + READ_ONCE(req->ts_recent), 993 992 0, 994 993 tcp_md5_do_lookup(sk, l3index, addr, AF_INET), 995 994 inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0, 996 - ip_hdr(skb)->tos, tcp_rsk(req)->txhash); 995 + ip_hdr(skb)->tos, 996 + READ_ONCE(tcp_rsk(req)->txhash)); 997 997 } 998 998 999 999 /*
+7 -4
net/ipv4/tcp_minisocks.c
··· 528 528 newicsk->icsk_ack.lrcvtime = tcp_jiffies32; 529 529 530 530 newtp->lsndtime = tcp_jiffies32; 531 - newsk->sk_txhash = treq->txhash; 531 + newsk->sk_txhash = READ_ONCE(treq->txhash); 532 532 newtp->total_retrans = req->num_retrans; 533 533 534 534 tcp_init_xmit_timers(newsk); ··· 555 555 newtp->max_window = newtp->snd_wnd; 556 556 557 557 if (newtp->rx_opt.tstamp_ok) { 558 - newtp->rx_opt.ts_recent = req->ts_recent; 558 + newtp->rx_opt.ts_recent = READ_ONCE(req->ts_recent); 559 559 newtp->rx_opt.ts_recent_stamp = ktime_get_seconds(); 560 560 newtp->tcp_header_len = sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED; 561 561 } else { ··· 619 619 tcp_parse_options(sock_net(sk), skb, &tmp_opt, 0, NULL); 620 620 621 621 if (tmp_opt.saw_tstamp) { 622 - tmp_opt.ts_recent = req->ts_recent; 622 + tmp_opt.ts_recent = READ_ONCE(req->ts_recent); 623 623 if (tmp_opt.rcv_tsecr) 624 624 tmp_opt.rcv_tsecr -= tcp_rsk(req)->ts_off; 625 625 /* We do not store true stamp, but it is not required, ··· 758 758 759 759 /* In sequence, PAWS is OK. */ 760 760 761 + /* TODO: We probably should defer ts_recent change once 762 + * we take ownership of @req. 763 + */ 761 764 if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt)) 762 - req->ts_recent = tmp_opt.rcv_tsval; 765 + WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval); 763 766 764 767 if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) { 765 768 /* Truncate SYN, it is out of window starting
+3 -3
net/ipv4/tcp_output.c
··· 878 878 if (likely(ireq->tstamp_ok)) { 879 879 opts->options |= OPTION_TS; 880 880 opts->tsval = tcp_skb_timestamp(skb) + tcp_rsk(req)->ts_off; 881 - opts->tsecr = req->ts_recent; 881 + opts->tsecr = READ_ONCE(req->ts_recent); 882 882 remaining -= TCPOLEN_TSTAMP_ALIGNED; 883 883 } 884 884 if (likely(ireq->sack_ok)) { ··· 3660 3660 rcu_read_lock(); 3661 3661 md5 = tcp_rsk(req)->af_specific->req_md5_lookup(sk, req_to_sk(req)); 3662 3662 #endif 3663 - skb_set_hash(skb, tcp_rsk(req)->txhash, PKT_HASH_TYPE_L4); 3663 + skb_set_hash(skb, READ_ONCE(tcp_rsk(req)->txhash), PKT_HASH_TYPE_L4); 3664 3664 /* bpf program will be interested in the tcp_flags */ 3665 3665 TCP_SKB_CB(skb)->tcp_flags = TCPHDR_SYN | TCPHDR_ACK; 3666 3666 tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts, md5, ··· 4210 4210 4211 4211 /* Paired with WRITE_ONCE() in sock_setsockopt() */ 4212 4212 if (READ_ONCE(sk->sk_txrehash) == SOCK_TXREHASH_ENABLED) 4213 - tcp_rsk(req)->txhash = net_tx_rndhash(); 4213 + WRITE_ONCE(tcp_rsk(req)->txhash, net_tx_rndhash()); 4214 4214 res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL, 4215 4215 NULL); 4216 4216 if (!res) {
+11 -5
net/ipv4/udp_offload.c
··· 274 274 __sum16 check; 275 275 __be16 newlen; 276 276 277 - if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) 278 - return __udp_gso_segment_list(gso_skb, features, is_ipv6); 279 - 280 277 mss = skb_shinfo(gso_skb)->gso_size; 281 278 if (gso_skb->len <= sizeof(*uh) + mss) 282 279 return ERR_PTR(-EINVAL); 280 + 281 + if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) { 282 + /* Packet is from an untrusted source, reset gso_segs. */ 283 + skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh), 284 + mss); 285 + return NULL; 286 + } 287 + 288 + if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) 289 + return __udp_gso_segment_list(gso_skb, features, is_ipv6); 283 290 284 291 skb_pull(gso_skb, sizeof(*uh)); 285 292 ··· 395 388 if (!pskb_may_pull(skb, sizeof(struct udphdr))) 396 389 goto out; 397 390 398 - if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && 399 - !skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) 391 + if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) 400 392 return __udp_gso_segment(skb, features, false); 401 393 402 394 mss = skb_shinfo(skb)->gso_size;
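The udp_offload.c reorder sends every SKB_GSO_UDP_L4 packet into __udp_gso_segment(), where a packet the device can transmit as-is (skb_gso_ok() with NETIF_F_GSO_ROBUST) only needs its gso_segs recomputed before returning NULL. The count is a ceiling division over the payload; a worked example:

        /* 3008 payload bytes behind the 8-byte UDP header at mss 1000:
         * three full segments plus an 8-byte tail, so four segments.
         */
        unsigned int len = 8 + 3008;    /* sizeof(struct udphdr) + payload */
        unsigned int mss = 1000;
        unsigned int segs = DIV_ROUND_UP(len - 8, mss); /* = 4 */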
+2 -1
net/ipv6/ip6_gre.c
··· 955 955 goto tx_err; 956 956 957 957 if (skb->len > dev->mtu + dev->hard_header_len) { 958 - pskb_trim(skb, dev->mtu + dev->hard_header_len); 958 + if (pskb_trim(skb, dev->mtu + dev->hard_header_len)) 959 + goto tx_err; 959 960 truncate = true; 960 961 } 961 962
+2 -2
net/ipv6/tcp_ipv6.c
··· 1126 1126 tcp_rsk(req)->rcv_nxt, 1127 1127 req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale, 1128 1128 tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, 1129 - req->ts_recent, sk->sk_bound_dev_if, 1129 + READ_ONCE(req->ts_recent), sk->sk_bound_dev_if, 1130 1130 tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr, l3index), 1131 1131 ipv6_get_dsfield(ipv6_hdr(skb)), 0, sk->sk_priority, 1132 - tcp_rsk(req)->txhash); 1132 + READ_ONCE(tcp_rsk(req)->txhash)); 1133 1133 } 1134 1134 1135 1135
+1 -2
net/ipv6/udp_offload.c
··· 43 43 if (!pskb_may_pull(skb, sizeof(struct udphdr))) 44 44 goto out; 45 45 46 - if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && 47 - !skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) 46 + if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) 48 47 return __udp_gso_segment(skb, features, true); 49 48 50 49 mss = skb_shinfo(skb)->gso_size;
+1 -1
net/llc/af_llc.c
··· 402 402 memcpy(laddr.mac, addr->sllc_mac, IFHWADDRLEN); 403 403 laddr.lsap = addr->sllc_sap; 404 404 rc = -EADDRINUSE; /* mac + sap clash. */ 405 - ask = llc_lookup_established(sap, &daddr, &laddr); 405 + ask = llc_lookup_established(sap, &daddr, &laddr, &init_net); 406 406 if (ask) { 407 407 sock_put(ask); 408 408 goto out_put;
+30 -19
net/llc/llc_conn.c
··· 453 453 static inline bool llc_estab_match(const struct llc_sap *sap, 454 454 const struct llc_addr *daddr, 455 455 const struct llc_addr *laddr, 456 - const struct sock *sk) 456 + const struct sock *sk, 457 + const struct net *net) 457 458 { 458 459 struct llc_sock *llc = llc_sk(sk); 459 460 460 - return llc->laddr.lsap == laddr->lsap && 461 + return net_eq(sock_net(sk), net) && 462 + llc->laddr.lsap == laddr->lsap && 461 463 llc->daddr.lsap == daddr->lsap && 462 464 ether_addr_equal(llc->laddr.mac, laddr->mac) && 463 465 ether_addr_equal(llc->daddr.mac, daddr->mac); ··· 470 468 * @sap: SAP 471 469 * @daddr: address of remote LLC (MAC + SAP) 472 470 * @laddr: address of local LLC (MAC + SAP) 471 + * @net: netns to look up a socket in 473 472 * 474 473 * Search connection list of the SAP and finds connection using the remote 475 474 * mac, remote sap, local mac, and local sap. Returns pointer for ··· 479 476 */ 480 477 static struct sock *__llc_lookup_established(struct llc_sap *sap, 481 478 struct llc_addr *daddr, 482 - struct llc_addr *laddr) 479 + struct llc_addr *laddr, 480 + const struct net *net) 483 481 { 484 482 struct sock *rc; 485 483 struct hlist_nulls_node *node; ··· 490 486 rcu_read_lock(); 491 487 again: 492 488 sk_nulls_for_each_rcu(rc, node, laddr_hb) { 493 - if (llc_estab_match(sap, daddr, laddr, rc)) { 489 + if (llc_estab_match(sap, daddr, laddr, rc, net)) { 494 490 /* Extra checks required by SLAB_TYPESAFE_BY_RCU */ 495 491 if (unlikely(!refcount_inc_not_zero(&rc->sk_refcnt))) 496 492 goto again; 497 493 if (unlikely(llc_sk(rc)->sap != sap || 498 - !llc_estab_match(sap, daddr, laddr, rc))) { 494 + !llc_estab_match(sap, daddr, laddr, rc, net))) { 499 495 sock_put(rc); 500 496 continue; 501 497 } ··· 517 513 518 514 struct sock *llc_lookup_established(struct llc_sap *sap, 519 515 struct llc_addr *daddr, 520 - struct llc_addr *laddr) 516 + struct llc_addr *laddr, 517 + const struct net *net) 521 518 { 522 519 struct sock *sk; 523 520 524 521 local_bh_disable(); 525 - sk = __llc_lookup_established(sap, daddr, laddr); 522 + sk = __llc_lookup_established(sap, daddr, laddr, net); 526 523 local_bh_enable(); 527 524 return sk; 528 525 } 529 526 530 527 static inline bool llc_listener_match(const struct llc_sap *sap, 531 528 const struct llc_addr *laddr, 532 - const struct sock *sk) 529 + const struct sock *sk, 530 + const struct net *net) 533 531 { 534 532 struct llc_sock *llc = llc_sk(sk); 535 533 536 - return sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN && 534 + return net_eq(sock_net(sk), net) && 535 + sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN && 537 536 llc->laddr.lsap == laddr->lsap && 538 537 ether_addr_equal(llc->laddr.mac, laddr->mac); 539 538 } 540 539 541 540 static struct sock *__llc_lookup_listener(struct llc_sap *sap, 542 - struct llc_addr *laddr) 541 + struct llc_addr *laddr, 542 + const struct net *net) 543 543 { 544 544 struct sock *rc; 545 545 struct hlist_nulls_node *node; ··· 553 545 rcu_read_lock(); 554 546 again: 555 547 sk_nulls_for_each_rcu(rc, node, laddr_hb) { 556 - if (llc_listener_match(sap, laddr, rc)) { 548 + if (llc_listener_match(sap, laddr, rc, net)) { 557 549 /* Extra checks required by SLAB_TYPESAFE_BY_RCU */ 558 550 if (unlikely(!refcount_inc_not_zero(&rc->sk_refcnt))) 559 551 goto again; 560 552 if (unlikely(llc_sk(rc)->sap != sap || 561 - !llc_listener_match(sap, laddr, rc))) { 553 + !llc_listener_match(sap, laddr, rc, net))) { 562 554 sock_put(rc); 563 555 continue; 564 556 } ··· 582 574 * llc_lookup_listener - 
Finds listener for local MAC + SAP 583 575 * @sap: SAP 584 576 * @laddr: address of local LLC (MAC + SAP) 577 + * @net: netns to look up a socket in 585 578 * 586 579 * Search connection list of the SAP and finds connection listening on 587 580 * local mac, and local sap. Returns pointer for parent socket found, ··· 590 581 * Caller has to make sure local_bh is disabled. 591 582 */ 592 583 static struct sock *llc_lookup_listener(struct llc_sap *sap, 593 - struct llc_addr *laddr) 584 + struct llc_addr *laddr, 585 + const struct net *net) 594 586 { 587 + struct sock *rc = __llc_lookup_listener(sap, laddr, net); 595 588 static struct llc_addr null_addr; 596 - struct sock *rc = __llc_lookup_listener(sap, laddr); 597 589 598 590 if (!rc) 599 - rc = __llc_lookup_listener(sap, &null_addr); 591 + rc = __llc_lookup_listener(sap, &null_addr, net); 600 592 601 593 return rc; 602 594 } 603 595 604 596 static struct sock *__llc_lookup(struct llc_sap *sap, 605 597 struct llc_addr *daddr, 606 - struct llc_addr *laddr) 598 + struct llc_addr *laddr, 599 + const struct net *net) 607 600 { 608 - struct sock *sk = __llc_lookup_established(sap, daddr, laddr); 601 + struct sock *sk = __llc_lookup_established(sap, daddr, laddr, net); 609 602 610 - return sk ? : llc_lookup_listener(sap, laddr); 603 + return sk ? : llc_lookup_listener(sap, laddr, net); 611 604 } 612 605 613 606 /** ··· 787 776 llc_pdu_decode_da(skb, daddr.mac); 788 777 llc_pdu_decode_dsap(skb, &daddr.lsap); 789 778 790 - sk = __llc_lookup(sap, &saddr, &daddr); 779 + sk = __llc_lookup(sap, &saddr, &daddr, dev_net(skb->dev)); 791 780 if (!sk) 792 781 goto drop; 793 782
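All of the llc changes are one fix: socket lookups gain a netns argument and every match helper now leads with net_eq(sock_net(sk), net), which is what allows llc_input.c below to drop its blanket init_net restriction. The lookup loops also show the SLAB_TYPESAFE_BY_RCU idiom worth spelling out; condensed from the hunk:

        /* With SLAB_TYPESAFE_BY_RCU an object can be freed and reused while
         * we inspect it: match, pin a reference, then match again.
         */
        sk_nulls_for_each_rcu(rc, node, laddr_hb) {
                if (llc_estab_match(sap, daddr, laddr, rc, net)) {
                        if (unlikely(!refcount_inc_not_zero(&rc->sk_refcnt)))
                                goto again;     /* dying socket: restart walk */
                        if (unlikely(llc_sk(rc)->sap != sap ||
                                     !llc_estab_match(sap, daddr, laddr, rc, net))) {
                                sock_put(rc);   /* slab slot was reused: skip */
                                continue;
                        }
                        break;                  /* rc is pinned and verified */
                }
        }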
+1 -1
net/llc/llc_if.c
··· 92 92 daddr.lsap = dsap; 93 93 memcpy(daddr.mac, dmac, sizeof(daddr.mac)); 94 94 memcpy(laddr.mac, lmac, sizeof(laddr.mac)); 95 - existing = llc_lookup_established(llc->sap, &daddr, &laddr); 95 + existing = llc_lookup_established(llc->sap, &daddr, &laddr, sock_net(sk)); 96 96 if (existing) { 97 97 if (existing->sk_state == TCP_ESTABLISHED) { 98 98 sk = existing;
-3
net/llc/llc_input.c
··· 163 163 void (*sta_handler)(struct sk_buff *skb); 164 164 void (*sap_handler)(struct llc_sap *sap, struct sk_buff *skb); 165 165 166 - if (!net_eq(dev_net(dev), &init_net)) 167 - goto drop; 168 - 169 166 /* 170 167 * When the interface is in promisc. mode, drop all the crap that it 171 168 * receives, do not try to analyse it.
+11 -7
net/llc/llc_sap.c
··· 294 294 295 295 static inline bool llc_dgram_match(const struct llc_sap *sap, 296 296 const struct llc_addr *laddr, 297 - const struct sock *sk) 297 + const struct sock *sk, 298 + const struct net *net) 298 299 { 299 300 struct llc_sock *llc = llc_sk(sk); 300 301 301 302 return sk->sk_type == SOCK_DGRAM && 302 - llc->laddr.lsap == laddr->lsap && 303 - ether_addr_equal(llc->laddr.mac, laddr->mac); 303 + net_eq(sock_net(sk), net) && 304 + llc->laddr.lsap == laddr->lsap && 305 + ether_addr_equal(llc->laddr.mac, laddr->mac); 304 306 } 305 307 306 308 /** 307 309 * llc_lookup_dgram - Finds dgram socket for the local sap/mac 308 310 * @sap: SAP 309 311 * @laddr: address of local LLC (MAC + SAP) 312 + * @net: netns to look up a socket in 310 313 * 311 314 * Search socket list of the SAP and finds connection using the local 312 315 * mac, and local sap. Returns pointer for socket found, %NULL otherwise. 313 316 */ 314 317 static struct sock *llc_lookup_dgram(struct llc_sap *sap, 315 - const struct llc_addr *laddr) 318 + const struct llc_addr *laddr, 319 + const struct net *net) 316 320 { 317 321 struct sock *rc; 318 322 struct hlist_nulls_node *node; ··· 326 322 rcu_read_lock_bh(); 327 323 again: 328 324 sk_nulls_for_each_rcu(rc, node, laddr_hb) { 329 - if (llc_dgram_match(sap, laddr, rc)) { 325 + if (llc_dgram_match(sap, laddr, rc, net)) { 330 326 /* Extra checks required by SLAB_TYPESAFE_BY_RCU */ 331 327 if (unlikely(!refcount_inc_not_zero(&rc->sk_refcnt))) 332 328 goto again; 333 329 if (unlikely(llc_sk(rc)->sap != sap || 334 - !llc_dgram_match(sap, laddr, rc))) { 330 + !llc_dgram_match(sap, laddr, rc, net))) { 335 331 sock_put(rc); 336 332 continue; 337 333 } ··· 433 429 llc_sap_mcast(sap, &laddr, skb); 434 430 kfree_skb(skb); 435 431 } else { 436 - struct sock *sk = llc_lookup_dgram(sap, &laddr); 432 + struct sock *sk = llc_lookup_dgram(sap, &laddr, dev_net(skb->dev)); 437 433 if (sk) { 438 434 llc_sap_rcv(sap, skb, sk); 439 435 sock_put(sk);
+10 -2
net/netfilter/nf_tables_api.c
··· 3685 3685 if (err < 0) 3686 3686 return err; 3687 3687 } 3688 - 3689 - cond_resched(); 3690 3688 } 3691 3689 3692 3690 return 0; ··· 3708 3710 err = nft_chain_validate(&ctx, chain); 3709 3711 if (err < 0) 3710 3712 return err; 3713 + 3714 + cond_resched(); 3711 3715 } 3712 3716 3713 3717 return 0; ··· 4086 4086 } else { 4087 4087 list_for_each_entry(chain, &table->chains, list) { 4088 4088 if (!nft_is_active_next(net, chain)) 4089 + continue; 4090 + if (nft_chain_is_bound(chain)) 4089 4091 continue; 4090 4092 4091 4093 ctx.chain = chain; ··· 10519 10517 10520 10518 if (!tb[NFTA_VERDICT_CODE]) 10521 10519 return -EINVAL; 10520 + 10521 + /* zero padding hole for memcmp */ 10522 + memset(data, 0, sizeof(*data)); 10522 10523 data->verdict.code = ntohl(nla_get_be32(tb[NFTA_VERDICT_CODE])); 10523 10524 10524 10525 switch (data->verdict.code) { ··· 10804 10799 ctx.family = table->family; 10805 10800 ctx.table = table; 10806 10801 list_for_each_entry(chain, &table->chains, list) { 10802 + if (nft_chain_is_bound(chain)) 10803 + continue; 10804 + 10807 10805 ctx.chain = chain; 10808 10806 list_for_each_entry_safe(rule, nr, &chain->rules, list) { 10809 10807 list_del(&rule->list);
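Three separate fixes in the nf_tables hunk: cond_resched() moves out to the per-chain loop of table validation, bound chains are skipped during flush and teardown (their rules are released via the rule they are bound to), and nft_verdict_init() now zeroes the whole nft_data up front. The last one is subtle: verdicts are later compared with memcmp(), and member-by-member assignment leaves struct padding undefined. Illustrated on a toy struct:

        #include <string.h>

        struct verdict {
                char code;      /* padding bytes typically follow */
                int chain_id;
        };

        /* Only sound if every verdict was zeroed before its fields were
         * written; stale padding makes equal values compare unequal.
         */
        static int verdict_eq(const struct verdict *a, const struct verdict *b)
        {
                return memcmp(a, b, sizeof(*a)) == 0;
        }

        static void verdict_init(struct verdict *v, char code, int chain_id)
        {
                memset(v, 0, sizeof(*v));       /* zero the padding hole */
                v->code = code;
                v->chain_id = chain_id;
        }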
+5 -1
net/netfilter/nft_set_pipapo.c
··· 1929 1929 int i, start, rules_fx; 1930 1930 1931 1931 match_start = data; 1932 - match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data; 1932 + 1933 + if (nft_set_ext_exists(&e->ext, NFT_SET_EXT_KEY_END)) 1934 + match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data; 1935 + else 1936 + match_end = data; 1933 1937 1934 1938 start = first_rule; 1935 1939 rules_fx = rules_f0;
+47 -52
net/sched/cls_bpf.c
··· 406 406 return 0; 407 407 } 408 408 409 - static int cls_bpf_set_parms(struct net *net, struct tcf_proto *tp, 410 - struct cls_bpf_prog *prog, unsigned long base, 411 - struct nlattr **tb, struct nlattr *est, u32 flags, 412 - struct netlink_ext_ack *extack) 413 - { 414 - bool is_bpf, is_ebpf, have_exts = false; 415 - u32 gen_flags = 0; 416 - int ret; 417 - 418 - is_bpf = tb[TCA_BPF_OPS_LEN] && tb[TCA_BPF_OPS]; 419 - is_ebpf = tb[TCA_BPF_FD]; 420 - if ((!is_bpf && !is_ebpf) || (is_bpf && is_ebpf)) 421 - return -EINVAL; 422 - 423 - ret = tcf_exts_validate(net, tp, tb, est, &prog->exts, flags, 424 - extack); 425 - if (ret < 0) 426 - return ret; 427 - 428 - if (tb[TCA_BPF_FLAGS]) { 429 - u32 bpf_flags = nla_get_u32(tb[TCA_BPF_FLAGS]); 430 - 431 - if (bpf_flags & ~TCA_BPF_FLAG_ACT_DIRECT) 432 - return -EINVAL; 433 - 434 - have_exts = bpf_flags & TCA_BPF_FLAG_ACT_DIRECT; 435 - } 436 - if (tb[TCA_BPF_FLAGS_GEN]) { 437 - gen_flags = nla_get_u32(tb[TCA_BPF_FLAGS_GEN]); 438 - if (gen_flags & ~CLS_BPF_SUPPORTED_GEN_FLAGS || 439 - !tc_flags_valid(gen_flags)) 440 - return -EINVAL; 441 - } 442 - 443 - prog->exts_integrated = have_exts; 444 - prog->gen_flags = gen_flags; 445 - 446 - ret = is_bpf ? cls_bpf_prog_from_ops(tb, prog) : 447 - cls_bpf_prog_from_efd(tb, prog, gen_flags, tp); 448 - if (ret < 0) 449 - return ret; 450 - 451 - if (tb[TCA_BPF_CLASSID]) { 452 - prog->res.classid = nla_get_u32(tb[TCA_BPF_CLASSID]); 453 - tcf_bind_filter(tp, &prog->res, base); 454 - } 455 - 456 - return 0; 457 - } 458 - 459 409 static int cls_bpf_change(struct net *net, struct sk_buff *in_skb, 460 410 struct tcf_proto *tp, unsigned long base, 461 411 u32 handle, struct nlattr **tca, ··· 413 463 struct netlink_ext_ack *extack) 414 464 { 415 465 struct cls_bpf_head *head = rtnl_dereference(tp->root); 466 + bool is_bpf, is_ebpf, have_exts = false; 416 467 struct cls_bpf_prog *oldprog = *arg; 417 468 struct nlattr *tb[TCA_BPF_MAX + 1]; 469 + bool bound_to_filter = false; 418 470 struct cls_bpf_prog *prog; 471 + u32 gen_flags = 0; 419 472 int ret; 420 473 421 474 if (tca[TCA_OPTIONS] == NULL) ··· 457 504 goto errout; 458 505 prog->handle = handle; 459 506 460 - ret = cls_bpf_set_parms(net, tp, prog, base, tb, tca[TCA_RATE], flags, 461 - extack); 507 + is_bpf = tb[TCA_BPF_OPS_LEN] && tb[TCA_BPF_OPS]; 508 + is_ebpf = tb[TCA_BPF_FD]; 509 + if ((!is_bpf && !is_ebpf) || (is_bpf && is_ebpf)) { 510 + ret = -EINVAL; 511 + goto errout_idr; 512 + } 513 + 514 + ret = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &prog->exts, 515 + flags, extack); 462 516 if (ret < 0) 463 517 goto errout_idr; 518 + 519 + if (tb[TCA_BPF_FLAGS]) { 520 + u32 bpf_flags = nla_get_u32(tb[TCA_BPF_FLAGS]); 521 + 522 + if (bpf_flags & ~TCA_BPF_FLAG_ACT_DIRECT) { 523 + ret = -EINVAL; 524 + goto errout_idr; 525 + } 526 + 527 + have_exts = bpf_flags & TCA_BPF_FLAG_ACT_DIRECT; 528 + } 529 + if (tb[TCA_BPF_FLAGS_GEN]) { 530 + gen_flags = nla_get_u32(tb[TCA_BPF_FLAGS_GEN]); 531 + if (gen_flags & ~CLS_BPF_SUPPORTED_GEN_FLAGS || 532 + !tc_flags_valid(gen_flags)) { 533 + ret = -EINVAL; 534 + goto errout_idr; 535 + } 536 + } 537 + 538 + prog->exts_integrated = have_exts; 539 + prog->gen_flags = gen_flags; 540 + 541 + ret = is_bpf ? 
cls_bpf_prog_from_ops(tb, prog) : 542 + cls_bpf_prog_from_efd(tb, prog, gen_flags, tp); 543 + if (ret < 0) 544 + goto errout_idr; 545 + 546 + if (tb[TCA_BPF_CLASSID]) { 547 + prog->res.classid = nla_get_u32(tb[TCA_BPF_CLASSID]); 548 + tcf_bind_filter(tp, &prog->res, base); 549 + bound_to_filter = true; 550 + } 464 551 465 552 ret = cls_bpf_offload(tp, prog, oldprog, extack); 466 553 if (ret) ··· 523 530 return 0; 524 531 525 532 errout_parms: 533 + if (bound_to_filter) 534 + tcf_unbind_filter(tp, &prog->res); 526 535 cls_bpf_free_parms(prog); 527 536 errout_idr: 528 537 if (!oldprog)
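cls_bpf and the cls_flower, cls_matchall and cls_u32 hunks that follow all plug the same hole: after tcf_bind_filter() succeeded, a later failure left the class binding dangling. Inlining the old set_parms helpers lets a bound_to_filter flag drive a symmetric unwind. The skeleton, with the classifier-specific steps reduced to hypothetical helpers:

        static int change_sketch(struct tcf_proto *tp, struct cls_bpf_prog *prog,
                                 struct nlattr **tb, unsigned long base)
        {
                bool bound_to_filter = false;
                int err;

                err = validate_and_fill(prog, tb);      /* hypothetical */
                if (err < 0)
                        goto errout;

                if (tb[TCA_BPF_CLASSID]) {
                        prog->res.classid = nla_get_u32(tb[TCA_BPF_CLASSID]);
                        tcf_bind_filter(tp, &prog->res, base);
                        bound_to_filter = true;
                }

                err = offload_and_insert(tp, prog);     /* hypothetical */
                if (err)
                        goto errout_parms;
                return 0;

        errout_parms:
                if (bound_to_filter)
                        tcf_unbind_filter(tp, &prog->res);
        errout:
                return err;
        }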
+47 -52
net/sched/cls_flower.c
··· 2173 2173 return mask->meta.l2_miss; 2174 2174 } 2175 2175 2176 - static int fl_set_parms(struct net *net, struct tcf_proto *tp, 2177 - struct cls_fl_filter *f, struct fl_flow_mask *mask, 2178 - unsigned long base, struct nlattr **tb, 2179 - struct nlattr *est, 2180 - struct fl_flow_tmplt *tmplt, 2181 - u32 flags, u32 fl_flags, 2182 - struct netlink_ext_ack *extack) 2183 - { 2184 - int err; 2185 - 2186 - err = tcf_exts_validate_ex(net, tp, tb, est, &f->exts, flags, 2187 - fl_flags, extack); 2188 - if (err < 0) 2189 - return err; 2190 - 2191 - if (tb[TCA_FLOWER_CLASSID]) { 2192 - f->res.classid = nla_get_u32(tb[TCA_FLOWER_CLASSID]); 2193 - if (flags & TCA_ACT_FLAGS_NO_RTNL) 2194 - rtnl_lock(); 2195 - tcf_bind_filter(tp, &f->res, base); 2196 - if (flags & TCA_ACT_FLAGS_NO_RTNL) 2197 - rtnl_unlock(); 2198 - } 2199 - 2200 - err = fl_set_key(net, tb, &f->key, &mask->key, extack); 2201 - if (err) 2202 - return err; 2203 - 2204 - fl_mask_update_range(mask); 2205 - fl_set_masked_key(&f->mkey, &f->key, mask); 2206 - 2207 - if (!fl_mask_fits_tmplt(tmplt, mask)) { 2208 - NL_SET_ERR_MSG_MOD(extack, "Mask does not fit the template"); 2209 - return -EINVAL; 2210 - } 2211 - 2212 - /* Enable tc skb extension if filter matches on data extracted from 2213 - * this extension. 2214 - */ 2215 - if (fl_needs_tc_skb_ext(&mask->key)) { 2216 - f->needs_tc_skb_ext = 1; 2217 - tc_skb_ext_tc_enable(); 2218 - } 2219 - 2220 - return 0; 2221 - } 2222 - 2223 2176 static int fl_ht_insert_unique(struct cls_fl_filter *fnew, 2224 2177 struct cls_fl_filter *fold, 2225 2178 bool *in_ht) ··· 2204 2251 struct cls_fl_head *head = fl_head_dereference(tp); 2205 2252 bool rtnl_held = !(flags & TCA_ACT_FLAGS_NO_RTNL); 2206 2253 struct cls_fl_filter *fold = *arg; 2254 + bool bound_to_filter = false; 2207 2255 struct cls_fl_filter *fnew; 2208 2256 struct fl_flow_mask *mask; 2209 2257 struct nlattr **tb; ··· 2289 2335 if (err < 0) 2290 2336 goto errout_idr; 2291 2337 2292 - err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], 2293 - tp->chain->tmplt_priv, flags, fnew->flags, 2294 - extack); 2295 - if (err) 2338 + err = tcf_exts_validate_ex(net, tp, tb, tca[TCA_RATE], 2339 + &fnew->exts, flags, fnew->flags, 2340 + extack); 2341 + if (err < 0) 2296 2342 goto errout_idr; 2343 + 2344 + if (tb[TCA_FLOWER_CLASSID]) { 2345 + fnew->res.classid = nla_get_u32(tb[TCA_FLOWER_CLASSID]); 2346 + if (flags & TCA_ACT_FLAGS_NO_RTNL) 2347 + rtnl_lock(); 2348 + tcf_bind_filter(tp, &fnew->res, base); 2349 + if (flags & TCA_ACT_FLAGS_NO_RTNL) 2350 + rtnl_unlock(); 2351 + bound_to_filter = true; 2352 + } 2353 + 2354 + err = fl_set_key(net, tb, &fnew->key, &mask->key, extack); 2355 + if (err) 2356 + goto unbind_filter; 2357 + 2358 + fl_mask_update_range(mask); 2359 + fl_set_masked_key(&fnew->mkey, &fnew->key, mask); 2360 + 2361 + if (!fl_mask_fits_tmplt(tp->chain->tmplt_priv, mask)) { 2362 + NL_SET_ERR_MSG_MOD(extack, "Mask does not fit the template"); 2363 + err = -EINVAL; 2364 + goto unbind_filter; 2365 + } 2366 + 2367 + /* Enable tc skb extension if filter matches on data extracted from 2368 + * this extension. 
2369 + */ 2370 + if (fl_needs_tc_skb_ext(&mask->key)) { 2371 + fnew->needs_tc_skb_ext = 1; 2372 + tc_skb_ext_tc_enable(); 2373 + } 2297 2374 2298 2375 err = fl_check_assign_mask(head, fnew, fold, mask); 2299 2376 if (err) 2300 - goto errout_idr; 2377 + goto unbind_filter; 2301 2378 2302 2379 err = fl_ht_insert_unique(fnew, fold, &in_ht); 2303 2380 if (err) ··· 2419 2434 fnew->mask->filter_ht_params); 2420 2435 errout_mask: 2421 2436 fl_mask_put(head, fnew->mask); 2437 + 2438 + unbind_filter: 2439 + if (bound_to_filter) { 2440 + if (flags & TCA_ACT_FLAGS_NO_RTNL) 2441 + rtnl_lock(); 2442 + tcf_unbind_filter(tp, &fnew->res); 2443 + if (flags & TCA_ACT_FLAGS_NO_RTNL) 2444 + rtnl_unlock(); 2445 + } 2446 + 2422 2447 errout_idr: 2423 2448 if (!fold) 2424 2449 idr_remove(&head->handle_idr, fnew->handle);
+12 -23
net/sched/cls_matchall.c
··· 159 159 [TCA_MATCHALL_FLAGS] = { .type = NLA_U32 }, 160 160 }; 161 161 162 - static int mall_set_parms(struct net *net, struct tcf_proto *tp, 163 - struct cls_mall_head *head, 164 - unsigned long base, struct nlattr **tb, 165 - struct nlattr *est, u32 flags, u32 fl_flags, 166 - struct netlink_ext_ack *extack) 167 - { 168 - int err; 169 - 170 - err = tcf_exts_validate_ex(net, tp, tb, est, &head->exts, flags, 171 - fl_flags, extack); 172 - if (err < 0) 173 - return err; 174 - 175 - if (tb[TCA_MATCHALL_CLASSID]) { 176 - head->res.classid = nla_get_u32(tb[TCA_MATCHALL_CLASSID]); 177 - tcf_bind_filter(tp, &head->res, base); 178 - } 179 - return 0; 180 - } 181 - 182 162 static int mall_change(struct net *net, struct sk_buff *in_skb, 183 163 struct tcf_proto *tp, unsigned long base, 184 164 u32 handle, struct nlattr **tca, ··· 167 187 { 168 188 struct cls_mall_head *head = rtnl_dereference(tp->root); 169 189 struct nlattr *tb[TCA_MATCHALL_MAX + 1]; 190 + bool bound_to_filter = false; 170 191 struct cls_mall_head *new; 171 192 u32 userflags = 0; 172 193 int err; ··· 207 226 goto err_alloc_percpu; 208 227 } 209 228 210 - err = mall_set_parms(net, tp, new, base, tb, tca[TCA_RATE], 211 - flags, new->flags, extack); 212 - if (err) 229 + err = tcf_exts_validate_ex(net, tp, tb, tca[TCA_RATE], 230 + &new->exts, flags, new->flags, extack); 231 + if (err < 0) 213 232 goto err_set_parms; 233 + 234 + if (tb[TCA_MATCHALL_CLASSID]) { 235 + new->res.classid = nla_get_u32(tb[TCA_MATCHALL_CLASSID]); 236 + tcf_bind_filter(tp, &new->res, base); 237 + bound_to_filter = true; 238 + } 214 239 215 240 if (!tc_skip_hw(new->flags)) { 216 241 err = mall_replace_hw_filter(tp, new, (unsigned long)new, ··· 233 246 return 0; 234 247 235 248 err_replace_hw_filter: 249 + if (bound_to_filter) 250 + tcf_unbind_filter(tp, &new->res); 236 251 err_set_parms: 237 252 free_percpu(new->pf); 238 253 err_alloc_percpu:
+37 -11
net/sched/cls_u32.c
··· 712 712 [TCA_U32_FLAGS] = { .type = NLA_U32 }, 713 713 }; 714 714 715 + static void u32_unbind_filter(struct tcf_proto *tp, struct tc_u_knode *n, 716 + struct nlattr **tb) 717 + { 718 + if (tb[TCA_U32_CLASSID]) 719 + tcf_unbind_filter(tp, &n->res); 720 + } 721 + 722 + static void u32_bind_filter(struct tcf_proto *tp, struct tc_u_knode *n, 723 + unsigned long base, struct nlattr **tb) 724 + { 725 + if (tb[TCA_U32_CLASSID]) { 726 + n->res.classid = nla_get_u32(tb[TCA_U32_CLASSID]); 727 + tcf_bind_filter(tp, &n->res, base); 728 + } 729 + } 730 + 715 731 static int u32_set_parms(struct net *net, struct tcf_proto *tp, 716 - unsigned long base, 717 732 struct tc_u_knode *n, struct nlattr **tb, 718 733 struct nlattr *est, u32 flags, u32 fl_flags, 719 734 struct netlink_ext_ack *extack) ··· 774 759 775 760 if (ht_old) 776 761 ht_old->refcnt--; 777 - } 778 - if (tb[TCA_U32_CLASSID]) { 779 - n->res.classid = nla_get_u32(tb[TCA_U32_CLASSID]); 780 - tcf_bind_filter(tp, &n->res, base); 781 762 } 782 763 783 764 if (ifindex >= 0) ··· 914 903 if (!new) 915 904 return -ENOMEM; 916 905 917 - err = u32_set_parms(net, tp, base, new, tb, 918 - tca[TCA_RATE], flags, new->flags, 919 - extack); 906 + err = u32_set_parms(net, tp, new, tb, tca[TCA_RATE], 907 + flags, new->flags, extack); 920 908 921 909 if (err) { 922 910 __u32_destroy_key(new); 923 911 return err; 924 912 } 925 913 914 + u32_bind_filter(tp, new, base, tb); 915 + 926 916 err = u32_replace_hw_knode(tp, new, flags, extack); 927 917 if (err) { 918 + u32_unbind_filter(tp, new, tb); 919 + 920 + if (tb[TCA_U32_LINK]) { 921 + struct tc_u_hnode *ht_old; 922 + 923 + ht_old = rtnl_dereference(n->ht_down); 924 + if (ht_old) 925 + ht_old->refcnt++; 926 + } 928 927 __u32_destroy_key(new); 929 928 return err; 930 929 } ··· 1095 1074 } 1096 1075 #endif 1097 1076 1098 - err = u32_set_parms(net, tp, base, n, tb, tca[TCA_RATE], 1077 + err = u32_set_parms(net, tp, n, tb, tca[TCA_RATE], 1099 1078 flags, n->flags, extack); 1079 + 1080 + u32_bind_filter(tp, n, base, tb); 1081 + 1100 1082 if (err == 0) { 1101 1083 struct tc_u_knode __rcu **ins; 1102 1084 struct tc_u_knode *pins; 1103 1085 1104 1086 err = u32_replace_hw_knode(tp, n, flags, extack); 1105 1087 if (err) 1106 - goto errhw; 1088 + goto errunbind; 1107 1089 1108 1090 if (!tc_in_hw(n->flags)) 1109 1091 n->flags |= TCA_CLS_FLAGS_NOT_IN_HW; ··· 1124 1100 return 0; 1125 1101 } 1126 1102 1127 - errhw: 1103 + errunbind: 1104 + u32_unbind_filter(tp, n, tb); 1105 + 1128 1106 #ifdef CONFIG_CLS_U32_MARK 1129 1107 free_percpu(n->pcpu_success); 1130 1108 #endif
+4 -1
scripts/Makefile.build
··· 264 264 265 265 rust_allowed_features := new_uninit 266 266 267 + # `--out-dir` is required to avoid temporaries being created by `rustc` in the 268 + # current working directory, which may be not accessible in the out-of-tree 269 + # modules case. 267 270 rust_common_cmd = \ 268 271 RUST_MODFILE=$(modfile) $(RUSTC_OR_CLIPPY) $(rust_flags) \ 269 272 -Zallow-features=$(rust_allowed_features) \ ··· 275 272 --extern alloc --extern kernel \ 276 273 --crate-type rlib -L $(objtree)/rust/ \ 277 274 --crate-name $(basename $(notdir $@)) \ 278 - --emit=dep-info=$(depfile) 275 + --out-dir $(dir $@) --emit=dep-info=$(depfile) 279 276 280 277 # `--emit=obj`, `--emit=asm` and `--emit=llvm-ir` imply a single codegen unit 281 278 # will be used. We explicitly request `-Ccodegen-units=1` in any case, and
+5 -1
scripts/Makefile.host
··· 86 86 hostcxx_flags = -Wp,-MMD,$(depfile) \ 87 87 $(KBUILD_HOSTCXXFLAGS) $(HOST_EXTRACXXFLAGS) \ 88 88 $(HOSTCXXFLAGS_$(target-stem).o) 89 - hostrust_flags = --emit=dep-info=$(depfile) \ 89 + 90 + # `--out-dir` is required to avoid temporaries being created by `rustc` in the 91 + # current working directory, which may be not accessible in the out-of-tree 92 + # modules case. 93 + hostrust_flags = --out-dir $(dir $@) --emit=dep-info=$(depfile) \ 90 94 $(KBUILD_HOSTRUSTFLAGS) $(HOST_EXTRARUSTFLAGS) \ 91 95 $(HOSTRUSTFLAGS_$(target-stem)) 92 96
+1 -1
scripts/clang-tools/gen_compile_commands.py
··· 19 19 _DEFAULT_LOG_LEVEL = 'WARNING' 20 20 21 21 _FILENAME_PATTERN = r'^\..*\.cmd$' 22 - _LINE_PATTERN = r'^savedcmd_[^ ]*\.o := (.* )([^ ]*\.c) *(;|$)' 22 + _LINE_PATTERN = r'^savedcmd_[^ ]*\.o := (.* )([^ ]*\.[cS]) *(;|$)' 23 23 _VALID_LOG_LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] 24 24 # The tools/ directory adopts a different build system, and produces .cmd 25 25 # files in a different format. Do not support it.
+4 -7
scripts/kconfig/gconf.c
··· 636 636 { 637 637 GtkWidget *dialog; 638 638 const gchar *intro_text = 639 - "Welcome to gkc, the GTK+ graphical configuration tool\n" 639 + "Welcome to gconfig, the GTK+ graphical configuration tool.\n" 640 640 "For each option, a blank box indicates the feature is disabled, a\n" 641 641 "check indicates it is enabled, and a dot indicates that it is to\n" 642 642 "be compiled as a module. Clicking on the box will cycle through the three states.\n" ··· 647 647 "Although there is no cross reference yet to help you figure out\n" 648 648 "what other options must be enabled to support the option you\n" 649 649 "are interested in, you can still view the help of a grayed-out\n" 650 - "option.\n" 651 - "\n" 652 - "Toggling Show Debug Info under the Options menu will show \n" 653 - "the dependencies, which you can then match by examining other options."; 650 + "option."; 654 651 655 652 dialog = gtk_message_dialog_new(GTK_WINDOW(main_wnd), 656 653 GTK_DIALOG_DESTROY_WITH_PARENT, ··· 664 667 { 665 668 GtkWidget *dialog; 666 669 const gchar *about_text = 667 - "gkc is copyright (c) 2002 Romain Lievin <roms@lpg.ticalc.org>.\n" 670 + "gconfig is copyright (c) 2002 Romain Lievin <roms@lpg.ticalc.org>.\n" 668 671 "Based on the source code from Roman Zippel.\n"; 669 672 670 673 dialog = gtk_message_dialog_new(GTK_WINDOW(main_wnd), ··· 682 685 { 683 686 GtkWidget *dialog; 684 687 const gchar *license_text = 685 - "gkc is released under the terms of the GNU GPL v2.\n" 688 + "gconfig is released under the terms of the GNU GPL v2.\n" 686 689 "For more information, please see the source code or\n" 687 690 "visit http://www.fsf.org/licenses/licenses.html\n"; 688 691
+24 -11
security/keys/request_key.c
··· 401 401 set_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags); 402 402 403 403 if (dest_keyring) { 404 - ret = __key_link_lock(dest_keyring, &ctx->index_key); 404 + ret = __key_link_lock(dest_keyring, &key->index_key); 405 405 if (ret < 0) 406 406 goto link_lock_failed; 407 - ret = __key_link_begin(dest_keyring, &ctx->index_key, &edit); 408 - if (ret < 0) 409 - goto link_prealloc_failed; 410 407 } 411 408 412 - /* attach the key to the destination keyring under lock, but we do need 409 + /* 410 + * Attach the key to the destination keyring under lock, but we do need 413 411 * to do another check just in case someone beat us to it whilst we 414 - * waited for locks */ 412 + * waited for locks. 413 + * 414 + * The caller might specify a comparison function which looks for keys 415 + * that do not exactly match but are still equivalent from the caller's 416 + * perspective. The __key_link_begin() operation must be done only after 417 + * an actual key is determined. 418 + */ 415 419 mutex_lock(&key_construction_mutex); 416 420 417 421 rcu_read_lock(); ··· 424 420 if (!IS_ERR(key_ref)) 425 421 goto key_already_present; 426 422 427 - if (dest_keyring) 423 + if (dest_keyring) { 424 + ret = __key_link_begin(dest_keyring, &key->index_key, &edit); 425 + if (ret < 0) 426 + goto link_alloc_failed; 428 427 __key_link(dest_keyring, key, &edit); 428 + } 429 429 430 430 mutex_unlock(&key_construction_mutex); 431 431 if (dest_keyring) 432 - __key_link_end(dest_keyring, &ctx->index_key, edit); 432 + __key_link_end(dest_keyring, &key->index_key, edit); 433 433 mutex_unlock(&user->cons_lock); 434 434 *_key = key; 435 435 kleave(" = 0 [%d]", key_serial(key)); ··· 446 438 mutex_unlock(&key_construction_mutex); 447 439 key = key_ref_to_ptr(key_ref); 448 440 if (dest_keyring) { 441 + ret = __key_link_begin(dest_keyring, &key->index_key, &edit); 442 + if (ret < 0) 443 + goto link_alloc_failed_unlocked; 449 444 ret = __key_link_check_live_key(dest_keyring, key); 450 445 if (ret == 0) 451 446 __key_link(dest_keyring, key, &edit); 452 - __key_link_end(dest_keyring, &ctx->index_key, edit); 447 + __key_link_end(dest_keyring, &key->index_key, edit); 453 448 if (ret < 0) 454 449 goto link_check_failed; 455 450 } ··· 467 456 kleave(" = %d [linkcheck]", ret); 468 457 return ret; 469 458 470 - link_prealloc_failed: 471 - __key_link_end(dest_keyring, &ctx->index_key, edit); 459 + link_alloc_failed: 460 + mutex_unlock(&key_construction_mutex); 461 + link_alloc_failed_unlocked: 462 + __key_link_end(dest_keyring, &key->index_key, edit); 472 463 link_lock_failed: 473 464 mutex_unlock(&user->cons_lock); 474 465 key_put(key);
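The request_key change defers __key_link_begin() until the construction mutex is held and the actual key is known, because the keyring search can return an equivalent key that already exists; link preparation must target that key, not the candidate's index. A toy sketch of the reordering; search_for_existing, link_begin and link_commit are illustrative stubs, not the keys API:

#include <stdio.h>

struct key { int id; };

static struct key candidate = { 1 }, existing = { 2 };

static struct key *search_for_existing(int hit)
{ return hit ? &existing : NULL; }
static int link_begin(struct key *k)
{ printf("link_begin(key %d)\n", k->id); return 0; }
static void link_commit(struct key *k)
{ printf("link(key %d)\n", k->id); }

static int attach(int already_present)
{
    /* under the construction lock, decide which key actually gets linked */
    struct key *key = search_for_existing(already_present);
    if (!key)
        key = &candidate;

    /* only now is it safe to prepare the link for *this* key */
    int err = link_begin(key);
    if (err)
        return err;
    link_commit(key);
    return 0;
}

int main(void)
{
    attach(0);  /* links the freshly constructed key */
    attach(1);  /* links the equivalent key found by the search */
    return 0;
}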
+1 -1
security/keys/trusted-keys/trusted_tpm2.c
··· 186 186 } 187 187 188 188 /** 189 - * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer. 189 + * tpm2_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer. 190 190 * 191 191 * @buf: an allocated tpm_buf instance 192 192 * @session_handle: session handle
+1
sound/core/seq/seq_ports.c
··· 149 149 write_lock_irq(&client->ports_lock); 150 150 list_for_each_entry(p, &client->ports_list_head, list) { 151 151 if (p->addr.port == port) { 152 + kfree(new_port); 152 153 num = -EBUSY; 153 154 goto unlock; 154 155 }
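The seq_ports one-liner plugs a leak: a port allocated before the duplicate scan was dropped on the -EBUSY path without being freed. The same pattern as a standalone toy (plain linked list, not the ALSA sequencer structures):

#include <stdio.h>
#include <stdlib.h>

struct port { int addr; struct port *next; };

static struct port *head;

static int create_port(int addr)
{
    struct port *new_port = calloc(1, sizeof(*new_port));
    if (!new_port)
        return -1;
    new_port->addr = addr;

    for (struct port *p = head; p; p = p->next) {
        if (p->addr == addr) {
            free(new_port);  /* the fix: don't leak on the busy path */
            return -1;
        }
    }
    new_port->next = head;
    head = new_port;
    return 0;
}

int main(void)
{
    printf("%d %d\n", create_port(1), create_port(1));  /* 0 -1, no leak */
    return 0;
}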
+7 -5
sound/drivers/pcmtest.c
··· 110 110 struct timer_list timer_instance; 111 111 }; 112 112 113 - static struct pcmtst *pcmtst; 114 - 115 113 static struct snd_pcm_hardware snd_pcmtst_hw = { 116 114 .info = (SNDRV_PCM_INFO_INTERLEAVED | 117 115 SNDRV_PCM_INFO_BLOCK_TRANSFER | ··· 550 552 static int pcmtst_probe(struct platform_device *pdev) 551 553 { 552 554 struct snd_card *card; 555 + struct pcmtst *pcmtst; 553 556 int err; 554 557 555 558 err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); ··· 572 573 if (err < 0) 573 574 return err; 574 575 576 + platform_set_drvdata(pdev, pcmtst); 577 + 575 578 return 0; 576 579 } 577 580 578 - static int pdev_remove(struct platform_device *dev) 581 + static void pdev_remove(struct platform_device *pdev) 579 582 { 583 + struct pcmtst *pcmtst = platform_get_drvdata(pdev); 584 + 580 585 snd_pcmtst_free(pcmtst); 581 - return 0; 582 586 } 583 587 584 588 static struct platform_device pcmtst_pdev = { ··· 591 589 592 590 static struct platform_driver pcmtst_pdrv = { 593 591 .probe = pcmtst_probe, 594 - .remove = pdev_remove, 592 + .remove_new = pdev_remove, 595 593 .driver = { 596 594 .name = "pcmtest", 597 595 },
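The pcmtest change drops the file-scope pcmtst pointer in favor of per-device state carried via platform drvdata, and converts to the void-returning .remove_new callback. A toy model of the drvdata pattern; struct device here is a stand-in, not the driver-core type:

#include <stdio.h>
#include <stdlib.h>

struct device { void *drvdata; };
struct state { int card; };

static int probe(struct device *dev)
{
    struct state *st = calloc(1, sizeof(*st));
    if (!st)
        return -1;
    st->card = 7;
    dev->drvdata = st;      /* platform_set_drvdata() equivalent */
    return 0;
}

static void dev_remove(struct device *dev)   /* void, like .remove_new */
{
    struct state *st = dev->drvdata;         /* platform_get_drvdata() */
    printf("freeing card %d\n", st->card);
    free(st);
}

int main(void)
{
    struct device a = { 0 }, b = { 0 };
    probe(&a);
    probe(&b);   /* two instances coexist; a single global couldn't */
    dev_remove(&a);
    dev_remove(&b);
    return 0;
}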
+50 -12
sound/pci/hda/patch_realtek.c
··· 122 122 unsigned int ultra_low_power:1; 123 123 unsigned int has_hs_key:1; 124 124 unsigned int no_internal_mic_pin:1; 125 + unsigned int en_3kpull_low:1; 125 126 126 127 /* for PLL fix */ 127 128 hda_nid_t pll_nid; ··· 3623 3622 if (!hp_pin) 3624 3623 hp_pin = 0x21; 3625 3624 3625 + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */ 3626 3626 hp_pin_sense = snd_hda_jack_detect(codec, hp_pin); 3627 3627 3628 3628 if (hp_pin_sense) ··· 3640 3638 /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 3641 3639 * when booting with headset plugged. So skip setting it for the codec alc257 3642 3640 */ 3643 - if (codec->core.vendor_id != 0x10ec0236 && 3644 - codec->core.vendor_id != 0x10ec0257) 3641 + if (spec->en_3kpull_low) 3645 3642 alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 3646 3643 3647 3644 if (!spec->no_shutup_pins) ··· 4620 4619 spec->mute_led_coef.mask = 1 << 5; 4621 4620 spec->mute_led_coef.on = 0; 4622 4621 spec->mute_led_coef.off = 1 << 5; 4622 + snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set); 4623 + } 4624 + } 4625 + 4626 + static void alc236_fixup_hp_mute_led_coefbit2(struct hda_codec *codec, 4627 + const struct hda_fixup *fix, int action) 4628 + { 4629 + struct alc_spec *spec = codec->spec; 4630 + 4631 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 4632 + spec->mute_led_polarity = 0; 4633 + spec->mute_led_coef.idx = 0x07; 4634 + spec->mute_led_coef.mask = 1; 4635 + spec->mute_led_coef.on = 1; 4636 + spec->mute_led_coef.off = 0; 4623 4637 snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set); 4624 4638 } 4625 4639 } ··· 7159 7143 ALC285_FIXUP_HP_GPIO_LED, 7160 7144 ALC285_FIXUP_HP_MUTE_LED, 7161 7145 ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED, 7146 + ALC236_FIXUP_HP_MUTE_LED_COEFBIT2, 7162 7147 ALC236_FIXUP_HP_GPIO_LED, 7163 7148 ALC236_FIXUP_HP_MUTE_LED, 7164 7149 ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF, ··· 7230 7213 ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN, 7231 7214 ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS, 7232 7215 ALC236_FIXUP_DELL_DUAL_CODECS, 7216 + ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI, 7233 7217 }; 7234 7218 7235 7219 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 8650 8632 .type = HDA_FIXUP_FUNC, 8651 8633 .v.func = alc285_fixup_hp_spectre_x360_mute_led, 8652 8634 }, 8635 + [ALC236_FIXUP_HP_MUTE_LED_COEFBIT2] = { 8636 + .type = HDA_FIXUP_FUNC, 8637 + .v.func = alc236_fixup_hp_mute_led_coefbit2, 8638 + }, 8653 8639 [ALC236_FIXUP_HP_GPIO_LED] = { 8654 8640 .type = HDA_FIXUP_FUNC, 8655 8641 .v.func = alc236_fixup_hp_gpio_led, ··· 9167 9145 [ALC287_FIXUP_CS35L41_I2C_2] = { 9168 9146 .type = HDA_FIXUP_FUNC, 9169 9147 .v.func = cs35l41_fixup_i2c_two, 9170 - .chained = true, 9171 - .chain_id = ALC269_FIXUP_THINKPAD_ACPI, 9172 9148 }, 9173 9149 [ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED] = { 9174 9150 .type = HDA_FIXUP_FUNC, ··· 9303 9283 .chained = true, 9304 9284 .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 9305 9285 }, 9286 + [ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI] = { 9287 + .type = HDA_FIXUP_FUNC, 9288 + .v.func = cs35l41_fixup_i2c_two, 9289 + .chained = true, 9290 + .chain_id = ALC269_FIXUP_THINKPAD_ACPI, 9291 + }, 9306 9292 }; 9307 9293 9308 9294 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 9419 9393 SND_PCI_QUIRK(0x1028, 0x0c1c, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS), 9420 9394 SND_PCI_QUIRK(0x1028, 0x0c1d, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS), 9421 9395 SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS), 9396 + SND_PCI_QUIRK(0x1028, 0x0cbd, "Dell Oasis 13 CS MTL-U", ALC245_FIXUP_CS35L41_SPI_2), 9397 + SND_PCI_QUIRK(0x1028, 0x0cbe, "Dell Oasis 13 2-IN-1 MTL-U", ALC245_FIXUP_CS35L41_SPI_2), 9398 + SND_PCI_QUIRK(0x1028, 0x0cbf, "Dell Oasis 13 Low Weight MTU-L", ALC245_FIXUP_CS35L41_SPI_2), 9399 + SND_PCI_QUIRK(0x1028, 0x0cc1, "Dell Oasis 14 MTL-H/U", ALC287_FIXUP_CS35L41_I2C_2), 9400 + SND_PCI_QUIRK(0x1028, 0x0cc2, "Dell Oasis 14 2-in-1 MTL-H/U", ALC287_FIXUP_CS35L41_I2C_2), 9401 + SND_PCI_QUIRK(0x1028, 0x0cc3, "Dell Oasis 14 Low Weight MTL-U", ALC287_FIXUP_CS35L41_I2C_2), 9402 + SND_PCI_QUIRK(0x1028, 0x0cc4, "Dell Oasis 16 MTL-H/U", ALC287_FIXUP_CS35L41_I2C_2), 9403 + SND_PCI_QUIRK(0x1028, 0x0cc5, "Dell Oasis MLK 14 RPL-P", ALC287_FIXUP_CS35L41_I2C_2), 9422 9404 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 9423 9405 SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 9424 9406 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), ··· 9550 9516 SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 9551 9517 SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 9552 9518 SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 9519 + SND_PCI_QUIRK(0x103c, 0x887a, "HP Laptop 15s-eq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 9553 9520 SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED), 9554 9521 SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED), 9555 9522 SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED), ··· 9762 9727 SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9763 9728 SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9764 9729 SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9730 + SND_PCI_QUIRK(0x1558, 0x51b3, "Clevo NS70AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9765 9731 SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9766 9732 SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9767 9733 SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 9846 9810 SND_PCI_QUIRK(0x17aa, 0x22be, "Thinkpad X1 Carbon 8th", ALC285_FIXUP_THINKPAD_HEADSET_JACK), 9847 9811 SND_PCI_QUIRK(0x17aa, 0x22c1, "Thinkpad P1 Gen 3", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK), 9848 9812 SND_PCI_QUIRK(0x17aa, 0x22c2, "Thinkpad X1 Extreme Gen 3", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK), 9849 - SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2), 9850 - SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2), 9851 - SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2), 9852 - SND_PCI_QUIRK(0x17aa, 0x2316, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2), 9853 - SND_PCI_QUIRK(0x17aa, 0x2317, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2), 9854 - SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2), 9855 - SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2), 9856 - SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2), 9813 + SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9814 + SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9815 + SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9816 + SND_PCI_QUIRK(0x17aa, 0x2316, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9817 + SND_PCI_QUIRK(0x17aa, 0x2317, "Thinkpad P1 Gen 6", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9818 + SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9819 + SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9820 + SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2_THINKPAD_ACPI), 9857 9821 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 9858 9822 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 9859 9823 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), ··· 10718 10682 spec->shutup = alc256_shutup; 10719 10683 spec->init_hook = alc256_init; 10720 10684 spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */ 10685 + if (codec->bus->pci->vendor == PCI_VENDOR_ID_AMD) 10686 + spec->en_3kpull_low = true; 10721 10687 break; 10722 10688 case 0x10ec0257: 10723 10689 spec->codec_variant = ALC269_TYPE_ALC257;
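patch_realtek now records the 3k-pulldown decision once at probe time (en_3kpull_low, set for ALC256 on AMD platforms) instead of re-checking codec IDs in the shutup path. A sketch of the flag-at-probe pattern; the IDs and struct layout here are illustrative only:

#include <stdio.h>

struct spec { unsigned int en_3kpull_low:1; };

static void probe(struct spec *s, unsigned int codec_id, int amd_platform)
{
    /* decide the quirk once, where all the information is at hand */
    if (codec_id == 0x10ec0256 && amd_platform)
        s->en_3kpull_low = 1;
}

static void shutup(struct spec *s)
{
    if (s->en_3kpull_low)   /* was: codec_id != 0x...0236 && != 0x...0257 */
        printf("enable 3k pulldown\n");
    else
        printf("skip 3k pulldown\n");
}

int main(void)
{
    struct spec s = { 0 };
    probe(&s, 0x10ec0256, 1);
    shutup(&s);
    return 0;
}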
+2 -2
sound/soc/amd/ps/pci-ps.c
··· 257 257 &sdw_manager_bitmap, 1); 258 258 259 259 if (ret) { 260 - dev_err(dev, "Failed to read mipi-sdw-manager-list: %d\n", ret); 260 + dev_dbg(dev, "Failed to read mipi-sdw-manager-list: %d\n", ret); 261 261 return -EINVAL; 262 262 } 263 263 count = hweight32(sdw_manager_bitmap); ··· 641 641 ret = get_acp63_device_config(val, pci, adata); 642 642 /* ACP PCI driver probe should be continued even PDM or SoundWire Devices are not found */ 643 643 if (ret) { 644 - dev_err(&pci->dev, "get acp device config failed:%d\n", ret); 644 + dev_dbg(&pci->dev, "get acp device config failed:%d\n", ret); 645 645 goto skip_pdev_creation; 646 646 } 647 647 ret = create_acp63_platform_devs(pci, adata, addr);
+6
sound/soc/codecs/cs42l51-i2c.c
··· 19 19 }; 20 20 MODULE_DEVICE_TABLE(i2c, cs42l51_i2c_id); 21 21 22 + const struct of_device_id cs42l51_of_match[] = { 23 + { .compatible = "cirrus,cs42l51", }, 24 + { } 25 + }; 26 + MODULE_DEVICE_TABLE(of, cs42l51_of_match); 27 + 22 28 static int cs42l51_i2c_probe(struct i2c_client *i2c) 23 29 { 24 30 struct regmap_config config;
-7
sound/soc/codecs/cs42l51.c
··· 823 823 } 824 824 EXPORT_SYMBOL_GPL(cs42l51_resume); 825 825 826 - const struct of_device_id cs42l51_of_match[] = { 827 - { .compatible = "cirrus,cs42l51", }, 828 - { } 829 - }; 830 - MODULE_DEVICE_TABLE(of, cs42l51_of_match); 831 - EXPORT_SYMBOL_GPL(cs42l51_of_match); 832 - 833 826 MODULE_AUTHOR("Arnaud Patard <arnaud.patard@rtp-net.org>"); 834 827 MODULE_DESCRIPTION("Cirrus Logic CS42L51 ALSA SoC Codec Driver"); 835 828 MODULE_LICENSE("GPL");
-1
sound/soc/codecs/cs42l51.h
··· 16 16 void cs42l51_remove(struct device *dev); 17 17 int __maybe_unused cs42l51_suspend(struct device *dev); 18 18 int __maybe_unused cs42l51_resume(struct device *dev); 19 - extern const struct of_device_id cs42l51_of_match[]; 20 19 21 20 #define CS42L51_CHIP_ID 0x1B 22 21 #define CS42L51_CHIP_REV_A 0x00
-1
sound/soc/codecs/rt5640.c
··· 53 53 {RT5640_PR_BASE + 0x3d, 0x3600}, 54 54 {RT5640_PR_BASE + 0x12, 0x0aa8}, 55 55 {RT5640_PR_BASE + 0x14, 0x0aaa}, 56 - {RT5640_PR_BASE + 0x20, 0x6110}, 57 56 {RT5640_PR_BASE + 0x21, 0xe0e0}, 58 57 {RT5640_PR_BASE + 0x23, 0x1804}, 59 58 };
+1 -7
sound/soc/fsl/fsl_sai.c
··· 507 507 savediv / 2 - 1); 508 508 } 509 509 510 - if (sai->soc_data->max_register >= FSL_SAI_MCTL) { 511 - /* SAI is in master mode at this point, so enable MCLK */ 512 - regmap_update_bits(sai->regmap, FSL_SAI_MCTL, 513 - FSL_SAI_MCTL_MCLK_EN, FSL_SAI_MCTL_MCLK_EN); 514 - } 515 - 516 510 return 0; 517 511 } 518 512 ··· 713 719 u32 xcsr, count = 100; 714 720 715 721 regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), 716 - FSL_SAI_CSR_TERE, 0); 722 + FSL_SAI_CSR_TERE | FSL_SAI_CSR_BCE, 0); 717 723 718 724 /* TERE will remain set till the end of current frame */ 719 725 do {
+1
sound/soc/fsl/fsl_sai.h
··· 91 91 /* SAI Transmit/Receive Control Register */ 92 92 #define FSL_SAI_CSR_TERE BIT(31) 93 93 #define FSL_SAI_CSR_SE BIT(30) 94 + #define FSL_SAI_CSR_BCE BIT(28) 94 95 #define FSL_SAI_CSR_FR BIT(25) 95 96 #define FSL_SAI_CSR_SR BIT(24) 96 97 #define FSL_SAI_CSR_xF_SHIFT 16
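The fsl_sai stop path above now clears the new bit-clock-enable bit (BCE, bit 28) together with TERE in one masked write. A minimal model of the regmap_update_bits()-style read-modify-write, with a plain variable standing in for the CSR register:

#include <stdio.h>
#include <stdint.h>

#define FSL_SAI_CSR_TERE (1u << 31)
#define FSL_SAI_CSR_BCE  (1u << 28)

static uint32_t csr = FSL_SAI_CSR_TERE | FSL_SAI_CSR_BCE | 0x5;

static void update_bits(uint32_t *reg, uint32_t mask, uint32_t val)
{
    *reg = (*reg & ~mask) | (val & mask);
}

int main(void)
{
    /* was: mask = TERE only, which left the bit clock enabled */
    update_bits(&csr, FSL_SAI_CSR_TERE | FSL_SAI_CSR_BCE, 0);
    printf("csr = 0x%08x\n", (unsigned)csr);   /* 0x00000005 */
    return 0;
}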
+4 -5
sound/soc/sof/ipc3-dtrace.c
··· 186 186 struct snd_sof_dfsentry *dfse = file->private_data; 187 187 struct sof_ipc_trace_filter_elem *elems = NULL; 188 188 struct snd_sof_dev *sdev = dfse->sdev; 189 - loff_t pos = 0; 190 189 int num_elems; 191 190 char *string; 192 191 int ret; ··· 200 201 if (!string) 201 202 return -ENOMEM; 202 203 203 - /* assert null termination */ 204 - string[count] = 0; 205 - ret = simple_write_to_buffer(string, count, &pos, from, count); 206 - if (ret < 0) 204 + if (copy_from_user(string, from, count)) { 205 + ret = -EFAULT; 207 206 goto error; 207 + } 208 + string[count] = '\0'; 208 209 209 210 ret = trace_filter_parse(sdev, string, &num_elems, &elems); 210 211 if (ret < 0)
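The dtrace-filter write previously NUL-terminated the buffer before simple_write_to_buffer() had filled it and ignored short copies; it now copies the user data first with copy_from_user() and terminates at exactly count bytes. A userspace analog of the corrected ordering; copy_user stands in for copy_from_user:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int copy_user(char *dst, const char *src, size_t count)
{
    memcpy(dst, src, count);    /* pretend this can fail partway */
    return 0;                   /* 0 on success, like copy_from_user() */
}

static char *read_filter(const char *from, size_t count)
{
    char *string = malloc(count + 1);   /* +1 for the terminator */
    if (!string)
        return NULL;

    if (copy_user(string, from, count)) {   /* copy before terminating */
        free(string);
        return NULL;
    }
    string[count] = '\0';   /* terminate only once the data is in place */
    return string;
}

int main(void)
{
    char *s = read_filter("pipeline 1 on", 13);
    printf("%s\n", s ? s : "(error)");
    free(s);
    return 0;
}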
+8
tools/arch/arm64/include/asm/cputype.h
··· 126 126 #define APPLE_CPU_PART_M1_FIRESTORM_MAX 0x029 127 127 #define APPLE_CPU_PART_M2_BLIZZARD 0x032 128 128 #define APPLE_CPU_PART_M2_AVALANCHE 0x033 129 + #define APPLE_CPU_PART_M2_BLIZZARD_PRO 0x034 130 + #define APPLE_CPU_PART_M2_AVALANCHE_PRO 0x035 131 + #define APPLE_CPU_PART_M2_BLIZZARD_MAX 0x038 132 + #define APPLE_CPU_PART_M2_AVALANCHE_MAX 0x039 129 133 130 134 #define AMPERE_CPU_PART_AMPERE1 0xAC3 131 135 ··· 185 181 #define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX) 186 182 #define MIDR_APPLE_M2_BLIZZARD MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD) 187 183 #define MIDR_APPLE_M2_AVALANCHE MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE) 184 + #define MIDR_APPLE_M2_BLIZZARD_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_PRO) 185 + #define MIDR_APPLE_M2_AVALANCHE_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_PRO) 186 + #define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX) 187 + #define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX) 188 188 #define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1) 189 189 190 190 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
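The new Apple M2 Pro/Max part IDs above feed MIDR_CPU_MODEL(). A hedged sketch of composing a MIDR value from the implementer and part-number fields, assuming the architectural MIDR_EL1 layout (implementer in bits [31:24], part number in bits [15:4]); the kernel macro also fixes other fields, so treat this as illustrative only:

#include <stdio.h>

#define ARM_CPU_IMP_APPLE               0x61
#define APPLE_CPU_PART_M2_BLIZZARD_PRO  0x034

/* assumption: only the implementer and partnum fields are shown here */
static unsigned int midr(unsigned int imp, unsigned int part)
{
    return (imp << 24) | (part << 4);
}

int main(void)
{
    printf("M2 Blizzard Pro MIDR model: 0x%08x\n",
           midr(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_PRO));
    return 0;
}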
+1 -1
tools/build/feature/Makefile
··· 208 208 $(BUILD) -ltraceevent 209 209 210 210 $(OUTPUT)test-libtracefs.bin: 211 - $(BUILD) -ltracefs 211 + $(BUILD) $(shell $(PKG_CONFIG) --cflags libtraceevent 2>/dev/null) -ltracefs 212 212 213 213 $(OUTPUT)test-libcrypto.bin: 214 214 $(BUILD) -lcrypto
+4 -1
tools/include/uapi/asm-generic/unistd.h
··· 817 817 #define __NR_set_mempolicy_home_node 450 818 818 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node) 819 819 820 + #define __NR_cachestat 451 821 + __SYSCALL(__NR_cachestat, sys_cachestat) 822 + 820 823 #undef __NR_syscalls 821 - #define __NR_syscalls 451 824 + #define __NR_syscalls 452 822 825 823 826 /* 824 827 * 32 bit systems traditionally used different
+93 -2
tools/include/uapi/drm/i915_drm.h
··· 280 280 #define I915_PMU_ENGINE_SEMA(class, instance) \ 281 281 __I915_PMU_ENGINE(class, instance, I915_SAMPLE_SEMA) 282 282 283 - #define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x)) 283 + /* 284 + * Top 4 bits of every non-engine counter are GT id. 285 + */ 286 + #define __I915_PMU_GT_SHIFT (60) 287 + 288 + #define ___I915_PMU_OTHER(gt, x) \ 289 + (((__u64)__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x)) | \ 290 + ((__u64)(gt) << __I915_PMU_GT_SHIFT)) 291 + 292 + #define __I915_PMU_OTHER(x) ___I915_PMU_OTHER(0, x) 284 293 285 294 #define I915_PMU_ACTUAL_FREQUENCY __I915_PMU_OTHER(0) 286 295 #define I915_PMU_REQUESTED_FREQUENCY __I915_PMU_OTHER(1) ··· 298 289 #define I915_PMU_SOFTWARE_GT_AWAKE_TIME __I915_PMU_OTHER(4) 299 290 300 291 #define I915_PMU_LAST /* Deprecated - do not use */ I915_PMU_RC6_RESIDENCY 292 + 293 + #define __I915_PMU_ACTUAL_FREQUENCY(gt) ___I915_PMU_OTHER(gt, 0) 294 + #define __I915_PMU_REQUESTED_FREQUENCY(gt) ___I915_PMU_OTHER(gt, 1) 295 + #define __I915_PMU_INTERRUPTS(gt) ___I915_PMU_OTHER(gt, 2) 296 + #define __I915_PMU_RC6_RESIDENCY(gt) ___I915_PMU_OTHER(gt, 3) 297 + #define __I915_PMU_SOFTWARE_GT_AWAKE_TIME(gt) ___I915_PMU_OTHER(gt, 4) 301 298 302 299 /* Each region is a minimum of 16k, and there are at most 255 of them. 303 300 */ ··· 674 659 * If the IOCTL is successful, the returned parameter will be set to one of the 675 660 * following values: 676 661 * * 0 if HuC firmware load is not complete, 677 - * * 1 if HuC firmware is authenticated and running. 662 + * * 1 if HuC firmware is loaded and fully authenticated, 663 + * * 2 if HuC firmware is loaded and authenticated for clear media only 678 664 */ 679 665 #define I915_PARAM_HUC_STATUS 42 680 666 ··· 786 770 * timestamp frequency, but differs on some platforms. 787 771 */ 788 772 #define I915_PARAM_OA_TIMESTAMP_FREQUENCY 57 773 + 774 + /* 775 + * Query the status of PXP support in i915. 776 + * 777 + * The query can fail in the following scenarios with the listed error codes: 778 + * -ENODEV = PXP support is not available on the GPU device or in the 779 + * kernel due to missing component drivers or kernel configs. 780 + * 781 + * If the IOCTL is successful, the returned parameter will be set to one of 782 + * the following values: 783 + * 1 = PXP feature is supported and is ready for use. 784 + * 2 = PXP feature is supported but should be ready soon (pending 785 + * initialization of non-i915 system dependencies). 786 + * 787 + * NOTE: When param is supported (positive return values), user space should 788 + * still refer to the GEM PXP context-creation UAPI header specs to be 789 + * aware of possible failure due to system state machine at the time. 790 + */ 791 + #define I915_PARAM_PXP_STATUS 58 789 792 790 793 /* Must be kept compact -- no holes and well documented */ 791 794 ··· 2131 2096 * 2132 2097 * -ENODEV: feature not available 2133 2098 * -EPERM: trying to mark a recoverable or not bannable context as protected 2099 + * -ENXIO: A dependency such as a component driver or firmware is not yet 2100 + * loaded so user space may need to attempt again. Depending on the 2101 + * device, this error may be reported if protected context creation is 2102 + * attempted very early after kernel start because the internal timeout 2103 + * waiting for such dependencies is not guaranteed to be larger than 2104 + * required (numbers differ depending on system and kernel config): 2105 + * - ADL/RPL: dependencies may take up to 3 seconds from kernel start 2106 + * while context creation internal timeout is 250 milisecs 2107 + * - MTL: dependencies may take up to 8 seconds from kernel start 2108 + * while context creation internal timeout is 250 milisecs 2109 + * NOTE: such dependencies happen once, so a subsequent call to create a 2110 + * protected context after a prior successful call will not experience 2111 + * such timeouts and will not return -ENXIO (unless the driver is reloaded, 2112 + * or, depending on the device, resumes from a suspended state). 2113 + * -EIO: The firmware did not succeed in creating the protected context. 2134 2114 */ 2135 2115 #define I915_CONTEXT_PARAM_PROTECTED_CONTENT 0xd 2136 2116 /* Must be kept compact -- no holes and well documented */ ··· 3680 3630 * 3681 3631 * For I915_GEM_CREATE_EXT_PROTECTED_CONTENT usage see 3682 3632 * struct drm_i915_gem_create_ext_protected_content. 3633 + * 3634 + * For I915_GEM_CREATE_EXT_SET_PAT usage see 3635 + * struct drm_i915_gem_create_ext_set_pat. 3683 3636 */ 3684 3637 #define I915_GEM_CREATE_EXT_MEMORY_REGIONS 0 3685 3638 #define I915_GEM_CREATE_EXT_PROTECTED_CONTENT 1 3639 + #define I915_GEM_CREATE_EXT_SET_PAT 2 3686 3640 __u64 extensions; 3687 3641 }; 3688 3642 ··· 3799 3745 struct i915_user_extension base; 3800 3746 /** @flags: reserved for future usage, currently MBZ */ 3801 3747 __u32 flags; 3748 + }; 3749 + 3750 + /** 3751 + * struct drm_i915_gem_create_ext_set_pat - The 3752 + * I915_GEM_CREATE_EXT_SET_PAT extension. 3753 + * 3754 + * If this extension is provided, the specified caching policy (PAT index) is 3755 + * applied to the buffer object. 3756 + * 3757 + * Below is an example on how to create an object with specific caching policy: 3758 + * 3759 + * .. code-block:: C 3760 + * 3761 + * struct drm_i915_gem_create_ext_set_pat set_pat_ext = { 3762 + * .base = { .name = I915_GEM_CREATE_EXT_SET_PAT }, 3763 + * .pat_index = 0, 3764 + * }; 3765 + * struct drm_i915_gem_create_ext create_ext = { 3766 + * .size = PAGE_SIZE, 3767 + * .extensions = (uintptr_t)&set_pat_ext, 3768 + * }; 3769 + * 3770 + * int err = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create_ext); 3771 + * if (err) ... 3772 + */ 3773 + struct drm_i915_gem_create_ext_set_pat { 3774 + /** @base: Extension link. See struct i915_user_extension. */ 3775 + struct i915_user_extension base; 3776 + /** 3777 + * @pat_index: PAT index to be set 3778 + * PAT index is a bit field in Page Table Entry to control caching 3779 + * behaviors for GPU accesses. The definition of PAT index is 3780 + * platform dependent and can be found in hardware specifications, 3781 + */ 3782 + __u32 pat_index; 3783 + /** @rsvd: reserved for future use */ 3784 + __u32 rsvd; 3802 3785 }; 3803 3786 3804 3787 /* ID of the protected content session managed by i915 when PXP is active */
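The PMU hunk moves the GT id into the top four bits of every non-engine counter config. A small sketch using __I915_PMU_GT_SHIFT from the hunk; BASE is an assumption standing in for __I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1, whose value comes from macros outside this hunk:

#include <stdio.h>
#include <stdint.h>

#define __I915_PMU_GT_SHIFT (60)
#define BASE 0x100000ULL    /* assumption: end of the engine config space */

static uint64_t pmu_other(unsigned int gt, unsigned int x)
{
    return (BASE + x) | ((uint64_t)gt << __I915_PMU_GT_SHIFT);
}

int main(void)
{
    /* ___I915_PMU_OTHER(gt, 3) corresponds to RC6 residency per GT */
    printf("GT0 RC6: 0x%016llx\n", (unsigned long long)pmu_other(0, 3));
    printf("GT1 RC6: 0x%016llx\n", (unsigned long long)pmu_other(1, 3));
    return 0;
}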
+5
tools/include/uapi/linux/fcntl.h
··· 112 112 113 113 #define AT_RECURSIVE 0x8000 /* Apply to the entire subtree */ 114 114 115 + /* Flags for name_to_handle_at(2). We reuse AT_ flag space to save bits... */ 116 + #define AT_HANDLE_FID AT_REMOVEDIR /* file handle is needed to 117 + compare object identity and may not 118 + be usable to open_by_handle_at(2) */ 119 + 115 120 #endif /* _UAPI_LINUX_FCNTL_H */
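AT_HANDLE_FID asks name_to_handle_at(2) for a handle that identifies the object (e.g. for fanotify comparisons) but may not be usable with open_by_handle_at(2). A hedged example; the /tmp path is arbitrary, and the fallback define mirrors the hunk for libc headers that predate the flag:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef AT_HANDLE_FID
#define AT_HANDLE_FID 0x200   /* reuses the AT_REMOVEDIR bit, per the hunk */
#endif

int main(void)
{
    struct file_handle *fh;
    int mount_id;

    fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
    if (!fh)
        return 1;
    fh->handle_bytes = MAX_HANDLE_SZ;

    if (name_to_handle_at(AT_FDCWD, "/tmp", fh, &mount_id,
                          AT_HANDLE_FID) < 0) {
        perror("name_to_handle_at");   /* EINVAL on older kernels */
        return 1;
    }
    printf("handle: %u bytes, type %d, mount_id %d\n",
           fh->handle_bytes, fh->handle_type, mount_id);
    free(fh);
    return 0;
}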
+5 -1
tools/include/uapi/linux/kvm.h
··· 1190 1190 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225 1191 1191 #define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226 1192 1192 #define KVM_CAP_COUNTER_OFFSET 227 1193 + #define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228 1194 + #define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229 1193 1195 1194 1196 #ifdef KVM_CAP_IRQ_ROUTING 1195 1197 ··· 1444 1442 #define KVM_DEV_TYPE_XIVE KVM_DEV_TYPE_XIVE 1445 1443 KVM_DEV_TYPE_ARM_PV_TIME, 1446 1444 #define KVM_DEV_TYPE_ARM_PV_TIME KVM_DEV_TYPE_ARM_PV_TIME 1445 + KVM_DEV_TYPE_RISCV_AIA, 1446 + #define KVM_DEV_TYPE_RISCV_AIA KVM_DEV_TYPE_RISCV_AIA 1447 1447 KVM_DEV_TYPE_MAX, 1448 1448 }; 1449 1449 ··· 1617 1613 #define KVM_GET_DEBUGREGS _IOR(KVMIO, 0xa1, struct kvm_debugregs) 1618 1614 #define KVM_SET_DEBUGREGS _IOW(KVMIO, 0xa2, struct kvm_debugregs) 1619 1615 /* 1620 - * vcpu version available with KVM_ENABLE_CAP 1616 + * vcpu version available with KVM_CAP_ENABLE_CAP 1621 1617 * vm version available with KVM_CAP_ENABLE_CAP_VM 1622 1618 */ 1623 1619 #define KVM_ENABLE_CAP _IOW(KVMIO, 0xa3, struct kvm_enable_cap)
+14
tools/include/uapi/linux/mman.h
··· 4 4 5 5 #include <asm/mman.h> 6 6 #include <asm-generic/hugetlb_encode.h> 7 + #include <linux/types.h> 7 8 8 9 #define MREMAP_MAYMOVE 1 9 10 #define MREMAP_FIXED 2 ··· 41 40 #define MAP_HUGE_1GB HUGETLB_FLAG_ENCODE_1GB 42 41 #define MAP_HUGE_2GB HUGETLB_FLAG_ENCODE_2GB 43 42 #define MAP_HUGE_16GB HUGETLB_FLAG_ENCODE_16GB 43 + 44 + struct cachestat_range { 45 + __u64 off; 46 + __u64 len; 47 + }; 48 + 49 + struct cachestat { 50 + __u64 nr_cache; 51 + __u64 nr_dirty; 52 + __u64 nr_writeback; 53 + __u64 nr_evicted; 54 + __u64 nr_recently_evicted; 55 + }; 44 56 45 57 #endif /* _UAPI_LINUX_MMAN_H */
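Together with syscall number 451 added in the unistd.h hunk above, these structures back the new cachestat(2) call. A hedged sketch invoking it raw via syscall(2) since libc wrappers may not exist yet, on the assumption that a zero length means "to the end of the file":

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/types.h>

#ifndef __NR_cachestat
#define __NR_cachestat 451
#endif

struct cachestat_range { __u64 off; __u64 len; };
struct cachestat {
    __u64 nr_cache, nr_dirty, nr_writeback,
          nr_evicted, nr_recently_evicted;
};

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);
    struct cachestat_range range = { 0, 0 };   /* len 0: whole file */
    struct cachestat cs;

    if (fd < 0 || syscall(__NR_cachestat, fd, &range, &cs, 0) < 0) {
        perror("cachestat");   /* ENOSYS before Linux 6.5 */
        return 1;
    }
    printf("cached %llu dirty %llu writeback %llu\n",
           (unsigned long long)cs.nr_cache,
           (unsigned long long)cs.nr_dirty,
           (unsigned long long)cs.nr_writeback);
    close(fd);
    return 0;
}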
+2 -1
tools/include/uapi/linux/mount.h
··· 74 74 #define MOVE_MOUNT_T_AUTOMOUNTS 0x00000020 /* Follow automounts on to path */ 75 75 #define MOVE_MOUNT_T_EMPTY_PATH 0x00000040 /* Empty to path permitted */ 76 76 #define MOVE_MOUNT_SET_GROUP 0x00000100 /* Set sharing group instead */ 77 - #define MOVE_MOUNT__MASK 0x00000177 77 + #define MOVE_MOUNT_BENEATH 0x00000200 /* Mount beneath top mount */ 78 + #define MOVE_MOUNT__MASK 0x00000377 78 79 79 80 /* 80 81 * fsopen() flags.
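MOVE_MOUNT_BENEATH lets move_mount(2) slide a mount underneath the existing top mount at the target rather than stacking on top of it. A hedged sketch; the /mnt/new and /mnt/top paths are hypothetical, and the call needs CAP_SYS_ADMIN plus a 6.5+ kernel:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <fcntl.h>

#ifndef MOVE_MOUNT_BENEATH
#define MOVE_MOUNT_BENEATH 0x00000200   /* mirrors the hunk above */
#endif

int main(void)
{
    /* assumes /mnt/new is an existing mount to slide beneath /mnt/top */
    if (syscall(SYS_move_mount, AT_FDCWD, "/mnt/new",
                AT_FDCWD, "/mnt/top", MOVE_MOUNT_BENEATH) < 0) {
        perror("move_mount");
        return 1;
    }
    return 0;
}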
+11
tools/include/uapi/linux/prctl.h
··· 294 294 295 295 #define PR_SET_MEMORY_MERGE 67 296 296 #define PR_GET_MEMORY_MERGE 68 297 + 298 + #define PR_RISCV_V_SET_CONTROL 69 299 + #define PR_RISCV_V_GET_CONTROL 70 300 + # define PR_RISCV_V_VSTATE_CTRL_DEFAULT 0 301 + # define PR_RISCV_V_VSTATE_CTRL_OFF 1 302 + # define PR_RISCV_V_VSTATE_CTRL_ON 2 303 + # define PR_RISCV_V_VSTATE_CTRL_INHERIT (1 << 4) 304 + # define PR_RISCV_V_VSTATE_CTRL_CUR_MASK 0x3 305 + # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc 306 + # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f 307 + 297 308 #endif /* _LINUX_PRCTL_H */
+31
tools/include/uapi/linux/vhost.h
··· 45 45 #define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64) 46 46 /* Specify an eventfd file descriptor to signal on log write. */ 47 47 #define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int) 48 + /* By default, a device gets one vhost_worker that its virtqueues share. This 49 + * command allows the owner of the device to create an additional vhost_worker 50 + * for the device. It can later be bound to 1 or more of its virtqueues using 51 + * the VHOST_ATTACH_VRING_WORKER command. 52 + * 53 + * This must be called after VHOST_SET_OWNER and the caller must be the owner 54 + * of the device. The new thread will inherit caller's cgroups and namespaces, 55 + * and will share the caller's memory space. The new thread will also be 56 + * counted against the caller's RLIMIT_NPROC value. 57 + * 58 + * The worker's ID used in other commands will be returned in 59 + * vhost_worker_state. 60 + */ 61 + #define VHOST_NEW_WORKER _IOR(VHOST_VIRTIO, 0x8, struct vhost_worker_state) 62 + /* Free a worker created with VHOST_NEW_WORKER if it's not attached to any 63 + * virtqueue. If userspace is not able to call this for workers its created, 64 + * the kernel will free all the device's workers when the device is closed. 65 + */ 66 + #define VHOST_FREE_WORKER _IOW(VHOST_VIRTIO, 0x9, struct vhost_worker_state) 48 67 49 68 /* Ring setup. */ 50 69 /* Set number of descriptors in ring. This parameter can not ··· 89 70 #define VHOST_VRING_BIG_ENDIAN 1 90 71 #define VHOST_SET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x13, struct vhost_vring_state) 91 72 #define VHOST_GET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x14, struct vhost_vring_state) 73 + /* Attach a vhost_worker created with VHOST_NEW_WORKER to one of the device's 74 + * virtqueues. 75 + * 76 + * This will replace the virtqueue's existing worker. If the replaced worker 77 + * is no longer attached to any virtqueues, it can be freed with 78 + * VHOST_FREE_WORKER. 79 + */ 80 + #define VHOST_ATTACH_VRING_WORKER _IOW(VHOST_VIRTIO, 0x15, \ 81 + struct vhost_vring_worker) 82 + /* Return the vring worker's ID */ 83 + #define VHOST_GET_VRING_WORKER _IOWR(VHOST_VIRTIO, 0x16, \ 84 + struct vhost_vring_worker) 92 85 93 86 /* The following ioctls use eventfd file descriptors to signal and poll 94 87 * for events. */
+79 -2
tools/include/uapi/sound/asound.h
··· 274 274 #define SNDRV_PCM_INFO_DOUBLE 0x00000004 /* Double buffering needed for PCM start/stop */ 275 275 #define SNDRV_PCM_INFO_BATCH 0x00000010 /* double buffering */ 276 276 #define SNDRV_PCM_INFO_SYNC_APPLPTR 0x00000020 /* need the explicit sync of appl_ptr update */ 277 + #define SNDRV_PCM_INFO_PERFECT_DRAIN 0x00000040 /* silencing at the end of stream is not required */ 277 278 #define SNDRV_PCM_INFO_INTERLEAVED 0x00000100 /* channels are interleaved */ 278 279 #define SNDRV_PCM_INFO_NONINTERLEAVED 0x00000200 /* channels are not interleaved */ 279 280 #define SNDRV_PCM_INFO_COMPLEX 0x00000400 /* complex frame organization (mmap only) */ ··· 384 383 #define SNDRV_PCM_HW_PARAMS_NORESAMPLE (1<<0) /* avoid rate resampling */ 385 384 #define SNDRV_PCM_HW_PARAMS_EXPORT_BUFFER (1<<1) /* export buffer */ 386 385 #define SNDRV_PCM_HW_PARAMS_NO_PERIOD_WAKEUP (1<<2) /* disable period wakeups */ 386 + #define SNDRV_PCM_HW_PARAMS_NO_DRAIN_SILENCE (1<<3) /* suppress drain with the filling 387 + * of the silence samples 388 + */ 387 389 struct snd_interval { 389 391 unsigned int min, max; ··· 712 708 * Raw MIDI section - /dev/snd/midi?? 713 709 */ 714 710 715 - #define SNDRV_RAWMIDI_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 2) 711 + #define SNDRV_RAWMIDI_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 4) 716 712 717 713 enum { 718 714 SNDRV_RAWMIDI_STREAM_OUTPUT = 0, ··· 723 719 #define SNDRV_RAWMIDI_INFO_OUTPUT 0x00000001 724 720 #define SNDRV_RAWMIDI_INFO_INPUT 0x00000002 725 721 #define SNDRV_RAWMIDI_INFO_DUPLEX 0x00000004 722 + #define SNDRV_RAWMIDI_INFO_UMP 0x00000008 726 723 727 724 struct snd_rawmidi_info { 728 725 unsigned int device; /* RO/WR (control): device number */ ··· 784 779 }; 785 780 #endif 786 781 782 + /* UMP EP info flags */ 783 + #define SNDRV_UMP_EP_INFO_STATIC_BLOCKS 0x01 784 + 785 + /* UMP EP Protocol / JRTS capability bits */ 786 + #define SNDRV_UMP_EP_INFO_PROTO_MIDI_MASK 0x0300 787 + #define SNDRV_UMP_EP_INFO_PROTO_MIDI1 0x0100 /* MIDI 1.0 */ 788 + #define SNDRV_UMP_EP_INFO_PROTO_MIDI2 0x0200 /* MIDI 2.0 */ 789 + #define SNDRV_UMP_EP_INFO_PROTO_JRTS_MASK 0x0003 790 + #define SNDRV_UMP_EP_INFO_PROTO_JRTS_TX 0x0001 /* JRTS Transmit */ 791 + #define SNDRV_UMP_EP_INFO_PROTO_JRTS_RX 0x0002 /* JRTS Receive */ 792 + 793 + /* UMP Endpoint information */ 794 + struct snd_ump_endpoint_info { 795 + int card; /* card number */ 796 + int device; /* device number */ 797 + unsigned int flags; /* additional info */ 798 + unsigned int protocol_caps; /* protocol capabilities */ 799 + unsigned int protocol; /* current protocol */ 800 + unsigned int num_blocks; /* # of function blocks */ 801 + unsigned short version; /* UMP major/minor version */ 802 + unsigned short family_id; /* MIDI device family ID */ 803 + unsigned short model_id; /* MIDI family model ID */ 804 + unsigned int manufacturer_id; /* MIDI manufacturer ID */ 805 + unsigned char sw_revision[4]; /* software revision */ 806 + unsigned short padding; 807 + unsigned char name[128]; /* endpoint name string */ 808 + unsigned char product_id[128]; /* unique product id string */ 809 + unsigned char reserved[32]; 810 + } __packed; 811 + 812 + /* UMP direction */ 813 + #define SNDRV_UMP_DIR_INPUT 0x01 814 + #define SNDRV_UMP_DIR_OUTPUT 0x02 815 + #define SNDRV_UMP_DIR_BIDIRECTION 0x03 816 + 817 + /* UMP block info flags */ 818 + #define SNDRV_UMP_BLOCK_IS_MIDI1 (1U << 0) /* MIDI 1.0 port w/o restrict */ 819 + #define SNDRV_UMP_BLOCK_IS_LOWSPEED (1U << 1) /* 31.25Kbps B/W MIDI1 port */ 820 + 821 + /* UMP block user-interface hint */ 822 + #define SNDRV_UMP_BLOCK_UI_HINT_UNKNOWN 0x00 823 + #define SNDRV_UMP_BLOCK_UI_HINT_RECEIVER 0x01 824 + #define SNDRV_UMP_BLOCK_UI_HINT_SENDER 0x02 825 + #define SNDRV_UMP_BLOCK_UI_HINT_BOTH 0x03 826 + 827 + /* UMP groups and blocks */ 828 + #define SNDRV_UMP_MAX_GROUPS 16 829 + #define SNDRV_UMP_MAX_BLOCKS 32 830 + 831 + /* UMP Block information */ 832 + struct snd_ump_block_info { 833 + int card; /* card number */ 834 + int device; /* device number */ 835 + unsigned char block_id; /* block ID (R/W) */ 836 + unsigned char direction; /* UMP direction */ 837 + unsigned char active; /* Activeness */ 838 + unsigned char first_group; /* first group ID */ 839 + unsigned char num_groups; /* number of groups */ 840 + unsigned char midi_ci_version; /* MIDI-CI support version */ 841 + unsigned char sysex8_streams; /* max number of sysex8 streams */ 842 + unsigned char ui_hint; /* user interface hint */ 843 + unsigned int flags; /* various info flags */ 844 + unsigned char name[128]; /* block name string */ 845 + unsigned char reserved[32]; 846 + } __packed; 847 + 787 848 #define SNDRV_RAWMIDI_IOCTL_PVERSION _IOR('W', 0x00, int) 788 849 #define SNDRV_RAWMIDI_IOCTL_INFO _IOR('W', 0x01, struct snd_rawmidi_info) 789 850 #define SNDRV_RAWMIDI_IOCTL_USER_PVERSION _IOW('W', 0x02, int) ··· 857 786 #define SNDRV_RAWMIDI_IOCTL_STATUS _IOWR('W', 0x20, struct snd_rawmidi_status) 858 787 #define SNDRV_RAWMIDI_IOCTL_DROP _IOW('W', 0x30, int) 859 788 #define SNDRV_RAWMIDI_IOCTL_DRAIN _IOW('W', 0x31, int) 789 + /* Additional ioctls for UMP rawmidi devices */ 790 + #define SNDRV_UMP_IOCTL_ENDPOINT_INFO _IOR('W', 0x40, struct snd_ump_endpoint_info) 791 + #define SNDRV_UMP_IOCTL_BLOCK_INFO _IOR('W', 0x41, struct snd_ump_block_info) 860 792 861 793 /* 862 794 * Timer section - /dev/snd/timer ··· 1035 961 * * 1036 962 ****************************************************************************/ 1037 963 1038 - #define SNDRV_CTL_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 8) 964 + #define SNDRV_CTL_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 9) 1039 965 1040 966 struct snd_ctl_card_info { 1041 967 int card; /* card number */ ··· 1196 1122 #define SNDRV_CTL_IOCTL_RAWMIDI_NEXT_DEVICE _IOWR('U', 0x40, int) 1197 1123 #define SNDRV_CTL_IOCTL_RAWMIDI_INFO _IOWR('U', 0x41, struct snd_rawmidi_info) 1198 1124 #define SNDRV_CTL_IOCTL_RAWMIDI_PREFER_SUBDEVICE _IOW('U', 0x42, int) 1125 + #define SNDRV_CTL_IOCTL_UMP_NEXT_DEVICE _IOWR('U', 0x43, int) 1126 + #define SNDRV_CTL_IOCTL_UMP_ENDPOINT_INFO _IOWR('U', 0x44, struct snd_ump_endpoint_info) 1127 + #define SNDRV_CTL_IOCTL_UMP_BLOCK_INFO _IOWR('U', 0x45, struct snd_ump_block_info) 1199 1128 #define SNDRV_CTL_IOCTL_POWER _IOWR('U', 0xd0, int) 1200 1129 #define SNDRV_CTL_IOCTL_POWER_STATE _IOR('U', 0xd1, int) 1201 1130
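The UMP additions expose endpoint protocol capabilities as bit masks. A small decoder for a protocol_caps value using the SNDRV_UMP_EP_INFO_* constants copied from the hunk above; the sample value is made up for illustration:

#include <stdio.h>

#define SNDRV_UMP_EP_INFO_PROTO_MIDI1   0x0100
#define SNDRV_UMP_EP_INFO_PROTO_MIDI2   0x0200
#define SNDRV_UMP_EP_INFO_PROTO_JRTS_TX 0x0001
#define SNDRV_UMP_EP_INFO_PROTO_JRTS_RX 0x0002

int main(void)
{
    unsigned int caps = 0x0303;   /* hypothetical endpoint */

    if (caps & SNDRV_UMP_EP_INFO_PROTO_MIDI1)
        printf("MIDI 1.0 protocol\n");
    if (caps & SNDRV_UMP_EP_INFO_PROTO_MIDI2)
        printf("MIDI 2.0 protocol\n");
    if (caps & SNDRV_UMP_EP_INFO_PROTO_JRTS_TX)
        printf("JR timestamps: transmit\n");
    if (caps & SNDRV_UMP_EP_INFO_PROTO_JRTS_RX)
        printf("JR timestamps: receive\n");
    return 0;
}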
+12 -6
tools/lib/subcmd/help.c
··· 68 68 while (ci < cmds->cnt && ei < excludes->cnt) { 69 69 cmp = strcmp(cmds->names[ci]->name, excludes->names[ei]->name); 70 70 if (cmp < 0) { 71 - zfree(&cmds->names[cj]); 72 - cmds->names[cj++] = cmds->names[ci++]; 71 + if (ci == cj) { 72 + ci++; 73 + cj++; 74 + } else { 75 + zfree(&cmds->names[cj]); 76 + cmds->names[cj++] = cmds->names[ci++]; 77 + } 73 78 } else if (cmp == 0) { 74 79 ci++; 75 80 ei++; ··· 82 77 ei++; 83 78 } 84 79 } 85 - 86 - while (ci < cmds->cnt) { 87 - zfree(&cmds->names[cj]); 88 - cmds->names[cj++] = cmds->names[ci++]; 80 + if (ci != cj) { 81 + while (ci < cmds->cnt) { 82 + zfree(&cmds->names[cj]); 83 + cmds->names[cj++] = cmds->names[ci++]; 84 + } 89 85 } 90 86 for (ci = cj; ci < cmds->cnt; ci++) 91 87 zfree(&cmds->names[ci]);
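The help.c fix avoids freeing a slot that is about to be copied from itself: when the read and write indices coincide, exclude_cmds() now just advances both. A standalone toy of the corrected in-place compaction, simplified relative to the hunk:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *names[] = { strdup("annotate"), strdup("kmem"), strdup("record") };
    int cnt = 3, ci = 0, cj = 0;
    const char *exclude = "kmem";

    while (ci < cnt) {
        if (strcmp(names[ci], exclude) == 0) {
            free(names[ci]);          /* drop the excluded entry */
            ci++;
        } else if (ci == cj) {
            ci++; cj++;               /* the fix: no free, no self-copy */
        } else {
            names[cj++] = names[ci++]; /* slot cj held an already-freed entry */
        }
    }
    for (int i = 0; i < cj; i++) {
        printf("%s\n", names[i]);      /* annotate, record */
        free(names[i]);
    }
    return 0;
}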
+2 -2
tools/perf/Makefile.config
··· 155 155 ifdef CSINCLUDES 156 156 LIBOPENCSD_CFLAGS := -I$(CSINCLUDES) 157 157 endif 158 - OPENCSDLIBS := -lopencsd_c_api 158 + OPENCSDLIBS := -lopencsd_c_api -lopencsd 159 159 ifeq ($(findstring -static,${LDFLAGS}),-static) 160 - OPENCSDLIBS += -lopencsd -lstdc++ 160 + OPENCSDLIBS += -lstdc++ 161 161 endif 162 162 ifdef CSLIBS 163 163 LIBOPENCSD_LDFLAGS := -L$(CSLIBS)
+1
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 365 365 448 n64 process_mrelease sys_process_mrelease 366 366 449 n64 futex_waitv sys_futex_waitv 367 367 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 368 + 451 n64 cachestat sys_cachestat
+1
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 537 537 448 common process_mrelease sys_process_mrelease 538 538 449 common futex_waitv sys_futex_waitv 539 539 450 nospu set_mempolicy_home_node sys_set_mempolicy_home_node 540 + 451 common cachestat sys_cachestat
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 453 453 448 common process_mrelease sys_process_mrelease sys_process_mrelease 454 454 449 common futex_waitv sys_futex_waitv sys_futex_waitv 455 455 450 common set_mempolicy_home_node sys_set_mempolicy_home_node sys_set_mempolicy_home_node 456 + 451 common cachestat sys_cachestat sys_cachestat
+1
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 372 372 448 common process_mrelease sys_process_mrelease 373 373 449 common futex_waitv sys_futex_waitv 374 374 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 375 + 451 common cachestat sys_cachestat 375 376 376 377 # 377 378 # Due to a historical design error, certain syscalls are numbered differently
+2 -1
tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
··· 169 169 }, 170 170 { 171 171 "MetricName": "nps1_die_to_dram", 172 - "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die) (may need --metric-no-group)", 172 + "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die)", 173 173 "MetricExpr": "dram_channel_data_controller_0 + dram_channel_data_controller_1 + dram_channel_data_controller_2 + dram_channel_data_controller_3 + dram_channel_data_controller_4 + dram_channel_data_controller_5 + dram_channel_data_controller_6 + dram_channel_data_controller_7", 174 + "MetricConstraint": "NO_GROUP_EVENTS", 174 175 "MetricGroup": "data_fabric", 175 176 "PerPkg": "1", 176 177 "ScaleUnit": "6.1e-5MiB"
+2 -1
tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
··· 169 169 }, 170 170 { 171 171 "MetricName": "nps1_die_to_dram", 172 - "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die) (may need --metric-no-group)", 172 + "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die)", 173 173 "MetricExpr": "dram_channel_data_controller_0 + dram_channel_data_controller_1 + dram_channel_data_controller_2 + dram_channel_data_controller_3 + dram_channel_data_controller_4 + dram_channel_data_controller_5 + dram_channel_data_controller_6 + dram_channel_data_controller_7", 174 + "MetricConstraint": "NO_GROUP_EVENTS", 174 175 "MetricGroup": "data_fabric", 175 176 "PerPkg": "1", 176 177 "ScaleUnit": "6.1e-5MiB"
+2 -1
tools/perf/pmu-events/arch/x86/amdzen3/recommended.json
··· 205 205 }, 206 206 { 207 207 "MetricName": "nps1_die_to_dram", 208 - "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die) (may need --metric-no-group)", 208 + "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die)", 209 209 "MetricExpr": "dram_channel_data_controller_0 + dram_channel_data_controller_1 + dram_channel_data_controller_2 + dram_channel_data_controller_3 + dram_channel_data_controller_4 + dram_channel_data_controller_5 + dram_channel_data_controller_6 + dram_channel_data_controller_7", 210 210 "MetricGroup": "data_fabric", 211 211 "PerPkg": "1", 212 + "MetricConstraint": "NO_GROUP_EVENTS", 212 213 "ScaleUnit": "6.1e-5MiB" 213 214 } 214 215 ]
+77
tools/perf/tests/shell/test_uprobe_from_different_cu.sh
··· 1 + #!/bin/bash 2 + # test perf probe of function from different CU 3 + # SPDX-License-Identifier: GPL-2.0 4 + 5 + set -e 6 + 7 + temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX) 8 + 9 + cleanup() 10 + { 11 + trap - EXIT TERM INT 12 + if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then 13 + echo "--- Cleaning up ---" 14 + perf probe -x ${temp_dir}/testfile -d foo 15 + rm -f "${temp_dir}/"* 16 + rmdir "${temp_dir}" 17 + fi 18 + } 19 + 20 + trap_cleanup() 21 + { 22 + cleanup 23 + exit 1 24 + } 25 + 26 + trap trap_cleanup EXIT TERM INT 27 + 28 + cat > ${temp_dir}/testfile-foo.h << EOF 29 + struct t 30 + { 31 + int *p; 32 + int c; 33 + }; 34 + 35 + extern int foo (int i, struct t *t); 36 + EOF 37 + 38 + cat > ${temp_dir}/testfile-foo.c << EOF 39 + #include "testfile-foo.h" 40 + 41 + int 42 + foo (int i, struct t *t) 43 + { 44 + int j, res = 0; 45 + for (j = 0; j < i && j < t->c; j++) 46 + res += t->p[j]; 47 + 48 + return res; 49 + } 50 + EOF 51 + 52 + cat > ${temp_dir}/testfile-main.c << EOF 53 + #include "testfile-foo.h" 54 + 55 + static struct t g; 56 + 57 + int 58 + main (int argc, char **argv) 59 + { 60 + int i; 61 + int j[argc]; 62 + g.c = argc; 63 + g.p = j; 64 + for (i = 0; i < argc; i++) 65 + j[i] = (int) argv[i][0]; 66 + return foo (3, &g); 67 + } 68 + EOF 69 + 70 + gcc -g -Og -flto -c ${temp_dir}/testfile-foo.c -o ${temp_dir}/testfile-foo.o 71 + gcc -g -Og -c ${temp_dir}/testfile-main.c -o ${temp_dir}/testfile-main.o 72 + gcc -g -Og -o ${temp_dir}/testfile ${temp_dir}/testfile-foo.o ${temp_dir}/testfile-main.o 73 + 74 + perf probe -x ${temp_dir}/testfile --funcs foo 75 + perf probe -x ${temp_dir}/testfile foo 76 + 77 + cleanup
+2 -2
tools/perf/tests/task-exit.c
··· 58 58 59 59 signal(SIGCHLD, sig_handler); 60 60 61 - evlist = evlist__new_default(); 61 + evlist = evlist__new_dummy(); 62 62 if (evlist == NULL) { 63 - pr_debug("evlist__new_default\n"); 63 + pr_debug("evlist__new_dummy\n"); 64 64 return -1; 65 65 } 66 66
+5
tools/perf/trace/beauty/include/linux/socket.h
··· 177 177 #define SCM_RIGHTS 0x01 /* rw: access rights (array of int) */ 178 178 #define SCM_CREDENTIALS 0x02 /* rw: struct ucred */ 179 179 #define SCM_SECURITY 0x03 /* rw: security label */ 180 + #define SCM_PIDFD 0x04 /* ro: pidfd (int) */ 180 181 181 182 struct ucred { 182 183 __u32 pid; ··· 327 326 */ 328 327 329 328 #define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */ 329 + #define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */ 330 330 #define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */ 331 331 #define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exec for file 332 332 descriptor received through ··· 338 336 #define MSG_CMSG_COMPAT 0 /* We never have 32 bit fixups */ 339 337 #endif 340 338 339 + /* Flags to be cleared on entry by sendmsg and sendmmsg syscalls */ 340 + #define MSG_INTERNAL_SENDMSG_FLAGS \ 341 + (MSG_SPLICE_PAGES | MSG_SENDPAGE_NOPOLICY | MSG_SENDPAGE_DECRYPTED) 341 342 342 343 /* Setsockoptions(2) level. Thanks to BSD these must match IPPROTO_xxx */ 343 344 #define SOL_IP 0
+1 -1
tools/perf/trace/beauty/move_mount_flags.sh
··· 10 10 linux_mount=${linux_header_dir}/mount.h 11 11 12 12 printf "static const char *move_mount_flags[] = {\n" 13 - regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MOVE_MOUNT_([^_]+_[[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*' 13 + regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MOVE_MOUNT_([^_]+[[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*' 14 14 grep -E $regex ${linux_mount} | \ 15 15 sed -r "s/$regex/\2 \1/g" | \ 16 16 xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n"
+8
tools/perf/trace/beauty/msg_flags.c
··· 8 8 #ifndef MSG_WAITFORONE 9 9 #define MSG_WAITFORONE 0x10000 10 10 #endif 11 + #ifndef MSG_BATCH 12 + #define MSG_BATCH 0x40000 13 + #endif 14 + #ifndef MSG_ZEROCOPY 15 + #define MSG_ZEROCOPY 0x4000000 16 + #endif 11 17 #ifndef MSG_SPLICE_PAGES 12 18 #define MSG_SPLICE_PAGES 0x8000000 13 19 #endif ··· 56 50 P_MSG_FLAG(NOSIGNAL); 57 51 P_MSG_FLAG(MORE); 58 52 P_MSG_FLAG(WAITFORONE); 53 + P_MSG_FLAG(BATCH); 54 + P_MSG_FLAG(ZEROCOPY); 59 55 P_MSG_FLAG(SPLICE_PAGES); 60 56 P_MSG_FLAG(FASTOPEN); 61 57 P_MSG_FLAG(CMSG_CLOEXEC);
+3 -1
tools/perf/util/dwarf-aux.c
··· 478 478 { 479 479 Dwarf_Die cu_die; 480 480 Dwarf_Files *files; 481 + Dwarf_Attribute attr_mem; 481 482 482 - if (idx < 0 || !dwarf_diecu(dw_die, &cu_die, NULL, NULL) || 483 + if (idx < 0 || !dwarf_attr_integrate(dw_die, DW_AT_decl_file, &attr_mem) || 484 + !dwarf_cu_die(attr_mem.cu, &cu_die, NULL, NULL, NULL, NULL, NULL, NULL) || 483 485 dwarf_getsrcfiles(&cu_die, &files, NULL) != 0) 484 486 return NULL; 485 487
+8
tools/perf/util/parse-events.c
··· 1216 1216 if (term->type_term == PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE) { 1217 1217 const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type); 1218 1218 1219 + if (!pmu) { 1220 + char *err_str; 1221 + 1222 + if (asprintf(&err_str, "Failed to find PMU for type %d", attr->type) >= 0) 1223 + parse_events_error__handle(err, term->err_term, 1224 + err_str, /*help=*/NULL); 1225 + return -EINVAL; 1226 + } 1219 1227 if (perf_pmu__supports_legacy_cache(pmu)) { 1220 1228 attr->type = PERF_TYPE_HW_CACHE; 1221 1229 return parse_events__decode_legacy_cache(term->config, pmu->type,
+3 -3
tools/testing/radix-tree/maple.c
··· 206 206 e = i - 1; 207 207 } else { 208 208 if (i >= 4) 209 - e = i - 4; 210 - else if (i == 3) 211 - e = i - 2; 209 + e = i - 3; 210 + else if (i >= 1) 211 + e = i - 1; 212 212 else 213 213 e = 0; 214 214 }
+1
tools/testing/selftests/alsa/.gitignore
··· 1 1 mixer-test 2 2 pcm-test 3 + test-pcmtest-driver
+1 -3
tools/testing/selftests/alsa/test-pcmtest-driver.c
··· 47 47 48 48 sprintf(pf, "/sys/kernel/debug/pcmtest/fill_pattern%d", i); 49 49 fp = fopen(pf, "r"); 50 - if (!fp) { 51 - fclose(fpl); 50 + if (!fp) 52 51 return -1; 53 - } 54 52 fread(patterns[i].buf, 1, patterns[i].len, fp); 55 53 fclose(fp); 56 54 }
+1 -1
tools/testing/selftests/arm64/Makefile
··· 42 42 done 43 43 44 44 # Avoid any output on non arm64 on emit_tests 45 - emit_tests: all 45 + emit_tests: 46 46 @for DIR in $(ARM64_SUBTARGETS); do \ 47 47 BUILD_TARGET=$(OUTPUT)/$$DIR; \ 48 48 make OUTPUT=$$BUILD_TARGET -C $$DIR $@; \
+23 -2
tools/testing/selftests/bpf/progs/async_stack_depth.c
··· 22 22 return buf[69]; 23 23 } 24 24 25 + __attribute__((noinline)) 26 + static int bad_timer_cb(void *map, int *key, struct bpf_timer *timer) 27 + { 28 + volatile char buf[300] = {}; 29 + return buf[255] + timer_cb(NULL, NULL, NULL); 30 + } 31 + 25 32 SEC("tc") 26 - __failure __msg("combined stack size of 2 calls") 27 - int prog(struct __sk_buff *ctx) 33 + __failure __msg("combined stack size of 2 calls is 576. Too large") 34 + int pseudo_call_check(struct __sk_buff *ctx) 28 35 { 29 36 struct hmap_elem *elem; 30 37 volatile char buf[256] = {}; ··· 42 35 43 36 timer_cb(NULL, NULL, NULL); 44 37 return bpf_timer_set_callback(&elem->timer, timer_cb) + buf[0]; 38 + } 39 + 40 + SEC("tc") 41 + __failure __msg("combined stack size of 2 calls is 608. Too large") 42 + int async_call_root_check(struct __sk_buff *ctx) 43 + { 44 + struct hmap_elem *elem; 45 + volatile char buf[256] = {}; 46 + 47 + elem = bpf_map_lookup_elem(&hmap, &(int){0}); 48 + if (!elem) 49 + return 0; 50 + 51 + return bpf_timer_set_callback(&elem->timer, bad_timer_cb) + buf[0]; 45 52 } 46 53 47 54 char _license[] SEC("license") = "GPL";
+2 -2
tools/testing/selftests/mincore/mincore_selftest.c
··· 150 150 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, 151 151 -1, 0); 152 152 if (addr == MAP_FAILED) { 153 - if (errno == ENOMEM) 154 - SKIP(return, "No huge pages available."); 153 + if (errno == ENOMEM || errno == EINVAL) 154 + SKIP(return, "No huge pages available or CONFIG_HUGETLB_PAGE disabled."); 155 155 else 156 156 TH_LOG("mmap error: %s", strerror(errno)); 157 157 }
tools/testing/selftests/mm/charge_reserved_hugetlb.sh
tools/testing/selftests/mm/check_config.sh
tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+1 -1
tools/testing/selftests/mm/mkdirty.c
··· 321 321 munmap: 322 322 munmap(dst, pagesize); 323 323 free(src); 324 - #endif /* __NR_userfaultfd */ 325 324 } 325 + #endif /* __NR_userfaultfd */ 326 326 327 327 int main(void) 328 328 {
tools/testing/selftests/mm/run_vmtests.sh
tools/testing/selftests/mm/test_hmm.sh
tools/testing/selftests/mm/test_vmalloc.sh
tools/testing/selftests/mm/va_high_addr_switch.sh
tools/testing/selftests/mm/write_hugetlb_memory.sh
+1 -1
tools/testing/selftests/riscv/Makefile
··· 43 43 done 44 44 45 45 # Avoid any output on non riscv on emit_tests 46 - emit_tests: all 46 + emit_tests: 47 47 @for DIR in $(RISCV_SUBTARGETS); do \ 48 48 BUILD_TARGET=$(OUTPUT)/$$DIR; \ 49 49 $(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@; \
+2
tools/testing/selftests/tc-testing/config
··· 5 5 CONFIG_NF_CONNTRACK_MARK=y 6 6 CONFIG_NF_CONNTRACK_ZONES=y 7 7 CONFIG_NF_CONNTRACK_LABELS=y 8 + CONFIG_NF_CONNTRACK_PROCFS=y 9 + CONFIG_NF_FLOW_TABLE=m 8 10 CONFIG_NF_NAT=m 9 11 CONFIG_NETFILTER_XT_TARGET_LOG=m 10 12
+1
tools/testing/selftests/tc-testing/settings
··· 1 + timeout=900
+1 -2
tools/testing/selftests/timers/raw_skew.c
··· 129 129 printf("%lld.%i(est)", eppm/1000, abs((int)(eppm%1000))); 130 130 131 131 /* Avg the two actual freq samples adjtimex gave us */ 132 - ppm = (tx1.freq + tx2.freq) * 1000 / 2; 133 - ppm = (long long)tx1.freq * 1000; 132 + ppm = (long long)(tx1.freq + tx2.freq) * 1000 / 2; 134 133 ppm = shift_right(ppm, 16); 135 134 printf(" %lld.%i(act)", ppm/1000, abs((int)(ppm%1000))); 136 135