Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch arm64/for-next/cpufeature into kvmarm-master/next

Merge arm64/for-next/cpufeature in order to resolve conflicts resulting
from the removal of CONFIG_ARM64_PAN.

* arm64/for-next/cpufeature:
arm64: Add support for FEAT_{LS64, LS64_V}
KVM: arm64: Enable FEAT_{LS64, LS64_V} in the supported guest
arm64: Provide basic EL2 setup for FEAT_{LS64, LS64_V} usage at EL0/1
KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory
KVM: arm64: Add documentation for KVM_EXIT_ARM_LDST64B
KVM: arm64: Add exit to userspace on {LD,ST}64B* outside of memslots
arm64: Unconditionally enable PAN support
arm64: Unconditionally enable LSE support
arm64: Add support for TSV110 Spectre-BHB mitigation

Signed-off-by: Marc Zyngier <maz@kernel.org>

+193 -105
+12
Documentation/arch/arm64/booting.rst
···
 
   - MDCR_EL3.TPM (bit 6) must be initialized to 0b0
 
+  For CPUs with support for 64-byte loads and stores without status (FEAT_LS64):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnALS (bit 1) must be initialised to 0b1.
+
+  For CPUs with support for 64-byte stores with status (FEAT_LS64_V):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnASR (bit 2) must be initialised to 0b1.
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs. All CPUs must
 enter the kernel in the same exception level. Where the values documented
+7
Documentation/arch/arm64/elf_hwcaps.rst
···
 HWCAP3_LSFE
     Functionality implied by ID_AA64ISAR3_EL1.LSFE == 0b0001
 
+HWCAP3_LS64
+    Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0001. Note that
+    ld64b/st64b must be supported by the CPU, the system and the target
+    (device) memory location; HWCAP3_LS64 only indicates CPU support.
+    Userspace should only use ld64b/st64b on supported target (device)
+    memory locations, falling back to the non-atomic alternatives otherwise.
+
 
 4. Unused AT_HWCAP bits
 -----------------------
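A userspace consumer would probe this bit before emitting ld64b/st64b. A minimal sketch in C, assuming a libc whose getauxval() understands the AT_HWCAP3 auxiliary vector entry (value 29 in include/uapi/linux/auxvec.h); the fallback defines mirror the uapi header touched by this series:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef AT_HWCAP3
#define AT_HWCAP3	29		/* include/uapi/linux/auxvec.h */
#endif
#ifndef HWCAP3_LS64
#define HWCAP3_LS64	(1UL << 3)	/* arch/arm64/include/uapi/asm/hwcap.h */
#endif

int main(void)
{
	/*
	 * HWCAP3_LS64 only advertises CPU support; the target memory
	 * location must still support 64-byte single-copy-atomic accesses.
	 */
	if (getauxval(AT_HWCAP3) & HWCAP3_LS64)
		printf("ls64: CPU supports ld64b/st64b\n");
	else
		printf("ls64: not supported, use non-atomic fallback\n");
	return 0;
}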
+36 -7
Documentation/virt/kvm/api.rst
···
 information or because there is no device mapped at the accessed IPA, then
 userspace can ask the kernel to inject an external abort using the address
 from the exiting fault on the VCPU. It is a programming error to set
-ext_dabt_pending after an exit which was not either KVM_EXIT_MMIO or
-KVM_EXIT_ARM_NISV. This feature is only available if the system supports
-KVM_CAP_ARM_INJECT_EXT_DABT. This is a helper which provides commonality in
-how userspace reports accesses for the above cases to guests, across different
-userspace implementations. Nevertheless, userspace can still emulate all Arm
-exceptions by manipulating individual registers using the KVM_SET_ONE_REG API.
+ext_dabt_pending after an exit which was not either KVM_EXIT_MMIO,
+KVM_EXIT_ARM_NISV, or KVM_EXIT_ARM_LDST64B. This feature is only available if
+the system supports KVM_CAP_ARM_INJECT_EXT_DABT. This is a helper which
+provides commonality in how userspace reports accesses for the above cases to
+guests, across different userspace implementations. Nevertheless, userspace
+can still emulate all Arm exceptions by manipulating individual registers
+using the KVM_SET_ONE_REG API.
 
 See KVM_GET_VCPU_EVENTS for the data structure.
 
···
 
 ::
 
-		/* KVM_EXIT_ARM_NISV */
+		/* KVM_EXIT_ARM_NISV / KVM_EXIT_ARM_LDST64B */
 		struct {
 			__u64 esr_iss;
 			__u64 fault_ipa;
 		} arm_nisv;
+
+- KVM_EXIT_ARM_NISV:
 
 Used on arm64 systems. If a guest accesses memory not in a memslot,
 KVM will typically return to userspace and ask it to do MMIO emulation on its
···
 Note that although KVM_CAP_ARM_NISV_TO_USER will be reported if
 queried outside of a protected VM context, the feature will not be
 exposed if queried on a protected VM file descriptor.
+
+- KVM_EXIT_ARM_LDST64B:
+
+Used on arm64 systems. When a guest uses a LD64B, ST64B, ST64BV, or ST64BV0
+instruction outside of a memslot, KVM will return to userspace with
+KVM_EXIT_ARM_LDST64B, exposing the relevant ESR_EL2 information and the
+faulting IPA, similarly to KVM_EXIT_ARM_NISV.
+
+Userspace is supposed to fully emulate the instruction, which includes:
+
+- fetching the operands for a store, including ACCDATA_EL1 in the case
+  of a ST64BV0 instruction
+- dealing with endianness if the guest is big-endian
+- emulating the access, including the delivery of an exception if the
+  access didn't succeed
+- providing a return value in the case of ST64BV/ST64BV0
+- returning the data in the case of a load
+- incrementing the PC if the instruction was successfully executed
+
+Note that there is no expectation of performance for this emulation, as it
+involves a large number of interactions with the guest state. It is, however,
+expected that the instruction's semantics are preserved, especially the
+single-copy atomicity property of the 64-byte access.
+
+This exit reason must be handled if userspace sets ID_AA64ISAR1_EL1.LS64 to a
+non-zero value, indicating that FEAT_LS64* is enabled.
 
 ::
 
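To make the contract concrete, here is a minimal sketch of the userspace side, where emulate_ldst64b() is a hypothetical helper that would implement the emulation steps listed above:

#include <linux/kvm.h>
#include <stdio.h>

/*
 * Hypothetical helper: decodes the instruction at the guest PC and
 * performs the emulation steps listed above.
 */
int emulate_ldst64b(int vcpu_fd, __u64 esr_iss, __u64 fault_ipa);

int handle_exit(int vcpu_fd, struct kvm_run *run)
{
	switch (run->exit_reason) {
	case KVM_EXIT_ARM_LDST64B:
		/* Same payload layout as KVM_EXIT_ARM_NISV: syndrome + IPA */
		return emulate_ldst64b(vcpu_fd, run->arm_nisv.esr_iss,
				       run->arm_nisv.fault_ipa);
	default:
		fprintf(stderr, "unhandled exit %u\n", run->exit_reason);
		return -1;
	}
}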
-33
arch/arm64/Kconfig
···
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	depends on !KCSAN
-	select ARM64_PAN
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved
···
 	  Kernels built with this configuration option enabled continue
 	  to work on pre-ARMv8.1 hardware and the performance impact is
 	  minimal. If unsure, say Y.
-
-config ARM64_PAN
-	bool "Enable support for Privileged Access Never (PAN)"
-	default y
-	help
-	  Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
-	  prevents the kernel or hypervisor from accessing user-space (EL0)
-	  memory directly.
-
-	  Choosing this option will cause any unprotected (not using
-	  copy_to_user et al) memory access to fail with a permission fault.
-
-	  The feature is detected at runtime, and will remain as a 'nop'
-	  instruction if the cpu does not implement the feature.
-
-config ARM64_LSE_ATOMICS
-	bool
-	default ARM64_USE_LSE_ATOMICS
-
-config ARM64_USE_LSE_ATOMICS
-	bool "Atomic instructions"
-	default y
-	help
-	  As part of the Large System Extensions, ARMv8.1 introduces new
-	  atomic instructions that are designed specifically to scale in
-	  very large systems.
-
-	  Say Y here to make use of these instructions for the in-kernel
-	  atomic routines. This incurs a small overhead on CPUs that do
-	  not support these instructions.
 
 endmenu # "ARMv8.1 architectural features"
 
···
 	depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
 	depends on AS_HAS_ARMV8_5
 	# Required for tag checking in the uaccess routines
-	select ARM64_PAN
 	select ARCH_HAS_SUBPAGE_FAULTS
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select ARCH_USES_PG_ARCH_2
···
 config ARM64_EPAN
 	bool "Enable support for Enhanced Privileged Access Never (EPAN)"
 	default y
-	depends on ARM64_PAN
 	help
 	  Enhanced Privileged Access Never (EPAN) allows Privileged
 	  Access Never to be used with Execute-only mappings.
-2
arch/arm64/include/asm/cpucaps.h
···
 		    "cap must be < ARM64_NCAPS");
 
 	switch (cap) {
-	case ARM64_HAS_PAN:
-		return IS_ENABLED(CONFIG_ARM64_PAN);
 	case ARM64_HAS_EPAN:
 		return IS_ENABLED(CONFIG_ARM64_EPAN);
 	case ARM64_SVE:
+11 -1
arch/arm64/include/asm/el2_setup.h
···
 	/* Enable GCS if supported */
 	mrs_s	x1, SYS_ID_AA64PFR1_EL1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
-	cbz	x1, .Lset_hcrx_\@
+	cbz	x1, .Lskip_gcs_hcrx_\@
 	orr	x0, x0, #HCRX_EL2_GCSEn
+
+.Lskip_gcs_hcrx_\@:
+	/* Enable LS64, LS64_V if supported */
+	mrs_s	x1, SYS_ID_AA64ISAR1_EL1
+	ubfx	x1, x1, #ID_AA64ISAR1_EL1_LS64_SHIFT, #4
+	cbz	x1, .Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnALS
+	cmp	x1, #ID_AA64ISAR1_EL1_LS64_LS64_V
+	b.lt	.Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnASR
 
 .Lset_hcrx_\@:
 	msr_s	SYS_HCRX_EL2, x0
+8
arch/arm64/include/asm/esr.h
···
 #define ESR_ELx_FSC_SEA_TTW(n)	(0x14 + (n))
 #define ESR_ELx_FSC_SECC	(0x18)
 #define ESR_ELx_FSC_SECC_TTW(n)	(0x1c + (n))
+#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)
 #define ESR_ELx_FSC_ADDRSZ	(0x00)
 
 /*
···
 	       (esr == ESR_ELx_FSC_ACCESS_L(2)) ||
 	       (esr == ESR_ELx_FSC_ACCESS_L(1)) ||
 	       (esr == ESR_ELx_FSC_ACCESS_L(0));
+}
+
+static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
+{
+	esr = esr & ESR_ELx_FSC;
+
+	return esr == ESR_ELx_FSC_EXCL_ATOMIC;
 }
 
 static inline bool esr_fsc_is_addr_sz_fault(unsigned long esr)
+1
arch/arm64/include/asm/hwcap.h
···
 #define KERNEL_HWCAP_MTE_FAR		__khwcap3_feature(MTE_FAR)
 #define KERNEL_HWCAP_MTE_STORE_ONLY	__khwcap3_feature(MTE_STORE_ONLY)
 #define KERNEL_HWCAP_LSFE		__khwcap3_feature(LSFE)
+#define KERNEL_HWCAP_LS64		__khwcap3_feature(LS64)
 
 /*
  * This yields a mask that user programs can use to figure out what
-23
arch/arm64/include/asm/insn.h
···
 				   enum aarch64_insn_register Rn,
 				   enum aarch64_insn_register Rd,
 				   u8 lsb);
-#ifdef CONFIG_ARM64_LSE_ATOMICS
 u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
 				  enum aarch64_insn_register address,
 				  enum aarch64_insn_register value,
···
 			 enum aarch64_insn_register value,
 			 enum aarch64_insn_size_type size,
 			 enum aarch64_insn_mem_order_type order);
-#else
-static inline
-u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
-				  enum aarch64_insn_register address,
-				  enum aarch64_insn_register value,
-				  enum aarch64_insn_size_type size,
-				  enum aarch64_insn_mem_atomic_op op,
-				  enum aarch64_insn_mem_order_type order)
-{
-	return AARCH64_BREAK_FAULT;
-}
-
-static inline
-u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
-			 enum aarch64_insn_register address,
-			 enum aarch64_insn_register value,
-			 enum aarch64_insn_size_type size,
-			 enum aarch64_insn_mem_order_type order)
-{
-	return AARCH64_BREAK_FAULT;
-}
-#endif
 u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
 u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type);
 u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+7
arch/arm64/include/asm/kvm_emulate.h
···
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 int kvm_inject_serror_esr(struct kvm_vcpu *vcpu, u64 esr);
 int kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr);
+int kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, u64 addr);
 void kvm_inject_size_fault(struct kvm_vcpu *vcpu);
 
 static inline int kvm_inject_sea_dabt(struct kvm_vcpu *vcpu, u64 addr)
···
 
 		if (kvm_has_sctlr2(kvm))
 			vcpu->arch.hcrx_el2 |= HCRX_EL2_SCTLR2En;
+
+		if (kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
+			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnALS;
+
+		if (kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
+			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnASR;
 	}
 }
 #endif /* __ARM64_KVM_EMULATE_H__ */
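The HCRX_EL2 bits above are driven by the guest's ID_AA64ISAR1_EL1.LS64 field, which userspace configures. A sketch of how a VMM might opt a guest into FEAT_LS64 via KVM_SET_ONE_REG, under the assumption that the host allows this ID register field to be written (the register encoding and field position below come from the Arm ARM; treat the snippet as illustrative):

#include <linux/kvm.h>
#include <asm/kvm.h>
#include <sys/ioctl.h>

/* ID_AA64ISAR1_EL1 is S3_0_C0_C6_1; LS64 is bits [63:60]. */
#define ID_AA64ISAR1_SYSREG	ARM64_SYS_REG(3, 0, 0, 6, 1)
#define ID_AA64ISAR1_LS64_MASK	(0xfULL << 60)

static int enable_guest_ls64(int vcpu_fd)
{
	__u64 val;
	struct kvm_one_reg reg = {
		.id	= ID_AA64ISAR1_SYSREG,
		.addr	= (__u64)(unsigned long)&val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	/* LS64 = 0b0001: LD64B/ST64B only, no LS64_V */
	val = (val & ~ID_AA64ISAR1_LS64_MASK) | (1ULL << 60);

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}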
-9
arch/arm64/include/asm/lse.h
···
 
 #include <asm/atomic_ll_sc.h>
 
-#ifdef CONFIG_ARM64_LSE_ATOMICS
-
 #define __LSE_PREAMBLE	".arch_extension lse\n"
 
 #include <linux/compiler_types.h>
···
 #define ARM64_LSE_ATOMIC_INSN(llsc, lse)				\
 	ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
 
-#else	/* CONFIG_ARM64_LSE_ATOMICS */
-
-#define __lse_ll_sc_body(op, ...)	__ll_sc_##op(__VA_ARGS__)
-
-#define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
-
-#endif	/* CONFIG_ARM64_LSE_ATOMICS */
 #endif	/* __ASM_LSE_H */
+2 -4
arch/arm64/include/asm/uaccess.h
···
 
 static inline void __uaccess_disable_hw_pan(void)
 {
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
-			CONFIG_ARM64_PAN));
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN));
 }
 
 static inline void __uaccess_enable_hw_pan(void)
 {
-	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
-			CONFIG_ARM64_PAN));
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
 }
 
 static inline void uaccess_disable_privileged(void)
+1
arch/arm64/include/uapi/asm/hwcap.h
···
 #define HWCAP3_MTE_FAR			(1UL << 0)
 #define HWCAP3_MTE_STORE_ONLY		(1UL << 1)
 #define HWCAP3_LSFE			(1UL << 2)
+#define HWCAP3_LS64			(1UL << 3)
 
 #endif /* _UAPI__ASM_HWCAP_H */
+28 -6
arch/arm64/kernel/cpufeature.c
···
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_LS64_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_XS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0),
···
 	return cpu_supports_bbml2_noabort();
 }
 
-#ifdef CONFIG_ARM64_PAN
 static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
···
 	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPAN, 0);
 	set_pstate_pan(1);
 }
-#endif /* CONFIG_ARM64_PAN */
 
 #ifdef CONFIG_ARM64_RAS_EXTN
 static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
···
 	sysreg_clear_set(tcr_el1, 0, TCR_EL1_E0PD1);
 }
 #endif /* CONFIG_ARM64_E0PD */
+
+static void cpu_enable_ls64(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_EnALS, SCTLR_EL1_EnALS);
+}
+
+static void cpu_enable_ls64_v(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_EnASR, SCTLR_EL1_EnASR);
+}
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
···
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, ECV, CNTPOFF)
 	},
-#ifdef CONFIG_ARM64_PAN
 	{
 		.desc = "Privileged Access Never",
 		.capability = ARM64_HAS_PAN,
···
 		.cpu_enable = cpu_enable_pan,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, IMP)
 	},
-#endif /* CONFIG_ARM64_PAN */
 #ifdef CONFIG_ARM64_EPAN
 	{
 		.desc = "Enhanced Privileged Access Never",
···
 		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, PAN3)
 	},
 #endif /* CONFIG_ARM64_EPAN */
-#ifdef CONFIG_ARM64_LSE_ATOMICS
 	{
 		.desc = "LSE atomic instructions",
 		.capability = ARM64_HAS_LSE_ATOMICS,
···
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, ATOMIC, IMP)
 	},
-#endif /* CONFIG_ARM64_LSE_ATOMICS */
 	{
 		.desc = "Virtualization Host Extensions",
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
···
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, XNX, IMP)
 	},
+	{
+		.desc = "LS64",
+		.capability = ARM64_HAS_LS64,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.cpu_enable = cpu_enable_ls64,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LS64, LS64)
+	},
+	{
+		.desc = "LS64_V",
+		.capability = ARM64_HAS_LS64_V,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.cpu_enable = cpu_enable_ls64_v,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LS64, LS64_V)
+	},
 	{},
 };
 
···
 	HWCAP_CAP(ID_AA64ISAR1_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_EBF16),
 	HWCAP_CAP(ID_AA64ISAR1_EL1, DGH, IMP, CAP_HWCAP, KERNEL_HWCAP_DGH),
 	HWCAP_CAP(ID_AA64ISAR1_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_I8MM),
+	HWCAP_CAP(ID_AA64ISAR1_EL1, LS64, LS64, CAP_HWCAP, KERNEL_HWCAP_LS64),
 	HWCAP_CAP(ID_AA64ISAR2_EL1, LUT, IMP, CAP_HWCAP, KERNEL_HWCAP_LUT),
 	HWCAP_CAP(ID_AA64ISAR3_EL1, FAMINMAX, IMP, CAP_HWCAP, KERNEL_HWCAP_FAMINMAX),
 	HWCAP_CAP(ID_AA64ISAR3_EL1, LSFE, IMP, CAP_HWCAP, KERNEL_HWCAP_LSFE),
+1
arch/arm64/kernel/cpuinfo.c
···
 	[KERNEL_HWCAP_PACA]		= "paca",
 	[KERNEL_HWCAP_PACG]		= "pacg",
 	[KERNEL_HWCAP_GCS]		= "gcs",
+	[KERNEL_HWCAP_LS64]		= "ls64",
 	[KERNEL_HWCAP_DCPODP]		= "dcpodp",
 	[KERNEL_HWCAP_SVE2]		= "sve2",
 	[KERNEL_HWCAP_SVEAES]		= "sveaes",
+1
arch/arm64/kernel/proton-pack.c
···
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+	MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
 	{},
 };
 static const struct midr_range spectre_bhb_k24_list[] = {
-7
arch/arm64/kvm/at.c
···
 	}
 }
 
-#ifdef CONFIG_ARM64_LSE_ATOMICS
 static int __lse_swap_desc(u64 __user *ptep, u64 old, u64 new)
 {
 	u64 tmp = old;
···
 
 	return ret;
 }
-#else
-static int __lse_swap_desc(u64 __user *ptep, u64 old, u64 new)
-{
-	return -EINVAL;
-}
-#endif
 
 static int __llsc_swap_desc(u64 __user *ptep, u64 old, u64 new)
 {
+1 -1
arch/arm64/kvm/hyp/entry.S
···
 
 	add	x1, x1, #VCPU_CONTEXT
 
-	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
+	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN)
 
 	// Store the guest regs x2 and x3
 	stp	x2, x3,   [x1, #CPU_XREG_OFFSET(2)]
+34
arch/arm64/kvm/inject_fault.c
···
 	return 1;
 }
 
+static int kvm_inject_nested_excl_atomic(struct kvm_vcpu *vcpu, u64 addr)
+{
+	u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_DABT_LOW) |
+		  FIELD_PREP(ESR_ELx_FSC, ESR_ELx_FSC_EXCL_ATOMIC) |
+		  ESR_ELx_IL;
+
+	vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
+	return kvm_inject_nested_sync(vcpu, esr);
+}
+
+/**
+ * kvm_inject_dabt_excl_atomic - inject a data abort for unsupported exclusive
+ * or atomic access
+ * @vcpu: The VCPU to receive the data abort
+ * @addr: The address to report in the DFAR
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+int kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, u64 addr)
+{
+	u64 esr;
+
+	if (is_nested_ctxt(vcpu) && (vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_VM))
+		return kvm_inject_nested_excl_atomic(vcpu, addr);
+
+	__kvm_inject_sea(vcpu, false, addr);
+	esr = vcpu_read_sys_reg(vcpu, exception_esr_elx(vcpu));
+	esr &= ~ESR_ELx_FSC;
+	esr |= ESR_ELx_FSC_EXCL_ATOMIC;
+	vcpu_write_sys_reg(vcpu, esr, exception_esr_elx(vcpu));
+	return 1;
+}
+
 void kvm_inject_size_fault(struct kvm_vcpu *vcpu)
 {
 	unsigned long addr, esr;
+26 -1
arch/arm64/kvm/mmio.c
···
 	bool is_write;
 	int len;
 	u8 data_buf[8];
+	u64 esr;
+
+	esr = kvm_vcpu_get_esr(vcpu);
 
 	/*
 	 * No valid syndrome? Ask userspace for help if it has
···
 	 * though, so directly deliver an exception to the guest.
 	 */
 	if (!kvm_vcpu_dabt_isvalid(vcpu)) {
-		trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
+		trace_kvm_mmio_nisv(*vcpu_pc(vcpu), esr,
 				    kvm_vcpu_get_hfar(vcpu), fault_ipa);
 
 		if (vcpu_is_protected(vcpu))
···
 		}
 
 		return -ENOSYS;
+	}
+
+	/*
+	 * When (DFSC == 0b00xxxx || DFSC == 0b10101x) && DFSC != 0b0000xx
+	 * ESR_EL2[12:11] describe the Load/Store Type. This allows us to
+	 * punt the LD64B/ST64B/ST64BV/ST64BV0 instructions to userspace,
+	 * which will have to provide a full emulation of these 4
+	 * instructions. No, we don't expect this to be fast.
+	 *
+	 * We rely on traps being set if the corresponding features are not
+	 * enabled, so if we get here, userspace has promised us to handle
+	 * it already.
+	 */
+	switch (kvm_vcpu_trap_get_fault(vcpu)) {
+	case 0b000100 ... 0b001111:
+	case 0b101010 ... 0b101011:
+		if (FIELD_GET(GENMASK(12, 11), esr)) {
+			run->exit_reason = KVM_EXIT_ARM_LDST64B;
+			run->arm_nisv.esr_iss = esr & ~(u64)ESR_ELx_FSC;
+			run->arm_nisv.fault_ipa = fault_ipa;
+			return 0;
+		}
 	}
 
 	/*
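For reference, a hypothetical userspace decode of the Load/Store Type reported in esr_iss, assuming the LST encoding from the Arm ARM's Data Abort ISS (0b01 = ST64BV, 0b10 = LD64B/ST64B, 0b11 = ST64BV0); verify against the architecture manual before relying on it:

#include <stdint.h>

/* Possible LS64 instruction classes reported via ISS.LST. */
enum ls64_insn { LS64_NONE, LS64_ST64BV, LS64_LD64B_ST64B, LS64_ST64BV0 };

static enum ls64_insn decode_lst(uint64_t esr_iss)
{
	switch ((esr_iss >> 11) & 0x3) {	/* ESR_EL2[12:11] */
	case 1:  return LS64_ST64BV;
	case 2:  return LS64_LD64B_ST64B;
	case 3:  return LS64_ST64BV0;
	default: return LS64_NONE;
	}
}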
+13 -1
arch/arm64/kvm/mmu.c
···
 		return ret;
 	}
 
+	/*
+	 * The guest performed an atomic/exclusive operation on memory with
+	 * unsupported attributes (e.g. ld64b/st64b on normal memory when
+	 * FEAT_LS64WB is not implemented) and triggered this exception.
+	 * Since the memslot is valid, inject the fault back into the guest.
+	 */
+	if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
+		kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
+		return 1;
+	}
+
 	if (nested)
 		adjust_nested_fault_perms(nested, &prot, &writable);
 
···
 	/* Check the stage-2 fault is trans. fault or write fault */
 	if (!esr_fsc_is_translation_fault(esr) &&
 	    !esr_fsc_is_permission_fault(esr) &&
-	    !esr_fsc_is_access_flag_fault(esr)) {
+	    !esr_fsc_is_access_flag_fault(esr) &&
+	    !esr_fsc_is_excl_atomic_fault(esr)) {
 		kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
 			kvm_vcpu_trap_get_class(vcpu),
 			(unsigned long)kvm_vcpu_trap_get_fault(vcpu),
-2
arch/arm64/lib/insn.c
···
 							     state);
 }
 
-#ifdef CONFIG_ARM64_LSE_ATOMICS
 static u32 aarch64_insn_encode_ldst_order(enum aarch64_insn_mem_order_type type,
 					  u32 insn)
 {
···
 	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
 					    value);
 }
-#endif
 
 u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 				 enum aarch64_insn_register src,
-7
arch/arm64/net/bpf_jit_comp.c
···
 	return 0;
 }
 
-#ifdef CONFIG_ARM64_LSE_ATOMICS
 static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
 {
 	const u8 code = insn->code;
···
 
 	return 0;
 }
-#else
-static inline int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
-{
-	return -EINVAL;
-}
-#endif
 
 static int emit_ll_sc_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
 {
+2
arch/arm64/tools/cpucaps
···
 HAS_LDAPR
 HAS_LPA2
 HAS_LSE_ATOMICS
+HAS_LS64
+HAS_LS64_V
 HAS_MOPS
 HAS_NESTED_VIRT
 HAS_BBML2_NOABORT
+2 -1
include/uapi/linux/kvm.h
···
 #define KVM_EXIT_MEMORY_FAULT	39
 #define KVM_EXIT_TDX		40
 #define KVM_EXIT_ARM_SEA	41
+#define KVM_EXIT_ARM_LDST64B	42
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
···
 		} eoi;
 		/* KVM_EXIT_HYPERV */
 		struct kvm_hyperv_exit hyperv;
-		/* KVM_EXIT_ARM_NISV */
+		/* KVM_EXIT_ARM_NISV / KVM_EXIT_ARM_LDST64B */
 		struct {
 			__u64 esr_iss;
 			__u64 fault_ipa;