Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: arm64: nv: Avoid NV stage-2 code when NV is not supported

The NV stage-2 manipulation functions kvm_nested_s2_unmap(),
kvm_nested_s2_wp(), and others are called for any stage-2
manipulation, regardless of whether nested virtualization is
supported or enabled for the VM.

For protected KVM (pKVM), `struct kvm_pgtable` uses the
`pkvm_mappings` member of the union. This member aliases `ia_bits`,
which is used by the non-protected NV code paths. Attempting to
read `pgt->ia_bits` in these functions results in treating
protected mapping pointers or state values as bit-shift amounts.
This triggers a UBSAN shift-out-of-bounds error:

UBSAN: shift-out-of-bounds in arch/arm64/kvm/nested.c:1127:34
shift exponent 174565952 is too large for 64-bit type 'unsigned long'
Call trace:
__ubsan_handle_shift_out_of_bounds+0x28c/0x2c0
kvm_nested_s2_unmap+0x228/0x248
kvm_arch_flush_shadow_memslot+0x98/0xc0
kvm_set_memslot+0x248/0xce0

Since pKVM and NV are mutually exclusive, prevent entry into these
NV handling functions if the VM has not allocated any nested MMUs
(i.e., `kvm->arch.nested_mmus_size` is 0).

Fixes: 7270cc9157f47 ("KVM: arm64: nv: Handle VNCR_EL2 invalidation from MMU notifiers")
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
Link: https://patch.msgid.link/20260202152310.113467-1-tabba@google.com
Signed-off-by: Marc Zyngier <maz@kernel.org>

authored by Fuad Tabba, committed by Marc Zyngier
0c4762e2 82a32eac

+12
arch/arm64/kvm/nested.c
···
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
···
 	int i;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	if (!kvm->arch.nested_mmus_size)
+		return;
 
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
···
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
···
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
 	int i;
+
+	if (!kvm->arch.nested_mmus_size)
+		return;
 
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];