Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: arm64: Fix ID register initialization for non-protected pKVM guests

In protected mode, the hypervisor maintains a separate instance of
the `kvm` structure for each VM. For non-protected VMs, this structure is
initialized from the host's `kvm` state.

Currently, `pkvm_init_features_from_host()` copies the
`KVM_ARCH_FLAG_ID_REGS_INITIALIZED` flag from the host without the
underlying `id_regs` data being initialized. This results in the
hypervisor seeing the flag as set while the ID registers remain zeroed.

Consequently, `kvm_has_feat()` checks at EL2 fail (return 0) for
non-protected VMs. This breaks logic that relies on feature detection,
such as `ctxt_has_tcrx()` for TCR2_EL1 support. As a result, certain
system registers (e.g., TCR2_EL1, PIR_EL1, POR_EL1) are not
saved/restored during the world switch, which could lead to state
corruption.

Fix this by explicitly copying the ID registers from the host `kvm` to
the hypervisor `kvm` for non-protected VMs during initialization, since
we trust the host with its non-protected guests' features. Also ensure
`KVM_ARCH_FLAG_ID_REGS_INITIALIZED` is cleared initially in
`pkvm_init_features_from_host()` so that `vm_copy_id_regs()` can properly
initialize the ID registers and set the flag once done.

Fixes: 41d6028e28bd ("KVM: arm64: Convert the SVE guest vcpu flag to a vm flag")
Signed-off-by: Fuad Tabba <tabba@google.com>
Link: https://patch.msgid.link/20260213143815.1732675-4-tabba@google.com
Signed-off-by: Marc Zyngier <maz@kernel.org>

Authored by Fuad Tabba, committed by Marc Zyngier (7e7c2cf0 9cb0468d)

+33 -2
arch/arm64/kvm/hyp/nvhe/pkvm.c
```diff
@@ -342,6 +342,7 @@
 	/* No restrictions for non-protected VMs. */
 	if (!kvm_vm_is_protected(kvm)) {
 		hyp_vm->kvm.arch.flags = host_arch_flags;
+		hyp_vm->kvm.arch.flags &= ~BIT_ULL(KVM_ARCH_FLAG_ID_REGS_INITIALIZED);
 
 		bitmap_copy(kvm->arch.vcpu_features,
 			    host_kvm->arch.vcpu_features,
@@ -472,6 +471,35 @@
 	return ret;
 }
 
+static int vm_copy_id_regs(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
+	const struct kvm *host_kvm = hyp_vm->host_kvm;
+	struct kvm *kvm = &hyp_vm->kvm;
+
+	if (!test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &host_kvm->arch.flags))
+		return -EINVAL;
+
+	if (test_and_set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags))
+		return 0;
+
+	memcpy(kvm->arch.id_regs, host_kvm->arch.id_regs, sizeof(kvm->arch.id_regs));
+
+	return 0;
+}
+
+static int pkvm_vcpu_init_sysregs(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	int ret = 0;
+
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		kvm_init_pvm_id_regs(&hyp_vcpu->vcpu);
+	else
+		ret = vm_copy_id_regs(hyp_vcpu);
+
+	return ret;
+}
+
 static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 			      struct pkvm_hyp_vm *hyp_vm,
 			      struct kvm_vcpu *host_vcpu)
@@ -520,8 +490,9 @@
 	hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 	hyp_vcpu->vcpu.arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
 
-	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu))
-		kvm_init_pvm_id_regs(&hyp_vcpu->vcpu);
+	ret = pkvm_vcpu_init_sysregs(hyp_vcpu);
+	if (ret)
+		goto done;
 
 	ret = pkvm_vcpu_init_traps(hyp_vcpu);
 	if (ret)
```