Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

riscv: kvm: Fix vstimecmp update hazard on RV32

On RV32, updating the 64-bit stimecmp (or vstimecmp) CSR requires two
separate 32-bit writes. A race condition exists if the timer triggers
during these two writes.

The RISC-V Privileged Specification (e.g., Section 3.2.1 for mtimecmp)
recommends a specific 3-step sequence to avoid spurious interrupts
when updating 64-bit comparison registers on 32-bit systems:

1. Set the low-order bits (stimecmp) to all ones (ULONG_MAX).
2. Set the high-order bits (stimecmph) to the desired value.
3. Set the low-order bits (stimecmp) to the desired value.

The current implementation writes the low-order word first, so the
comparand can transiently pair the new low word with the old high word.
The hardware may evaluate that intermediate 64-bit value as already
"expired", raising a spurious timer interrupt.

This patch adopts the spec-recommended 3-step sequence, which guarantees
that every intermediate 64-bit comparand is at least as large as either
the old or the new value, so the timer is never spuriously evaluated as
expired during the update.

Fixes: 8f5cb44b1bae ("RISC-V: KVM: Support sstc extension")
Signed-off-by: Naohiko Shimizu <naohiko.shimizu@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://patch.msgid.link/20260104135938.524-3-naohiko.shimizu@gmail.com
Signed-off-by: Paul Walmsley <pjw@kernel.org>

Authored by Naohiko Shimizu, committed by Paul Walmsley (75870639, eaa9bb1d)

+4 -2
arch/riscv/kvm/vcpu_timer.c
@@ -72,8 +72,9 @@
 static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
 {
 #if defined(CONFIG_32BIT)
-	ncsr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF);
+	ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
 	ncsr_write(CSR_VSTIMECMPH, ncycles >> 32);
+	ncsr_write(CSR_VSTIMECMP, (u32)ncycles);
 #else
 	ncsr_write(CSR_VSTIMECMP, ncycles);
 #endif
@@ -308,8 +307,9 @@
 		return;

 #if defined(CONFIG_32BIT)
-	ncsr_write(CSR_VSTIMECMP, (u32)t->next_cycles);
+	ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
 	ncsr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32));
+	ncsr_write(CSR_VSTIMECMP, (u32)(t->next_cycles));
 #else
 	ncsr_write(CSR_VSTIMECMP, t->next_cycles);
 #endif