Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


arm64: entry: Use split preemption logic

The generic irqentry code now provides
irqentry_exit_to_kernel_mode_preempt() and
irqentry_exit_to_kernel_mode_after_preempt(), which can be used
where architectures have different state requirements for involuntary
preemption and exception return, as is the case on arm64.

Use the new functions on arm64, aligning our exit to kernel mode logic
with the style of our exit to user mode logic. This removes the need for
the recently-added bodge in arch_irqentry_exit_need_resched(), and
allows preemption to occur when returning from any exception taken from
kernel mode, which is nicer for RT.

In an ideal world, we'd remove arch_irqentry_exit_need_resched(), and
fold the conditionality directly into the architecture-specific entry
code. That way all the logic necessary to avoid preempting from a
pseudo-NMI could be constrained specifically to the EL1 IRQ/FIQ paths,
avoiding redundant work for other exceptions, and making the flow a bit
clearer. At present it looks like that would require a larger
refactoring (e.g. for the PREEMPT_DYNAMIC logic), and so I've left that
as-is for now.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Jinjie Ruan <ruanjinjie@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Jinjie Ruan <ruanjinjie@huawei.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Authored by Mark Rutland, committed by Catalin Marinas
ae654112 a07b7b21

Total: +12 -21

arch/arm64/include/asm/entry-common.h (+8 -13)
···
 static inline bool arch_irqentry_exit_need_resched(void)
 {
-	if (system_uses_irq_prio_masking()) {
-		/*
-		 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
-		 * priority masking is used the GIC irqchip driver will clear DAIF.IF
-		 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
-		 * DAIF we must have handled an NMI, so skip preemption.
-		 */
-		if (read_sysreg(daif))
-			return false;
-	} else {
-		if (read_sysreg(daif) & (PSR_D_BIT | PSR_A_BIT))
-			return false;
-	}
+	/*
+	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+	 * DAIF we must have handled an NMI, so skip preemption.
+	 */
+	if (system_uses_irq_prio_masking() && read_sysreg(daif))
+		return false;

 	/*
 	 * Preempting a task from an IRQ means we leave copies of PSTATE
···
arch/arm64/kernel/entry-common.c (+4 -8)
···
 static void noinstr arm64_exit_to_kernel_mode(struct pt_regs *regs,
 					      irqentry_state_t state)
 {
+	local_irq_disable();
+	irqentry_exit_to_kernel_mode_preempt(regs, state);
+	local_daif_mask();
 	mte_check_tfsr_exit();
-	irqentry_exit_to_kernel_mode(regs, state);
+	irqentry_exit_to_kernel_mode_after_preempt(regs, state);
 }

 /*
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_mem_abort(far, esr, regs);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_sp_pc_abort(far, esr, regs);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_el1_undef(regs, esr);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_el1_bti(regs, esr);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_el1_gcs(regs, esr);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_el1_mops(regs, esr);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···
 	state = arm64_enter_from_kernel_mode(regs);
 	local_daif_inherit(regs);
 	do_el1_fpac(regs, esr);
-	local_daif_mask();
 	arm64_exit_to_kernel_mode(regs, state);
 }
···