Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
"Three interrupt related fixes for X86:

- Move disabling of the local APIC after invoking fixup_irqs() to
ensure that incoming interrupts are noted in the IRR and not
ignored.

- Unbreak affinity setting.

The rework of the entry code reused the regular exception entry
code for device interrupts. The vector number is pushed into the
error code slot on the stack, which is then lifted into an argument
and set to -1 because that's regs->orig_ax, which is used in quite
a few places to check whether the entry came from a syscall.

But it was overlooked that orig_ax is used in the affinity cleanup
code to validate whether the interrupt has arrived on the new
target. It turned out that this vector check is pointless because
interrupts are never moved from one vector to another on the same
CPU. That check is a historical leftover from the time when x86
supported multi-CPU affinities, but it is no longer needed with the
now strict single-CPU affinity. Famous last words ...

- Add a missing check for an empty cpumask into the matrix allocator.

The affinity change added a warning to catch the case where an
interrupt is moved on the same CPU to a different vector. This
warning triggers because a request with an empty cpumask still
returns an assignment from the allocator: the allocator uses
for_each_cpu() without checking whether the cpumask is empty. The
historically inconsistent for_each_cpu() behaviour of ignoring the
cpumask and unconditionally claiming that CPU0 is in the mask
struck again. Sigh.

plus a new entry in the MAINTAINERS file for the HPE/UV platform"

* tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
x86/irq: Unbreak interrupt affinity setting
x86/hotplug: Silence APIC only after all interrupts are migrated
MAINTAINERS: Add entry for HPE Superdome Flex (UV) maintainers

4 files changed, 45 insertions(+), 13 deletions(-)
MAINTAINERS (+9)

···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/core
 F:	arch/x86/platform

+X86 PLATFORM UV HPE SUPERDOME FLEX
+M:	Steve Wahl <steve.wahl@hpe.com>
+R:	Dimitri Sivanich <dimitri.sivanich@hpe.com>
+R:	Russ Anderson <russ.anderson@hpe.com>
+S:	Supported
+F:	arch/x86/include/asm/uv/
+F:	arch/x86/kernel/apic/x2apic_uv_x.c
+F:	arch/x86/platform/uv/
+
 X86 VDSO
 M:	Andy Lutomirski <luto@kernel.org>
 L:	linux-kernel@vger.kernel.org
arch/x86/kernel/apic/vector.c (+9 -7)

···
 		apicd->move_in_progress = true;
 		apicd->prev_vector = apicd->vector;
 		apicd->prev_cpu = apicd->cpu;
+		WARN_ON_ONCE(apicd->cpu == newcpu);
 	} else {
 		irq_matrix_free(vector_matrix, apicd->cpu, apicd->vector,
 				managed);
···
 	__send_cleanup_vector(apicd);
 }

-static void __irq_complete_move(struct irq_cfg *cfg, unsigned vector)
+void irq_complete_move(struct irq_cfg *cfg)
 {
 	struct apic_chip_data *apicd;
···
 	if (likely(!apicd->move_in_progress))
 		return;

-	if (vector == apicd->vector && apicd->cpu == smp_processor_id())
+	/*
+	 * If the interrupt arrived on the new target CPU, cleanup the
+	 * vector on the old target CPU. A vector check is not required
+	 * because an interrupt can never move from one vector to another
+	 * on the same CPU.
+	 */
+	if (apicd->cpu == smp_processor_id())
 		__send_cleanup_vector(apicd);
-}
-
-void irq_complete_move(struct irq_cfg *cfg)
-{
-	__irq_complete_move(cfg, ~get_irq_regs()->orig_ax);
 }

 /*
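The affinity fix above boils down to dropping a vector comparison that could no longer match once the entry rework replaced the stacked vector with -1. A userspace toy model can sketch the old versus new cleanup decision (all names and values here are illustrative stand-ins, not the kernel's actual types or call paths):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the cleanup decision; a struct standing in for the
 * kernel's apic_chip_data. */
struct toy_chip_data {
	unsigned int vector;	/* vector on the new target CPU */
	unsigned int cpu;	/* new target CPU */
	bool move_in_progress;
};

/* Old check: also compared the vector derived from regs->orig_ax.
 * After the entry rework set orig_ax to -1, the derived vector never
 * matched a real vector, so cleanup silently never fired. */
static bool old_check(const struct toy_chip_data *apicd,
		      unsigned int derived_vector, unsigned int this_cpu)
{
	return apicd->move_in_progress &&
	       derived_vector == apicd->vector && apicd->cpu == this_cpu;
}

/* New check: the vector comparison is dropped, because an interrupt
 * can never move from one vector to another on the same CPU. */
static bool new_check(const struct toy_chip_data *apicd,
		      unsigned int this_cpu)
{
	return apicd->move_in_progress && apicd->cpu == this_cpu;
}
```

With a move in progress to vector 34 on CPU 1, the old check fails for any bogus derived vector even when the interrupt did arrive on CPU 1, while the new check correctly triggers cleanup.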
arch/x86/kernel/smpboot.c (+20 -6)

···
 	if (ret)
 		return ret;

-	/*
-	 * Disable the local APIC. Otherwise IPI broadcasts will reach
-	 * it. It still responds normally to INIT, NMI, SMI, and SIPI
-	 * messages.
-	 */
-	apic_soft_disable();
 	cpu_disable_common();
+
+	/*
+	 * Disable the local APIC. Otherwise IPI broadcasts will reach
+	 * it. It still responds normally to INIT, NMI, SMI, and SIPI
+	 * messages.
+	 *
+	 * Disabling the APIC must happen after cpu_disable_common()
+	 * which invokes fixup_irqs().
+	 *
+	 * Disabling the APIC preserves already set bits in IRR, but
+	 * an interrupt arriving after disabling the local APIC does not
+	 * set the corresponding IRR bit.
+	 *
+	 * fixup_irqs() scans IRR for set bits so it can raise a not
+	 * yet handled interrupt on the new destination CPU via an IPI
+	 * but obviously it can't do so for IRR bits which are not set.
+	 * IOW, interrupts arriving after disabling the local APIC will
+	 * be lost.
+	 */
+	apic_soft_disable();

 	return 0;
 }
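The ordering constraint in the comment above can be sketched as a userspace toy model: a disabled APIC no longer latches IRR bits, and fixup can only re-raise interrupts whose IRR bit is set. Everything here (toy_apic, the function names) is invented for illustration and is not the kernel's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model: IRR as a bitmap, latched only while the APIC is enabled. */
struct toy_apic {
	uint64_t irr;
	bool enabled;
};

static void toy_irq_arrives(struct toy_apic *a, unsigned int vec)
{
	if (a->enabled)		/* a disabled APIC does not set IRR bits */
		a->irr |= 1ull << vec;
}

static void toy_apic_soft_disable(struct toy_apic *a)
{
	a->enabled = false;
}

/* Stand-in for fixup_irqs(): returns the pending IRR bits that can be
 * re-raised on the new destination CPU, clearing them locally. */
static uint64_t toy_fixup_irqs(struct toy_apic *a)
{
	uint64_t pending = a->irr;

	a->irr = 0;
	return pending;
}
```

In the buggy order (disable, then an interrupt arrives, then fixup) the IRR bit is never set and the interrupt is lost; with the disable moved after fixup, any interrupt that arrived beforehand is latched and migrated.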
kernel/irq/matrix.c (+7)

···
 	unsigned int cpu, bit;
 	struct cpumap *cm;

+	/*
+	 * Not required in theory, but matrix_find_best_cpu() uses
+	 * for_each_cpu() which ignores the cpumask on UP.
+	 */
+	if (cpumask_empty(msk))
+		return -EINVAL;
+
 	cpu = matrix_find_best_cpu(m, msk);
 	if (cpu == UINT_MAX)
 		return -ENOSPC;
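The UP quirk the guard works around can be emulated in userspace: the historical UP for_each_cpu() ignored the mask and unconditionally claimed CPU0, so a search over an empty mask still "found" a CPU. A minimal sketch (toy names throughout, with a plain bitmask standing in for struct cpumask):

```c
#include <assert.h>
#include <stdint.h>

/* Toy cpumask: bit n set means CPU n is in the mask. */
typedef uint64_t toy_cpumask_t;

static int toy_cpumask_empty(toy_cpumask_t msk)
{
	return msk == 0;
}

/* Emulates a search built on the historical UP for_each_cpu():
 * the mask is ignored and CPU0 is always visited. */
static unsigned int toy_find_best_cpu_up(toy_cpumask_t msk)
{
	(void)msk;		/* ignored, as on UP */
	return 0;
}

/* Allocator entry point with the fix applied up front: reject an
 * empty mask before the (mask-ignoring) search can claim CPU0. */
static int toy_matrix_alloc(toy_cpumask_t msk)
{
	if (toy_cpumask_empty(msk))
		return -22;	/* -EINVAL */
	return (int)toy_find_best_cpu_up(msk);
}
```

Without the guard, an empty mask yields CPU0 and a bogus assignment, which is exactly what tripped the new same-CPU-move warning; with it, the allocation fails cleanly with -EINVAL.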