Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

MIPS: mm: Prevent a TLB shutdown on initial uniquification

Depending on the particular CPU implementation, a TLB shutdown may occur
if multiple matching entries are detected upon the execution of a TLBP
or the TLBWI/TLBWR instructions. Given that we don't know what entries
we have been handed, we need to be very careful with the initial TLB
setup and avoid all these instructions.

Therefore read all the TLB entries one by one with the TLBR instruction,
bypassing the content addressing logic, and truncate any large pages in
place so as to avoid a case in the second step where an incoming entry
for a large page at a lower address overlaps with a replacement entry
chosen at another index. Then preinitialize the TLB using addresses
outside our usual unique range and avoiding clashes with any entries
received, before making the usual call to local_flush_tlb_all().

This fixes (at least) R4x00 cores if TLBP hits multiple matching TLB
entries (SGI IP22 PROM for example sets up all TLB entries to the same virtual
address).

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: 35ad7e181541 ("MIPS: mm: tlb-r4k: Uniquify TLB entries on init")
Cc: stable@vger.kernel.org
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com> # Boston I6400, M5150 sim
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

Authored by Maciej W. Rozycki, committed by Thomas Bogendoerfer
9f048fa4 09782e72

+64 -38
arch/mips/mm/tlb-r4k.c
···
 #include <linux/mm.h>
 #include <linux/hugetlb.h>
 #include <linux/export.h>
+#include <linux/sort.h>
 
 #include <asm/cpu.h>
 #include <asm/cpu-type.h>
···
 
 __setup("ntlb=", set_ntlb);
 
-/* Initialise all TLB entries with unique values */
+
+/* Comparison function for EntryHi VPN fields. */
+static int r4k_vpn_cmp(const void *a, const void *b)
+{
+	long v = *(unsigned long *)a - *(unsigned long *)b;
+	int s = sizeof(long) > sizeof(int) ? sizeof(long) * 8 - 1 : 0;
+	return s ? (v != 0) | v >> s : v;
+}
+
+/*
+ * Initialise all TLB entries with unique values that do not clash with
+ * what we have been handed over and what we'll be using ourselves.
+ */
 static void r4k_tlb_uniquify(void)
 {
-	int entry = num_wired_entries();
+	unsigned long tlb_vpns[1 << MIPS_CONF1_TLBS_SIZE];
+	int tlbsize = current_cpu_data.tlbsize;
+	int start = num_wired_entries();
+	unsigned long vpn_mask;
+	int cnt, ent, idx, i;
+
+	vpn_mask = GENMASK(cpu_vmbits - 1, 13);
+	vpn_mask |= IS_ENABLED(CONFIG_64BIT) ? 3ULL << 62 : 1 << 31;
 
 	htw_stop();
+
+	for (i = start, cnt = 0; i < tlbsize; i++, cnt++) {
+		unsigned long vpn;
+
+		write_c0_index(i);
+		mtc0_tlbr_hazard();
+		tlb_read();
+		tlb_read_hazard();
+		vpn = read_c0_entryhi();
+		vpn &= vpn_mask & PAGE_MASK;
+		tlb_vpns[cnt] = vpn;
+
+		/* Prevent any large pages from overlapping regular ones. */
+		write_c0_pagemask(read_c0_pagemask() & PM_DEFAULT_MASK);
+		mtc0_tlbw_hazard();
+		tlb_write_indexed();
+		tlbw_use_hazard();
+	}
+
+	sort(tlb_vpns, cnt, sizeof(tlb_vpns[0]), r4k_vpn_cmp, NULL);
+
+	write_c0_pagemask(PM_DEFAULT_MASK);
 	write_c0_entrylo0(0);
 	write_c0_entrylo1(0);
 
-	while (entry < current_cpu_data.tlbsize) {
-		unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
-		unsigned long asid = 0;
-		int idx;
+	idx = 0;
+	ent = tlbsize;
+	for (i = start; i < tlbsize; i++)
+		while (1) {
+			unsigned long entryhi, vpn;
 
-		/* Skip wired MMID to make ginvt_mmid work */
-		if (cpu_has_mmid)
-			asid = MMID_KERNEL_WIRED + 1;
+			entryhi = UNIQUE_ENTRYHI(ent);
+			vpn = entryhi & vpn_mask & PAGE_MASK;
 
-		/* Check for match before using UNIQUE_ENTRYHI */
-		do {
-			if (cpu_has_mmid) {
-				write_c0_memorymapid(asid);
-				write_c0_entryhi(UNIQUE_ENTRYHI(entry));
-			} else {
-				write_c0_entryhi(UNIQUE_ENTRYHI(entry) | asid);
-			}
-			mtc0_tlbw_hazard();
-			tlb_probe();
-			tlb_probe_hazard();
-			idx = read_c0_index();
-			/* No match or match is on current entry */
-			if (idx < 0 || idx == entry)
+			if (idx >= cnt || vpn < tlb_vpns[idx]) {
+				write_c0_entryhi(entryhi);
+				write_c0_index(i);
+				mtc0_tlbw_hazard();
+				tlb_write_indexed();
+				ent++;
 				break;
-			/*
-			 * If we hit a match, we need to try again with
-			 * a different ASID.
-			 */
-			asid++;
-		} while (asid < asid_mask);
-
-		if (idx >= 0 && idx != entry)
-			panic("Unable to uniquify TLB entry %d", idx);
-
-		write_c0_index(entry);
-		mtc0_tlbw_hazard();
-		tlb_write_indexed();
-		entry++;
-	}
+			} else if (vpn == tlb_vpns[idx]) {
+				ent++;
+			} else {
+				idx++;
+			}
+		}
 
 	tlbw_use_hazard();
 	htw_start();
···
 
 	/* From this point on the ARC firmware is dead. */
 	r4k_tlb_uniquify();
+	local_flush_tlb_all();
 
 	/* Did I tell you that ARC SUCKS? */
 }
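One subtle point in the patch is the comparator passed to sort(): simply returning the difference of two unsigned longs as an int would truncate on 64-bit and could drop the sign (a difference of 1UL << 32 truncates to 0). The copy below, lifted into a plain userspace translation unit for illustration, folds any non-zero difference into +1 or -1 via the sign bit instead:

```c
#include <assert.h>

/*
 * Userspace copy of the patch's VPN comparator.  On 64-bit targets
 * (sizeof(long) > sizeof(int)), s is the index of the sign bit and the
 * expression (v != 0) | v >> s yields +1 for positive v, -1 for
 * negative v (the arithmetic shift gives all ones), and 0 for equality.
 * On 32-bit targets s is 0 and the difference is returned directly.
 */
static int r4k_vpn_cmp(const void *a, const void *b)
{
	long v = *(unsigned long *)a - *(unsigned long *)b;
	int s = sizeof(long) > sizeof(int) ? sizeof(long) * 8 - 1 : 0;

	return s ? (v != 0) | v >> s : v;
}
```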