commit 8d8997f34e
On pSeries, we always force the IO space to be mapped using 4K pages, even with a 64K base page size, to cope with some limitations in the HV interface to some devices. However, the SLB miss handler code that discriminates between the vmalloc and ioremap spaces uses a CPU feature section such that the code is nop'ed out when the processor supports large-page non-cacheable mappings. Thus, we end up always using the ioremap page size for vmalloc segments on such processors, causing a discrepancy between the segment and the hash table, and thus a hang continuously re-hashing the page.

It works for the first segment of the vmalloc space, since that segment is "bolted" in correctly by C code, and thankfully we almost never use the vmalloc space beyond the first segment, but the new percpu code made the bug surface. This fixes it by removing the feature section from the assembly; we now always do the comparison between vmalloc and ioremap.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
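The decision the patch makes unconditional can be sketched in plain C. This is only an illustrative sketch: the constant IOREMAP_BOTTOM, the page-size macros, and the two helpers below are hypothetical stand-ins rather than kernel symbols, and the real change is a few assembly instructions in the SLB miss handler (the feature section removed from slb_low.S). The point is to show why skipping the vmalloc/ioremap comparison installs an SLB entry whose page size disagrees with the one the hash-fault path uses.

```c
/* Hypothetical sketch of the vmalloc vs. ioremap page-size selection
 * described above.  Constants and function names are illustrative
 * stand-ins, not the kernel's symbols; the real logic is a few
 * instructions in arch/powerpc/mm/slb_low.S. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PSIZE_64K 0                 /* base page size used for vmalloc   */
#define PSIZE_4K  1                 /* forced page size for ioremap'd IO */

/* Illustrative boundary: addresses at or above it belong to ioremap space. */
#define IOREMAP_BOTTOM 0xd000080000000000ULL

/* Buggy path: on CPUs advertising large-page non-cacheable support, the
 * comparison sat inside a feature section and was nop'ed out, so every
 * segment in the region got the ioremap (4K) size, disagreeing with the
 * page size the hash table uses for vmalloc mappings. */
static int slb_psize_buggy(uint64_t ea, bool cpu_ci_large_page)
{
        if (cpu_ci_large_page)
                return PSIZE_4K;
        return ea < IOREMAP_BOTTOM ? PSIZE_64K : PSIZE_4K;
}

/* Fixed path: the comparison is always performed, regardless of CPU features. */
static int slb_psize_fixed(uint64_t ea)
{
        return ea < IOREMAP_BOTTOM ? PSIZE_64K : PSIZE_4K;
}

int main(void)
{
        /* A vmalloc address beyond the first (bolted) segment, e.g. one
         * handed out by the new percpu allocator. */
        uint64_t ea = 0xd000000020000000ULL;

        printf("buggy SLB size: %s, fixed SLB size: %s\n",
               slb_psize_buggy(ea, true) == PSIZE_4K ? "4K" : "64K",
               slb_psize_fixed(ea) == PSIZE_4K ? "4K" : "64K");
        return 0;
}
```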
Directory listing:
40x_mmu.c
44x_mmu.c
dma-noncoherent.c
fault.c
fsl_booke_mmu.c
gup.c
hash_low_32.S
hash_low_64.S
hash_native_64.c
hash_utils_64.c
highmem.c
hugetlbpage.c
init_32.c
init_64.c
Makefile
mem.c
mmap_64.c
mmu_context_hash32.c
mmu_context_hash64.c
mmu_context_nohash.c
mmu_decl.h
numa.c
pgtable_32.c
pgtable_64.c
pgtable.c
ppc_mmu_32.c
slb_low.S
slb.c
slice.c
stab.c
subpage-prot.c
tlb_hash32.c
tlb_hash64.c
tlb_low_64e.S
tlb_nohash_low.S
tlb_nohash.c