linux_dsm_epyc7002/arch/powerpc/mm
Christophe Leroy 4622a2d431 powerpc/6xx: fix setup and use of SPRN_SPRG_PGDIR for hash32
Not only the 603 but all 6xx need SPRN_SPRG_PGDIR to be initialised at
startup. This patch moves it from __setup_cpu_603() to start_here()
and __secondary_start(), close to the initialisation of SPRN_SPRG_THREAD.

Previously, the virtual address of the PGDIR was retrieved from the thread struct.
Now that it is the physical address which is stored in SPRN_SPRG_PGDIR,
hash_page() must no longer convert it to a physical address.
This patch removes the conversion.

Fixes: 93c4a162b0 ("powerpc/6xx: Store PGDIR physical address in a SPRG")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-03-19 00:30:19 +11:00
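
The change described above can be illustrated with a short PowerPC assembly sketch (an illustration only, not the verbatim diff; register numbers are arbitrary and the virt-to-phys translation assumes the usual ppc32 linear mapping, virt = phys + PAGE_OFFSET). At boot, the physical address of the kernel PGD is stored in SPRN_SPRG_PGDIR next to the SPRN_SPRG_THREAD setup, and hash_page() then reads it back without a tophys() conversion:

	/* start_here() / __secondary_start(): store the kernel PGD's
	 * physical address in SPRN_SPRG_PGDIR (sketch, not the exact diff) */
	lis	r4, (swapper_pg_dir - PAGE_OFFSET)@h
	ori	r4, r4, (swapper_pg_dir - PAGE_OFFSET)@l
	mtspr	SPRN_SPRG_PGDIR, r4

	/* hash_page(): the SPRG already holds a physical address, so the
	 * former virt-to-phys (tophys) step is dropped */
	mfspr	r5, SPRN_SPRG_PGDIR
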
ptdump powerpc/mm: Check secondary hash page table 2019-03-02 14:43:05 +11:00
8xx_mmu.c powerpc/8xx: don't disable large TLBs with CONFIG_STRICT_KERNEL_RWX 2019-02-23 21:04:32 +11:00
40x_mmu.c powerpc/mm/32: add base address to mmu_mapin_ram() 2019-02-23 21:04:31 +11:00
44x_mmu.c powerpc/mm/32: add base address to mmu_mapin_ram() 2019-02-23 21:04:31 +11:00
copro_fault.c
dma-noncoherent.c
drmem.c
fault.c
fsl_booke_mmu.c powerpc/mm/32: add base address to mmu_mapin_ram() 2019-02-23 21:04:31 +11:00
hash64_4k.c
hash64_64k.c
hash_low_32.S powerpc/6xx: fix setup and use of SPRN_SPRG_PGDIR for hash32 2019-03-19 00:30:19 +11:00
hash_native_64.c
hash_utils_64.c treewide: add checks for the return value of memblock_alloc*() 2019-03-12 10:04:02 -07:00
highmem.c
hugepage-hash64.c
hugetlbpage-book3e.c
hugetlbpage-hash64.c powerpc updates for 5.1 2019-03-07 12:56:26 -08:00
hugetlbpage-radix.c powerpc updates for 5.1 2019-03-07 12:56:26 -08:00
hugetlbpage.c
init_32.c powerpc/8xx: don't disable large TLBs with CONFIG_STRICT_KERNEL_RWX 2019-02-23 21:04:32 +11:00
init_64.c powerpc/mm: fix "section_base" set but not used 2019-03-02 14:43:05 +11:00
init-common.c
Makefile powerpc/mm: Disable kcov for SLB routines 2019-03-12 14:06:12 +11:00
mem.c
mmap.c
mmu_context_book3s64.c
mmu_context_hash32.c
mmu_context_iommu.c powerpc/mm/iommu: allow large IOMMU page size only for hugetlb backing 2019-03-05 21:07:19 -08:00
mmu_context_nohash.c treewide: add checks for the return value of memblock_alloc*() 2019-03-12 10:04:02 -07:00
mmu_context.c
mmu_decl.h powerpc/8xx: don't disable large TLBs with CONFIG_STRICT_KERNEL_RWX 2019-02-23 21:04:32 +11:00
numa.c memblock: memblock_phys_alloc_try_nid(): don't panic 2019-03-12 10:04:01 -07:00
pgtable_32.c powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX 2019-02-23 21:04:32 +11:00
pgtable_64.c
pgtable-book3e.c treewide: add checks for the return value of memblock_alloc*() 2019-03-12 10:04:02 -07:00
pgtable-book3s64.c treewide: add checks for the return value of memblock_alloc*() 2019-03-12 10:04:02 -07:00
pgtable-frag.c
pgtable-hash64.c
pgtable-radix.c treewide: add checks for the return value of memblock_alloc*() 2019-03-12 10:04:02 -07:00
pgtable.c
pkeys.c
ppc_mmu_32.c treewide: add checks for the return value of memblock_alloc*() 2019-03-12 10:04:02 -07:00
slb.c
slice.c powerpc/mm/hash: Handle mmap_min_addr correctly in get_unmapped_area topdown search 2019-02-26 16:26:29 +11:00
subpage-prot.c
tlb_hash32.c
tlb_hash64.c
tlb_low_64e.S
tlb_nohash_low.S
tlb_nohash.c
tlb-radix.c
vphn.c
vphn.h