linux_dsm_epyc7002/arch/powerpc/mm/book3s64/mmu_context.c

// SPDX-License-Identifier: GPL-2.0-or-later
/*
* MMU context allocation for 64-bit kernels.
*
* Copyright (C) 2004 Anton Blanchard, IBM Corp. <anton@samba.org>
*/
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/pkeys.h>
#include <linux/spinlock.h>
#include <linux/idr.h>
#include <linux/export.h>
#include <linux/gfp.h>
#include <linux/slab.h>

#include <asm/mmu_context.h>
#include <asm/pgalloc.h>
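
/* All context ids (hash contexts and radix PIDs alike) are handed out from this one IDA. */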
static DEFINE_IDA(mmu_context_ida);

static int alloc_context_id(int min_id, int max_id)
{
	return ida_alloc_range(&mmu_context_ida, min_id, max_id, GFP_KERNEL);
}
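
/*
 * Reserve a specific context id. The caller expects to get exactly @id back;
 * the WARN catches the case where that id has already been handed out.
 */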
void hash__reserve_context_id(int id)
{
	int result = ida_alloc_range(&mmu_context_ida, id, id, GFP_KERNEL);

	WARN(result != id, "mmu: Failed to reserve context id %d (rc %d)\n", id, result);
}
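
/*
 * Allocate a fresh user context id. The upper bound depends on whether the
 * CPU supports 68-bit virtual addresses.
 */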
int hash__alloc_context_id(void)
{
	unsigned long max;

	if (mmu_has_feature(MMU_FTR_68_BIT_VA))
		max = MAX_USER_CONTEXT;
	else
		max = MAX_USER_CONTEXT_65BIT_VA;

	return alloc_context_id(MIN_USER_CONTEXT, max);
}
EXPORT_SYMBOL_GPL(hash__alloc_context_id);

void slb_setup_new_exec(void);

static int realloc_context_ids(mm_context_t *ctx)
{
	int i, id;

	/*
	 * id 0 (aka. ctx->id) is special, we always allocate a new one, even if
	 * there wasn't one allocated previously (which happens in the exec
	 * case where ctx is newly allocated).
	 *
	 * We have to be a bit careful here. We must keep the existing ids in
	 * the array, so that we can test if they're non-zero to decide if we
	 * need to allocate a new one. However in case of error we must free the
	 * ids we've allocated but *not* any of the existing ones (or risk a
	 * UAF). That's why we decrement i at the start of the error handling
	 * loop, to skip the id that we just tested but couldn't reallocate.
	 */
	for (i = 0; i < ARRAY_SIZE(ctx->extended_id); i++) {
		if (i == 0 || ctx->extended_id[i]) {
			id = hash__alloc_context_id();
			if (id < 0)
				goto error;

			ctx->extended_id[i] = id;
		}
	}

	/* The caller expects us to return id */
	return ctx->id;

error:
	for (i--; i >= 0; i--) {
		if (ctx->extended_id[i])
			ida_free(&mmu_context_ida, ctx->extended_id[i]);
	}

	return id;
}

static int hash__init_new_context(struct mm_struct *mm)
{
	int index;

	mm->context.hash_context = kmalloc(sizeof(struct hash_mm_context),
					   GFP_KERNEL);
	if (!mm->context.hash_context)
		return -ENOMEM;

	/*
	 * The old code would re-promote on fork, we don't do that when using
	 * slices as it could cause problems promoting slices that have been
	 * forced down to 4K.
	 *
	 * For book3s we have MMU_NO_CONTEXT set to be ~0. Hence check
	 * explicitly against context.id == 0. This ensures that we properly
	 * initialize context slice details for newly allocated mm's (which will
	 * have id == 0) and don't alter context slices inherited via fork
	 * (which will have id != 0).
	 *
	 * We should not be calling init_new_context() on init_mm. Hence a
	 * check against 0 is OK.
	 */
	if (mm->context.id == 0) {
		memset(mm->context.hash_context, 0, sizeof(struct hash_mm_context));
		slice_init_new_context_exec(mm);
	} else {
		/* This is fork. Copy hash_context details from current->mm */
		memcpy(mm->context.hash_context, current->mm->context.hash_context,
		       sizeof(struct hash_mm_context));
#ifdef CONFIG_PPC_SUBPAGE_PROT
		/* Inherit subpage prot details if we have them. */
		if (current->mm->context.hash_context->spt) {
			mm->context.hash_context->spt = kmalloc(sizeof(struct subpage_prot_table),
								GFP_KERNEL);
			if (!mm->context.hash_context->spt) {
				kfree(mm->context.hash_context);
				return -ENOMEM;
			}
		}
#endif
	}

	index = realloc_context_ids(&mm->context);
	if (index < 0) {
#ifdef CONFIG_PPC_SUBPAGE_PROT
		kfree(mm->context.hash_context->spt);
#endif
		kfree(mm->context.hash_context);
		return index;
	}

	pkey_mm_init(mm);

	return index;
}

void hash__setup_new_exec(void)
{
	slice_setup_new_exec();

	slb_setup_new_exec();
}
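
/*
 * On radix the context id is the hardware PID. Allocation starts at
 * mmu_base_pid so that host PIDs stay out of the range used by KVM guests
 * (the PID space is partitioned to work around the guest-exit prefetch issue).
 */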
static int radix__init_new_context(struct mm_struct *mm)
{
	unsigned long rts_field;
	int index, max_id;

	max_id = (1 << mmu_pid_bits) - 1;
	index = alloc_context_id(mmu_base_pid, max_id);
	if (index < 0)
		return index;

	/*
	 * Set the process table entry.
	 */
	rts_field = radix__get_tree_size();
	process_tb[index].prtb0 = cpu_to_be64(rts_field | __pa(mm->pgd) | RADIX_PGD_INDEX_SIZE);

	/*
	 * Order the above store with the subsequent update of the PID
	 * register (at which point the HW can start loading/caching
	 * the entry) and the corresponding load by the MMU from
	 * the L2 cache.
	 */
	asm volatile("ptesync;isync" : : : "memory");

	mm->context.hash_context = NULL;

	return index;
}
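
/* Called from the generic fork/exec paths to set up the MMU context of a new mm. */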
int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
{
	int index;

	if (radix_enabled())
		index = radix__init_new_context(mm);
	else
		index = hash__init_new_context(mm);

	if (index < 0)
		return index;

	mm->context.id = index;
	mm->context.pte_frag = NULL;
	mm->context.pmd_frag = NULL;
#ifdef CONFIG_SPAPR_TCE_IOMMU
	mm_iommu_init(mm);
#endif
	atomic_set(&mm->context.active_cpus, 0);
	atomic_set(&mm->context.copros, 0);

	return 0;
}
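
/* Return a single context id to the IDA. */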
void __destroy_context(int context_id)
{
	ida_free(&mmu_context_ida, context_id);
}
EXPORT_SYMBOL_GPL(__destroy_context);
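
/*
 * Free every context id owned by this mm, including the extended ids used for
 * the address space above 512TB, along with the hash context itself.
 */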
static void destroy_contexts(mm_context_t *ctx)
{
	int index, context_id;

	for (index = 0; index < ARRAY_SIZE(ctx->extended_id); index++) {
		context_id = ctx->extended_id[index];
		if (context_id)
			ida_free(&mmu_context_ida, context_id);
	}
	kfree(ctx->hash_context);
}
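
/*
 * Drop the references still held on the cached PMD fragment page and free the
 * page once the last fragment user is gone.
 */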
static void pmd_frag_destroy(void *pmd_frag)
{
	int count;
	struct page *page;

	page = virt_to_page(pmd_frag);
	/* drop all the pending references */
	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
	/* We allow PMD_FRAG_NR fragments from a PMD page */
	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
		pgtable_pmd_page_dtor(page);
		__free_page(page);
	}
}
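
/* Release any PTE/PMD fragment pages still cached in the mm context. */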
static void destroy_pagetable_cache(struct mm_struct *mm)
{
	void *frag;

	frag = mm->context.pte_frag;
	if (frag)
		pte_frag_destroy(frag);

	frag = mm->context.pmd_frag;
	if (frag)
		pmd_frag_destroy(frag);
}

void destroy_context(struct mm_struct *mm)
{
#ifdef CONFIG_SPAPR_TCE_IOMMU
	WARN_ON_ONCE(!list_empty(&mm->context.iommu_group_mem_list));
#endif
	if (radix_enabled())
		WARN_ON(process_tb[mm->context.id].prtb0 != 0);
	else
		subpage_prot_free(mm);
	destroy_contexts(&mm->context);
	mm->context.id = MMU_NO_CONTEXT;
}
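
/*
 * Called from exit_mmap(): drop the cached page table fragments and, on radix,
 * clear the process table entry ahead of the final "fullmm" TLB flush.
 */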
void arch_exit_mmap(struct mm_struct *mm)
{
	destroy_pagetable_cache(mm);

	if (radix_enabled()) {
		/*
		 * Radix doesn't have a valid bit in the process table
		 * entries. However we know that at least the P9 implementation
		 * will avoid caching an entry with an invalid RTS field,
		 * and 0 is invalid. So this will do.
		 *
		 * This runs before the "fullmm" tlb flush in exit_mmap,
		 * which does a RIC=2 tlbie to clear the process table
		 * entry. See the "fullmm" comments in tlb-radix.c.
		 *
		 * No barrier is required here after the store because
		 * this process will do the invalidate, which starts with
		 * a ptesync.
		 */
		process_tb[mm->context.id].prtb0 = 0;
	}
}

#ifdef CONFIG_PPC_RADIX_MMU
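/*
 * Switching contexts on radix only requires pointing the PID register at the
 * next mm; the isync ensures the mtspr has taken effect before any following
 * instruction can rely on the new translation context.
 */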
void radix__switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
{
	mtspr(SPRN_PID, next->context.id);
	isync();
}
#endif