When memory is migrated to the GPU, it is likely to be accessed by GPU
code soon afterwards. Instead of waiting for a GPU fault, map the
migrated memory into the GPU page tables with the same access permissions
as the source CPU page table entries. This preserves copy-on-write
semantics.
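For illustration, a minimal sketch of that idea (the helper and the
GPU_PTE_* flags are made-up names; MIGRATE_PFN_WRITE is the real
migrate_vma flag that reflects whether the source CPU PTE was writable):

  /*
   * Build the GPU PTE for a page that has just been migrated.  The
   * migrate_vma source entry carries MIGRATE_PFN_WRITE when the source
   * CPU PTE was writable, so a read-only (copy-on-write) CPU mapping
   * stays read-only on the GPU as well.
   */
  static u64 gpu_pte_for_migrated_page(unsigned long src_pfn, u64 gpu_addr)
  {
      u64 pte = gpu_addr | GPU_PTE_VALID;     /* GPU_PTE_* are illustrative */

      if (src_pfn & MIGRATE_PFN_WRITE)
          pte |= GPU_PTE_WRITE;
      return pte;
  }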
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Presumably the intent here was that hmm_range_fault() could put the data
into some HW specific format and thus avoid some work. However, nothing
actually does that, and it isn't clear how anything actually could do that
as hmm_range_fault() provides CPU addresses which must be DMA mapped.
Perhaps there is some special HW that does not need DMA mapping, but we
don't have any examples of this, and the theoretical performance win of
avoiding an extra scan over the pfns array doesn't seem worth the
complexity. Plus pfns needs to be scanned anyhow to sort out any
DEVICE_PRIVATE pages.
This version replaces the uint64_t with an unsigned long containing a pfn
and fixed flags. On input the flags are filled with the HMM_PFN_REQ_*
values; on successful output they are filled with HMM_PFN_* values
describing the state of the pages.
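A rough usage sketch of the new calling convention (notifier setup,
locking and the -EBUSY retry are omitted; NPAGES, addr, dev and dma_addrs
stand in for driver state):

  unsigned long hmm_pfns[NPAGES];
  struct hmm_range range = {
      .start         = addr,
      .end           = addr + NPAGES * PAGE_SIZE,
      .hmm_pfns      = hmm_pfns,
      .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
  };
  int i, ret;

  ret = hmm_range_fault(&range);
  if (ret)
      return ret;

  for (i = 0; i < NPAGES; i++) {
      if (!(hmm_pfns[i] & HMM_PFN_VALID))
          continue;
      /* HMM_PFN_WRITE reports whether the CPU mapping is writable */
      dma_addrs[i] = dma_map_page(dev, hmm_pfn_to_page(hmm_pfns[i]),
                                  0, PAGE_SIZE, DMA_BIDIRECTIONAL);
  }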
amdgpu is simple to convert, it doesn't use snapshot and doesn't use
per-page flags.
nouveau uses only 16 hmm_pte entries at most (i.e. it fits in a few cache
lines), and it sweeps over its pfns array a couple of times anyhow. It
also has a nasty call chain before it reaches the dma map and hardware,
suggesting performance isn't important:
  nouveau_svm_fault():
    args.i.m.method = NVIF_VMM_V0_PFNMAP
    nouveau_range_fault()
      nvif_object_ioctl()
        client->driver->ioctl()
          struct nvif_driver nvif_driver_nvkm:
            .ioctl = nvkm_client_ioctl
          nvkm_ioctl()
            nvkm_ioctl_path()
              nvkm_ioctl_v0[type].func(..)
                nvkm_ioctl_mthd()
                  nvkm_object_mthd()
                    struct nvkm_object_func nvkm_uvmm:
                      .mthd = nvkm_uvmm_mthd
                    nvkm_uvmm_mthd()
                      nvkm_uvmm_mthd_pfnmap()
                        nvkm_vmm_pfn_map()
                          nvkm_vmm_ptes_get_map()
                            func == gp100_vmm_pgt_pfn
                              struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
                                .pfn = gp100_vmm_pgt_pfn
                              nvkm_vmm_iter()
                                REF_PTES == func == gp100_vmm_pgt_pfn()
                                  dma_map_page()
Link: https://lore.kernel.org/r/5-v2-b4e84f444c7d+24f57-hmm_no_flags_jgg@mellanox.com
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
This is just an alias for HMM_PFN_ERROR; nothing cares that the error was
because of a special page vs any other error case.
Link: https://lore.kernel.org/r/4-v2-b4e84f444c7d+24f57-hmm_no_flags_jgg@mellanox.com
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
hmm_vma_walk->last is supposed to be updated after every write to the
pfns, so that it can be returned by hmm_range_fault(). However, this is
not done consistently. Fortunately nothing checks the return code of
hmm_range_fault() for anything other than error.
More importantly, last must be set before returning -EBUSY as it is used to
prevent reading an output pfn as input flags when the loop restarts.
For clarity and simplicity make hmm_range_fault() return 0 or -ERRNO. Only
set last when returning -EBUSY.
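With that convention a caller's retry loop ends up looking roughly like
this (sub and driver_lock stand in for the caller's mmu_interval_notifier
and its own lock):

  while (true) {
      range.notifier_seq = mmu_interval_read_begin(&sub);
      down_read(&mm->mmap_sem);
      ret = hmm_range_fault(&range);
      up_read(&mm->mmap_sem);
      if (ret) {
          if (ret == -EBUSY)
              continue;          /* output is stale, just retry */
          return ret;            /* every other -ERRNO is fatal */
      }
      mutex_lock(&driver_lock);
      if (mmu_interval_read_retry(&sub, range.notifier_seq)) {
          mutex_unlock(&driver_lock);
          continue;              /* raced with an invalidation */
      }
      break;                     /* result stays valid while driver_lock is held */
  }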
Link: https://lore.kernel.org/r/2-v2-b4e84f444c7d+24f57-hmm_no_flags_jgg@mellanox.com
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Merge tag 'drm-next-2020-04-08' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Dave Airlie:
"This is a set of fixes that have queued up, I think I might have
another pull with some more before rc1 but I'd like to dequeue what I
have now just in case Easter is more eggciting that expected.
The main thing in here is a fix for longstanding nouveau power
management issues on certain laptops; it should help runtime
suspend/resume for a lot of people.
There is also a reverted patch for some drm_mm behaviour in atomic
contexts.
Summary:
core:
- revert drm_mm atomic patch
- dt binding fixes
fbcon:
- null ptr error fix
i915:
- GVT fixes
nouveau:
- runpm fix
- svm fixes
amdgpu:
- HDCP fixes
- gfx10 fix
- Misc display fixes
- BACO fixes
amdkfd:
- Fix memory leak
vboxvideo:
- remove conflicting fbs
vc4:
- mode validation fix
xen:
- fix PTR_ERR usage"
* tag 'drm-next-2020-04-08' of git://anongit.freedesktop.org/drm/drm: (41 commits)
drm/nouveau/kms/nv50-: wait for FIFO space on PIO channels
drm/nouveau/nvif: protect waits against GPU falling off the bus
drm/nouveau/nvif: access PTIMER through usermode class, if available
drm/nouveau/gr/gp107,gp108: implement workaround for HW hanging during init
drm/nouveau: workaround runpm fail by disabling PCI power management on certain intel bridges
drm/nouveau/svm: remove useless SVM range check
drm/nouveau/svm: check for SVM initialized before migrating
drm/nouveau/svm: fix vma range check for migration
drm/nouveau: remove checks for return value of debugfs functions
drm/nouveau/ttm: evict other IO mappings when running out of BAR1 space
drm/amdkfd: kfree the wrong pointer
drm/amd/display: increase HDCP authentication delay
drm/amd/display: Correctly cancel future watchdog and callback events
drm/amd/display: Don't try hdcp1.4 when content_type is set to type1
drm/amd/powerplay: move the ASIC specific nbio operation out of smu_v11_0.c
drm/amd/powerplay: drop redundant BIF doorbell interrupt operations
drm/amd/display: Fix dcn21 num_states
drm/amd/display: Enable BT2020 in COLOR_ENCODING property
drm/amd/display: LFC not working on 2.0x range monitors (v2)
drm/amd/display: Support plane level CTM
...
When nouveau processes GPU faults, it checks to see if the fault address
falls within the "unmanaged" range which is reserved for fixed allocations
instead of addresses chosen by the core mm code. If start is greater than
or equal to svmm->unmanaged.limit, then limit is also greater than
svmm->unmanaged.limit (which in turn is greater than svmm->unmanaged.start),
so start = max_t(u64, start, svmm->unmanaged.limit) changes nothing.
Just remove the useless lines of code.
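Paraphrasing the condition described above (not the exact removed hunk):

  if (start >= svmm->unmanaged.limit)
      /* dead store: max_t() can only return start here */
      start = max_t(u64, start, svmm->unmanaged.limit);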
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
When migrating system memory to GPU memory, check that SVM has been
enabled. Even though most errors can be ignored since migration is
a performance optimization, return an error because this is a violation
of the API.
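The check amounts to something like the following sketch (the svmm
pointer is illustrative, not the exact nouveau field):

  if (!svmm)           /* SVM was never enabled for this client */
      return -EINVAL;  /* the ioctl is being misused */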
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
find_vma_intersection(mm, start, end) only guarantees that end is greater
than or equal to vma->vm_start but doesn't guarantee that start is
greater than or equal to vma->vm_start. The calculation for the
intersecting range in nouveau_svmm_bind() isn't accounting for this and
can call migrate_vma_setup() with a starting address less than
vma->vm_start. This results in migrate_vma_setup() returning -EINVAL for
the range instead of nouveau skipping that part of the range and migrating
the rest.
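A sketch of the corrected walk, clamping to the part of the range the VMA
actually covers (variable names are approximate):

  for (addr = args->va_start; addr < args->va_end; addr = next) {
      struct vm_area_struct *vma;

      vma = find_vma_intersection(mm, addr, args->va_end);
      if (!vma)
          break;
      /* vma->vm_start may be above addr: skip the uncovered hole */
      addr = max_t(u64, addr, vma->vm_start);
      next = min_t(u64, vma->vm_end, args->va_end);
      /* migrate [addr, next) with migrate_vma_setup() and friends */
  }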
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Now that flags are handled on a fine-grained per-page basis this global
flag is redundant and has a confusing overlap with the pfn_flags_mask and
default_flags.
Normalize the HMM_FAULT_SNAPSHOT behavior into one place. Callers needing
the SNAPSHOT behavior should set a pfn_flags_mask and default_flags that
always results in a cleared HMM_PFN_VALID. Then no pages will be faulted,
and HMM_FAULT_SNAPSHOT is not a special flow that overrides the masking
mechanism.
As this is the last flag, also remove the flags argument. If future flags
are needed they can be part of the struct hmm_range function arguments.
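For example, a read-only snapshot then reduces to roughly:

  /* never request HMM_PFN_VALID, so nothing is faulted in */
  range->default_flags = 0;
  range->pfn_flags_mask = 0;
  ret = hmm_range_fault(range);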
Link: https://lore.kernel.org/r/20200327200021.29372-5-jgg@ziepe.ca
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Remove the HMM_PFN_DEVICE_PRIVATE flag, no driver has ever set this flag
on input, and the only place that uses it on output can be trivially
changed to use is_device_private_page().
This removes the ability to request that device_private pages are faulted
back into system memory.
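The output-side check then becomes an ordinary test on the returned page,
along the lines of:

  if (is_device_private_page(page)) {
      /* device-private (GPU) memory: resolve its device address */
  } else {
      /* ordinary system memory: DMA map as usual */
  }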
Link: https://lore.kernel.org/r/20200316193216.920734-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Remove the hmm_mirror object and use the mmu_interval_notifier API instead
for the range, and use the normal mmu_notifier API for the general
invalidation callback.
While here re-organize the pagefault path so the locking pattern is clear.
nouveau is the only driver that uses a temporary range object and instead
forwards nearly every invalidation range directly to the HW. While this is
not how the mmu_interval_notifier was intended to be used, the overheads on
the pagefaulting path are similar to the existing hmm_mirror version,
particularly since the interval tree will be small.
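For reference, the general shape of such a per-range subscription (the
demo_* names are placeholders, not nouveau code):

  #include <linux/mmu_notifier.h>

  static DEFINE_MUTEX(demo_lock);
  static struct mmu_interval_notifier demo_sub;

  static bool demo_invalidate(struct mmu_interval_notifier *mni,
                              const struct mmu_notifier_range *range,
                              unsigned long cur_seq)
  {
      if (!mmu_notifier_range_blockable(range))
          return false;
      mutex_lock(&demo_lock);
      mmu_interval_set_seq(mni, cur_seq);
      /* tell the HW to drop mappings covering range->start .. range->end */
      mutex_unlock(&demo_lock);
      return true;
  }

  static const struct mmu_interval_notifier_ops demo_mni_ops = {
      .invalidate = demo_invalidate,
  };

  static int demo_track(struct mm_struct *mm, unsigned long start,
                        unsigned long length)
  {
      /* pairs with mmu_interval_read_begin()/_retry() on the fault path */
      return mmu_interval_notifier_insert(&demo_sub, mm, start, length,
                                          &demo_mni_ops);
  }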
Link: https://lore.kernel.org/r/20191112202231.3856-10-jgg@ziepe.ca
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
There is no reason to get the invalidate_range_start() callback via an
indirection through hmm_mirror, just register a normal notifier directly.
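i.e. the plain notifier pattern, sketched with placeholder demo_* names:

  static int demo_invalidate_range_start(struct mmu_notifier *mn,
                                         const struct mmu_notifier_range *range)
  {
      /* flush GPU mappings covering range->start .. range->end */
      return 0;
  }

  static const struct mmu_notifier_ops demo_mn_ops = {
      .invalidate_range_start = demo_invalidate_range_start,
  };
  /* then: notifier->ops = &demo_mn_ops;  mmu_notifier_register(notifier, mm); */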
Link: https://lore.kernel.org/r/20191112202231.3856-9-jgg@ziepe.ca
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
All users pass PAGE_SIZE here. If we wanted to support single entries for
huge pages we should really just add a HMM_FAULT_HUGEPAGE flag that uses
the huge page size, instead of having the caller calculate that size once
just for the hmm code to verify it.
Link: https://lore.kernel.org/r/20190806160554.14046-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
The start, end and page_shift values are all saved in the range structure,
so we might as well use that for argument passing.
Link: https://lore.kernel.org/r/20190806160554.14046-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
We'll need the nouveau_svmm structure to improve the function soon. For
now this allows using the svmm->mm reference to unlock the mmap_sem, and
thus using the same dereference chain that the caller uses to lock and
unlock it.
Link: https://lore.kernel.org/r/20190806160554.14046-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Since hmm_range_fault() doesn't use the struct hmm_range vma field, remove
it.
Link: https://lore.kernel.org/r/20190726005650.2566-8-rcampbell@nvidia.com
Suggested-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
This allows easier expansion to other flags, and also makes the callers a
little easier to read.
Link: https://lore.kernel.org/r/20190726005650.2566-4-rcampbell@nvidia.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
The hmm_mirror_ops callback function sync_cpu_device_pagetables() passes a
struct hmm_update which is a simplified version of struct
mmu_notifier_range. This is unnecessary so replace hmm_update with
mmu_notifier_range directly.
Link: https://lore.kernel.org/r/20190726005650.2566-2-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
[jgg: white space tuning]
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-EAGAIN has a magic meaning for non-blocking faults, so don't overload it.
Given that the caller doesn't check for specific error codes this change
is purely cosmetic.
Link: https://lore.kernel.org/r/20190724065258.16603-6-hch@lst.de
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Currently nouveau_svm_fault expects nouveau_range_fault to never unlock
mmap_sem, but the latter unlocks it for a random selection of error
codes. Fix this up by always unlocking mmap_sem for non-zero return values
in nouveau_range_fault, and only unlocking it in the caller for successful
returns.
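The resulting locking contract, seen from the caller (sketch; the
argument list is approximate):

  down_read(&mm->mmap_sem);
  ret = nouveau_range_fault(&svmm->mirror, &range);
  if (ret == 0) {
      /* success: mmap_sem is still held; use the result, then drop it */
      up_read(&mm->mmap_sem);
  }
  /* on any non-zero return nouveau_range_fault() has already dropped mmap_sem */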
Link: https://lore.kernel.org/r/20190724065258.16603-5-hch@lst.de
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
The parameter is always false, so remove it as well as the -EAGAIN
handling that can only happen for the non-blocking case.
Link: https://lore.kernel.org/r/20190724065258.16603-4-hch@lst.de
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
These two functions are marked as legacy APIs to get rid of, but they seem
to suit the current nouveau flow. Move them to the only user in preparation
for fixing a locking bug involving caller and callee. All comments
referring to the old API have been removed as this is now a driver private
helper.
Link: https://lore.kernel.org/r/20190724065258.16603-3-hch@lst.de
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Ralph observes that hmm_range_register() can only be called by a driver
while a mirror is registered. Make this clear in the API by passing in the
mirror structure as a parameter.
This also simplifies understanding the lifetime model for struct hmm, as
the hmm pointer must be valid as part of a registered mirror so all we
need in hmm_range_register() is a simple kref_get.
Suggested-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Philip Yang <Philip.Yang@amd.com>
This adds an ioctl to migrate a range of process address space to device
memory. On platforms without a cache coherent bus (x86, ARM, ...) this
means the CPU can not access that range directly; instead the CPU will
fault, which will migrate the memory back to system memory.
This is behind a staging flag so that we can evolve the API.
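The migration itself follows the migrate_vma flow; a rough sketch with the
current helpers (device page allocation and the actual copy are elided,
and the exact struct fields have evolved across kernel versions):

  struct migrate_vma args = {
      .vma   = vma,
      .start = start,
      .end   = end,
      .src   = src_pfns,
      .dst   = dst_pfns,
      /* newer kernels also want .pgmap_owner and .flags here */
  };
  int ret;

  ret = migrate_vma_setup(&args);        /* collect and isolate the CPU pages */
  if (ret)
      return ret;

  /* allocate device pages, copy the data, and fill args.dst accordingly */

  migrate_vma_pages(&args);              /* install the migration entries */
  migrate_vma_finalize(&args);           /* release the old pages */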
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Device memory can be used with SVM, in which case we do not have any of
the existing buffer objects. This commit adds infrastructure to allow
use of device memory without a nouveau_bo. Again, this is a temporary
solution until a rework of GPU memory management.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
This uses HMM to mirror a process' CPU page tables into a channel's page
tables, and keeps them synchronised so that both the CPU and GPU are able
to access the same memory at the same virtual address.
While this code also supports Volta/Turing, it's currently only enabled
for Pascal GPUs because channel recovery is still unreliable on the later
GPUs.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>