linux_dsm_epyc7002/drivers/gpu/drm/i915/i915_gem_userptr.c

862 lines
22 KiB
C
Raw Normal View History

drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
/*
* Copyright © 2012-2014 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
*/
#include <drm/drmP.h>
#include <drm/i915_drm.h>
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
#include "i915_drv.h"
#include "i915_trace.h"
#include "intel_drv.h"
#include <linux/mmu_context.h>
#include <linux/mmu_notifier.h>
#include <linux/mempolicy.h>
#include <linux/swap.h>
#include <linux/sched/mm.h>
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct i915_mm_struct {
struct mm_struct *mm;
struct drm_i915_private *i915;
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct i915_mmu_notifier *mn;
struct hlist_node node;
struct kref kref;
struct work_struct work;
};
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
#if defined(CONFIG_MMU_NOTIFIER)
#include <linux/interval_tree.h>
struct i915_mmu_notifier {
spinlock_t lock;
struct hlist_node node;
struct mmu_notifier mn;
struct rb_root_cached objects;
drm/i915/userptr: Flush cancellations before mmu-notifier invalidate returns In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then when we reacquire the lock, we double check that everything is ready for removing the invalidated pages. Finally to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2016-04-05 20:59:59 +07:00
struct workqueue_struct *wq;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
};
struct i915_mmu_object {
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct i915_mmu_notifier *mn;
struct drm_i915_gem_object *obj;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
struct interval_tree_node it;
struct list_head link;
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
struct work_struct work;
bool attached;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
};
static void cancel_userptr(struct work_struct *work)
{
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
struct i915_mmu_object *mo = container_of(work, typeof(*mo), work);
struct drm_i915_gem_object *obj = mo->obj;
drm/i915/userptr: Reinvent GGTT self-faulting protection lockdep doesn't like us taking the mm->mmap_sem inside the get_pages callback for a couple of reasons. The straightforward deadlock: [13755.434059] ============================================= [13755.434061] [ INFO: possible recursive locking detected ] [13755.434064] 4.11.0-rc1-CI-CI_DRM_297+ #1 Tainted: G U [13755.434066] --------------------------------------------- [13755.434068] gem_userptr_bli/8398 is trying to acquire lock: [13755.434070] (&mm->mmap_sem){++++++}, at: [<ffffffffa00c988a>] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434096] but task is already holding lock: [13755.434098] (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434105] other info that might help us debug this: [13755.434108] Possible unsafe locking scenario: [13755.434110] CPU0 [13755.434111] ---- [13755.434112] lock(&mm->mmap_sem); [13755.434115] lock(&mm->mmap_sem); [13755.434117] *** DEADLOCK *** [13755.434121] May be due to missing lock nesting notation [13755.434126] 2 locks held by gem_userptr_bli/8398: [13755.434128] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434135] #1: (&obj->mm.lock){+.+.+.}, at: [<ffffffffa00b887d>] __i915_gem_object_get_pages+0x1d/0x70 [i915] [13755.434156] stack backtrace: [13755.434161] CPU: 3 PID: 8398 Comm: gem_userptr_bli Tainted: G U 4.11.0-rc1-CI-CI_DRM_297+ #1 [13755.434165] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017 [13755.434169] Call Trace: [13755.434174] dump_stack+0x67/0x92 [13755.434178] __lock_acquire+0x133a/0x1b50 [13755.434182] lock_acquire+0xc9/0x220 [13755.434200] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434204] down_read+0x42/0x70 [13755.434221] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434238] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434255] ____i915_gem_object_get_pages+0x25/0x60 [i915] [13755.434272] __i915_gem_object_get_pages+0x59/0x70 [i915] [13755.434288] i915_gem_fault+0x397/0x6a0 [i915] [13755.434304] ? i915_gem_fault+0x1a1/0x6a0 [i915] [13755.434308] ? __lock_acquire+0x449/0x1b50 [13755.434311] ? __lock_acquire+0x449/0x1b50 [13755.434315] ? vm_mmap_pgoff+0xa9/0xd0 [13755.434318] __do_fault+0x19/0x70 [13755.434321] __handle_mm_fault+0x863/0xe50 [13755.434325] handle_mm_fault+0x17f/0x370 [13755.434329] ? handle_mm_fault+0x40/0x370 [13755.434332] __do_page_fault+0x279/0x560 [13755.434336] do_page_fault+0xc/0x10 [13755.434339] page_fault+0x22/0x30 [13755.434342] RIP: 0033:0x7f5ab91b5880 [13755.434345] RSP: 002b:00007fff62922218 EFLAGS: 00010216 [13755.434348] RAX: 0000000000b74500 RBX: 00007f5ab7f81000 RCX: 0000000000000000 [13755.434352] RDX: 0000000000100000 RSI: 00007f5ab7f81000 RDI: 00007f5aba61c000 [13755.434355] RBP: 00007f5aba61c000 R08: 0000000000000007 R09: 0000000100000000 [13755.434359] R10: 000000000000037d R11: 00007f5ab91b5840 R12: 0000000000000001 [13755.434362] R13: 0000000000000005 R14: 0000000000000001 R15: 0000000000000000 and cyclic deadlocks: [ 2566.458979] ====================================================== [ 2566.459054] [ INFO: possible circular locking dependency detected ] [ 2566.459127] 4.11.0-rc1+ #26 Not tainted [ 2566.459194] ------------------------------------------------------- [ 2566.459266] gem_streaming_w/759 is trying to acquire lock: [ 2566.459334] (&obj->mm.lock){+.+.+.}, at: [<ffffffffa034bc80>] i915_gem_object_pin_pages+0x0/0xc0 [i915] [ 2566.459605] [ 2566.459605] but task is already holding lock: [ 2566.459699] (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.459814] [ 2566.459814] which lock already depends on the new lock. [ 2566.459814] [ 2566.459934] [ 2566.459934] the existing dependency chain (in reverse order) is: [ 2566.460030] [ 2566.460030] -> #1 (&mm->mmap_sem){++++++}: [ 2566.460139] lock_acquire+0xfe/0x220 [ 2566.460214] down_read+0x4e/0x90 [ 2566.460444] i915_gem_userptr_get_pages+0x6e/0x340 [i915] [ 2566.460669] ____i915_gem_object_get_pages+0x8b/0xd0 [i915] [ 2566.460900] __i915_gem_object_get_pages+0x6a/0x80 [i915] [ 2566.461132] __i915_vma_do_pin+0x7fa/0x930 [i915] [ 2566.461352] eb_add_vma+0x67b/0x830 [i915] [ 2566.461572] eb_lookup_vmas+0xafe/0x1010 [i915] [ 2566.461792] i915_gem_do_execbuffer+0x715/0x2870 [i915] [ 2566.462012] i915_gem_execbuffer2+0x106/0x2b0 [i915] [ 2566.462152] drm_ioctl+0x36c/0x670 [drm] [ 2566.462236] do_vfs_ioctl+0x12c/0xa60 [ 2566.462317] SyS_ioctl+0x41/0x70 [ 2566.462399] entry_SYSCALL_64_fastpath+0x1c/0xb1 [ 2566.462477] [ 2566.462477] -> #0 (&obj->mm.lock){+.+.+.}: [ 2566.462587] __lock_acquire+0x1602/0x1790 [ 2566.462661] lock_acquire+0xfe/0x220 [ 2566.462893] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.463116] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.463197] __do_fault+0x42/0x130 [ 2566.463276] __handle_mm_fault+0x92c/0x1280 [ 2566.463356] handle_mm_fault+0x1e2/0x440 [ 2566.463443] __do_page_fault+0x1c4/0x500 [ 2566.463529] do_page_fault+0xc/0x10 [ 2566.463613] page_fault+0x1f/0x30 [ 2566.463693] [ 2566.463693] other info that might help us debug this: [ 2566.463693] [ 2566.463820] Possible unsafe locking scenario: [ 2566.463820] [ 2566.463918] CPU0 CPU1 [ 2566.463988] ---- ---- [ 2566.464068] lock(&mm->mmap_sem); [ 2566.464143] lock(&obj->mm.lock); [ 2566.464226] lock(&mm->mmap_sem); [ 2566.464304] lock(&obj->mm.lock); [ 2566.464378] [ 2566.464378] *** DEADLOCK *** [ 2566.464378] [ 2566.464504] 1 lock held by gem_streaming_w/759: [ 2566.464576] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.464699] [ 2566.464699] stack backtrace: [ 2566.464801] CPU: 0 PID: 759 Comm: gem_streaming_w Not tainted 4.11.0-rc1+ #26 [ 2566.464881] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F8 03/02/2016 [ 2566.464983] Call Trace: [ 2566.465061] dump_stack+0x68/0x9f [ 2566.465144] print_circular_bug+0x20b/0x260 [ 2566.465234] __lock_acquire+0x1602/0x1790 [ 2566.465323] ? debug_check_no_locks_freed+0x1a0/0x1a0 [ 2566.465564] ? i915_gem_object_wait+0x238/0x650 [i915] [ 2566.465657] ? debug_lockdep_rcu_enabled.part.4+0x1a/0x30 [ 2566.465749] lock_acquire+0xfe/0x220 [ 2566.465985] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466223] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.466461] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466699] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.466939] ? i915_gem_pwrite_ioctl+0xce0/0xce0 [i915] [ 2566.467030] ? __lock_acquire+0x642/0x1790 [ 2566.467122] ? __lock_acquire+0x642/0x1790 [ 2566.467209] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 2566.467299] ? get_unmapped_area+0x1b4/0x1d0 [ 2566.467387] __do_fault+0x42/0x130 [ 2566.467474] __handle_mm_fault+0x92c/0x1280 [ 2566.467564] ? __pmd_alloc+0x1e0/0x1e0 [ 2566.467651] ? vm_mmap_pgoff+0x160/0x190 [ 2566.467740] ? handle_mm_fault+0x111/0x440 [ 2566.467827] handle_mm_fault+0x1e2/0x440 [ 2566.467914] ? handle_mm_fault+0x5d/0x440 [ 2566.468002] __do_page_fault+0x1c4/0x500 [ 2566.468090] do_page_fault+0xc/0x10 [ 2566.468180] page_fault+0x1f/0x30 [ 2566.468263] RIP: 0033:0x557895ced32a [ 2566.468337] RSP: 002b:00007fffd6dd8a10 EFLAGS: 00010202 [ 2566.468419] RAX: 00007f659a4db000 RBX: 0000000000000003 RCX: 00007f659ad032da [ 2566.468501] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 0000000000000000 [ 2566.468586] RBP: 0000000000000007 R08: 0000000000000003 R09: 0000000100000000 [ 2566.468667] R10: 0000000000000001 R11: 0000000000000246 R12: 0000557895ceda60 [ 2566.468749] R13: 0000000000000001 R14: 00007fffd6dd8ac0 R15: 00007f659a4db000 By checking the status of the gup worker (serialized by the obj->mm.lock) we can determine whether it is still active, has failed or has succeeded. If the worker is still active (or failed), we know that it cannot be bound and so we can skip taking struct_mutex (risking potential recursion). As we check the worker status, we mark it to discard any partial results, forcing us to restart on the next get_pages. Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Fixes: 1c8782dd313e ("drm/i915/userptr: Disallow wrapping GTT into a userptr") Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Testcase: igt/gem_userptr_blits/dmabuf-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315140150.19432-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-15 21:01:50 +07:00
struct work_struct *active;
/* Cancel any active worker and force us to re-evaluate gup */
mutex_lock(&obj->mm.lock);
active = fetch_and_zero(&obj->userptr.work);
mutex_unlock(&obj->mm.lock);
if (active)
goto out;
i915_gem_object_wait(obj, I915_WAIT_ALL, MAX_SCHEDULE_TIMEOUT, NULL);
drm/i915/userptr: Reinvent GGTT self-faulting protection lockdep doesn't like us taking the mm->mmap_sem inside the get_pages callback for a couple of reasons. The straightforward deadlock: [13755.434059] ============================================= [13755.434061] [ INFO: possible recursive locking detected ] [13755.434064] 4.11.0-rc1-CI-CI_DRM_297+ #1 Tainted: G U [13755.434066] --------------------------------------------- [13755.434068] gem_userptr_bli/8398 is trying to acquire lock: [13755.434070] (&mm->mmap_sem){++++++}, at: [<ffffffffa00c988a>] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434096] but task is already holding lock: [13755.434098] (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434105] other info that might help us debug this: [13755.434108] Possible unsafe locking scenario: [13755.434110] CPU0 [13755.434111] ---- [13755.434112] lock(&mm->mmap_sem); [13755.434115] lock(&mm->mmap_sem); [13755.434117] *** DEADLOCK *** [13755.434121] May be due to missing lock nesting notation [13755.434126] 2 locks held by gem_userptr_bli/8398: [13755.434128] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434135] #1: (&obj->mm.lock){+.+.+.}, at: [<ffffffffa00b887d>] __i915_gem_object_get_pages+0x1d/0x70 [i915] [13755.434156] stack backtrace: [13755.434161] CPU: 3 PID: 8398 Comm: gem_userptr_bli Tainted: G U 4.11.0-rc1-CI-CI_DRM_297+ #1 [13755.434165] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017 [13755.434169] Call Trace: [13755.434174] dump_stack+0x67/0x92 [13755.434178] __lock_acquire+0x133a/0x1b50 [13755.434182] lock_acquire+0xc9/0x220 [13755.434200] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434204] down_read+0x42/0x70 [13755.434221] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434238] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434255] ____i915_gem_object_get_pages+0x25/0x60 [i915] [13755.434272] __i915_gem_object_get_pages+0x59/0x70 [i915] [13755.434288] i915_gem_fault+0x397/0x6a0 [i915] [13755.434304] ? i915_gem_fault+0x1a1/0x6a0 [i915] [13755.434308] ? __lock_acquire+0x449/0x1b50 [13755.434311] ? __lock_acquire+0x449/0x1b50 [13755.434315] ? vm_mmap_pgoff+0xa9/0xd0 [13755.434318] __do_fault+0x19/0x70 [13755.434321] __handle_mm_fault+0x863/0xe50 [13755.434325] handle_mm_fault+0x17f/0x370 [13755.434329] ? handle_mm_fault+0x40/0x370 [13755.434332] __do_page_fault+0x279/0x560 [13755.434336] do_page_fault+0xc/0x10 [13755.434339] page_fault+0x22/0x30 [13755.434342] RIP: 0033:0x7f5ab91b5880 [13755.434345] RSP: 002b:00007fff62922218 EFLAGS: 00010216 [13755.434348] RAX: 0000000000b74500 RBX: 00007f5ab7f81000 RCX: 0000000000000000 [13755.434352] RDX: 0000000000100000 RSI: 00007f5ab7f81000 RDI: 00007f5aba61c000 [13755.434355] RBP: 00007f5aba61c000 R08: 0000000000000007 R09: 0000000100000000 [13755.434359] R10: 000000000000037d R11: 00007f5ab91b5840 R12: 0000000000000001 [13755.434362] R13: 0000000000000005 R14: 0000000000000001 R15: 0000000000000000 and cyclic deadlocks: [ 2566.458979] ====================================================== [ 2566.459054] [ INFO: possible circular locking dependency detected ] [ 2566.459127] 4.11.0-rc1+ #26 Not tainted [ 2566.459194] ------------------------------------------------------- [ 2566.459266] gem_streaming_w/759 is trying to acquire lock: [ 2566.459334] (&obj->mm.lock){+.+.+.}, at: [<ffffffffa034bc80>] i915_gem_object_pin_pages+0x0/0xc0 [i915] [ 2566.459605] [ 2566.459605] but task is already holding lock: [ 2566.459699] (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.459814] [ 2566.459814] which lock already depends on the new lock. [ 2566.459814] [ 2566.459934] [ 2566.459934] the existing dependency chain (in reverse order) is: [ 2566.460030] [ 2566.460030] -> #1 (&mm->mmap_sem){++++++}: [ 2566.460139] lock_acquire+0xfe/0x220 [ 2566.460214] down_read+0x4e/0x90 [ 2566.460444] i915_gem_userptr_get_pages+0x6e/0x340 [i915] [ 2566.460669] ____i915_gem_object_get_pages+0x8b/0xd0 [i915] [ 2566.460900] __i915_gem_object_get_pages+0x6a/0x80 [i915] [ 2566.461132] __i915_vma_do_pin+0x7fa/0x930 [i915] [ 2566.461352] eb_add_vma+0x67b/0x830 [i915] [ 2566.461572] eb_lookup_vmas+0xafe/0x1010 [i915] [ 2566.461792] i915_gem_do_execbuffer+0x715/0x2870 [i915] [ 2566.462012] i915_gem_execbuffer2+0x106/0x2b0 [i915] [ 2566.462152] drm_ioctl+0x36c/0x670 [drm] [ 2566.462236] do_vfs_ioctl+0x12c/0xa60 [ 2566.462317] SyS_ioctl+0x41/0x70 [ 2566.462399] entry_SYSCALL_64_fastpath+0x1c/0xb1 [ 2566.462477] [ 2566.462477] -> #0 (&obj->mm.lock){+.+.+.}: [ 2566.462587] __lock_acquire+0x1602/0x1790 [ 2566.462661] lock_acquire+0xfe/0x220 [ 2566.462893] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.463116] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.463197] __do_fault+0x42/0x130 [ 2566.463276] __handle_mm_fault+0x92c/0x1280 [ 2566.463356] handle_mm_fault+0x1e2/0x440 [ 2566.463443] __do_page_fault+0x1c4/0x500 [ 2566.463529] do_page_fault+0xc/0x10 [ 2566.463613] page_fault+0x1f/0x30 [ 2566.463693] [ 2566.463693] other info that might help us debug this: [ 2566.463693] [ 2566.463820] Possible unsafe locking scenario: [ 2566.463820] [ 2566.463918] CPU0 CPU1 [ 2566.463988] ---- ---- [ 2566.464068] lock(&mm->mmap_sem); [ 2566.464143] lock(&obj->mm.lock); [ 2566.464226] lock(&mm->mmap_sem); [ 2566.464304] lock(&obj->mm.lock); [ 2566.464378] [ 2566.464378] *** DEADLOCK *** [ 2566.464378] [ 2566.464504] 1 lock held by gem_streaming_w/759: [ 2566.464576] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.464699] [ 2566.464699] stack backtrace: [ 2566.464801] CPU: 0 PID: 759 Comm: gem_streaming_w Not tainted 4.11.0-rc1+ #26 [ 2566.464881] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F8 03/02/2016 [ 2566.464983] Call Trace: [ 2566.465061] dump_stack+0x68/0x9f [ 2566.465144] print_circular_bug+0x20b/0x260 [ 2566.465234] __lock_acquire+0x1602/0x1790 [ 2566.465323] ? debug_check_no_locks_freed+0x1a0/0x1a0 [ 2566.465564] ? i915_gem_object_wait+0x238/0x650 [i915] [ 2566.465657] ? debug_lockdep_rcu_enabled.part.4+0x1a/0x30 [ 2566.465749] lock_acquire+0xfe/0x220 [ 2566.465985] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466223] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.466461] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466699] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.466939] ? i915_gem_pwrite_ioctl+0xce0/0xce0 [i915] [ 2566.467030] ? __lock_acquire+0x642/0x1790 [ 2566.467122] ? __lock_acquire+0x642/0x1790 [ 2566.467209] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 2566.467299] ? get_unmapped_area+0x1b4/0x1d0 [ 2566.467387] __do_fault+0x42/0x130 [ 2566.467474] __handle_mm_fault+0x92c/0x1280 [ 2566.467564] ? __pmd_alloc+0x1e0/0x1e0 [ 2566.467651] ? vm_mmap_pgoff+0x160/0x190 [ 2566.467740] ? handle_mm_fault+0x111/0x440 [ 2566.467827] handle_mm_fault+0x1e2/0x440 [ 2566.467914] ? handle_mm_fault+0x5d/0x440 [ 2566.468002] __do_page_fault+0x1c4/0x500 [ 2566.468090] do_page_fault+0xc/0x10 [ 2566.468180] page_fault+0x1f/0x30 [ 2566.468263] RIP: 0033:0x557895ced32a [ 2566.468337] RSP: 002b:00007fffd6dd8a10 EFLAGS: 00010202 [ 2566.468419] RAX: 00007f659a4db000 RBX: 0000000000000003 RCX: 00007f659ad032da [ 2566.468501] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 0000000000000000 [ 2566.468586] RBP: 0000000000000007 R08: 0000000000000003 R09: 0000000100000000 [ 2566.468667] R10: 0000000000000001 R11: 0000000000000246 R12: 0000557895ceda60 [ 2566.468749] R13: 0000000000000001 R14: 00007fffd6dd8ac0 R15: 00007f659a4db000 By checking the status of the gup worker (serialized by the obj->mm.lock) we can determine whether it is still active, has failed or has succeeded. If the worker is still active (or failed), we know that it cannot be bound and so we can skip taking struct_mutex (risking potential recursion). As we check the worker status, we mark it to discard any partial results, forcing us to restart on the next get_pages. Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Fixes: 1c8782dd313e ("drm/i915/userptr: Disallow wrapping GTT into a userptr") Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Testcase: igt/gem_userptr_blits/dmabuf-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315140150.19432-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-15 21:01:50 +07:00
mutex_lock(&obj->base.dev->struct_mutex);
/* We are inside a kthread context and can't be interrupted */
if (i915_gem_object_unbind(obj) == 0)
__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
WARN_ONCE(i915_gem_object_has_pages(obj),
"Failed to release pages: bind_count=%d, pages_pin_count=%d, pin_global=%d\n",
obj->bind_count,
atomic_read(&obj->mm.pages_pin_count),
obj->pin_global);
drm/i915/userptr: Reinvent GGTT self-faulting protection lockdep doesn't like us taking the mm->mmap_sem inside the get_pages callback for a couple of reasons. The straightforward deadlock: [13755.434059] ============================================= [13755.434061] [ INFO: possible recursive locking detected ] [13755.434064] 4.11.0-rc1-CI-CI_DRM_297+ #1 Tainted: G U [13755.434066] --------------------------------------------- [13755.434068] gem_userptr_bli/8398 is trying to acquire lock: [13755.434070] (&mm->mmap_sem){++++++}, at: [<ffffffffa00c988a>] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434096] but task is already holding lock: [13755.434098] (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434105] other info that might help us debug this: [13755.434108] Possible unsafe locking scenario: [13755.434110] CPU0 [13755.434111] ---- [13755.434112] lock(&mm->mmap_sem); [13755.434115] lock(&mm->mmap_sem); [13755.434117] *** DEADLOCK *** [13755.434121] May be due to missing lock nesting notation [13755.434126] 2 locks held by gem_userptr_bli/8398: [13755.434128] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434135] #1: (&obj->mm.lock){+.+.+.}, at: [<ffffffffa00b887d>] __i915_gem_object_get_pages+0x1d/0x70 [i915] [13755.434156] stack backtrace: [13755.434161] CPU: 3 PID: 8398 Comm: gem_userptr_bli Tainted: G U 4.11.0-rc1-CI-CI_DRM_297+ #1 [13755.434165] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017 [13755.434169] Call Trace: [13755.434174] dump_stack+0x67/0x92 [13755.434178] __lock_acquire+0x133a/0x1b50 [13755.434182] lock_acquire+0xc9/0x220 [13755.434200] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434204] down_read+0x42/0x70 [13755.434221] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434238] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434255] ____i915_gem_object_get_pages+0x25/0x60 [i915] [13755.434272] __i915_gem_object_get_pages+0x59/0x70 [i915] [13755.434288] i915_gem_fault+0x397/0x6a0 [i915] [13755.434304] ? i915_gem_fault+0x1a1/0x6a0 [i915] [13755.434308] ? __lock_acquire+0x449/0x1b50 [13755.434311] ? __lock_acquire+0x449/0x1b50 [13755.434315] ? vm_mmap_pgoff+0xa9/0xd0 [13755.434318] __do_fault+0x19/0x70 [13755.434321] __handle_mm_fault+0x863/0xe50 [13755.434325] handle_mm_fault+0x17f/0x370 [13755.434329] ? handle_mm_fault+0x40/0x370 [13755.434332] __do_page_fault+0x279/0x560 [13755.434336] do_page_fault+0xc/0x10 [13755.434339] page_fault+0x22/0x30 [13755.434342] RIP: 0033:0x7f5ab91b5880 [13755.434345] RSP: 002b:00007fff62922218 EFLAGS: 00010216 [13755.434348] RAX: 0000000000b74500 RBX: 00007f5ab7f81000 RCX: 0000000000000000 [13755.434352] RDX: 0000000000100000 RSI: 00007f5ab7f81000 RDI: 00007f5aba61c000 [13755.434355] RBP: 00007f5aba61c000 R08: 0000000000000007 R09: 0000000100000000 [13755.434359] R10: 000000000000037d R11: 00007f5ab91b5840 R12: 0000000000000001 [13755.434362] R13: 0000000000000005 R14: 0000000000000001 R15: 0000000000000000 and cyclic deadlocks: [ 2566.458979] ====================================================== [ 2566.459054] [ INFO: possible circular locking dependency detected ] [ 2566.459127] 4.11.0-rc1+ #26 Not tainted [ 2566.459194] ------------------------------------------------------- [ 2566.459266] gem_streaming_w/759 is trying to acquire lock: [ 2566.459334] (&obj->mm.lock){+.+.+.}, at: [<ffffffffa034bc80>] i915_gem_object_pin_pages+0x0/0xc0 [i915] [ 2566.459605] [ 2566.459605] but task is already holding lock: [ 2566.459699] (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.459814] [ 2566.459814] which lock already depends on the new lock. [ 2566.459814] [ 2566.459934] [ 2566.459934] the existing dependency chain (in reverse order) is: [ 2566.460030] [ 2566.460030] -> #1 (&mm->mmap_sem){++++++}: [ 2566.460139] lock_acquire+0xfe/0x220 [ 2566.460214] down_read+0x4e/0x90 [ 2566.460444] i915_gem_userptr_get_pages+0x6e/0x340 [i915] [ 2566.460669] ____i915_gem_object_get_pages+0x8b/0xd0 [i915] [ 2566.460900] __i915_gem_object_get_pages+0x6a/0x80 [i915] [ 2566.461132] __i915_vma_do_pin+0x7fa/0x930 [i915] [ 2566.461352] eb_add_vma+0x67b/0x830 [i915] [ 2566.461572] eb_lookup_vmas+0xafe/0x1010 [i915] [ 2566.461792] i915_gem_do_execbuffer+0x715/0x2870 [i915] [ 2566.462012] i915_gem_execbuffer2+0x106/0x2b0 [i915] [ 2566.462152] drm_ioctl+0x36c/0x670 [drm] [ 2566.462236] do_vfs_ioctl+0x12c/0xa60 [ 2566.462317] SyS_ioctl+0x41/0x70 [ 2566.462399] entry_SYSCALL_64_fastpath+0x1c/0xb1 [ 2566.462477] [ 2566.462477] -> #0 (&obj->mm.lock){+.+.+.}: [ 2566.462587] __lock_acquire+0x1602/0x1790 [ 2566.462661] lock_acquire+0xfe/0x220 [ 2566.462893] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.463116] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.463197] __do_fault+0x42/0x130 [ 2566.463276] __handle_mm_fault+0x92c/0x1280 [ 2566.463356] handle_mm_fault+0x1e2/0x440 [ 2566.463443] __do_page_fault+0x1c4/0x500 [ 2566.463529] do_page_fault+0xc/0x10 [ 2566.463613] page_fault+0x1f/0x30 [ 2566.463693] [ 2566.463693] other info that might help us debug this: [ 2566.463693] [ 2566.463820] Possible unsafe locking scenario: [ 2566.463820] [ 2566.463918] CPU0 CPU1 [ 2566.463988] ---- ---- [ 2566.464068] lock(&mm->mmap_sem); [ 2566.464143] lock(&obj->mm.lock); [ 2566.464226] lock(&mm->mmap_sem); [ 2566.464304] lock(&obj->mm.lock); [ 2566.464378] [ 2566.464378] *** DEADLOCK *** [ 2566.464378] [ 2566.464504] 1 lock held by gem_streaming_w/759: [ 2566.464576] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.464699] [ 2566.464699] stack backtrace: [ 2566.464801] CPU: 0 PID: 759 Comm: gem_streaming_w Not tainted 4.11.0-rc1+ #26 [ 2566.464881] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F8 03/02/2016 [ 2566.464983] Call Trace: [ 2566.465061] dump_stack+0x68/0x9f [ 2566.465144] print_circular_bug+0x20b/0x260 [ 2566.465234] __lock_acquire+0x1602/0x1790 [ 2566.465323] ? debug_check_no_locks_freed+0x1a0/0x1a0 [ 2566.465564] ? i915_gem_object_wait+0x238/0x650 [i915] [ 2566.465657] ? debug_lockdep_rcu_enabled.part.4+0x1a/0x30 [ 2566.465749] lock_acquire+0xfe/0x220 [ 2566.465985] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466223] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.466461] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466699] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.466939] ? i915_gem_pwrite_ioctl+0xce0/0xce0 [i915] [ 2566.467030] ? __lock_acquire+0x642/0x1790 [ 2566.467122] ? __lock_acquire+0x642/0x1790 [ 2566.467209] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 2566.467299] ? get_unmapped_area+0x1b4/0x1d0 [ 2566.467387] __do_fault+0x42/0x130 [ 2566.467474] __handle_mm_fault+0x92c/0x1280 [ 2566.467564] ? __pmd_alloc+0x1e0/0x1e0 [ 2566.467651] ? vm_mmap_pgoff+0x160/0x190 [ 2566.467740] ? handle_mm_fault+0x111/0x440 [ 2566.467827] handle_mm_fault+0x1e2/0x440 [ 2566.467914] ? handle_mm_fault+0x5d/0x440 [ 2566.468002] __do_page_fault+0x1c4/0x500 [ 2566.468090] do_page_fault+0xc/0x10 [ 2566.468180] page_fault+0x1f/0x30 [ 2566.468263] RIP: 0033:0x557895ced32a [ 2566.468337] RSP: 002b:00007fffd6dd8a10 EFLAGS: 00010202 [ 2566.468419] RAX: 00007f659a4db000 RBX: 0000000000000003 RCX: 00007f659ad032da [ 2566.468501] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 0000000000000000 [ 2566.468586] RBP: 0000000000000007 R08: 0000000000000003 R09: 0000000100000000 [ 2566.468667] R10: 0000000000000001 R11: 0000000000000246 R12: 0000557895ceda60 [ 2566.468749] R13: 0000000000000001 R14: 00007fffd6dd8ac0 R15: 00007f659a4db000 By checking the status of the gup worker (serialized by the obj->mm.lock) we can determine whether it is still active, has failed or has succeeded. If the worker is still active (or failed), we know that it cannot be bound and so we can skip taking struct_mutex (risking potential recursion). As we check the worker status, we mark it to discard any partial results, forcing us to restart on the next get_pages. Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Fixes: 1c8782dd313e ("drm/i915/userptr: Disallow wrapping GTT into a userptr") Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Testcase: igt/gem_userptr_blits/dmabuf-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315140150.19432-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-15 21:01:50 +07:00
mutex_unlock(&obj->base.dev->struct_mutex);
out:
i915_gem_object_put(obj);
}
static void add_object(struct i915_mmu_object *mo)
{
if (mo->attached)
return;
interval_tree_insert(&mo->it, &mo->mn->objects);
mo->attached = true;
}
static void del_object(struct i915_mmu_object *mo)
{
if (!mo->attached)
return;
interval_tree_remove(&mo->it, &mo->mn->objects);
mo->attached = false;
}
mm, oom: distinguish blockable mode for mmu notifiers There are several blockable mmu notifiers which might sleep in mmu_notifier_invalidate_range_start and that is a problem for the oom_reaper because it needs to guarantee a forward progress so it cannot depend on any sleepable locks. Currently we simply back off and mark an oom victim with blockable mmu notifiers as done after a short sleep. That can result in selecting a new oom victim prematurely because the previous one still hasn't torn its memory down yet. We can do much better though. Even if mmu notifiers use sleepable locks there is no reason to automatically assume those locks are held. Moreover majority of notifiers only care about a portion of the address space and there is absolutely zero reason to fail when we are unmapping an unrelated range. Many notifiers do really block and wait for HW which is harder to handle and we have to bail out though. This patch handles the low hanging fruit. __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks are not allowed to sleep if the flag is set to false. This is achieved by using trylock instead of the sleepable lock for most callbacks and continue as long as we do not block down the call chain. I think we can improve that even further because there is a common pattern to do a range lookup first and then do something about that. The first part can be done without a sleeping lock in most cases AFAICS. The oom_reaper end then simply retries if there is at least one notifier which couldn't make any progress in !blockable mode. A retry loop is already implemented to wait for the mmap_sem and this is basically the same thing. The simplest way for driver developers to test this code path is to wrap userspace code which uses these notifiers into a memcg and set the hard limit to hit the oom. This can be done e.g. after the test faults in all the mmu notifier managed memory and set the hard limit to something really small. Then we are looking for a proper process tear down. [akpm@linux-foundation.org: coding style fixes] [akpm@linux-foundation.org: minor code simplification] Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp Reported-by: David Rientjes <rientjes@google.com> Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Jérôme Glisse" <jglisse@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 11:52:33 +07:00
static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
struct mm_struct *mm,
unsigned long start,
mm, oom: distinguish blockable mode for mmu notifiers There are several blockable mmu notifiers which might sleep in mmu_notifier_invalidate_range_start and that is a problem for the oom_reaper because it needs to guarantee a forward progress so it cannot depend on any sleepable locks. Currently we simply back off and mark an oom victim with blockable mmu notifiers as done after a short sleep. That can result in selecting a new oom victim prematurely because the previous one still hasn't torn its memory down yet. We can do much better though. Even if mmu notifiers use sleepable locks there is no reason to automatically assume those locks are held. Moreover majority of notifiers only care about a portion of the address space and there is absolutely zero reason to fail when we are unmapping an unrelated range. Many notifiers do really block and wait for HW which is harder to handle and we have to bail out though. This patch handles the low hanging fruit. __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks are not allowed to sleep if the flag is set to false. This is achieved by using trylock instead of the sleepable lock for most callbacks and continue as long as we do not block down the call chain. I think we can improve that even further because there is a common pattern to do a range lookup first and then do something about that. The first part can be done without a sleeping lock in most cases AFAICS. The oom_reaper end then simply retries if there is at least one notifier which couldn't make any progress in !blockable mode. A retry loop is already implemented to wait for the mmap_sem and this is basically the same thing. The simplest way for driver developers to test this code path is to wrap userspace code which uses these notifiers into a memcg and set the hard limit to hit the oom. This can be done e.g. after the test faults in all the mmu notifier managed memory and set the hard limit to something really small. Then we are looking for a proper process tear down. [akpm@linux-foundation.org: coding style fixes] [akpm@linux-foundation.org: minor code simplification] Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp Reported-by: David Rientjes <rientjes@google.com> Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Jérôme Glisse" <jglisse@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 11:52:33 +07:00
unsigned long end,
bool blockable)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
{
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
struct i915_mmu_notifier *mn =
container_of(_mn, struct i915_mmu_notifier, mn);
struct i915_mmu_object *mo;
struct interval_tree_node *it;
LIST_HEAD(cancelled);
if (RB_EMPTY_ROOT(&mn->objects.rb_root))
mm, oom: distinguish blockable mode for mmu notifiers There are several blockable mmu notifiers which might sleep in mmu_notifier_invalidate_range_start and that is a problem for the oom_reaper because it needs to guarantee a forward progress so it cannot depend on any sleepable locks. Currently we simply back off and mark an oom victim with blockable mmu notifiers as done after a short sleep. That can result in selecting a new oom victim prematurely because the previous one still hasn't torn its memory down yet. We can do much better though. Even if mmu notifiers use sleepable locks there is no reason to automatically assume those locks are held. Moreover majority of notifiers only care about a portion of the address space and there is absolutely zero reason to fail when we are unmapping an unrelated range. Many notifiers do really block and wait for HW which is harder to handle and we have to bail out though. This patch handles the low hanging fruit. __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks are not allowed to sleep if the flag is set to false. This is achieved by using trylock instead of the sleepable lock for most callbacks and continue as long as we do not block down the call chain. I think we can improve that even further because there is a common pattern to do a range lookup first and then do something about that. The first part can be done without a sleeping lock in most cases AFAICS. The oom_reaper end then simply retries if there is at least one notifier which couldn't make any progress in !blockable mode. A retry loop is already implemented to wait for the mmap_sem and this is basically the same thing. The simplest way for driver developers to test this code path is to wrap userspace code which uses these notifiers into a memcg and set the hard limit to hit the oom. This can be done e.g. after the test faults in all the mmu notifier managed memory and set the hard limit to something really small. Then we are looking for a proper process tear down. [akpm@linux-foundation.org: coding style fixes] [akpm@linux-foundation.org: minor code simplification] Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp Reported-by: David Rientjes <rientjes@google.com> Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Jérôme Glisse" <jglisse@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 11:52:33 +07:00
return 0;
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
/* interval ranges are inclusive, but invalidate range is exclusive */
end--;
spin_lock(&mn->lock);
it = interval_tree_iter_first(&mn->objects, start, end);
while (it) {
mm, oom: distinguish blockable mode for mmu notifiers There are several blockable mmu notifiers which might sleep in mmu_notifier_invalidate_range_start and that is a problem for the oom_reaper because it needs to guarantee a forward progress so it cannot depend on any sleepable locks. Currently we simply back off and mark an oom victim with blockable mmu notifiers as done after a short sleep. That can result in selecting a new oom victim prematurely because the previous one still hasn't torn its memory down yet. We can do much better though. Even if mmu notifiers use sleepable locks there is no reason to automatically assume those locks are held. Moreover majority of notifiers only care about a portion of the address space and there is absolutely zero reason to fail when we are unmapping an unrelated range. Many notifiers do really block and wait for HW which is harder to handle and we have to bail out though. This patch handles the low hanging fruit. __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks are not allowed to sleep if the flag is set to false. This is achieved by using trylock instead of the sleepable lock for most callbacks and continue as long as we do not block down the call chain. I think we can improve that even further because there is a common pattern to do a range lookup first and then do something about that. The first part can be done without a sleeping lock in most cases AFAICS. The oom_reaper end then simply retries if there is at least one notifier which couldn't make any progress in !blockable mode. A retry loop is already implemented to wait for the mmap_sem and this is basically the same thing. The simplest way for driver developers to test this code path is to wrap userspace code which uses these notifiers into a memcg and set the hard limit to hit the oom. This can be done e.g. after the test faults in all the mmu notifier managed memory and set the hard limit to something really small. Then we are looking for a proper process tear down. [akpm@linux-foundation.org: coding style fixes] [akpm@linux-foundation.org: minor code simplification] Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp Reported-by: David Rientjes <rientjes@google.com> Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Jérôme Glisse" <jglisse@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 11:52:33 +07:00
if (!blockable) {
spin_unlock(&mn->lock);
return -EAGAIN;
}
/* The mmu_object is released late when destroying the
* GEM object so it is entirely possible to gain a
* reference on an object in the process of being freed
* since our serialisation is via the spinlock and not
* the struct_mutex - and consequently use it after it
* is freed and then double free it. To prevent that
* use-after-free we only acquire a reference on the
* object if it is not in the process of being destroyed.
*/
mo = container_of(it, struct i915_mmu_object, it);
if (kref_get_unless_zero(&mo->obj->base.refcount))
drm/i915/userptr: Flush cancellations before mmu-notifier invalidate returns In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then when we reacquire the lock, we double check that everything is ready for removing the invalidated pages. Finally to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2016-04-05 20:59:59 +07:00
queue_work(mn->wq, &mo->work);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
list_add(&mo->link, &cancelled);
it = interval_tree_iter_next(it, start, end);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
list_for_each_entry(mo, &cancelled, link)
del_object(mo);
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
spin_unlock(&mn->lock);
drm/i915/userptr: Flush cancellations before mmu-notifier invalidate returns In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then when we reacquire the lock, we double check that everything is ready for removing the invalidated pages. Finally to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2016-04-05 20:59:59 +07:00
if (!list_empty(&cancelled))
flush_workqueue(mn->wq);
mm, oom: distinguish blockable mode for mmu notifiers There are several blockable mmu notifiers which might sleep in mmu_notifier_invalidate_range_start and that is a problem for the oom_reaper because it needs to guarantee a forward progress so it cannot depend on any sleepable locks. Currently we simply back off and mark an oom victim with blockable mmu notifiers as done after a short sleep. That can result in selecting a new oom victim prematurely because the previous one still hasn't torn its memory down yet. We can do much better though. Even if mmu notifiers use sleepable locks there is no reason to automatically assume those locks are held. Moreover majority of notifiers only care about a portion of the address space and there is absolutely zero reason to fail when we are unmapping an unrelated range. Many notifiers do really block and wait for HW which is harder to handle and we have to bail out though. This patch handles the low hanging fruit. __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks are not allowed to sleep if the flag is set to false. This is achieved by using trylock instead of the sleepable lock for most callbacks and continue as long as we do not block down the call chain. I think we can improve that even further because there is a common pattern to do a range lookup first and then do something about that. The first part can be done without a sleeping lock in most cases AFAICS. The oom_reaper end then simply retries if there is at least one notifier which couldn't make any progress in !blockable mode. A retry loop is already implemented to wait for the mmap_sem and this is basically the same thing. The simplest way for driver developers to test this code path is to wrap userspace code which uses these notifiers into a memcg and set the hard limit to hit the oom. This can be done e.g. after the test faults in all the mmu notifier managed memory and set the hard limit to something really small. Then we are looking for a proper process tear down. [akpm@linux-foundation.org: coding style fixes] [akpm@linux-foundation.org: minor code simplification] Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp Reported-by: David Rientjes <rientjes@google.com> Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Jérôme Glisse" <jglisse@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-22 11:52:33 +07:00
return 0;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
.invalidate_range_start = i915_gem_userptr_mn_invalidate_range_start,
};
static struct i915_mmu_notifier *
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
i915_mmu_notifier_create(struct mm_struct *mm)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
{
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct i915_mmu_notifier *mn;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mn = kmalloc(sizeof(*mn), GFP_KERNEL);
if (mn == NULL)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
return ERR_PTR(-ENOMEM);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
spin_lock_init(&mn->lock);
mn->mn.ops = &i915_gem_userptr_notifier;
mn->objects = RB_ROOT_CACHED;
drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM Commit 21cc6431e0c2 ("drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM") tried to fixup the check_flush_dependency warning for hitting i915_gem_userptr_mn_invalidate_range_start from within the shrinker, but I failed to notice userptr has 2 similarly named workqueues. I marked up i915-userptr-acquire as WQ_MEM_RECLAIM whereas we only wait upon i915-userptr-release from inside the reclaim paths. [62530.869510] workqueue: PF_MEMALLOC task 7983(gem_shrink) is flushing !WQ_MEM_RECLAIM i915-userptr-release: (null) [62530.869515] ------------[ cut here ]------------ [62530.869519] WARNING: CPU: 1 PID: 7983 at kernel/workqueue.c:2434 check_flush_dependency+0x7f/0x110 [62530.869519] Modules linked in: pegasus mii ip6table_filter ip6_tables bnep iptable_filter snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic binfmt_misc nls_iso8859_1 intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp snd_hda_intel snd_hda_codec kvm_intel snd_hda_core snd_hwdep kvm snd_pcm irqbypass snd_seq_midi snd_seq_midi_event snd_rawmidi crct10dif_pclmul crc32_pclmul 8250_dw ghash_clmulni_intel snd_seq pcbc snd_seq_device snd_timer btusb aesni_intel btrtl btbcm aes_x86_64 iwlwifi btintel crypto_simd glue_helper cryptd bluetooth snd intel_cstate input_leds idma64 intel_rapl_perf ecdh_generic serio_raw soundcore cfg80211 wmi_bmof virt_dma intel_lpss_pci intel_lpss acpi_als kfifo_buf industrialio winbond_cir soc_button_array rc_core spidev tpm_crb intel_hid acpi_pad mac_hid sparse_keymap [62530.869546] parport_pc ppdev lp parport ip_tables x_tables autofs4 hid_generic usbhid i915 i2c_algo_bit prime_numbers drm_kms_helper syscopyarea e1000e sysfillrect sysimgblt fb_sys_fops ahci ptp pps_core libahci drm wmi video i2c_hid hid [62530.869557] CPU: 1 PID: 7983 Comm: gem_shrink Tainted: G U W L 4.14.0-rc8-drm-tip-ww45-commit-1342299+ #1 [62530.869558] Hardware name: Intel Corporation CoffeeLake Client Platform/CoffeeLake H DDR4 RVP, BIOS CNLSFWR1.R00.X098.A00.1707301945 07/30/2017 [62530.869559] task: ffffa1049dbeec80 task.stack: ffffae7d05c44000 [62530.869560] RIP: 0010:check_flush_dependency+0x7f/0x110 [62530.869561] RSP: 0018:ffffae7d05c473a0 EFLAGS: 00010286 [62530.869562] RAX: 000000000000006e RBX: ffffa1049540f400 RCX: ffffffffa3e55788 [62530.869562] RDX: 0000000000000000 RSI: 0000000000000092 RDI: 0000000000000202 [62530.869563] RBP: ffffae7d05c473c0 R08: 000000000000006e R09: 000000000038bb0e [62530.869563] R10: 0000000000000000 R11: 000000000000006e R12: ffffa1049dbeec80 [62530.869564] R13: 0000000000000000 R14: 0000000000000000 R15: ffffae7d05c473e0 [62530.869565] FS: 00007f621b129880(0000) GS:ffffa1050b240000(0000) knlGS:0000000000000000 [62530.869566] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [62530.869566] CR2: 00007f6214400000 CR3: 0000000353a17003 CR4: 00000000003606e0 [62530.869567] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [62530.869567] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [62530.869568] Call Trace: [62530.869570] flush_workqueue+0x115/0x3d0 [62530.869573] ? wake_up_process+0x15/0x20 [62530.869596] i915_gem_userptr_mn_invalidate_range_start+0x12f/0x160 [i915] [62530.869614] ? i915_gem_userptr_mn_invalidate_range_start+0x12f/0x160 [i915] [62530.869616] __mmu_notifier_invalidate_range_start+0x55/0x80 [62530.869618] try_to_unmap_one+0x791/0x8b0 [62530.869620] ? call_rwsem_down_read_failed+0x18/0x30 [62530.869622] rmap_walk_anon+0x10b/0x260 [62530.869624] rmap_walk+0x48/0x60 [62530.869625] try_to_unmap+0x93/0xf0 [62530.869626] ? page_remove_rmap+0x2a0/0x2a0 [62530.869627] ? page_not_mapped+0x20/0x20 [62530.869629] ? page_get_anon_vma+0x90/0x90 [62530.869630] ? invalid_mkclean_vma+0x20/0x20 [62530.869631] migrate_pages+0x946/0xaa0 [62530.869633] ? __ClearPageMovable+0x10/0x10 [62530.869635] ? isolate_freepages_block+0x3c0/0x3c0 [62530.869636] compact_zone+0x22f/0x970 [62530.869638] compact_zone_order+0xa3/0xd0 [62530.869640] try_to_compact_pages+0x1a5/0x2a0 [62530.869641] ? try_to_compact_pages+0x1a5/0x2a0 [62530.869643] __alloc_pages_direct_compact+0x50/0x110 [62530.869644] __alloc_pages_slowpath+0x4da/0xf30 [62530.869646] __alloc_pages_nodemask+0x262/0x280 [62530.869648] alloc_pages_vma+0x165/0x1e0 [62530.869649] shmem_alloc_hugepage+0xd0/0x130 [62530.869651] ? __radix_tree_insert+0x45/0x230 [62530.869652] ? __vm_enough_memory+0x29/0x130 [62530.869654] shmem_alloc_and_acct_page+0x10d/0x1e0 [62530.869655] shmem_getpage_gfp+0x426/0xc00 [62530.869657] shmem_fault+0xa0/0x1e0 [62530.869659] ? file_update_time+0x60/0x110 [62530.869660] __do_fault+0x1e/0xc0 [62530.869661] __handle_mm_fault+0xa35/0x1170 [62530.869662] handle_mm_fault+0xcc/0x1c0 [62530.869664] __do_page_fault+0x262/0x4f0 [62530.869666] do_page_fault+0x2e/0xe0 [62530.869667] page_fault+0x22/0x30 [62530.869668] RIP: 0033:0x404335 [62530.869669] RSP: 002b:00007fff7829e420 EFLAGS: 00010216 [62530.869670] RAX: 00007f6210400000 RBX: 0000000000000004 RCX: 0000000000b80000 [62530.869670] RDX: 0000000000002e01 RSI: 0000000000008000 RDI: 0000000000000004 [62530.869671] RBP: 0000000000000019 R08: 0000000000000002 R09: 0000000000000000 [62530.869671] R10: 0000000000000559 R11: 0000000000000246 R12: 0000000008000000 [62530.869672] R13: 00000000004042f0 R14: 0000000000000004 R15: 000000000000007e [62530.869673] Code: 00 8b b0 18 05 00 00 48 8d 8b b0 00 00 00 48 8d 90 c0 06 00 00 4d 89 f0 48 c7 c7 40 c0 c8 a3 c6 05 68 c5 e8 00 01 e8 c2 68 04 00 <0f> ff 4d 85 ed 74 18 49 8b 45 20 48 8b 70 08 8b 86 00 01 00 00 [62530.869691] ---[ end trace 01e01ad0ff5781f8 ]--- Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103739 Fixes: 21cc6431e0c2 ("drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171114173520.8829-1-chris@chris-wilson.co.uk Reviewed-by: Matthew Auld <matthew.auld@intel.com> (cherry picked from commit 41729bf2248bc8593e5103d43974079cc269524c) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2017-11-15 00:35:20 +07:00
mn->wq = alloc_workqueue("i915-userptr-release",
WQ_UNBOUND | WQ_MEM_RECLAIM,
0);
drm/i915/userptr: Flush cancellations before mmu-notifier invalidate returns In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then when we reacquire the lock, we double check that everything is ready for removing the invalidated pages. Finally to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2016-04-05 20:59:59 +07:00
if (mn->wq == NULL) {
kfree(mn);
return ERR_PTR(-ENOMEM);
}
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
return mn;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
static void
i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
{
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct i915_mmu_object *mo;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mo = obj->userptr.mmu_object;
if (mo == NULL)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
return;
spin_lock(&mo->mn->lock);
del_object(mo);
spin_unlock(&mo->mn->lock);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
kfree(mo);
obj->userptr.mmu_object = NULL;
}
static struct i915_mmu_notifier *
i915_mmu_notifier_find(struct i915_mm_struct *mm)
{
drm/i915: Preallocate our mmu notifier workequeu to unbreak cpu hotplug deadlock 4.14-rc1 gained the fancy new cross-release support in lockdep, which seems to have uncovered a few more rules about what is allowed and isn't. This one here seems to indicate that allocating a work-queue while holding mmap_sem is a no-go, so let's try to preallocate it. Of course another way to break this chain would be somewhere in the cpu hotplug code, since this isn't the only trace we're finding now which goes through msr_create_device. Full lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 4.14.0-rc1-CI-CI_DRM_3118+ #1 Tainted: G U ------------------------------------------------------ prime_mmap/1551 is trying to acquire lock: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109dbb7>] apply_workqueue_attrs+0x17/0x50 but task is already holding lock: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #6 (&dev_priv->mm_lock){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #5 (&mm->mmap_sem){++++}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #4 (&sb->s_type->i_mutex_key#5){++++}: down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #3 ((complete)&req.done){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xa3/0x840 cpuhp_thread_fun+0x7a/0x150 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #2 (cpuhp_state){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x10b/0x170 __cpuhp_setup_state_cpuslocked+0x134/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #1 (cpuhp_state_mutex){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x52/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #0 (cpu_hotplug_lock.rw_sem){++++}: check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 other info that might help us debug this: Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&dev_priv->mm_lock); lock(&mm->mmap_sem); lock(&dev_priv->mm_lock); lock(cpu_hotplug_lock.rw_sem); *** DEADLOCK *** 2 locks held by prime_mmap/1551: #0: (&mm->mmap_sem){++++}, at: [<ffffffffa01a7b18>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915] #1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] stack backtrace: CPU: 4 PID: 1551 Comm: prime_mmap Tainted: G U 4.14.0-rc1-CI-CI_DRM_3118+ #1 Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 Call Trace: dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? apply_workqueue_attrs+0x17/0x50 cpus_read_lock+0x3d/0xb0 ? apply_workqueue_attrs+0x17/0x50 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 ? __lockdep_init_map+0x57/0x1c0 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] ? i915_gem_userptr_release+0x140/0x140 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 ? i915_gem_userptr_release+0x140/0x140 [i915] ? __do_page_fault+0x2a4/0x570 do_vfs_ioctl+0x94/0x670 ? entry_SYSCALL_64_fastpath+0x5/0xb1 ? __this_cpu_preempt_check+0x13/0x20 ? trace_hardirqs_on_caller+0xe3/0x1b0 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 RIP: 0033:0x7fbb83c39587 RSP: 002b:00007fff188dc228 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: ffffffff81492963 RCX: 00007fbb83c39587 RDX: 00007fff188dc260 RSI: 00000000c0186473 RDI: 0000000000000003 RBP: ffffc90001487f88 R08: 0000000000000000 R09: 00007fff188dc2ac R10: 00007fbb83efcb58 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000003 R14: 00000000c0186473 R15: 00007fff188dc2ac ? __this_cpu_preempt_check+0x13/0x20 Note that this also has the minor benefit of slightly reducing the critical section where we hold mmap_sem. v2: Set ret correctly when we raced with another thread. v3: Use Chris' diff. Attach the right lockdep splat. v4: Repaint in Tvrtko's colors (aka don't report ENOMEM if we race and some other thread managed to not also get an ENOMEM and successfully install the mmu notifier. Note that the kernel guarantees that small allocations succeed, so this never actually happens). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sasha Levin <alexander.levin@verizon.com> Cc: Marta Lofstedt <marta.lofstedt@intel.com> Cc: Tejun Heo <tj@kernel.org> References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3180/shard-hsw3/igt@prime_mmap@test_userptr.html Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102939 Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009164401.16035-1-daniel.vetter@ffwll.ch
2017-10-09 23:44:00 +07:00
struct i915_mmu_notifier *mn;
int err = 0;
drm/i915: Do not store the error pointer for a failed userptr registration If we fail to create our mmu notification, we report the error back and currently store the error inside the i915_mm_struct. This not only causes subsequent registerations of the same mm to fail (an issue if the first was interrupted by a signal and needed to be restarted) but also causes us to eventually try and free the error pointer. [ 73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c [ 73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.420065] PGD 8650c067 PUD 870bb067 PMD 0 [ 73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC [ 73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G W 3.17.0-rc6+ #1561 [ 73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011 [ 73.421405] Workqueue: events __i915_mm_struct_free__worker [ 73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000 [ 73.422051] RIP: 0010:[<ffffffff8114af33>] [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.422410] RSP: 0018:ffff88008816bd50 EFLAGS: 00010286 [ 73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000 [ 73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044 [ 73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000 [ 73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044 [ 73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040 [ 73.424682] FS: 0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000 [ 73.425099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0 [ 73.426157] Stack: [ 73.426597] ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008 [ 73.427096] ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28 [ 73.427603] ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af [ 73.428119] Call Trace: [ 73.428606] [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80 [ 73.429116] [<ffffffff8106321a>] process_one_work+0x1ba/0x610 [ 73.429632] [<ffffffff810631af>] ? process_one_work+0x14f/0x610 [ 73.430153] [<ffffffff810636db>] worker_thread+0x6b/0x4a0 [ 73.430671] [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10 [ 73.431501] [<ffffffff81063670>] ? process_one_work+0x610/0x610 [ 73.432030] [<ffffffff8106a206>] kthread+0xf6/0x110 [ 73.432561] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80 [ 73.433100] [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0 [ 73.433644] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80 [ 73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65 [ 73.435942] RIP [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.437017] RSP <ffff88008816bd50> [ 73.437704] CR2: 000000000000004c Fixes regression from commit ad46cb533d586fdb256855437af876617c6cf609 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Thu Aug 7 14:20:40 2014 +0100 drm/i915: Prevent recursive deadlock on releasing a busy userptr Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207 Testcase: igt/gem_render_copy_redux Testcase: igt/gem_userptr_blits/create-destroy-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-09-26 16:31:02 +07:00
mn = mm->mn;
if (mn)
return mn;
drm/i915: Preallocate our mmu notifier workequeu to unbreak cpu hotplug deadlock 4.14-rc1 gained the fancy new cross-release support in lockdep, which seems to have uncovered a few more rules about what is allowed and isn't. This one here seems to indicate that allocating a work-queue while holding mmap_sem is a no-go, so let's try to preallocate it. Of course another way to break this chain would be somewhere in the cpu hotplug code, since this isn't the only trace we're finding now which goes through msr_create_device. Full lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 4.14.0-rc1-CI-CI_DRM_3118+ #1 Tainted: G U ------------------------------------------------------ prime_mmap/1551 is trying to acquire lock: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109dbb7>] apply_workqueue_attrs+0x17/0x50 but task is already holding lock: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #6 (&dev_priv->mm_lock){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #5 (&mm->mmap_sem){++++}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #4 (&sb->s_type->i_mutex_key#5){++++}: down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #3 ((complete)&req.done){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xa3/0x840 cpuhp_thread_fun+0x7a/0x150 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #2 (cpuhp_state){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x10b/0x170 __cpuhp_setup_state_cpuslocked+0x134/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #1 (cpuhp_state_mutex){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x52/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #0 (cpu_hotplug_lock.rw_sem){++++}: check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 other info that might help us debug this: Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&dev_priv->mm_lock); lock(&mm->mmap_sem); lock(&dev_priv->mm_lock); lock(cpu_hotplug_lock.rw_sem); *** DEADLOCK *** 2 locks held by prime_mmap/1551: #0: (&mm->mmap_sem){++++}, at: [<ffffffffa01a7b18>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915] #1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] stack backtrace: CPU: 4 PID: 1551 Comm: prime_mmap Tainted: G U 4.14.0-rc1-CI-CI_DRM_3118+ #1 Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 Call Trace: dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? apply_workqueue_attrs+0x17/0x50 cpus_read_lock+0x3d/0xb0 ? apply_workqueue_attrs+0x17/0x50 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 ? __lockdep_init_map+0x57/0x1c0 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] ? i915_gem_userptr_release+0x140/0x140 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 ? i915_gem_userptr_release+0x140/0x140 [i915] ? __do_page_fault+0x2a4/0x570 do_vfs_ioctl+0x94/0x670 ? entry_SYSCALL_64_fastpath+0x5/0xb1 ? __this_cpu_preempt_check+0x13/0x20 ? trace_hardirqs_on_caller+0xe3/0x1b0 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 RIP: 0033:0x7fbb83c39587 RSP: 002b:00007fff188dc228 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: ffffffff81492963 RCX: 00007fbb83c39587 RDX: 00007fff188dc260 RSI: 00000000c0186473 RDI: 0000000000000003 RBP: ffffc90001487f88 R08: 0000000000000000 R09: 00007fff188dc2ac R10: 00007fbb83efcb58 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000003 R14: 00000000c0186473 R15: 00007fff188dc2ac ? __this_cpu_preempt_check+0x13/0x20 Note that this also has the minor benefit of slightly reducing the critical section where we hold mmap_sem. v2: Set ret correctly when we raced with another thread. v3: Use Chris' diff. Attach the right lockdep splat. v4: Repaint in Tvrtko's colors (aka don't report ENOMEM if we race and some other thread managed to not also get an ENOMEM and successfully install the mmu notifier. Note that the kernel guarantees that small allocations succeed, so this never actually happens). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sasha Levin <alexander.levin@verizon.com> Cc: Marta Lofstedt <marta.lofstedt@intel.com> Cc: Tejun Heo <tj@kernel.org> References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3180/shard-hsw3/igt@prime_mmap@test_userptr.html Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102939 Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009164401.16035-1-daniel.vetter@ffwll.ch
2017-10-09 23:44:00 +07:00
mn = i915_mmu_notifier_create(mm->mm);
if (IS_ERR(mn))
err = PTR_ERR(mn);
drm/i915: Do not store the error pointer for a failed userptr registration If we fail to create our mmu notification, we report the error back and currently store the error inside the i915_mm_struct. This not only causes subsequent registerations of the same mm to fail (an issue if the first was interrupted by a signal and needed to be restarted) but also causes us to eventually try and free the error pointer. [ 73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c [ 73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.420065] PGD 8650c067 PUD 870bb067 PMD 0 [ 73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC [ 73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G W 3.17.0-rc6+ #1561 [ 73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011 [ 73.421405] Workqueue: events __i915_mm_struct_free__worker [ 73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000 [ 73.422051] RIP: 0010:[<ffffffff8114af33>] [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.422410] RSP: 0018:ffff88008816bd50 EFLAGS: 00010286 [ 73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000 [ 73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044 [ 73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000 [ 73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044 [ 73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040 [ 73.424682] FS: 0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000 [ 73.425099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0 [ 73.426157] Stack: [ 73.426597] ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008 [ 73.427096] ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28 [ 73.427603] ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af [ 73.428119] Call Trace: [ 73.428606] [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80 [ 73.429116] [<ffffffff8106321a>] process_one_work+0x1ba/0x610 [ 73.429632] [<ffffffff810631af>] ? process_one_work+0x14f/0x610 [ 73.430153] [<ffffffff810636db>] worker_thread+0x6b/0x4a0 [ 73.430671] [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10 [ 73.431501] [<ffffffff81063670>] ? process_one_work+0x610/0x610 [ 73.432030] [<ffffffff8106a206>] kthread+0xf6/0x110 [ 73.432561] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80 [ 73.433100] [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0 [ 73.433644] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80 [ 73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65 [ 73.435942] RIP [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.437017] RSP <ffff88008816bd50> [ 73.437704] CR2: 000000000000004c Fixes regression from commit ad46cb533d586fdb256855437af876617c6cf609 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Thu Aug 7 14:20:40 2014 +0100 drm/i915: Prevent recursive deadlock on releasing a busy userptr Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207 Testcase: igt/gem_render_copy_redux Testcase: igt/gem_userptr_blits/create-destroy-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-09-26 16:31:02 +07:00
down_write(&mm->mm->mmap_sem);
mutex_lock(&mm->i915->mm_lock);
drm/i915: Preallocate our mmu notifier workequeu to unbreak cpu hotplug deadlock 4.14-rc1 gained the fancy new cross-release support in lockdep, which seems to have uncovered a few more rules about what is allowed and isn't. This one here seems to indicate that allocating a work-queue while holding mmap_sem is a no-go, so let's try to preallocate it. Of course another way to break this chain would be somewhere in the cpu hotplug code, since this isn't the only trace we're finding now which goes through msr_create_device. Full lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 4.14.0-rc1-CI-CI_DRM_3118+ #1 Tainted: G U ------------------------------------------------------ prime_mmap/1551 is trying to acquire lock: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109dbb7>] apply_workqueue_attrs+0x17/0x50 but task is already holding lock: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #6 (&dev_priv->mm_lock){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #5 (&mm->mmap_sem){++++}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #4 (&sb->s_type->i_mutex_key#5){++++}: down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #3 ((complete)&req.done){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xa3/0x840 cpuhp_thread_fun+0x7a/0x150 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #2 (cpuhp_state){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x10b/0x170 __cpuhp_setup_state_cpuslocked+0x134/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #1 (cpuhp_state_mutex){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x52/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #0 (cpu_hotplug_lock.rw_sem){++++}: check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 other info that might help us debug this: Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&dev_priv->mm_lock); lock(&mm->mmap_sem); lock(&dev_priv->mm_lock); lock(cpu_hotplug_lock.rw_sem); *** DEADLOCK *** 2 locks held by prime_mmap/1551: #0: (&mm->mmap_sem){++++}, at: [<ffffffffa01a7b18>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915] #1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] stack backtrace: CPU: 4 PID: 1551 Comm: prime_mmap Tainted: G U 4.14.0-rc1-CI-CI_DRM_3118+ #1 Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 Call Trace: dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? apply_workqueue_attrs+0x17/0x50 cpus_read_lock+0x3d/0xb0 ? apply_workqueue_attrs+0x17/0x50 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 ? __lockdep_init_map+0x57/0x1c0 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] ? i915_gem_userptr_release+0x140/0x140 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 ? i915_gem_userptr_release+0x140/0x140 [i915] ? __do_page_fault+0x2a4/0x570 do_vfs_ioctl+0x94/0x670 ? entry_SYSCALL_64_fastpath+0x5/0xb1 ? __this_cpu_preempt_check+0x13/0x20 ? trace_hardirqs_on_caller+0xe3/0x1b0 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 RIP: 0033:0x7fbb83c39587 RSP: 002b:00007fff188dc228 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: ffffffff81492963 RCX: 00007fbb83c39587 RDX: 00007fff188dc260 RSI: 00000000c0186473 RDI: 0000000000000003 RBP: ffffc90001487f88 R08: 0000000000000000 R09: 00007fff188dc2ac R10: 00007fbb83efcb58 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000003 R14: 00000000c0186473 R15: 00007fff188dc2ac ? __this_cpu_preempt_check+0x13/0x20 Note that this also has the minor benefit of slightly reducing the critical section where we hold mmap_sem. v2: Set ret correctly when we raced with another thread. v3: Use Chris' diff. Attach the right lockdep splat. v4: Repaint in Tvrtko's colors (aka don't report ENOMEM if we race and some other thread managed to not also get an ENOMEM and successfully install the mmu notifier. Note that the kernel guarantees that small allocations succeed, so this never actually happens). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sasha Levin <alexander.levin@verizon.com> Cc: Marta Lofstedt <marta.lofstedt@intel.com> Cc: Tejun Heo <tj@kernel.org> References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3180/shard-hsw3/igt@prime_mmap@test_userptr.html Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102939 Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009164401.16035-1-daniel.vetter@ffwll.ch
2017-10-09 23:44:00 +07:00
if (mm->mn == NULL && !err) {
/* Protected by mmap_sem (write-lock) */
err = __mmu_notifier_register(&mn->mn, mm->mm);
if (!err) {
/* Protected by mm_lock */
mm->mn = fetch_and_zero(&mn);
}
} else if (mm->mn) {
/*
* Someone else raced and successfully installed the mmu
* notifier, we can cancel our own errors.
*/
drm/i915: Preallocate our mmu notifier workequeu to unbreak cpu hotplug deadlock 4.14-rc1 gained the fancy new cross-release support in lockdep, which seems to have uncovered a few more rules about what is allowed and isn't. This one here seems to indicate that allocating a work-queue while holding mmap_sem is a no-go, so let's try to preallocate it. Of course another way to break this chain would be somewhere in the cpu hotplug code, since this isn't the only trace we're finding now which goes through msr_create_device. Full lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 4.14.0-rc1-CI-CI_DRM_3118+ #1 Tainted: G U ------------------------------------------------------ prime_mmap/1551 is trying to acquire lock: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109dbb7>] apply_workqueue_attrs+0x17/0x50 but task is already holding lock: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #6 (&dev_priv->mm_lock){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #5 (&mm->mmap_sem){++++}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #4 (&sb->s_type->i_mutex_key#5){++++}: down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #3 ((complete)&req.done){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xa3/0x840 cpuhp_thread_fun+0x7a/0x150 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #2 (cpuhp_state){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x10b/0x170 __cpuhp_setup_state_cpuslocked+0x134/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #1 (cpuhp_state_mutex){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x52/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #0 (cpu_hotplug_lock.rw_sem){++++}: check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 other info that might help us debug this: Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&dev_priv->mm_lock); lock(&mm->mmap_sem); lock(&dev_priv->mm_lock); lock(cpu_hotplug_lock.rw_sem); *** DEADLOCK *** 2 locks held by prime_mmap/1551: #0: (&mm->mmap_sem){++++}, at: [<ffffffffa01a7b18>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915] #1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] stack backtrace: CPU: 4 PID: 1551 Comm: prime_mmap Tainted: G U 4.14.0-rc1-CI-CI_DRM_3118+ #1 Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 Call Trace: dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? apply_workqueue_attrs+0x17/0x50 cpus_read_lock+0x3d/0xb0 ? apply_workqueue_attrs+0x17/0x50 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 ? __lockdep_init_map+0x57/0x1c0 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] ? i915_gem_userptr_release+0x140/0x140 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 ? i915_gem_userptr_release+0x140/0x140 [i915] ? __do_page_fault+0x2a4/0x570 do_vfs_ioctl+0x94/0x670 ? entry_SYSCALL_64_fastpath+0x5/0xb1 ? __this_cpu_preempt_check+0x13/0x20 ? trace_hardirqs_on_caller+0xe3/0x1b0 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 RIP: 0033:0x7fbb83c39587 RSP: 002b:00007fff188dc228 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: ffffffff81492963 RCX: 00007fbb83c39587 RDX: 00007fff188dc260 RSI: 00000000c0186473 RDI: 0000000000000003 RBP: ffffc90001487f88 R08: 0000000000000000 R09: 00007fff188dc2ac R10: 00007fbb83efcb58 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000003 R14: 00000000c0186473 R15: 00007fff188dc2ac ? __this_cpu_preempt_check+0x13/0x20 Note that this also has the minor benefit of slightly reducing the critical section where we hold mmap_sem. v2: Set ret correctly when we raced with another thread. v3: Use Chris' diff. Attach the right lockdep splat. v4: Repaint in Tvrtko's colors (aka don't report ENOMEM if we race and some other thread managed to not also get an ENOMEM and successfully install the mmu notifier. Note that the kernel guarantees that small allocations succeed, so this never actually happens). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sasha Levin <alexander.levin@verizon.com> Cc: Marta Lofstedt <marta.lofstedt@intel.com> Cc: Tejun Heo <tj@kernel.org> References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3180/shard-hsw3/igt@prime_mmap@test_userptr.html Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102939 Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009164401.16035-1-daniel.vetter@ffwll.ch
2017-10-09 23:44:00 +07:00
err = 0;
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
}
mutex_unlock(&mm->i915->mm_lock);
drm/i915: Do not store the error pointer for a failed userptr registration If we fail to create our mmu notification, we report the error back and currently store the error inside the i915_mm_struct. This not only causes subsequent registerations of the same mm to fail (an issue if the first was interrupted by a signal and needed to be restarted) but also causes us to eventually try and free the error pointer. [ 73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c [ 73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.420065] PGD 8650c067 PUD 870bb067 PMD 0 [ 73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC [ 73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G W 3.17.0-rc6+ #1561 [ 73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011 [ 73.421405] Workqueue: events __i915_mm_struct_free__worker [ 73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000 [ 73.422051] RIP: 0010:[<ffffffff8114af33>] [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.422410] RSP: 0018:ffff88008816bd50 EFLAGS: 00010286 [ 73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000 [ 73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044 [ 73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000 [ 73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044 [ 73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040 [ 73.424682] FS: 0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000 [ 73.425099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0 [ 73.426157] Stack: [ 73.426597] ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008 [ 73.427096] ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28 [ 73.427603] ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af [ 73.428119] Call Trace: [ 73.428606] [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80 [ 73.429116] [<ffffffff8106321a>] process_one_work+0x1ba/0x610 [ 73.429632] [<ffffffff810631af>] ? process_one_work+0x14f/0x610 [ 73.430153] [<ffffffff810636db>] worker_thread+0x6b/0x4a0 [ 73.430671] [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10 [ 73.431501] [<ffffffff81063670>] ? process_one_work+0x610/0x610 [ 73.432030] [<ffffffff8106a206>] kthread+0xf6/0x110 [ 73.432561] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80 [ 73.433100] [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0 [ 73.433644] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80 [ 73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65 [ 73.435942] RIP [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130 [ 73.437017] RSP <ffff88008816bd50> [ 73.437704] CR2: 000000000000004c Fixes regression from commit ad46cb533d586fdb256855437af876617c6cf609 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Thu Aug 7 14:20:40 2014 +0100 drm/i915: Prevent recursive deadlock on releasing a busy userptr Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207 Testcase: igt/gem_render_copy_redux Testcase: igt/gem_userptr_blits/create-destroy-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
2014-09-26 16:31:02 +07:00
up_write(&mm->mm->mmap_sem);
if (mn && !IS_ERR(mn)) {
drm/i915: Preallocate our mmu notifier workequeu to unbreak cpu hotplug deadlock 4.14-rc1 gained the fancy new cross-release support in lockdep, which seems to have uncovered a few more rules about what is allowed and isn't. This one here seems to indicate that allocating a work-queue while holding mmap_sem is a no-go, so let's try to preallocate it. Of course another way to break this chain would be somewhere in the cpu hotplug code, since this isn't the only trace we're finding now which goes through msr_create_device. Full lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 4.14.0-rc1-CI-CI_DRM_3118+ #1 Tainted: G U ------------------------------------------------------ prime_mmap/1551 is trying to acquire lock: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109dbb7>] apply_workqueue_attrs+0x17/0x50 but task is already holding lock: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #6 (&dev_priv->mm_lock){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #5 (&mm->mmap_sem){++++}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #4 (&sb->s_type->i_mutex_key#5){++++}: down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #3 ((complete)&req.done){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xa3/0x840 cpuhp_thread_fun+0x7a/0x150 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #2 (cpuhp_state){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x10b/0x170 __cpuhp_setup_state_cpuslocked+0x134/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #1 (cpuhp_state_mutex){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x52/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #0 (cpu_hotplug_lock.rw_sem){++++}: check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 other info that might help us debug this: Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&dev_priv->mm_lock); lock(&mm->mmap_sem); lock(&dev_priv->mm_lock); lock(cpu_hotplug_lock.rw_sem); *** DEADLOCK *** 2 locks held by prime_mmap/1551: #0: (&mm->mmap_sem){++++}, at: [<ffffffffa01a7b18>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915] #1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] stack backtrace: CPU: 4 PID: 1551 Comm: prime_mmap Tainted: G U 4.14.0-rc1-CI-CI_DRM_3118+ #1 Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 Call Trace: dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? apply_workqueue_attrs+0x17/0x50 cpus_read_lock+0x3d/0xb0 ? apply_workqueue_attrs+0x17/0x50 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 ? __lockdep_init_map+0x57/0x1c0 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] ? i915_gem_userptr_release+0x140/0x140 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 ? i915_gem_userptr_release+0x140/0x140 [i915] ? __do_page_fault+0x2a4/0x570 do_vfs_ioctl+0x94/0x670 ? entry_SYSCALL_64_fastpath+0x5/0xb1 ? __this_cpu_preempt_check+0x13/0x20 ? trace_hardirqs_on_caller+0xe3/0x1b0 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 RIP: 0033:0x7fbb83c39587 RSP: 002b:00007fff188dc228 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: ffffffff81492963 RCX: 00007fbb83c39587 RDX: 00007fff188dc260 RSI: 00000000c0186473 RDI: 0000000000000003 RBP: ffffc90001487f88 R08: 0000000000000000 R09: 00007fff188dc2ac R10: 00007fbb83efcb58 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000003 R14: 00000000c0186473 R15: 00007fff188dc2ac ? __this_cpu_preempt_check+0x13/0x20 Note that this also has the minor benefit of slightly reducing the critical section where we hold mmap_sem. v2: Set ret correctly when we raced with another thread. v3: Use Chris' diff. Attach the right lockdep splat. v4: Repaint in Tvrtko's colors (aka don't report ENOMEM if we race and some other thread managed to not also get an ENOMEM and successfully install the mmu notifier. Note that the kernel guarantees that small allocations succeed, so this never actually happens). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sasha Levin <alexander.levin@verizon.com> Cc: Marta Lofstedt <marta.lofstedt@intel.com> Cc: Tejun Heo <tj@kernel.org> References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3180/shard-hsw3/igt@prime_mmap@test_userptr.html Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102939 Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009164401.16035-1-daniel.vetter@ffwll.ch
2017-10-09 23:44:00 +07:00
destroy_workqueue(mn->wq);
kfree(mn);
}
return err ? ERR_PTR(err) : mm->mn;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
static int
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
unsigned flags)
{
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct i915_mmu_notifier *mn;
struct i915_mmu_object *mo;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (flags & I915_USERPTR_UNSYNCHRONIZED)
return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
if (WARN_ON(obj->userptr.mm == NULL))
return -EINVAL;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mn = i915_mmu_notifier_find(obj->userptr.mm);
if (IS_ERR(mn))
return PTR_ERR(mn);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mo = kzalloc(sizeof(*mo), GFP_KERNEL);
if (mo == NULL)
return -ENOMEM;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mo->mn = mn;
mo->obj = obj;
mo->it.start = obj->userptr.ptr;
mo->it.last = obj->userptr.ptr + obj->base.size - 1;
INIT_WORK(&mo->work, cancel_userptr);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
obj->userptr.mmu_object = mo;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
return 0;
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
}
static void
i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
struct mm_struct *mm)
{
if (mn == NULL)
return;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mmu_notifier_unregister(&mn->mn, mm);
drm/i915/userptr: Flush cancellations before mmu-notifier invalidate returns In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then when we reacquire the lock, we double check that everything is ready for removing the invalidated pages. Finally to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2016-04-05 20:59:59 +07:00
destroy_workqueue(mn->wq);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
kfree(mn);
}
#else
static void
i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
{
}
static int
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
unsigned flags)
{
if ((flags & I915_USERPTR_UNSYNCHRONIZED) == 0)
return -ENODEV;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
return 0;
}
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
static void
i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
struct mm_struct *mm)
{
}
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
#endif
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
static struct i915_mm_struct *
__i915_mm_struct_find(struct drm_i915_private *dev_priv, struct mm_struct *real)
{
struct i915_mm_struct *mm;
/* Protected by dev_priv->mm_lock */
hash_for_each_possible(dev_priv->mm_structs, mm, node, (unsigned long)real)
if (mm->mm == real)
return mm;
return NULL;
}
static int
i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct i915_mm_struct *mm;
int ret = 0;
/* During release of the GEM object we hold the struct_mutex. This
* precludes us from calling mmput() at that time as that may be
* the last reference and so call exit_mmap(). exit_mmap() will
* attempt to reap the vma, and if we were holding a GTT mmap
* would then call drm_gem_vm_close() and attempt to reacquire
* the struct mutex. So in order to avoid that recursion, we have
* to defer releasing the mm reference until after we drop the
* struct_mutex, i.e. we need to schedule a worker to do the clean
* up.
*/
mutex_lock(&dev_priv->mm_lock);
mm = __i915_mm_struct_find(dev_priv, current->mm);
if (mm == NULL) {
mm = kmalloc(sizeof(*mm), GFP_KERNEL);
if (mm == NULL) {
ret = -ENOMEM;
goto out;
}
kref_init(&mm->kref);
mm->i915 = to_i915(obj->base.dev);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mm->mm = current->mm;
mmgrab(current->mm);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mm->mn = NULL;
/* Protected by dev_priv->mm_lock */
hash_add(dev_priv->mm_structs,
&mm->node, (unsigned long)mm->mm);
} else
kref_get(&mm->kref);
obj->userptr.mm = mm;
out:
mutex_unlock(&dev_priv->mm_lock);
return ret;
}
static void
__i915_mm_struct_free__worker(struct work_struct *work)
{
struct i915_mm_struct *mm = container_of(work, typeof(*mm), work);
i915_mmu_notifier_free(mm->mn, mm->mm);
mmdrop(mm->mm);
kfree(mm);
}
static void
__i915_mm_struct_free(struct kref *kref)
{
struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
/* Protected by dev_priv->mm_lock */
hash_del(&mm->node);
mutex_unlock(&mm->i915->mm_lock);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
INIT_WORK(&mm->work, __i915_mm_struct_free__worker);
queue_work(mm->i915->mm.userptr_wq, &mm->work);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
}
static void
i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
{
if (obj->userptr.mm == NULL)
return;
kref_put_mutex(&obj->userptr.mm->kref,
__i915_mm_struct_free,
&to_i915(obj->base.dev)->mm_lock);
obj->userptr.mm = NULL;
}
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
struct get_pages_work {
struct work_struct work;
struct drm_i915_gem_object *obj;
struct task_struct *task;
};
static struct sg_table *
__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
struct page **pvec, int num_pages)
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
{
unsigned int max_segment = i915_sg_segment_size();
struct sg_table *st;
unsigned int sg_page_sizes;
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
int ret;
st = kmalloc(sizeof(*st), GFP_KERNEL);
if (!st)
return ERR_PTR(-ENOMEM);
alloc_table:
ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
0, num_pages << PAGE_SHIFT,
max_segment,
GFP_KERNEL);
if (ret) {
kfree(st);
return ERR_PTR(ret);
}
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
ret = i915_gem_gtt_prepare_pages(obj, st);
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
if (ret) {
sg_free_table(st);
if (max_segment > PAGE_SIZE) {
max_segment = PAGE_SIZE;
goto alloc_table;
}
kfree(st);
return ERR_PTR(ret);
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
}
sg_page_sizes = i915_sg_page_sizes(st->sgl);
__i915_gem_object_set_pages(obj, st, sg_page_sizes);
return st;
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
}
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
static int
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj,
bool value)
{
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
int ret = 0;
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
/* During mm_invalidate_range we need to cancel any userptr that
* overlaps the range being invalidated. Doing so requires the
* struct_mutex, and that risks recursion. In order to cause
* recursion, the user must alias the userptr address space with
* a GTT mmapping (possible with a MAP_FIXED) - then when we have
* to invalidate that mmaping, mm_invalidate_range is called with
* the userptr address *and* the struct_mutex held. To prevent that
* we set a flag under the i915_mmu_notifier spinlock to indicate
* whether this object is valid.
*/
#if defined(CONFIG_MMU_NOTIFIER)
if (obj->userptr.mmu_object == NULL)
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
return 0;
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
spin_lock(&obj->userptr.mmu_object->mn->lock);
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
/* In order to serialise get_pages with an outstanding
* cancel_userptr, we must drop the struct_mutex and try again.
*/
if (!value)
del_object(obj->userptr.mmu_object);
else if (!work_pending(&obj->userptr.mmu_object->work))
add_object(obj->userptr.mmu_object);
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
else
ret = -EAGAIN;
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
spin_unlock(&obj->userptr.mmu_object->mn->lock);
#endif
drm/i915: Use a task to cancel the userptr on invalidate_range Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:47 +07:00
return ret;
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
}
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
static void
__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
{
struct get_pages_work *work = container_of(_work, typeof(*work), work);
struct drm_i915_gem_object *obj = work->obj;
const int npages = obj->base.size >> PAGE_SHIFT;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
struct page **pvec;
int pinned, ret;
ret = -ENOMEM;
pinned = 0;
mm: treewide: remove GFP_TEMPORARY allocation flag GFP_TEMPORARY was introduced by commit e12ba74d8ff3 ("Group short-lived and reclaimable kernel allocations") along with __GFP_RECLAIMABLE. It's primary motivation was to allow users to tell that an allocation is short lived and so the allocator can try to place such allocations close together and prevent long term fragmentation. As much as this sounds like a reasonable semantic it becomes much less clear when to use the highlevel GFP_TEMPORARY allocation flag. How long is temporary? Can the context holding that memory sleep? Can it take locks? It seems there is no good answer for those questions. The current implementation of GFP_TEMPORARY is basically GFP_KERNEL | __GFP_RECLAIMABLE which in itself is tricky because basically none of the existing caller provide a way to reclaim the allocated memory. So this is rather misleading and hard to evaluate for any benefits. I have checked some random users and none of them has added the flag with a specific justification. I suspect most of them just copied from other existing users and others just thought it might be a good idea to use without any measuring. This suggests that GFP_TEMPORARY just motivates for cargo cult usage without any reasoning. I believe that our gfp flags are quite complex already and especially those with highlevel semantic should be clearly defined to prevent from confusion and abuse. Therefore I propose dropping GFP_TEMPORARY and replace all existing users to simply use GFP_KERNEL. Please note that SLAB users with shrinkers will still get __GFP_RECLAIMABLE heuristic and so they will be placed properly for memory fragmentation prevention. I can see reasons we might want some gfp flag to reflect shorterm allocations but I propose starting from a clear semantic definition and only then add users with proper justification. This was been brought up before LSF this year by Matthew [1] and it turned out that GFP_TEMPORARY really doesn't have a clear semantic. It seems to be a heuristic without any measured advantage for most (if not all) its current users. The follow up discussion has revealed that opinions on what might be temporary allocation differ a lot between developers. So rather than trying to tweak existing users into a semantic which they haven't expected I propose to simply remove the flag and start from scratch if we really need a semantic for short term allocations. [1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org [akpm@linux-foundation.org: fix typo] [akpm@linux-foundation.org: coding-style fixes] [sfr@canb.auug.org.au: drm/i915: fix up] Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Mel Gorman <mgorman@suse.de> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Matthew Wilcox <willy@infradead.org> Cc: Neil Brown <neilb@suse.de> Cc: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-09-14 06:28:29 +07:00
pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (pvec != NULL) {
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
struct mm_struct *mm = obj->userptr.mm->mm;
unsigned int flags = 0;
if (!i915_gem_object_is_readonly(obj))
flags |= FOLL_WRITE;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
ret = -EFAULT;
if (mmget_not_zero(mm)) {
down_read(&mm->mmap_sem);
while (pinned < npages) {
ret = get_user_pages_remote
(work->task, mm,
obj->userptr.ptr + pinned * PAGE_SIZE,
npages - pinned,
flags,
mm: add locked parameter to get_user_pages_remote() Patch series "mm: unexport __get_user_pages_unlocked()". This patch series continues the cleanup of get_user_pages*() functions taking advantage of the fact we can now pass gup_flags as we please. It firstly adds an additional 'locked' parameter to get_user_pages_remote() to allow for its callers to utilise VM_FAULT_RETRY functionality. This is necessary as the invocation of __get_user_pages_unlocked() in process_vm_rw_single_vec() makes use of this and no other existing higher level function would allow it to do so. Secondly existing callers of __get_user_pages_unlocked() are replaced with the appropriate higher-level replacement - get_user_pages_unlocked() if the current task and memory descriptor are referenced, or get_user_pages_remote() if other task/memory descriptors are referenced (having acquiring mmap_sem.) This patch (of 2): Add a int *locked parameter to get_user_pages_remote() to allow VM_FAULT_RETRY faulting behaviour similar to get_user_pages_[un]locked(). Taking into account the previous adjustments to get_user_pages*() functions allowing for the passing of gup_flags, we are now in a position where __get_user_pages_unlocked() need only be exported for his ability to allow VM_FAULT_RETRY behaviour, this adjustment allows us to subsequently unexport __get_user_pages_unlocked() as well as allowing for future flexibility in the use of get_user_pages_remote(). [sfr@canb.auug.org.au: merge fix for get_user_pages_remote API change] Link: http://lkml.kernel.org/r/20161122210511.024ec341@canb.auug.org.au Link: http://lkml.kernel.org/r/20161027095141.2569-2-lstoakes@gmail.com Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Jan Kara <jack@suse.cz> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Rik van Riel <riel@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krcmar <rkrcmar@redhat.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-15 06:06:52 +07:00
pvec + pinned, NULL, NULL);
if (ret < 0)
break;
pinned += ret;
}
up_read(&mm->mmap_sem);
mmput(mm);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
}
mutex_lock(&obj->mm.lock);
if (obj->userptr.work == &work->work) {
struct sg_table *pages = ERR_PTR(ret);
if (pinned == npages) {
pages = __i915_gem_userptr_alloc_pages(obj, pvec,
npages);
if (!IS_ERR(pages)) {
pinned = 0;
pages = NULL;
}
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
obj->userptr.work = ERR_CAST(pages);
if (IS_ERR(pages))
__i915_gem_userptr_set_active(obj, false);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
mutex_unlock(&obj->mm.lock);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
release_pages(pvec, pinned);
kvfree(pvec);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
i915_gem_object_put(obj);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
put_task_struct(work->task);
kfree(work);
}
static struct sg_table *
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
{
struct get_pages_work *work;
/* Spawn a worker so that we can acquire the
* user pages without holding our mutex. Access
* to the user pages requires mmap_sem, and we have
* a strict lock ordering of mmap_sem, struct_mutex -
* we already hold struct_mutex here and so cannot
* call gup without encountering a lock inversion.
*
* Userspace will keep on repeating the operation
* (thanks to EAGAIN) until either we hit the fast
* path or the worker completes. If the worker is
* cancelled or superseded, the task is still run
* but the results ignored. (This leads to
* complications that we may have a stray object
* refcount that we need to be wary of when
* checking for existing objects during creation.)
* If the worker encounters an error, it reports
* that error back to this function through
* obj->userptr.work = ERR_PTR.
*/
work = kmalloc(sizeof(*work), GFP_KERNEL);
if (work == NULL)
return ERR_PTR(-ENOMEM);
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
obj->userptr.work = &work->work;
work->obj = i915_gem_object_get(obj);
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
work->task = current;
get_task_struct(work->task);
INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker);
queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work);
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
return ERR_PTR(-EAGAIN);
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
}
static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
{
const int num_pages = obj->base.size >> PAGE_SHIFT;
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
struct mm_struct *mm = obj->userptr.mm->mm;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
struct page **pvec;
struct sg_table *pages;
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
bool active;
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
int pinned;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
/* If userspace should engineer that these pages are replaced in
* the vma between us binding this page into the GTT and completion
* of rendering... Their loss. If they change the mapping of their
* pages they need to create a new bo to point to the new vma.
*
* However, that still leaves open the possibility of the vma
* being copied upon fork. Which falls under the same userspace
* synchronisation issue as a regular bo, except that this time
* the process may not be expecting that a particular piece of
* memory is tied to the GPU.
*
* Fortunately, we can hook into the mmu_notifier in order to
* discard the page references prior to anything nasty happening
* to the vma (discard or cloning) which should prevent the more
* egregious cases from causing harm.
*/
if (obj->userptr.work) {
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
/* active flag should still be held for the pending work */
if (IS_ERR(obj->userptr.work))
return PTR_ERR(obj->userptr.work);
else
return -EAGAIN;
}
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
pvec = NULL;
pinned = 0;
drm/i915/userptr: Reinvent GGTT self-faulting protection lockdep doesn't like us taking the mm->mmap_sem inside the get_pages callback for a couple of reasons. The straightforward deadlock: [13755.434059] ============================================= [13755.434061] [ INFO: possible recursive locking detected ] [13755.434064] 4.11.0-rc1-CI-CI_DRM_297+ #1 Tainted: G U [13755.434066] --------------------------------------------- [13755.434068] gem_userptr_bli/8398 is trying to acquire lock: [13755.434070] (&mm->mmap_sem){++++++}, at: [<ffffffffa00c988a>] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434096] but task is already holding lock: [13755.434098] (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434105] other info that might help us debug this: [13755.434108] Possible unsafe locking scenario: [13755.434110] CPU0 [13755.434111] ---- [13755.434112] lock(&mm->mmap_sem); [13755.434115] lock(&mm->mmap_sem); [13755.434117] *** DEADLOCK *** [13755.434121] May be due to missing lock nesting notation [13755.434126] 2 locks held by gem_userptr_bli/8398: [13755.434128] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560 [13755.434135] #1: (&obj->mm.lock){+.+.+.}, at: [<ffffffffa00b887d>] __i915_gem_object_get_pages+0x1d/0x70 [i915] [13755.434156] stack backtrace: [13755.434161] CPU: 3 PID: 8398 Comm: gem_userptr_bli Tainted: G U 4.11.0-rc1-CI-CI_DRM_297+ #1 [13755.434165] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017 [13755.434169] Call Trace: [13755.434174] dump_stack+0x67/0x92 [13755.434178] __lock_acquire+0x133a/0x1b50 [13755.434182] lock_acquire+0xc9/0x220 [13755.434200] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434204] down_read+0x42/0x70 [13755.434221] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434238] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915] [13755.434255] ____i915_gem_object_get_pages+0x25/0x60 [i915] [13755.434272] __i915_gem_object_get_pages+0x59/0x70 [i915] [13755.434288] i915_gem_fault+0x397/0x6a0 [i915] [13755.434304] ? i915_gem_fault+0x1a1/0x6a0 [i915] [13755.434308] ? __lock_acquire+0x449/0x1b50 [13755.434311] ? __lock_acquire+0x449/0x1b50 [13755.434315] ? vm_mmap_pgoff+0xa9/0xd0 [13755.434318] __do_fault+0x19/0x70 [13755.434321] __handle_mm_fault+0x863/0xe50 [13755.434325] handle_mm_fault+0x17f/0x370 [13755.434329] ? handle_mm_fault+0x40/0x370 [13755.434332] __do_page_fault+0x279/0x560 [13755.434336] do_page_fault+0xc/0x10 [13755.434339] page_fault+0x22/0x30 [13755.434342] RIP: 0033:0x7f5ab91b5880 [13755.434345] RSP: 002b:00007fff62922218 EFLAGS: 00010216 [13755.434348] RAX: 0000000000b74500 RBX: 00007f5ab7f81000 RCX: 0000000000000000 [13755.434352] RDX: 0000000000100000 RSI: 00007f5ab7f81000 RDI: 00007f5aba61c000 [13755.434355] RBP: 00007f5aba61c000 R08: 0000000000000007 R09: 0000000100000000 [13755.434359] R10: 000000000000037d R11: 00007f5ab91b5840 R12: 0000000000000001 [13755.434362] R13: 0000000000000005 R14: 0000000000000001 R15: 0000000000000000 and cyclic deadlocks: [ 2566.458979] ====================================================== [ 2566.459054] [ INFO: possible circular locking dependency detected ] [ 2566.459127] 4.11.0-rc1+ #26 Not tainted [ 2566.459194] ------------------------------------------------------- [ 2566.459266] gem_streaming_w/759 is trying to acquire lock: [ 2566.459334] (&obj->mm.lock){+.+.+.}, at: [<ffffffffa034bc80>] i915_gem_object_pin_pages+0x0/0xc0 [i915] [ 2566.459605] [ 2566.459605] but task is already holding lock: [ 2566.459699] (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.459814] [ 2566.459814] which lock already depends on the new lock. [ 2566.459814] [ 2566.459934] [ 2566.459934] the existing dependency chain (in reverse order) is: [ 2566.460030] [ 2566.460030] -> #1 (&mm->mmap_sem){++++++}: [ 2566.460139] lock_acquire+0xfe/0x220 [ 2566.460214] down_read+0x4e/0x90 [ 2566.460444] i915_gem_userptr_get_pages+0x6e/0x340 [i915] [ 2566.460669] ____i915_gem_object_get_pages+0x8b/0xd0 [i915] [ 2566.460900] __i915_gem_object_get_pages+0x6a/0x80 [i915] [ 2566.461132] __i915_vma_do_pin+0x7fa/0x930 [i915] [ 2566.461352] eb_add_vma+0x67b/0x830 [i915] [ 2566.461572] eb_lookup_vmas+0xafe/0x1010 [i915] [ 2566.461792] i915_gem_do_execbuffer+0x715/0x2870 [i915] [ 2566.462012] i915_gem_execbuffer2+0x106/0x2b0 [i915] [ 2566.462152] drm_ioctl+0x36c/0x670 [drm] [ 2566.462236] do_vfs_ioctl+0x12c/0xa60 [ 2566.462317] SyS_ioctl+0x41/0x70 [ 2566.462399] entry_SYSCALL_64_fastpath+0x1c/0xb1 [ 2566.462477] [ 2566.462477] -> #0 (&obj->mm.lock){+.+.+.}: [ 2566.462587] __lock_acquire+0x1602/0x1790 [ 2566.462661] lock_acquire+0xfe/0x220 [ 2566.462893] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.463116] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.463197] __do_fault+0x42/0x130 [ 2566.463276] __handle_mm_fault+0x92c/0x1280 [ 2566.463356] handle_mm_fault+0x1e2/0x440 [ 2566.463443] __do_page_fault+0x1c4/0x500 [ 2566.463529] do_page_fault+0xc/0x10 [ 2566.463613] page_fault+0x1f/0x30 [ 2566.463693] [ 2566.463693] other info that might help us debug this: [ 2566.463693] [ 2566.463820] Possible unsafe locking scenario: [ 2566.463820] [ 2566.463918] CPU0 CPU1 [ 2566.463988] ---- ---- [ 2566.464068] lock(&mm->mmap_sem); [ 2566.464143] lock(&obj->mm.lock); [ 2566.464226] lock(&mm->mmap_sem); [ 2566.464304] lock(&obj->mm.lock); [ 2566.464378] [ 2566.464378] *** DEADLOCK *** [ 2566.464378] [ 2566.464504] 1 lock held by gem_streaming_w/759: [ 2566.464576] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500 [ 2566.464699] [ 2566.464699] stack backtrace: [ 2566.464801] CPU: 0 PID: 759 Comm: gem_streaming_w Not tainted 4.11.0-rc1+ #26 [ 2566.464881] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F8 03/02/2016 [ 2566.464983] Call Trace: [ 2566.465061] dump_stack+0x68/0x9f [ 2566.465144] print_circular_bug+0x20b/0x260 [ 2566.465234] __lock_acquire+0x1602/0x1790 [ 2566.465323] ? debug_check_no_locks_freed+0x1a0/0x1a0 [ 2566.465564] ? i915_gem_object_wait+0x238/0x650 [i915] [ 2566.465657] ? debug_lockdep_rcu_enabled.part.4+0x1a/0x30 [ 2566.465749] lock_acquire+0xfe/0x220 [ 2566.465985] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466223] i915_gem_object_pin_pages+0x4c/0xc0 [i915] [ 2566.466461] ? i915_sg_trim+0x1b0/0x1b0 [i915] [ 2566.466699] i915_gem_fault+0x2c2/0x8c0 [i915] [ 2566.466939] ? i915_gem_pwrite_ioctl+0xce0/0xce0 [i915] [ 2566.467030] ? __lock_acquire+0x642/0x1790 [ 2566.467122] ? __lock_acquire+0x642/0x1790 [ 2566.467209] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 2566.467299] ? get_unmapped_area+0x1b4/0x1d0 [ 2566.467387] __do_fault+0x42/0x130 [ 2566.467474] __handle_mm_fault+0x92c/0x1280 [ 2566.467564] ? __pmd_alloc+0x1e0/0x1e0 [ 2566.467651] ? vm_mmap_pgoff+0x160/0x190 [ 2566.467740] ? handle_mm_fault+0x111/0x440 [ 2566.467827] handle_mm_fault+0x1e2/0x440 [ 2566.467914] ? handle_mm_fault+0x5d/0x440 [ 2566.468002] __do_page_fault+0x1c4/0x500 [ 2566.468090] do_page_fault+0xc/0x10 [ 2566.468180] page_fault+0x1f/0x30 [ 2566.468263] RIP: 0033:0x557895ced32a [ 2566.468337] RSP: 002b:00007fffd6dd8a10 EFLAGS: 00010202 [ 2566.468419] RAX: 00007f659a4db000 RBX: 0000000000000003 RCX: 00007f659ad032da [ 2566.468501] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 0000000000000000 [ 2566.468586] RBP: 0000000000000007 R08: 0000000000000003 R09: 0000000100000000 [ 2566.468667] R10: 0000000000000001 R11: 0000000000000246 R12: 0000557895ceda60 [ 2566.468749] R13: 0000000000000001 R14: 00007fffd6dd8ac0 R15: 00007f659a4db000 By checking the status of the gup worker (serialized by the obj->mm.lock) we can determine whether it is still active, has failed or has succeeded. If the worker is still active (or failed), we know that it cannot be bound and so we can skip taking struct_mutex (risking potential recursion). As we check the worker status, we mark it to discard any partial results, forcing us to restart on the next get_pages. Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Fixes: 1c8782dd313e ("drm/i915/userptr: Disallow wrapping GTT into a userptr") Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Testcase: igt/gem_userptr_blits/dmabuf-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170315140150.19432-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-15 21:01:50 +07:00
if (mm == current->mm) {
pvec = kvmalloc_array(num_pages, sizeof(struct page *),
mm: treewide: remove GFP_TEMPORARY allocation flag GFP_TEMPORARY was introduced by commit e12ba74d8ff3 ("Group short-lived and reclaimable kernel allocations") along with __GFP_RECLAIMABLE. It's primary motivation was to allow users to tell that an allocation is short lived and so the allocator can try to place such allocations close together and prevent long term fragmentation. As much as this sounds like a reasonable semantic it becomes much less clear when to use the highlevel GFP_TEMPORARY allocation flag. How long is temporary? Can the context holding that memory sleep? Can it take locks? It seems there is no good answer for those questions. The current implementation of GFP_TEMPORARY is basically GFP_KERNEL | __GFP_RECLAIMABLE which in itself is tricky because basically none of the existing caller provide a way to reclaim the allocated memory. So this is rather misleading and hard to evaluate for any benefits. I have checked some random users and none of them has added the flag with a specific justification. I suspect most of them just copied from other existing users and others just thought it might be a good idea to use without any measuring. This suggests that GFP_TEMPORARY just motivates for cargo cult usage without any reasoning. I believe that our gfp flags are quite complex already and especially those with highlevel semantic should be clearly defined to prevent from confusion and abuse. Therefore I propose dropping GFP_TEMPORARY and replace all existing users to simply use GFP_KERNEL. Please note that SLAB users with shrinkers will still get __GFP_RECLAIMABLE heuristic and so they will be placed properly for memory fragmentation prevention. I can see reasons we might want some gfp flag to reflect shorterm allocations but I propose starting from a clear semantic definition and only then add users with proper justification. This was been brought up before LSF this year by Matthew [1] and it turned out that GFP_TEMPORARY really doesn't have a clear semantic. It seems to be a heuristic without any measured advantage for most (if not all) its current users. The follow up discussion has revealed that opinions on what might be temporary allocation differ a lot between developers. So rather than trying to tweak existing users into a semantic which they haven't expected I propose to simply remove the flag and start from scratch if we really need a semantic for short term allocations. [1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org [akpm@linux-foundation.org: fix typo] [akpm@linux-foundation.org: coding-style fixes] [sfr@canb.auug.org.au: drm/i915: fix up] Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Mel Gorman <mgorman@suse.de> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Matthew Wilcox <willy@infradead.org> Cc: Neil Brown <neilb@suse.de> Cc: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-09-14 06:28:29 +07:00
GFP_KERNEL |
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
__GFP_NORETRY |
__GFP_NOWARN);
if (pvec) /* defer to worker if malloc fails */
pinned = __get_user_pages_fast(obj->userptr.ptr,
num_pages,
!i915_gem_object_is_readonly(obj),
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
pvec);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
active = false;
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
if (pinned < 0) {
pages = ERR_PTR(pinned);
pinned = 0;
} else if (pinned < num_pages) {
pages = __i915_gem_userptr_get_pages_schedule(obj);
active = pages == ERR_PTR(-EAGAIN);
} else {
pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
active = !IS_ERR(pages);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
if (active)
__i915_gem_userptr_set_active(obj, true);
if (IS_ERR(pages))
release_pages(pvec, pinned);
kvfree(pvec);
drm/i915/userptr: Disallow wrapping GTT into a userptr If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2017-03-09 04:59:03 +07:00
return PTR_ERR_OR_ZERO(pages);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
static void
i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
struct sg_table *pages)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
{
struct sgt_iter sgt_iter;
struct page *page;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
BUG_ON(obj->userptr.work != NULL);
drm/i915: Fix userptr deadlock with aliased GTT mmappings Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-10-01 18:34:46 +07:00
__i915_gem_userptr_set_active(obj, false);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (obj->mm.madv != I915_MADV_WILLNEED)
obj->mm.dirty = false;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
i915_gem_gtt_finish_pages(obj, pages);
drm/i915: avoid leaking DMA mappings We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-07-09 16:59:05 +07:00
for_each_sgt_page(page, sgt_iter, pages) {
if (obj->mm.dirty)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
set_page_dirty(page);
mark_page_accessed(page);
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time ago with promise that one day it will be possible to implement page cache with bigger chunks than PAGE_SIZE. This promise never materialized. And unlikely will. We have many places where PAGE_CACHE_SIZE assumed to be equal to PAGE_SIZE. And it's constant source of confusion on whether PAGE_CACHE_* or PAGE_* constant should be used in a particular case, especially on the border between fs and mm. Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much breakage to be doable. Let's stop pretending that pages in page cache are special. They are not. The changes are pretty straight-forward: - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>; - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>; - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN}; - page_cache_get() -> get_page(); - page_cache_release() -> put_page(); This patch contains automated changes generated with coccinelle using script below. For some reason, coccinelle doesn't patch header files. I've called spatch for them manually. The only adjustment after coccinelle is revert of changes to PAGE_CAHCE_ALIGN definition: we are going to drop it later. There are few places in the code where coccinelle didn't reach. I'll fix them manually in a separate patch. Comments and documentation also will be addressed with the separate patch. virtual patch @@ expression E; @@ - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT) + E @@ expression E; @@ - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) + E @@ @@ - PAGE_CACHE_SHIFT + PAGE_SHIFT @@ @@ - PAGE_CACHE_SIZE + PAGE_SIZE @@ @@ - PAGE_CACHE_MASK + PAGE_MASK @@ expression E; @@ - PAGE_CACHE_ALIGN(E) + PAGE_ALIGN(E) @@ expression E; @@ - page_cache_get(E) + get_page(E) @@ expression E; @@ - page_cache_release(E) + put_page(E) Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 19:29:47 +07:00
put_page(page);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
obj->mm.dirty = false;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
sg_free_table(pages);
kfree(pages);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
static void
i915_gem_userptr_release(struct drm_i915_gem_object *obj)
{
i915_gem_userptr_release__mmu_notifier(obj);
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
i915_gem_userptr_release__mm_struct(obj);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
static int
i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
{
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
if (obj->userptr.mmu_object)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
return 0;
return i915_gem_userptr_init__mmu_notifier(obj, 0);
}
static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
I915_GEM_OBJECT_IS_SHRINKABLE,
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
.get_pages = i915_gem_userptr_get_pages,
.put_pages = i915_gem_userptr_put_pages,
.dmabuf_export = i915_gem_userptr_dmabuf_export,
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
.release = i915_gem_userptr_release,
};
/*
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
* Creates a new mm object that wraps some normal memory from the process
* context - user memory.
*
* We impose several restrictions upon the memory being mapped
* into the GPU.
* 1. It must be page aligned (both start/end addresses, i.e ptr and size).
* 2. It must be normal system memory, not a pointer into another map of IO
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
* space (e.g. it must not be a GTT mmapping of another object).
* 3. We only allow a bo as large as we could in theory map into the GTT,
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
* that is we limit the size to the total size of the GTT.
* 4. The bo is marked as being snoopable. The backing pages are left
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
* accessible directly by the CPU, but reads and writes by the GPU may
* incur the cost of a snoop (unless you have an LLC architecture).
*
* Synchronisation between multiple users and the GPU is left to userspace
* through the normal set-domain-ioctl. The kernel will enforce that the
* GPU relinquishes the VMA before it is returned back to the system
* i.e. upon free(), munmap() or process termination. However, the userspace
* malloc() library may not immediately relinquish the VMA after free() and
* instead reuse it whilst the GPU is still reading and writing to the VMA.
* Caveat emptor.
*
* Also note, that the object created here is not currently a "first class"
* object, in that several ioctls are banned. These are the CPU access
* ioctls: mmap(), pwrite and pread. In practice, you are expected to use
* direct access via your pointer rather than use those ioctls. Another
* restriction is that we do not allow userptr surfaces to be pinned to the
* hardware and so we reject any attempt to create a framebuffer out of a
* userptr.
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
*
* If you think this is a good interface to use to pass GPU memory between
* drivers, please use dma-buf instead. In fact, wherever possible use
* dma-buf instead.
*/
int
i915_gem_userptr_ioctl(struct drm_device *dev,
void *data,
struct drm_file *file)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
{
struct drm_i915_private *dev_priv = to_i915(dev);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
struct drm_i915_gem_userptr *args = data;
struct drm_i915_gem_object *obj;
int ret;
u32 handle;
if (!HAS_LLC(dev_priv) && !HAS_SNOOP(dev_priv)) {
/* We cannot support coherent userptr objects on hw without
* LLC and broken snooping.
*/
return -ENODEV;
}
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (args->flags & ~(I915_USERPTR_READ_ONLY |
I915_USERPTR_UNSYNCHRONIZED))
return -EINVAL;
if (!args->user_size)
return -EINVAL;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (offset_in_page(args->user_ptr | args->user_size))
return -EINVAL;
if (!access_ok(args->flags & I915_USERPTR_READ_ONLY ? VERIFY_READ : VERIFY_WRITE,
(char __user *)(unsigned long)args->user_ptr, args->user_size))
return -EFAULT;
if (args->flags & I915_USERPTR_READ_ONLY) {
struct i915_hw_ppgtt *ppgtt;
/*
* On almost all of the older hw, we cannot tell the GPU that
* a page is readonly.
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
*/
ppgtt = dev_priv->kernel_context->ppgtt;
if (!ppgtt || !ppgtt->vm.has_read_only)
return -ENODEV;
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}
obj = i915_gem_object_alloc(dev_priv);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (obj == NULL)
return -ENOMEM;
drm_gem_private_object_init(dev, &obj->base, args->user_size);
i915_gem_object_init(obj, &i915_gem_userptr_ops);
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->write_domain = I915_GEM_DOMAIN_CPU;
drm/i915: Split obj->cache_coherent to track r/w Another month, another story in the cache coherency saga. This time, we come to the realisation that i915_gem_object_is_coherent() has been reporting whether we can read from the target without requiring a cache invalidate; but we were using it in places for testing whether we could write into the object without requiring a cache flush. So split the tracking into two, one to decide before reads, one after writes. See commit e27ab73d17ef ("drm/i915: Mark CPU cache as dirty on every transition for CPU writes") for the previous entry in this saga. v2: Be verbose v3: Remove unused function (i915_gem_object_is_coherent) v4: Fix inverted coherency check prior to execbuf (from v2) v5: Add comment for nasty code where we are optimising on gcc's behalf. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101109 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101555 Testcase: igt/kms_mmap_write_crc Testcase: igt/kms_pwrite_crc Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Dongwon Kim <dongwon.kim@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Tested-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20170811111116.10373-1-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2017-08-11 18:11:16 +07:00
i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
obj->userptr.ptr = args->user_ptr;
if (args->flags & I915_USERPTR_READ_ONLY)
i915_gem_object_set_readonly(obj);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
/* And keep a pointer to the current->mm for resolving the user pages
* at binding. This means that we need to hook into the mmu_notifier
* in order to detect if the mmu is destroyed.
*/
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
ret = i915_gem_userptr_init__mm_struct(obj);
if (ret == 0)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags);
if (ret == 0)
ret = drm_gem_handle_create(file, &obj->base, &handle);
/* drop reference from allocate - handle holds it now */
i915_gem_object_put(obj);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
if (ret)
return ret;
args->handle = handle;
return 0;
}
int i915_gem_init_userptr(struct drm_i915_private *dev_priv)
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
{
drm/i915: Prevent recursive deadlock on releasing a busy userptr During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds.       Tainted: GF          O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces   D 0000000000000000     0  1632   1590 0x00000082  ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010  0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010  ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace:  [<ffffffff81582499>] schedule+0x29/0x70  [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10  [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220  [<ffffffff81583c53>] mutex_lock+0x23/0x40  [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]  [<ffffffff8115a483>] remove_vma+0x33/0x70  [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170  [<ffffffff8104d6eb>] mmput+0x6b/0x100  [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]  [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]  [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]  [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]  [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]  [<ffffffff8127ffeb>] idr_for_each+0xab/0x100  [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]  [<ffffffff81583c46>] ? mutex_lock+0x16/0x40  [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]  [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]  [<ffffffff8118d482>] __fput+0xb2/0x260  [<ffffffff8118d6de>] ____fput+0xe/0x10  [<ffffffff8106f27f>] task_work_run+0x8f/0xf0  [<ffffffff81052228>] do_exit+0x1a8/0x480  [<ffffffff81052551>] do_group_exit+0x51/0xc0  [<ffffffff810525d7>] SyS_exit_group+0x17/0x20  [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-08-07 20:20:40 +07:00
mutex_init(&dev_priv->mm_lock);
hash_init(dev_priv->mm_structs);
dev_priv->mm.userptr_wq =
drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM To silence the critcs: [56532.161115] workqueue: PF_MEMALLOC task 36(khugepaged) is flushing !WQ_MEM_RECLAIM i915-userptr-release: (null) [56532.161138] ------------[ cut here ]------------ [56532.161144] WARNING: CPU: 1 PID: 36 at kernel/workqueue.c:2418 check_flush_dependency+0xe8/0xf0 [56532.161145] Modules linked in: wmi_bmof [56532.161148] CPU: 1 PID: 36 Comm: khugepaged Not tainted 4.13.0-krejzi #1 [56532.161149] Hardware name: HP HP ProBook 470 G3/8102, BIOS N78 Ver. 01.17 06/08/2017 [56532.161150] task: ffff8802371ee200 task.stack: ffffc90000174000 [56532.161152] RIP: 0010:check_flush_dependency+0xe8/0xf0 [56532.161152] RSP: 0018:ffffc900001777b8 EFLAGS: 00010286 [56532.161153] RAX: 000000000000006c RBX: ffff88022fc5a000 RCX: 0000000000000001 [56532.161154] RDX: 0000000000000000 RSI: 0000000000000086 RDI: 00000000ffffffff [56532.161155] RBP: 0000000000000000 R08: 14f038bb55f6dae0 R09: 0000000000000516 [56532.161155] R10: ffffc900001778a0 R11: 000000006c756e28 R12: ffff8802371ee200 [56532.161156] R13: 0000000000000000 R14: 000000000000000b R15: ffffc90000177810 [56532.161157] FS: 0000000000000000(0000) GS:ffff880240480000(0000) knlGS:0000000000000000 [56532.161158] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [56532.161158] CR2: 0000000004795ff8 CR3: 000000000220a000 CR4: 00000000003406e0 [56532.161159] Call Trace: [56532.161161] ? flush_workqueue+0x136/0x3e0 [56532.161178] ? _raw_spin_unlock_irqrestore+0xf/0x30 [56532.161179] ? try_to_wake_up+0x1ce/0x3b0 [56532.161183] ? i915_gem_userptr_mn_invalidate_range_start+0x13f/0x150 [56532.161184] ? _raw_spin_unlock+0xd/0x20 [56532.161186] ? i915_gem_userptr_mn_invalidate_range_start+0x13f/0x150 [56532.161189] ? __mmu_notifier_invalidate_range_start+0x4a/0x70 [56532.161191] ? try_to_unmap_one+0x5e5/0x660 [56532.161193] ? rmap_walk_file+0xe4/0x240 [56532.161195] ? __ClearPageMovable+0x10/0x10 [56532.161196] ? try_to_unmap+0x8c/0xe0 [56532.161197] ? page_remove_rmap+0x280/0x280 [56532.161199] ? page_not_mapped+0x10/0x10 [56532.161200] ? page_get_anon_vma+0x90/0x90 [56532.161202] ? migrate_pages+0x6a5/0x940 [56532.161203] ? isolate_freepages_block+0x330/0x330 [56532.161205] ? compact_zone+0x593/0x6a0 [56532.161206] ? enqueue_task_fair+0xc3/0x1180 [56532.161208] ? compact_zone_order+0x9b/0xc0 [56532.161210] ? get_page_from_freelist+0x24a/0x900 [56532.161212] ? try_to_compact_pages+0xc8/0x240 [56532.161213] ? try_to_compact_pages+0xc8/0x240 [56532.161215] ? __alloc_pages_direct_compact+0x45/0xe0 [56532.161216] ? __alloc_pages_slowpath+0x845/0xb90 [56532.161218] ? __alloc_pages_nodemask+0x176/0x1f0 [56532.161220] ? wait_woken+0x80/0x80 [56532.161222] ? khugepaged+0x29e/0x17d0 [56532.161223] ? wait_woken+0x80/0x80 [56532.161225] ? collapse_shmem.isra.39+0xa60/0xa60 [56532.161226] ? kthread+0x10d/0x130 [56532.161227] ? kthread_create_on_node+0x60/0x60 [56532.161228] ? ret_from_fork+0x22/0x30 [56532.161229] Code: 00 8b b0 10 05 00 00 48 8d 8b b0 00 00 00 48 8d 90 b8 06 00 00 49 89 e8 48 c7 c7 38 55 09 82 c6 05 f9 c6 1d 01 01 e8 0e a1 03 00 <0f> ff e9 6b ff ff ff 90 48 8b 37 40 f6 c6 04 75 1b 48 c1 ee 05 [56532.161251] ---[ end trace 2ce2b4f5f69b803b ]--- Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20170911084135.22903-2-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
2017-09-11 15:41:25 +07:00
alloc_workqueue("i915-userptr-acquire",
drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM Commit 21cc6431e0c2 ("drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM") tried to fixup the check_flush_dependency warning for hitting i915_gem_userptr_mn_invalidate_range_start from within the shrinker, but I failed to notice userptr has 2 similarly named workqueues. I marked up i915-userptr-acquire as WQ_MEM_RECLAIM whereas we only wait upon i915-userptr-release from inside the reclaim paths. [62530.869510] workqueue: PF_MEMALLOC task 7983(gem_shrink) is flushing !WQ_MEM_RECLAIM i915-userptr-release: (null) [62530.869515] ------------[ cut here ]------------ [62530.869519] WARNING: CPU: 1 PID: 7983 at kernel/workqueue.c:2434 check_flush_dependency+0x7f/0x110 [62530.869519] Modules linked in: pegasus mii ip6table_filter ip6_tables bnep iptable_filter snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic binfmt_misc nls_iso8859_1 intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp snd_hda_intel snd_hda_codec kvm_intel snd_hda_core snd_hwdep kvm snd_pcm irqbypass snd_seq_midi snd_seq_midi_event snd_rawmidi crct10dif_pclmul crc32_pclmul 8250_dw ghash_clmulni_intel snd_seq pcbc snd_seq_device snd_timer btusb aesni_intel btrtl btbcm aes_x86_64 iwlwifi btintel crypto_simd glue_helper cryptd bluetooth snd intel_cstate input_leds idma64 intel_rapl_perf ecdh_generic serio_raw soundcore cfg80211 wmi_bmof virt_dma intel_lpss_pci intel_lpss acpi_als kfifo_buf industrialio winbond_cir soc_button_array rc_core spidev tpm_crb intel_hid acpi_pad mac_hid sparse_keymap [62530.869546] parport_pc ppdev lp parport ip_tables x_tables autofs4 hid_generic usbhid i915 i2c_algo_bit prime_numbers drm_kms_helper syscopyarea e1000e sysfillrect sysimgblt fb_sys_fops ahci ptp pps_core libahci drm wmi video i2c_hid hid [62530.869557] CPU: 1 PID: 7983 Comm: gem_shrink Tainted: G U W L 4.14.0-rc8-drm-tip-ww45-commit-1342299+ #1 [62530.869558] Hardware name: Intel Corporation CoffeeLake Client Platform/CoffeeLake H DDR4 RVP, BIOS CNLSFWR1.R00.X098.A00.1707301945 07/30/2017 [62530.869559] task: ffffa1049dbeec80 task.stack: ffffae7d05c44000 [62530.869560] RIP: 0010:check_flush_dependency+0x7f/0x110 [62530.869561] RSP: 0018:ffffae7d05c473a0 EFLAGS: 00010286 [62530.869562] RAX: 000000000000006e RBX: ffffa1049540f400 RCX: ffffffffa3e55788 [62530.869562] RDX: 0000000000000000 RSI: 0000000000000092 RDI: 0000000000000202 [62530.869563] RBP: ffffae7d05c473c0 R08: 000000000000006e R09: 000000000038bb0e [62530.869563] R10: 0000000000000000 R11: 000000000000006e R12: ffffa1049dbeec80 [62530.869564] R13: 0000000000000000 R14: 0000000000000000 R15: ffffae7d05c473e0 [62530.869565] FS: 00007f621b129880(0000) GS:ffffa1050b240000(0000) knlGS:0000000000000000 [62530.869566] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [62530.869566] CR2: 00007f6214400000 CR3: 0000000353a17003 CR4: 00000000003606e0 [62530.869567] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [62530.869567] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [62530.869568] Call Trace: [62530.869570] flush_workqueue+0x115/0x3d0 [62530.869573] ? wake_up_process+0x15/0x20 [62530.869596] i915_gem_userptr_mn_invalidate_range_start+0x12f/0x160 [i915] [62530.869614] ? i915_gem_userptr_mn_invalidate_range_start+0x12f/0x160 [i915] [62530.869616] __mmu_notifier_invalidate_range_start+0x55/0x80 [62530.869618] try_to_unmap_one+0x791/0x8b0 [62530.869620] ? call_rwsem_down_read_failed+0x18/0x30 [62530.869622] rmap_walk_anon+0x10b/0x260 [62530.869624] rmap_walk+0x48/0x60 [62530.869625] try_to_unmap+0x93/0xf0 [62530.869626] ? page_remove_rmap+0x2a0/0x2a0 [62530.869627] ? page_not_mapped+0x20/0x20 [62530.869629] ? page_get_anon_vma+0x90/0x90 [62530.869630] ? invalid_mkclean_vma+0x20/0x20 [62530.869631] migrate_pages+0x946/0xaa0 [62530.869633] ? __ClearPageMovable+0x10/0x10 [62530.869635] ? isolate_freepages_block+0x3c0/0x3c0 [62530.869636] compact_zone+0x22f/0x970 [62530.869638] compact_zone_order+0xa3/0xd0 [62530.869640] try_to_compact_pages+0x1a5/0x2a0 [62530.869641] ? try_to_compact_pages+0x1a5/0x2a0 [62530.869643] __alloc_pages_direct_compact+0x50/0x110 [62530.869644] __alloc_pages_slowpath+0x4da/0xf30 [62530.869646] __alloc_pages_nodemask+0x262/0x280 [62530.869648] alloc_pages_vma+0x165/0x1e0 [62530.869649] shmem_alloc_hugepage+0xd0/0x130 [62530.869651] ? __radix_tree_insert+0x45/0x230 [62530.869652] ? __vm_enough_memory+0x29/0x130 [62530.869654] shmem_alloc_and_acct_page+0x10d/0x1e0 [62530.869655] shmem_getpage_gfp+0x426/0xc00 [62530.869657] shmem_fault+0xa0/0x1e0 [62530.869659] ? file_update_time+0x60/0x110 [62530.869660] __do_fault+0x1e/0xc0 [62530.869661] __handle_mm_fault+0xa35/0x1170 [62530.869662] handle_mm_fault+0xcc/0x1c0 [62530.869664] __do_page_fault+0x262/0x4f0 [62530.869666] do_page_fault+0x2e/0xe0 [62530.869667] page_fault+0x22/0x30 [62530.869668] RIP: 0033:0x404335 [62530.869669] RSP: 002b:00007fff7829e420 EFLAGS: 00010216 [62530.869670] RAX: 00007f6210400000 RBX: 0000000000000004 RCX: 0000000000b80000 [62530.869670] RDX: 0000000000002e01 RSI: 0000000000008000 RDI: 0000000000000004 [62530.869671] RBP: 0000000000000019 R08: 0000000000000002 R09: 0000000000000000 [62530.869671] R10: 0000000000000559 R11: 0000000000000246 R12: 0000000008000000 [62530.869672] R13: 00000000004042f0 R14: 0000000000000004 R15: 000000000000007e [62530.869673] Code: 00 8b b0 18 05 00 00 48 8d 8b b0 00 00 00 48 8d 90 c0 06 00 00 4d 89 f0 48 c7 c7 40 c0 c8 a3 c6 05 68 c5 e8 00 01 e8 c2 68 04 00 <0f> ff 4d 85 ed 74 18 49 8b 45 20 48 8b 70 08 8b 86 00 01 00 00 [62530.869691] ---[ end trace 01e01ad0ff5781f8 ]--- Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103739 Fixes: 21cc6431e0c2 ("drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171114173520.8829-1-chris@chris-wilson.co.uk Reviewed-by: Matthew Auld <matthew.auld@intel.com> (cherry picked from commit 41729bf2248bc8593e5103d43974079cc269524c) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2017-11-15 00:35:20 +07:00
WQ_HIGHPRI | WQ_UNBOUND,
drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM To silence the critcs: [56532.161115] workqueue: PF_MEMALLOC task 36(khugepaged) is flushing !WQ_MEM_RECLAIM i915-userptr-release: (null) [56532.161138] ------------[ cut here ]------------ [56532.161144] WARNING: CPU: 1 PID: 36 at kernel/workqueue.c:2418 check_flush_dependency+0xe8/0xf0 [56532.161145] Modules linked in: wmi_bmof [56532.161148] CPU: 1 PID: 36 Comm: khugepaged Not tainted 4.13.0-krejzi #1 [56532.161149] Hardware name: HP HP ProBook 470 G3/8102, BIOS N78 Ver. 01.17 06/08/2017 [56532.161150] task: ffff8802371ee200 task.stack: ffffc90000174000 [56532.161152] RIP: 0010:check_flush_dependency+0xe8/0xf0 [56532.161152] RSP: 0018:ffffc900001777b8 EFLAGS: 00010286 [56532.161153] RAX: 000000000000006c RBX: ffff88022fc5a000 RCX: 0000000000000001 [56532.161154] RDX: 0000000000000000 RSI: 0000000000000086 RDI: 00000000ffffffff [56532.161155] RBP: 0000000000000000 R08: 14f038bb55f6dae0 R09: 0000000000000516 [56532.161155] R10: ffffc900001778a0 R11: 000000006c756e28 R12: ffff8802371ee200 [56532.161156] R13: 0000000000000000 R14: 000000000000000b R15: ffffc90000177810 [56532.161157] FS: 0000000000000000(0000) GS:ffff880240480000(0000) knlGS:0000000000000000 [56532.161158] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [56532.161158] CR2: 0000000004795ff8 CR3: 000000000220a000 CR4: 00000000003406e0 [56532.161159] Call Trace: [56532.161161] ? flush_workqueue+0x136/0x3e0 [56532.161178] ? _raw_spin_unlock_irqrestore+0xf/0x30 [56532.161179] ? try_to_wake_up+0x1ce/0x3b0 [56532.161183] ? i915_gem_userptr_mn_invalidate_range_start+0x13f/0x150 [56532.161184] ? _raw_spin_unlock+0xd/0x20 [56532.161186] ? i915_gem_userptr_mn_invalidate_range_start+0x13f/0x150 [56532.161189] ? __mmu_notifier_invalidate_range_start+0x4a/0x70 [56532.161191] ? try_to_unmap_one+0x5e5/0x660 [56532.161193] ? rmap_walk_file+0xe4/0x240 [56532.161195] ? __ClearPageMovable+0x10/0x10 [56532.161196] ? try_to_unmap+0x8c/0xe0 [56532.161197] ? page_remove_rmap+0x280/0x280 [56532.161199] ? page_not_mapped+0x10/0x10 [56532.161200] ? page_get_anon_vma+0x90/0x90 [56532.161202] ? migrate_pages+0x6a5/0x940 [56532.161203] ? isolate_freepages_block+0x330/0x330 [56532.161205] ? compact_zone+0x593/0x6a0 [56532.161206] ? enqueue_task_fair+0xc3/0x1180 [56532.161208] ? compact_zone_order+0x9b/0xc0 [56532.161210] ? get_page_from_freelist+0x24a/0x900 [56532.161212] ? try_to_compact_pages+0xc8/0x240 [56532.161213] ? try_to_compact_pages+0xc8/0x240 [56532.161215] ? __alloc_pages_direct_compact+0x45/0xe0 [56532.161216] ? __alloc_pages_slowpath+0x845/0xb90 [56532.161218] ? __alloc_pages_nodemask+0x176/0x1f0 [56532.161220] ? wait_woken+0x80/0x80 [56532.161222] ? khugepaged+0x29e/0x17d0 [56532.161223] ? wait_woken+0x80/0x80 [56532.161225] ? collapse_shmem.isra.39+0xa60/0xa60 [56532.161226] ? kthread+0x10d/0x130 [56532.161227] ? kthread_create_on_node+0x60/0x60 [56532.161228] ? ret_from_fork+0x22/0x30 [56532.161229] Code: 00 8b b0 10 05 00 00 48 8d 8b b0 00 00 00 48 8d 90 b8 06 00 00 49 89 e8 48 c7 c7 38 55 09 82 c6 05 f9 c6 1d 01 01 e8 0e a1 03 00 <0f> ff e9 6b ff ff ff 90 48 8b 37 40 f6 c6 04 75 1b 48 c1 ee 05 [56532.161251] ---[ end trace 2ce2b4f5f69b803b ]--- Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20170911084135.22903-2-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
2017-09-11 15:41:25 +07:00
0);
if (!dev_priv->mm.userptr_wq)
return -ENOMEM;
return 0;
}
void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv)
{
destroy_workqueue(dev_priv->mm.userptr_wq);
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-16 20:22:37 +07:00
}