linux_dsm_epyc7002/drivers/gpu/drm/i915/i915_gem_shrinker.c

/*
 * Copyright © 2008-2015 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 */

#include <linux/oom.h>
#include <linux/sched/mm.h>
#include <linux/shmem_fs.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/pci.h>
#include <linux/dma-buf.h>
#include <linux/vmalloc.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "i915_trace.h"
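
/*
 * Take i915->drm.struct_mutex on behalf of the shrinker, recording in
 * *unlock whether the caller now owns the mutex and must pass it to
 * shrinker_unlock() when done. If the trylock fails, only callers that
 * pass I915_SHRINK_ACTIVE (e.g. i915_gem_shrink_all()) fall back to a
 * blocking killable lock; plain direct-reclaim gives up immediately to
 * keep reclaim latency low for non-i915 clients.
 */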
static bool shrinker_lock(struct drm_i915_private *i915,
			  unsigned int flags,
			  bool *unlock)
{
	struct mutex *m = &i915->drm.struct_mutex;

	switch (mutex_trylock_recursive(m)) {
	case MUTEX_TRYLOCK_RECURSIVE:
		*unlock = false;
		return true;

	case MUTEX_TRYLOCK_FAILED:
		*unlock = false;
		if (flags & I915_SHRINK_ACTIVE &&
		    mutex_lock_killable_nested(m, I915_MM_SHRINKER) == 0)
			*unlock = true;
		return *unlock;

	case MUTEX_TRYLOCK_SUCCESS:
		*unlock = true;
		return true;
	}
	BUG();
}

static void shrinker_unlock(struct drm_i915_private *i915, bool unlock)
{
	if (!unlock)
		return;

	mutex_unlock(&i915->drm.struct_mutex);
}

static bool swap_available(void)
{
	return get_nr_swap_pages() > 0;
}

static bool can_release_pages(struct drm_i915_gem_object *obj)
{
	/* Consider only shrinkable objects. */
	if (!i915_gem_object_is_shrinkable(obj))
		return false;

	/* Only report true if by unbinding the object and putting its pages
	 * we can actually make forward progress towards freeing physical
	 * pages.
	 *
	 * If the pages are pinned for any other reason than being bound
	 * to the GPU, simply unbinding from the GPU is not going to succeed
	 * in releasing our pin count on the pages themselves.
	 */
	if (atomic_read(&obj->mm.pages_pin_count) > obj->bind_count)
		return false;

	/* If any vma are "permanently" pinned, it will prevent us from
	 * reclaiming the obj->mm.pages. We only allow scanout objects to claim
	 * a permanent pin, along with a few others like the context objects.
	 * To simplify the scan, and to avoid walking the list of vma under the
	 * object, we just check its permanent pin count.
	 */
	if (READ_ONCE(obj->pin_global))
		return false;

	/* We can only return physical pages to the system if we can either
	 * discard the contents (because the user has marked them as being
	 * purgeable) or if we can move their contents out to swap.
	 */
	return swap_available() || obj->mm.madv == I915_MADV_DONTNEED;
}
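
/*
 * Attempt to unbind the object and, if that succeeds, drop its backing
 * pages. Returns true if the object ends up with no pages bound, i.e.
 * the caller made forward progress towards freeing memory.
 */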
static bool unsafe_drop_pages(struct drm_i915_gem_object *obj)
{
	if (i915_gem_object_unbind(obj) == 0)
		__i915_gem_object_put_pages(obj, I915_MM_SHRINKER);

	return !i915_gem_object_has_pages(obj);
}
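
/*
 * After an object's pages have been released, try to push its dirty
 * shmemfs pages towards swap: truncate objects the user marked as
 * purgeable, and start asynchronous writeback on the rest so that
 * direct reclaim can later wait on the writeback instead of rescanning
 * the same pages.
 */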
static void __start_writeback(struct drm_i915_gem_object *obj,
			      unsigned int flags)
{
	struct address_space *mapping;
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_NONE,
		.nr_to_write = SWAP_CLUSTER_MAX,
		.range_start = 0,
		.range_end = LLONG_MAX,
		.for_reclaim = 1,
	};
	unsigned long i;

	lockdep_assert_held(&obj->mm.lock);
	GEM_BUG_ON(i915_gem_object_has_pages(obj));

	switch (obj->mm.madv) {
	case I915_MADV_DONTNEED:
		__i915_gem_object_truncate(obj);
		/* fall through */
	case __I915_MADV_PURGED:
		return;
	}

	if (!obj->base.filp)
		return;

	if (!(flags & I915_SHRINK_WRITEBACK))
		return;

	/*
	 * Leave mmappings intact (GTT will have been revoked on unbinding,
	 * leaving only CPU mmappings around) and add those pages to the LRU
	 * instead of invoking writeback so they are aged and paged out
	 * as normal.
	 */
	mapping = obj->base.filp->f_mapping;

	/* Begin writeback on each dirty page */
	for (i = 0; i < obj->base.size >> PAGE_SHIFT; i++) {
		struct page *page;

		page = find_lock_entry(mapping, i);
		if (!page || xa_is_value(page))
			continue;

		if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
			int ret;

			SetPageReclaim(page);
			ret = mapping->a_ops->writepage(page, &wbc);
			if (!PageWriteback(page))
				ClearPageReclaim(page);
			if (!ret)
				goto put;
		}
		unlock_page(page);
put:
		put_page(page);
	}
}

/**
 * i915_gem_shrink - Shrink buffer object caches
 * @i915: i915 device
 * @target: amount of memory to make available, in pages
 * @nr_scanned: optional output for number of pages scanned (incremental)
 * @flags: control flags for selecting cache types
 *
 * This function is the main interface to the shrinker. It will try to release
 * up to @target pages of main memory backing storage from buffer objects.
 * Selection of the specific caches can be done with @flags. This is e.g. useful
 * when purgeable objects should be removed from caches preferentially.
 *
 * Note that it's not guaranteed that the released amount is actually available
 * as free system memory - the pages might still be in use due to other reasons
 * (like cpu mmaps) or the mm core has reused them before we could grab them.
 * Therefore code that needs to explicitly shrink buffer object caches (e.g. to
 * avoid deadlocks in memory reclaim) must fall back to i915_gem_shrink_all().
 *
 * Also note that any kind of pinning (both per-vma address space pins and
 * backing storage pins at the buffer object level) results in the shrinker
 * having to skip the object.
 *
 * Returns:
 * The number of pages of backing storage actually released.
 */
unsigned long
i915_gem_shrink(struct drm_i915_private *i915,
		unsigned long target,
		unsigned long *nr_scanned,
		unsigned flags)
{
	const struct {
		struct list_head *list;
		unsigned int bit;
	} phases[] = {
		{ &i915->mm.unbound_list, I915_SHRINK_UNBOUND },
		{ &i915->mm.bound_list, I915_SHRINK_BOUND },
		{ NULL, 0 },
	}, *phase;
	intel_wakeref_t wakeref = 0;
	unsigned long count = 0;
	unsigned long scanned = 0;
	bool unlock;

	if (!shrinker_lock(i915, flags, &unlock))
		return 0;

	/*
	 * When shrinking the active list, also consider active contexts.
	 * Active contexts are pinned until they are retired, and so can
	 * not be simply unbound to retire and unpin their pages. To shrink
	 * the contexts, we must wait until the gpu is idle.
	 *
	 * We don't care about errors here; if we cannot wait upon the GPU,
	 * we will free as much as we can and hope to get a second chance.
	 */
	if (flags & I915_SHRINK_ACTIVE)
		i915_gem_wait_for_idle(i915,
				       I915_WAIT_LOCKED,
				       MAX_SCHEDULE_TIMEOUT);

	trace_i915_gem_shrink(i915, target, flags);
	i915_retire_requests(i915);

	/*
	 * Unbinding of objects will require HW access; Let us not wake the
	 * device just to recover a little memory. If absolutely necessary,
	 * we will force the wake during oom-notifier.
	 */
	if (flags & I915_SHRINK_BOUND) {
		wakeref = intel_runtime_pm_get_if_in_use(i915);
		if (!wakeref)
			flags &= ~I915_SHRINK_BOUND;
	}

	/*
	 * As we may completely rewrite the (un)bound list whilst unbinding
	 * (due to retiring requests) we have to strictly process only
	 * one element of the list at the time, and recheck the list
	 * on every iteration.
	 *
	 * In particular, we must hold a reference whilst removing the
	 * object as we may end up waiting for and/or retiring the objects.
	 * This might release the final reference (held by the active list)
	 * and result in the object being freed from under us. This is
	 * similar to the precautions the eviction code must take whilst
	 * removing objects.
	 *
	 * Also note that although these lists do not hold a reference to
	 * the object we can safely grab one here: The final object
	 * unreferencing and the bound_list are both protected by the
	 * dev->struct_mutex and so we won't ever be able to observe an
	 * object on the bound_list with a reference count of 0.
	 */
	for (phase = phases; phase->list; phase++) {
		struct list_head still_in_list;
		struct drm_i915_gem_object *obj;

		if ((flags & phase->bit) == 0)
			continue;

		INIT_LIST_HEAD(&still_in_list);

		/*
		 * We serialize our access to unreferenced objects through
		 * the use of the struct_mutex. While the objects are not
		 * yet freed (due to RCU then a workqueue) we still want
		 * to be able to shrink their pages, so they remain on
		 * the unbound/bound list until actually freed.
		 */
		spin_lock(&i915->mm.obj_lock);
		while (count < target &&
		       (obj = list_first_entry_or_null(phase->list,
						       typeof(*obj),
						       mm.link))) {
			list_move_tail(&obj->mm.link, &still_in_list);

			if (flags & I915_SHRINK_PURGEABLE &&
			    obj->mm.madv != I915_MADV_DONTNEED)
				continue;

			if (flags & I915_SHRINK_VMAPS &&
			    !is_vmalloc_addr(obj->mm.mapping))
				continue;

			if (!(flags & I915_SHRINK_ACTIVE) &&
			    (i915_gem_object_is_active(obj) ||
			     i915_gem_object_is_framebuffer(obj)))
				continue;

			if (!can_release_pages(obj))
				continue;

			spin_unlock(&i915->mm.obj_lock);

			if (unsafe_drop_pages(obj)) {
				/* May arrive from get_pages on another bo */
				mutex_lock_nested(&obj->mm.lock,
						  I915_MM_SHRINKER);
				if (!i915_gem_object_has_pages(obj)) {
					__start_writeback(obj, flags);
					count += obj->base.size >> PAGE_SHIFT;
				}
				mutex_unlock(&obj->mm.lock);
			}
			scanned += obj->base.size >> PAGE_SHIFT;

			spin_lock(&i915->mm.obj_lock);
		}
		list_splice_tail(&still_in_list, phase->list);
		spin_unlock(&i915->mm.obj_lock);
	}

	if (flags & I915_SHRINK_BOUND)
		intel_runtime_pm_put(i915, wakeref);

	i915_retire_requests(i915);

	shrinker_unlock(i915, unlock);

	if (nr_scanned)
		*nr_scanned += scanned;
	return count;
}
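
/*
 * Illustrative sketch (not called anywhere in this file): a caller that
 * only wants to drop already-purgeable objects, without touching active
 * or vmapped ones, might invoke i915_gem_shrink() as:
 *
 *	unsigned long freed;
 *
 *	freed = i915_gem_shrink(i915, 128, NULL,
 *				I915_SHRINK_BOUND |
 *				I915_SHRINK_UNBOUND |
 *				I915_SHRINK_PURGEABLE);
 */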

/**
 * i915_gem_shrink_all - Shrink buffer object caches completely
 * @i915: i915 device
 *
 * This is a simple wrapper around i915_gem_shrink() to aggressively shrink all
 * caches completely. It also first waits for and retires all outstanding
 * requests to also be able to release backing storage for active objects.
 *
 * This should only be used in code to intentionally quiesce the GPU or as a
 * last-ditch effort when memory seems to have run out.
 *
 * Returns:
 * The number of pages of backing storage actually released.
 */
unsigned long i915_gem_shrink_all(struct drm_i915_private *i915)
{
	intel_wakeref_t wakeref;
	unsigned long freed = 0;

	with_intel_runtime_pm(i915, wakeref) {
		freed = i915_gem_shrink(i915, -1UL, NULL,
					I915_SHRINK_BOUND |
					I915_SHRINK_UNBOUND |
					I915_SHRINK_ACTIVE);
	}

	return freed;
}
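
/*
 * Shrinker core "count" callback: estimate how many pages we could
 * plausibly release by walking both object lists and summing the sizes
 * of objects that can_release_pages() approves of.
 */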
static unsigned long
i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct drm_i915_private *i915 =
		container_of(shrinker, struct drm_i915_private, mm.shrinker);
	struct drm_i915_gem_object *obj;
	unsigned long num_objects = 0;
	unsigned long count = 0;

	spin_lock(&i915->mm.obj_lock);
	list_for_each_entry(obj, &i915->mm.unbound_list, mm.link)
		if (can_release_pages(obj)) {
			count += obj->base.size >> PAGE_SHIFT;
			num_objects++;
		}

	list_for_each_entry(obj, &i915->mm.bound_list, mm.link)
		if (!i915_gem_object_is_active(obj) && can_release_pages(obj)) {
			count += obj->base.size >> PAGE_SHIFT;
			num_objects++;
		}
	spin_unlock(&i915->mm.obj_lock);

	/* Update our preferred vmscan batch size for the next pass.
	 * Our rough guess for an effective batch size is roughly 2
	 * available GEM objects worth of pages. That is, we don't want
	 * the shrinker to fire until it is worth the cost of freeing an
	 * entire GEM object.
	 */
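	/* Illustrative numbers: 4 shrinkable objects totalling 1024 pages
	 * give avg = 2 * 1024 / 4 = 512, so the batch size moves halfway
	 * towards 512 pages on each pass, floored at the default
	 * SHRINK_BATCH of 128.
	 */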
	if (num_objects) {
		unsigned long avg = 2 * count / num_objects;

		i915->mm.shrinker.batch =
			max((i915->mm.shrinker.batch + avg) >> 1,
			    128ul /* default SHRINK_BATCH */);
	}

	return count;
}

static unsigned long
i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct drm_i915_private *i915 =
		container_of(shrinker, struct drm_i915_private, mm.shrinker);
	unsigned long freed;
	bool unlock;

	sc->nr_scanned = 0;
if (!shrinker_lock(i915, 0, &unlock))
return SHRINK_STOP;
freed = i915_gem_shrink(i915,
sc->nr_to_scan,
&sc->nr_scanned,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_PURGEABLE |
I915_SHRINK_WRITEBACK);
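/* If discarding the purgeable (MADV_DONTNEED) objects was not
 * enough, repeat the pass over all idle objects, writing dirty
 * pages back to swap as we go.
 */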
if (sc->nr_scanned < sc->nr_to_scan)
freed += i915_gem_shrink(i915,
sc->nr_to_scan - sc->nr_scanned,
&sc->nr_scanned,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
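/* Only kswapd is allowed the expense of waking the device and
 * reaping objects still active on the GPU; direct reclaim has by
 * now been served as well as we can without stalling.
 */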
if (sc->nr_scanned < sc->nr_to_scan && current_is_kswapd()) {
intel_wakeref_t wakeref;
with_intel_runtime_pm(i915, wakeref) {
freed += i915_gem_shrink(i915,
sc->nr_to_scan - sc->nr_scanned,
&sc->nr_scanned,
I915_SHRINK_ACTIVE |
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
}
}
shrinker_unlock(i915, unlock);
return sc->nr_scanned ? freed : SHRINK_STOP;
}
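/* Last-ditch reclaim from the oom-notifier: release every page we
 * possibly can and report back how many pages we freed so the
 * oom-killer can take them into account.
 */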
static int
i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
{
struct drm_i915_private *i915 =
container_of(nb, struct drm_i915_private, mm.oom_notifier);
struct drm_i915_gem_object *obj;
unsigned long unevictable, bound, unbound, freed_pages;
intel_wakeref_t wakeref;
freed_pages = 0;
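/* Take a runtime-pm wakeref so that even bound objects may be
 * unbound and their backing pages released.
 */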
with_intel_runtime_pm(i915, wakeref)
freed_pages += i915_gem_shrink(i915, -1UL, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
/* Because we may be allocating inside our own driver, we cannot
* assert that there are no objects with pinned pages that are not
* being pointed to by hardware.
*/
unbound = bound = unevictable = 0;
spin_lock(&i915->mm.obj_lock);
list_for_each_entry(obj, &i915->mm.unbound_list, mm.link) {
if (!can_release_pages(obj))
unevictable += obj->base.size >> PAGE_SHIFT;
else
unbound += obj->base.size >> PAGE_SHIFT;
}
list_for_each_entry(obj, &i915->mm.bound_list, mm.link) {
if (!can_release_pages(obj))
unevictable += obj->base.size >> PAGE_SHIFT;
else
bound += obj->base.size >> PAGE_SHIFT;
}
spin_unlock(&i915->mm.obj_lock);
if (freed_pages || unbound || bound)
pr_info("Purging GPU memory, %lu pages freed, "
"%lu pages still pinned.\n",
freed_pages, unevictable);
*(unsigned long *)ptr += freed_pages;
return NOTIFY_DONE;
}
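/* vmap-purge notifier: the kernel is running out of vmap address
 * space, so unpin our cached iomaps and release any vmaps held by
 * idle objects to let the address range be reused.
 */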
static int
i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
{
struct drm_i915_private *i915 =
container_of(nb, struct drm_i915_private, mm.vmap_notifier);
struct i915_vma *vma, *next;
unsigned long freed_pages = 0;
intel_wakeref_t wakeref;
bool unlock;
if (!shrinker_lock(i915, 0, &unlock))
return NOTIFY_DONE;
/* Force everything onto the inactive lists */
if (i915_gem_wait_for_idle(i915,
I915_WAIT_LOCKED,
MAX_SCHEDULE_TIMEOUT))
goto out;
with_intel_runtime_pm(i915, wakeref)
freed_pages += i915_gem_shrink(i915, -1UL, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_VMAPS);
/* We also want to clear any cached iomaps as they wrap vmap */
mutex_lock(&i915->ggtt.vm.mutex);
list_for_each_entry_safe(vma, next,
&i915->ggtt.vm.bound_list, vm_link) {
unsigned long count = vma->node.size >> PAGE_SHIFT;
if (!vma->iomap || i915_vma_is_active(vma))
continue;
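/* i915_vma_unbind() may sleep and take further locks, so drop
 * the vm.mutex around the call and reacquire it before resuming
 * the walk.
 */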
mutex_unlock(&i915->ggtt.vm.mutex);
if (i915_vma_unbind(vma) == 0)
freed_pages += count;
mutex_lock(&i915->ggtt.vm.mutex);
}
mutex_unlock(&i915->ggtt.vm.mutex);
out:
shrinker_unlock(i915, unlock);
*(unsigned long *)ptr += freed_pages;
return NOTIFY_DONE;
}
/**
* i915_gem_shrinker_register - Register the i915 shrinker
* @i915: i915 device
*
* This function registers and sets up the i915 shrinker and OOM handler.
*/
void i915_gem_shrinker_register(struct drm_i915_private *i915)
{
i915->mm.shrinker.scan_objects = i915_gem_shrinker_scan;
i915->mm.shrinker.count_objects = i915_gem_shrinker_count;
i915->mm.shrinker.seeks = DEFAULT_SEEKS;
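/* Start with a batch of 4096 pages (16MiB for 4KiB pages);
 * i915_gem_shrinker_count() re-tunes this towards two objects'
 * worth of pages on every count pass.
 */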
i915->mm.shrinker.batch = 4096;
WARN_ON(register_shrinker(&i915->mm.shrinker));
i915->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
WARN_ON(register_oom_notifier(&i915->mm.oom_notifier));
i915->mm.vmap_notifier.notifier_call = i915_gem_shrinker_vmap;
WARN_ON(register_vmap_purge_notifier(&i915->mm.vmap_notifier));
}
/**
* i915_gem_shrinker_unregister - Unregister the i915 shrinker
* @i915: i915 device
*
* This function unregisters the i915 shrinker and OOM handler.
*/
void i915_gem_shrinker_unregister(struct drm_i915_private *i915)
{
WARN_ON(unregister_vmap_purge_notifier(&i915->mm.vmap_notifier));
WARN_ON(unregister_oom_notifier(&i915->mm.oom_notifier));
unregister_shrinker(&i915->mm.shrinker);
}
void i915_gem_shrinker_taints_mutex(struct drm_i915_private *i915,
struct mutex *mutex)
{
bool unlock = false;
if (!IS_ENABLED(CONFIG_LOCKDEP))
return;
if (!lockdep_is_held_type(&i915->drm.struct_mutex, -1)) {
mutex_acquire(&i915->drm.struct_mutex.dep_map,
I915_MM_NORMAL, 0, _RET_IP_);
unlock = true;
}
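/* Pretend to enter the direct-reclaim path so that lockdep
 * records that @mutex may be taken from within reclaim. The
 * acquire/release pairs below only teach lockdep the ordering;
 * they never block.
 */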
fs_reclaim_acquire(GFP_KERNEL);
/*
* As we invariably rely on the struct_mutex within the shrinker,
* but have a complicated recursion dance, taint all the mutexes used
* within the shrinker with the struct_mutex. For completeness, we
* taint with all subclasses of struct_mutex, even though we should
* only need tainting by I915_MM_NORMAL to catch possible ABBA
* deadlocks from using struct_mutex inside @mutex.
*/
mutex_acquire(&i915->drm.struct_mutex.dep_map,
I915_MM_SHRINKER, 0, _RET_IP_);
mutex_acquire(&mutex->dep_map, 0, 0, _RET_IP_);
mutex_release(&mutex->dep_map, 0, _RET_IP_);
mutex_release(&i915->drm.struct_mutex.dep_map, 0, _RET_IP_);
fs_reclaim_release(GFP_KERNEL);
if (unlock)
mutex_release(&i915->drm.struct_mutex.dep_map, 0, _RET_IP_);
}