Commit Graph

423 Commits

Author SHA1 Message Date
Chris Wilson
6ad145fe02 drm/i915/gt: Give engine->kernel_context distinct timeline lock classes
Assign a separate lockclass to the perma-pinned timelines of the
kernel_context, such that we can use them from within the user timelines
should we ever need to inject GPU operations to fixup faults during
request construction.
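
For illustration, a minimal sketch of the generic lockdep mechanism a change
like this relies on, using made-up names rather than the real i915 structures:
give one mutex its own static lock_class_key so lockdep treats it as a class
distinct from its siblings.

#include <linux/mutex.h>
#include <linux/lockdep.h>

struct example_timeline {
	struct mutex mutex;
};

static void example_timeline_init(struct example_timeline *tl, bool kernel)
{
	/* one key per distinct class; must have static storage duration */
	static struct lock_class_key kernel_class;

	mutex_init(&tl->mutex);

	/* perma-pinned kernel timelines get their own lock class */
	if (kernel)
		lockdep_set_class(&tl->mutex, &kernel_class);
}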

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008185941.15228-1-chris@chris-wilson.co.uk
2019-10-08 22:19:00 +01:00
Chris Wilson
d99f7b079c drm/i915/gt: Flush submission tasklet before waiting/retiring
A common bane of ours is arbitrary delays in ksoftirqd processing our
submission tasklet. Give the submission tasklet a kick before we wait to
avoid those delays eating into a tight timeout.
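
For illustration only, one generic way to give a tasklet such a kick before
waiting (placeholder names, not the i915 helper): if nobody else is running
it, execute the callback directly so a delayed ksoftirqd cannot eat into the
timeout of the wait that follows.

#include <linux/interrupt.h>

static void example_flush_tasklet(struct tasklet_struct *t)
{
	/* run the handler ourselves if it is not already running */
	if (tasklet_trylock(t)) {
		t->func(t->data);
		tasklet_unlock(t);
	}
	/* ... proceed with the timed wait ... */
}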

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008105655.13256-1-chris@chris-wilson.co.uk
2019-10-08 16:23:55 +01:00
Chris Wilson
d14a701b00 drm/i915/selftests: Assign the intel_runtime_pm pointer for mock_uncore
Couple up our mock_uncore to know about the fake global device and its
runtime power management.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008145045.23157-1-chris@chris-wilson.co.uk
2019-10-08 16:21:50 +01:00
Chris Wilson
3de1627851 drm/i915/selftests: Assign the mock_engine->uncore shortcut
Set up the engine->uncore shortcut on mock_engine creation.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008071121.25088-1-chris@chris-wilson.co.uk
2019-10-08 10:14:46 +01:00
Chris Wilson
20af04f3dd drm/i915/execlists: Assign virtual_engine->uncore from first sibling
Copy across the engine->uncore shortcut to the virtual_engine from its
first physical engine, similar to the handling of the engine->gt
backpointer.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008070342.4045-1-chris@chris-wilson.co.uk
2019-10-08 10:14:29 +01:00
Chris Wilson
a1b58ee3cb drm/i915/gt: Treat a busy timeline as 'active' while waiting
If we cannot claim the timeline->mutex while preparing for a wait on it,
we have to skip the timeline. In doing so, treat it as active so that
under an intel_gt_wait_for_idle() loop, we repeat the wait after
scheduling away.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191006165002.30312-4-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Chris Wilson
1664f35aa7 drm/i915/selftests: Appease lockdep
Disable irqs around updating the context image to keep lockdep happy:

<4>[  673.483340] WARNING: possible irq lock inversion dependency detected
<4>[  673.483342] 5.4.0-rc1-CI-Trybot_5118+ #1 Tainted: G     U
<4>[  673.483342] --------------------------------------------------------
<4>[  673.483343] swapper/2/0 just changed the state of lock:
<4>[  673.483344] ffff88845db885a0 (&i915_request_get(rq)->submit/1){-...}, at: __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.483387] but this lock took another, HARDIRQ-unsafe lock in the past:
<4>[  673.483388]  (&ce->pin_mutex/2){+...}
<4>[  673.483389]

                  and interrupts could create inverse lock ordering between them.

<4>[  673.483390]
                  other info that might help us debug this:
<4>[  673.483390] Chain exists of:
                    &i915_request_get(rq)->submit/1 --> &engine->active.lock --> &ce->pin_mutex/2

<4>[  673.483392]  Possible interrupt unsafe locking scenario:

<4>[  673.483392]        CPU0                    CPU1
<4>[  673.483393]        ----                    ----
<4>[  673.483393]   lock(&ce->pin_mutex/2);
<4>[  673.483394]                                local_irq_disable();
<4>[  673.483395]                                lock(&i915_request_get(rq)->submit/1);
<4>[  673.483396]                                lock(&engine->active.lock);
<4>[  673.483396]   <Interrupt>
<4>[  673.483397]     lock(&i915_request_get(rq)->submit/1);
<4>[  673.483398]
                   *** DEADLOCK ***

<4>[  673.483398] 2 locks held by swapper/2/0:
<4>[  673.483399]  #0: ffff8883f61ac9b0 (&(&gt->irq_lock)->rlock){-.-.}, at: gen11_gt_irq_handler+0x42/0x280 [i915]
<4>[  673.483433]  #1: ffff88845db8c418 (&(&rq->lock)->rlock){-.-.}, at: intel_engine_breadcrumbs_irq+0x34a/0x5a0 [i915]
<4>[  673.483463]
                  the shortest dependencies between 2nd lock and 1st lock:
<4>[  673.483466]   -> (&ce->pin_mutex/2){+...} ops: 614520 {
<4>[  673.483468]      HARDIRQ-ON-W at:
<4>[  673.483471]                         lock_acquire+0xa7/0x1c0
<4>[  673.483501]                         live_unlite_restore+0x1d8/0x6c0 [i915]
<4>[  673.483543]                         __i915_subtests+0xb8/0x210 [i915]
<4>[  673.483581]                         __run_selftests+0x112/0x170 [i915]
<4>[  673.483615]                         i915_live_selftests+0x2c/0x60 [i915]
<4>[  673.483644]                         i915_pci_probe+0x93/0x1b0 [i915]
<4>[  673.483646]                         pci_device_probe+0x9e/0x120
<4>[  673.483648]                         really_probe+0xea/0x420
<4>[  673.483649]                         driver_probe_device+0x10b/0x120
<4>[  673.483651]                         device_driver_attach+0x4a/0x50
<4>[  673.483652]                         __driver_attach+0x97/0x130
<4>[  673.483653]                         bus_for_each_dev+0x74/0xc0
<4>[  673.483654]                         bus_add_driver+0x142/0x220
<4>[  673.483655]                         driver_register+0x56/0xf0
<4>[  673.483657]                         do_one_initcall+0x58/0x2ff
<4>[  673.483659]                         do_init_module+0x56/0x1f8
<4>[  673.483660]                         load_module+0x243e/0x29f0
<4>[  673.483661]                         __do_sys_finit_module+0xe9/0x110
<4>[  673.483662]                         do_syscall_64+0x4f/0x210
<4>[  673.483665]                         entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.483665]      INITIAL USE at:
<4>[  673.483667]                        lock_acquire+0xa7/0x1c0
<4>[  673.483698]                        live_unlite_restore+0x1d8/0x6c0 [i915]
<4>[  673.483733]                        __i915_subtests+0xb8/0x210 [i915]
<4>[  673.483764]                        __run_selftests+0x112/0x170 [i915]
<4>[  673.483793]                        i915_live_selftests+0x2c/0x60 [i915]
<4>[  673.483821]                        i915_pci_probe+0x93/0x1b0 [i915]
<4>[  673.483822]                        pci_device_probe+0x9e/0x120
<4>[  673.483824]                        really_probe+0xea/0x420
<4>[  673.483825]                        driver_probe_device+0x10b/0x120
<4>[  673.483826]                        device_driver_attach+0x4a/0x50
<4>[  673.483827]                        __driver_attach+0x97/0x130
<4>[  673.483828]                        bus_for_each_dev+0x74/0xc0
<4>[  673.483829]                        bus_add_driver+0x142/0x220
<4>[  673.483830]                        driver_register+0x56/0xf0
<4>[  673.483831]                        do_one_initcall+0x58/0x2ff
<4>[  673.483833]                        do_init_module+0x56/0x1f8
<4>[  673.483834]                        load_module+0x243e/0x29f0
<4>[  673.483835]                        __do_sys_finit_module+0xe9/0x110
<4>[  673.483836]                        do_syscall_64+0x4f/0x210
<4>[  673.483837]                        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.483838]    }
<4>[  673.483868]    ... key      at: [<ffffffffa0a8f132>] __key.70113+0x2/0xffffffffffef2ed0 [i915]
<4>[  673.483869]    ... acquired at:
<4>[  673.483935]    __execlists_reset+0xfb/0xc20 [i915]
<4>[  673.483965]    execlists_reset+0x3d/0x50 [i915]
<4>[  673.483995]    intel_engine_reset+0xdf/0x230 [i915]
<4>[  673.484022]    live_preempt_hang+0x1d7/0x2e0 [i915]
<4>[  673.484064]    __i915_subtests+0xb8/0x210 [i915]
<4>[  673.484130]    __run_selftests+0x112/0x170 [i915]
<4>[  673.484163]    i915_live_selftests+0x2c/0x60 [i915]
<4>[  673.484193]    i915_pci_probe+0x93/0x1b0 [i915]
<4>[  673.484194]    pci_device_probe+0x9e/0x120
<4>[  673.484195]    really_probe+0xea/0x420
<4>[  673.484196]    driver_probe_device+0x10b/0x120
<4>[  673.484197]    device_driver_attach+0x4a/0x50
<4>[  673.484198]    __driver_attach+0x97/0x130
<4>[  673.484199]    bus_for_each_dev+0x74/0xc0
<4>[  673.484200]    bus_add_driver+0x142/0x220
<4>[  673.484202]    driver_register+0x56/0xf0
<4>[  673.484203]    do_one_initcall+0x58/0x2ff
<4>[  673.484204]    do_init_module+0x56/0x1f8
<4>[  673.484205]    load_module+0x243e/0x29f0
<4>[  673.484206]    __do_sys_finit_module+0xe9/0x110
<4>[  673.484207]    do_syscall_64+0x4f/0x210
<4>[  673.484208]    entry_SYSCALL_64_after_hwframe+0x49/0xbe

<4>[  673.484209]  -> (&engine->active.lock){..-.} ops: 972791 {
<4>[  673.484211]     IN-SOFTIRQ-W at:
<4>[  673.484213]                       lock_acquire+0xa7/0x1c0
<4>[  673.484214]                       _raw_spin_lock_irqsave+0x33/0x50
<4>[  673.484244]                       execlists_submission_tasklet+0xaf/0x100 [i915]
<4>[  673.484246]                       tasklet_action_common.isra.18+0x6c/0x1c0
<4>[  673.484247]                       __do_softirq+0xdf/0x47f
<4>[  673.484248]                       irq_exit+0xba/0xc0
<4>[  673.484249]                       do_IRQ+0x83/0x160
<4>[  673.484250]                       ret_from_intr+0x0/0x1d
<4>[  673.484252]                       cpuidle_enter_state+0xb2/0x450
<4>[  673.484253]                       cpuidle_enter+0x24/0x40
<4>[  673.484254]                       do_idle+0x1e7/0x250
<4>[  673.484256]                       cpu_startup_entry+0x14/0x20
<4>[  673.484257]                       start_secondary+0x15f/0x1b0
<4>[  673.484258]                       secondary_startup_64+0xa4/0xb0
<4>[  673.484259]     INITIAL USE at:
<4>[  673.484261]                      lock_acquire+0xa7/0x1c0
<4>[  673.484290]                      intel_engine_init_active+0x7e/0xb0 [i915]
<4>[  673.484305]                      intel_engines_setup+0x1cd/0x3b0 [i915]
<4>[  673.484305]                      i915_gem_init+0x12d/0x900 [i915]
<4>[  673.484305]                      i915_driver_probe+0xb70/0x15d0 [i915]
<4>[  673.484305]                      i915_pci_probe+0x43/0x1b0 [i915]
<4>[  673.484305]                      pci_device_probe+0x9e/0x120
<4>[  673.484305]                      really_probe+0xea/0x420
<4>[  673.484305]                      driver_probe_device+0x10b/0x120
<4>[  673.484305]                      device_driver_attach+0x4a/0x50
<4>[  673.484305]                      __driver_attach+0x97/0x130
<4>[  673.484305]                      bus_for_each_dev+0x74/0xc0
<4>[  673.484305]                      bus_add_driver+0x142/0x220
<4>[  673.484305]                      driver_register+0x56/0xf0
<4>[  673.484305]                      do_one_initcall+0x58/0x2ff
<4>[  673.484305]                      do_init_module+0x56/0x1f8
<4>[  673.484305]                      load_module+0x243e/0x29f0
<4>[  673.484305]                      __do_sys_finit_module+0xe9/0x110
<4>[  673.484305]                      do_syscall_64+0x4f/0x210
<4>[  673.484305]                      entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.484305]   }
<4>[  673.484305]   ... key      at: [<ffffffffa0a8f160>] __key.70307+0x0/0xffffffffffef2ea0 [i915]
<4>[  673.484305]   ... acquired at:
<4>[  673.484305]    _raw_spin_lock_irqsave+0x33/0x50
<4>[  673.484305]    execlists_submit_request+0x2b/0x1e0 [i915]
<4>[  673.484305]    submit_notify+0xa8/0x13c [i915]
<4>[  673.484305]    __i915_sw_fence_complete+0x81/0x250 [i915]
<4>[  673.484305]    i915_sw_fence_wake+0x51/0x70 [i915]
<4>[  673.484305]    __i915_sw_fence_complete+0x1ee/0x250 [i915]
<4>[  673.484305]    dma_i915_sw_fence_wake+0x1b/0x30 [i915]
<4>[  673.484305]    dma_fence_signal_locked+0x9e/0x1b0
<4>[  673.484305]    dma_fence_signal+0x1f/0x40
<4>[  673.484305]    fence_work+0x28/0x80 [i915]
<4>[  673.484305]    process_one_work+0x26a/0x620
<4>[  673.484305]    worker_thread+0x37/0x380
<4>[  673.484305]    kthread+0x119/0x130
<4>[  673.484305]    ret_from_fork+0x24/0x50

<4>[  673.484305] -> (&i915_request_get(rq)->submit/1){-...} ops: 857694 {
<4>[  673.484305]    IN-HARDIRQ-W at:
<4>[  673.484305]                     lock_acquire+0xa7/0x1c0
<4>[  673.484305]                     _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]                     __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]                     intel_engine_breadcrumbs_irq+0x3d0/0x5a0 [i915]
<4>[  673.484305]                     cs_irq_handler+0x39/0x50 [i915]
<4>[  673.484305]                     gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  673.484305]                     gen11_irq_handler+0x54/0xf0 [i915]
<4>[  673.484305]                     __handle_irq_event_percpu+0x41/0x2c0
<4>[  673.484305]                     handle_irq_event_percpu+0x2b/0x70
<4>[  673.484305]                     handle_irq_event+0x2f/0x50
<4>[  673.484305]                     handle_edge_irq+0x99/0x1b0
<4>[  673.484305]                     do_IRQ+0x7e/0x160
<4>[  673.484305]                     ret_from_intr+0x0/0x1d
<4>[  673.484305]                     cpuidle_enter_state+0xb2/0x450
<4>[  673.484305]                     cpuidle_enter+0x24/0x40
<4>[  673.484305]                     do_idle+0x1e7/0x250
<4>[  673.484305]                     cpu_startup_entry+0x14/0x20
<4>[  673.484305]                     start_secondary+0x15f/0x1b0
<4>[  673.484305]                     secondary_startup_64+0xa4/0xb0
<4>[  673.484305]    INITIAL USE at:
<4>[  673.484305]                    lock_acquire+0xa7/0x1c0
<4>[  673.484305]                    _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]                    __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]                    __engine_park+0x233/0x420 [i915]
<4>[  673.484305]                    ____intel_wakeref_put_last+0x1c/0x70 [i915]
<4>[  673.484305]                    intel_gt_resume+0x202/0x2c0 [i915]
<4>[  673.484305]                    i915_gem_init+0x36e/0x900 [i915]
<4>[  673.484305]                    i915_driver_probe+0xb70/0x15d0 [i915]
<4>[  673.484305]                    i915_pci_probe+0x43/0x1b0 [i915]
<4>[  673.484305]                    pci_device_probe+0x9e/0x120
<4>[  673.484305]                    really_probe+0xea/0x420
<4>[  673.484305]                    driver_probe_device+0x10b/0x120
<4>[  673.484305]                    device_driver_attach+0x4a/0x50
<4>[  673.484305]                    __driver_attach+0x97/0x130
<4>[  673.484305]                    bus_for_each_dev+0x74/0xc0
<4>[  673.484305]                    bus_add_driver+0x142/0x220
<4>[  673.484305]                    driver_register+0x56/0xf0
<4>[  673.484305]                    do_one_initcall+0x58/0x2ff
<4>[  673.484305]                    do_init_module+0x56/0x1f8
<4>[  673.484305]                    load_module+0x243e/0x29f0
<4>[  673.484305]                    __do_sys_finit_module+0xe9/0x110
<4>[  673.484305]                    do_syscall_64+0x4f/0x210
<4>[  673.484305]                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.484305]  }
<4>[  673.484305]  ... key      at: [<ffffffffa0a8f6a1>] __key.80173+0x1/0xffffffffffef2960 [i915]
<4>[  673.484305]  ... acquired at:
<4>[  673.484305]    mark_lock+0x382/0x500
<4>[  673.484305]    __lock_acquire+0x7e1/0x15d0
<4>[  673.484305]    lock_acquire+0xa7/0x1c0
<4>[  673.484305]    _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]    __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]    intel_engine_breadcrumbs_irq+0x3d0/0x5a0 [i915]
<4>[  673.484305]    cs_irq_handler+0x39/0x50 [i915]
<4>[  673.484305]    gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  673.484305]    gen11_irq_handler+0x54/0xf0 [i915]
<4>[  673.484305]    __handle_irq_event_percpu+0x41/0x2c0
<4>[  673.484305]    handle_irq_event_percpu+0x2b/0x70
<4>[  673.484305]    handle_irq_event+0x2f/0x50
<4>[  673.484305]    handle_edge_irq+0x99/0x1b0
<4>[  673.484305]    do_IRQ+0x7e/0x160
<4>[  673.484305]    ret_from_intr+0x0/0x1d
<4>[  673.484305]    cpuidle_enter_state+0xb2/0x450
<4>[  673.484305]    cpuidle_enter+0x24/0x40
<4>[  673.484305]    do_idle+0x1e7/0x250
<4>[  673.484305]    cpu_startup_entry+0x14/0x20
<4>[  673.484305]    start_secondary+0x15f/0x1b0
<4>[  673.484305]    secondary_startup_64+0xa4/0xb0

<4>[  673.484305]
                  stack backtrace:
<4>[  673.484305] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G     U            5.4.0-rc1-CI-Trybot_5118+ #1
<4>[  673.484305] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3183.A00.1905020411 05/02/2019
<4>[  673.484305] Call Trace:
<4>[  673.484305]  <IRQ>
<4>[  673.484305]  dump_stack+0x67/0x9b
<4>[  673.484305]  check_usage_forwards+0x13c/0x150
<4>[  673.484305]  ? mark_lock+0x382/0x500
<4>[  673.484305]  mark_lock+0x382/0x500
<4>[  673.484305]  ? check_usage_backwards+0x140/0x140
<4>[  673.484305]  __lock_acquire+0x7e1/0x15d0
<4>[  673.484305]  ? debug_object_deactivate+0x17e/0x190
<4>[  673.484305]  lock_acquire+0xa7/0x1c0
<4>[  673.484305]  ? __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]  _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]  ? __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]  __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]  intel_engine_breadcrumbs_irq+0x3d0/0x5a0 [i915]
<4>[  673.484305]  cs_irq_handler+0x39/0x50 [i915]
<4>[  673.484305]  gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  673.484305]  gen11_irq_handler+0x54/0xf0 [i915]
<4>[  673.484305]  __handle_irq_event_percpu+0x41/0x2c0
<4>[  673.484305]  handle_irq_event_percpu+0x2b/0x70
<4>[  673.484305]  handle_irq_event+0x2f/0x50
<4>[  673.484305]  handle_edge_irq+0x99/0x1b0
<4>[  673.484305]  do_IRQ+0x7e/0x160
<4>[  673.484305]  common_interrupt+0xf/0xf
<4>[  673.484305]  </IRQ>
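
For illustration, the shape of the fix in generic terms (names invented for
the example): perform the context image update with local interrupts disabled
so it cannot race with the hard-irq path shown in the trace above.

#include <linux/irqflags.h>

static void example_update_context_image(void)
{
	unsigned long flags;

	local_irq_save(flags);
	/* ... rewrite the pinned context image here ... */
	local_irq_restore(flags);
}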

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004203121.31138-1-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Chris Wilson
08ad9a3846 drm/i915/execlists: Fix annotation for decoupling virtual request
As we may signal a request and take the engine->active.lock within the
signaler, the engine submission paths have to use a nested annotation on
their requests -- but we guarantee that we can never submit on the same
engine as the signaling fence.

<4>[  723.763281] WARNING: possible circular locking dependency detected
<4>[  723.763285] 5.3.0-g80fa0e042cdb-drmtip_379+ #1 Tainted: G     U
<4>[  723.763288] ------------------------------------------------------
<4>[  723.763291] gem_exec_await/1388 is trying to acquire lock:
<4>[  723.763294] ffff93a7b53221d8 (&engine->active.lock){..-.}, at: execlists_submit_request+0x2b/0x1e0 [i915]
<4>[  723.763378]
                  but task is already holding lock:
<4>[  723.763381] ffff93a7c25f6d20 (&i915_request_get(rq)->submit/1){-.-.}, at: __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  723.763420]
                  which lock already depends on the new lock.

<4>[  723.763423]
                  the existing dependency chain (in reverse order) is:
<4>[  723.763427]
                  -> #2 (&i915_request_get(rq)->submit/1){-.-.}:
<4>[  723.763434]        _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  723.763478]        __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  723.763513]        intel_engine_breadcrumbs_irq+0x3aa/0x5e0 [i915]
<4>[  723.763600]        cs_irq_handler+0x49/0x50 [i915]
<4>[  723.763659]        gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  723.763690]        gen11_irq_handler+0x54/0xf0 [i915]
<4>[  723.763695]        __handle_irq_event_percpu+0x41/0x2d0
<4>[  723.763699]        handle_irq_event_percpu+0x2b/0x70
<4>[  723.763702]        handle_irq_event+0x2f/0x50
<4>[  723.763706]        handle_edge_irq+0xee/0x1a0
<4>[  723.763709]        do_IRQ+0x7e/0x160
<4>[  723.763712]        ret_from_intr+0x0/0x1d
<4>[  723.763717]        __slab_alloc.isra.28.constprop.33+0x4f/0x70
<4>[  723.763720]        kmem_cache_alloc+0x28d/0x2f0
<4>[  723.763724]        vm_area_dup+0x15/0x40
<4>[  723.763727]        dup_mm+0x2dd/0x550
<4>[  723.763730]        copy_process+0xf21/0x1ef0
<4>[  723.763734]        _do_fork+0x71/0x670
<4>[  723.763737]        __se_sys_clone+0x6e/0xa0
<4>[  723.763741]        do_syscall_64+0x4f/0x210
<4>[  723.763744]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  723.763747]
                  -> #1 (&(&rq->lock)->rlock#2){-.-.}:
<4>[  723.763752]        _raw_spin_lock+0x2a/0x40
<4>[  723.763789]        __unwind_incomplete_requests+0x3eb/0x450 [i915]
<4>[  723.763825]        __execlists_submission_tasklet+0x9ec/0x1d60 [i915]
<4>[  723.763864]        execlists_submission_tasklet+0x34/0x50 [i915]
<4>[  723.763874]        tasklet_action_common.isra.5+0x47/0xb0
<4>[  723.763878]        __do_softirq+0xd8/0x4ae
<4>[  723.763881]        irq_exit+0xa9/0xc0
<4>[  723.763883]        smp_apic_timer_interrupt+0xb7/0x280
<4>[  723.763887]        apic_timer_interrupt+0xf/0x20
<4>[  723.763892]        cpuidle_enter_state+0xae/0x450
<4>[  723.763895]        cpuidle_enter+0x24/0x40
<4>[  723.763899]        do_idle+0x1e7/0x250
<4>[  723.763902]        cpu_startup_entry+0x14/0x20
<4>[  723.763905]        start_secondary+0x15f/0x1b0
<4>[  723.763908]        secondary_startup_64+0xa4/0xb0
<4>[  723.763911]
                  -> #0 (&engine->active.lock){..-.}:
<4>[  723.763916]        __lock_acquire+0x15d8/0x1ea0
<4>[  723.763919]        lock_acquire+0xa6/0x1c0
<4>[  723.763922]        _raw_spin_lock_irqsave+0x33/0x50
<4>[  723.763956]        execlists_submit_request+0x2b/0x1e0 [i915]
<4>[  723.764002]        submit_notify+0xa8/0x13c [i915]
<4>[  723.764035]        __i915_sw_fence_complete+0x81/0x250 [i915]
<4>[  723.764054]        i915_sw_fence_wake+0x51/0x64 [i915]
<4>[  723.764054]        __i915_sw_fence_complete+0x1ee/0x250 [i915]
<4>[  723.764054]        dma_i915_sw_fence_wake_timer+0x14/0x20 [i915]
<4>[  723.764054]        dma_fence_signal_locked+0x9e/0x1c0
<4>[  723.764054]        dma_fence_signal+0x1f/0x40
<4>[  723.764054]        vgem_fence_signal_ioctl+0x67/0xc0 [vgem]
<4>[  723.764054]        drm_ioctl_kernel+0x83/0xf0
<4>[  723.764054]        drm_ioctl+0x2f3/0x3b0
<4>[  723.764054]        do_vfs_ioctl+0xa0/0x6f0
<4>[  723.764054]        ksys_ioctl+0x35/0x60
<4>[  723.764054]        __x64_sys_ioctl+0x11/0x20
<4>[  723.764054]        do_syscall_64+0x4f/0x210
<4>[  723.764054]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  723.764054]
                  other info that might help us debug this:

<4>[  723.764054] Chain exists of:
                    &engine->active.lock --> &(&rq->lock)->rlock#2 --> &i915_request_get(rq)->submit/1

<4>[  723.764054]  Possible unsafe locking scenario:

<4>[  723.764054]        CPU0                    CPU1
<4>[  723.764054]        ----                    ----
<4>[  723.764054]   lock(&i915_request_get(rq)->submit/1);
<4>[  723.764054]                                lock(&(&rq->lock)->rlock#2);
<4>[  723.764054]                                lock(&i915_request_get(rq)->submit/1);
<4>[  723.764054]   lock(&engine->active.lock);
<4>[  723.764054]
                   *** DEADLOCK ***
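
For illustration, a generic sketch of the nested annotation idea (placeholder
names and subclass value, not the i915 code): when the inner lock belongs to
the same lockdep class as one already held, acquire it with a distinct
subclass so lockdep does not report false recursion.

#include <linux/spinlock.h>

#define EXAMPLE_FENCE_NESTED	1	/* illustrative subclass */

static void example_signal_then_submit(spinlock_t *outer, spinlock_t *inner)
{
	unsigned long flags;

	spin_lock_irqsave(outer, flags);
	/* same lock class as 'outer', so annotate the nesting */
	spin_lock_nested(inner, EXAMPLE_FENCE_NESTED);
	/* ... signal the fence and submit the dependent request ... */
	spin_unlock(inner);
	spin_unlock_irqrestore(outer, flags);
}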

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111862
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004194758.19679-1-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Chris Wilson
cd6a851385 drm/i915/gt: Prefer local path to runtime powermanagement
Avoid going to the base i915 device when we already have a path from gt
to the runtime power management interface. The benefit is that it looks a
bit more self-consistent to always be acquiring the gt->uncore->rpm for
use with the gt->uncore.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191007154531.1750-1-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Colin Ian King
b9dcb97b6c drm/i915: make array hw_engine_mask static, makes object smaller
Don't populate the array hw_engine_mask on the stack but instead make it
static. Makes the object code smaller by 316 bytes.

Before:
   text	   data	    bss	    dec	    hex	filename
  34004	   4388	    320	  38712	   9738	gpu/drm/i915/gt/intel_reset.o

After:
   text	   data	    bss	    dec	    hex	filename
  33528	   4548	    320	  38396	   95fc	gpu/drm/i915/gt/intel_reset.o

(gcc version 9.2.1, amd64)
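
A generic before/after sketch of the idea, with invented names and values:
the table is emitted once in the object file (here also marked const) instead
of being rebuilt on the stack at every call.

/* before: the array is materialised on the stack at each invocation */
static unsigned int example_lookup_stack(unsigned int id)
{
	unsigned int masks[] = { 0x1, 0x2, 0x4, 0x8 };

	return masks[id & 3];
}

/* after: a single copy shared by all callers, no per-call setup code */
static unsigned int example_lookup_static(unsigned int id)
{
	static const unsigned int masks[] = { 0x1, 0x2, 0x4, 0x8 };

	return masks[id & 3];
}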

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191007154151.23245-1-colin.king@canonical.com
2019-10-07 21:44:02 +01:00
Chris Wilson
abc47ff61d drm/i915/gt: Restore dropped 'interruptible' flag
Lost in the rebasing was Tvrtko's reminder that we need to keep an
uninterruptible wait around for the Ironlake VT-d w/a.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191006165002.30312-3-chris@chris-wilson.co.uk
2019-10-07 13:51:59 +01:00
CQ Tang
0e5493cab5 drm/i915/stolen: make the object creation interface consistent
Our other backends return an actual error value upon failure. Do the
same for stolen objects, which currently just return NULL on failure.
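
For illustration, a minimal sketch of the convention being adopted, with a
placeholder object type: encode the failure reason with ERR_PTR() so callers
can propagate it, rather than returning a bare NULL.

#include <linux/err.h>
#include <linux/slab.h>

struct example_obj {
	size_t size;
};

static struct example_obj *example_obj_create(size_t size)
{
	struct example_obj *obj;

	if (!size)
		return ERR_PTR(-EINVAL);

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return ERR_PTR(-ENOMEM);

	obj->size = size;
	return obj;	/* callers test with IS_ERR()/PTR_ERR() */
}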

Signed-off-by: CQ Tang <cq.tang@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004170452.15410-2-matthew.auld@intel.com
2019-10-04 19:27:41 +01:00
Chris Wilson
2af402982a drm/i915/selftests: Drop vestigal struct_mutex guards
We no longer need struct_mutex to serialise request emission, so remove
it from the gt selftests.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-20-chris@chris-wilson.co.uk
2019-10-04 15:39:41 +01:00
Chris Wilson
cb5eb07278 drm/i915/overlay: Drop struct_mutex guard
The overlay uses the modeset mutex to control itself and only required
the struct_mutex for requests, which is now obsolete.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-16-chris@chris-wilson.co.uk
2019-10-04 15:39:39 +01:00
Chris Wilson
a4e7ccdac3 drm/i915: Move context management under GEM
Keep track of the GEM contexts underneath i915->gem.contexts and assign
them their own lock for the purposes of list management.

v2: Focus on lock tracking; ctx->vm is protected by ctx->mutex
v3: Correct split with removal of logical HW ID

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-15-chris@chris-wilson.co.uk
2019-10-04 15:39:34 +01:00
Chris Wilson
2935ed5339 drm/i915: Remove logical HW ID
With the introduction of ctx->engines[] we allow multiple logical
contexts to be used on the same engine (e.g. with virtual engines).
According to bspec, each logical context requires a unique tag in order
for context-switching to occur correctly between them. [Simple
experiments show that it is not so easy to trick the HW into performing
a lite-restore with matching logical IDs, though my memory from early
Broadwell experiments do suggest that it should be generating
lite-restores.]

We only need to keep a unique tag for the active lifetime of the
context, and for as long as we need to identify that context. The HW
uses the tag to determine if it should use a lite-restore (why not the
LRCA?) and passes the tag back for various status identifiers. The only
status we need to track is for OA, so when using perf, we assign the
specific context a unique tag.

v2: Calculate required number of tags to fill ELSP.

Fixes: 976b55f0e1 ("drm/i915: Allow a context to define its set of engines")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111895
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-14-chris@chris-wilson.co.uk
2019-10-04 15:39:30 +01:00
Chris Wilson
a2b4dead98 drm/i915: Move global activity tracking from GEM to GT
As our global unpark/park keep track of the number of active users, we
can simply move the accounting from the GEM layer to the base GT layer.
It was placed originally inside GEM to benefit from the 100ms extra
delay on idleness, but that has been eliminated and now there is no
substantive difference between the layers. In moving it, we move another
piece of the puzzle out from underneath struct_mutex.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-13-chris@chris-wilson.co.uk
2019-10-04 15:39:30 +01:00
Chris Wilson
6610197542 drm/i915: Move request runtime management onto gt
Requests are run from the gt and are tied into the gt runtime power
management, so pull the runtime request management under gt/

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-12-chris@chris-wilson.co.uk
2019-10-04 15:39:26 +01:00
Chris Wilson
f33a8a5160 drm/i915: Merge wait_for_timelines with retire_request
wait_for_timelines is essentially the same loop as retiring requests
(with an extra timeout), so merge the two into one routine.

v2: i915_retire_requests_timeout and keep VT'd w/a as !interruptible

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-10-chris@chris-wilson.co.uk
2019-10-04 15:39:23 +01:00
Chris Wilson
7e80576266 drm/i915: Drop struct_mutex from around i915_retire_requests()
We don't need to hold struct_mutex now for retiring requests, so drop it
from i915_retire_requests() and i915_gem_wait_for_idle(), finally
removing I915_WAIT_LOCKED for good.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-8-chris@chris-wilson.co.uk
2019-10-04 15:39:17 +01:00
Chris Wilson
b723484069 drm/i915: Move idle barrier cleanup into engine-pm
Now that we no longer need to guarantee that the active callback is
under the struct_mutex, we can lift it out of the i915_gem_park() and
into the engine parking itself.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-7-chris@chris-wilson.co.uk
2019-10-04 15:39:16 +01:00
Chris Wilson
b1e3177bd1 drm/i915: Coordinate i915_active with its own mutex
Forgo the struct_mutex serialisation for i915_active, and interpose its
own mutex handling for active/retire.

This is a multi-layered sleight-of-hand. First, we had to ensure that no
active/retire callbacks accidentally inverted the mutex ordering rules,
nor assumed that they were themselves serialised by struct_mutex. More
challenging, though, is the rule over updating elements of the active
rbtree. Instead of the whole i915_active now being serialised by
struct_mutex, allocations/rotations of the tree are serialised by the
i915_active.mutex and individual nodes are serialised by the caller
using the i915_timeline.mutex (we need to use nested spinlocks to
interact with the dma_fence callback lists).

The pain point here is that instead of a single mutex around execbuf, we
now have to take a mutex for active tracker (one for each vma, context,
etc) and a couple of spinlocks for each fence update. The improvement in
fine grained locking allowing for multiple concurrent clients
(eventually!) should be worth it in typical loads.

v2: Add some comments that barely elucidate anything :(

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
2019-10-04 15:39:12 +01:00
Chris Wilson
274cbf20fd drm/i915: Push the i915_active.retire into a worker
As we need to use a mutex to serialise i915_active activation
(because we want to allow the callback to sleep), we need to push the
i915_active.retire into a worker callback in case we need to retire
from an atomic context.
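
For illustration, a minimal sketch of the deferral pattern with placeholder
names: the atomic-context release only queues a work item, and the retirement
that needs to sleep runs later from the worker.

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct example_active {
	struct work_struct retire_work;
};

static void example_retire_worker(struct work_struct *wrk)
{
	struct example_active *ref =
		container_of(wrk, struct example_active, retire_work);

	/* process context: safe to take ref's mutex and retire here */
	(void)ref;
}

static void example_active_init(struct example_active *ref)
{
	INIT_WORK(&ref->retire_work, example_retire_worker);
}

static void example_active_release(struct example_active *ref)
{
	/* may be called from irq/atomic context; defer the real work */
	queue_work(system_wq, &ref->retire_work);
}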

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-5-chris@chris-wilson.co.uk
2019-10-04 15:39:10 +01:00
Chris Wilson
2850748ef8 drm/i915: Pull i915_vma_pin under the vm->mutex
Replace the struct_mutex requirement for pinning the i915_vma with the
local vm->mutex instead. Note that the vm->mutex is tainted by the
shrinker (we require unbinding from inside fs-reclaim) and so we cannot
allocate while holding that mutex. Instead we have to preallocate
workers to do the allocations and apply the PTE updates after we have
reserved their slot in the drm_mm (using fences to order the PTE writes
with the GPU work and with later unbind).

In adding the asynchronous vma binding, one subtle requirement is to
avoid coupling the binding fence into the backing object->resv. That is
the asynchronous binding only applies to the vma timeline itself and not
to the pages as that is a more global timeline (the binding of one vma
does not need to be ordered with another vma, nor does the implicit GEM
fencing depend on a vma, only on writes to the backing store). Keeping
the vma binding distinct from the backing store timelines is verified by
a number of async gem_exec_fence and gem_exec_schedule tests. The way we
do this is quite simple, we keep the fence for the vma binding separate
and only wait on it as required, and never add it to the obj->resv
itself.

Another consequence in reducing the locking around the vma is the
destruction of the vma is no longer globally serialised by struct_mutex.
A natural solution would be to add a kref to i915_vma, but that requires
decoupling the reference cycles, possibly by introducing a new
i915_mm_pages object that is owned by both obj->mm and vma->pages.
However, we have not taken that route due to the overshadowing lmem/ttm
discussions, and instead play a series of complicated games with
trylocks to (hopefully) ensure that only one destruction path is called!

v2: Add some commentary, and some helpers to reduce patch churn.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
2019-10-04 15:39:02 +01:00
Chris Wilson
b290a78b5c drm/i915: Use helpers for drm_mm_node booleans
A subset of 71724f7089 ("drm/mm: Use helpers for drm_mm_node booleans")
in order to prepare drm-intel-next-queued for subsequent patches before
we can backmerge 71724f7089 itself.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004142226.13711-1-chris@chris-wilson.co.uk
2019-10-04 15:34:27 +01:00
Chris Wilson
44d0a9c05b drm/i915/execlists: Skip redundant resubmission
If we unwind the active requests, and on resubmission discover that we
intend to preempt the active contexts with themselves, simply skip the
ELSP submission.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003210100.22250-1-chris@chris-wilson.co.uk
2019-10-04 12:52:24 +01:00
Chris Wilson
fcde8c7eea drm/i915/selftests: Exercise potential false lite-restore
If execlists's lite-restore is based on the common GEM context tag
rather than the per-intel_context LRCA, then a context switch between
two intel_contexts on the same engine derived from the same GEM context
will perform a lite-restore instead of a full context switch. We can
exploit this by poisoning the ringbuffer of the first context and trying
to trick a simple RING_TAIL update (i.e. lite-restore)

v2: Also check what happens if preempt ce[0] with ce[1] (both instances
on the same engine from the same parent context) [Tvrtko]

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002183459.26614-1-chris@chris-wilson.co.uk
2019-10-03 00:26:02 +01:00
Chris Wilson
f8db4d051b drm/i915: Initialise breadcrumb lists on the virtual engine
With deferring the breadcrumb signalling to the virtual engine (thanks
preempt-to-busy) we need to make sure the lists and irq-worker are ready
to send a signal.

[41958.710544] BUG: kernel NULL pointer dereference, address: 0000000000000000
[41958.710553] #PF: supervisor write access in kernel mode
[41958.710556] #PF: error_code(0x0002) - not-present page
[41958.710558] PGD 0 P4D 0
[41958.710562] Oops: 0002 [#1] SMP
[41958.710565] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G     U            5.3.0+ #207
[41958.710568] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
[41958.710602] RIP: 0010:i915_request_enable_breadcrumb+0xe1/0x130 [i915]
[41958.710605] Code: 8b 44 24 30 48 89 41 08 48 89 08 48 8b 85 98 01 00 00 48 8d 8d 90 01 00 00 48 89 95 98 01 00 00 49 89 4c 24 28 49 89 44 24 30 <48> 89 10 f0 80 4b 30 10 c6 85 88 01 00 00 00 e9 1a ff ff ff 48 83
[41958.710609] RSP: 0018:ffffc90000003de0 EFLAGS: 00010046
[41958.710612] RAX: 0000000000000000 RBX: ffff888735424480 RCX: ffff8887cddb2190
[41958.710614] RDX: ffff8887cddb3570 RSI: ffff888850362190 RDI: ffff8887cddb2188
[41958.710617] RBP: ffff8887cddb2000 R08: ffff8888503624a8 R09: 0000000000000100
[41958.710619] R10: 0000000000000001 R11: 0000000000000000 R12: ffff8887cddb3548
[41958.710622] R13: 0000000000000000 R14: 0000000000000046 R15: ffff888850362070
[41958.710625] FS:  0000000000000000(0000) GS:ffff88885ea00000(0000) knlGS:0000000000000000
[41958.710628] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[41958.710630] CR2: 0000000000000000 CR3: 0000000002c09002 CR4: 00000000001606f0
[41958.710633] Call Trace:
[41958.710636]  <IRQ>
[41958.710668]  __i915_request_submit+0x12b/0x160 [i915]
[41958.710693]  virtual_submit_request+0x67/0x120 [i915]
[41958.710720]  __unwind_incomplete_requests+0x131/0x170 [i915]
[41958.710744]  execlists_dequeue+0xb40/0xe00 [i915]
[41958.710771]  execlists_submission_tasklet+0x10f/0x150 [i915]
[41958.710776]  tasklet_action_common.isra.17+0x41/0xa0
[41958.710781]  __do_softirq+0xc8/0x221
[41958.710785]  irq_exit+0xa6/0xb0
[41958.710788]  smp_apic_timer_interrupt+0x4d/0x80
[41958.710791]  apic_timer_interrupt+0xf/0x20
[41958.710794]  </IRQ>

Fixes: cb2377a919 ("drm/i915: Fixup preempt-to-busy vs reset of a virtual request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001103518.9113-1-chris@chris-wilson.co.uk
2019-10-01 11:46:52 +01:00
Chris Wilson
1d6f1d16d3 drm/i915/gt: Only unwedge if we can reset first
Unwedging the GPU requires a successful GPU reset before we restore the
default submission, or else we may see residual context switch events
that we were not expecting.

v2: Pull in the special-case reset_clobbers_display, and explain why it
should be safe in the context of unwedging.

v3: Just forget all about resets before unwedging if it will clobber the
display; risk it all.

Reported-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190927160335.10622-1-chris@chris-wilson.co.uk
2019-09-30 17:48:14 +01:00
Chris Wilson
4abc6e7c91 drm/i915/selftests: Provide a mock GPU reset routine
For those mock tests that may wish to pretend to trigger a GPU reset and
process the cleanup.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-3-chris@chris-wilson.co.uk
2019-09-27 23:25:46 +01:00
Chris Wilson
4e18ca703f drm/i915/selftests: Distinguish mock device from no wakeref
On systems that have no runtime-pm, we mark the wakeref as being -1. We
therefore cannot use that value for the mock-gt indicator, so opt for
-ENODEV instead. The wakeref should never be an error value -- one
hopes!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-2-chris@chris-wilson.co.uk
2019-09-27 23:25:30 +01:00
Chris Wilson
260e6b7127 drm/i915: Pass intel_gt to has-reset?
As we execute GPU resets on a gt/ basis, and use the intel_gt as the
primary for all other reset functions, also use it for the has-reset?
predicates. Gradually simplifying the churn of pointers.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-1-chris@chris-wilson.co.uk
2019-09-27 23:25:14 +01:00
Chris Wilson
42b899fb9a drm/i915/selftests: Do not try to sanitize mock HW
If we are mocking the device, skip trying to sanitize the pm HW state.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927210646.29664-1-chris@chris-wilson.co.uk
2019-09-27 22:17:30 +01:00
Matthew Auld
b178a3f681 drm/i915: check for kernel_context
Explosions during early driver init on the error path. Make sure we fail
gracefully.

[ 9547.672258] BUG: kernel NULL pointer dereference, address: 000000000000007c
[ 9547.672288] #PF: supervisor read access in kernel mode
[ 9547.672292] #PF: error_code(0x0000) - not-present page
[ 9547.672296] PGD 8000000846b41067 P4D 8000000846b41067 PUD 797034067 PMD 0
[ 9547.672303] Oops: 0000 [#1] SMP PTI
[ 9547.672307] CPU: 1 PID: 25634 Comm: i915_selftest Tainted: G     U            5.3.0-rc8+ #73
[ 9547.672313] Hardware name:  /NUC6i7KYB, BIOS KYSKLi70.86A.0050.2017.0831.1924 08/31/2017
[ 9547.672395] RIP: 0010:intel_context_unpin+0x9/0x100 [i915]
[ 9547.672400] Code: 6b 60 00 e9 17 ff ff ff bd fc ff ff ff e9 7c ff ff ff 66 66 2e 0f 1f 84 00 00 00 00
 00 0f 1f 40 00 0f 1f 44 00 00 41 54 55 53 <8b> 47 7c 83 f8 01 74 26 8d 48 ff f0 0f b1 4f 7c 48 8d 57 7c
 75 05
[ 9547.672413] RSP: 0018:ffffae8ac24ff878 EFLAGS: 00010246
[ 9547.672417] RAX: ffff944a1b7842d0 RBX: ffff944a1b784000 RCX: ffff944a12dd6fa8
[ 9547.672422] RDX: ffff944a1b7842c0 RSI: ffff944a12dd5328 RDI: 0000000000000000
[ 9547.672428] RBP: 0000000000000000 R08: ffff944a11e5d840 R09: 0000000000000000
[ 9547.672433] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 9547.672438] R13: ffffffffc11aaf00 R14: 00000000ffffffe4 R15: ffff944a0e29bf38
[ 9547.672443] FS:  00007fc259b88ac0(0000) GS:ffff944a1f880000(0000) knlGS:0000000000000000
[ 9547.672449] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9547.672454] CR2: 000000000000007c CR3: 0000000853346003 CR4: 00000000003606e0
[ 9547.672459] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 9547.672464] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 9547.672469] Call Trace:
[ 9547.672518]  intel_engine_cleanup_common+0xe3/0x270 [i915]
[ 9547.672567]  execlists_destroy+0xe/0x30 [i915]
[ 9547.672669]  intel_engines_init+0x94/0xf0 [i915]
[ 9547.672749]  i915_gem_init+0x191/0x950 [i915]

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927173409.31175-2-matthew.auld@intel.com
2019-09-27 20:08:57 +01:00
Daniele Ceraolo Spurio
901045c3f0 drm/i915/huc: fix version parsing from CSS header
The HuC FW has silently switched to encoding the version the same way as
the GuC FW does, i.e. major.minor.patch instead of just major.minor. All
the current blobs follow the new scheme, but since minor and patch are
both zero there is no difference in the end results and we happily load
them. New binaries, however, will have non-zero values in there, so we
need to make sure to parse them correctly.
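
For illustration only, a hypothetical sketch of unpacking a major.minor.patch
triplet from a packed 32-bit version dword; the field widths and mask names
below are invented for the example and are not the real CSS header layout.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_VER_MAJOR	GENMASK(31, 24)	/* hypothetical layout */
#define EXAMPLE_VER_MINOR	GENMASK(23, 16)
#define EXAMPLE_VER_PATCH	GENMASK(15, 0)

static void example_parse_fw_version(u32 sw_version,
				     u16 *major, u16 *minor, u16 *patch)
{
	*major = FIELD_GET(EXAMPLE_VER_MAJOR, sw_version);
	*minor = FIELD_GET(EXAMPLE_VER_MINOR, sw_version);
	*patch = FIELD_GET(EXAMPLE_VER_PATCH, sw_version);
}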

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Acked-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925222121.4000-1-daniele.ceraolospurio@intel.com
2019-09-27 10:20:20 -07:00
Andi Shyti
c113236718 drm/i915: Extract GT render sleep (rc6) management
Continuing the theme of breaking intel_pm.c up into reasonable chunks of
power management utilities, pull the rc6 setup out into its GT handler.

Based on a patch by Chris Wilson.

Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190919143840.20384-1-andi.shyti@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20190927110849.28734-1-chris@chris-wilson.co.uk
2019-09-27 13:01:57 +01:00
Michał Winiarski
74b2089a10 drm/i915: Add definitions for MI_MATH command
We can use it in i915 for updating parts of unmasked registers from
within a batch. We're also adding Gen8+ versions of CS_GPR registers
(aka MI_MATH_REG in the coprocessor).

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926100635.9416-4-michal.winiarski@intel.com
2019-09-26 22:11:04 +01:00
Sebastian Andrzej Siewior
e3792238c1 drm/i915: Don't disable interrupts for intel_engine_breadcrumbs_irq()
The function intel_engine_breadcrumbs_irq() is always invoked from an interrupt
handler and for that reason it invokes (as an optimisation) only spin_lock()
for locking assuming that the interrupts are already disabled. The
function intel_engine_signal_breadcrumbs() is provided to disable
interrupts while the former function is invoked so that assumption is
also true for callers from preemptible context.

On PREEMPT_RT, local_irq_disable() really disables interrupts, and this
forbids invoking spin_lock(), which becomes a sleeping spinlock.

This is also problematic with `threadirqs' in conjunction with
irq_work. With forced threading of the interrupt handler, the handler is
invoked with BH disabled but with interrupts enabled. This is okay and
the lock itself is never acquired in IRQ context. This changes with
irq_work (signal_irq_work()) which _still_ invokes
intel_engine_breadcrumbs_irq() from IRQ context. Lockdep should see this
and complain.

Acquire the locks in intel_engine_breadcrumbs_irq() with the _irqsave()
suffix and let all callers invoke intel_engine_breadcrumbs_irq()
directly instead of using intel_engine_signal_breadcrumbs().
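
For illustration, the general shape of the change with placeholder names:
acquire the lock with the _irqsave() variant so the same function is correct
from hard-irq context and from preemptible callers alike, without a wrapper
that disables interrupts around a plain spin_lock().

#include <linux/spinlock.h>

static void example_signal_breadcrumbs(spinlock_t *irq_lock)
{
	unsigned long flags;

	spin_lock_irqsave(irq_lock, flags);
	/* ... walk the signalers and signal completed fences ... */
	spin_unlock_irqrestore(irq_lock, flags);
}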

Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926105644.16703-2-bigeasy@linutronix.de
2019-09-26 18:44:35 +01:00
Sebastian Andrzej Siewior
132dfc78d3 drm/i915: Drop the IRQ-off asserts
The lockdep_assert_irqs_disabled() check is needless. The previous
lockdep_assert_held() check ensures that the lock is acquired, and while
the lock is held lockdep already prints a warning if interrupts are not
disabled when they have to be.
These IRQ-off asserts trigger on PREEMPT_RT because the locks become
sleeping locks and do not really disable interrupts.

Remove lockdep_assert_irqs_disabled().
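
For illustration, a minimal sketch of the resulting assertion style with
placeholder names: assert only that the lock is held, and rely on lockdep's
own irq-safety tracking to flag any missing interrupt disabling.

#include <linux/lockdep.h>
#include <linux/spinlock.h>

static void example_requires_lock(spinlock_t *lock)
{
	lockdep_assert_held(lock);
	/* no separate lockdep_assert_irqs_disabled() needed here */

	/* ... manipulate the state protected by 'lock' ... */
}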

Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926105644.16703-3-bigeasy@linutronix.de
2019-09-26 18:44:35 +01:00
Michał Winiarski
7d5255e0ce drm/i915: Adjust length of MI_LOAD_REGISTER_REG
The default length value of MI_LOAD_REGISTER_REG is 1.
Also move it out of cmd-parser-only registers since we're going to use
it in i915.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-3-chris@chris-wilson.co.uk
2019-09-26 18:44:35 +01:00
Michał Winiarski
e123752374 drm/i915/execlists: Use per-process HWSP as scratch
Some of our commands (MI_FLUSH_DW / PIPE_CONTROL) require a post-sync write
operation to be performed. Currently we're using dedicated VMA for
PIPE_CONTROL and global HWSP for MI_FLUSH_DW.
On execlists platforms, each of our contexts has an area that can be
used as scratch space. Let's use that instead.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-2-chris@chris-wilson.co.uk
2019-09-26 18:44:35 +01:00
Michał Winiarski
5311f5171e drm/i915: Define explicit wedged on init reset state
We're currently using scratch presence as a way of identifying that we
entered wedged state at driver initialization time.
Let's use a separate flag rather than rely on scratch.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-1-chris@chris-wilson.co.uk
2019-09-26 18:44:35 +01:00
Chris Wilson
f9d4eae25d drm/i915/execlists: Simplify gen12_csb_parse
Having decided that we only care about the promotion predicate, we can
reduce gen12_csb_parse to a simple check of whether we need to jump to a
new queue.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925130845.17952-1-chris@chris-wilson.co.uk
2019-09-25 19:26:47 +01:00
Chris Wilson
7dc56af526 drm/i915/selftests: Verify the LRC register layout between init and HW
Before we submit the first context to HW, we need to construct a valid
image of the register state. This layout is defined by the HW and should
match the layout generated by HW when it saves the context image.
Asserting that the two are equivalent should help avoid any undefined
behaviour and verify that we haven't missed anything important!

Of course, having insisted that the initial register state within the
LRC should match that returned by HW, we need to ensure that it does.

v2: Drop the RELATIVE_MMIO flag from gen11, we ignore it for
constructing the lrc image.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190924145950.3011-1-chris@chris-wilson.co.uk
2019-09-24 17:27:19 +01:00
Chris Wilson
e2144503bf drm/i915: Prevent bonded requests from overtaking each other on preemption
Force bonded requests to run on distinct engines so that they cannot be
shuffled onto the same engine where timeslicing will reverse the order.
A bonded request will often wait on a semaphore signaled by its master,
creating an implicit dependency -- if we ignore that implicit dependency
and allow the bonded request to run on the same engine and before its
master, we will cause a GPU hang. [Whether it will hang the GPU is
debatable, we should keep on timeslicing and each timeslice should be
"accidentally" counted as forward progress, in which case it should run
but at one-half to one-third speed.]

We can prevent this inversion by restricting which engines we allow
ourselves to jump to upon preemption, i.e. baking in the arrangement
established at first execution. (We should also consider capturing the
implicit dependency using i915_sched_add_dependency(), but first we need
to think about the constraints that requires on the execution/retirement
ordering.)

Fixes: 8ee36e048c ("drm/i915/execlists: Minimalistic timeslicing")
References: ee1136908e ("drm/i915/execlists: Virtual engine bonding")
Testcase: igt/gem_exec_balancer/bonded-slice
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-3-chris@chris-wilson.co.uk
2019-09-23 20:44:14 +01:00
Chris Wilson
cb2377a919 drm/i915: Fixup preempt-to-busy vs reset of a virtual request
Due to the nature of preempt-to-busy the execlists active tracking and
the schedule queue may become temporarily desync'ed (between resubmission
to HW and its ack from HW). This means that we may have unwound a
request and passed it back to the virtual engine, but it is still
inflight on the HW and may even result in a GPU hang. If we detect that
GPU hang and try to reset, the hanging request->engine will no longer
match the current engine, which means that the request is not on the
execlists active list and we should not try to find an older incomplete
request. Given that we have deduced this must be a request on a virtual
engine, it is the single active request in the context and so must be
guilty (as the context is still inflight, it is prevented from being
executed on another engine as we process the reset).

Fixes: 22b7a426bb ("drm/i915/execlists: Preempt-to-busy")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-2-chris@chris-wilson.co.uk
2019-09-23 20:44:14 +01:00
Chris Wilson
b647c7df01 drm/i915: Fixup preempt-to-busy vs resubmission of a virtual request
As preempt-to-busy leaves the request on the HW as the resubmission is
processed, that request may complete in the background and even cause a
second virtual request to enter queue. This second virtual request
breaks our "single request in the virtual pipeline" assumptions.
Furthermore, as the virtual request may be completed and retired, we
lose the reference the virtual engine assumes is held. Normally, just
removing the request from the scheduler queue removes it from the
engine, but the virtual engine keeps track of its singleton request via
its ve->request. This pointer needs protecting with a reference.

v2: Drop unnecessary motion of rq->engine = owner

Fixes: 22b7a426bb ("drm/i915/execlists: Preempt-to-busy")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-1-chris@chris-wilson.co.uk
2019-09-23 20:43:59 +01:00
Chris Wilson
0d7cf7bc15 drm/i915/execlists: Refactor -EIO markup of hung requests
Pull setting -EIO on the hung requests into its own utility function.
Having allowed ourselves to short-circuit submission of completed
requests, we can now do the mark_eio() prior to submission and avoid
some redundant operations.
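
For illustration, a hedged sketch of the kind of helper being factored out,
using a placeholder completion check and the generic dma_fence error API
rather than the i915 internals.

#include <linux/dma-fence.h>
#include <linux/errno.h>

static void example_mark_eio(struct dma_fence *fence, bool completed)
{
	/* already-completed work keeps its result; hung work reports -EIO */
	if (completed)
		return;

	dma_fence_set_error(fence, -EIO);
}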

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-4-chris@chris-wilson.co.uk
2019-09-23 16:21:38 +01:00
Chris Wilson
c0bb487dc1 drm/i915: Only enqueue already completed requests
If we are asked to submit a completed request, just move it onto the
active-list without modifying its payload. If we try to emit the
modified payload of a completed request, we risk racing with the
ring->head update during retirement which may advance the head past our
breadcrumb and so we generate a warning for the emission being behind
the RING_HEAD.

v2: Commentary for the sneaky, shared responsibility between functions.
v3: Spelling mistakes and bonus assertion

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-3-chris@chris-wilson.co.uk
2019-09-23 16:21:37 +01:00
Chris Wilson
3231f8c011 drm/i915/execlists: Drop redundant list_del_init(&rq->sched.link)
Since amalgamating the queued and active lists in commit 422d7df4f0
("drm/i915: Replace engine->timeline with a plain list"), performing a
i915_request_submit() will remove the request from the execlists
priority queue.

References: 422d7df4f0 ("drm/i915: Replace engine->timeline with a plain list")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-2-chris@chris-wilson.co.uk
2019-09-23 16:21:36 +01:00