Inside the execlists submission tasklet, we often make the mistake of
assuming that everything beneath the request is available for use.
However, the submission and the request live on two separate timelines,
and the request contents may be freed by an early retirement before we
have had a chance to run the submission tasklet (think ksoftirqd). To
safeguard ourselves against any mistakes, flush the tasklet before we
unpin the context if execlists still has a reference to this context.
v2: Pull hw_context->active tracking into schedule_in and schedule_out.
References: 60367132a2 ("drm/i915: Avoid use-after-free of ctx in request tracepoints")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181003110941.27886-1-chris@chris-wilson.co.uk
Currently, the backend scheduling code abuses struct_mutex in order to
have a global lock to manipulate a temporary list (without widespread
allocation) and to protect against list modifications. This is an
extraneous coupling to struct_mutex and furthermore cannot extend beyond
the local device.
Pull all the code that needs to be under the one true lock into
i915_scheduler.c, and make it so.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181001144755.7978-2-chris@chris-wilson.co.uk
Taken from an idea used for FQ_CODEL, we give the first request of each
new request flow a small priority boost. These flows are likely to
correspond with short, interactive tasks and so be more latency sensitive
than the longer free running queues. As soon as the client has more than
one request in the queue, further requests are not boosted and it settles
down into ordinary steady state behaviour. Such small kicks dramatically
help combat the starvation issue, by allowing each client the opportunity
to run even when the system is under heavy throughput load (within the
constraints of the user selected priority).
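In miniature, the mechanism looks like this (a standalone sketch; the
boost constant and the per-client bookkeeping are illustrative, not the
driver's structures):

#include <stdio.h>

#define PRIORITY_NEWCLIENT 1    /* one internal sub-level, below any user step */

struct client {
        int queued;             /* requests currently outstanding */
};

/* Only the request that starts a new flow gets the boost; once the
 * client has work in flight it competes at its normal priority. */
static int effective_priority(const struct client *c, int user_prio)
{
        return c->queued ? user_prio : user_prio + PRIORITY_NEWCLIENT;
}

int main(void)
{
        struct client c = { .queued = 0 };

        printf("first request: %d\n", effective_priority(&c, 0));  /* 1 */
        c.queued++;
        printf("second request: %d\n", effective_priority(&c, 0)); /* 0 */
        return 0;
}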
v2: Mark the preempted request as the start of a new flow, to prevent a
single client being continually gazumped by its peers.
Testcase: igt/benchmarks/rrul
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181001144755.7978-1-chris@chris-wilson.co.uk
As we are about to allow ourselves to slightly bump the user priority
into a few different sublevels, pack those internal priority lists
into the same i915_priolist to keep the rbtree compact and avoid having
to allocate the default user priority even after the internal bumping.
The downside to having a requests[] array rather than a node per active list,
is that we then have to walk over the empty higher priority lists. To
compensate, we track the active buckets and use a small bitmap to skip
over any inactive ones.
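The bucket trick in miniature (a standalone sketch; the counts stand in
for the per-bucket request lists and the sizes are illustrative):

#include <stdio.h>

#define PRIORITY_SHIFT  2
#define PRIORITY_COUNT  (1 << PRIORITY_SHIFT)  /* internal sub-levels */

struct priolist {
        int nrequests[PRIORITY_COUNT];  /* stand-in for per-bucket lists */
        unsigned int used;              /* bitmap of non-empty buckets */
};

static void priolist_add(struct priolist *pl, unsigned int bucket)
{
        pl->nrequests[bucket]++;
        pl->used |= 1u << bucket;       /* mark the bucket active */
}

/* Find the highest non-empty bucket without walking the empty ones. */
static int priolist_first(const struct priolist *pl)
{
        if (!pl->used)
                return -1;
        return (int)(8 * sizeof(pl->used)) - 1 - __builtin_clz(pl->used);
}

int main(void)
{
        struct priolist pl = { 0 };

        priolist_add(&pl, 0);
        priolist_add(&pl, 3);
        printf("first bucket to drain: %d\n", priolist_first(&pl)); /* 3 */
        return 0;
}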
v2: Use MASK of internal levels to simplify our usage.
v3: Prevent overflow when SHIFT is zero.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181001123204.23982-4-chris@chris-wilson.co.uk
If the request is currently on the HW (in port 0), then we do not need
to kick the submission tasklet to evaluate whether we should preempt
it in order to execute it again.
In the case that was annoying me:
execlists_schedule: rq(18:211173).prio=0 -> 2
need_preempt: last(18:211174).prio=0, queue.prio=2
We are bumping the priority of the first of a pair of requests running
in the current context. Then when evaluating preempt, we would see that
the priority of our request is higher than that of the last executing request in
ELSP0 and so trigger preemption, not realising that our intended request
was already executing.
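The check itself is tiny; a standalone sketch of its shape (elsp0 stands
in for execlists->port[0], and the locking is elided):

#include <stdbool.h>
#include <stdio.h>

struct request { int prio; };
struct port { const struct request *rq; };

/* Kick submission only if the re-prioritised request is not already the
 * request executing in port 0 - preempting ourselves gains nothing. */
static bool needs_kick(const struct request *rq, const struct port *elsp0,
                       int queue_prio)
{
        if (rq == elsp0->rq)
                return false;           /* already on the HW */
        return rq->prio > queue_prio;
}

int main(void)
{
        struct request rq = { .prio = 2 };
        struct port elsp0 = { .rq = &rq };

        printf("kick? %d\n", needs_kick(&rq, &elsp0, 0));       /* 0 */
        return 0;
}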
v2: As we assume state of the execlists->port[] that is only valid while
we hold the timeline lock, we have to repeat some earlier tests on
the validity of the node.
v3: Wrap guc submission under the timeline.lock as is now the way of all
things.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180925083205.2229-2-chris@chris-wilson.co.uk
Now that we are confident in providing full-ppgtt where supported,
remove the ability to override the context isolation.
v2: Remove faked aliasing-ppgtt for testing as it no longer is accepted.
v3: s/USES/HAS/ to match usage and reject attempts to load the module on
old GVT-g setups that do not provide support for full-ppgtt.
v4: Insulate ABI ppGTT values from our internal enum (later plans
involve moving ppGTT depth out of the enum, thus potentially breaking
ABI unless we document the current values).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180926201222.5643-1-chris@chris-wilson.co.uk
We mix hexadecimal and decimal, which is confusing when reading the logs,
so make the single odd-one-out instance decimal for consistency.
v2:
* Do the intel_ringbuffer.c as well. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180926145033.16318-1-tvrtko.ursulin@linux.intel.com
Our execlist dispatch code requires a ppGTT so make sure we enforce that
option in intel_sanitize_enable_ppgtt(). The comment already tries to
explain that execlists requires ppgtt, but was written when gen8 may
have also taken the legacy path; so rewrite the code to match the
comment by using the HAS_EXECLISTS() feature check instead of the gen.
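Roughly, and with the surrounding fallbacks elided, the check becomes
something like the sketch below (the numeric values follow the
enable_ppgtt modparam convention and the local flag name is assumed):

        /*
         * Execlists submission always requires a full ppGTT, so key the
         * decision off the feature flag rather than the generation.
         */
        if (HAS_EXECLISTS(dev_priv))
                return has_full_48bit_ppgtt ? 3 : 2;    /* never aliasing */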
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180922141804.21183-1-chris@chris-wilson.co.uk
In the sequence
<0>[ 531.960431] drv_self-4806 7.... 527402570us : intel_gpu_reset: engine_mask=1, ret=0, retry=0
<0>[ 531.960431] drv_self-4806 7.... 527402571us : execlists_reset: rcs0 request global=115de, current=71133
<0>[ 531.960431] drv_self-4806 7d..1 527402571us : execlists_cancel_port_requests: rcs0:port0 global=71134 (fence 826b:198), (current 71133)
<0>[ 531.960431] drv_self-4806 7d..1 527402572us : execlists_cancel_port_requests: rcs0:port1 global=71135 (fence 826c:53), (current 71133)
<0>[ 531.960431] drv_self-4806 7d..1 527402572us : __i915_request_unsubmit: rcs0 fence 826c:53 <- global=71135, current 71133
<0>[ 531.960431] drv_self-4806 7d..1 527402579us : __i915_request_unsubmit: rcs0 fence 826b:198 <- global=71134, current 71133
<0>[ 531.960431] drv_self-4806 7.... 527402613us : intel_engine_cancel_stop_cs: rcs0
<0>[ 531.960431] drv_self-4806 7.... 527402624us : execlists_reset_finish: rcs0
we are missing the execlists_submission_tasklet() invocation before
execlists_reset_finish(), implying that either the queue is empty, or we
failed to schedule and run the tasklet on finish. Add an assert so we
are sure that on unsubmitting the incomplete request after reset, the
queue is indeed populated.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180919195544.1511-12-chris@chris-wilson.co.uk
Fix up the error unwind for logical_ring_init() failing by moving the
cleanup into the callers who own the various bits of state during
initialisation, so we don't forget to free the state allocated by the
caller.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920195948.16448-1-chris@chris-wilson.co.uk
That we use a WB mapping for updating the RING_TAIL register inside the
context image even on !llc machines has been a source of consternation
for every reader. It appears to work on bsw+, but it may just have been
that we have been incredibly bad at detecting the errors.
v2: With extra enthusiasm.
v3: Drop force of map type for pinned default_state as by the time we
pin it, the map type is always WB and doesn't conflict with the earlier
use by ce->state.
v4: Transfer engine->default_state from MAP_WC to MAP_WB on creation so
we do not need the MAP_FORCE littered around the backends
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-3-chris@chris-wilson.co.uk
Now that we reload both RING_HEAD and RING_TAIL when rebinding the
context, we do not need to scrub those registers immediately on resume.
v2: Handle the perma-pinned contexts.
v3: Set RING_TAIL on context-pin so that we always have known state in
the context image for the ring registers and all parties have similar
code (ripe for refactoring).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-1-chris@chris-wilson.co.uk
Future gen reduce the number of bits we will have available to
differentiate between contexts, so reduce the lifetime of the ID
assignment from that of the context to its current active cycle (i.e.
only while it is pinned for use by the HW, will it have a constant ID).
This means that instead of a max of 2k allocated contexts (worst case
before fun with bit twiddling), we instead have a limit of 2k in flight
contexts (minus a few that have been pinned by the kernel or by perf).
To reduce the number of context ids we require, we allocate a context id
on first use and mark it as pinned for as long as the GEM context itself is,
that is we keep it pinned while it is active on each engine. If we exhaust
our context id space, then we try to reclaim an id from an idle context.
In the extreme case where all context ids are pinned by active contexts,
we force the system to idle in order to recover ids.
We cannot reduce the scope of an HW-ID to an engine (allowing the same
gem_context to have different ids on each engine) as in the future we
will need to preassign an id before we know which engine the
context is being executed on.
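The allocation policy in miniature (a standalone sketch with a tiny pool
and illustrative names; the real pool is ~2k ids and idle contexts sit on
an LRU list):

#define MAX_HW_ID 4

struct ctx {
        int hw_id;              /* -1 while the context owns no id */
        int pin_count;          /* >0 while pinned for use by the HW */
};

static struct ctx *id_owner[MAX_HW_ID];         /* current holder of each id */

int assign_hw_id(struct ctx *c)
{
        int id;

        if (c->hw_id >= 0)
                return c->hw_id;                /* keep the pinned id */

        for (id = 0; id < MAX_HW_ID; id++)      /* any id still free? */
                if (!id_owner[id])
                        goto claim;

        for (id = 0; id < MAX_HW_ID; id++)      /* steal from an idle context */
                if (!id_owner[id]->pin_count) {
                        id_owner[id]->hw_id = -1;
                        goto claim;
                }

        return -1;      /* every id is pinned: idle the GPU and retry */

claim:
        id_owner[id] = c;
        c->hw_id = id;
        return id;
}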
v2: Improved commentary (Tvrtko) [I tried at least]
References: https://bugs.freedesktop.org/show_bug.cgi?id=107788
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904153117.3907-1-chris@chris-wilson.co.uk
There are two issues with the current RPCS programming for Icelake:
Expansion of the slice count bitfield has been missed, as well as the
required programming workaround for the subslice count bitfield size
limitation.
1)
Bitfield width for configuring the active slice count has grown so we need
to program the GEN8_R_PWR_CLK_STATE accordingly.
Current code was always requesting eight times the number of slices (due
to writing to a bitfield starting three bits higher than it should). These
requests were luckily a) capped by the hardware to the available number of
slices, and b) we haven't yet exported the code to ask for reduced slice
configurations.
Due to both of the above there was no impact from this incorrect programming
but we should still fix it.
2)
Due to the subslice count bitfield being only three bits wide and furthermore
capped to a maximum documented value of four, a special programming
workaround is needed to enable more than four subslices.
With this workaround the driver has to consider the GT configuration as
2x4x8, while the hardware internally translates this to 1x8x8.
A limitation stemming from this is that either a subslice count between
one and four can be selected, or a subslice count equaling the total
number of subslices in all selected slices. In other words, odd subslice
counts greater than four are impossible, as are odd subslice counts
greater than a single slice subslice count.
This also had no impact in the current code base due to the breakage from 1)
always requesting more than one slice.
While fixing this we also add some asserts to flag up any future bitfield
overflows.
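To make the two fixes concrete, a standalone sketch of the packing and
the workaround (the shift/width values and the simple halving rule are
illustrative; bspec 12247 has the real encoding):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define RPCS_S_CNT_SHIFT        15      /* assumed for illustration */
#define RPCS_S_CNT_WIDTH        6
#define RPCS_SS_CNT_SHIFT       8       /* assumed for illustration */
#define RPCS_SS_CNT_WIDTH       3       /* only 3 bits, capped at 4 */

static uint32_t make_rpcs(unsigned int slices, unsigned int subslices)
{
        /*
         * The subslice field cannot encode more than four, so e.g. a
         * 1x8 request is programmed as 2x4 and the hardware translates
         * it back to a single 8-subslice slice.
         */
        if (subslices > 4) {
                assert(subslices % 2 == 0);
                slices *= 2;
                subslices /= 2;
        }

        /* flag up any future bitfield overflow */
        assert(slices < (1u << RPCS_S_CNT_WIDTH));
        assert(subslices <= 4);

        return (slices << RPCS_S_CNT_SHIFT) | (subslices << RPCS_SS_CNT_SHIFT);
}

int main(void)
{
        printf("1x8 requested as 2x4: %#x\n", make_rpcs(1, 8));
        return 0;
}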
v2:
* Use a local in all branches for clarity. (Lionel)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Bspec: 12247
Reported-by: tony.ye@intel.com
Suggested-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: tony.ye@intel.com
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903113007.2643-1-tvrtko.ursulin@linux.intel.com
On finishing the reset, the intention is to restart the GPU before we
relinquish the forcewake taken to handle the reset - the goal being the
GPU reloads a context before it is allowed to sleep. For this purpose,
we used tasklet_kill() which, although it accomplished the goal of
restarting the GPU, carried with it a sting in its tail: it cleared the
TASKLET_STATE_SCHED bit. This meant that if another CPU queued a new
request to this engine, we would clear the flag and later attempt to
requeue the tasklet on the local CPU, breaking the per-cpu softirq
lists.
Remove the dangerous tasklet_kill() and just run the tasklet func
directly as we know it is safe to do so (the tasklets are internally
locked to allow mixed usage from direct submission).
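A sketch of the replacement (for kernels of this era, where a tasklet
carries a func/data pair; the exact call site in execlists_reset_finish()
is paraphrased):

        struct tasklet_struct *t = &engine->execlists.tasklet;

        /*
         * Run the handler directly instead of tasklet_kill(): the
         * execlists tasklet serialises itself against mixed direct and
         * softirq invocation, and this leaves TASKLET_STATE_SCHED
         * untouched for any concurrent tasklet_schedule().
         */
        t->func(t->data);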
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180828152702.27536-1-chris@chris-wilson.co.uk
The RS_CTX_ENABLE and CTX_SAVE_INHIBIT bits are not present on ICL
anymore, but we still try to set them and then check them with
GEM_BUG_ON, resulting in a BUG() call. The bug can be reproduced by
igt/drv_selftest/live_hangcheck/others-priority and our CI was able
to catch it.
It is worth noticing that commit 05f0addd9b ("drm/i915/icl: Enhanced
execution list support") already tried to avoid the save bits
on ICL, but only inside populate_lr_context().
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Testcase: igt/drv_selftest/live_hangcheck/others-priority
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107399
References: 05f0addd9b ("drm/i915/icl: Enhanced execution list support")
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180809235852.24516-1-paulo.r.zanoni@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
After disabling resource streamer on ICL (due to it actually not
existing there), I got feedback that there have been some experimental
patches for mesa to use RS years ago, but nothing ever landed or shipped
because there was no performance improvement.
This removes it from kernel keeping the uapi defines around for
compatibility.
v2: - re-add the inadvertent removal of CTX_CTRL_INHIBIT_SYN_CTX_SWITCH
- don't bother trying to document removed params on uapi header:
applications should know that from the query.
(from Chris)
v3: - disable CTX_CTRL_RS_CTX_ENABLE instead of removing it
- reword commit message after Daniele confirmed no performance
regression on his machine
- reword commit message to make clear RS is being removed due to
never having been used
v4: - move I915_EXEC_RESOURCE_STREAMER to __I915_EXEC_ILLEGAL_FLAGS so
the check on ioctl() is made much earlier by
i915_gem_check_execbuffer() (suggested by Tvrtko)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180803232443.17193-1-lucas.demarchi@intel.com
In the aub trace utility, the context images are terminated with a
MI_BATCH_BUFFER_END; the simulator is reported as complaining otherwise.
Do the same for our protocontext image for completeness, and in passing
apply the magic bit for gen10 to mark the end of the context image.
Reported-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180730164325.12770-1-chris@chris-wilson.co.uk
Since ggtt_offset_bias is now stored in ggtt.pin_bias, it is duplicated
inside i915_gem_context, and can instead be accessed directly from ggtt.
v3:
Added a helper function to retrieve the ggtt.pin_bias from the vma.
v4:
Moved the helper function to the previous patch in the series.
Dropped the bias from intel_ring_pin. This introduces a slight functional
change since we are always pinning the ring a bit higher if GuC is present
even though we don't really need to.
v8:
Fixed patch not applying on the most recent upstream.
Signed-off-by: Jakub Bartmiński <jakub.bartminski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180727141148.30874-4-jakub.bartminski@intel.com
As GEN8_LR_CONTEXT_ALIGN is I915_GTT_MIN_ALIGNMENT, it is functionally
equivalent to 0, and since we will not be able to reduce the min-alignment
for the GTT, passing 0 is and will remain equivalent.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180727092947.1953-1-chris@chris-wilson.co.uk
Using PAGE_SIZE for virtual offset alignment is superfluous as it is
equal to the minimum gtt alignment and so equivalent to 0. It is also
the wrong value to use as we stopped using physical page constructs for
the virtual GTT, i.e. it would be preferable to use I915_GTT_PAGE_SIZE
and in these cases merely imply I915_GTT_MIN_ALIGNMENT.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180727091855.1879-1-chris@chris-wilson.co.uk
A reasonably common operation is to pin the map of the vma alongside the
vma itself for the lifetime of the vma, and so release both pins at the
same time as destroying the vma. It is common enough to pull into the
release function, making that central function more attractive to a
couple of other callsites.
The continual ulterior motive is to sweep over errors on module load
aborting...
Testcase: igt/drv_module_reload/basic-reload-inject
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180721125037.20127-1-chris@chris-wilson.co.uk
There's a race between idling the engine and finishing off the last
tasklet (as we may kick the tasklets after declaring an individual
engine idle). However, since we do not need to access the device until
we try to submit to the ELSP register (processing the CSB just requires
normal CPU access to the HWSP, and when idle we should not need to
submit!) we can defer the assertion unto that point. The assertion is
still useful as it does verify that we do hold the longterm GT wakeref
taken from request allocation until request completion.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107274
Fixes: 9512f985c3 ("drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd)")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180719075029.28643-1-chris@chris-wilson.co.uk
We make a decision at module load whether to use the GuC backend or not,
but lose that setup across set-wedge. Currently, the guc doesn't
override the engine->set_default_submission hook, letting execlists sneak
back in temporarily on unwedging and leading to an unbalanced park/unpark.
v2: Remove comment about switching back temporarily to execlists on
guc_submission_disable(). We currently only call disable on shutdown,
and plan to also call disable before suspend and reset, in which case we
will either restore guc submission or mark the driver as wedged, making
the reset back to execlists pointless.
v3: Move reset.prepare across
Fixes: 63572937ce ("drm/i915/execlists: Flush pending preemption events during reset")
Testcase: igt/drv_module_reload/basic-reload-inject
Testcase: igt/gem_eio
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180717202932.1423-1-chris@chris-wilson.co.uk
Inject a failure into preemption completion to pretend as if the HW
didn't successfully handle preemption and we are forced to do a reset in
the middle.
v2: Wait for preemption, to force testing with the missed preemption.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180716132154.12539-1-chris@chris-wilson.co.uk
On reset/wedging, we cancel all pending replies from the HW and we also
want to cancel an outstanding preemption event. Since we use the same
function to cancel the pending replies for reset and for a preemption
event, we can simply clear the active tracking for all.
v2: Keep execlists_user_end() markup for wedging
v3: Move assignment to inline to hide the bare assignment.
Fixes: 60a9432454 ("drm/i915/execlists: Drop clear_gtiir() on GPU reset")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180716125424.5715-1-chris@chris-wilson.co.uk
If we declare the driver wedged before the GPU truly is, then we may see
the GPU complete some CS events following our cancellation. This leaves
us quite confused as we deleted all the bookkeeping and thus complain
about the inconsistent state.
We can just ignore the remaining events and let the GPU idle by not
feeding it, and so avoid trying to racily overwrite shared state. We
rely on there being a full GPU reset before unwedging, giving us the
opportunity to reset the shared state.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107188
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180716080332.32283-4-chris@chris-wilson.co.uk
The kernel recently gained an augmented rbtree with the purpose of
caching the leftmost element of the rbtree, a frequent optimisation to
avoid calls to rb_first() which is also employed by the
execlists->queue. Switch from our open-coded cache to the library.
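For reference, usage of the cached rbtree looks roughly like this (a
sketch of insert and peek; the real queue keys on i915_priolist
priority):

#include <linux/rbtree.h>

struct queue_node {
        struct rb_node node;
        int prio;
};

static struct rb_root_cached queue = RB_ROOT_CACHED;

static void queue_insert(struct queue_node *qn)
{
        struct rb_node **p = &queue.rb_root.rb_node, *parent = NULL;
        bool leftmost = true;

        while (*p) {
                parent = *p;
                if (qn->prio > rb_entry(parent, struct queue_node, node)->prio) {
                        p = &parent->rb_left;
                } else {
                        p = &parent->rb_right;
                        leftmost = false;
                }
        }

        rb_link_node(&qn->node, parent, p);
        rb_insert_color_cached(&qn->node, &queue, leftmost);
}

/* O(1) peek at the highest priority node, no rb_first() walk */
static struct queue_node *queue_peek(void)
{
        struct rb_node *rb = rb_first_cached(&queue);

        return rb ? rb_entry(rb, struct queue_node, node) : NULL;
}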
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629075348.27358-9-chris@chris-wilson.co.uk
Across a reset, the seqno (and thus hangcheck) should restart and the
hangcheck naturally progress; when it does not, we want to declare an
emergency. Currently, we only detect if reset and reinit fail, but we
do not detect if the call to reinit succeeds while the HW is fried - as we
are resetting hangcheck when initialising the engine. Remove that and
rely on the natural progress to reset the hangcheck timer.
References: e21b141376 ("drm/i915: Mark the hangcheck as idle when unparking the engines")
References: 1fd00c0fae ("drm/i915: Declare the driver wedged if hangcheck makes no progress")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709130208.11730-2-chris@chris-wilson.co.uk
Back in commit 27af5eea54 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.
We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.
In this iteration of the tasklet-vs-direct submission debate, we seek a
compromise whereby we submit new requests immediately to the HW but
defer processing the CS interrupt onto a tasklet. We gain the advantage
of low-latency and ksoftirqd avoidance when waking up the HW, while
avoiding the system-wide starvation of our CS irq-storms.
Comparing the impact on the maximum latency observed (that is the time
stolen from an RT process) over a 120s interval, repeated several times
(using gem_syslatency, similar to RT's cyclictest) while the system is
fully laden with i915 nops, we see that direct submission can actually
improve the worst case.
Maximum latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 2)
x Always using tasklets (a couple of >1000us outliers removed)
+ Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
| + |
| + |
| + |
| + + |
| + + + |
| + + + + x x x |
| +++ + + + x x x x x x |
| +++ + ++ + + *x x x x x x |
| +++ + ++ + * *x x * x x x |
| + +++ + ++ * * +*xxx * x x xx |
| * +++ + ++++* *x+**xx+ * x x xxxx x |
| **x++++*++**+*x*x****x+ * +x xx xxxx x x |
|x* ******+***************++*+***xxxxxx* xx*x xxx + x+|
| |__________MA___________| |
| |______M__A________| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 118 91 186 124 125.28814 16.279137
+ 120 92 187 109 112.00833 13.458617
Difference at 95.0% confidence
-13.2798 +/- 3.79219
-10.5994% +/- 3.02677%
(Student's t, pooled s = 14.9237)
However the mean latency is adversely affected:
Mean latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 1)
x Always using tasklets
+ Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
| xxxxxx + ++ |
| xxxxxx + ++ |
| xxxxxx + +++ ++ |
| xxxxxxx +++++ ++ |
| xxxxxxx +++++ ++ |
| xxxxxxx +++++ +++ |
| xxxxxxx + ++++++++++ |
| xxxxxxxx ++ ++++++++++ |
| xxxxxxxx ++ ++++++++++ |
| xxxxxxxxxx +++++++++++++++ |
| xxxxxxxxxxx x +++++++++++++++ |
|x xxxxxxxxxxxxx x + + ++++++++++++++++++ +|
| |__A__| |
| |____A___| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 3.506 3.727 3.631 3.6321417 0.02773109
+ 120 3.834 4.149 4.039 4.0375167 0.041221676
Difference at 95.0% confidence
0.405375 +/- 0.00888913
11.1608% +/- 0.244735%
(Student's t, pooled s = 0.03513)
However, since the mean latency corresponds to the amount of irqsoff
processing we have to do for a CS interrupt, we only need to speed that
up to benefit not just system latency but our own throughput.
v2: Remember to defer submissions when under reset.
v4: Only use direct submission for new requests
v5: Be aware that with mixing direct tasklet evaluation and deferred
tasklets, we may end up idling before running the deferred tasklet.
v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
annotation to reset_in_progress().
v7: Take the full timeline.lock when enabling perf_pmu stats as the
tasklet is no longer a valid guard. A consequence is that the stats are
now only valid for engines also using the timeline.lock to process
state.
Testcase: igt/gem_exec_latency/*rthog*
References: 27af5eea54 ("drm/i915: Move execlists irq handler to a bottom half")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
Now that we use the CSB stored in the CPU friendly HWSP, we do not need
to track interrupts for when the mmio CSB registers are valid and can
just check where we read up to last from the cached HWSP. This means we
can forgo the atomic bit tracking from interrupt, and in the next patch
it means we can check the CSB at any time.
v2: Change the splitting inside reset_prepare; we only want to lose
testing the interrupt in this patch, as the next patch requires the change
in locking.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
As we now never read back our current head position from the CSB
pointers register, and the HW itself doesn't use it to prevent
overwriting unread CSB entries, we do not need to keep updating the
register. As it turns out this register is not listed as being shadowed,
and so requires forcewake -- but we haven't been taking forcewake around
it so the writes have probably been regularly dropped. Fortuitously, we
only read the value after a reset where it did not matter, and zero was
the right answer (well, close enough).
Mika pointed out that this was how we used to do it (accidentally!)
before he fixed it in commit cc53699b25 ("drm/i915: Use masked write
for Context Status Buffer Pointer").
References: cc53699b25 ("drm/i915: Use masked write for Context Status Buffer Pointer")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
On HW reset, the HW clears the write pointer (to 0). But since it also
writes its first CSB entry to slot 0, we need to reset the write pointer
back to the element before (so the first entry we read is 0).
This is required for the next patch, where we trust the CSB completely!
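The pointer convention in miniature (a standalone sketch; six entries
matches the gen8 CSB, and the modulo stands in for the hardware wrap):

#define CSB_ENTRIES 6

struct csb {
        unsigned int head;      /* last entry the CPU consumed */
        unsigned int tail;      /* last entry the HW wrote */
};

/* After reset the HW writes its first event into slot 0, so park both
 * pointers on the final slot: the next entry consumed is then slot 0. */
void csb_reset(struct csb *csb)
{
        csb->head = CSB_ENTRIES - 1;
        csb->tail = CSB_ENTRIES - 1;    /* mirrors the masked write to the
                                         * write pointer register */
}

/* Next slot to read, or -1 if the HW has written nothing new. */
int csb_read(struct csb *csb)
{
        if (csb->head == csb->tail)
                return -1;
        csb->head = (csb->head + 1) % CSB_ENTRIES;
        return csb->head;
}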
v2: Use _MASKED_FIELD
v3: Store the reset value, so that we differentiate between mmio/hwsp
transparently and without pretense.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
Following the removal of the last workarounds, the only CSB mmio access
is for the old vGPU interface. The mmio registers presented by vGPU do
not require forcewake and can be treated as ordinary volatile memory,
i.e. they behave just like the HWSP access just at a different location.
We can reduce the CSB access to a set of read/write/buffer pointers and
treat the various paths identically and not worry about forcewake.
(Forcewake is a nightmare for worst-case latency, and we want to process
this all with irqsoff -- no latency allowed!)
v2: Comments, comments, comments. Well, 2 bonus comments.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
In the next patch, we will process the CSB events directly from the
submission path, rather than only after a CS interrupt. Hence, we will
no longer have the need for a loop until the has-interrupt bit is clear,
and in the meantime can remove that small optimisation.
v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet
after each irq, when assuming that the tasklet is called for each irq.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
In the following patch, we will process the CSB events under the
timeline.lock and not serialised by the tasklet. This also means that we
will need to protect access to common variables such as
execlists->csb_head with the timeline.lock during reset.
v2: Move sync_irq to avoid deadlocks when taking timeline.lock from
our interrupt handler.
v3: Kill off the synchronize_hardirq as it raises more questions than
it answers; now that we use the timeline.lock entirely for CSB serialisation
between the irq and elsewhere, we don't need to be so heavy handed with
flushing.
v4: Treat request cancellation (wedging after failed reset) similarly
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-3-chris@chris-wilson.co.uk
In the next patch, we will begin processing the CSB from inside the
submission path (underneath an irqsoff section, and even from inside
interrupt handlers). This means that updating the execlists->port[] will
no longer be serialised by the tasklet but needs to be locked by the
engine->timeline.lock instead. Pull dequeue and submit under the same
lock for protection. (An alternate future plan is to keep the in/out
arrays separate for concurrent processing and reduced lock coverage.)
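The resulting shape of the submission path is roughly as below (a sketch;
the dequeue helper stands in for both filling port[] and writing ELSP):

static void execlists_submission_sketch(struct intel_engine_cs *engine)
{
        unsigned long flags;

        /*
         * Dequeue (which fills execlists->port[]) and the ELSP submit now
         * sit under the same engine timeline lock, so a concurrent CSB
         * processor cannot observe a half-updated port[].
         */
        spin_lock_irqsave(&engine->timeline.lock, flags);
        execlists_dequeue(engine);      /* fill port[] from the queue and
                                         * write the selection to ELSP */
        spin_unlock_irqrestore(&engine->timeline.lock, flags);
}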
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-2-chris@chris-wilson.co.uk
Due to how we only release the pinning on the context state on
retirement and never track activity on the context vma itself, the
object can never be active at the point of release. Replace the
conditional transfer of ownership onto an active-reference with an
assert that the object is idle.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625100604.22598-2-chris@chris-wilson.co.uk
As we may cancel the ce->state allocation during context pinning (but
crucially after we mark ce as operational), that means we may be asked
to destroy a nonexistent ce->state. Given the choice between handling a
complex error path on pinning, and just ignoring the lack of state in
destroy, choose the latter for simplicity.
Reported-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625100604.22598-1-chris@chris-wilson.co.uk
Having the w/a registers open-coded leaves a trap for the
unwary; it would be easy to miss incrementing the LRI counter when
adding a new register to the list. Instead, pull the list of registers
into a table, so that we only need add new registers to that table
rather than try and remember important side-effects of earlier chunks of
GPU instructions.
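The table-driven shape as a standalone sketch (register offsets and the
LRI encoding are placeholders; the point is only that the dword count is
derived from the table):

#include <stddef.h>
#include <stdint.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define MI_LOAD_REGISTER_IMM(n) ((0x22u << 23) | (2 * (n) - 1))  /* assumed encoding */

static const struct { uint32_t reg, value; } wa_regs[] = {
        { 0x7010, 0 },          /* placeholder offsets and values */
        { 0x7014, 0 },
};

uint32_t *emit_wa_lri(uint32_t *batch)
{
        size_t i;

        /* The LRI header count comes from the table, so adding an entry to
         * wa_regs[] can no longer desync the instruction length. */
        *batch++ = MI_LOAD_REGISTER_IMM(ARRAY_SIZE(wa_regs));
        for (i = 0; i < ARRAY_SIZE(wa_regs); i++) {
                *batch++ = wa_regs[i].reg;
                *batch++ = wa_regs[i].value;
        }

        return batch;
}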
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618094150.30895-1-chris@chris-wilson.co.uk
The SF and clipper units mishandle the provoking vertex in some cases,
which can cause misrendering with shaders that use flat shaded inputs.
There are chicken bits in 3D_CHICKEN3 (for SF) and FF_SLICE_CHICKEN
(for the clipper) that work around the issue. These registers are
unfortunately not part of the logical context (even the power context),
and so we must reload them every time we start executing in a context.
Bugzilla: https://bugs.freedesktop.org/103047
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180615190605.16238-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: stable@vger.kernel.org
We can avoid the mmio read of the CSB pointers after reset based on the
knowledge that the HW always start writing at entry 0 in the CSB buffer.
We need to reset our CSB head tracking after GPU reset (and on
sanitization after resume) so that we are expecting to read from entry
0, hence we reset our head tracking back to the entry before (the last
entry in the ring).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180615093137.14270-2-chris@chris-wilson.co.uk
In the unlikely case where we have failed to keep submitting to the GPU,
we end up with the ELSP queue empty but a pending queue of requests.
Here, we skip the per-engine reset as there is no guilty request, but in
doing so we also skip the engine restart leaving ourselves with a
permanently hung engine. A quick way to recover is by moving the tasklet
kick to execlists_reset_finish() (from init_hw). We still emit the error
on hanging, so the error is not lost but we should be able to recover.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180604073441.6737-2-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
On allocation error, do not jump to the unwind handler that tries to
free the error pointer.
Reported-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: a89d1f921c ("drm/i915: Split i915_gem_timeline into individual timelines")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180611153332.14824-1-chris@chris-wilson.co.uk
The HW only accepts offsets within ring->size, and fails peculiarly if
the RING_HEAD or RING_TAIL is set to ring->size. Therefore whenever we
set ring->head/ring->tail we want to make sure it is within range (using
intel_ring_wrap()).
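The wrap itself is trivial because the ring size is a power of two; a
sketch:

/* Keep any head/tail value strictly inside [0, size); in particular
 * wrap(size, size) is 0, never the illegal value size itself. */
static inline unsigned int ring_wrap(unsigned int size, unsigned int pos)
{
        return pos & (size - 1);
}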
v2: Double check execlists as well
v3: Remove redundancy with assert_ring_tail_valid()
v4: Just assert in intel_ring_reset() rather than be over-defensive.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> #v2
Link: https://patchwork.freedesktop.org/patch/msgid/20180611110845.31890-2-chris@chris-wilson.co.uk
In the near future, I want to subclass gen6_hw_ppgtt as it contains a
few specialised members and I wish to add more. To avoid the ugliness of
using ppgtt->base.base, rename the i915_hw_ppgtt base member
(i915_address_space) as vm, which is our common shorthand for an
i915_address_space local.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180605153758.18422-1-chris@chris-wilson.co.uk