Extend the error handling code with operations found in other nearby error
handling code
A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@r exists@
statement S1,S2,S3;
constant C1,C2,C3;
@@
*if (...)
{... S1 return -C1;}
...
*if (...)
{... when != S1
return -C2;}
...
*if (...)
{... S1 return -C3;}
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
The return from move_to_gtt_domain() may indicate a pending signal which
needs to be handled, as opposed to an actual error, so report the
original return value rather than forcing an EINVAL.
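The shape of the fix, as a minimal sketch (the call and its surroundings
are illustrative, assuming the era's set-to-GTT-domain helper, not the
verbatim patch):
/* before: any failure was flattened into a hard error */
ret = i915_gem_object_set_to_gtt_domain(obj, 1);
if (ret != 0)
	return -EINVAL;
/* after: preserve e.g. -ERESTARTSYS so a pending signal is handled */
if (ret != 0)
	return ret;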
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When the GPU is reset, the fence registers are invalidated, so release
the objects and clear them out.
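A hedged sketch of the reset path; the loop bounds and the
i915_gem_clear_fence_reg() helper are assumptions for illustration:
/* after a reset the hardware fences are invalid, so drop the
 * software bookkeeping for each one as well (names illustrative) */
for (i = 0; i < dev_priv->num_fence_regs; i++) {
	struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];

	if (reg->obj != NULL)
		i915_gem_clear_fence_reg(reg->obj);
}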
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Only drm/i915 does the bookkeeping that makes the information useful,
and the information maintained is driver specific, so move it out of the
core and into its single user.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
At that point, as the object is no longer in any GPU write domain, it
must not be on the list, so the list_del() is redundant.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Just reschedule the retire requests again if the device is currently
busy. The request list will be pruned along other paths, so it will
never grow unbounded and we can afford to miss the occasional pruning.
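The pattern is a trylock in the work handler, as in this hedged sketch
(struct and field names are assumptions):
static void retire_work_handler(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, struct drm_i915_private,
			     mm.retire_work.work);
	struct drm_device *dev = dev_priv->dev;

	/* device busy? just try again later instead of blocking */
	if (!mutex_trylock(&dev->struct_mutex)) {
		queue_delayed_work(dev_priv->wq,
				   &dev_priv->mm.retire_work, HZ);
		return;
	}

	i915_gem_retire_requests(dev);	/* prune completed requests */
	queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, HZ);
	mutex_unlock(&dev->struct_mutex);
}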
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With multiple rings generating requests independently, the outstanding
requests must also be tracked independently.
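Structurally, the request list moves from the device into each ring, as
in this sketch (field names are assumptions):
struct intel_ring_buffer {
	/* ... existing ring state ... */
	struct list_head request_list;	/* outstanding requests, oldest first */
};

/* requests are then queued and retired per ring */
list_add_tail(&request->list, &ring->request_list);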
Reported-by: Wang Jinjin <jinjin.wang@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30380
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This was introduced by 48b956c5; I had thought I had already fixed it.
Oh well.
Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Daniel Vetter pointed out that in this case it would be clearer and
cleaner to use a spinlock instead of a mutex to protect the per-file
request list manipulation. Make it so.
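A sketch of the resulting locking (the mm.lock and request_list field
names are assumptions):
spin_lock(&file_priv->mm.lock);
list_add_tail(&request->client_list, &file_priv->mm.request_list);
spin_unlock(&file_priv->mm.lock);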
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Owain Ainsworth reported an issue between the interaction of the
hangcheck and userspace immediately (and permanently) falling back to
s/w rasterisation. In order to break the mutex and begin resetting the
GPU, we must abort the current operation (usually within the wait) and
climb sufficiently far back up the call chain to drop the mutex. In his
implementation, Owain has a loop within the ioctl handler to detect the
hang and then sleep until the error handler has run. I've chosen to
return to userspace and report an EAGAIN which should trigger the
userspace ioctl handler to repeat the call (simply because it felt less
invasive...). Before hitting a wedged GPU, we then wait upon completion
of the error handler.
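A hedged sketch of the entry-point check described above (all names are
illustrative):
if (atomic_read(&dev_priv->mm.wedged)) {
	/* a reset is pending: wait for the error handler to run */
	ret = wait_for_completion_interruptible(&dev_priv->error_completion);
	if (ret)
		return ret;

	if (atomic_read(&dev_priv->mm.wedged))
		return -EAGAIN;	/* userspace should repeat the ioctl */
}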
Reported-by: Owain G. Ainsworth <zerooa@googlemail.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Avoid causing latencies in other clients by not taking the global struct
mutex, and instead move the per-client request manipulation under a
local per-client mutex. For example, this allows a compositor to
schedule a page-flip (through X) whilst an OpenGL application is
monopolising the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
We need to drain the pending flips prior to disabling the pipe during
modeset, and these need to be done in an uninterruptible fashion.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This is already performed with the pipelined flush, so by the time we
schedule the flush in the page-flip, the ring is NULL and we oops
instead.
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
A minor typo caused a single fence register to be incorrectly
programmed, resulting in occasional tiling corruption.
Reported-and-tested-by: Hans de Bruin <bruinjm@xs4all.nl>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=18962
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
Keep a list of pinned objects and display it via debugfs. Now all
objects that exist in the GTT are always tracked on one of the
active, flushing, inactive or pinned lists.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
If we have queued a page flip on the current fb and then request a mode
change, wait until the page flip completes before performing the new
request.
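A sketch of the wait (the unpin_work pointer and the wait queue are
assumed names):
/* stall the mode change until any queued flip has completed */
wait_event(dev_priv->pending_flip_queue,
	   intel_crtc->unpin_work == NULL);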
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Track whether the GPU requires the fence for the execution of a batch
buffer, and so only wait upon the retirement of the object's last
rendering seqno if the fence is in use by the GPU.
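A sketch of the resulting condition (the fenced_gpu_access flag is an
assumed name):
/* only stall if the GPU actually sampled the fence */
if (obj_priv->fenced_gpu_access) {
	ret = i915_gem_object_wait_rendering(obj);
	if (ret)
		return ret;
}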
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Introduce intel_init_render_ring_buffer() and
intel_init_bsd_ring_buffer() for ring initialization.
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Clear the GPU read domain for the inactive objects on a reset so that
they are correctly invalidated on reuse.
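A sketch of the reset path (names follow the era's driver but are not
verbatim):
list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list)
	obj_priv->base.read_domains &= ~I915_GEM_GPU_DOMAINS;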
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Owain Ainsworth noticed that the reset code failed to clear the flushing
list leaving the driver in an inconsistent state following a hung GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When flushing the GPU domains, we emit a flush on *both* rings, even
though they share a unified cache. Only emit the flush on the currently
active ring.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Change the semantics to retire any buffer older than the current seqno,
rather than repeatedly calling the function to retire the buffer at the
head of the list matching the request seqno.
Whilst this should have no semantic impact on the implementation, Daniel
was wondering if there was a bug where we might miss a retirement and so
end up with a continually growing active list.
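A sketch of the new retirement loop (field and helper names are
assumptions):
/* retire by age, not by exact request match */
while (!list_empty(&ring->active_list)) {
	obj_priv = list_first_entry(&ring->active_list,
				    struct drm_i915_gem_object, list);

	if (!i915_seqno_passed(seqno, obj_priv->last_rendering_seqno))
		break;	/* everything older has now been retired */

	i915_gem_object_move_to_inactive(&obj_priv->base);
}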
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
With 5 places to update when adding handling for fence registers, it is
easy to overlook one or two. Correct that oversight, but fence
management should be improved before a new set of registers is added.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30199
Original patch by: Yuanhan Liu <yuanhan.liu@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@kernel.org
As we currently may need to acquire a fence register during a modeset,
we need to be able to do so in an uninterruptible manner. So expose that
parameter to the callers of the fence management code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This ensures that we do wait upon the flushes to complete if necessary
and avoid the visual tears, whilst enabling pipelined page-flips.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
I pulled the wrong version of the patch from Daniel Vetter which was
missing the read barriers -- and the one that was causing all the trouble
was from i915_gem_object_put_fence_reg(), leading to GPU hangs on gen3.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By reducing the hangcheck frequency we check less often, conserving
resources, and still detect a lockup quickly. On a fast machine with a
slow GPU (like a Core2 paired with a 945G) it is easy for the hangcheck
to misfire if we check too fast.
Also, once hung and if we fail to completely reset the chip, we have a
nasty habit of proclaiming a hang many times a second and generating a
strobe-like display.
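The mechanism is simply a longer re-arm interval on the hangcheck timer,
as in this sketch (the period value and macro name are illustrative):
#define HANGCHECK_PERIOD_MS 1500

mod_timer(&dev_priv->hangcheck_timer,
	  jiffies + msecs_to_jiffies(HANGCHECK_PERIOD_MS));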
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This spinlock only served debugging purposes in a time when we could not
be sure of the mutex ever being released upon a GPU hang. As we should
now be able to rely on hangcheck to do the job for us (and that error
reporting should not itself require the struct mutex), we can kill the
incomplete attempt at protection.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By allocating the request prior to writing to the ringbuffer, we can
abort the operation without leaving the GPU in an inconsistent state.
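A sketch of the ordering (the emit step and names are illustrative):
/* reserve the request before any ring emission, so a failure can
 * be unwound without touching GPU state */
request = kzalloc(sizeof(*request), GFP_KERNEL);
if (request == NULL)
	return -ENOMEM;

ret = intel_ring_begin(ring, 4);
if (ret) {
	kfree(request);	/* nothing emitted yet; the GPU is untouched */
	return ret;
}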
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
... take advantage of the new implicit request issuing of
i915_wait_request.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
One caller (for the pageflip support) wants a purely pipelined flush.
Distinguish this case by a new parameter. This will also be useful
later on for pipelined fencing.
v2: Simplify the code by depending upon the implicit request emitting
of i915_wait_request.
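The shape of the change, sketched with illustrative (not verbatim)
names:
/* callers now say whether the flush may be pipelined */
static int
i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj,
				       bool pipelined);

/* the page-flip path passes pipelined = true and does not wait */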
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[ickle: And drop the non-interruptible support in the process.]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
By moving one i915_add_request we can solely depend on the new
auto-seqno-numbering behaviour.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>