Commit Graph

45571 Commits

Ville Syrjälä
0ba7c51a6f drm/i915: Fix hotplug irq ack on i965/g4x
Just like with PIPESTAT, the edge triggered IIR on i965/g4x
also causes problems for hotplug interrupts. To make sure
we don't get the IIR port interrupt bit stuck low with the
ISR bit high we must force an edge in ISR. Unfortunately
we can't borrow the PIPESTAT trick and toggle the enable
bits in PORT_HOTPLUG_EN as that act itself generates hotplug
interrupts. Instead we just have to loop until we've cleared
PORT_HOTPLUG_STAT, or we just give up and WARN.
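
A minimal, self-contained C sketch of the clear-until-quiescent loop described above; the register accessors, the fake latched value and the retry bound are hypothetical stand-ins, not the actual i915 code:

    #include <stdint.h>
    #include <stdio.h>

    /* Fake write-1-to-clear status register standing in for PORT_HOTPLUG_STAT. */
    static uint32_t fake_stat = 0x5;

    static uint32_t read_stat(void)      { return fake_stat; }
    static void write_stat(uint32_t val) { fake_stat &= ~val; }

    int main(void)
    {
            uint32_t status = 0;
            int retries;

            for (retries = 10; retries; retries--) {
                    status = read_stat();
                    if (!status)
                            break;              /* nothing latched: the ISR edge is forced */
                    write_stat(status);         /* ack what we saw; it may re-latch meanwhile */
            }
            if (!retries)
                    fprintf(stderr, "WARN: hotplug status would not clear (0x%x)\n", status);
            return 0;
    }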

v2: Don't frob with PORT_HOTPLUG_EN

Cc: stable@vger.kernel.org
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180614175625.1615-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
2018-07-04 23:03:30 +03:00
Chris Wilson
1f6f00238a drm/i915/selftests: Drop struct_mutex around lowlevel ppgtt allocation
For a ppgtt that we are constructing, there is no struct_mutex
dependence so skip it. In the process, also ping the scheduler
frequently to try and avoid the NMI watchdog.

v2: gen6 requires struct_mutex to clean up (currently)

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=107094
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180703135331.12265-1-chris@chris-wilson.co.uk
2018-07-03 22:09:22 +01:00
Chris Wilson
38b7fb0b2a drm/i915/selftests: Release the struct_mutex to free the objects
live_gtt is a very slow test to run, simply because it tries to allocate
and use as much of the 48b address space as it possibly can, and in the
process will try to own all of the system memory. This leads to resource
exhaustion and CPU starvation; the latter impacts us when the NMI
watchdog declares a task hung due to a mutex contention with ourselves.
This we can prevent by releasing the struct_mutex and forcing our
i915/rcu workers to run, and in particular flushing the freed object
worker that is the cause for concern.

References: https://bugs.freedesktop.org/show_bug.cgi?id=107094
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180703101829.7360-1-chris@chris-wilson.co.uk
2018-07-03 22:02:36 +01:00
Tarun Vyas
a608987970 drm/i915: Wait for PSR exit before checking for vblank evasion
The PIPEDSL freezes on PSR entry and if PSR hasn't fully exited, then
the pipe_update_start call schedules itself out to check back later.

On ChromeOS-4.4 kernel, which is fairly up-to-date w.r.t drm/i915 but
lags w.r.t core kernel code, hot plugging an external display triggers
tons of "potential atomic update errors" in the dmesg, on *pipe A*. A
closer analysis reveals that we try to read the scanline 3 times and
eventually timeout, b/c PSR hasn't exited fully leading to a PIPEDSL
stuck @ 1599. This issue is not seen on upstream kernels, b/c for *some*
reason we loop inside intel_pipe_update_start for ~2+ msec which in this
case is more than enough to exit PSR fully, hence an *unstuck* PIPEDSL
counter, hence no error. On the other hand, the ChromeOS kernel spends
~1.1 msec looping inside intel_pipe_update_start and hence errors out
b/c the source is still in PSR.

Regardless, we should wait for PSR exit (if PSR is disabled, we incur
a ~1-2 usec penalty) before reading the PIPEDSL, b/c if we haven't
fully exited PSR, then checking for vblank evasion isn't actually
applicable.

v4: Comment explaining psr_wait after enabling VBL interrupts (DK)

v5: CAN_PSR() to handle platforms that don't support PSR.

v6: Handle local_irq_disable on early return (Chris)

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-2-tarun.vyas@intel.com
2018-07-02 10:56:33 -07:00
Tarun Vyas
c43dbcbbcc drm/i915/psr: Lockless version of psr_wait_for_idle
This is a lockless version of the existing psr_wait_for_idle().
We want to wait for PSR to idle out inside intel_pipe_update_start.
At the time of a pipe update, we should never race with any psr
enable or disable code, which is a part of crtc enable/disable.
The follow-up patch will use this lockless wait inside pipe_update_start
to wait for PSR to idle out before checking for vblank evasion.
We need to keep the wait in pipe_update_start as short as it can be.
So, we can live and flourish w/o taking any psr locks at all.

Even if psr is never enabled, psr2_enabled will be false and this
function will wait for PSR1 to idle out, which should just return
immediately, so a very short (~1-2 usec) wait for cases where PSR
is disabled.
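
A rough user-space C sketch of this lockless poll-until-idle-or-timeout shape; psr_is_idle(), the back-off interval and the 25 msec cap (noted in the v2 comment below) are illustrative assumptions, not the driver's helpers:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical status check standing in for reading the PSR status register. */
    static bool psr_is_idle(void)
    {
            return true;            /* pretend the source has already idled out */
    }

    /* Poll without taking any lock, bounded by timeout_ms. */
    static int wait_for_idle(unsigned int timeout_ms)
    {
            struct timespec start, now;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (;;) {
                    if (psr_is_idle())
                            return 0;       /* common case: returns almost immediately */
                    clock_gettime(CLOCK_MONOTONIC, &now);
                    if ((now.tv_sec - start.tv_sec) * 1000 +
                        (now.tv_nsec - start.tv_nsec) / 1000000 >= (long)timeout_ms)
                            return -1;      /* timed out; propagate the error to the caller */
                    usleep(100);            /* brief back-off between status reads */
            }
    }

    int main(void)
    {
            printf("wait_for_idle() returned %d\n", wait_for_idle(25));
            return 0;
    }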

v2: Add comment to explain the 25msec timeout (DK)

v3: Rename psr_wait_for_idle to __psr_wait_for_idle_locked to avoid
    naming conflicts and propagate err (if any) to the caller (Chris)

v5: Form a series with the next patch

v7: Better explain the need for lockless wait and increase the max
    timeout to handle refresh rates < 60 Hz (Daniel Vetter)

v8: Rebase

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-1-tarun.vyas@intel.com
2018-07-02 10:52:39 -07:00
Dhinakaran Pandiyan
abdd322f68 drm/i915: Remove unnecessary check for unsupported modifiers for NV12
There is already a check to allow only RGB8888 formats with CCS
modifiers.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628061854.6430-1-dhinakaran.pandiyan@intel.com
2018-07-02 10:37:38 -07:00
Vathsala Nagaraju
00b062967f drm/i915/psr: Add psr1 live status
Prints the live state of psr1, extending the existing
PSR2 live state function to cover psr1.

Tested on KBL with psr2 and psr1 panel.

v2: rebase
v3: DK
    Rename psr2_live_status to psr_source_status.
v4: DK
    Move EDP_PSR_STATUS_STATE_SHIFT below EDP_PSR_STATUS_STATE_MASK.
    Pass seq to psr_source_status, handle source status prints in
    psr_source_status.
v5: Fixed CI warning messages
v6:
    Remove extra space in the title before the colon. (DK)
    Rebase. (Jani)
v7: Use tabs for indenting the values. (Jani)
v8: Addressed dk's review comments.

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>

Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Vathsala Nagaraju <vathsala.nagaraju@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530086910-15914-1-git-send-email-vathsala.nagaraju@intel.com
2018-07-02 10:36:20 -07:00
Chris Wilson
7e7367d3bc drm/i915: Try GGTT mmapping whole object as partial
If the whole object is already pinned by HW for use as scanout, we will
fail to move it to the mappable region and so must resort to using a
partial VMA covering the whole object.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104513
Fixes: aa136d9d72 ("drm/i915: Convert partial ggtt vma to full ggtt if it spans the entire object")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180630090509.469-1-chris@chris-wilson.co.uk
2018-07-02 17:36:09 +01:00
Jani Nikula
e67005e59a drm/i915: abstract and document register picking macros
Try to describe what the pick variants do, and which to prefer. No
functional changes.

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629102039.2435-1-jani.nikula@intel.com
2018-07-02 17:36:23 +03:00
Michal Wajdeczko
1ea29bbd47 drm/i915/guc: Print CTL params passed to GuC
While debugging we may want to examine params passed to GuC.

v2: drop #ifdef DEBUG_GUC - Michal

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> #1
Cc: Michal Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618111821.47088-1-michal.wajdeczko@intel.com
2018-06-29 23:34:17 +01:00
Chris Wilson
be01de596e drm/i915/selftests: Attach the fence to the object when making busy
make_obj_busy() makes a dummy busy object, but didn't attach the fence
to the reservation object, so it would not have registered as busy. For
completeness, attach the dummy request as the exclusive fence and mark
the object as written (in i915_vma_move_to_active).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629133717.11761-2-chris@chris-wilson.co.uk
2018-06-29 21:07:39 +01:00
Chris Wilson
d78e2bbf48 drm/i915/selftests: Mark up write into scratch vma
We correctly attach the exclusive fence for the scratch object when
emitting a request that writes into it, but for completeness we should
also declare the write to i915_vma_move_to_active().

Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629133717.11761-1-chris@chris-wilson.co.uk
2018-06-29 20:52:46 +01:00
Maarten Lankhorst
4572095957 drm/i915: Remove delayed FBC activation.
The only time we should start FBC is when we have waited a vblank
after the atomic update. We've already forced a vblank wait by doing
wait_for_flip_done before intel_post_plane_update(), so we don't need
to wait a second time before enabling.

Removing the worker simplifies the code and removes possible race
conditions, like the one seen in bug 103167.

Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-2-maarten.lankhorst@linux.intel.com
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
2018-06-29 10:06:31 +02:00
Maarten Lankhorst
c9855a561a drm/i915: Block enabling FBC until flips have been completed
There is a small race window in which FBC can be enabled after
pre_plane_update is called, but before the page flip has been
queued or completed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167
Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-1-maarten.lankhorst@linux.intel.com
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
2018-06-29 10:06:08 +02:00
Chris Wilson
9512f985c3 drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd)
Back in commit 27af5eea54 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

In this iteration of the tasklet-vs-direct submission debate, we seek a
compromise whereby we submit new requests immediately to the HW but
defer processing the CS interrupt onto a tasklet. We gain the advantage
of low-latency and ksoftirqd avoidance when waking up the HW, while
avoiding the system-wide starvation of our CS irq-storms.

Comparing the impact on the maximum latency observed (that is the time
stolen from an RT process) over a 120s interval, repeated several times
(using gem_syslatency, similar to RT's cyclictest) while the system is
fully laden with i915 nops, we see that direct submission can actually
improve the worst case.

Maximum latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 2)
  x Always using tasklets (a couple of >1000us outliers removed)
  + Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
|          +                                                             |
|          +                                                             |
|          +                                                             |
|          +       +                                                     |
|          + +     +                                                     |
|       +  + +     +  x     x     x                                      |
|      +++ + +     +  x  x  x  x  x  x                                   |
|      +++ + ++  + +  *x x  x  x  x  x                                   |
|      +++ + ++  + *  *x x  *  x  x  x                                   |
|    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
|    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
|   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
|x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
|             |__________MA___________|                                  |
|      |______M__A________|                                              |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 118            91           186           124     125.28814     16.279137
+ 120            92           187           109     112.00833     13.458617
Difference at 95.0% confidence
	-13.2798 +/- 3.79219
	-10.5994% +/- 3.02677%
	(Student's t, pooled s = 14.9237)

However the mean latency is adversely affected:

Mean latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 1)
  x Always using tasklets
  + Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
|           xxxxxx                                        +   ++         |
|           xxxxxx                                        +   ++         |
|           xxxxxx                                      + +++ ++         |
|           xxxxxxx                                     +++++ ++         |
|           xxxxxxx                                     +++++ ++         |
|           xxxxxxx                                     +++++ +++        |
|           xxxxxxx                                   + ++++++++++       |
|           xxxxxxxx                                 ++ ++++++++++       |
|           xxxxxxxx                                 ++ ++++++++++       |
|          xxxxxxxxxx                                +++++++++++++++     |
|         xxxxxxxxxxx    x                           +++++++++++++++     |
|x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
|           |__A__|                                                      |
|                                                      |____A___|        |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120         3.506         3.727         3.631     3.6321417    0.02773109
+ 120         3.834         4.149         4.039     4.0375167   0.041221676
Difference at 95.0% confidence
	0.405375 +/- 0.00888913
	11.1608% +/- 0.244735%
	(Student's t, pooled s = 0.03513)

However, since the mean latency corresponds to the amount of irqsoff
processing we have to do for a CS interrupt, we only need to speed that
up to benefit not just system latency but our own throughput.

v2: Remember to defer submissions when under reset.
v4: Only use direct submission for new requests
v5: Be aware that with mixing direct tasklet evaluation and deferred
tasklets, we may end up idling before running the deferred tasklet.
v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
annotation to reset_in_progress().
v7: Take the full timeline.lock when enabling perf_pmu stats as the
tasklet is no longer a valid guard. A consequence is that the stats are
now only valid for engines also using the timeline.lock to process
state.

Testcase: igt/gem_exec_latency/*rthog*
References: 27af5eea54 ("drm/i915: Move execlists irq handler to a bottom half")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
2018-06-28 22:55:10 +01:00
Chris Wilson
fd8526e509 drm/i915/execlists: Trust the CSB
Now that we use the CSB stored in the CPU friendly HWSP, we do not need
to track interrupts for when the mmio CSB registers are valid and can
just check where we read up to last from the cached HWSP. This means we
can forgo the atomic bit tracking from interrupt, and in the next patch
it means we can check the CSB at any time.

v2: Change the splitting inside reset_prepare, we only want to lose
testing the interrupt in this patch, the next patch requires the change
in locking

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
2018-06-28 22:55:09 +01:00
Chris Wilson
3800cd1953 drm/i915/execlists: Stop storing the CSB read pointer in the mmio register
As we now never read back our current head position from the CSB
pointers register, and the HW itself doesn't use it to prevent
overwriting unread CSB entries, we do not need to keep updating the
register. As it turns out this register is not listed as being shadowed,
and so requires forcewake -- but we haven't been taking forcewake around
it so the writes have probably been regularly dropped. Fortuitously, we
only read the value after a reset where it did not matter, and zero was
the right answer (well, close enough).

Mika pointed out that this was how we used to do it (accidentally!)
before he fixed it in commit cc53699b25 ("drm/i915: Use masked write
for Context Status Buffer Pointer").

References: cc53699b25 ("drm/i915: Use masked write for Context Status Buffer Pointer")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
2018-06-28 22:55:08 +01:00
Chris Wilson
f4b58f0438 drm/i915/execlists: Reset CSB write pointer after reset
On HW reset, the HW clears the write pointer (to 0). But since it also
writes its first CSB entry to slot 0, we need to reset the write pointer
back to the element before (so the first entry we read is 0).

This is required for the next patch, where we trust the CSB completely!
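
For illustration, the index arithmetic this relies on, sketched with a hypothetical 6-entry buffer; the names and sizes are illustrative only:

    #include <stdio.h>

    #define CSB_ENTRIES 6   /* illustrative ring size */

    int main(void)
    {
            /* After reset the HW writes its first event into slot 0, so prime the
             * driver's last-read index to the slot before it; the first advance
             * then lands exactly on slot 0. */
            unsigned int head  = CSB_ENTRIES - 1;   /* reset value stored by the driver */
            unsigned int write = 0;                 /* slot of the last event the HW wrote */

            while (head != write) {
                    head = (head + 1) % CSB_ENTRIES;
                    printf("consume CSB slot %u\n", head);  /* prints slot 0 first */
            }
            return 0;
    }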

v2: Use _MASKED_FIELD
v3: Store the reset value, so that we differentiate between mmio/hwsp
transparently and without pretense.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
2018-06-28 22:55:07 +01:00
Chris Wilson
bc4237ec8d drm/i915/execlists: Unify CSB access pointers
Following the removal of the last workarounds, the only CSB mmio access
is for the old vGPU interface. The mmio registers presented by vGPU do
not require forcewake and can be treated as ordinary volatile memory,
i.e. they behave just like the HWSP access just at a different location.
We can reduce the CSB access to a set of read/write/buffer pointers and
treat the various paths identically and not worry about forcewake.
(Forcewake is nightmare for worstcase latency, and we want to process
this all with irqsoff -- no latency allowed!)
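
A compact sketch of the unified read/write/buffer-pointer view; the struct layout and event words below are made up for illustration and simply treat the buffer as ordinary memory, as described above:

    #include <stdint.h>
    #include <stdio.h>

    #define CSB_ENTRIES 6

    /* The consumer only keeps a buffer pointer plus head/write cursors; whether
     * the buffer lives in the HWSP page or in forcewake-free vGPU mmio space is
     * invisible at this level -- both read as plain memory. */
    struct csb {
            const uint64_t *buf;      /* event words (packed here as one word each) */
            const uint8_t  *write;    /* where HW last wrote, published with the buffer */
            uint8_t         head;     /* last entry we processed */
    };

    static void process_csb(struct csb *csb)
    {
            while (csb->head != *csb->write) {
                    csb->head = (csb->head + 1) % CSB_ENTRIES;
                    printf("event[%u] = 0x%llx\n", (unsigned int)csb->head,
                           (unsigned long long)csb->buf[csb->head]);
            }
    }

    int main(void)
    {
            uint64_t buffer[CSB_ENTRIES] = { 0x8002, 0x1 };  /* made-up event words */
            uint8_t write = 1;                                /* HW wrote slots 0 and 1 */
            struct csb csb = { .buf = buffer, .write = &write, .head = CSB_ENTRIES - 1 };

            process_csb(&csb);
            return 0;
    }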

v2: Comments, comments, comments. Well, 2 bonus comments.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
2018-06-28 22:55:06 +01:00
Chris Wilson
8ea397fa70 drm/i915/execlists: Process one CSB update at a time
In the next patch, we will process the CSB events directly from the
submission path, rather than only after a CS interrupt. Hence, we will
no longer have the need for a loop until the has-interrupt bit is clear,
and in the meantime can remove that small optimisation.

v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet
after each irq, when assuming that the tasklet is called for each irq.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
2018-06-28 22:55:04 +01:00
Chris Wilson
d8857d541c drm/i915/execlists: Pull CSB reset under the timeline.lock
In the following patch, we will process the CSB events under the
timeline.lock and not serialised by the tasklet. This also means that we
will need to protect access to common variables such as
execlists->csb_head with the timeline.lock during reset.

v2: Move sync_irq to avoid deadlocks between taking timeline.lock from
our interrupt handler.
v3: Kill off the synchronize_hardirq as it raises more questions than
answered; now we use the timeline.lock entirely for CSB serialisation
between the irq and elsewhere, we don't need to be so heavy handed with
flushing
v4: Treat request cancellation (wedging after failed reset) similarly

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-3-chris@chris-wilson.co.uk
2018-06-28 22:55:04 +01:00
Chris Wilson
0b02befa82 drm/i915/execlists: Pull submit after dequeue under timeline lock
In the next patch, we will begin processing the CSB from inside the
submission path (underneath an irqsoff section, and even from inside
interrupt handlers). This means that updating the execlists->port[] will
no longer be serialised by the tasklet but needs to be locked by the
engine->timeline.lock instead. Pull dequeue and submit under the same
lock for protection. (An alternate future plan is to keep the in/out
arrays separate for concurrent processing and reduced lock coverage.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-2-chris@chris-wilson.co.uk
2018-06-28 22:55:03 +01:00
Chris Wilson
74093f3ecc drm/i915: Drop posting reads to flush master interrupts
We do not need to do a posting read of our uncached mmio write to
re-enable the master interrupt lines after handling an interrupt, so
don't. This saves us a slow UC read before we can process the interrupt,
most noticeable in execlists where any stalls imposes extra latency on
GPU command execution.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjala <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-1-chris@chris-wilson.co.uk
2018-06-28 22:55:02 +01:00
Michal Wajdeczko
f7dc0157e4 drm/i915/uc: Fetch GuC/HuC firmwares from guc/huc specific init
We're fetching GuC/HuC firmwares directly from uc level during
init_early stage but this breaks guc/huc struct isolation and
also strict SW-only initialization rule for init_early. Move fw
fetching to init phase and do it separately per guc/huc struct.

v2: don't forget to move wopcm_init - Michele
v3: fetch in init_misc phase - Michal

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> #2
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-2-michal.wajdeczko@intel.com
2018-06-28 22:51:33 +01:00
Michal Wajdeczko
c39d2e7e35 drm/i915/guc: Use intel_guc_init_misc to hide GuC internals
We will add more init steps to misc phase and there is no need
to expose them separately for use in uc_init_misc function.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-1-michal.wajdeczko@intel.com
2018-06-28 22:51:32 +01:00
Chris Wilson
e3be4079ea drm/i915: Only signal from interrupt when requested
Avoid calling dma_fence_signal() from inside the interrupt if we haven't
enabled signaling on the request.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-4-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Chris Wilson
78796877c3 drm/i915: Move the irq_counter inside the spinlock
Rather than have multiple locked instructions inside the notify_ring()
irq handler, move them inside the spinlock and reduce their intrinsic
locking.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-3-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Chris Wilson
69dc4d003e drm/i915: Only trigger missed-seqno checking next to boundary
If we have more interrupts pending (because we know there are more
breadcrumb signals before the completion), then we do not need to
trigger an irq_seqno_barrier or even wake up the task on this interrupt
as there will be another. To allow some margin of error (we are trying
to work around incoherent seqno after all), we wake up the breadcrumb
before the target as well as on the target.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-2-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Chris Wilson
3f88325c2e drm/i915: Reduce spinlock hold time during notify_ring() interrupt
By taking advantage of the RCU protection of the task struct, we can find
the appropriate signaler under the spinlock and then release the spinlock
before waking the task and signaling the fence.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-1-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Chris Wilson
10195b1e44 drm/i915: Show vma allocator stack when in doubt
At the moment, gem_exec_gttfill fails with a sporadic EBUSY due to us
wanting to unbind a pinned batch. Let's dump who first bound that vma to
see if that helps us identify who still unexpectedly has it pinned.

v2: We cannot allocate inside the printer (as it may be on an fs-reclaim
path), so hope for the best and build the string on the stack
v3: stack depth of 16 routinely overflows a 512 character string, limit
it to 12 to avoid unsightly truncation.
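
A small sketch of the build-on-the-stack approach with the 512-byte buffer and 12-frame cap mentioned above; the captured addresses and output format are invented for illustration:

    #include <stdio.h>

    #define MAX_DEPTH 12   /* 16 frames routinely overflowed 512 chars, so cap at 12 */

    int main(void)
    {
            /* Pretend these are return addresses captured when the vma was first bound. */
            unsigned long entries[] = { 0xa0101234UL, 0xa0105678UL, 0xa010abcdUL };
            unsigned int nr_entries = 3;

            char buf[512] = "";    /* built on the stack: no allocation allowed in the printer */
            int len = 0;
            unsigned int i;

            for (i = 0; i < nr_entries && i < MAX_DEPTH && len < (int)sizeof(buf); i++)
                    len += snprintf(buf + len, sizeof(buf) - len, " [<%08lx>]", entries[i]);

            printf("vma first bound by:%s\n", buf);
            return 0;
    }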

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628132206.8329-1-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Thomas Zimmermann
a24362ead9 drm/i915: Replace drm_dev_unref with drm_dev_put
This patch unifies the naming of DRM functions for reference counting
of struct drm_device. The resulting code is more aligned with the rest
of the Linux kernel interfaces.

Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-6-tdz@users.sourceforge.net
2018-06-28 19:09:46 +02:00
Thomas Zimmermann
01159b47a4 drm/i915: Replace drm_gem_object_unreference_unlocked with put function
This patch unifies the naming of DRM functions for reference counting
of struct drm_gem_object. The resulting code is more aligned with the
rest of the Linux kernel interfaces.

Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-5-tdz@users.sourceforge.net
2018-06-28 19:09:39 +02:00
Thomas Zimmermann
55f95c2723 drm/i915: Replace __drm_gem_object_unreference with __drm_gem_object_put
This patch unifies the naming of DRM functions for reference counting
of struct drm_gem_object. The resulting code is more aligned with the
rest of the Linux kernel interfaces.

Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-4-tdz@users.sourceforge.net
2018-06-28 19:09:34 +02:00
Thomas Zimmermann
0f67706e66 drm/i915: Replace drm_gem_object_{un/reference} with {put,get} functions
This patch unifies the naming of DRM functions for reference counting
of struct drm_gem_object. The resulting code is more aligned with the
rest of the Linux kernel interfaces.

Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-3-tdz@users.sourceforge.net
2018-06-28 19:09:28 +02:00
Thomas Zimmermann
ef196b5c2f drm/i915: Replace drm_connector_{un/reference} with put,get functions
This patch unifies the naming of DRM functions for reference counting
of struct drm_connector. The resulting code is more aligned with the
rest of the Linux kernel interfaces.

Signed-off-by: Thomas Zimmermann <tdz@users.sourceforge.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618110154.30462-2-tdz@users.sourceforge.net
2018-06-28 19:09:21 +02:00
Anusha Srivatsa
3160422251 drm/i915/icp: Add Interrupt Support
This patch addresses interrupts from the south display engine (SDE).

ICP has two registers - SHOTPLUG_CTL_DDI and SHOTPLUG_CTL_TC.
Introduce these registers and their intended values.

Introduce icp_irq_handler().

The icp_irq_postinstall() takes care of enabling all PCH interrupt
sources, to unmask them as needed with SDEIMR, as is done by
ibx_irq_pre_postinstall() for earlier platforms.
We do not need to explicitly call the ibx_irq_pre_postinstall().

Also, while changing these, apply s/CPT/PPT/ in the CPT-CNP comment.

v2:
- remove redundant register defines.(Lucas)
- Change register names to be more consistent with
previous platforms (Lucas)

v3:
-Reorder bit defines to a more appropriate location.
 Change the comments. Confirm in the commit message that
 icp_irq_postinstall() need not go to
 ibx_irq_pre_postinstall() and ibx_irq_postinstall()
 as in earlier platforms. (Paulo)

Cc: Lucas De Marchi <lucas.de.marchi@gmail.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
[Paulo: coding style bikesheds and rebases].
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530046343-30649-1-git-send-email-anusha.srivatsa@intel.com
2018-06-27 14:28:51 -07:00
Chris Wilson
a61b47f672 drm/i915: Wait for engines to idle before retiring
In the next^W forthcoming patch, we will start to defer retiring the
request from the engine list if it is still active on the submission
backend. To preserve the semantics that after wait-for-idle completes
the system is idle and fully retired, we therefore need to wait for the
backends to idle before calling i915_retire_requests().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627115334.16282-1-chris@chris-wilson.co.uk
2018-06-27 19:00:22 +01:00
Imre Deak
67ca07e7ac drm/i915/icl: Add power well support
Add the definition for ICL power wells and their mapping to power
domains. On ICL there are 3 power well control registers, we'll select
the correct one based on higher bits of the power well ID. The offset
for the control and status flags within this register is based on the
lower bits of the ID as on older platforms.
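
A sketch of the ID-splitting scheme described above; the field width, helper names and bit layout are hypothetical, not the driver's macros:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical ID layout: high bits select one of the three power well
     * control registers, low bits give the request/status bit pair inside it. */
    #define PW_CTL_IDX_BITS 4
    #define PW_CTL_IDX_MASK ((1u << PW_CTL_IDX_BITS) - 1)

    static unsigned int pw_reg_index(uint32_t id)  { return id >> PW_CTL_IDX_BITS; }
    static unsigned int pw_bit_offset(uint32_t id) { return (id & PW_CTL_IDX_MASK) * 2; }

    int main(void)
    {
            uint32_t id = (2u << PW_CTL_IDX_BITS) | 5;   /* made-up well: register 2, slot 5 */

            printf("control register #%u, request bit %u, status bit %u\n",
                   pw_reg_index(id), pw_bit_offset(id) + 1, pw_bit_offset(id));
            return 0;
    }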

As the DC state programming is also the same as on old platforms we can
reuse the corresponding helpers. For this we mark here the DC-off power
well as shared among multiple platforms.

Other than the above the delta between old platforms and ICL:
- Pipe C has its own power well, so we can save some additional power in the
  pipe A+B and (non-eDP) pipe A configurations.
- Power wells for port E/F DDI/AUX IO and Thunderbolt 1-4 AUX IO

v2:
- Rebase on drm-tip after prep patch for this was merged there as
  requested by Paulo.
- Actually add the new AUX and DDI power well control regs (Rakshmi)

v3:
- Fix power well register names in code comments
- Add TBT AUX->power well 3 dependency

v4:
- Rebase

v5:
- Detach AUX power wells from the INIT power domain. These power wells
  can only be enabled in a TC/TBT connected state and otherwise not
  needed during driver initialization.

v6:
- Use _MMIO_PORT(...) instead _MMIO(_PICK(...)) (Paulo)
  Fix checkpatch warnings.

Cc: Animesh Manna <animesh.manna@intel.com>
Cc: Rakshmi Bhatia <rakshmi.bhatia@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Animesh Manna <animesh.manna@intel.com> (v1)
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626142232.22361-1-imre.deak@intel.com
2018-06-27 16:01:19 +03:00
José Roberto de Souza
00c8f19463 drm/i915/psr: Enable CRC check in the static frame on the sink side
Sink can be configured to calculate the CRC over the static frame and
compare it with the CRC calculated and transmitted in the VSC SDP by the
source. If there is a mismatch, the sink will do a short pulse on HPD
and set DP_PSR_LINK_CRC_ERROR in DP_PSR_ERROR_STATUS.
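
A bare-bones sketch of the resulting short-pulse handling; the DPCD address and bit value are meant to mirror the published eDP DPCD definitions, while the accessors and the disable step are fake stand-ins rather than driver code:

    #include <stdint.h>
    #include <stdio.h>

    #define DP_PSR_ERROR_STATUS    0x2006       /* DPCD error status register */
    #define DP_PSR_LINK_CRC_ERROR  (1 << 0)     /* CRC mismatch on the static frame */

    /* Fake DPCD accessors; a real driver would go through the AUX channel. */
    static uint8_t dpcd_readb(uint32_t addr)           { (void)addr; return DP_PSR_LINK_CRC_ERROR; }
    static void dpcd_writeb(uint32_t addr, uint8_t v)   { (void)addr; (void)v; }

    int main(void)
    {
            uint8_t err = dpcd_readb(DP_PSR_ERROR_STATUS);

            if (err & DP_PSR_LINK_CRC_ERROR) {
                    printf("sink reported static frame CRC mismatch, disabling PSR\n");
                    dpcd_writeb(DP_PSR_ERROR_STATUS, err);   /* ack the sticky error bits */
            }
            return 0;
    }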

Spec: 7723

v6:
Handling DP_PSR_LINK_CRC_ERROR here and remove "bdw+" from the commit
message

v4:
patch moved to after 'drm/i915/psr: Avoid PSR exit max time timeout'
to avoid touch in 2 patches EDP_PSR_DEBUG.

v3:
disabling PSR instead of exiting on error

Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626201644.21932-5-jose.souza@intel.com
2018-06-26 17:15:55 -07:00
José Roberto de Souza
3ebe3df50b drm/i915/psr: Avoid PSR exit max time timeout
Specification requires that max time should be masked from bdw and
forward, but it can also be safely enabled on hsw.
This will make PSR exits more deterministic and happen only when really
needed. If this was used to fix an issue in some panel that can
only self-refresh for a few seconds, that panel will interrupt
and assert one of the PSR errors handled in:
'drm/i915/psr: Handle PSR RFB storage error' and
'drm/i915/psr: Begin to handle PSR/PSR2 errors set by sink'

Spec: 21664

v4:
patch moved to before 'drm/i915/psr/bdw+: Enable CRC check in the
static frame on the sink side' to avoid touch in 2 patches
EDP_PSR_DEBUG.

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626201644.21932-4-jose.souza@intel.com
2018-06-26 17:15:00 -07:00
José Roberto de Souza
93bf76ed88 drm/i915/psr: Handle PSR errors
Sink will interrupt the source when it has any PSR error.
DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR is a PSR2 error, but it is already
handled here.
The only missing error to be handled is DP_PSR_LINK_CRC_ERROR, which
will be taken care of in a further patch.

v6:
not handling DP_PSR_LINK_CRC_ERROR here

v5:
handling all PSR errors here, so the commit message and
comment have changed

v3:
disabling PSR instead of exiting on error

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626201644.21932-3-jose.souza@intel.com
2018-06-26 17:13:05 -07:00
José Roberto de Souza
cc3054ff62 drm/i915/psr: Begin to handle PSR/PSR2 errors set by sink
eDP spec states that the sink device will do a short pulse on the HPD
line when there is a PSR/PSR2 error that needs to be handled by the
source; this handles the first and simplest error:
DP_PSR_SINK_INTERNAL_ERROR.

Here we take the safest approach and disable PSR (at least until
the next modeset), to avoid multiple rendering issues due to
bad panels.

v5:
added lockdep_assert in psr_disable and renamed psr_disable()
to intel_psr_disable_locked()

v4:
Using CAN_PSR instead of HAS_PSR in intel_psr_short_pulse

v3:
disabling PSR instead of exiting on error

Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626201644.21932-2-jose.souza@intel.com
2018-06-26 17:11:35 -07:00
José Roberto de Souza
42f53ffcad drm/i915/psr: Remove intel_crtc_state parameter from disable_source()
It was only used on VLV/CHV, so after the removal of PSR support
for those platforms it is no longer necessary.

v7: Rebased

Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626201644.21932-1-jose.souza@intel.com
2018-06-26 17:00:14 -07:00
Dhinakaran Pandiyan
bcc233b2aa drm/i915/psr: Warn for erroneous enabling of both PSR1 and PSR2.
Depending on whether PSR1 or PSR2 was configured, we print a warning if the
corresponding control mmio indicated PSR was erroneously enabled. As
Chris pointed out, it makes more sense to check both mmios,
since we expect neither PSR1 nor PSR2 to be enabled when psr_activate() is
called.

v2: Read PSR2 control register only on supported platforms (Rodrigo)

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626090522.17682-1-dhinakaran.pandiyan@intel.com
2018-06-26 11:45:07 -07:00
Dhinakaran Pandiyan
c12e0643a0 drm/i915/psr: Fix race in intel_psr_work()
Commit 5422b37c90 ("drm/i915/psr: Kill delays when activating psr
back.") switched from delayed work to the plain variant and while doing so
removed the check for work_busy() before scheduling a PSR activation.
This appears to cause consecutive executions of psr_activate() in this
scenario - after a worker picks up the PSR work item for execution and
before the work function can acquire the PSR mutex, a psr_flush() can
get hold of the mutex and schedule another PSR work. Without a psr_exit()
between the two psr_activate() calls, warning messages get printed.
Further, since we drop the mutex in the midst of psr_work() to wait for
PSR to idle, another work item can also get scheduled. Fix this by
returning if PSR was already active.
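
A minimal sketch of the resulting "bail out if already active" guard, using a plain mutex and flag in place of the psr lock and state; this is an illustration, not the driver code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t psr_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool psr_active;

    /* Stand-in for the work function: bail out if a previous work item
     * already activated PSR instead of activating a second time. */
    static void psr_work(void)
    {
            pthread_mutex_lock(&psr_lock);
            if (psr_active) {
                    pthread_mutex_unlock(&psr_lock);
                    return;                 /* nothing to do, avoids the double activate */
            }
            psr_active = true;              /* stands in for psr_activate() */
            printf("PSR activated\n");
            pthread_mutex_unlock(&psr_lock);
    }

    int main(void)
    {
            psr_work();     /* first queued work activates */
            psr_work();     /* racing second work returns early */
            return 0;
    }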

Fixes: 5422b37c90 ("drm/i915/psr: Kill delays when activating psr back.")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106948
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625054741.3919-1-dhinakaran.pandiyan@intel.com
2018-06-26 11:44:55 -07:00
Rodrigo Vivi
cf5d862db2 drm/i915/psr: Kill useless function pointers.
At some point we introduced the function pointers
in the PSR code to help with the VLV/CHV separation logic,
because those platforms had a different HW implementation of PSR.

Since all converged to HSW PSR and we dropped the
VLV/CHV support, let's also kill the useless function
pointers and leave the code cleaner.

Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180626052536.15137-1-rodrigo.vivi@intel.com
2018-06-26 10:07:24 -07:00
Imre Deak
525280552b drm/i915/ddi: Get AUX power domain for DP main link too
So far we got an AUX power domain reference only for the duration of DP
AUX transfers. However, the following suggests that we also need these
for main link functionality:
- The specification doesn't state whether it's needed or not for main
  link functionality, but suggests that these power wells need to be
  enabled already during display core initialization (Sequences to
  Initialize Display).
- For PSR we need to keep the AUX power well enabled.
- On ICL combo PHY ports (non-TC) the AUX power well is needed for
  link training too: while the port is enabled with a DP link training
  test pattern, trying to toggle the AUX power well will time out.
- On ICL MG PHY ports (TC) the AUX power well is needed also for main
  link functionality (both in DP and HDMI modes).
- Windows enables these power wells both for main and AUX lane
  functionality.

Based on the above take an AUX power reference for main link
functionality too. This makes a difference only on GEN10+ (GLK+)
platforms, where we have separate port specific AUX power wells.

For PSR we still need to distinguish between port A and the other
ports, since on port A DC states must stay enabled for main link
functionality, but DC states must be disabled for driver initiated
AUX transfers. So re-use the corresponding helper from intel_psr.c.

Since we take now a reference for main link functionality on all DP
ports we can forgo taking the separate power ref for PSR functionality.

v2:
- Make sure DC states stay enabled when taking the ref on port A.
  (Ville)

v3: (Ville)
- Fix comment about logic for encoders without a crtc state and
  add FIXME note for a simplification to avoid calling get_power_domains
  in such cases.
- Use intel_crtc_has_dp_encoder() instead !intel_crtc_has_type(HDMI).

Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
[Clarified code comments in intel_ddi_main_link_aux_domain() and
 intel_ddi_get_power_domains() (Imre)]
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621184449.26634-1-imre.deak@intel.com
2018-06-26 13:00:52 +03:00
Chris Wilson
efe79d48a7 drm/i915: Context objects can never be active when freed
Due to how we only release the pinning on the context state on
retirement and never track activity on the context vma itself, the
object can never be active at the point of release. Replace the
conditional transfer of ownership onto an active-reference with an
assert that the object is idle.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625100604.22598-2-chris@chris-wilson.co.uk
2018-06-25 16:28:23 +01:00
Chris Wilson
dd12c6ca5b drm/i915/execlists: Check for ce->state before destroy
As we may cancel the ce->state allocation during context pinning (but
crucially after we mark ce as operational), that means we may be asked
to destroy a nonexistent ce->state. Given the choice between handling a
complex error path on pinning and just ignoring the lack of state in
destroy, choose the latter for simplicity.

Reported-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625100604.22598-1-chris@chris-wilson.co.uk
2018-06-25 16:28:23 +01:00
Chris Wilson
8d52e44780 drm/i915: Defer modeset cleanup to a secondary task
If we avoid cleaning up the old state immediately in
intel_atomic_commit_tail() and defer it to a second task, we can avoid
taking heavily contended locks when the caller is ready to proceed.
Subsequent modesets will wait for the cleanup operation (either directly
via the ordered modeset wq or indirectly through the atomic helper)
which keeps the number of inflight cleanup tasks in check.

As an example, during reset an immediate modeset is performed to disable
the displays before the HW is reset, which must avoid struct_mutex to
avoid recursion. Moving the cleanup to a separate task, defers acquiring
the struct_mutex to after the GPU is running again, allowing it to
complete. Even in a few patches time (optimist!) when we no longer
require struct_mutex to unpin the framebuffers, it will still be good
practice to minimise the number of contention points along reset. The
mutex dependency still exists (as one modeset flushes the other), but in
the short term it resolves the deadlock for simple reset cases.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101600
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180623103951.23889-1-chris@chris-wilson.co.uk
Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2018-06-25 16:27:56 +01:00