Commit Graph

753685 Commits

Author SHA1 Message Date
Chris Wilson
e16f4c36cb drm/i915/selftests: Skip making an object busy if the GPU is wedged
If the GPU is wedged, we cannot make the object busy as trying to
submit a request will generate -EIO. Skip to the end of the test.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706065332.15214-4-chris@chris-wilson.co.uk
2018-07-06 11:24:31 +01:00
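A minimal sketch of the early-skip shape described in the commit above; the wedged check and the busy helper are stand-ins for illustration, not the driver's real API.

    #include <errno.h>
    #include <stdbool.h>

    /* Stand-ins for the driver's wedged check and busy helper. */
    static bool gpu_is_wedged(void)    { return true; }  /* pretend the GPU is dead */
    static int  make_object_busy(void) { return -EIO; }  /* what a wedged GPU yields */

    static int busy_selftest_sketch(void)
    {
        int err = 0;

        /* A wedged GPU cannot accept requests, so skip to the end of the
         * test instead of reporting the inevitable -EIO as a failure.
         */
        if (gpu_is_wedged())
            goto out;

        err = make_object_busy();
    out:
        return err;
    }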
Chris Wilson
b5f6e53d4c drm/i915/selftests: Skip using the GPU if wedged
If the GPU is irrecoverably broken, we cannot use it to dirty memory
and check for cache coherency with the CPU. All we can do is simply skip
over the GPU subtests and focus on the CPU domains (WC, WB) cache
management.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107127
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706065332.15214-3-chris@chris-wilson.co.uk
2018-07-06 11:24:13 +01:00
Chris Wilson
e5d2435bfa drm/i915/selftests: Destroy partial tiling vma after use
As we keep VMA around until the object is destroyed, when testing
partial tiling we instantiate many, many VMA (as the object is huge
allowing for many different partial regions). Elsewhere we test our
handling of populating large objects with a full set of VMA and check
that we can retrieve them afterwards, but in this test we incur the cost of
flushing all VMA after every GTT write, dramatically slowing down the
test.

References: https://bugs.freedesktop.org/show_bug.cgi?id=107130
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706065332.15214-2-chris@chris-wilson.co.uk
2018-07-06 11:24:00 +01:00
Chris Wilson
1eca65d922 drm/i915: Squelch very verbose error logging
Having found the error causing the IGT test to fail, downgrade the
verbose logging so that we stop flooding the syslogs as we deliberately
provoke it many thousands of times during selftests.

References: 10195b1e44 ("drm/i915: Show vma allocator stack when in doubt")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706065332.15214-1-chris@chris-wilson.co.uk
2018-07-06 11:23:47 +01:00
Madhav Chauhan
d61d1b3bbb drm/i915/icl: Define AUX lane registers for Port A/B
This patch defines AUX lane registers for PORT_PCS_DW1,
PORT_TX_DW2, PORT_TX_DW4, PORT_TX_DW5 used during
DSI enabling.

v2: Review comments from Jani N:
    - Define _ICL_PORT_PCS_DW1_AUX_A for consistency
    - Three spaces for bitfield definition.

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530798591-2077-8-git-send-email-madhav.chauhan@intel.com
2018-07-06 12:14:16 +03:00
Madhav Chauhan
45f09f7adc drm/i915/icl: Power down unused DSI lanes
To save power, unused lanes should be powered
down using the bitfield of PORT_CL_DW10.

v2: Review comments from Jani N
    - Put default label next to case 4
    - Include the shifts in the macros

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530798591-2077-7-git-send-email-madhav.chauhan@intel.com
2018-07-06 12:14:16 +03:00
Madhav Chauhan
166869b390 drm/i915/icl: Define PORT_CL_DW_10 register
This register is used to power down individual lanes for
DDI/DSI ports. Bitfields to power up/down various
combinations of lanes are also added in this patch.

v2: Review comments from Jani N
    - Use override instead of "override" for bitfields
    - Define mask for override bitfield
    - Define PWR_DOWN_LN* macros shifted in place
v3: Correct PWR_DOWN_LN_MASK value (Jani N)

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530798591-2077-6-git-send-email-madhav.chauhan@intel.com
2018-07-06 12:14:16 +03:00
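The shape the v2/v3 feedback asks for can be sketched as below: values pre-shifted into place plus an explicit mask. The offset and bit positions are invented for illustration and are not the ICL values.

    /* Hypothetical offsets/bits, for illustration only. */
    #define SKETCH_PORT_CL_DW10(port)   (0x162000 + (port) * 0x100)
    #define   SKETCH_PWR_UP_ALL_LANES   (0x0 << 4)  /* shifted in place */
    #define   SKETCH_PWR_DOWN_LN_3      (0x8 << 4)
    #define   SKETCH_PWR_DOWN_LN_3_2    (0xc << 4)
    #define   SKETCH_PWR_DOWN_LN_3_2_1  (0xe << 4)
    #define   SKETCH_PWR_DOWN_LN_MASK   (0xf << 4)  /* clear before OR-ing a value */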
Madhav Chauhan
b1cb21a5f1 drm/i915/icl: Enable DSI IO power
This patch configures the mode of operation for DSI
and enables DDI IO power by configuring the power well.

v2: Use for_each_dsi_port() for power get (Jani N)

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530798591-2077-5-git-send-email-madhav.chauhan@intel.com
2018-07-06 12:14:15 +03:00
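A rough sketch of the per-port power get described in the commit above: walk the bits set in the DSI port mask and take an IO power reference for each. The types and helper below are placeholders, not the driver's API.

    #include <stdint.h>

    enum sketch_port { SKETCH_PORT_A, SKETCH_PORT_B, SKETCH_NUM_PORTS };

    /* Placeholder for the IO power-well get. */
    static void sketch_power_get_io(enum sketch_port port) { (void)port; }

    static void sketch_enable_dsi_io_power(uint32_t ports_mask)
    {
        /* Take an IO power reference for every port selected in the mask
         * before touching that port's DSI registers.
         */
        for (int port = 0; port < SKETCH_NUM_PORTS; port++)
            if (ports_mask & (1u << port))
                sketch_power_get_io((enum sketch_port)port);
    }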
Madhav Chauhan
21652f3b0d drm/i915/icl: Define DSI mode ctl register
This patch defines the DSI IO mode control register and its bits
used while enabling IO power for DSI.

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530798591-2077-4-git-send-email-madhav.chauhan@intel.com
2018-07-06 12:14:15 +03:00
Madhav Chauhan
fcfe0bdcb1 drm/i915/icl: Program DSI Escape clock Divider
Escape Clock is used for LP communication across the DSI
Link. To achieve the constant frequency of the escape clock
from the variable DPLL frequency output, a variable divider (M)
is needed. This patch programs the same.

v2: (Jani N) Don't end line with "(".

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1530798591-2077-3-git-send-email-madhav.chauhan@intel.com
2018-07-06 12:13:34 +03:00
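The divider's job can be illustrated with simple arithmetic: M scales a variable DPLL output down to a fixed escape-clock rate. The 20 MHz target and the example PLL frequencies below are assumptions for illustration, not values from the commit.

    #include <stdio.h>

    int main(void)
    {
        const double escape_clk_mhz = 20.0;              /* assumed target rate */
        const double dpll_mhz[] = { 538.0, 1344.0 };     /* example PLL outputs */

        for (unsigned int i = 0; i < sizeof(dpll_mhz) / sizeof(dpll_mhz[0]); i++)
            printf("DPLL %.0f MHz -> divider M ~ %.1f\n",
                   dpll_mhz[i], dpll_mhz[i] / escape_clk_mhz);
        return 0;
    }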
Jani Nikula
012bf847d1 drm/i915/dsi: update some of the platform based checks
Use the more customary order of latest platform first, and don't bother
with an if in the last branch.

Cc: Madhav Chauhan <madhav.chauhan@intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705132509.12881-3-jani.nikula@intel.com
2018-07-06 10:54:10 +03:00
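The preferred ordering reads roughly like the sketch below, with made-up platforms and return values: newest platform first, and the final branch carries no condition.

    enum sketch_gen { SKETCH_GEN_OLD, SKETCH_GEN_MID, SKETCH_GEN_NEW };

    static int sketch_max_lanes(enum sketch_gen gen)
    {
        if (gen == SKETCH_GEN_NEW)       /* latest platform checked first */
            return 4;
        else if (gen == SKETCH_GEN_MID)
            return 2;
        else                             /* no "if" on the last branch */
            return 1;
    }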
Jani Nikula
e518634b43 drm/i915/dsi: use vlv and bxt prefixes for the global DSI functions
Avoid confusion with the functions to be added for the new ICL or gen 11
DSI implementation by renaming the current DSI functions. While at it,
permute the words in the function names to make them all start with
"vlv_dsi" or "vlv_dsi_pll" etc.

Reduce the platform abstractions in the PLL file while at it, moving the
checks to vlv_dsi.c instead, where we typically already have the
necessary if ladders.

Leave the static functions as-is for now; they could be renamed later if
needed.

No functional changes.

v2: use "gen7" prefix.

v3: use "vlv" and "bxt" prefixes, reduce the abstractions.

References: https://patchwork.freedesktop.org/series/44823/
Cc: Madhav Chauhan <madhav.chauhan@intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705132509.12881-2-jani.nikula@intel.com
2018-07-06 10:54:05 +03:00
Jani Nikula
ca3589c118 drm/i915/dsi: rename the current DSI files based on first platform
Starting from ICL or gen 11 we have a new DSI block which requires
completely different programming from the current implementation. Having
them in the same file would be confusing. Rename the current DSI and DSI
PLL implementation files as vlv_dsi.c and vlv_dsi_pll.c.

No functional changes.

v2: use "gen7" prefix.

v3: use "vlv" prefix.

References: https://patchwork.freedesktop.org/series/44823/
Cc: Madhav Chauhan <madhav.chauhan@intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705132509.12881-1-jani.nikula@intel.com
2018-07-06 10:53:55 +03:00
Chris Wilson
c4e4f4545b drm/i915/selftests: Fail hangcheck testing if the GPU is wedged
If the GPU is irrecoverably wedged on startup, it means that it failed
on initialisation and we have already tried to reset it but failed. We
can ignore all further testing, as it is already dead. Failing early
prevents us from slowly failing in our endeavours later and timing out.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705150214.28316-1-chris@chris-wilson.co.uk
2018-07-06 07:39:30 +01:00
Chris Wilson
73d8e5fba5 drm/i915/selftests: Detect unknown swizzling correctly
i915_gem_detect_bit_6_swizzle() tries to hide unknown swizzling from
userspace (and ourselves) leaving us with the only clue inside
i915->quirks & QUIRK_PIN_SWIZZLED_PAGES. If we see this bit set, it
means that we really have no clue as to what the swizzle pattern is
being used in any one page and so cannot compute what the reference
value should be in our tiling selftests. We have to skip the test.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107133
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705171523.18462-1-chris@chris-wilson.co.uk
2018-07-05 20:53:01 +01:00
Ville Syrjälä
9757973f41 drm/i915: Remove pointless if-else from sdvo code
The return value is a bool so we can just return the result of
the bitwise AND. The compiler will take care of the rest.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180621174658.18823-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
2018-07-05 22:16:56 +03:00
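The simplification amounts to letting the implicit conversion to bool do the work; a generic sketch with a made-up bit:

    #include <stdbool.h>
    #include <stdint.h>

    #define SKETCH_ENABLE_BIT (1u << 31)   /* stand-in for the real enable bit */

    /* Instead of "if (val & BIT) return true; else return false;", return the
     * bitwise AND directly; the bool return type normalises it to true/false.
     */
    static bool sketch_is_enabled(uint32_t val)
    {
        return val & SKETCH_ENABLE_BIT;
    }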
Chris Wilson
bb9e8755a4 drm/i915/selftests: Fixup recursive MI_BB_START for gen3
There's no magic bit0 in MI_BB_START for gen3; it's the same dword length
parameter as elsewhere and needs to be zero.

v2: Same bug in both live_requests and live_hangcheck.

References: https://bugs.freedesktop.org/show_bug.cgi?id=107132
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705154756.5533-1-chris@chris-wilson.co.uk
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
2018-07-05 17:59:11 +01:00
Gustavo A. R. Silva
f0d759f038 drm/i915: Mark expected switch fall-throughs
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

Addresses-Coverity-ID: 141432
Addresses-Coverity-ID: 141433
Addresses-Coverity-ID: 141434
Addresses-Coverity-ID: 141435
Addresses-Coverity-ID: 141436
Addresses-Coverity-ID: 1357360
Addresses-Coverity-ID: 1357403
Addresses-Coverity-ID: 1357433
Addresses-Coverity-ID: 1392622
Addresses-Coverity-ID: 1415273
Addresses-Coverity-ID: 1435752
Addresses-Coverity-ID: 1441500
Addresses-Coverity-ID: 1454596
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628223541.GA17665@embeddedor.com
2018-07-05 16:40:51 +03:00
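The annotation style -Wimplicit-fallthrough expects looks like the sketch below: a comment on every deliberate fall through, so only the unannotated ones warn.

    static int sketch_case_weight(int sel)
    {
        int weight = 0;

        switch (sel) {
        case 2:
            weight += 4;
            /* fall through */
        case 1:
            weight += 2;
            /* fall through */
        default:
            weight += 1;
        }

        return weight;
    }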
Madhav Chauhan
27efd2566c drm/i915/icl: Define register for DSI PLL
This patch adds the new registers and corresponding bit definitions
which will be used for programming/enabling the DSI PLL.

v2: Review comments from Jani N
    - Fix spaces while defining ICL_ESC_CLK_DIV_MASK
    - Define shift and mask for bitfields.

Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530795727-28644-2-git-send-email-madhav.chauhan@intel.com
2018-07-05 16:27:56 +03:00
Chris Wilson
0f17d5dd21 drm/i915/selftests: Replace open-coded i915_address_space_init()
Use i915_address_space_init() rather than open-code it inside
mock_ppgtt() as we will forget to keep it in sync.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705065653.20449-3-chris@chris-wilson.co.uk
2018-07-05 11:19:24 +01:00
Chris Wilson
eae4c94453 drm/i915/selftests: Use full release for local ppgtt allocation
We can now use the full release mechanism (i915_ppgtt_put) for our local
ppgtt allocation in igt_ppgtt_alloc.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705065653.20449-2-chris@chris-wilson.co.uk
2018-07-05 11:19:23 +01:00
Chris Wilson
cef08fdc74 drm/i915: Remove defunct i915->vm_list
No longer used and can be removed. One less global that currently
demands struct_mutex protection.

References: e9e7dc4144 ("drm/i915/gtt: Make gen6 page directories evictable")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705065653.20449-1-chris@chris-wilson.co.uk
2018-07-05 11:19:22 +01:00
Chris Wilson
63fd659fb1 drm/i915/gtt: Pull global wc page stash under its own locking
Currently, the wc-stash used for providing flushed WC pages ready for
constructing the page directories is assumed to be protected by the
struct_mutex. However, we want to remove this global lock and so must
install a replacement global lock for accessing the global wc-stash (the
per-vm stash continues to be guarded by the vm).

We need to push ahead on this patch due to an oversight in hastily
removing the struct_mutex guard around the igt_ppgtt_alloc selftest. No
matter, it will prove very useful (i.e. will be required) in the near
future.

v2: Restore the onstack stash so that we can drop the vm->mutex in
future across the allocation.
v3: Restore the lost pagevec_init of the onstack allocation, and repaint
function names.
v4: Reorder init so that we don't try to use i915_address_space before
it is initialised.

Fixes: 1f6f00238a ("drm/i915/selftests: Drop struct_mutex around lowlevel pggtt allocation")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180704185518.4193-1-chris@chris-wilson.co.uk
2018-07-04 21:23:11 +01:00
Ville Syrjälä
16659bc53a drm/i915: Unmask and enable master error interrupt on gen2/3
For whatever reason we only unmask and enable the master error
interrupt on gen4. With the EIR handling fixed, let's do that
on gen2/3 as well.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180611200258.27121-4-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
2018-07-04 23:03:30 +03:00
Ville Syrjälä
78c357dd3f drm/i915: Fix pre-ILK error interrupt ack
Adjust the EIR clearing to cope with the edge triggered IIR
on i965/g4x. To guarantee an edge in the ISR master error bit
we temporarily mask everything in EMR. As some of the EIR bits
can't even be directly cleared we also borrow a trick from
i915_clear_error_registers() and permanently mask any bit that
remains high. No real thought given to how we might unmask them
again once the cause for the error has been clered. I suppose
on pre-g4x GPU reset will reinitialize EMR from scratch.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180611200258.27121-3-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
2018-07-04 23:03:30 +03:00
Ville Syrjälä
0ba7c51a6f drm/i915: Fix hotplug irq ack on i965/g4x
Just like with PIPESTAT, the edge triggered IIR on i965/g4x
also causes problems for hotplug interrupts. To make sure
we don't get the IIR port interrupt bit stuck low with the
ISR bit high we must force an edge in ISR. Unfortunately
we can't borrow the PIPESTAT trick and toggle the enable
bits in PORT_HOTPLUG_EN as that act itself generates hotplug
interrupts. Instead we just have to loop until we've cleared
PORT_HOTPLUG_STAT, or we just give up and WARN.

v2: Don't frob with PORT_HOTPLUG_EN

Cc: stable@vger.kernel.org
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180614175625.1615-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
2018-07-04 23:03:30 +03:00
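The "loop until cleared or give up and WARN" idea can be sketched with fake register accessors (writing a set bit back clears it); the retry bound and accessor names are placeholders.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t sketch_hotplug_stat = 0x3;   /* pretend two events are pending */

    static uint32_t sketch_read_stat(void)      { return sketch_hotplug_stat; }
    static void sketch_write_stat(uint32_t val) { sketch_hotplug_stat &= ~val; }

    static void sketch_ack_hotplug(void)
    {
        uint32_t stat;
        int tries = 8;   /* arbitrary bound for the sketch */

        /* Keep writing back whatever is set until the register reads zero
         * (forcing an edge in ISR), or give up and warn.
         */
        while ((stat = sketch_read_stat()) != 0 && tries--)
            sketch_write_stat(stat);

        if (stat)
            fprintf(stderr, "hotplug status stuck at %#x\n", (unsigned int)stat);
    }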
Chris Wilson
1f6f00238a drm/i915/selftests: Drop struct_mutex around lowlevel pggtt allocation
For a ppgtt that we are constructing, there is no struct_mutex
dependence so skip it. In the process, also ping the scheduler
frequently to try and avoid the NMI watchdog.

v2: gen6 requires struct_mutex to clean up (currently)

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=107094
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180703135331.12265-1-chris@chris-wilson.co.uk
2018-07-03 22:09:22 +01:00
Chris Wilson
38b7fb0b2a drm/i915/selftests: Release the struct_mutex to free the objects
live_gtt is a very slow test to run, simply because it tries to allocate
and use as much of the 48b address space as it possibly can, and in the process
process will try to own all of the system memory. This leads to resource
exhaustion and CPU starvation; the latter impacts us when the NMI
watchdog declares a task hung due to a mutex contention with ourselves.
This we can prevent by releasing the struct_mutex and forcing our
i915/rcu workers to run, and in particular flushing the freed object
worker that is the cause for concern.

References: https://bugs.freedesktop.org/show_bug.cgi?id=107094
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180703101829.7360-1-chris@chris-wilson.co.uk
2018-07-03 22:02:36 +01:00
Tarun Vyas
a608987970 drm/i915: Wait for PSR exit before checking for vblank evasion
The PIPEDSL freezes on PSR entry and if PSR hasn't fully exited, then
the pipe_update_start call schedules itself out to check back later.

On ChromeOS-4.4 kernel, which is fairly up-to-date w.r.t drm/i915 but
lags w.r.t core kernel code, hot plugging an external display triggers
tons of "potential atomic update errors" in the dmesg, on *pipe A*. A
closer analysis reveals that we try to read the scanline 3 times and
eventually timeout, b/c PSR hasn't exited fully leading to a PIPEDSL
stuck @ 1599. This issue is not seen on upstream kernels, b/c for *some*
reason we loop inside intel_pipe_update start for ~2+ msec which in this
case is more than enough to exit PSR fully, hence an *unstuck* PIPEDSL
counter, hence no error. On the other hand, the ChromeOS kernel spends
~1.1 msec looping inside intel_pipe_update_start and hence errors out
b/c the source is still in PSR.

Regardless, we should wait for PSR exit (if PSR is disabled, we incur
a ~1-2 usec penalty) before reading the PIPEDSL, b/c if we haven't
fully exited PSR, then checking for vblank evasion isn't actually
applicable.

v4: Comment explaining psr_wait after enabling VBL interrupts (DK)

v5: CAN_PSR() to handle platforms that don't support PSR.

v6: Handle local_irq_disable on early return (Chris)

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-2-tarun.vyas@intel.com
2018-07-02 10:56:33 -07:00
Tarun Vyas
c43dbcbbcc drm/i915/psr: Lockless version of psr_wait_for_idle
This is a lockless version of the existing psr_wait_for_idle().
We want to wait for PSR to idle out inside intel_pipe_update_start.
At the time of a pipe update, we should never race with any psr
enable or disable code, which is a part of crtc enable/disable.
The follow up patch will use this lockless wait inside pipe_update_
start to wait for PSR to idle out before checking for vblank evasion.
We need to keep the wait in pipe_update_start as short as it can be.
So, we can live and flourish w/o taking any psr locks at all.

Even if psr is never enabled, psr2_enabled will be false and this
function will wait for PSR1 to idle out, which should just return
immediately, so a very short (~1-2 usec) wait for cases where PSR
is disabled.

v2: Add comment to explain the 25msec timeout (DK)

v3: Rename psr_wait_for_idle to __psr_wait_for_idle_locked to avoid
    naming conflicts and propagate err (if any) to the caller (Chris)

v5: Form a series with the next patch

v7: Better explain the need for lockless wait and increase the max
    timeout to handle refresh rates < 60 Hz (Daniel Vetter)

v8: Rebase

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627200250.1515-1-tarun.vyas@intel.com
2018-07-02 10:52:39 -07:00
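A lock-free wait of this kind boils down to polling a status predicate against a deadline. The sketch below uses a stubbed status read and an arbitrary 50 ms budget, standing in for the driver's timeout that is sized so refresh rates below 60 Hz still have time to idle out.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    static bool sketch_psr_idle(void) { return true; }   /* stubbed status read */

    static int64_t sketch_now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
    }

    /* Poll without holding any locks, bailing out once the deadline passes. */
    static int sketch_wait_for_idle(void)
    {
        const int64_t deadline = sketch_now_ns() + 50 * 1000000LL;   /* ~50 ms */

        while (!sketch_psr_idle()) {
            if (sketch_now_ns() > deadline)
                return -ETIMEDOUT;
        }
        return 0;
    }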
Dhinakaran Pandiyan
abdd322f68 drm/i915: Remove unnecessary check for unsupported modifiers for NV12
There is already a check to allow only RGB8888 formats with CCS
modifiers.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628061854.6430-1-dhinakaran.pandiyan@intel.com
2018-07-02 10:37:38 -07:00
Vathsala Nagaraju
00b062967f drm/i915/psr: Add psr1 live status
Prints the live state of psr1, extending the existing
PSR2 live state function to cover psr1.

Tested on KBL with psr2 and psr1 panel.

v2: rebase
v3: DK
    Rename psr2_live_status to psr_source_status.
v4: DK
    Move EDP_PSR_STATUS_STATE_SHIFT below EDP_PSR_STATUS_STATE_MASK.
    Pass seq to psr_source_status, handle source status prints in
    psr_source_status.
v5: Fixed CI warning messages
v6:
    Remove extra space in the title before the colon.(DK)
    Rebase. (Jani)
v7: Use tabs for indenting the values.(Jani)
v8: Addressed dk's review comments.

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>

Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Vathsala Nagaraju <vathsala.nagaraju@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1530086910-15914-1-git-send-email-vathsala.nagaraju@intel.com
2018-07-02 10:36:20 -07:00
Chris Wilson
7e7367d3bc drm/i915: Try GGTT mmapping whole object as partial
If the whole object is already pinned by HW for use as scanout, we will
fail to move it to the mappable region and so must resort to using a
partial VMA covering the whole object.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104513
Fixes: aa136d9d72 ("drm/i915: Convert partial ggtt vma to full ggtt if it spans the entire object")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180630090509.469-1-chris@chris-wilson.co.uk
2018-07-02 17:36:09 +01:00
Jani Nikula
e67005e59a drm/i915: abstract and document register picking macros
Try to describe what the pick variants do, and which to prefer. No
functional changes.

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629102039.2435-1-jani.nikula@intel.com
2018-07-02 17:36:23 +03:00
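The two picking flavours being documented boil down to roughly the shapes below: evenly spaced register instances derive an address from a stride, while irregular ones index an explicit table. The macro names, offsets and strides here are illustrative approximations, not the definitions in i915_reg.h.

    #include <stdint.h>
    #include <stdio.h>

    #define SKETCH_PICK_EVEN(idx, a, b)  ((a) + (idx) * ((b) - (a)))
    #define SKETCH_PICK(idx, ...)        (((const uint32_t []){ __VA_ARGS__ })[(idx)])

    int main(void)
    {
        /* Hypothetical per-pipe register at 0x60000 with a 0x1000 stride. */
        printf("pipe 1: %#x\n", (unsigned int)SKETCH_PICK_EVEN(1, 0x60000u, 0x61000u));

        /* Irregularly spaced instances need the explicit table. */
        printf("instance 2: %#x\n",
               (unsigned int)SKETCH_PICK(2, 0x64000u, 0x64100u, 0x64300u));
        return 0;
    }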
Michal Wajdeczko
1ea29bbd47 drm/i915/guc: Print CTL params passed to Guc
While debugging we may want to examine params passed to GuC.

v2: drop #ifdef DEBUG_GUC - Michal

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> #1
Cc: Michal Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618111821.47088-1-michal.wajdeczko@intel.com
2018-06-29 23:34:17 +01:00
Chris Wilson
be01de596e drm/i915/selftests: Attach the fence to the object when making busy
make_obj_busy() creates a dummy busy object, but did not attach the fence
to the reservation object, so it would not have registered as busy. For
completeness, attach the dummy request as the exclusive fence and mark
the object as written (in i915_vma_move_to_active).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629133717.11761-2-chris@chris-wilson.co.uk
2018-06-29 21:07:39 +01:00
Chris Wilson
d78e2bbf48 drm/i915/selftests: Mark up write into scratch vma
We correctly attach the exclusive fence for the scratch object when
emitting a request that writes into it, but for completeness we should
also declare the write to i915_vma_move_to_active().

Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629133717.11761-1-chris@chris-wilson.co.uk
2018-06-29 20:52:46 +01:00
Maarten Lankhorst
4572095957 drm/i915: Remove delayed FBC activation.
The only time we should start FBC is when we have waited a vblank
after the atomic update. We've already forced a vblank wait by doing
wait_for_flip_done before intel_post_plane_update(), so we don't need
to wait a second time before enabling.

Removing the worker simplifies the code and removes possible race
conditions, like the one seen in bug 103167.

Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-2-maarten.lankhorst@linux.intel.com
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
2018-06-29 10:06:31 +02:00
Maarten Lankhorst
c9855a561a drm/i915: Block enabling FBC until flips have been completed
There is a small race window in which FBC can be enabled after
pre_plane_update is called, but before the page flip has been
queued or completed.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167
Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-1-maarten.lankhorst@linux.intel.com
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
2018-06-29 10:06:08 +02:00
Chris Wilson
9512f985c3 drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd)
Back in commit 27af5eea54 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

In this iteration of the tasklet-vs-direct submission debate, we seek a
compromise whereby we submit new requests immediately to the HW but
defer processing the CS interrupt onto a tasklet. We gain the advantage
of low-latency and ksoftirqd avoidance when waking up the HW, while
avoiding the system-wide starvation of our CS irq-storms.

Comparing the impact on the maximum latency observed (that is the time
stolen from an RT process) over a 120s interval, repeated several times
(using gem_syslatency, similar to RT's cyclictest) while the system is
fully laden with i915 nops, we see that direct submission can actually
improve the worst case.

Maximum latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 2)
  x Always using tasklets (a couple of >1000us outliers removed)
  + Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
|          +                                                             |
|          +                                                             |
|          +                                                             |
|          +       +                                                     |
|          + +     +                                                     |
|       +  + +     +  x     x     x                                      |
|      +++ + +     +  x  x  x  x  x  x                                   |
|      +++ + ++  + +  *x x  x  x  x  x                                   |
|      +++ + ++  + *  *x x  *  x  x  x                                   |
|    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
|    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
|   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
|x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
|             |__________MA___________|                                  |
|      |______M__A________|                                              |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 118            91           186           124     125.28814     16.279137
+ 120            92           187           109     112.00833     13.458617
Difference at 95.0% confidence
	-13.2798 +/- 3.79219
	-10.5994% +/- 3.02677%
	(Student's t, pooled s = 14.9237)

However the mean latency is adversely affected:

Mean latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 1)
  x Always using tasklets
  + Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
|           xxxxxx                                        +   ++         |
|           xxxxxx                                        +   ++         |
|           xxxxxx                                      + +++ ++         |
|           xxxxxxx                                     +++++ ++         |
|           xxxxxxx                                     +++++ ++         |
|           xxxxxxx                                     +++++ +++        |
|           xxxxxxx                                   + ++++++++++       |
|           xxxxxxxx                                 ++ ++++++++++       |
|           xxxxxxxx                                 ++ ++++++++++       |
|          xxxxxxxxxx                                +++++++++++++++     |
|         xxxxxxxxxxx    x                           +++++++++++++++     |
|x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
|           |__A__|                                                      |
|                                                      |____A___|        |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120         3.506         3.727         3.631     3.6321417    0.02773109
+ 120         3.834         4.149         4.039     4.0375167   0.041221676
Difference at 95.0% confidence
	0.405375 +/- 0.00888913
	11.1608% +/- 0.244735%
	(Student's t, pooled s = 0.03513)

However, since the mean latency corresponds to the amount of irqsoff
processing we have to do for a CS interrupt, we only need to speed that
up to benefit not just system latency but our own throughput.

v2: Remember to defer submissions when under reset.
v4: Only use direct submission for new requests
v5: Be aware that with mixing direct tasklet evaluation and deferred
tasklets, we may end up idling before running the deferred tasklet.
v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
annotation to reset_in_progress().
v7: Take the full timeline.lock when enabling perf_pmu stats as the
tasklet is no longer a valid guard. A consequence is that the stats are
now only valid for engines also using the timeline.lock to process
state.

Testcase: igt/gem_exec_latency/*rthog*
References: 27af5eea54 ("drm/i915: Move execlists irq handler to a bottom half")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
2018-06-28 22:55:10 +01:00
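The compromise described above, reduced to its skeleton: queue and submit new requests directly under the engine lock, but only if the tasklet is not already doing so, and leave CS-event processing to the deferred tasklet. Every type and helper below is a placeholder, not the execlists code.

    #include <stdbool.h>

    struct sketch_engine {
        bool tasklet_active;   /* placeholder for "tasklet already running" */
    };

    static void sketch_lock(struct sketch_engine *e)          { (void)e; }
    static void sketch_unlock(struct sketch_engine *e)        { (void)e; }
    static void sketch_queue_request(struct sketch_engine *e) { (void)e; }
    static void sketch_dequeue_to_hw(struct sketch_engine *e) { (void)e; }  /* ELSP write */

    static void sketch_submit_request(struct sketch_engine *engine)
    {
        sketch_lock(engine);
        sketch_queue_request(engine);
        /* Write to the HW directly only when the tasklet is not already
         * busy; CS interrupts are still processed from the tasklet later.
         */
        if (!engine->tasklet_active)
            sketch_dequeue_to_hw(engine);
        sketch_unlock(engine);
    }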
Chris Wilson
fd8526e509 drm/i915/execlists: Trust the CSB
Now that we use the CSB stored in the CPU friendly HWSP, we do not need
to track interrupts for when the mmio CSB registers are valid and can
just check where we last read up to in the cached HWSP. This means we
can forgo the atomic bit tracking from interrupt, and in the next patch
it means we can check the CSB at any time.

v2: Change the splitting inside reset_prepare, we only want to lose
testing the interrupt in this patch, the next patch requires the change
in locking

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
2018-06-28 22:55:09 +01:00
Chris Wilson
3800cd1953 drm/i915/execlists: Stop storing the CSB read pointer in the mmio register
As we now never read back our current head position from the CSB
pointers register, and the HW itself doesn't use it to prevent
overwriting unread CSB entries, we do not need to keep updating the
register. As it turns out this register is not listed as being shadowed,
and so requires forcewake -- but we haven't been taking forcewake around
it, so the writes have probably been regularly dropped. Fortuitously, we
only read the value after a reset where it did not matter, and zero was
the right answer (well, close enough).

Mika pointed out that this was how we used to do it (accidentally!)
before he fixed it in commit cc53699b25 ("drm/i915: Use masked write
for Context Status Buffer Pointer").

References: cc53699b25 ("drm/i915: Use masked write for Context Status Buffer Pointer")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
2018-06-28 22:55:08 +01:00
Chris Wilson
f4b58f0438 drm/i915/execlists: Reset CSB write pointer after reset
On HW reset, the HW clears the write pointer (to 0). But since it also
writes its first CSB entry to slot 0, we need to reset the write pointer
back to the element before (so the first entry we read is 0).

This is required for the next patch, where we trust the CSB completely!

v2: Use _MASKED_FIELD
v3: Store the reset value, so that we differentiate between mmio/hwsp
transparently and without pretense.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
2018-06-28 22:55:07 +01:00
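The pointer reset can be pictured as a ring-buffer index trick: if the HW will write its first post-reset event into slot 0, park the cached pointers on the last slot so the very next read advances to slot 0. The entry count below is illustrative; the real buffer size differs per generation.

    #define SKETCH_CSB_ENTRIES 6   /* illustrative ring size */

    struct sketch_csb {
        int head;   /* last entry we processed */
        int tail;   /* last entry the HW wrote */
    };

    static void sketch_reset_csb_pointers(struct sketch_csb *csb)
    {
        /* Next event lands in slot 0, so the reader starts one slot behind. */
        csb->head = SKETCH_CSB_ENTRIES - 1;
        csb->tail = SKETCH_CSB_ENTRIES - 1;
    }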
Chris Wilson
bc4237ec8d drm/i915/execlists: Unify CSB access pointers
Following the removal of the last workarounds, the only CSB mmio access
is for the old vGPU interface. The mmio registers presented by vGPU do
not require forcewake and can be treated as ordinary volatile memory,
i.e. they behave just like the HWSP access just at a different location.
We can reduce the CSB access to a set of read/write/buffer pointers and
treat the various paths identically and not worry about forcewake.
(Forcewake is nightmare for worstcase latency, and we want to process
this all with irqsoff -- no latency allowed!)

v2: Comments, comments, comments. Well, 2 bonus comments.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
2018-06-28 22:55:06 +01:00
Chris Wilson
8ea397fa70 drm/i915/execlists: Process one CSB update at a time
In the next patch, we will process the CSB events directly from the
submission path, rather than only after a CS interrupt. Hence, we will
no longer have the need for a loop until the has-interrupt bit is clear,
and in the meantime can remove that small optimisation.

v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet
after each irq, when assuming that the tasklet is called for each irq.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
2018-06-28 22:55:04 +01:00
Chris Wilson
d8857d541c drm/i915/execlists: Pull CSB reset under the timeline.lock
In the following patch, we will process the CSB events under the
timeline.lock and not serialised by the tasklet. This also means that we
will need to protect access to common variables such as
execlists->csb_head with the timeline.lock during reset.

v2: Move sync_irq to avoid deadlocks between taking timeline.lock from
our interrupt handler.
v3: Kill off the synchronize_hardirq as it raises more questions than
answered; now we use the timeline.lock entirely for CSB serialisation
between the irq and elsewhere, we don't need to be so heavy-handed with
flushing.
v4: Treat request cancellation (wedging after failed reset) similarly

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-3-chris@chris-wilson.co.uk
2018-06-28 22:55:04 +01:00
Chris Wilson
0b02befa82 drm/i915/execlists: Pull submit after dequeue under timeline lock
In the next patch, we will begin processing the CSB from inside the
submission path (underneath an irqsoff section, and even from inside
interrupt handlers). This means that updating the execlists->port[] will
no longer be serialised by the tasklet but needs to be locked by the
engine->timeline.lock instead. Pull dequeue and submit under the same
lock for protection. (An alternate future plan is to keep the in/out
arrays separate for concurrent processing and reduced lock coverage.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-2-chris@chris-wilson.co.uk
2018-06-28 22:55:03 +01:00
Chris Wilson
74093f3ecc drm/i915: Drop posting reads to flush master interrupts
We do not need to do a posting read of our uncached mmio write to
re-enable the master interrupt lines after handling an interrupt, so
don't. This saves us a slow UC read before we can process the interrupt,
most noticeable in execlists where any stalls imposes extra latency on
GPU command execution.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjala <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-1-chris@chris-wilson.co.uk
2018-06-28 22:55:02 +01:00
Michal Wajdeczko
f7dc0157e4 drm/i915/uc: Fetch GuC/HuC firmwares from guc/huc specific init
We're fetching GuC/HuC firmwares directly from the uc level during the
init_early stage, but this breaks guc/huc struct isolation and also the
strict SW-only initialization rule for init_early. Move fw fetching to
the init phase and do it separately per guc/huc struct.

v2: don't forget to move wopcm_init - Michele
v3: fetch in init_misc phase - Michal

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> #2
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-2-michal.wajdeczko@intel.com
2018-06-28 22:51:33 +01:00
Michal Wajdeczko
c39d2e7e35 drm/i915/guc: Use intel_guc_init_misc to hide GuC internals
We will add more init steps to the misc phase and there is no need
to expose them separately for use in the uc_init_misc function.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-1-michal.wajdeczko@intel.com
2018-06-28 22:51:32 +01:00