Currently the presumption is that request construction and its
submission to the GuC both happen under the same hold of struct_mutex. We
wish to relax this and separate the request construction from the later
submission to the GuC. This requires us to reserve some space in the
GuC command queue for the future submission. For flexibility in handling
out-of-order request submission we do not preallocate the next slot in
the GuC command queue during request construction, only ensuring that
enough space will be available later.
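For illustration, reserving space amounts to accounting for one more entry
in the circular command queue without choosing the slot itself; the slot is
picked only at submission time. A minimal sketch, with hypothetical names
rather than the driver's actual GuC work queue API:

        #include <linux/errno.h>
        #include <linux/spinlock.h>
        #include <linux/types.h>

        /* Hypothetical sketch only: field and function names here are
         * illustrative, not the driver's actual GuC work queue code.
         */
        struct cmd_queue {
                spinlock_t lock;
                u32 size;       /* total number of entries in the queue */
                u32 used;       /* entries already consumed or reserved */
        };

        static int cmd_queue_reserve(struct cmd_queue *q)
        {
                int ret = 0;

                spin_lock(&q->lock);
                if (q->used < q->size)
                        q->used++;      /* space guaranteed for a later submit */
                else
                        ret = -EAGAIN;  /* caller must flush the queue and retry */
                spin_unlock(&q->lock);

                return ret;
        }

        static void cmd_queue_release(struct cmd_queue *q)
        {
                spin_lock(&q->lock);
                q->used--;              /* the submission consumed its reservation */
                spin_unlock(&q->lock);
        }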
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-17-chris@chris-wilson.co.uk
We are about to specialize object synchronisation to enable nonblocking
execbuf submission. First we make a copy of the current object
synchronisation for execbuffer. The general i915_gem_object_sync() will
be removed following the removal of CS flips in the near future.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: John Harrison <john.c.harrison@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-16-chris@chris-wilson.co.uk
Drive final request submission from a callback from the fence. This way
the request is queued until all dependencies are resolved, at which
point it is handed to the backend for queueing to hardware. At this
point, no dependencies are set on the request, so the callback is
immediate.
A side-effect of imposing a heavier, irqsafe spinlock for execlist
submission is that we lose the softirq enabling that would otherwise
follow scheduling the execlists tasklet. To compensate, we manually
kick-start the softirq by disabling and enabling the bh around the fence
signaling.
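A minimal sketch of that compensation, using today's dma_fence naming (the
helper name is illustrative): bracketing the signal with
local_bh_disable()/local_bh_enable() lets any tasklet scheduled during
signalling run immediately on the enable.

        #include <linux/bottom_half.h>
        #include <linux/dma-fence.h>

        /* Illustrative helper: signal a fence from a context where the
         * irqsafe submission lock suppressed the usual softirq-on-unlock
         * processing.
         */
        static void signal_fence_and_kick_softirq(struct dma_fence *fence)
        {
                local_bh_disable();
                dma_fence_signal(fence);
                local_bh_enable();      /* runs any tasklet scheduled while signalling */
        }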
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: John Harrison <john.c.harrison@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-14-chris@chris-wilson.co.uk
Update the reset path in preparation for engine reset, which requires
identification of incomplete requests and their associated contexts, and
fixing up their state so that the engine can resume correctly after reset.
The request that caused the hang is skipped and the head is reset to the
start of its breadcrumb. This allows us to resume from where we left off.
Since this request didn't complete normally we also need to clean up the elsp
queue manually. This is vital if we employ nonblocking request
submission, where we may have a web of dependencies upon the hung request
and so advancing the seqno manually is no longer trivial.
ABI: gem_reset_stats / DRM_IOCTL_I915_GET_RESET_STATS
We change the way we count pending batches. Only the active context
involved in the reset is marked as either innocent or guilty; we no
longer mark the entire world as pending. By inspection this only affects
igt/gem_reset_stats (which assumes implementation details) and not
piglit.
ARB_robustness gives this guide on how we expect the user of this
interface to behave:
* Provide a mechanism for an OpenGL application to learn about
graphics resets that affect the context. When a graphics reset
occurs, the OpenGL context becomes unusable and the application
must create a new context to continue operation. Detecting a
graphics reset happens through an inexpensive query.
And with regards to the actual meaning of the reset values:
Certain events can result in a reset of the GL context. Such a reset
causes all context state to be lost. Recovery from such events
requires recreation of all objects in the affected context. The
current status of the graphics reset state is returned by
enum GetGraphicsResetStatusARB();
The symbolic constant returned indicates if the GL context has been
in a reset state at any point since the last call to
GetGraphicsResetStatusARB. NO_ERROR indicates that the GL context
has not been in a reset state since the last call.
GUILTY_CONTEXT_RESET_ARB indicates that a reset has been detected
that is attributable to the current GL context.
INNOCENT_CONTEXT_RESET_ARB indicates a reset has been detected that
is not attributable to the current GL context.
UNKNOWN_CONTEXT_RESET_ARB indicates a detected graphics reset whose
cause is unknown.
The language here is explicit in that we must mark up the guilty batch,
but is loose enough for us to relax the innocent (i.e. pending)
accounting as only the active batches are involved with the reset.
In the future, we are looking towards single engine resetting (with
minimal locking), where it seems inappropriate to mark the entire world
as innocent since the reset occurred on a different engine. Reducing the
information available means we only have to encounter the pain once, and
also reduces the information leaking from one context to another.
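A hedged sketch of the accounting policy (struct and field names are
illustrative, not the driver's): only the context owning the batch active
on the hung engine is charged; nothing else is touched.

        #include <linux/types.h>

        /* Illustrative only: per-context reset accounting. */
        struct ctx_reset_stats {
                unsigned int guilty;    /* hangs attributed to this context */
                unsigned int innocent;  /* resets that caught this context's active batch */
        };

        static void mark_active_context(struct ctx_reset_stats *stats, bool guilty)
        {
                if (guilty)
                        stats->guilty++;        /* reported as GUILTY_CONTEXT_RESET */
                else
                        stats->innocent++;      /* reported as INNOCENT_CONTEXT_RESET */
        }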
v2: Legacy ringbuffer submission required a reset following hibernation,
or else we would restore stale values to the RING_HEAD and walk over
stolen garbage.
v3: GuC requires replaying the requests after a reset.
v4: Restore engine IRQ after reset (so waiters will be woken!)
Rearm hangcheck if resetting with a waiter.
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-13-chris@chris-wilson.co.uk
Since we have a cooperative mode now with a direct reset, we can avoid
the contention on struct_mutex and instead try, and then sleep on, the
I915_RESET_IN_PROGRESS bit. If the mutex is held and that bit is
cleared, all is fine. Otherwise, we sleep for a bit and try again. In
the worst case we sleep for an extra second waiting for the mutex to be
released (no one touching the GPU is allowed to hold struct_mutex whilst
the I915_RESET_IN_PROGRESS bit is set). But when we have a direct reset,
this allows us to clean up the reset worker faster.
v2: Remember to call wake_up_bit() after changing the bit (for the faster
wakeup as promised)
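The pattern is the stock wait_on_bit()/wake_up_bit() pairing; a hedged
sketch with an illustrative flag bit:

        #include <linux/bitops.h>
        #include <linux/sched.h>
        #include <linux/wait_bit.h>

        #define RESET_IN_PROGRESS_BIT   0       /* illustrative bit number */

        /* Waiter: sleep until the in-progress bit clears rather than
         * contending for struct_mutex; returns 0 when clear, -EINTR if
         * interrupted by a signal.
         */
        static int wait_for_reset(unsigned long *flags)
        {
                return wait_on_bit(flags, RESET_IN_PROGRESS_BIT, TASK_INTERRUPTIBLE);
        }

        /* Reset worker: clear the bit and immediately wake any sleepers. */
        static void mark_reset_complete(unsigned long *flags)
        {
                clear_bit_unlock(RESET_IN_PROGRESS_BIT, flags);
                wake_up_bit(flags, RESET_IN_PROGRESS_BIT);
        }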
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-12-chris@chris-wilson.co.uk
If a waiter is holding the struct_mutex, then the reset worker cannot
reset the GPU until the waiter returns. We do not want to return -EAGAIN
from i915_wait_request as that breaks delicate operations like
i915_vma_unbind(), which often cannot be restarted easily, and returning
-EIO is just as useless (and has in the past proven dangerous). The
remaining WARN_ON(i915_wait_request) serves as a valuable reminder that
handling errors from an indefinite wait is tricky.
We can keep the current semantic, that once a reset is complete so is the
request, by performing the reset ourselves if we hold the mutex.
uevent emission is still handled by the reset worker, so it may appear
slightly out of order with respect to the actual reset (and concurrent
use of the device).
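The control flow, as a hedged sketch (every helper name here is
hypothetical): a waiter already holding struct_mutex performs the reset
itself, since the worker cannot take the mutex from under it, while
unlocked waiters simply keep waiting for the worker.

        #include <linux/types.h>

        struct gpu_request;

        /* Hypothetical helpers standing in for the driver's own routines. */
        bool gpu_request_completed(struct gpu_request *rq);
        bool gpu_reset_pending(struct gpu_request *rq);
        void gpu_reset(struct gpu_request *rq);
        void wait_for_request_event(struct gpu_request *rq);

        /* Illustrative control flow only; the real entry points differ. */
        static int wait_for_gpu_request(struct gpu_request *rq, bool holds_mutex)
        {
                for (;;) {
                        if (gpu_request_completed(rq))
                                return 0;

                        if (gpu_reset_pending(rq) && holds_mutex) {
                                /* The worker cannot take struct_mutex while we
                                 * hold it, so reset directly; afterwards the
                                 * hung request is marked complete, preserving
                                 * the semantic that once the reset is done,
                                 * so is the request.
                                 */
                                gpu_reset(rq);
                                continue;
                        }

                        wait_for_request_event(rq);     /* sleep until woken */
                }
        }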
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-11-chris@chris-wilson.co.uk
In the next patch we want to handle reset directly by a locked waiter in
order to avoid issues with returning before the reset is handled. To
handle the reset, we must first know whether we hold the struct_mutex.
If we do not hold the struct_mutex we cannot perform the reset, but we do
not block the reset worker either (and so we can just continue to wait for
request completion) - otherwise we must relinquish the mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-10-chris@chris-wilson.co.uk
Access to intel_init_emon() is strictly ordered by gt_powersave, so using
struct_mutex around it is overkill (and will conflict with callers
holding struct_mutex themselves).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-8-chris@chris-wilson.co.uk
In preparation for introducing a per-engine reset, we can first separate
the mixing of the reset state from the global reset counter.
The loss of atomicity in updating the reset state poses a small problem
for handling the waiters. For requests, this is solved by advancing the
seqno so that a waiter waking up after the reset knows the request is
complete. For pending flips, we still rely on the increment of the
global reset epoch (as well as the reset-in-progress flag) to signify
when the hardware was reset.
The advantage, now that we do not inspect the reset state during the reset
itself (i.e. we no longer emit requests during reset), is that we can use
the atomic updates of the state flags to ensure that only one reset
worker is active.
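The single-active-worker guarantee comes from the usual atomic
test-and-set pattern; a minimal sketch with an illustrative flag name:

        #include <linux/bitops.h>
        #include <linux/types.h>

        #define RESET_IN_PROGRESS       0       /* illustrative bit number */

        /* Only the caller that wins the test_and_set becomes the reset
         * worker; everyone else observes the bit already set and backs off.
         */
        static bool try_become_reset_worker(unsigned long *error_flags)
        {
                return !test_and_set_bit(RESET_IN_PROGRESS, error_flags);
        }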
v2: Mika spotted that I transformed the i915_gem_wait_for_error() wakeup
into a waiter wakeup.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1470414607-32453-6-git-send-email-arun.siluvery@linux.intel.com
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-7-chris@chris-wilson.co.uk
Emulate the HW to track and manage the ELSP queue. A set of SW ports is
defined and requests are assigned to these ports before being submitted to
HW. By decoupling the elsp queue management, this makes it easier to clean
up incomplete requests during reset recovery, especially after an engine
reset. This will become clearer in the next patch.
In the engine reset case we want to resume where we left off after skipping
the incomplete batch, which requires checking the elsp queue, removing
elements and, in some cases, fixing up the elsp_submitted counts. Instead of
directly manipulating the elsp queue from the reset path we can examine
these ports, fix up the ringbuffer pointers using the incomplete request,
and restart submission after the reset.
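A hedged sketch of the bookkeeping (two ports to mirror the hardware ELSP
depth; struct and field names are illustrative, not the driver's):

        #include <linux/string.h>

        #define ELSP_PORTS 2

        struct gpu_request;

        /* Illustrative software mirror of the two ELSP submission ports. */
        struct execlist_port {
                struct gpu_request *request;    /* request currently in this port */
                unsigned int count;             /* submissions outstanding (lite restore) */
        };

        struct engine_execlists {
                struct execlist_port port[ELSP_PORTS];
        };

        /* On reset we inspect port[] instead of a free-form queue: drop the
         * hung request, rewind the ring to it, then clear and resubmit the
         * remaining ports.
         */
        static void clear_ports(struct engine_execlists *el)
        {
                memset(el->port, 0, sizeof(el->port));
        }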
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1470414607-32453-3-git-send-email-arun.siluvery@linux.intel.com
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-6-chris@chris-wilson.co.uk
Similar to the issue with reading from the context status buffer (see
commit 26720ab97f, "drm/i915: Move CSB MMIO reads out of the execlists
lock"), we frequently write to the ELSP register (4 writes per interrupt)
and know we hold the required spinlock and forcewake throughout. We can
further reduce the cost of writing these registers beyond I915_WRITE_FW()
by precomputing the address of the ELSP register. We also note that the
subsequent read serves no purpose here, and are happy to see it go.
v2: Address I915_WRITE mistakes in changelog
   text    data   bss      dec     hex filename
1259784    4581   576  1264941  134d2d drivers/gpu/drm/i915/i915.ko (before)
1259720    4581   576  1264877  134ced drivers/gpu/drm/i915/i915.ko (after)
Saves 64 bytes of address recomputation.
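A hedged sketch of the resulting write path (the address is assumed to
have been computed once, e.g. at engine init; names are illustrative):
four raw dword writes to the same mapped register, with no trailing
posting read.

        #include <linux/io.h>
        #include <linux/types.h>

        /* Illustrative only: the caller is assumed to hold the submission
         * spinlock and forcewake, so raw (unchecked) writes are safe here.
         */
        static void write_elsp(void __iomem *elsp, const u32 desc[4])
        {
                int i;

                for (i = 0; i < 4; i++)
                        writel(desc[i], elsp);  /* same register, written four times */
                /* no posting read: nothing here depends on the write landing */
        }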
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-4-chris@chris-wilson.co.uk
Leave the more complicated request dequeueing to the tasklet and instead
just kick start the tasklet if we detect we are adding the first
request.
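The shape of the change as a hedged sketch (names illustrative): under the
engine's submission lock, only an empty queue implies the tasklet may be
idle and in need of a kick.

        #include <linux/interrupt.h>
        #include <linux/list.h>
        #include <linux/types.h>

        /* Illustrative: caller holds the engine's submission lock. */
        static void queue_request(struct list_head *queue,
                                  struct list_head *rq_link,
                                  struct tasklet_struct *tasklet)
        {
                bool first = list_empty(queue);

                list_add_tail(rq_link, queue);
                if (first)
                        tasklet_hi_schedule(tasklet);   /* dequeueing stays in the tasklet */
        }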
v2: Play around with list operators until we agree upon something
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-2-chris@chris-wilson.co.uk
This is really a core kernel struct in disguise until we can finally
place it in kernel/. There is an immediate need for a fence collection
mechanism that is more flexible than fence-array, in particular being
able to easily drive request submission via events (and not just
interrupt driven). The same mechanism would be useful for handling
nonblocking and asynchronous atomic modesets, parallel execution and
more, but for the time being just create a local sw fence for execbuf.
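To make the intent concrete, here is a minimal, hedged sketch of such a
software fence (not the i915_sw_fence API itself): a pending count that,
on reaching zero, invokes a callback able to kick submission from any
context, including an event callback.

        #include <linux/atomic.h>

        /* Illustrative software fence: counts unresolved dependencies and
         * runs a callback when the last one completes.
         */
        struct sw_fence_sketch {
                atomic_t pending;       /* outstanding dependencies + 1 for the owner */
                void (*complete)(struct sw_fence_sketch *fence);
        };

        static void sw_fence_init(struct sw_fence_sketch *fence,
                                  void (*complete)(struct sw_fence_sketch *))
        {
                atomic_set(&fence->pending, 1); /* the owner's +1, dropped at commit */
                fence->complete = complete;
        }

        static void sw_fence_await(struct sw_fence_sketch *fence)
        {
                atomic_inc(&fence->pending);    /* one more event to wait for */
        }

        static void sw_fence_complete(struct sw_fence_sketch *fence)
        {
                if (atomic_dec_and_test(&fence->pending))
                        fence->complete(fence); /* e.g. hand the request to the backend */
        }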
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160909131201.16673-1-chris@chris-wilson.co.uk
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
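As a hedged illustration of the shape of the change (field and macro names
are simplified, not the exact driver definitions): the feature becomes a
flag in the per-platform info struct, and a HAS_*() style check reads that
flag instead of open-coding platform checks.

        #include <linux/types.h>

        /* Illustrative per-platform capability struct with feature flags. */
        struct platform_info_sketch {
                u8 gen;
                unsigned int has_hotplug:1;
                unsigned int has_llc:1;
        };

        static const struct platform_info_sketch example_info = {
                .gen = 6,
                .has_hotplug = 1,
                .has_llc = 1,
        };

        #define HAS_LLC(info)   ((info)->has_llc)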
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Make .hws_needs_physical the exception by setting the flag on the earlier
platforms, since there are fewer of them to support. Remove the flag from
later GPUs since they all use a GTT hws by default.
Switch the logic in the driver as well to reflect this change.
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Introducing a GEN2_FEATURES macro to simplify the struct definitions by
platforms given that most of the features are common. Inspired by the
GEN7_FEATURES macro done by Ben W. and others.
Use it for 830, 845g, i85x, i865g.
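A hedged sketch of the macro pattern (fields simplified, not the exact
driver struct): the shared GEN2 defaults expand inside each platform's
designated initializer, which then lists only its own deltas.

        #include <linux/types.h>

        struct gen_info_sketch {
                u8 gen;
                unsigned int is_i845g:1;
                unsigned int hws_needs_physical:1;
        };

        /* Common GEN2 defaults, shared by the 830/845g/i85x/i865g entries. */
        #define GEN2_FEATURES \
                .gen = 2, \
                .hws_needs_physical = 1

        static const struct gen_info_sketch i845g_info_sketch = {
                GEN2_FEATURES,
                .is_i845g = 1,          /* only the per-platform deltas remain */
        };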
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Introducing a GEN3_FEATURES macro to simplify the struct definitions by
platforms given that most of the features are common. Inspired by the
GEN7_FEATURES macro done by Ben W. and others.
Use it for i915g, i915gm, i945g, i945gm, g33 and pnv.
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Introducing a GEN4_FEATURES macro to simplify the struct
definitions by platforms given that most of the features are common.
Inspired by the GEN7_FEATURES macro done by Ben W. and others.
Use it for i965g, i965gm, g45 and gm45.
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Introducing a GEN5_FEATURES macro to simplify the struct
definitions by platforms given that most of the features are common.
Inspired by the GEN7_FEATURES macro done by Ben W. and others.
Use it for ilk.
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
No need for HAS_CORE_RING_FREQ as that flag is actually the same as
.has_llc. Feedback from V. Syrjala.
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Remove runtime PM support for SNB as it breaks hotplug support.
Feedback from V. Syrjala.
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Introducing a GEN6_FEATURES macro to simplify the struct definitions by
platforms given that most of the features are common. Inspired by the
GEN7_FEATURES macro done by Ben W. and others.
Use it for snb.
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
As recommended by Ville Syrjala, remove the .is_mobile field from the
platform struct definition for vlv and hsw+ GPUs, as there is no need to
make the distinction on later hardware anymore. Keep it for older GPUs
as it is still needed for ilk-ivb.
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[patch series] Moving all GPU features to the platform struct definition
allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct definitions
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Split out the DisplayPort and HDMI pll setup code into separate
functions and refactor the DP code that calculates the pll
so that it doesn't depend on crtc state.
This will be used for acquiring port pll when doing
upfront link training.
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Split out the DisplayPort and HDMI pll setup code into separate
functions and refactor the DP code so that it does not directly depend on
crtc state, allowing the code to be used for upfront link training.
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Split the logic that calculates the pll dividers and dpll_hw_state out of
bxt_ddi_pll_select() into a new function that doesn't depend on crtc
state. This will be used for enabling the port pll when doing upfront
link training.
v2:
* Refactored code so that bxt_clk_div need not be exported (Durga)
v1:
* Rebased on top of intel_dpll_mgr.c (Durga)
* Initial version from Ander on top of intel_ddi.c
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Split intel_ddi_pre_enable() into encoder type specific versions that
don't depend on crtc_state. The necessary parameters are passed as
function arguments. This split will be necessary for implementing DP
upfront link training.
v3:
* Rebased onto latest kernel (Manasi)
v2:
* Rebased onto kernel v4.7 (Jim)
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
The value of ddi_pll_sel is derived from the selection of shared dpll,
so just calculate the final value when necessary.
v2: Actually remove it from crtc state and delete remaining usages. (CI)
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Decouple intel_dp_set_link_params() from struct intel_crtc_state. This
will be useful for implementing DP upfront link training.
v2:
* Rebased on atomic state changes (Manasi)
Reviewed-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
This reverts commit 237ed86c69.
Our current implementation of the live status check (repeat 9 times
with 10ms delays between each attempt as a workaround for
buggy displays) imposes a rather serious penalty, time-wise,
on intel_hdmi_detect(). Since we already skip live status
checks on platforms before gen 7, and since we seem to have
coped quite well before the live status check was introduced
for newer platforms too, the previous behaviour is probably
preferable, at least unless someone can point to a use-case
that the live status check improves (apart from "Bspec says so").
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
Fixes: 237ed86c69 ("drm/i915: Check live status before reading edid")
Fixes: f8d03ea005 ("drm/i915: increase the tries for HDMI hotplug live status checking")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97139
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94014
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org # v4.4+
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160817124748.31208-1-david.weinehall@linux.intel.com
i915 sometimes needs to disable planes in the middle of an atomic
commit, and then re-enable them later in the same commit. Because of
this, we can't make the assumption that the state of the plane actually
changed. Since the state of the plane hasn't actually changed, neither
have its watermarks. And if the watermarks haven't changed then we
haven't populated skl_results with anything, which means we'll end up
zeroing out a plane's watermarks in the middle of the atomic commit
without restoring them later.
Simple reproduction recipe:
- Get a SKL laptop, launch any kind of X session
- Get two extra monitors
- Keep hotplugging both displays (so that the display configuration
jumps from 1 active pipe to 3 active pipes and back)
- Eventually underrun
Changes since v1:
- Fix incorrect use of "it's"
Changes since v2:
- Add reproduction recipe
Signed-off-by: Lyude <cpaul@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: 62e0fb8801 ("drm/i915/skl: Update plane watermarks atomically during plane updates")
Signed-off-by: Lyude <cpaul@redhat.com>
Testcase: kms_plane
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1472488288-27280-1-git-send-email-cpaul@redhat.com
Cc: drm-intel-fixes@lists.freedesktop.org
We don't have safe 64-bit mmio writes as they are really split into
2x32-bit writes. This tearing is dangerous as the hardware *will*
operate on the intermediate value, requiring great care when assigning.
(See, for example, i965_write_fence_reg.) As such we don't currently use
them and strongly advise not to use them. Go one step further and remove
the 64-bit write vfuncs.
v2: Add some more details to the comment about why WRITE64 is absent,
and why you need to think twice before using READ64.
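To illustrate the hazard (register layout and offsets here are
illustrative, not i915's): a 64-bit value written as two 32-bit halves is
briefly visible to the hardware as a mixed old/new value, so the safe
sequence disables the entry first, writes the high dword, and only
re-arms it with the final low-dword write.

        #include <linux/io.h>
        #include <linux/kernel.h>

        /* Illustrative tear-free update of a 64-bit register via 32-bit
         * MMIO; the hardware may act on any intermediate value it observes.
         */
        static void write_reg64_untorn(void __iomem *reg_lo,
                                       void __iomem *reg_hi, u64 val)
        {
                writel(0, reg_lo);                      /* 1. invalidate the entry */
                readl(reg_lo);                          /*    posting read to flush */
                writel(upper_32_bits(val), reg_hi);     /* 2. new high dword */
                writel(lower_32_bits(val), reg_lo);     /* 3. low dword re-arms last */
        }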
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20160906144538.4204-1-chris@chris-wilson.co.uk