Now that we no longer need to guarantee that the active callback is
under the struct_mutex, we can lift it out of the i915_gem_park() and
into the engine parking itself.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-7-chris@chris-wilson.co.uk
Forgo the struct_mutex serialisation for i915_active, and interpose its
own mutex handling for active/retire.
This is a multi-layered sleight-of-hand. First, we had to ensure that no
active/retire callbacks accidentally inverted the mutex ordering rules,
nor assumed that they were themselves serialised by struct_mutex. More
challenging, though, are the rules for updating elements of the active
rbtree. Instead of the whole i915_active being serialised by
struct_mutex, allocations/rotations of the tree are serialised by the
i915_active.mutex and individual nodes are serialised by the caller
using the i915_timeline.mutex (we need to use nested spinlocks to
interact with the dma_fence callback lists).
The pain point here is that instead of a single mutex around execbuf, we
now have to take a mutex for each active tracker (one for each vma, context,
etc) and a couple of spinlocks for each fence update. The improvement in
fine grained locking allowing for multiple concurrent clients
(eventually!) should be worth it in typical loads.
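For illustration, a rough sketch of the intended lock nesting follows; the
structure and helper names are simplifications, not the exact driver code:

    struct active_node *node;   /* illustrative */
    unsigned long flags;

    /* outer: guards allocation/rotation of the active rbtree */
    mutex_lock(&ref->mutex);
    node = active_instance(ref, tl);        /* may allocate a new node */
    mutex_unlock(&ref->mutex);

    /* per-node: serialised by the caller via its timeline lock */
    lockdep_assert_held(&tl->mutex);

    /* innermost: nested spinlock for the dma_fence callback list */
    spin_lock_irqsave_nested(&fence->lock, flags, SINGLE_DEPTH_NESTING);
    /* ... attach or detach the active/retire callback ... */
    spin_unlock_irqrestore(&fence->lock, flags);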
v2: Add some comments that barely elucidate anything :(
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
As we need to use a mutex to serialise i915_active activation
(because we want to allow the callback to sleep), we need to push the
i915_active.retire into a worker callback in case we need to retire
from an atomic context.
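A minimal sketch of that shape, with the struct layout and helper names as
assumptions rather than the final code:

    static void active_retire_work(struct work_struct *wrk)
    {
            struct i915_active *ref = container_of(wrk, typeof(*ref), work);

            /* Process context: the retire callback is now allowed to sleep. */
            mutex_lock(&ref->mutex);
            __active_retire(ref);           /* hypothetical helper */
            mutex_unlock(&ref->mutex);
    }

    /* at init: INIT_WORK(&ref->work, active_retire_work); */
    /* from a possibly atomic completion path: */
    queue_work(system_wq, &ref->work);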
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-5-chris@chris-wilson.co.uk
Replace the struct_mutex requirement for pinning the i915_vma with the
local vm->mutex instead. Note that the vm->mutex is tainted by the
shrinker (we require unbinding from inside fs-reclaim) and so we cannot
allocate while holding that mutex. Instead we have to preallocate
workers to do the allocation and apply the PTE updates after we have
reserved their slot in the drm_mm (using fences to order the PTE writes
with the GPU work and with later unbind).
In adding the asynchronous vma binding, one subtle requirement is to
avoid coupling the binding fence into the backing object->resv. That is,
the asynchronous binding only applies to the vma timeline itself and not
to the pages as that is a more global timeline (the binding of one vma
does not need to be ordered with another vma, nor does the implicit GEM
fencing depend on a vma, only on writes to the backing store). Keeping
the vma binding distinct from the backing store timelines is verified by
a number of async gem_exec_fence and gem_exec_schedule tests. The way we
do this is quite simple, we keep the fence for the vma binding separate
and only wait on it as required, and never add it to the obj->resv
itself.
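As a sketch (the field name here is hypothetical), the bind fence lives on
the vma and is only waited upon where the ordering is genuinely needed:

    struct dma_fence *fence = vma->bind_fence;      /* hypothetical field */

    if (fence) {
            /* Wait only where the PTEs must be present; the fence is
             * never installed into obj->resv. */
            long err = dma_fence_wait(fence, true /* interruptible */);
            if (err)
                    return err;
    }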
Another consequence of reducing the locking around the vma is that
destruction of the vma is no longer globally serialised by struct_mutex.
A natural solution would be to add a kref to i915_vma, but that requires
decoupling the reference cycles, possibly by introducing a new
i915_mm_pages object that is owned by both obj->mm and vma->pages.
However, we have not taken that route due to the overshadowing lmem/ttm
discussions, and instead play a series of complicated games with
trylocks to (hopefully) ensure that only one destruction path is called!
v2: Add some commentary, and some helpers to reduce patch churn.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
Since we cannot allocate underneath the vm->mutex (it is used in the
direct-reclaim paths), we need to shift the allocations off into a
mutexless worker with fence recursion prevention. To know when we need
this protection, we mark up the address spaces that do allocate before
insertion. In the future, we may wish to extend the async bind scheme to
more than just allocations.
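Illustratively (bind_async_flags comes from this patch, the rest is a sketch
with hypothetical helpers), insertion checks the flag to decide whether the
preallocated worker must be used:

    if (vma->vm->bind_async_flags & bind_flags) {
            /* This vm allocates on insertion, which must not happen
             * under vm->mutex; defer to the preallocated worker and
             * let its fence order the PTE writes. */
            queue_work(system_unbound_wq, &work->base);     /* hypothetical worker */
    } else {
            bind_vma_sync(vma, bind_flags);                 /* hypothetical helper */
    }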
v2: s/vm->bind_alloc/vm->bind_async_flags/
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-3-chris@chris-wilson.co.uk
The premise here is simply to avoid having to acquire the vm->mutex
inside vma create/destroy to update the vm->unbound_lists, and so avoid
some nasty lock recursions later.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-2-chris@chris-wilson.co.uk
The L3 cache remapping is stored as u32 elements, and we should ensure
that the user only supplies complete slice information (u32).
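For example, a minimal sketch of such a check (the surrounding handler and
variable names are illustrative):

    /* Each remap entry is a u32; reject writes that end mid-element. */
    if (count % sizeof(u32))
            return -EINVAL;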
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004105958.1741-1-chris@chris-wilson.co.uk
On platforms with gen10+ display, the driver must set the enable bit of
AUDIO_PIN_BUF_CTL register before transactions with the HDA controller
can proceed. Add setting this bit to the audio power up sequence.
Failing to do this resulted in errors during display audio codec probe,
and failures during resume from suspend.
Note: We may also need to disable the bit afterwards, but there are
still unresolved issues with that.
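A sketch of the added power-up step; the register and bit names are
assumptions about the final code rather than a verbatim quote:

    /* Enable the pin buffers so transactions with the HDA controller
     * can proceed. */
    I915_WRITE(AUD_PIN_BUF_CTL,
               I915_READ(AUD_PIN_BUF_CTL) | AUD_PIN_BUF_ENABLE);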
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111214
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003085531.30990-1-kai.vehmanen@linux.intel.com
Add aux_busy_last_status to intel_dp. Don't bother with initializing to
all ones; the only difference is potentially missing logging for one
error case if the readout is all zeros.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002144138.7917-1-jani.nikula@intel.com
The Thunderbolt PLL divider values on TGL differ from the ICL ones,
update the PLL parameter calculation function accordingly.
Bspec: 49204
v2:
- Remove unused refclk config. (José)
Cc: Jose Souza <jose.souza@intel.com>
Cc: Clinton A Taylor <clinton.a.taylor@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Mika Westerberg <mika.westerberg@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jose Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002204108.32242-1-imre.deak@intel.com
If execlists's lite-restore is based on the common GEM context tag
rather than the per-intel_context LRCA, then a context switch between
two intel_contexts on the same engine derived from the same GEM context
will perform a lite-restore instead of a full context switch. We can
exploit this by poisoning the ringbuffer of the first context and trying
to trick the HW into a simple RING_TAIL update (i.e. a lite-restore).
v2: Also check what happens if we preempt ce[0] with ce[1] (both instances
on the same engine from the same parent context) [Tvrtko]
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002183459.26614-1-chris@chris-wilson.co.uk
All the MG registers are based on tc_port, not port, so
MG_PHY_PORT_LN() was subtracting PORT_C from port, which is very
fragile.
So replace port with tc_port in all MG register macros and users,
like we have for DKL.
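A hypothetical before/after to illustrate the change; the base and stride
constants below are placeholders, only the indexing is the point:

    /* before: index derived from the DDI port enum */
    #define MG_PHY_PORT_LN(ln, port)    _MMIO(MG_PHY_BASE + \
            ((port) - PORT_C) * MG_PHY_PORT_STRIDE + (ln) * MG_PHY_LN_STRIDE)

    /* after: index directly by the Type-C port, as the DKL macros do */
    #define MG_PHY_PORT_LN(ln, tc_port) _MMIO(MG_PHY_BASE + \
            (tc_port) * MG_PHY_PORT_STRIDE + (ln) * MG_PHY_LN_STRIDE)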
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001193729.123736-1-jose.souza@intel.com
For selftests, we desire repeatability and so prefer using a prng with
known seed over true randomness. Extract random_offset() as a selftest
utility that can take the prng state.
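A sketch of the intended usage, built on the kernel prandom helpers; the
extracted helper's exact signature is an assumption:

    struct rnd_state prng;
    u64 offset;

    /* Seed from a known value (e.g. the selftest random_seed parameter)
     * so any failure can be replayed exactly. */
    prandom_seed_state(&prng, seed);

    /* Pick a repeatable offset for 'len' bytes within [start, end),
     * suitably aligned. */
    offset = random_offset(&prng, start, end, len, align);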
Suggested-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002122430.23205-1-chris@chris-wilson.co.uk
I forgot to update the g4x sprite scaling stride check when GTT
remapping was introduced. The stride of the original framebuffer
is irrelevant when remapping is used and instead we want to check
the stride of the remapped view.
Also drop the duplicate width_bytes check. We already check that
a few lines earlier.
Fixes: df79cf4419 ("drm/i915: Store the final plane stride in plane_state")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190930183045.662-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Drop the tv_mode NULL check since intel_tv_mode_find() never
actually returns NULL, and flip the condition around so that
the MODE_OK case is at the end, which is customary to all
the other .mode_valid() implementations.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001154629.11063-2-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
When adding the max plane size checks to the .mode_valid() hooks
I naturally forgot about MST. Take care of that one as well.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Sean Paul <sean@poorly.run>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: 2d20411e25 ("drm/i915: Don't advertise modes that exceed the max plane size")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001154629.11063-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Split out the code related to the vga client and vgaarb, currently
scattered all over the place, into a new intel_vga.[ch]. No functional
changes.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001152506.7854-1-jani.nikula@intel.com
Unwedging the GPU requires a successful GPU reset before we restore the
default submission, or else we may see residual context switch events
that we were not expecting.
v2: Pull in the special-case reset_clobbers_display, and explain why it
should be safe in the context of unwedging.
v3: Just forget all about resets before unwedging if it will clobber the
display; risk it all.
Reported-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190927160335.10622-1-chris@chris-wilson.co.uk
We currently test context switching on each engine as a basic stress
test (just verifying that nothing explodes if we execute 2 requests from
different contexts sequentially). What we have not tested is what
happens if we try to do so on all available engines simultaneously,
putting our SW and the HW under maximal stress.
v2: Clone the set of engines from the first context into the secondary
contexts.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190930144919.27992-1-chris@chris-wilson.co.uk
On systems that have no runtime-pm, we mark the wakeref as being -1. We
therefore cannot use that value for the mock-gt indicator, so opt for
-ENODEV instead. The wakeref should never be an error value -- one
hopes!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-2-chris@chris-wilson.co.uk
As we execute GPU resets on a gt/ basis, and use the intel_gt as the
primary for all other reset functions, also use it for the has-reset?
predicates. Gradually simplifying the churn of pointers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-1-chris@chris-wilson.co.uk
Now that TC support has been added, initialize the DDIs.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Acked-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-4-jose.souza@intel.com
Link training is failing when running the link at 2.7GHz and 1.62GHz
while following the BSpec PLL algorithm.
Comparing the calculated values with the ones from the reference table,
it looks like MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO should not always be set
to 5. For DP ports the ICL MG PLL algorithm sets it to 10 or 5 based on
the div2 value, which matches the DKL hardcoded table.
So implement it this way, as it has proven to work in HW, and leave a
comment so we know why it does not match BSpec.
v4:
Using the same is_dp check as ICL, needs testing on HDMI over TC port
Issue reported on BSpec 49204.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-3-jose.souza@intel.com
Add DKL PHY sequences and helper functions to program voltage
swing, clock gating and DP mode.
Enabling clock gating is not part of the written DP enabling sequence,
but "PHY Clockgating programming" states that clock gating should be
enabled after link training; doing so, however, causes all the following
trainings to fail, so it is not enabled for now.
v2:
Setting the right HIP_INDEX_REG bits (José)
v3:
Adding the meaning of each column of tgl_dkl_phy_ddi_translations
Adding a gen >= 12 check in intel_ddi_hdmi_level() and
intel_ddi_pre_enable_hdmi() instead of reusing part of the gen >= 11
branch
v4:
Moved the DP_MODE lane programming to another patch as ICL also
needed it
Sharing icl_phy_set_clock_gating() and icl_program_mg_dp_mode() with
TGL as the bits and programming are now almost identical to ICL
BSpec: 49292
BSpec: 49190
Cc: Imre Deak <imre.deak@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-2-jose.souza@intel.com
BSpec was updated (r146548) with a new MG_DP_MODE programming table,
now taking the pin assignment into consideration and allowing us to
optimize power by shutting down available but not needed lanes.
It was tested on ICL and TGL, with adaptors that use pin assignments
C and B, reversing the connector and switching between modes to test
the shutdown of the unneeded lanes.
v5:
Using crtc_state->lane_count instead of dp.lane_count
BSpec: 21735
BSpec: 49292
Cc: Imre Deak <imre.deak@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Clinton A Taylor <clinton.a.taylor@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926210659.56317-1-jose.souza@intel.com
We have a new version of DMC for ICL - v1.09. This version adds the
Half Refresh Rate capability to DMC.
Cc: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925201250.18136-1-daniele.ceraolospurio@intel.com
The HuC FW has silently switched to encoding the version the same way as
the GuC FW does, i.e. major.minor.patch instead of just major.minor. All
the current blobs follow the new scheme, but since minor and patch are
both zero there is no difference in the end results and we happily load
them. New binaries, however, will have non-zero values in there, so we
need to make sure to parse them correctly.
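Illustratively, and treating the exact field layout as an assumption (the
masks below are not the real CSS header definition):

    #include <linux/bitfield.h>

    u32 sw_version = css->sw_version;       /* as packed by the firmware */
    u8 major = FIELD_GET(GENMASK(23, 16), sw_version);
    u8 minor = FIELD_GET(GENMASK(15, 8), sw_version);
    u8 patch = FIELD_GET(GENMASK(7, 0), sw_version);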
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Acked-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190925222121.4000-1-daniele.ceraolospurio@intel.com
We can use it in i915 for updating parts of unmasked registers from
within a batch. We're also adding Gen8+ versions of CS_GPR registers
(aka MI_MATH_REG in the coprocessor).
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926100635.9416-4-michal.winiarski@intel.com
Insert structure members names into their descriptions to follow
kernel-doc format.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Anna Karas <anna.karas@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926122158.13028-1-anna.karas@intel.com
The function intel_engine_breadcrumbs_irq() is always invoked from an interrupt
handler and for that reason it invokes (as an optimisation) only spin_lock()
for locking assuming that the interrupts are already disabled. The
function intel_engine_signal_breadcrumbs() is provided to disable
interrupts while the former function is invoked so that assumption is
also true for callers from preemptible context.
On PREEMPT_RT local_irq_disable() really disables interrupts, and this
forbids invoking spin_lock(), which becomes a sleeping lock.
This is also problematic with `threadirqs' in conjunction with
irq_work. With forced threading of the interrupt handler, the handler is
invoked with BH disabled but with interrupts enabled. This is okay and
the lock itself is never acquired in IRQ context. This changes with
irq_work (signal_irq_work()) which _still_ invokes
intel_engine_breadcrumbs_irq() from IRQ context. Lockdep should see this
and complain.
Acquire the locks in intel_engine_breadcrumbs_irq() with the _irqsave()
suffix and let all callers invoke intel_engine_breadcrumbs_irq()
directly instead of using intel_engine_signal_breadcrumbs().
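The shape of the change, with the surrounding code elided and the lock
name simplified:

    static void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
    {
            struct intel_breadcrumbs *b = &engine->breadcrumbs;
            unsigned long flags;

            /* Save/restore the interrupt state instead of assuming
             * interrupts are already disabled, so the function is safe
             * from any calling context. */
            spin_lock_irqsave(&b->irq_lock, flags);
            /* ... walk the signalers and signal completed fences ... */
            spin_unlock_irqrestore(&b->irq_lock, flags);
    }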
Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926105644.16703-2-bigeasy@linutronix.de
The lockdep_assert_irqs_disabled() check is needless. The previous
lockdep_assert_held() check ensures that the lock is acquired, and while
the lock is held lockdep also prints a warning if interrupts are not
disabled when they have to be.
These IRQ-off asserts trigger on PREEMPT_RT because the locks become
sleeping locks and do not really disable interrupts.
Remove lockdep_assert_irqs_disabled().
Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926105644.16703-3-bigeasy@linutronix.de
The default length value of MI_LOAD_REGISTER_REG is 1.
Also move it out of cmd-parser-only registers since we're going to use
it in i915.
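For reference, a minimal sketch of emitting it in a batch (src/dst and the
request-based emitter are assumptions); the length field of 1 corresponds
to this 3-dword form:

    u32 *cs;

    cs = intel_ring_begin(rq, 4);
    if (IS_ERR(cs))
            return PTR_ERR(cs);

    *cs++ = MI_LOAD_REGISTER_REG;
    *cs++ = i915_mmio_reg_offset(src);      /* source register */
    *cs++ = i915_mmio_reg_offset(dst);      /* destination register */
    *cs++ = MI_NOOP;                        /* keep the emission qword aligned */

    intel_ring_advance(rq, cs);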
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190926133142.2838-3-chris@chris-wilson.co.uk