Since intel_gt_resume() is always immediately preceded by init_hw, pull
the call into intel_gt_resume(), where we have the rpm and fw already
held.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222144046.1674865-1-chris@chris-wilson.co.uk
Begin pulling the GT setup underneath a single GT umbrella; let intel_gt
take ownership of its engines! As hinted, the complication is the
lifetime of the probed engine versus the active lifetime of the GT
backends. We need to detect the engine layout early and keep it until
the end so that we can sanitize state on takeover and release.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191222120752.1368352-1-chris@chris-wilson.co.uk
As the GEM global context setup is now independent of the GT state
(although GT does currently still depend upon the global
i915->kernel_context), we can move its init earlier, leaving the gt init
ready to be extracted.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221200109.1202310-1-chris@chris-wilson.co.uk
Since we may retire timelines from secondary workers,
intel_gt_retire_requests() is not always a reliable indicator that all
pending retirements are complete. If we do detect secondary workers are
in progress, recommend intel_gt_wait_for_idle() to repeat the retirement
check.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221180204.1201217-1-chris@chris-wilson.co.uk
Allocate only an internal intel_context for the kernel_context, forgoing
a global GEM context for internal use as we only require a separate
address space (for our own protection).
Now having weaned GT from requiring ce->gem_context, we can stop
referencing it entirely. This also means we no longer have to create random
and unnecessary GEM contexts for internal use.
GEM contexts are now entirely for tracking GEM clients, and intel_context
the execution environment on the GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191221160324.1073045-1-chris@chris-wilson.co.uk
We have several places where we want to allocate a pristine
crtc state. Some of those currently call intel_crtc_state_reset()
to properly initialize all the non-zero defaults in the state, but
some places do not. Let's add intel_crtc_state_alloc() to do both
the alloc and the reset, and call that everywhere we need a fresh
crtc state.
v2: s/kzalloc/kmalloc/ since we memset() anyway (José)
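For illustration, the new helper looks roughly like this (a sketch that
assumes the intel_crtc_state_reset() helper added earlier in this series,
not the verbatim code):

static struct intel_crtc_state *intel_crtc_state_alloc(struct intel_crtc *crtc)
{
        struct intel_crtc_state *crtc_state;

        /* kmalloc is fine: intel_crtc_state_reset() memsets the state */
        crtc_state = kmalloc(sizeof(*crtc_state), GFP_KERNEL);
        if (crtc_state)
                intel_crtc_state_reset(crtc_state, crtc);

        return crtc_state;
}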
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219111430.17527-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Decide whether or not we need to disable arbitration within user batches
based on our intel_engine_has_preemption() flag.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213151331.1788371-1-chris@chris-wilson.co.uk
Instead of rummaging through the intel_context to peek at the GEM
context in the middle of request submission to decide whether to use
semaphores, store that information on the intel_context itself.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191220101230.256839-2-chris@chris-wilson.co.uk
Keep the intel_context as being the primary state for i915_request, with
the GEM context a backpointer from the low level state for the rarer
cases we need client information. Our goal is to remove such references
to clients from the backend, and leave the HW submission agnostic to
client interfaces and self-contained.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191220101230.256839-1-chris@chris-wilson.co.uk
Since we added the context_alloc callback to intel_context_ops, we can
safely install a custom hook for the deferred virtual context allocation.
This means that all new contexts behave the same upon creation,
simplifying later code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219232932.189197-1-chris@chris-wilson.co.uk
Avoid adding the retire workers to the virtual engine so that we don't
end up in the unenviable situation of trying to free the virtual engine
while its worker remains active.
Fixes: dc93c9b693 ("drm/i915/gt: Schedule request retirement when signaler idles")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/867
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219221344.161523-1-chris@chris-wilson.co.uk
All the other display related tracepoints use intel_ instead
of i915_ as the prefix. Do the same for the pipe update
tracepoints so I don't always have to spend time looking for
them.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213133453.22152-6-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
I fumbled the conflict resolution a bit when applying the
fbc vblank wait w/a. Because of that we now call intel_fbc_pre_update()
twice. Remove the second redundant call.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213133453.22152-2-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
icl and tgl are still affected by the modulo 4 PLANE_OFFSET.y
underrun issue. Reject such configurations on all gen9+ platforms.
Can be reproduced easily with the following sequence of
hardware poking:
while {
        write FBC_CTL.enable=1
        wait for vblank
        write PLANE_OFFSET .x=0 .y=32
        write PLANE_SURF
        wait for vblank
        # if PLANE_OFFSET.y is multiple of 4 the underrun won't happen
        write PLANE_OFFSET .x=0 .y=31
        write PLANE_SURF
        wait for vblank
        # extra vblank wait is required here presumably
        # to get FBC into the proper state
        wait for vblank
        write FBC_CTL.enable=0
        # underrun happens some time after FBC disable
        wait for vblank
}
Both 8888 and 565 pixel formats and all tiling formats
seem affected. Reproduced on KBL/GLK/ICL/TGL. BDW confirmed
not affected.
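For illustration, the rejection amounts to a check of this shape (the
function and field names here are only illustrative, not the exact i915
code):

static bool fbc_plane_y_offset_ok(struct drm_i915_private *i915, int adjusted_y)
{
        /* gen9+: FBC underruns when PLANE_OFFSET.y is not a multiple of 4 */
        if (INTEL_GEN(i915) >= 9 && (adjusted_y & 3))
                return false;

        return true;
}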
Closes: https://gitlab.freedesktop.org/drm/intel/issues/792
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213133453.22152-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
Currently pointers to and from are not initialized and may contain
garbage values. This will cause uninitialized pointer reads in the
call to intel_frontbuffer_track and later checks to see if to and from
are null. Fix this by ensuring to and from are initialized to NULL.
Addresses-Coverity: ("Uninitialised pointer read")
Fixes: da42104f58 ("drm/i915: Hold reference to intel_frontbuffer as we track activity")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219190916.24693-1-colin.king@canonical.com
When we park RPS, we set the GPU to run at minimum 'idle' frequency.
However, as the GPU is idle, we also disable the worker and RPS
interrupts - changing the RPS thresholds has no effect, it just incurs
extra changes to restore them when we unpark. So on parking, leave the
thresholds set to the current power level, which we expect to still be
valid for our restart.
References: https://gitlab.freedesktop.org/drm/intel/issues/848
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218210545.3975426-2-chris@chris-wilson.co.uk
Use non-forcewaked writes to queue RPS register changes that will take
effect when the write buffer is flushed, rather than wake the mmio
device for immediate effect. This is so that we can avoid a slow
forcewake dance upon unparking, and at our irregular updates.
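For illustration, the queued write pattern looks roughly like this
(rps_to_uncore() and the registers chosen here are illustrative):

static void rps_set_thresholds(struct intel_rps *rps, u32 up, u32 down)
{
        struct intel_uncore *uncore = rps_to_uncore(rps);

        /*
         * The _fw variants skip the forcewake dance; the values land when
         * the write buffer is flushed, which is all we need for these
         * lazily applied thresholds.
         */
        intel_uncore_write_fw(uncore, GEN6_RP_UP_THRESHOLD, up);
        intel_uncore_write_fw(uncore, GEN6_RP_DOWN_THRESHOLD, down);
}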
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218210545.3975426-1-chris@chris-wilson.co.uk
Knowing the round trip time of an engine is useful for tracking the
health of the system as well as providing a metric for the baseline
responsiveness of the engine. We can use the latter metric for
automatically tuning our waits in selftests and when idling so we don't
confuse a slower system with a dead one.
Upon idling the engine, we send one last pulse to switch the context
away from precious user state to the volatile kernel context. We know
the engine is idle at this point, and the pulse is non-preemptible, so
this provides us with a good measurement of the round trip time. It also
provides us with faster engine parking for ringbuffer submission, which
is a welcome bonus (e.g. softer-rc6).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219105043.4169050-1-chris@chris-wilson.co.uk
Link: https://patchwork.freedesktop.org/patch/msgid/20191219124353.8607-2-chris@chris-wilson.co.uk
Very similar to commit 4f88f8747f ("drm/i915/gt: Schedule request
retirement when timeline idles"), but this time instead of coupling into
the execlists CS event interrupt, we couple into the breadcrumb
interrupt and queue a timeline's retirement when the last signaler is
completed. This should allow us to more rapidly park ringbuffer
submission, and so help reduce power consumption on older systems.
v2: Fixup intel_engine_add_retire() to handle concurrent callers
References: 4f88f8747f ("drm/i915/gt: Schedule request retirement when timeline idles")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191219124353.8607-1-chris@chris-wilson.co.uk
Fix several issues with DSC power domains that did not take DSI
transcoders into account:
- On TGL+ we need to use PW2 for DSC on pipe A, not transcoder A. There
is no longer an eDP transcoder, but there are two DSI transcoders
which may be connected to pipe A.
- On TGL+ we need to use the pipe, not transcoder, power domains for DSC
on pipes other than A. Again, there are DSI transcoders.
- On ICL we need to use PW2 for DSC also for DSI transcoders, not just
for the eDP transcoder.
Using is_pipe_dsc() also adds the warning about ICL pipe A DSC, which
does not exist.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212134728.18432-1-jani.nikula@intel.com
The check for cpu_transcoder != TRANSCODER_A is more magic than
necessary, and potentially misleading. Before TGL, DSC is supported on
pipe A if, and only if, it's used with eDP or DSI transcoders. No
functional changes.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/f00e9d55ce20b256177222588780c660aa587cc3.1576081155.git.jani.nikula@intel.com
ICL eDP and DSI transcoders have a DSC engine separate from the
pipe. Abstract the register selection and fix it for ICL.
Add a warning for pipe A DSC on ICL; it does not exist.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/01bcddcdf397b1c8eb859ed18ebe023fb64383d9.1576081155.git.jani.nikula@intel.com
When doing our global park, we like to be a good citizen and shrink our
slab caches (of which we have quite a few now), but each
kmem_cache_shrink() incurs a stop_machine() and so ends up being quite
expensive, causing machine-wide stalls. While ideally we would like to
throw away unused pages in our slab caches whenever it appears that we
are idling, doing so will require a much cheaper mechanism. In the
meantime use a delayed worker to impose a rate-limit that means we have
to have been idle for more than 2 seconds before we start shrinking.
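Roughly, the rate-limiting pattern is (names are illustrative, not the
exact i915 globals code):

#include <linux/slab.h>
#include <linux/workqueue.h>

static struct kmem_cache *slab_requests; /* e.g. created with KMEM_CACHE() */

static void idle_shrink_fn(struct work_struct *wrk)
{
        kmem_cache_shrink(slab_requests); /* expensive: incurs stop_machine() */
}
static DECLARE_DELAYED_WORK(idle_shrink, idle_shrink_fn);

static void on_park(void)
{
        /* Defer the shrink; it only runs if we stay parked for 2 seconds. */
        mod_delayed_work(system_wq, &idle_shrink,
                         round_jiffies_up_relative(2 * HZ));
}

static void on_unpark(void)
{
        cancel_delayed_work(&idle_shrink);
}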
References: https://gitlab.freedesktop.org/drm/intel/issues/848
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218094057.3510459-1-chris@chris-wilson.co.uk
Only signal the breadcrumbs from inside the irq_work, simplifying our
interface and calling conventions. The micro-optimisation here is that
by always using the irq_work interface, we know we are always inside an
irq-off critical section for the breadcrumb signaling and can elide
save/restore of the irq flags.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217095642.3124521-7-chris@chris-wilson.co.uk
Avoid rc6 counter going backward in close to 0% RC6 scenarios like:
15.005477996 114,246,613 ns i915/rc6-residency/
16.005876662 667,657 ns i915/rc6-residency/
17.006131417 7,286 ns i915/rc6-residency/
18.006615031 18,446,744,073,708,914,688 ns i915/rc6-residency/
19.007158361 18,446,744,073,709,447,168 ns i915/rc6-residency/
20.007806498 0 ns i915/rc6-residency/
21.008227495 1,440,403 ns i915/rc6-residency/
There are two aspects to this fix.
First is not assuming rc6 value zero means GT is asleep since that can
also mean GPU is fully busy and we do not want to enter the estimation
path in that case.
Second is ensuring monotonicity on the estimation path itself. I suspect
what is happening is that with extremely rapid park/unpark cycles we get
no updates on the real rc6 and therefore have to be careful not to
unconditionally use the last known real rc6 when creating a new
estimation.
v2:
* Simplify logic by not tracking the estimate but last reported value.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: 16ffe73c18 ("drm/i915/pmu: Use GT parked for estimating RC6 while asleep")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> # v1
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217142057.1000-1-tvrtko.ursulin@linux.intel.com
Move all of haswell_crtc_disable() into the encoder
.post_disable() hooks. Now we're left with just
calling the .disable() and .post_disable() hooks
back to back.
I chose to move the code into the .post_disable() hook instead
of the .disable() hook as most of the sequence is currently
implemented in the .post_disable() hook.
We should collapse it all down to just one hook and then the
encoders can drive the modeset sequence fully. But that may
need some further refactoring as we currently call the
ddi .post_disable() hook from mst code and we can't just
replace that with a call to the ddi .disable() hook.
Should also follow up with similar treatment for the enable
sequence but let's start here where it's easier.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-5-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
To make life easier in the future let's pass the old crtc state
to intel_crtc_vblank_off() just like we already do for its
counterpart intel_crtc_vblank_on().
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-4-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
To make life easier in the future let's pass the old crtc state
to skylake_scaler_disable() just like we already do for
its ancestor ironlake_pfit_disable().
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-3-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
HSW+ platforms call encoder .post_disable() and .post_pll_disable()
back to back. And since we don't even disable the PLL in between
let's just move everything into .post_disable().
intel_dp_mst does forward the .post_disable() call to intel_ddi at
the very end of its own .post_disable() hook, so this time I
shouldn't even break MST by accident.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-2-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Remove the pointless vfunc detour for hsw_fdi_link_train()
and just call it directly. Also pass the encoder in so we
can nuke the silly encoder loop within.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213195217.15168-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
For the sake of symmetry with the crtc stuff let's add
a helper to reset the plane state to sane default values.
For the moment this only gets called from the plane init.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-5-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
We have a few places where we want to reset a crtc state to its
default values. Let's add a helper for that. We'll need the new
__drm_atomic_helper_crtc_state_reset() helper for this to allow
us to just reset the state itself without clobbering the
crtc->state pointer.
And while at it let's zero out the whole thing, except a few
choice members which we'll mark as "invalid". And thanks to this
we can now nuke intel_crtc_init_scalers().
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-4-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
We already have alloc/free helpers for planes, add the same for
crtcs. The main benefit is we get to move all the annoying state
initialization out of the main crtc_init() flow.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-3-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Annoyingly __drm_atomic_helper_crtc_reset() does two
totally separate things:
a) reset the state to defaults values
b) assign the crtc->state pointer
I just want a) without the b) so let's split out part
a) into __drm_atomic_helper_crtc_state_reset(). And
of course we'll do the same thing for planes and connectors.
v2: Fix conn__state vs. conn_state typo (Lucas)
Make code and kerneldoc match for
__drm_atomic_helper_plane_state_reset()
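Roughly, the split for the crtc variant looks like this (a sketch; the
plane and connector helpers follow the same pattern):

void __drm_atomic_helper_crtc_state_reset(struct drm_crtc_state *crtc_state,
                                          struct drm_crtc *crtc)
{
        /* a) reset the state to default values */
        crtc_state->crtc = crtc;
}

void __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
                                    struct drm_crtc_state *crtc_state)
{
        /* a) + b): also assign the crtc->state pointer */
        if (crtc_state)
                __drm_atomic_helper_crtc_state_reset(crtc_state, crtc);

        crtc->state = crtc_state;
}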
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191107142417.11107-1-ville.syrjala@linux.intel.com
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For very light workloads that frequently park, acquiring the display
power well (required to prevent the dmc from trashing the system) takes
longer than the execution. A good example is the igt_coherency selftest,
which is slowed down by an order of magnitude in the worst case with
powerwell cycling. To prevent frequent cycling, while keeping our fast
soft-rc6, use a timer to delay release of the display powerwell.
Fixes: 311770173f ("drm/i915/gt: Schedule request retirement when timeline idles")
References: https://gitlab.freedesktop.org/drm/intel/issues/848
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218093504.3477048-1-chris@chris-wilson.co.uk
Since obj->frontbuffer is no longer protected by the struct_mutex, as we
are processing the execbuf, it may be removed. Mark the
intel_frontbuffer as rcu protected, and so acquire a reference to
the struct as we track activity upon it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/827
Fixes: 8e7cb1799b ("drm/i915: Extract intel_frontbuffer active tracking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: <stable@vger.kernel.org> # v5.4+
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk
If the whole GT is asleep, we know that each engine must also be asleep
and so we can quickly return without checking them all.
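For illustration, the early exit looks roughly like this (assuming
intel_gt_pm_is_awake() and the gt-based for_each_engine() of this period):

bool intel_engines_are_idle(struct intel_gt *gt)
{
        struct intel_engine_cs *engine;
        enum intel_engine_id id;

        /* If the whole GT is parked, every engine must be asleep too. */
        if (!intel_gt_pm_is_awake(gt))
                return true;

        for_each_engine(engine, gt, id) {
                if (!intel_engine_is_idle(engine))
                        return false;
        }

        return true;
}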
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191218000756.3475668-1-chris@chris-wilson.co.uk
If we inherit an error along the fence chain, we skip the main work
callback and go straight to the error. In the case of the vma bind
worker, we only dropped the pinned pages from the worker.
In the process, make sure we call the release earlier rather than wait
until the final reference to the fence is dropped (as a reference is
kept while being listened upon).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216161717.2688274-1-chris@chris-wilson.co.uk
The Gen11+ and the legacy function differ in the register and value
written to interrupt the GuC. However, while on older gen the value
matches a bit on the register, on Gen11+ the value is a SW defined
payload that is sent to the FW. Since the FW behaves the same no matter
what value we pass to it, we can just write the same thing on all gens
and get rid of the function pointer by saving the register offset.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217012316.13271-6-daniele.ceraolospurio@intel.com
Since we started using CT buffers on all gens, the function pointers can
only be set to either the _nop() or the _ct() functions. Since the
_nop() case applies to when the CT are disabled, we can just handle that
case in the _ct() functions and call them directly.
v2: keep intel_guc_send() and make the CT send/receive functions work on
intel_guc_ct. (Michal)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217012316.13271-5-daniele.ceraolospurio@intel.com
For better isolation of the request tracking from the rest of the
CT-related data.
v2: split to separate patch, move next_fence to substructure (Michal)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217012316.13271-4-daniele.ceraolospurio@intel.com
The GuC supports having multiple CT buffer pairs and we designed our
implementation with that in mind. However, the different channels are not
processed in parallel within the GuC, so there is very little advantage
in having multiple channels (independent locks?), compared to the
drawbacks (one channel can starve the other if messages keep being
submitted to it). Given this, it is unlikely we'll ever add a second
channel and therefore we can simplify our code by removing the
flexibility.
v2: split substructure grouping to separate patch, improve docs (Michal)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217012316.13271-3-daniele.ceraolospurio@intel.com
We track the status of the GuC much more closely now and we expect the
enable/disable functions to be correctly called only once. If this isn't
true we do want to flag it as a flow failure (via the BUG_ON in the ctch
functions) and not silently ignore the call.
Suggested-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217012316.13271-2-daniele.ceraolospurio@intel.com
The only difference from the GuC POV between guc_communication_stop and
guc_communication_disable is that the former can be called after GuC
has been reset. Instead of having two separate paths, we can just skip
the call into GuC in the disabling path and re-use that.
Note that by using the disable() path instead of the stop() one there
are two additional changes in SW side for the stop path:
- interrupts are now disabled before disabling the CT, which is ok
because we do not want interrupts with CT disabled;
- guc_get_mmio_msg() is called in the stop case as well, which is ok
because if there are errors before the reset we do want to record
them.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217012316.13271-1-daniele.ceraolospurio@intel.com
Get_pid_task() needs to be paired with a put_pid or we leak a pid
reference every time a banned client tries to create a context.
v2:
* task_pid_nr helper exists! (Chris)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: b083a0870c ("drm/i915: Add per client max context ban limit")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217170933.8108-1-tvrtko.ursulin@linux.intel.com
As we stash a pointer to the HWSP cacheline on the request, when reading
it we only need to confirm that the cacheline is still valid by checking
that the request and timeline are still intact.
v2: Protect hwsp_cachline with RCU
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217011659.3092130-1-chris@chris-wilson.co.uk
Since commit e5dadff4b0 ("drm/i915: Protect request retirement with
timeline->mutex"), the request retirement can happen outside of the
struct_mutex serialised only by the timeline->mutex. We drop the
timeline->mutex on submitting the request (i915_request_add) so after
that point, it is liable to be freed. Make sure our local reference is
kept alive until we have finished attaching it to the signalers. (Note
that this erodes the argument that i915_request_add should consume the
reference, but that is a slightly larger patch!)
Fixes: e5dadff4b0 ("drm/i915: Protect request retirement with timeline->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217134729.3297818-1-chris@chris-wilson.co.uk
At least Bay Trail (BYT) and Cherry Trail (CHT) devices can use 1 of 2
different PWM controllers for controlling the LCD's backlight brightness.
Either the one integrated into the PMIC or the one integrated into the
SoC (the 1st LPSS PWM controller).
So far in the LPSS code on BYT we have skipped registering the LPSS PWM
controller "pwm_backlight" lookup entry when a Crystal Cove PMIC is
present, assuming that in this case the PMIC PWM controller will be used.
On CHT we have been relying on only 1 of the 2 PWM controllers being
enabled in the DSDT at the same time; and always registered the lookup.
So far this has been working, but the correct way to determine which PWM
controller needs to be used is by checking a bit in the VBT table and
recently I've learned about 2 different BYT devices:
Point of View MOBII TAB-P800W
Acer Switch 10 SW5-012
Which use a Crystal Cove PMIC, yet the LCD is connected to the SoC/LPSS
PWM controller (and the VBT correctly indicates this), so here our old
heuristics fail.
This commit fixes using the wrong PWM controller on these devices by
calling pwm_get() for the right PWM controller based on the
VBT dsi.config.pwm_blc bit.
Note this is part of a series which contains 2 other patches which rename
the PWM lookup for the 1st SoC/LPSS PWM from "pwm_backlight" to
"pwm_soc_backlight" and the PWM lookup for the Crystal Cove PMIC PWM
from "pwm_backlight" to "pwm_pmic_backlight".
Acked-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216202906.1662893-4-hdegoede@redhat.com
Sandybridge is the gen that didn't handle multiple registers in a single
LRI packet. Don't forget it!
Fixes: 902eb748e5 ("drm/i915/gt: Tidy up full-ppgtt on Ivybridge")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Tested-by: Tomi Sarvela <tomi.p.sarvela@intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191217091328.3093551-1-chris@chris-wilson.co.uk
We currently use an error-prone mutex_trylock to grab another timeline
to find an earlier request along it. However, with a bit of a
sleight-of-hand, we can reduce the mutex_trylock to a spin_lock on the
immediate request and careful pointer chasing to acquire a reference on
the previous request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216165317.2742896-1-chris@chris-wilson.co.uk
With a couple more memory barriers dotted around the place we can
significantly improve the MTBF on Ivybridge. Still doesn't really help
Haswell though.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216142409.2605211-1-chris@chris-wilson.co.uk
When creating a handle, it is just that, an abstract handle. The fact
that we cannot currently support a handle larger than the size of the
backing storage is an artifact of our whole-object-at-a-time handling in
get_pages() and being an implementation limitation is best handled at
that point -- similar to shmem, where we only barf when asked to
populate the whole object if larger than RAM. (Pinning the whole object
at a time is a major hindrance that we are likely to have to overcome in
the near future.) In the case of the buddy allocator, the late check is
preferable as the request size may often be smaller than the required
size.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216122603.2598155-1-chris@chris-wilson.co.uk
The typecheck() macro creates a huge stack size, causing issues with
Coverity static analysis; address this by creating a local pointer
instead.
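For illustration, the local-pointer form of one such macro looks roughly
like this (the exact definition may differ):

#define ENGINE_TRACE(e, fmt, ...) do { \
        const struct intel_engine_cs *e__ __maybe_unused = (e); \
        GEM_TRACE("%s %s: " fmt, \
                  dev_name(e__->i915->drm.dev), e__->name, \
                  ##__VA_ARGS__); \
} while (0)

Evaluating the argument once into e__ keeps the expansion small no matter
how complex the expression passed in is.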
Fixes: 639f2f2489 ("drm/i915: Introduce new macros for tracing")
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216185332.83289-1-venkata.s.dhanalakota@intel.com
Fixes coccicheck warning:
drivers/gpu/drm/i915/gem/i915_gem_region.c:88:2-3: Unneeded semicolon
drivers/gpu/drm/i915/gvt/gtt.c:1285:2-3: Unneeded semicolon
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/1576467845-60920-1-git-send-email-zhengbin13@huawei.com
In some cases, e.g. latency[level] == 0 or wm[level].res_lines > 31,
min_ddb_alloc can be U16_MAX; exclude that value from the WARN_ON.
v2: Specify the cases in which we hit U16_MAX, indentation (Ville)
Fixes: 10a7e07b68 ("drm/i915: Make sure cursor has enough ddb for the selected wm level")
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191216080619.10945-1-vandita.kulkarni@intel.com
According to both the old acpi_igd_opregion_spec_0.pdf and the newer
skl_opregion_rev0p5.pdf opregion specification documents, if a driver
handles hotplug events itself, it should set the opregion CHPD field to
1 to indicate this and the firmware should respond to this by no longer
sending ACPI 0x00 notification events on e.g. lid-state changes.
Specifically skl_opregion_rev0p5.pdf states this in the documentation of
the CHPD word: "Re-enumeration trigger logic in System BIOS MUST be
disabled for all the Operating Systems supporting Hot-Plug
(e.g., Windows* Longhorn and above)." Note the MUST in there.
We ignore these notifications, so this should not be a problem but many
recent DSDTs seem to all have the same copy-pasted bug in the GNOT() AML
function which is used to send these notifications. Windows likely does not
hit this bug as it presumably correctly sets CHPD to 1.
Here is an example of the broken GNOT() method:
Method (GNOT, 2, NotSerialized)
{
    ...
    CEVT = Arg0
    CSTS = 0x03
    If (((CHPD == Zero) && (Arg1 == Zero)))
    {
        If (((OSYS > 0x07D0) || (OSYS < 0x07D6)))
        {
            Notify (PCI0, Arg1)
        }
        Else
        {
            Notify (GFX0, Arg1)
        }
    }
    ...
Notice that the condition for the If is always true. I believe that the
|| likely needs to be an &&, but there is nothing we can do about this and
in my own DSDT archive 55 of the 93 DSDTs have this issue.
When the if is true the notification gets sent to the PCI root instead
of only to the GFX0 device. This causes Linux to re-enumerate PCI devices
whenever the LID opens / closes, leading to unexpected messages in dmesg:
Suspend through lid close:
[ 313.598199] intel_atomisp2_pm 0000:00:03.0: Refused to change power state, currently in D3
[ 313.664453] intel_atomisp2_pm 0000:00:03.0: Refused to change power state, currently in D3
[ 313.737982] pci_bus 0000:01: Allocating resources
[ 313.738036] pcieport 0000:00:1c.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
[ 313.738051] pcieport 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
[ 313.738111] pcieport 0000:00:1c.0: BAR 15: assigned [mem 0x91000000-0x911fffff 64bit pref]
[ 313.738128] pcieport 0000:00:1c.0: BAR 13: assigned [io 0x1000-0x1fff]
Resume:
[ 813.623894] pci 0000:00:03.0: [8086:22b8] type 00 class 0x048000
[ 813.623955] pci 0000:00:03.0: reg 0x10: [mem 0x00000000-0x003fffff]
[ 813.630477] pci 0000:00:03.0: BAR 0: assigned [mem 0x91c00000-0x91ffffff]
[ 854.579101] intel_atomisp2_pm 0000:00:03.0: Refused to change power state, currently in D3
And more importantly this re-enumeration races with suspend/resume causing
enumeration to not be complete when assert_isp_power_gated() from
drivers/gpu/drm/i915/display/intel_display_power.c runs. This causes
the !pci_dev_present(isp_ids) check in assert_isp_power_gated() to fail
making the condition for the WARN true, leading to:
[ 813.327886] ------------[ cut here ]------------
[ 813.327898] ISP not power gated
[ 813.328028] WARNING: CPU: 2 PID: 2317 at drivers/gpu/drm/i915/display/intel_display_power.c:4870 intel_display_print_error_state+0x2b98/0x3a80 [i915]
...
[ 813.328599] ---[ end trace f01e81b599596774 ]---
This commit fixes the unwanted ACPI notification on the PCI root device
by setting CHPD to 1, so that the broken if condition in the AML never
gets checked as notifications of type 0x00 are disabled altogether.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212204828.191288-1-hdegoede@redhat.com
Wait for the object to be idle before changing its cache-level and
unbinding. This was dropped as supposedly superfluous from commit
8b1c78e06e ("drm/i915: Avoid calling i915_gem_object_unbind holding
object lock"), but it turns out to prevent some cache dirt escaping.
Smells like papering over a race...
Closes: https://gitlab.freedesktop.org/drm/intel/issues/820
Fixes: 8b1c78e06e ("drm/i915: Avoid calling i915_gem_object_unbind holding object lock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213223140.1830738-1-chris@chris-wilson.co.uk
Commit 4d89adc7b5 ("drm/i915/display/dsi: Add support to pipe D")
added pipe D support for DSI, but failed to update the state readout.
Fixes: 4d89adc7b5 ("drm/i915/display/dsi: Add support to pipe D")
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211110844.2996-1-jani.nikula@intel.com
Just like in commit 523e0cc89b ("drm/i915/tgl: allow DVI/HDMI on port
A"), the port checks when reading the VBT can easily not match what the
platform really exposes. However here we only have some additional debug
messages that are not adding much value: in the previous debug message
we already print everything we know about the VBT.
Instead of keep fixing the possible port assignments according to the
platform, just nuke the additional messages.
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191206190552.8818-1-lucas.demarchi@intel.com
Add two helpers for reading the GT's actual frequency. The
two helpers are:
- intel_rps_read_cagf: reads the frequency and returns it not
normalized
- intel_rps_read_actual_frequency: provides the frequency in Hz.
Use the above helpers in sysfs and debugfs.
Signed-off-by: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213183736.31992-2-andi@etezian.org
While not good behaviour, it is, however, established behaviour that we
can punt EAGAIN to userspace if we need to retry the ioctl. When trying
to acquire a mutex, prefer to use EAGAIN to propagate losing the race
so that if it does end up back in userspace, we try again.
Fixes: c81471f5e9 ("drm/i915: Copy across scheduler behaviour flags across submit fences")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/800
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213160347.1789004-1-chris@chris-wilson.co.uk
New macros ENGINE_TRACE(), CE_TRACE(), RQ_TRACE() and
GT_TRACE() are introduced to tag the device name and engine
name in context and request tracing in i915.
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213155152.69182-2-venkata.s.dhanalakota@intel.com
We do not need to register the sysctl paths per instance,
so make the registration global.
v2: make sysctl path register and unregister function driver
specific (Tvrtko and Lucas).
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213155152.69182-1-venkata.s.dhanalakota@intel.com
Now that the combo PHY aux power well handlers are used exclusively on
Icelake, we can drop a bunch of the extra tests.
v2: Don't try to use intel_uncore_rmw for register updates yet; there
are pending display uncore patches that need to land first. (Lucas)
v3: Drop the combo phy assertion. It was backward before, but doesn't
seem terribly necessary. I'm keeping the IS_ICELAKE assertion
though since we often copy/paste/modify the power well tables when
defining new platforms and it's too easy to cargo cult the
ICL-specific handling to new platforms that shouldn't use it.
(Lucas)
v4: Fix build; forgot to commit all the changes. (CI)
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213010600.701315-1-matthew.d.roper@intel.com
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
The TGL workaround database no longer shows Wa #1178 (or anything
similar under different workaround names/numbers) so we should be able
to drop it. In fact Swati just discovered that applying this workaround
is the root cause of some power well enable failures we've been seeing
in CI (gitlab issue 498).
Once we stop applying this WA, TGL no longer utilizes any of the special
handling provided by icl_combo_phy_aux_power_well_ops so we can just
drop back to using the standard hsw-style power well ops instead.
v3: Drop now-unused _TGL_AUX_ANAOVRD1_C definition too. (Lucas)
Closes: https://gitlab.freedesktop.org/drm/intel/issues/498
Fixes: deea06b475 ("drm/i915/tgl: apply Display WA #1178 to fix type C dongles")
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Swati Sharma <swati2.sharma@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213001511.678070-3-matthew.d.roper@intel.com
Outputs C and D on EHL are combo PHY outputs and thus should not be
using the same TC AUX power well handlers as ICL. And even though
icl_combo_phy_aux_power_well_ops works okay for EHL/JSL combo PHYs none
of its special handling is actually necessary for this platform:
* EHL/JSL don't actually need to program PORT_CL_DW12
* Display WA #1178 does not apply to EHL/JSL
Thus we can simply drop back to using our standard "hsw-style" power
well ops for EHL AUX power wells.
Bspec: 4301
Fixes: f722b8c1e2 ("drm/i915/ehl: All EHL ports are combo phys")
Cc: Jose Souza <jose.souza@intel.com>
Cc: Bob Paauwe <bob.j.paauwe@intel.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191213001511.678070-2-matthew.d.roper@intel.com
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
The "num_dtd" variable is the number of elements in the
generic_dtd->dtd[] array so the > needs to be >= to prevent reading one
element beyond the end of the array.
Fixes: 33ef6d4fd8 ("drm/i915/vbt: Handle generic DTD block")
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212091130.zf2g53njf5u24wk6@kili.mountain
skl_commit_modeset_enables() is a bit of a mess. Let's streamline
it by simply tracking which pipes still need to be updated.
As a bonus we get rid of the state->wm_results.dirty_pipes usage.
v2: Rebase due to port sync
Cc: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> #v1
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210144105.3239-2-ville.syrjala@linux.intel.com
U series devices need a different DDI buffer setup for eDP
and DP. If the driver does not recognize the ULT id properly,
the settings for H and S series would be used.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Cooper Chiou <cooper.chiou@intel.com>
Signed-off-by: Lee Shawn C <shawn.c.lee@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210150415.10705-2-shawn.c.lee@intel.com
Since dma_fence_init may call ops (because of a meaningless
trace_dma_fence), we need to set the worker ops prior to that call.
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 8e458fe2ee ("drm/i915: Generalise the clflush dma-worker")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212154224.1631531-1-chris@chris-wilson.co.uk
On non-debug builds we were not using the i915 param and thus may cause
an "unused variable" warning/error if the caller was not using i915
elsewhere. Let the compiler see this param.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212121903.72524-1-michal.wajdeczko@intel.com
Execute the cmdparser asynchronously as part of the submission pipeline.
Using our dma-fences, we can schedule execution after an asynchronous
piece of work, so we move the cmdparser out from under the struct_mutex
inside execbuf and run it as part of the submission pipeline. The same
security rules apply, we copy the user batch before validation and
userspace cannot touch the validation shadow. The only caveat is that we
will do request construction before we complete cmdparsing and so we
cannot know the outcome of the validation step until later -- so the
execbuf ioctl does not report -EINVAL directly, but we must cancel
execution of the request and flag the error on the out-fence.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/611
Closes: https://gitlab.freedesktop.org/drm/intel/issues/412
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211230858.599030-2-chris@chris-wilson.co.uk
The gen7 cmdparser is primarily a promotion-based system to allow access
to additional registers beyond the HW validation, and allows fallback to
normal execution of the user batch buffer if valid and requires
chaining. In the next patch, we will do the cmdparser validation in the
pipeline asynchronously and so at the point of request construction we
will not know if we want to execute the privileged and validated batch,
or the original user batch. The solution employed here is to execute
both batches, one with raised privileges and one as normal. This is
because the gen7 MI_BATCH_BUFFER_START command cannot change privilege
level within a batch and must strictly use the current privilege level
(or undefined behaviour kills the GPU). So in order to execute the
original batch, we need a second non-privileged batch buffer chain from
the ring, i.e. we need to emit two batches for each user batch. Inside
the two batches we determine which one should actually execute, and we
provide a conditional trampoline to call the original batch.
Implementation-wise, we create a single buffer and write the shadow and
the trampoline inside it at different offsets; and bind the buffer into
both the kernel GGTT for the privileged execution of the shadow and into
the user ppGTT for the non-privileged execution of the trampoline and
original batch. One buffer, two batches and two vma.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211230858.599030-1-chris@chris-wilson.co.uk
An oversight in that we use rc6->ctl_enable to disable rc6 on gen9 and
so it does not simply indicate indirect control via a PCU. Switch the
rc6->ctl_enable check for a platform-based check.
Fixes: 972745fd57 ("drm/i915/gt: Disable manual rc6 for Braswell/Baytrail")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191212072737.884335-2-chris@chris-wilson.co.uk
The movntdqa instruction requires 16-byte alignment for the source
pointer. Avoid falling back to clflush when the source pointer is
misaligned by doing a small uncached memcpy to fix up the alignment.
v2: Turn the unaligned copy into a genuine helper
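For illustration, such a helper looks roughly like this (assuming
__memcpy_ntdqa() is the existing 16-byte aligned streaming copy; names
are illustrative):

static void unaligned_memcpy_from_wc(void *dst, const void *src, size_t len)
{
        size_t head = (uintptr_t)src & 15;

        if (head) {
                /* Small uncached copy to realign the source pointer. */
                head = min(len, 16 - head);
                memcpy(dst, src, head);
                dst += head;
                src += head;
                len -= head;
        }

        if (len)
                __memcpy_ntdqa(dst, src, len); /* movntdqa fast path */
}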
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211110437.4082687-5-chris@chris-wilson.co.uk
We need to flush the destination buffer, even on error, to maintain
consistent cache state. Thereby removing the jump on error past the
clear, and reducing the loop-escape mechanism to a mere break.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211110437.4082687-3-chris@chris-wilson.co.uk
The initial investigation showed that, while the PCU on Braswell/Baytrail
controls RC6 itself, setting the software RC6 request made no
difference. Further testing reveals, though, that it causes a delay in the
PCU on enabling RC6.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/763
Fixes: 730eaeb524 ("drm/i915/gt: Manual rc6 entry upon parking")
Testcase: igt/perf/rc6-disable
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210180111.3958558-1-chris@chris-wilson.co.uk
There is no need to pass explicit ggtt since we already have
a trick to get parent gt from uc_fw, we only need to use it.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211124549.59516-4-michal.wajdeczko@intel.com
There is no need to pass explicit gt since we already have
a trick to get parent gt from uc_fw, we only need to use it.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211124549.59516-3-michal.wajdeczko@intel.com
There is no need to pass explicit i915 since we already have
a debug trick to get parent gt from uc_fw, we only need to
make this trick available on non-debug builds.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211124549.59516-2-michal.wajdeczko@intel.com
Use the dev_name(i915) to identify the requests for debugging, so we can
tell different device timelines apart.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Reviewed-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191211150204.133471-1-chris@chris-wilson.co.uk
A prior check and return when pointer fb is null makes
subsequent null checks on fb redundant. Remove the redundant
null checks.
Addresses-Coverity: ("Logically dead code")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210142349.333171-1-colin.king@canonical.com
In order to eliminate intel_pipe_to_cpu_transcoder() (and its
crtc->config usage) let's pass the cpu transcoder to
assert_pipe() so we don't have to do the pipe->cpu transcoder
lookup on HSW+.
On VLV/CHV this can get called during eDP init, which
happens before crtc->config->cpu_transcoder is even
populated. So currently we're always reading PIPECONF(A)
there even if we're trying to check the state of some
other pipe.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191112163812.22075-4-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Let's start to eliminate intel_pipe_to_cpu_transcoder() so that
we can get rid of one more crtc->config usage (which we will want
to nuke as well).
In the case of assert_fdi_tx() we know that we're never
dealing with the EDP transcoder so we can simply replace
this with a cast.
v2: Fix poor English in comment
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191112163812.22075-3-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Check that we own the global pointer before deregistering.
Reported-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210153620.3929372-1-chris@chris-wilson.co.uk
Convert i915_gem_check_execbuffer to return the error code instead of
a boolean so our neat EINVAL debugging trick works within this function.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191209122314.16289-1-tvrtko.ursulin@linux.intel.com
We want the bonded request to have the same scheduler properties as its
master so that it is placed at the same depth in the queue. For example,
consider we have requests A, B and B', where B & B' are a bonded pair to
run in parallel on two engines.
A -> B
\- B'
B will run after A and so may be scheduled on an idle engine and wait on
A using a semaphore. B' sees B being executed and so enters the queue on
the same engine as A. As B' did not inherit the semaphore-chain from B,
it may have higher precedence than A and so preempts execution. However,
B' then sits on a semaphore waiting for B, who is waiting for A, who is
blocked by B.
Ergo B' needs to inherit the scheduler properties from B (i.e. the
semaphore chain) so that it is scheduled with the same priority as B and
will not be executed ahead of B's dependencies.
Furthermore, to prevent the priorities changing via the expose fence on
B', we need to couple in the dependencies for PI. This requires us to
relax our sanity-checks that dependencies are strictly in order.
v2: Synchronise (B, B') execution on all platforms regardless of using
a scheduler; any no-op syncs should be elided.
Fixes: ee1136908e ("drm/i915/execlists: Virtual engine bonding")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/464
Testcase: igt/gem_exec_balancer/bonded-chain
Testcase: igt/gem_exec_balancer/bonded-semaphore
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210151332.3902215-1-chris@chris-wilson.co.uk
As the current usage is restricted to the first DSB instance per pipe,
the existing code could not catch the issue of calculating the mmio offset
of a different DSB instance per pipe. Correct the offset calculation.
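The shape of the corrected calculation, with placeholder base/stride values
(not the real register offsets): the mmio offset must scale with both the
pipe and the DSB instance id.

  /* illustrative only: these constants are placeholders, not bspec values */
  #define DSB_INSTANCE_BASE   0x70000
  #define DSB_PIPE_STRIDE     0x1000
  #define DSB_ID_STRIDE       0x100

  static unsigned int dsb_mmio_offset(int pipe, int id)
  {
          /* previously only correct for id == 0 */
          return DSB_INSTANCE_BASE +
                 pipe * DSB_PIPE_STRIDE +
                 id * DSB_ID_STRIDE;
  }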
Fixes: a6e58d9a2e ("drm/i915/dsb: Check DSB engine status.")
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205123513.22603-1-animesh.manna@intel.com
Enable DSC for DSI, if specified in VBT.
This still lacks a DSC aware get config implementation, and therefore the
state checker will fail. Mode valid is also not there yet.
v5:
- add dsc get config call
v4:
- convert_rgb = true (Vandita)
- ignore max cdclock check (Vandita)
- rename pipe_config to crtc_state
v3:
- take compressed bpp into account
v2:
- Nuke conn_state->max_requested_bpc, it's not used on DSI
Bspec: 49263
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/e0136299e03c582238523189f6951eeb08daed98.1575974743.git.jani.nikula@intel.com
When DSC is enabled, we need to adjust the horizontal timings to account
for the compressed (and therefore reduced) link speed.
The compressed frequency ratio simplifies down to the ratio between
compressed and non-compressed bpp.
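A sketch of the arithmetic (parameter names assumed, not lifted from the
driver): the horizontal timings scale by compressed bpp over uncompressed
pipe bpp.

  /* illustrative: DIV_ROUND_UP as in include/linux/kernel.h */
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  static int dsc_scale_htiming(int value, int compressed_bpp, int pipe_bpp)
  {
          /* the link carries compressed_bpp instead of pipe_bpp */
          return DIV_ROUND_UP(value * compressed_bpp, pipe_bpp);
  }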
Bspec: 49263
Suggested-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/fecebdc2719dd0c78eaf8f4d3225bb185956d7db.1575974743.git.jani.nikula@intel.com
We'll be expanding afe_clk() to take DSC into account. Switch to using
it where DSC matters, which is really everywhere that intel_dsi_bitrate()
is currently used in the ICL DSI code.
The functional difference is that we round the result to closest instead
of down.
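The rounding difference in isolation (placeholder numbers, not the actual
clock maths):

  /* simplified DIV_ROUND_CLOSEST() for non-negative values */
  #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

  /* with placeholder numbers:                                  */
  /*   999 / 8                   = 124  (round down, old)       */
  /*   DIV_ROUND_CLOSEST(999, 8) = 125  (round closest, new)    */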
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/b6c52b320daa8aaa0d79618ce714170f8f04ff67.1575974743.git.jani.nikula@intel.com
Add basic hardware state readout for DSC, and check the most relevant
details in the state checker.
v2:
- check for DSC power before reading its state
- check if source supports DSC at all
As a side effect, this should also get the power domains for the enabled
DSC on takeover, and subsequently disable DSC if it's not needed.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/3fb018cf9bd9a4c275aab389b6ec0f2a4e938bb9.1575974743.git.jani.nikula@intel.com
Move intel_dp_source_supports_dsc() from intel_dp.c as
intel_dsc_source_support() in intel_vdsc.c. The DSC source support is
more about DSC than about DP, and will be needed for DP independent
code.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/6c9f646090913290fb00efd46a4332421bf95930.1575974743.git.jani.nikula@intel.com
Add DSI specific computation and transmission of the PPS to the display,
with hopes that this approach will work for both DP and DSI encoders.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/461db10b1f4d76625625a9f2b1e3d932fff42799.1575974743.git.jani.nikula@intel.com
Turns out future DSI specific parameters aren't workable with the
approach of having the encoder specific functions in intel_vdsc.c. Make
intel_dsc_compute_params() a helper that does the encoder independent
parts, and have encoder code call it. Move intel_dsc_dp_compute_params()
to intel_dp.c as intel_dp_dsc_compute_params().
No functional changes.
v2: Rename pipe_config to crtc_state while at it.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/620688ec302f7f49cc539c6c1653bfaf6092fce0.1575974743.git.jani.nikula@intel.com
Add function for retrieving the DSC data for an encoder.
Initially, this is DSI specific, as DP does not use VBT settings for DSC
at all. It's also not very pretty.
In the future we might have a pointer from encoder to the child device,
which would make the child device list query here so much more sensible.
v3:
- use crtc_state instead of pipe_config
- return true by default from intel_bios_get_dsc_params()
- expand the comment about rc_buffer_block_size and rc_buffer_size
v2:
- make more robust, debug log errors better
Bspec: 29885
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/b895c349d964d70e4cad26f12a629ea1898bfcc2.1575974743.git.jani.nikula@intel.com
Check for child devices that specify compression, and store the device
specific compression parameters in the display device data struct for
later use. Warn if compression is requested but not available.
Use fairly rigid checks for compression data for starters. These can be
made more dynamic later.
Log about DSC presence in DDI port parse, though this is not universal
across platforms or port types (DSI).
v2: amended debug logging
Bspec: 29885
Cc: Vandita Kulkarni <vandita.kulkarni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/357d685ba047faf2285138c2f7014a8dee9a12b7.1575974743.git.jani.nikula@intel.com
Gen12 can improve bandwidth efficiency by pairing up memory requests
with similar addresses. We need to program the BW_BUDDY1 and BW_BUDDY2
registers according to the memory configuration during display
initialization to take advantage of this capability.
The magic numbers we program here feel like something that could
definitely change on future platforms, so let's use a table-based
programming scheme to make this easy to extend in the future.
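A sketch of the table-driven shape described above (entries and masks are
placeholders, not the programmed magic numbers):

  /* illustrative layout: the page mask can be up to 28 bits, so keep it
   * 32 bits wide; a narrow byte for the DRAM type keeps the struct small */
  struct buddy_page_mask {
          unsigned int page_mask;
          unsigned char type;
  };

  static const struct buddy_page_mask table[] = {
          { .page_mask = 0x1c, .type = 0 },  /* placeholder entries */
          { .page_mask = 0x05, .type = 1 },
          {}
  };

  static unsigned int bw_buddy_page_mask(unsigned char dram_type)
  {
          int i;

          for (i = 0; table[i].page_mask; i++)
                  if (table[i].type == dram_type)
                          return table[i].page_mask; /* program into BW_BUDDY1/2 */

          return 0; /* no match: leave the registers at their defaults */
  }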
v2:
- Add separate table for Wa_1409767108. (Stan)
- Reorder structure to reduce size by a word. Page mask can still be up
to 28 bits (even though current values are small) so we should keep
it as a u32, but just using a u8 for DRAM type instead of the actual
enum type saves space. (Lucas, Ville)
- Rename function to tgl_bw_buddy_init() to be more precise about what
it does. (Lucas)
Bspec: 49189
Bspec: 49218
Bspec: 52890
Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205224848.76712-1-matthew.d.roper@intel.com
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Currently the variable sum is not initialized and hence will cause an
incorrect result in the summation. Fix this by initializing sum to the
first item in the summation.
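The fix in miniature (illustrative, not the selftest code itself):

  /* start the accumulator from the first sample instead of garbage */
  static unsigned int sum_samples(const unsigned int *values, int count)
  {
          unsigned int sum = values[0];
          int i;

          for (i = 1; i < count; i++)
                  sum += values[i];

          return sum;
  }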
Addresses-Coverity: ("Uninitialized scalar variable")
Fixes: 3c7a44bbbf ("drm/i915/selftests: Perform some basic cycle counting of MI ops")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191210143205.338308-1-colin.king@canonical.com
In order to avoid confusing the HW, we must never submit an empty ring
during lite-restore; that is, we should always advance the RING_TAIL
before submitting to stay ahead of the RING_HEAD.
Normally this is prevented by keeping a couple of spare NOPs in the
request->wa_tail so that on resubmission we can advance the tail. This
relies on the request only being resubmitted once, which is the normal
condition as it is seen once for ELSP[1] and then later in ELSP[0]. On
preemption, the requests are unwound and the tail reset back to the
normal end point (as we know the request is incomplete and therefore its
RING_HEAD is even earlier).
However, if this w/a should fail we would try and resubmit the request
with the RING_TAIL already set to the location of this request's wa_tail,
potentially causing a GPU hang. We can spot when we do try and
incorrectly resubmit without advancing the RING_TAIL and spare any
embarrassment by forcing the context restore.
In the case of preempt-to-busy, we leave the requests running on the HW
while we unwind. As the ring is still live, we cannot rewind our
rq->tail without forcing a reload so leave it set to rq->wa_tail and
only force a reload if we resubmit after a lite-restore. (Normally, the
forced reload will be a part of the preemption event.)
Fixes: 22b7a426bb ("drm/i915/execlists: Preempt-to-busy")
Closes: https://gitlab.freedesktop.org/drm/intel/issues/673
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: stable@kernel.vger.org
Link: https://patchwork.freedesktop.org/patch/msgid/20191209023215.3519970-1-chris@chris-wilson.co.uk
We now only use 1 client without any plan to add more. The client also
only holds information about the WQ and the process desc, so we can just
move those into the intel_guc structure and always use stage_id 0.
v2: fix comment (John)
v3: fix the comment for real, fix kerneldoc
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-4-daniele.ceraolospurio@intel.com
Instead of relying on the workqueue, the upcoming reworked GuC
submission flow will offer the host driver independent control over
the execution status of each context submitted to GuC. As part of this,
the doorbell usage model has been reworked, with each doorbell being
paired to a single lrc and a doorbell ring representing new work
available for that specific context. This mechanism, however, limits
the number of contexts that can be registered with GuC to the number of
doorbells, which is an undesired limitation. To avoid this limitation,
we requested the GuC team to also provide a H2G that will allow the host
to notify the GuC of work available for a specified lrc, so we can use
that mechanism instead of relying on the doorbells. We can therefore drop
the doorbell code we currently have, also given the fact that in the
unlikely case we'd want to switch back to using doorbells we'd have to
heavily rework it.
The workqueue will still have a use in the new interface to pass special
commands, so that code has been retained for now.
With the doorbells gone and the GuC client becoming even simpler, the
existing GuC selftests don't give us any meaningful coverage so we can
remove them as well. Some selftests might come with the new code, but
they will look different from what we have now, so it doesn't seem worth
keeping the file around in the meantime.
v2: fix comments and commit message (John)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-3-daniele.ceraolospurio@intel.com
We already have a couple of use-cases in the code and another one will
come in one of the later patches in the series.
v2: use the new function for the CT object as well
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> #v1
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-2-daniele.ceraolospurio@intel.com
Remove unused enums and ctx_save_restore_disabled() function, leftover
from the legacy preemption removal.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191205220243.27403-1-daniele.ceraolospurio@intel.com
intel_hdcp_transcoder_config() is clobbering some globally visible
state in .compute_config(). That is a big no no as .compute_config()
is supposed to have no visible side effects when either the commit
fails or it's just a TEST_ONLY commit.
Inline this stuff into intel_hdcp_enable() so that the state only
gets modified when we actually commit the state to the hardware.
Cc: Ramalingam C <ramalingam.c@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Uma Shankar <uma.shankar@intel.com>
Fixes: 39e2df090c ("drm/i915/hdcp: update current transcoder into intel_hdcp")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204180549.1267-2-ville.syrjala@linux.intel.com
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
The code assumes we can omit the cfb allocation after fbc has been
enabled once. That's nonsense. Let's try to reallocate it if we need to.
The code is still a mess, but maybe this is enough to get
fbc going in some cases where it initially underallocates
the cfb and there's no full modeset to fix it up.
Cc: Daniel Drake <drake@endlessm.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jian-Hong Pan <jian-hong@endlessm.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-15-ville.syrjala@linux.intel.com
Tested-by: Daniel Drake <drake@endlessm.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Now that we have the glk+ w/a for back to back fbc disable + plane
update in place we can once more enable fbc on glk+ by default.
Cc: Daniel Drake <drake@endlessm.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jian-Hong Pan <jian-hong@endlessm.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-14-ville.syrjala@linux.intel.com
Tested-by: Daniel Drake <drake@endlessm.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
On glk+ the hardware gets confused if we disable FBC while
it's recompressing and we perform a plane update during the
same frame. The result is that the top of the screen gets corrupted.
We can avoid that by giving the hardware enough time to finish
the FBC disable before we touch the plane registers. Ie. we need
an extra vblank wait after FBC disable.
v2: Don't do the vblank wait if we never activated FBC in hw
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191128150338.12490-1-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
The hardware automagically nukes the cfb on flip. We can use
that whenever the plane/crtc configuration doesn't change too
much. Let's hook that up.
We'll need this for glk+ since we need to introduce an extra
vblank wait after FBC disable. As we're currently disabling
FBC around all plane updates we'd slow them down by an extra
frame. Not a great user experience when your fps is always
capped at vrefresh/2. With flip nuke we don't need the extra
vblank wait.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-12-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
i965gm no longer needs the fence for scanout so we should do what we
do for ctg+ and only configure a fence for FBC when we have one.
In theory this should do nothing atm on account of
intel_fbc_can_activate() requiring the fence, but since
we do this for g4x+ let's do it for i965gm as well. We
may want to relax the requirements at some point and allow
FBC without a fence.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-9-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Precompute the override cfb stride value so that we can check
it when determining if flip nuke can be used or not.
The hardware has 13 bits for this, so we can shrink the storage
to u16 while at it.
v2: Don't explode when crtc_state->enable_fbc lies to us
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-6-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
We don't want to use the FBC hardware render tracking so let's not
enable it. To use the hw tracking properly we'd anyway need to
integrate this into the command submission path as the register is
context saved, and if rendering happens via the ppgtt we'd have
to configure it with the ppgtt address instead of the ggtt address.
Easier to use software tracking instead.
Note that on pre-ilk we can't actually disable render tracking.
However we can't rely on it because it requires DSPSURF to
match the render target address, and since we play tricks
with DSPSURF that may not be the case. Hence we shall rely on
software render tracking on all platforms.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-5-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
We're missing a workaround in the fbc code for all glk+ platforms
which can cause corruption around the top of the screen. So
enabling fbc by default is a bad idea. I'm not keen to backport
the w/a so let's start by disabling fbc by default on all glk+.
We'll lift the restriction once the w/a is in place.
Cc: stable@vger.kernel.org
Cc: Daniel Drake <drake@endlessm.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jian-Hong Pan <jian-hong@endlessm.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127201222.16669-2-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Gen12 supports saving/restoring render counters per context. Apply OAR
configuration only for the context that is passed in to perf.
v2:
- Fix OACTXCONTROL value to only stop/resume counters.
- Remove gen12_update_reg_state_unlocked as power state is already
applied by the caller.
v3: (Lionel)
- Move register initialization into the array
- Assume a valid oa_config in enable_metric_set
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Fixes: 00a7f0d715 ("drm/i915/tgl: Add perf support on TGL")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191206194339.31356-2-umesh.nerlige.ramappa@intel.com
SAMPLE_OA_REPORT enables sampling of OA reports from the OA buffer.
Since reports from the OA buffer have system-wide visibility, collecting
samples from the OA buffer was a privileged operation on previous
platforms. Prior to TGL, it was also necessary to sample the OA buffer
to normalize reports from MI REPORT PERF COUNT.
TGL has a dedicated OAR unit to sample perf reports for a specific
render context. This removes the necessity to sample the OA buffer.
- If not sampling the OA buffer, allow non-privileged access. An earlier
patch allows the non-privileged access:
https://patchwork.freedesktop.org/patch/337716/?series=68582&rev=1
- Clear up the path for non-privileged access in this patch
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Fixes: 00a7f0d715 ("drm/i915/tgl: Add perf support on TGL")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191206194339.31356-1-umesh.nerlige.ramappa@intel.com
Since we didn't check and insist that args.pad must be zero for MMAP_GTT
historically, we cannot insert a check now as old userspace may be
feeding in garbage. As such the lack of check is enshrined into the ABI,
so add a comment to remind us we cannot add the check later.
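Roughly the kind of reminder intended, shown against the uapi struct
(comment wording illustrative):

  #include <linux/types.h>

  struct drm_i915_gem_mmap_gtt {
          __u32 handle;
          /*
           * Historically never validated: old userspace may pass garbage
           * here, so rejecting non-zero values now would break the ABI.
           * Do not add the check.
           */
          __u32 pad;
          __u64 offset;
  };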
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191207222644.2830129-1-chris@chris-wilson.co.uk
"Have you tried switching it off and on again?"
Set the size of the mm to 0 to disable all PD cachelines, before
enabling the whole mm again. Let's see if that tricks the TLB into
reloading.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191208143648.2986669-1-chris@chris-wilson.co.uk
Our asserts allow for the PDEs to be allocated concurrently, but we did
not account for the aliasing-ppgtt to be preallocated on top.
Testcase: igt/gem_ppgtt #bsw
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191207221453.2802627-1-chris@chris-wilson.co.uk