Each plane may have different stride limitations. Let's add a new
plane function to return the maximum stride for each plane. There's
going to be some use for this outside the .atomic_check() stuff, hence
the separate hook.
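As a rough sketch of the shape of such a hook (the signature and the limits
below are illustrative, not the final interface):

  struct intel_plane {
          /* ... existing members ... */
          unsigned int (*max_stride)(struct intel_plane *plane,
                                     u32 pixel_format, u64 modifier,
                                     unsigned int rotation);
  };

  static unsigned int i9xx_plane_max_stride(struct intel_plane *plane,
                                            u32 pixel_format, u64 modifier,
                                            unsigned int rotation)
  {
          /* Illustrative limits only; the real values are per-platform. */
          if (modifier == I915_FORMAT_MOD_X_TILED)
                  return 8 * 1024;
          else
                  return 16 * 1024;
  }
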
v2: Fix ilk+ x-tiled max stride to be 32k (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-3-ville.syrjala@linux.intel.com
Rename some of the tile_offset() functions to aligned_offset() since
they operate on both linear and tiled surfaces. And we'll include
_plane_ in the name of all the variants that take a plane state.
Should make it more clear which function to use where.
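For illustration, the convention aimed at looks roughly like this (the names
are indicative only and the arguments are elided):

  intel_compute_aligned_offset(...)        /* takes explicit fb/rotation args */
  intel_plane_compute_aligned_offset(...)  /* variant taking a plane state */
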
v2: Pimp the patch subject a bit (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-2-ville.syrjala@linux.intel.com
If the caller supplies more than 4G of objects and then one that has to
be in the low 4G, it is possible for the low 4G to be full before we
attempt to find room for the last object that must be there. As we don't
reorder the two types, every pass hits the same problem and we fail with
ENOSPC. However, if we impose a little bit of ordering between the two
classes of objects, on the second pass we will be able to fit the
special object as we do it first. For setups that only use !48b objects,
we now reverse the order between passes, hopefully making the subsequent
passes more likely to succeed given that we are trying a different
order (rather than repeating the previous pass!)
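A minimal sketch of the ordering idea (the list and field names only loosely
follow the execbuf code and are illustrative):

  /*
   * Objects that must sit below 4G go to the head of the unbound list,
   * 48b-capable objects to the tail, so that a retry pass places the
   * restricted objects while the low 4G is still empty.
   */
  if (entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS)
          list_add_tail(&vma->exec_link, &eb->unbound);
  else
          list_add(&vma->exec_link, &eb->unbound);
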
v2: Quick one line explanation for the relative priorities given to
reservations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912101133.31377-1-chris@chris-wilson.co.uk
Userspace should be free to race against itself and shoot itself in
the foot if it so desires to adjust a parameter at the same time as
submitting a batch to that context. As such, the struct_mutex in context
setparam is only being used to serialise userspace against itself and
not for any protection of internal structs and so is superfluous.
v2: Separate user_flags from internal flags to reduce chance of
interference; and use locked bit ops for user updates.
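A sketch of the split (the field and bit names are illustrative):

  struct i915_gem_context {
          unsigned long flags;            /* internal, driver-owned state */
          unsigned long user_flags;       /* toggled via context setparam */
  };

  /* Locked (atomic) bit ops keep racing setparam calls safe without
   * taking struct_mutex.
   */
  if (args->value)
          set_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags);
  else
          clear_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags);
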
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911132206.23032-1-chris@chris-wilson.co.uk
The user parameters to put_image are not copied back to userspace
(DRM_IOW), and so we can modify the ioctl parameters (having already been
copied to a temporary kernel struct) directly and use those in place,
avoiding another temporary malloc and lots of manual copying.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180906190144.1272-2-chris@chris-wilson.co.uk
Given that we are now reasonably confident in our ability to detect and
reserve the stolen memory (physical memory reserved for graphics by the
BIOS) for ourselves on most machines, we can put it to use. In this
case, we need a page to hold the overlay registers.
On an i915g running MythTv, H Buus noticed that
    commit 6a2c4232ec
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date: Tue Nov 4 04:51:40 2014 -0800

        drm/i915: Make the physical object coherent with GTT
introduced stuttering into his video playback. After discarding the
likely suspect of it being the physical cursor updates, we were left
with the use of the phys object for the overlay. And lo, if we
completely avoid using the phys object (allocated just once on module
load!) by switching to stolen memory, the stuttering goes away.
For lack of a better explanation, claim victory and kill two birds with
one stone.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107600
Fixes: 6a2c4232ec ("drm/i915: Make the physical object coherent with GTT")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180906190144.1272-1-chris@chris-wilson.co.uk
During modeset, the previously configured CSC coefficient matrix, if any,
will not persist. This can result in a blank screen, as the CSC mode will be
programmed while loading the LUT but the CSC coefficient matrix remains
unprogrammed.
Changes since V1:
- Removed platform check
Signed-off-by: P Raviraj Sitaram <raviraj.p.sitaram@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1536589634-29680-1-git-send-email-raviraj.p.sitaram@intel.com
We should update GuC power domain states even when GuC submission
is disabled, otherwise GuC might complain or ignore our requests.
This seems to be required for all currently released GuC firmwares.
v2: it is only needed by pre-Gen11 firmwares
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Spotswood <john.a.spotswood@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Tomasz Lis <tomasz.lis@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180910104150.101752-1-michal.wajdeczko@intel.com
Using the GuC, we cannot disable the user interrupt generation as we use
it for driving submission. And from Icelake, we no longer have the
ability to individually mask interrupt generation from each engine,
disabling our ability to fake missed interrupts.
In both cases, report back to userspace that the missed interrupt
generator is no longer available.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907112856.28242-1-chris@chris-wilson.co.uk
Introduce a complementary function to i915_driver_create() to undo all
that is created.
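Roughly, the new helper just unwinds what i915_driver_create() set up (a
sketch, not the exact implementation):

  static void i915_driver_destroy(struct drm_i915_private *i915)
  {
          struct pci_dev *pdev = i915->drm.pdev;

          drm_dev_fini(&i915->drm);
          kfree(i915);

          /* And make sure nothing is left dangling off the pci device. */
          pci_set_drvdata(pdev, NULL);
  }
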
Suggested-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180905140921.17467-2-chris@chris-wilson.co.uk
Future gen reduce the number of bits we will have available to
differentiate between contexts, so reduce the lifetime of the ID
assignment from that of the context to its current active cycle (i.e.
only while it is pinned for use by the HW, will it have a constant ID).
This means that instead of a max of 2k allocated contexts (worst case
before fun with bit twiddling), we instead have a limit of 2k in flight
contexts (minus a few that have been pinned by the kernel or by perf).
To reduce the number of context ids we require, we allocate a context id
on first use and mark it as pinned for as long as the GEM context itself is,
that is we keep it pinned while it is active on each engine. If we exhaust
our context id space, then we try to reclaim an id from an idle context.
In the extreme case where all context ids are pinned by active contexts,
we force the system to idle in order to recover ids.
We cannot reduce the scope of an HW-ID to an engine (allowing the same
gem_context to have different ids on each engine) as in the future we
will need to preassign an id before we know which engine the
context is being executed on.
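In rough pseudocode (helper names hypothetical), the intended lifetime is:

  /* Hold the hw id only while the context is pinned for use by the HW. */
  static int context_get_hw_id(struct i915_gem_context *ctx)
  {
          int err = 0;

          if (atomic_inc_return(&ctx->hw_id_pin_count) == 1) {
                  err = assign_hw_id(ctx);        /* may steal an id from an
                                                   * idle context, or idle the
                                                   * GPU as a last resort */
                  if (err)
                          atomic_dec(&ctx->hw_id_pin_count);
          }
          return err;
  }

  static void context_put_hw_id(struct i915_gem_context *ctx)
  {
          /* The id stays assigned but becomes reclaimable once unpinned. */
          atomic_dec(&ctx->hw_id_pin_count);
  }
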
v2: Improved commentary (Tvrtko) [I tried at least]
References: https://bugs.freedesktop.org/show_bug.cgi?id=107788
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904153117.3907-1-chris@chris-wilson.co.uk
If the previous modeset commit has completed and is no longer part of
the crtc state, skip waiting for it.
Ville pointed out that, in fact, the commit is never removed after a
modeset so the only way we could see a NULL here should be if there was
never a commit attached. Nevertheless, we have the evidence it can be
NULL and it has been defended against elsewhere, for example commit
93313538c1 ("drm/i915: Pass idle crtc_state to intel_dp_sink_crc").
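The defensive pattern amounts to (sketch):

  struct drm_crtc_commit *commit = crtc_state->commit;

  /* No commit attached (or none was ever made): nothing to wait for. */
  if (commit)
          wait_for_completion(&commit->hw_done);
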
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107792
Fixes: c44301fce6 ("drm/i915: Allow control of PSR at runtime through debugfs, v6")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904162902.2578-1-chris@chris-wilson.co.uk
Elsewhere we manipulate uncore.unclaimed_mmio_check and
i915_param.mmio_debug under the irq lock (e.g. preserving the current
value across a user forcewake grab), but do not protect the manipulation
inside intel_uncore_arm_unclaimed_mmio_detection() from concurrent
access, even from itself. This is an issue as we do call
arm_unclaimed_mmio_detection from multiple threads without coordination.
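A sketch of the fix is simply to take the same irq lock here as the other
manipulators do (the body of the check is elided):

  spin_lock_irq(&dev_priv->uncore.lock);

  if (dev_priv->uncore.unclaimed_mmio_check > 0) {
          /* ... existing unclaimed-mmio check and mmio_debug rearm ... */
  }

  spin_unlock_irq(&dev_priv->uncore.lock);
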
Suggested-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904131207.17563-1-chris@chris-wilson.co.uk
There are two issues with the current RPCS programming for Icelake:
Expansion of the slice count bitfield has been missed, as well as the
required programming workaround for the subslice count bitfield size
limitation.
1)
Bitfield width for configuring the active slice count has grown so we need
to program the GEN8_R_PWR_CLK_STATE accordingly.
Current code was always requesting eight times the number of slices (due to
writing to a bitfield starting three bits higher than it should). These
requests were luckily a) capped by the hardware to the available number of
slices, and b) we haven't yet exported the code to ask for reduced slice
configurations.
Due to both of the above, there was no impact from this incorrect programming
but we should still fix it.
2)
Due to the subslice count bitfield being only three bits wide and furthermore
capped to a maximum documented value of four, a special programming
workaround is needed to enable more than four subslices.
With this programming the driver has to consider the GT configuration as
2x4x8, while the hardware internally translates this to 1x8x8.
A limitation stemming from this is that either a subslice count between
one and four can be selected, or a subslice count equaling the total
number of subslices in all selected slices. In other words, odd subslice
counts greater than four are impossible, as are odd subslice counts
greater than a single slice subslice count.
This also had no impact in the current code base due to the breakage from 1)
always requesting more than one slice.
While fixing this we also add some asserts to flag up any future bitfield
overflows.
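As an illustration of the workaround in 2) (a sketch; the platform check and
field encoding are simplified):

  /*
   * The subslice count field is only 3 bits wide and capped at 4, so a
   * native 1x8x8 part is requested as 2x4x8 and translated back by the HW.
   */
  if (IS_GEN11(dev_priv) && subslices > 4) {
          slices *= 2;
          subslices /= 2;
  }
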
v2:
* Use a local in all branches for clarity. (Lionel)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Bspec: 12247
Reported-by: tony.ye@intel.com
Suggested-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: tony.ye@intel.com
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903113007.2643-1-tvrtko.ursulin@linux.intel.com
Continuing the fun of trying to find exactly the delay that is
sufficient to ensure that the page directory is fully loaded between
context switches, move the extra flush added in commit 70b73f9ac1
("drm/i915/ringbuffer: Delay after invalidating gen6+ xcs") to just
after we flush the pd. Entirely based on the empirical data of running
failing tests in a loop until we survive a day (beforehand the MTBF is 10-30
minutes).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107769
References: 70b73f9ac1 ("drm/i915/ringbuffer: Delay after invalidating gen6+ xcs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904063802.13880-1-chris@chris-wilson.co.uk
Currently, if the user has enabled mmio-debug around each register
access, we presume that we have then checked them all. However, it is
still possible through omission (raw register access) or external
interaction that the unclaimed access was not highlighted.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904111732.24266-1-chris@chris-wilson.co.uk
Older gen use a physical address for the hardware status page, for which
we use cache-coherent writes. As the writes are into the cpu cache, we use
a normal WB mapped page to read the HWS, used for our seqno tracking.
Anecdotally, I observed lost breadcrumb writes into the HWS on i965gm,
which so far have not reoccurred with this patch. How reliable that
evidence is remains to be seen.
v2: Explicitly pass the expected physical address to the hw
v3: Also remember the wild writes we once had for HWS above 4G.
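A sketch of the arrangement on these older parts (simplified; the real code
must also keep the page below 4G and handle teardown):

  /* One ordinary WB page: the CPU reads seqnos through the kernel
   * mapping, while the HW is handed the page's physical address.
   */
  unsigned long addr = __get_free_page(GFP_KERNEL | GFP_DMA32 | __GFP_ZERO);

  engine->status_page.page_addr = (void *)addr;
  I915_WRITE(HWS_PGA, (u32)virt_to_phys((void *)addr));
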
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903152304.31589-2-chris@chris-wilson.co.uk
We currently assert that if the target is in a CPU write domain, we use
a CPU reloc path rather than the GPU reloc path. However, we have a debug
override to force the GPU path and that unfortunately hits the assert.
Include the async clflush under the debug option to ensure correct
behaviour even when debugging, and strict when not.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903150216.19965-1-chris@chris-wilson.co.uk
Add a mode to debugfs/drop-caches to flush unwanted requests off the GPU
(by wedging the device and resetting). This is very useful if a test
terminated leaving a long queue of hanging batches that would ordinarily
require a round trip through hangcheck for each.
It reduces the inter-test operation to just a write into drop-caches to
reset driver/GPU state between tests.
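A sketch of the new mode (the flag name is illustrative):

  if (val & DROP_RESET_ACTIVE) {          /* illustrative flag name */
          i915_gem_set_wedged(i915);      /* cancel everything in flight */
          /* ... followed by a reset to unwedge and reload the GPU ... */
  }
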
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-5-chris@chris-wilson.co.uk
We currently try to pin and allocate the whole buffer at a time. If that
object is larger than RAM, we will try to pin the whole of physical
memory, force the machine into oom, and then still fail the allocation.
If the request is obviously too large, error out early. We opt to do
this in the backend to make it easy to use alternate paths that do not
require the entire object pinned, or may easily handle proxy objects
that are larger than physical memory.
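A sketch of the early check in the get-pages backend:

  /* An object that can never fit in physical memory can never have all
   * of its backing pages pinned at once; fail before invoking the oom
   * killer.
   */
  if (obj->base.size >> PAGE_SHIFT > totalram_pages)
          return -ENOMEM;
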
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-4-chris@chris-wilson.co.uk
If we fail to write the user relocation back when it is changed, force
ourselves to take the slow relocation path where we can handle faults in
the write path. There is still an element of dubiousness as having
patched up the batch to use the correct offset, it no longer matches the
presumed_offset in the relocation, so a second pass may miss any changes
in layout.
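In outline (names illustrative):

  /* The pagefault-disabled fast path cannot service a fault on the
   * user's relocation array; report it and let the caller repeat the
   * relocations via the slow, faultable path.
   */
  if (unlikely(__put_user(offset, &user_reloc->presumed_offset)))
          return -EFAULT;
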
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-3-chris@chris-wilson.co.uk
We do not explicitly mark the PTE for the user's GTT mmap as being
wrprotect, so we don't get a refault when we would need to change a
read-only mmapping into read-write. As such, we must presume that if the
vma has PROT_WRITE it may be written to; although this is supposed to be
indicated by set-domain, there are cases (e.g. after swap) where
userspace may not be aware of the implicit domain change.
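A sketch of the consequence in the fault handler:

  /* Any writable GTT mmapping may dirty the pages behind our back, as
   * no further fault will tell us about the first write.
   */
  if (area->vm_flags & VM_WRITE)
          obj->mm.dirty = true;
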
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-2-chris@chris-wilson.co.uk
We only call unset_wedged on the global reset path (since it's a global
operation), so if we are terminally wedged and wish to reset, take the
full device reset path rather than the quicker individual engine resets.
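In effect (sketch):

  /* A wedged device can only be recovered by the global reset path,
   * which is where unset_wedged() lives.
   */
  if (i915_terminally_wedged(&i915->gpu_error))
          return false;   /* not eligible for an engine-only reset */
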
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-1-chris@chris-wilson.co.uk
Rather than inspect the global module parameter for whether full-ppgtt
may be enabled, we can inspect the context directly as to whether it has
its own vm.
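The test then becomes a property of the context (sketch):

  /* Whether this context runs with its own full address space. */
  static bool ctx_has_full_ppgtt(const struct i915_gem_context *ctx)
  {
          return ctx->ppgtt != NULL;
  }
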
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Bob Paauwe <bob.j.paauwe@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180901092451.7233-1-chris@chris-wilson.co.uk
So far we have been relying on the vm->file pointer being NULL to declare
something GGTT.
This has the unfortunate consequence that the default kernel context is
also declared GGTT and interferes with the following patch which wants to
instantiate VMA's and execute requests against the kernel context.
Change the is_ggtt test to use an explicit flag in struct i915_address_space
to solve this issue.
Note that the bit used is free since there is an alignment hole in the
struct.
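A sketch of the change:

  struct i915_address_space {
          /* ... */
          bool is_ggtt;   /* fits into an existing alignment hole */
  };

  #define i915_is_ggtt(vm) ((vm)->is_ggtt)
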
v2:
* Mark mock ggtt.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180831143643.12366-1-tvrtko.ursulin@linux.intel.com
commit afb2c4437d ("drm/i915/ddi: Push pipe clock enabling to encoders")
inadvertently stopped enabling the pipe clock for any DP-MST stream
after the first one. It also rearranged the pipe clock enabling wrt. the
initial MST payload allocation step (which may or may not be a
problem, but it's contrary to the spec).
Fix things by making the above commit truly a non-functional change.
Fixes: afb2c4437d ("drm/i915/ddi: Push pipe clock enabling to encoders")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107365
Reported-by: Lyude Paul <lyude@redhat.com>
Reported-by: dmummenschanz@web.de
Tested-by: dmummenschanz@web.de
Tested-by: Lyude Paul <lyude@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: dmummenschanz@web.de
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180831174739.30387-1-imre.deak@intel.com
This patch resolves a DMC FW loading issue.
Earlier DMC FW packages had only one DMC FW per stepping, but there is no
such restriction on the package side.
For ICL the icl_dmc_ver1_07.bin binary package has DMC FW for 2 steppings.
While reading the dmc_offset from the package header, the offset for the 1st
stepping used to come out as 0x0 and so worked fine until now.
But for the second and later steppings the offset is a non-zero number
expressed in dwords, so we need to convert it into bytes to fetch the correct
DMC FW from the correct place.
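The fix boils down to scaling the header field before using it (a sketch;
variable names only loosely follow the CSR loader):

  /* The per-stepping offset in the package header is in dwords. */
  readcount += dmc_offset * 4;    /* dwords -> bytes into the fw blob */
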
v2 : Added check for DMC FW max size for various gen. (Imre Deak)
v3 : Corrected naming convention for various gen. (Imre Deak)
v4 : Initialized max_fw_size to 0
v5 : Corrected DMC FW MAX_SIZE for various gen. (Imre Deak)
v6 : Fixed the typo issues.
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Jyoti Yadav <jyoti.r.yadav@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1535695223-4648-1-git-send-email-jyoti.r.yadav@intel.com
Although we cannot do a full system-level test of suspend/hibernate from
deep within the kernel selftests, we can exercise the GEM subsystem in
isolation and simulate the external effects (such as losing stolen
contents and trashing the register state).
v2: Don't forget to hold rpm
v3: Suspend the GTT mappings, and more rpm!
References: https://bugs.freedesktop.org/show_bug.cgi?id=96526
References: 5ab57c7020 ("drm/i915: Flush logical context image out to memory upon suspend")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jakub Bartmiński <jakub.bartminski@intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Jakub Bartmiński <jakub.bartminski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180830134806.21939-1-chris@chris-wilson.co.uk
The optimisation inherent in commit 6a2c4232ec ("drm/i915: Make the
physical object coherent with GTT") relies on that once we allocated a
cursor we would have coherent, zero overhead access to the scanout plane
holding the cursor. That is, we could then do the very frequent cursor
updates X enjoys with no indirection or kernel involvement. However,
that all hinges on the GGTT mmap of the cursor being pinned and not
requiring refaulting on each access -- handling such a page fault likely
requires the busy GGTT to be rearranged causing a stall. A very simple
fix is then to handle the physical cursor exactly like other cursors and
keep its vma pinned while active.
References: https://bugs.freedesktop.org/show_bug.cgi?id=107600
References: 6a2c4232ec ("drm/i915: Make the physical object coherent with GTT")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180817082405.755-1-chris@chris-wilson.co.uk
During stress testing of full-ppgtt (on Baytrail at least), we found
that the invalidation around a context/mm switch was insufficient (writes
would go astray). Adding a second MI_FLUSH_DW barrier prevents this, but
it is unclear as to whether this is merely a delaying tactic or if it is
truly serialising with the TLB invalidation. Either way, it is
empirically required.
v2: Avoid the loop for readability;
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107715
References: https://bugs.freedesktop.org/show_bug.cgi?id=107759
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180830161042.29193-1-chris@chris-wilson.co.uk
We need to clear the register in order to get a correct value after the
next potential hang.
v2: Centralize error register clearing in i915_irq.c (Chris)
v3: Don't read gen8 register on < gen6 (Chris)
v4: Don't swap gen8+ & gen6+ code... (Chris)
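A sketch of the clearing step (gen8+ shown; gen6/7 use the per-engine fault
register instead):

  /* Zero the fault register once its value has been captured so that
   * the next hang reports fresh state rather than a stale fault.
   */
  I915_WRITE(GEN8_RING_FAULT_REG, 0);
  POSTING_READ(GEN8_RING_FAULT_REG);
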
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180830132424.21940-1-lionel.g.landwerlin@intel.com
On finishing the reset, the intention is to restart the GPU before we
relinquish the forcewake taken to handle the reset - the goal being the
GPU reloads a context before it is allowed to sleep. For this purpose,
we used tasklet_kill() which, although it accomplished the goal of
restarting the GPU, carried with it a sting in its tail: it cleared the
TASKLET_STATE_SCHED bit. This meant that if another CPU queued a new
request to this engine, we would clear the flag and later attempt to
requeue the tasklet on the local CPU, breaking the per-cpu softirq
lists.
Remove the dangerous tasklet_kill() and just run the tasklet func
directly as we know it is safe to do so (the tasklets are internally
locked to allow mixed usage from direct submission).
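A sketch of the direct invocation:

  struct tasklet_struct *t = &engine->execlists.tasklet;

  /* Safe to call directly: the func takes its own locks so it can be
   * mixed with direct submission, and calling it leaves the softirq
   * scheduling state untouched.
   */
  t->func(t->data);
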
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180828152702.27536-1-chris@chris-wilson.co.uk
During power domains initialization we acquire power well references for
power wells in the INIT power domain. The rest of power wells - which
BIOS could have left enabled - we can only acquire references as needed
during display HW readout and so must defer sanitization until then
(also implying that we must always do HW readout to cleanup unused power
wells).
Thus during initialization these latter power wells can have a refcount
of 0 while still being enabled. To avoid the false-positive state
mismatch error this causes, remove the check from
intel_power_domains_init_hw() and rely on the state check in
intel_power_domains_enable() which follows the HW readout.
v2:
- Add comment to log and code clarifying how unused power wells get
disabled. (Chris)
Fixes: 6dfc4a8f13 ("drm/i915: Verify power domains after enabling them")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
References: https://bugs.freedesktop.org/show_bug.cgi?id=107411
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180828122231.14336-1-imre.deak@intel.com
ICL requires two planes for scanning out an NV12 framebuffer. Do
not advertise support for creating NV12 framebuffers until the required
plane programming is implemented.
v2: Do not allow adding buffers.
Check inside skl_plane_has_planar (Ville)
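A sketch of the gating, per the v2 note, inside skl_plane_has_planar (the
existing per-pipe/per-plane checks are elided):

  static bool skl_plane_has_planar(struct drm_i915_private *dev_priv,
                                   enum pipe pipe, enum plane_id plane_id)
  {
          /* Two-plane NV12 programming for gen11 is not implemented yet. */
          if (INTEL_GEN(dev_priv) >= 11)
                  return false;

          /* ... existing per-pipe/per-plane checks ... */
          return true;
  }
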
Bspec: Plane Planar YUV programming (18566)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824203856.17700-2-dhinakaran.pandiyan@intel.com