Move the display w/a #1175 to a better place: the new skl+ specific
plane->check() hook. This leaves skl_check_plane_surface() to deal with
the gtt offset and src coordinate handling as originally envisioned.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-12-ville.syrjala@linux.intel.com
Move the skl+ specific framebuffer related checks from
intel_plane_atomic_check_with_state() into a new function
(skl_plane_check_fb()) which we'll simply call from the skl
plane->check() hook.
v2: Split out the Y/Yf+CCS vs. interlaced change (José)
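Roughly, the resulting shape is (a sketch only; the actual checks and
error handling are elided):

  static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
                                const struct intel_plane_state *plane_state)
  {
          /* the skl+ specific fb checks (tiling vs. rotation etc.) */
          return 0;
  }

  static int skl_plane_check(struct intel_crtc_state *crtc_state,
                             struct intel_plane_state *plane_state)
  {
          int ret;

          ret = skl_plane_check_fb(crtc_state, plane_state);
          if (ret)
                  return ret;

          /* ... clipping, scaling and surface checks follow ... */
          return 0;
  }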
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-11-ville.syrjala@linux.intel.com
Split up intel_check_primary_plane() and intel_check_sprite_plane()
into per-platform variants. This way we can get a unified behaviour
between the SKL universal planes, and we stop checking non-SKL
specific scaling limits for the "sprite" planes. And we now get
a natural place to add more platform specific checks.
v2: Split the .check_plane() calling convention change out (José)
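As a sketch, the hook assignment becomes platform specific (the pre-skl
hook names here are illustrative):

  if (INTEL_GEN(dev_priv) >= 9)
          plane->check_plane = skl_plane_check;
  else if (INTEL_GEN(dev_priv) >= 7)
          plane->check_plane = ivb_sprite_check;  /* illustrative */
  else
          plane->check_plane = g4x_sprite_check;  /* illustrative */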
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-10-ville.syrjala@linux.intel.com
We can easily calculate the plane can_scale/min_downscale on demand.
And later on we'll probably want to start calculating these dynamically
based on the cdclk just as skl already does.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-9-ville.syrjala@linux.intel.com
Let's store the final plane stride in the plane state. This avoids
having to pick between the normal vs. rotated stride during hardware
programming. And once we get GTT remapping the plane stride will
no longer match the fb stride so we'll need a place to store it
anyway.
v2: Keep checking fb->pitches[0] for cursor as later on we won't
populate plane_state->color_plane[0].stride for invisible planes
and we have been checking the cursor fb stride even for invisible
planes
v3: s/betwen/between in commit msg (José)
v4: Check color_plane[0].stride instead of fb->pitches[0] in
the skl_check_main_surface() X-tiling kludge
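As a sketch, the stride is resolved once during .check() (assuming an
intel_fb_pitch() style helper that picks the normal vs. rotated fb
stride):

  plane_state->color_plane[0].stride =
          intel_fb_pitch(fb, 0, plane_state->base.rotation);

and the hardware programming then just reads
plane_state->color_plane[0].stride with no rotation special cases.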
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911150139.23922-1-ville.syrjala@linux.intel.com
Each plane may have different stride limitations. Let's add a new
plane function to return the maximum stride for each plane. There's
going to be some use for this outside the .atomic_check() stuff, hence
the separate hook.
v2: Fix ilk+ x-tiled max stride to be 32k (José)
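A sketch of such a hook for the pre-skl primary planes (the 32k X-tiled
limit is the v2 fix above; the other value is illustrative):

  static u32 i9xx_plane_max_stride(struct intel_plane *plane,
                                   u32 pixel_format, u64 modifier,
                                   unsigned int rotation)
  {
          struct drm_i915_private *dev_priv = to_i915(plane->base.dev);

          if (INTEL_GEN(dev_priv) >= 5 &&
              modifier == I915_FORMAT_MOD_X_TILED)
                  return 32 * 1024;       /* ilk+ X-tiled */

          return 16 * 1024;               /* illustrative */
  }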
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-3-ville.syrjala@linux.intel.com
Rename some of the tile_offset() functions to aligned_offset() since
they operate on both linear and tiled surfaces. And we'll include
_plane_ in the name of all the variants that take a plane state.
Should make it more clear which function to use where.
v2: Pimp the patch subject a bit (José)
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907152413.15761-2-ville.syrjala@linux.intel.com
If the caller supplies more than 4G of objects and then one that has to
be in the low 4G, it is possible for the low 4G to be full before we
attempt to find room for the last object that must be there. As we don't
reorder the two types, every pass hits the same problem and we fail with
ENOSPC. However, if we impose a little bit of ordering between the two
classes of objects, on the second pass we will be able to fit the
special object as we do it first. For setups that only use !48b objects,
we now reverse the order between passes, hopefully making the subsequent
passes more likely to succeed given that we are trying a different
order (rather than repeating the previous pass!).
v2: Quick one line explanation for the relative priorities given to
reservations.
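A sketch of those relative priorities when building the list of objects
to reserve (the execbuf flag names are uapi; the list names are
illustrative):

  if (entry->flags & EXEC_OBJECT_PINNED)
          list_add(&vma->exec_link, &pinned);     /* fixed offset, first */
  else if (!(entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS))
          list_add(&vma->exec_link, &low_4g);     /* constrained to low 4G */
  else
          list_add_tail(&vma->exec_link, &rest);  /* can go anywhere */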
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180912101133.31377-1-chris@chris-wilson.co.uk
Userspace should be free to race against itself and shoot itself in
the foot if it so desires to adjust a parameter at the same time as
submitting a batch to that context. As such, the struct_mutex in context
setparam is only being used to serialise userspace against itself and
not for any protection of internal structs and so is superfluous.
v2: Separate user_flags from internal flags to reduce chance of
interference; and use locked bit ops for user updates.
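A sketch of the v2 scheme, assuming a separate ctx->user_flags word and
a bit name along these lines:

  /* setparam: locked bit ops, no struct_mutex required */
  if (args->value)
          set_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags);
  else
          clear_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags);

  /* readers sample the bit without holding any lock */
  if (test_bit(UCONTEXT_NO_ERROR_CAPTURE, &ctx->user_flags))
          return;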
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180911132206.23032-1-chris@chris-wilson.co.uk
The user parameters to put_image are not copied back to userspace
(DRM_IOW), and so we can modify the ioctl parameters (having already been
copied to a temporary kernel struct) directly and use those in place,
avoiding another temporary malloc and lots of manual copying.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180906190144.1272-2-chris@chris-wilson.co.uk
Given that we are now reasonably confident in our ability to detect and
reserve the stolen memory (physical memory reserved for graphics by the
BIOS) for ourselves on most machines, we can put it to use. In this
case, we need a page to hold the overlay registers.
On an i915g running MythTv, H Buus noticed that
commit 6a2c4232ec
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Nov 4 04:51:40 2014 -0800
drm/i915: Make the physical object coherent with GTT
introduced stuttering into his video playback. After discarding the
likely suspect of it being the physical cursor updates, we were left
with the use of the phys object for the overlay. And lo, if we
completely avoid using the phys object (allocated just once on module
load!) by switching to stolen memory, the stuttering goes away.
For lack of a better explanation, claim victory and kill two birds with
one stone.
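A sketch of the allocation, falling back to a regular internal object if
stolen memory is unavailable (the stolen helper returns NULL on failure
at this point in time):

  obj = i915_gem_object_create_stolen(dev_priv, PAGE_SIZE);
  if (obj == NULL)
          obj = i915_gem_object_create_internal(dev_priv, PAGE_SIZE);
  if (IS_ERR(obj))
          return PTR_ERR(obj);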
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107600
Fixes: 6a2c4232ec ("drm/i915: Make the physical object coherent with GTT")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180906190144.1272-1-chris@chris-wilson.co.uk
During modeset, any previously configured csc coefficient matrix will
not persist. This can result in a blank screen as the csc mode will be
programmed while loading the LUT but the csc coefficient matrix remains
unprogrammed.
Changes since V1:
- Removed platform check
Signed-off-by: P Raviraj Sitaram <raviraj.p.sitaram@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1536589634-29680-1-git-send-email-raviraj.p.sitaram@intel.com
We should update GuC power domain states also when GuC submission
is disabled, otherwise GuC might complain or ignore our requests.
This seems to be required for all currently released GuC firmwares.
v2: it is only needed by pre-Gen11 firmwares
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: John Spotswood <john.a.spotswood@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Tomasz Lis <tomasz.lis@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180910104150.101752-1-michal.wajdeczko@intel.com
When using the GuC, we cannot disable user interrupt generation as we use
it for driving submission. And from Icelake onwards, we no longer have the
ability to individually mask interrupt generation from each engine,
which removes our ability to fake missed interrupts.
In both cases, report back to userspace that the missed interrupt
generator is no longer available.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180907112856.28242-1-chris@chris-wilson.co.uk
Introduce a complementary function to i915_driver_create() to undo all
that is created.
Suggested-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180905140921.17467-2-chris@chris-wilson.co.uk
Future gen reduce the number of bits we will have available to
differentiate between contexts, so reduce the lifetime of the ID
assignment from that of the context to its current active cycle (i.e.
only while it is pinned for use by the HW, will it have a constant ID).
This means that instead of a max of 2k allocated contexts (worst case
before fun with bit twiddling), we instead have a limit of 2k in flight
contexts (minus a few that have been pinned by the kernel or by perf).
To reduce the number of context ids we require, we allocate a context id
on first use and mark it as pinned for as long as the GEM context itself
is, that is we keep it pinned while it is active on each engine. If we
exhaust our context id space, then we try to reclaim an id from an idle
context.
In the extreme case where all context ids are pinned by active contexts,
we force the system to idle in order to recover ids.
We cannot reduce the scope of an HW-ID to an engine (allowing the same
gem_context to have different ids on each engine) as in the future we
will need to preassign an id before we know which engine the
context is being executed on.
v2: Improved commentary (Tvrtko) [I tried at least]
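A sketch of the allocation on first pin (steal_hw_id() is illustrative;
the hw_ida already exists in the driver):

  id = ida_simple_get(&i915->contexts.hw_ida,
                      0, MAX_CONTEXT_HW_ID, GFP_KERNEL);
  if (id < 0)
          id = steal_hw_id(i915); /* reclaim from an idle context,
                                   * idling the GPU as a last resort */
  if (id < 0)
          return id;
  ctx->hw_id = id;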
References: https://bugs.freedesktop.org/show_bug.cgi?id=107788
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904153117.3907-1-chris@chris-wilson.co.uk
If the previous modeset commit has completed and is no longer part of
the crtc state, skip waiting for it.
Ville pointed out that, in fact, the commit is never removed after a
modeset so the only way we could see a NULL here should be if there was
never a commit attached. Nevertheless, we have the evidence it can be
NULL and it has been defended against elsewhere, for example commit
93313538c1 ("drm/i915: Pass idle crtc_state to intel_dp_sink_crc").
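The defensive form is then simply (a sketch using the drm_crtc_commit
completion; error handling abbreviated):

  commit = crtc_state->commit;
  if (commit) {
          ret = wait_for_completion_interruptible(&commit->hw_done);
          if (ret)
                  return ret;
  }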
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107792
Fixes: c44301fce6 ("drm/i915: Allow control of PSR at runtime through debugfs, v6")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904162902.2578-1-chris@chris-wilson.co.uk
Elsewhere we manipulate uncore.unclaimed_mmio_check and
i915_param.mmio_debug under the irq lock (e.g. preserving the current
value across a user forcewake grab), but do not protect the manipulation
inside intel_uncore_arm_unclaimed_mmio_detection() from concurrent
access, even from itself. This is an issue as we do call
arm_unclaimed_mmio_detection from multiple threads without coordination.
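A sketch of the fix, taking the same irq-safe lock used elsewhere around
the check and the bookkeeping (the rearming logic is abbreviated):

  spin_lock_irq(&dev_priv->uncore.lock);
  if (check_for_unclaimed_mmio(dev_priv))
          dev_priv->uncore.unclaimed_mmio_check--;  /* consume a report */
  spin_unlock_irq(&dev_priv->uncore.lock);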
Suggested-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904131207.17563-1-chris@chris-wilson.co.uk
There are two issues with the current RPCS programming for Icelake:
Expansion of the slice count bitfield has been missed, as well as the
required programming workaround for the subslice count bitfield size
limitation.
1)
Bitfield width for configuring the active slice count has grown so we need
to program the GEN8_R_PWR_CLK_STATE accordingly.
Current code was always requesting eight times the number of slices (due
to writing to a bitfield starting three bits higher than it should). These
requests were luckily a) capped by the hardware to the available number of
slices, and b) we haven't yet exported the code to ask for reduced slice
configurations.
Due to both of the above there was no impact from this incorrect
programming, but we should still fix it.
2)
Due to the subslice count bitfield being only three bits wide, and
furthermore capped to a maximum documented value of four, a special
programming workaround is needed to enable more than four subslices.
With this workaround the driver has to consider the GT configuration as
2x4x8, while the hardware internally translates this to 1x8x8.
A limitation stemming from this is that either a subslice count between
one and four can be selected, or a subslice count equaling the total
number of subslices in all selected slices. In other words, odd subslice
counts greater than four are impossible, as are odd subslice counts
greater than a single slice subslice count.
This also had no impact in the current code base due to the breakage from
1) always requesting more than one slice.
While fixing this we also add some asserts to flag up any future bitfield
overflows.
v2:
* Use a local in all branches for clarity. (Lionel)
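As a sketch, the 2) workaround ends up along these lines when building
the RPCS value (variable names illustrative):

  /*
   * More than four subslices needs the 2x4x8 trick: request twice the
   * slices and half the subslices, and disable subslice power gating so
   * the hw enables all subslices within each selected slice.
   */
  if (IS_GEN11(dev_priv) && slices == 1 && subslices > 4) {
          GEM_BUG_ON(subslices & 1);      /* odd counts > 4 are impossible */
          subslice_pg = false;
          slices *= 2;
          subslices /= 2;
  }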
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Bspec: 12247
Reported-by: tony.ye@intel.com
Suggested-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: tony.ye@intel.com
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903113007.2643-1-tvrtko.ursulin@linux.intel.com
Continuing the fun of trying to find exactly the delay that is
sufficient to ensure that the page directory is fully loaded between
context switches, move the extra flush added in commit 70b73f9ac1
("drm/i915/ringbuffer: Delay after invalidating gen6+ xcs") to just
after we flush the pd. Entirely based on the empirical data of running
failing tests in a loop until we survive a day (before this patch the
mtbf is 10-30 minutes).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107769
References: 70b73f9ac1 ("drm/i915/ringbuffer: Delay after invalidating gen6+ xcs")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904063802.13880-1-chris@chris-wilson.co.uk
Currently, if the user has enabled mmio-debug around each register
access, we presume that we have then checked them all. However, it is
still possible through omission (raw register access) or external
interaction that the unclaimed access was not highlighted.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904111732.24266-1-chris@chris-wilson.co.uk
Older gen use a physical address for the hardware status page, for which
we use cache-coherent writes. As the writes are into the cpu cache, we use
a normal WB mapped page to read the HWS, used for our seqno tracking.
Anecdotally, I observed lost breadcrumb writes into the HWS on i965gm,
which so far have not reoccurred with this patch. How reliable that
evidence is remains to be seen.
v2: Explicitly pass the expected physical address to the hw
v3: Also remember the wild writes we once had for HWS above 4G.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903152304.31589-2-chris@chris-wilson.co.uk
We currently assert that if the target is in a CPU write domain, we use
a CPU reloc path rather than the GPU reloc path. However, we have a debug
override to force the GPU path and that unfortunately hits the assert.
Include the async clflush under the debug option to ensure correct
behaviour even when debugging, and strict when not.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903150216.19965-1-chris@chris-wilson.co.uk
Add a mode to debugfs/drop-caches to flush unwanted requests off the GPU
(by wedging the device and resetting). This is very useful if a test
terminated leaving a long queue of hanging batches that would ordinarily
require a round trip through hangcheck for each.
It reduces the inter-test operation to just a write into drop-caches to
reset driver/GPU state between tests.
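A rough sketch of the new mode, assuming a DROP_RESET_ACTIVE bit in the
drop-caches mask (recovery after the wedge is elided):

  if (val & DROP_RESET_ACTIVE && !intel_engines_are_idle(i915))
          i915_gem_set_wedged(i915);      /* kick everything off the GPU */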
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-5-chris@chris-wilson.co.uk
We currently try to pin and allocate the whole buffer at a time. If that
object is larger than RAM, we will try to pin the whole of physical
memory, force the machine into oom, and then still fail the allocation.
If the request is obviously too large, error out early. We opt to do
this in the backend to make it easy to use alternate paths that do not
require the entire object pinned, or may easily handle proxy objects
that are larger than physical memory.
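The early check in the shmem backend amounts to (a sketch; totalram_pages
was still a plain variable at this point):

  /* Fail early if the object cannot possibly fit in physical memory,
   * rather than pinning all of RAM and invoking the oom-killer. */
  if (obj->base.size >> PAGE_SHIFT > totalram_pages)
          return -ENOMEM;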
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-4-chris@chris-wilson.co.uk
If we fail to write the user relocation back when it is changed, force
ourselves to take the slow relocation path where we can handle faults in
the write path. There is still an element of dubiousness: having
patched up the batch to use the correct offset, it no longer matches the
presumed_offset in the relocation, so a second pass may miss any changes
in layout.
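A sketch of the trigger (names illustrative); the caller then retries via
the slow path where taking a fault is allowed:

  if (unlikely(__put_user(offset, &user_relocs[i].presumed_offset)))
          return -EFAULT; /* fall back to the slow, faultable path */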
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-3-chris@chris-wilson.co.uk
We do not explicitly mark the PTE for the user's GTT mmap as being
wrprotect, so we don't get a refault when we would need to change a
read-only mapping into read-write. As such, we must presume that if the
vma has PROT_WRITE it may be written to; although this is supposed to be
indicated by set-domain, there are cases (e.g. after swap) where
userspace may not be aware of the implicit domain change.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-2-chris@chris-wilson.co.uk
We only call unset_wedged on the global reset path (since it's a global
operation), so if we are terminally wedged and wish to reset, take the
full device reset path rather than the quicker individual engine resets.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903083337.13134-1-chris@chris-wilson.co.uk
Rather than inspect the global module parameter for whether full-ppgtt
may be enabled, we can inspect the context directly as to whether it has
its own vm.
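i.e. a check along these lines (a sketch; in this era a context owning
its vm has a non-NULL ctx->ppgtt):

  static bool ctx_has_own_vm(const struct i915_gem_context *ctx)
  {
          return ctx->ppgtt != NULL;      /* was: USES_FULL_PPGTT(i915) */
  }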
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Bob Paauwe <bob.j.paauwe@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180901092451.7233-1-chris@chris-wilson.co.uk
So far we have been relying on vm->file pointer being NULL to declare
something GGTT.
This has the unfortunate consequence that the default kernel context is
also declared GGTT and interferes with the following patch which wants to
instantiate VMAs and execute requests against the kernel context.
Change the is_ggtt test to use an explicit flag in struct
i915_address_space to solve this issue.
Note that the bit used is free since there is an alignment hole in the
struct.
v2:
* Mark mock ggtt.
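A sketch of the flag and test, using the names from the description
above:

  struct i915_address_space {
          /* ... existing members ... */
          bool is_ggtt;   /* fits in an existing alignment hole */
  };

  #define i915_is_ggtt(vm) ((vm)->is_ggtt)

  /* set once at init for the real and the mock GGTT alike: */
  ggtt->vm.is_ggtt = true;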
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180831143643.12366-1-tvrtko.ursulin@linux.intel.com
commit afb2c4437d ("drm/i915/ddi: Push pipe clock enabling to encoders")
inadvertently stopped enabling the pipe clock for any DP-MST stream
after the first one. It also rearranged the pipe clock enabling wrt. the
initial MST payload allocation step (which may or may not be a
problem, but it's contrary to the spec).
Fix things by making the above commit truly a non-functional change.
Fixes: afb2c4437d ("drm/i915/ddi: Push pipe clock enabling to encoders")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107365
Reported-by: Lyude Paul <lyude@redhat.com>
Reported-by: dmummenschanz@web.de
Tested-by: dmummenschanz@web.de
Tested-by: Lyude Paul <lyude@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: dmummenschanz@web.de
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180831174739.30387-1-imre.deak@intel.com