On TGL some registers moved from DDI to transcoder and the
DisplayPort training sequence has a separate BSpec page.
I started adding 'ifs' to the original intel_ddi_pre_enable_dp() but
it was becoming really hard to follow, so a new and cleaner function
for TGL was added, with comments for all the steps. It's similar to ICL,
but different enough to deserve a new function.
The rest of DisplayPort enable and the whole disable sequences
remained the same.
v2: FEC and DSC should be enabled on the sink side before starting link
training (Maarten reported and Manasi confirmed the DSC part)
v3: Add call to enable FEC on step 7.l (Manasi)
BSpec: 49190
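A minimal sketch of the resulting dispatch (treat the exact split of
helpers as illustrative of the approach, not the final code):

static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
				    const struct intel_crtc_state *crtc_state,
				    const struct drm_connector_state *conn_state)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);

	/* TGL follows its own BSpec sequence, so branch once at the top
	 * instead of sprinkling gen checks through the ICL/HSW path. */
	if (INTEL_GEN(dev_priv) >= 12)
		tgl_ddi_pre_enable_dp(encoder, crtc_state, conn_state);
	else
		hsw_ddi_pre_enable_dp(encoder, crtc_state, conn_state);
}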
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-16-lucas.demarchi@intel.com
Disable CRTCs/pipes in reverse order because some features (MST in
TGL+) require a master and slave relationship between pipes, so we
should always pick the lowest pipe as master: it is enabled first, and
by disabling in reverse order the master will be the last one to be
disabled.
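A rough sketch of the disable path using the reverse iterator added in
this series (the helper name here is illustrative):

static void intel_commit_modeset_disables(struct intel_atomic_state *state)
{
	struct intel_crtc_state *old_crtc_state, *new_crtc_state;
	struct intel_crtc *crtc;
	int i;

	/* Walk the pipes backwards so the master (lowest pipe) goes last. */
	for_each_oldnew_intel_crtc_in_state_reverse(state, crtc, old_crtc_state,
						    new_crtc_state, i) {
		if (!needs_modeset(new_crtc_state))
			continue;

		intel_old_crtc_state_disables(state, old_crtc_state,
					      new_crtc_state, crtc);
	}
}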
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-13-lucas.demarchi@intel.com
Same as for_each_oldnew_intel_crtc_in_state() but iterates in reverse
order.
v2: Fix additional blank line
v3: Rebase
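Roughly, the reverse variant mirrors the forward macro but starts from
the last CRTC index and counts down (sketch based on the shape of the
forward macro):

#define for_each_oldnew_intel_crtc_in_state_reverse(__state, crtc, old_crtc_state, new_crtc_state, __i) \
	for ((__i) = (__state)->base.dev->mode_config.num_crtc - 1; \
	     (__i) >= 0 && \
	     ((crtc) = to_intel_crtc((__state)->base.crtcs[__i].ptr), \
	      (old_crtc_state) = to_intel_crtc_state((__state)->base.crtcs[__i].old_state), \
	      (new_crtc_state) = to_intel_crtc_state((__state)->base.crtcs[__i].new_state), 1); \
	     (__i)--) \
		for_each_if(crtc)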
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-12-lucas.demarchi@intel.com
TGL PSR2 HW supports a bigger resolution, so let's add it.
BSpec: 50422, 49199
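The resolution check in intel_psr2_config_valid() then looks roughly
like this (limits taken from the BSpec pages above; treat the snippet
as a sketch):

	if (INTEL_GEN(dev_priv) >= 12) {
		psr_max_h = 5120;
		psr_max_v = 3200;
	} else if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) {
		psr_max_h = 4096;
		psr_max_v = 2304;
	}

	if (crtc_hdisplay > psr_max_h || crtc_vdisplay > psr_max_v)
		return false;	/* mode too large for PSR2 */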
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-10-lucas.demarchi@intel.com
On TGL+ it's possible to have PSR1 enabled on other ports besides DDIA.
PSR2 is still limited to DDIA. However, we currently handle only one
instance of the PSR struct, so let's guard intel_psr_init_dpcd() against
multiple eDP panels and warn about it.
v2: Reword commit message to be TGL+ only and with the info where
PSR1/PSR2 are supported (Lucas)
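A minimal sketch of the guard (assuming dev_priv->psr.dp records the
single panel we track):

void intel_psr_init_dpcd(struct intel_dp *intel_dp)
{
	struct drm_i915_private *dev_priv =
		to_i915(dp_to_dig_port(intel_dp)->base.base.dev);

	if (dev_priv->psr.dp) {
		DRM_WARN("More than one eDP panel found, PSR support should be extended\n");
		return;
	}

	/* continue with the normal DPCD probe for the first panel */
}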
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-6-lucas.demarchi@intel.com
The point of debug_object_activate is to mark the first, and only the
first, acquisition. The object then remains active until the last
release. However, we marked up all successful first acquires even though
we allowed concurrent parties to try and acquire the i915_active
simultaneously (serialised by the i915_active.mutex).
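The shape of the fix, as a sketch: only the acquire that takes the
count from zero marks the debug object active.

	err = mutex_lock_interruptible(&ref->mutex);
	if (!err) {
		if (!atomic_read(&ref->count))	/* first acquirer only */
			debug_active_activate(ref);
		atomic_inc(&ref->count);
		mutex_unlock(&ref->mutex);
	}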
Testcase: igt/gem_mmap_gtt/fault-concurrent
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190827132631.18627-1-chris@chris-wilson.co.uk
If we create a new live_context() we should have a mapping for each
engine. Document that assumption with an assertion.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190827094933.13778-1-chris@chris-wilson.co.uk
The intention is that we first try to pin the current vma into the
mappable aperture only if it is already in use or it fits in the free
space and will not cause contention. The first attempt was meant to be
using PIN_NOEVICT to reuse the current vma if possible, following up
with different eviction strategies.
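Sketch of the intended fault-path ordering (simplified from
i915_gem_mman.c; the exact flag combinations are illustrative):

	/* First try: reuse the current vma, do not evict anything. */
	vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
				       PIN_MAPPABLE | PIN_NONBLOCK | PIN_NOEVICT);
	if (IS_ERR(vma)) {
		/* Then fall back to a partial view and progressively
		 * more aggressive eviction strategies. */
		struct i915_ggtt_view view =
			compute_partial_view(obj, page_offset, MIN_CHUNK_PAGES);

		vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0,
					       PIN_MAPPABLE | PIN_NOSEARCH);
		if (IS_ERR(vma))
			vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0,
						       PIN_MAPPABLE);
	}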
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111485
Fixes: 6846895fde ("drm/i915: Replace PIN_NONFAULT with calls to PIN_NOEVICT")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190826130750.17272-1-chris@chris-wilson.co.uk
To properly handle asynchronous migration of batch objects, we need to
couple the fences on the incoming batch into the request and should not
assume that they always start idle.
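In practice that means awaiting the batch's fences before using it,
roughly:

	err = i915_request_await_object(rq, batch->obj, false);
	if (err == 0)
		err = i915_vma_move_to_active(batch, rq, 0);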
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190826072149.9447-1-chris@chris-wilson.co.uk
Currently, we don't call dma_set_max_seg_size() for i915 because we
intentionally do not limit the segment length that the device supports.
However, this results in a warning being emitted if we try to map
anything larger than SZ_64K on a kernel with CONFIG_DMA_API_DEBUG_SG
enabled:
[ 7.751926] DMA-API: i915 0000:00:02.0: mapping sg segment longer
than device claims to support [len=98304] [max=65536]
[ 7.751934] WARNING: CPU: 5 PID: 474 at kernel/dma/debug.c:1220
debug_dma_map_sg+0x20f/0x340
This was originally brought up on
https://bugs.freedesktop.org/show_bug.cgi?id=108517 , and the consensus
there was it wasn't really useful to set a limit (and that dma-debug
isn't really all that useful for i915 in the first place). Unfortunately
though, CONFIG_DMA_API_DEBUG_SG is enabled in the debug configs for
various distro kernels. Since a WARN_ON() will disable automatic problem
reporting (and cause any CI with said option enabled to start
complaining), we really should just fix the problem.
Note that as Chris Wilson and I discussed, the other solution for this
would be to make DMA-API not make such assumptions when a driver hasn't
explicitly set a maximum segment size. But, taking a look at the commit
which originally introduced this behavior, commit 78c47830a5
("dma-debug: check scatterlist segments"), there is an explicit mention
of this assumption and how it applies to devices with no segment size:
Conversely, devices which are less limited than the rather
conservative defaults, or indeed have no limitations at all
(e.g. GPUs with their own internal MMU), should be encouraged to
set appropriate dma_parms, as they may get more efficient DMA
mapping performance out of it.
So unless there are any concerns (I'm open to discussion!), let's just
follow suit and call dma_set_max_seg_size() with UINT_MAX as our limit
to silence any warnings.
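The change itself is a one-liner during probe, along these lines:

	/* In i915_driver_hw_probe(): the GPU has its own MMU, so do not
	 * advertise an artificial scatterlist segment limit. */
	dma_set_max_seg_size(&pdev->dev, UINT_MAX);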
Changes since v3:
* Drop patch for enabling CONFIG_DMA_API_DEBUG_SG in CI. It looks like
just turning it on causes the kernel to spit out bogus WARN_ONs()
during some igt tests which would otherwise require teaching igt to
disable the various DMA-API debugging options causing this. This is
too much work to be worth it, since DMA-API debugging is useless for
us. So, we'll just settle with this single patch to squelch WARN_ONs()
during driver load for users that have CONFIG_DMA_API_DEBUG_SG turned
on for some reason.
* Move dma_set_max_seg_size() call into i915_driver_hw_probe() - Chris
Wilson
Signed-off-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: <stable@vger.kernel.org> # v4.18+
Link: https://patchwork.freedesktop.org/patch/msgid/20190823205251.14298-1-lyude@redhat.com
vgpu ppgtt notification is split into 2 steps: the first step is to
update PVINFO's pdp register and the second is to write PVINFO's
g2v_notify register with an action code to trigger the ppgtt
notification on the GVT side.
Currently these steps are not atomic because there is no protection,
so it is easy to hit a race condition during MTBF, stress and IGT
testing and cause a GPU hang.
The solution is to add a lock to make the vgpu ppgtt notification an
atomic operation.
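Sketch of the locked notification (the lock placement and name are
assumed here; the register writes follow the existing notify helper):

	mutex_lock(&dev_priv->vgpu.lock);

	/* Step 1: publish the new PDP through PVINFO */
	I915_WRITE(vgtif_reg(pdp[0].lo), lower_32_bits(daddr));
	I915_WRITE(vgtif_reg(pdp[0].hi), upper_32_bits(daddr));

	/* Step 2: kick GVT with the action code */
	I915_WRITE(vgtif_reg(g2v_notify), VGT_G2V_PPGTT_L4_PAGE_TABLE_CREATE);

	mutex_unlock(&dev_priv->vgpu.lock);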
Cc: stable@vger.kernel.org
Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/1566543451-13955-1-git-send-email-xiaolin.zhang@intel.com
Avoid having to pass around (ctx, engine) everywhere by passing the
actual intel_context we intend to use. Today we preach this lesson to
igt_gpu_fill_dw and its callers' callers.
The immediate benefit for the GEM selftests is that we aim to use the
GEM context as the control, the source of the engines on which to test
the GEM context.
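The resulting signature change is roughly:

-int igt_gpu_fill_dw(struct i915_vma *vma,
-		    struct i915_gem_context *ctx,
-		    struct intel_engine_cs *engine,
-		    u64 offset, unsigned long count, u32 val);
+int igt_gpu_fill_dw(struct intel_context *ce, struct i915_vma *vma,
+		    u64 offset, unsigned long count, u32 val);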
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823235141.31799-1-chris@chris-wilson.co.uk
Our fence management is lazy, very lazy. If the user marks an object as
untiled, we do not immediately flush the fence but merely mark it as
dirty. On the next use we have to remember to check and remove the fence,
by which time we hope it is idle and we do not have to wait.
v2: Throw away the old fence on the next ggtt_pin.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111468
Fixes: 1f7fd484ff ("drm/i915: Replace i915_vma_put_fence()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823153944.20630-1-chris@chris-wilson.co.uk
In order for the Braswell top-level PD to remain the same from the time
of request construction to its submission onto HW, as we may be
asynchronously rewriting the page tables (thus changing the expected
register state after having already stored the old addresses in the
request), the top level PD must be preallocated.
So wave goodbye to our lazy allocation of those 4x2 pages.
v2: A little bit of write-flushing required (presumably it always has
been required, but now we are more susceptible and it is showing up!)
v3: Put back the forced-PD-reload on every batch, we can't survive
without it and explicitly marking the context for PD reload makes
Braswell turn nasty.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823141421.2398-1-chris@chris-wilson.co.uk
Sadly lockdep records when the irqs are re-enabled and then marks up the
fake lock as being irq-unsafe. Our hand is forced and so we must mark up
the entire fake lock critical section as irq-off.
Hopefully this is the last tweak required!
v2: Not quite, we need to mark the timeline spinlock as irqsafe. That
was a genuine bug being hidden by the earlier lockdep splat.
Fixes: d67739268c ("drm/i915/gt: Mark up the nested engine-pm timeline lock as irqsafe")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823132700.25286-2-chris@chris-wilson.co.uk
Use hweight8() instead of hweight32() for 8-bit masks. Doesn't actually
matter for us since the arch code will go for hweight32() anyway, but
maybe we still want to do this for documentation purposes?
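Purely illustrative example of the documentation value:

	u8 pipe_mask = BIT(PIPE_A) | BIT(PIPE_B);
	int num_pipes = hweight8(pipe_mask);	/* 8-bit mask -> hweight8() */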
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190821173033.24123-5-ville.syrjala@linux.intel.com
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
We may need to eliminate the crtc->index == pipe assumptions from
the code to support arbitrary pipes being fused off. Start that by
switching some bitmasks over to using pipe instead of the crtc index.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190821173033.24123-1-ville.syrjala@linux.intel.com
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Currently, the subslice_mask runtime parameter is stored as an
array of subslices per slice. Expand the subslice mask array to
better match what is presented to userspace through the
I915_QUERY_TOPOLOGY_INFO ioctl. The index into this array is
then calculated:
slice * subslice stride + subslice index / 8
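A hypothetical helper makes the indexing concrete (not part of the
patch, just the math spelled out):

static inline u8 *subslice_mask_byte(struct sseu_dev_info *sseu,
				     int slice, int subslice)
{
	/* e.g. slice 1, subslice 10, ss_stride 2 -> byte 1 * 2 + 10 / 8 = 3 */
	return &sseu->subslice_mask[slice * sseu->ss_stride +
				    subslice / BITS_PER_BYTE];
}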
v2: Fix 32-bit build
v3: Use new helper function in SSEU workaround warning message
v4: Use GEM_BUG_ON to force developers to use valid SSEU configurations
per platform (Chris)
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-12-stuart.summers@intel.com
Add a new function to copy subslices for a specified slice
between intel_sseu structures for the purpose of determining
power-gate status. Note that currently ss_stride has a max
of 1.
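A sketch of such a helper (close to what the patch adds; treat the
exact signature as illustrative):

void intel_sseu_copy_subslices(const struct sseu_dev_info *sseu, int slice,
			       u8 *to_mask)
{
	int offset = slice * sseu->ss_stride;

	memcpy(&to_mask[offset], &sseu->subslice_mask[offset], sseu->ss_stride);
}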
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-11-stuart.summers@intel.com
Add a subslice stride calculation when setting subslices. This
aligns more closely with the userspace expectation of the subslice
mask structure.
v2: Use local variable for subslice_mask on HSW and
clean up a few other subslice_mask local variable
changes
v3: Add GEM_BUG_ON for ss_stride to prevent array overflow (Chris)
Split main set function and refactors in intel_device_info.c
into separate patches (Chris)
v4: Reduce ss_stride size check when setting subslices per slice
based on actual expected max stride (Chris)
Move that GEM_BUG_ON check for the ss_stride out to the patch
which adds the ss_stride
v5: Use memcpy instead of looping through each stride index
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-8-stuart.summers@intel.com
When setting up subslice_mask, instead of operating on the slice
array directly, use a local variable to start bits per slice, then
use this to set the per slice array in one step.
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-6-stuart.summers@intel.com
Add a new SSEU runtime parameter, eu_stride, which is
used to mirror the userspace concept of a range of EUs
per subslice.
This patch simply adds the parameter and updates usage
in the QUERY_TOPOLOGY_INFO handler.
v2: Add GEM_BUG_ON to make sure eu_stride is valid
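The stride is just the number of bytes needed to hold one mask, e.g.
(macro name assumed):

#define GEN_SSEU_STRIDE(max_entries) DIV_ROUND_UP(max_entries, BITS_PER_BYTE)

	sseu->eu_stride = GEN_SSEU_STRIDE(sseu->max_eus_per_subslice);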
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-5-stuart.summers@intel.com
Add a new parameter, ss_stride, to the runtime info
structure. This is used to mirror the userspace concept
of subslice stride, which is a range of subslices per slice.
This patch simply adds the definition and updates usage
in the QUERY_TOPOLOGY_INFO handler.
v2: Add GEM_BUG_ON to make sure ss_stride is valid
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-4-stuart.summers@intel.com
Add a new function to allow each platform to set maximum
slice, subslice, and EU information to reduce code duplication.
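Sketch of such a helper; the stride derivation from the maxima is the
key part (names illustrative):

void intel_sseu_set_info(struct sseu_dev_info *sseu, u8 max_slices,
			 u8 max_subslices, u8 max_eus_per_subslice)
{
	sseu->max_slices = max_slices;
	sseu->max_subslices = max_subslices;
	sseu->max_eus_per_subslice = max_eus_per_subslice;

	sseu->ss_stride = GEN_SSEU_STRIDE(sseu->max_subslices);
	sseu->eu_stride = GEN_SSEU_STRIDE(sseu->max_eus_per_subslice);
}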
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-3-stuart.summers@intel.com
Use a local variable to find SSEU runtime information
in various debugfs functions.
v2: Remove extra line breaks per feedback from Chris
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823160307.180813-2-stuart.summers@intel.com
HCP/MFX power gating is disabled by default; turn it on for the vd
units available. User space will also issue MI_FORCE_WAKEUP properly to
wake up the proper subwell.
During driver load, init_clock_gating happens after device_info_init_mmio
reads the vdbox disable fuse register, so only the vd units present will
have this enabled.
BSpec: 14214
HSDES: 1209977827
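Sketch of the clock-gating hook (register/bit names approximate):

	u32 vd_pg_enable = 0;
	unsigned int i;

	/* Enable HCP/MFX powergating only for the vdboxes actually present. */
	for (i = 0; i < I915_MAX_VCS; i++) {
		if (HAS_ENGINE(dev_priv, _VCS(i)))
			vd_pg_enable |= VDN_HCP_POWERGATE_ENABLE(i) |
					VDN_MFX_POWERGATE_ENABLE(i);
	}

	I915_WRITE(POWERGATE_ENABLE,
		   I915_READ(POWERGATE_ENABLE) | vd_pg_enable);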
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Tony Ye <tony.ye@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-3-lucas.demarchi@intel.com
GAM registers located in the 0x4xxx range have been relocated to 0xCxxx;
this is to make space for global MOCS registers.
v2: Rename register and bitfield to its new name (suggested by Mika)
HSD: 399379
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823082055.5992-2-lucas.demarchi@intel.com
This patch fixes the intel_configure_pps_for_dsc_encoder() function to
use cpu_transcoder instead of encoder->type to select the correct DSC
registers; encoder->type was wrongly used for one DSC register instance
in the original patch.
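Illustrative shape of the fix (the register instance shown is just an
example of the pattern, not necessarily the one touched by the patch):

-	if (encoder->type == INTEL_OUTPUT_EDP)
+	if (cpu_transcoder == TRANSCODER_EDP)
 		I915_WRITE(DSCA_PICTURE_PARAMETER_SET_4, pps_val);
 	else
 		I915_WRITE(ICL_DSC0_PICTURE_PARAMETER_SET_4(pipe), pps_val);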
Fixes: 7182414e25 ("drm/i915/dp: Configure i915 Picture parameter Set registers during DSC enabling")
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190821215950.24223-1-manasi.d.navare@intel.com
The DSC engine on ICL supports only 8 and 10 BPC as the input BPC, but
the DSC engine on TGL supports 8, 10 and 12 BPC.
Add 12 BPC support for DSC when calculating the compression
configuration.
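The bpc clamp then becomes, roughly:

	if (INTEL_GEN(dev_priv) >= 12)
		dsc_max_bpc = min_t(u8, 12, conn_state->max_requested_bpc);
	else
		dsc_max_bpc = min_t(u8, 10, conn_state->max_requested_bpc);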
v2: Remove the separate define TGL_DP_DSC_MAX_SUPPORTED_BPC and use the
value directly. (More such defines can be removed as part of future
patches.) (Ville)
v3: Use values directly instead of accessing the defines every time for
min and max DSC BPC.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190820223059.18052-1-anusha.srivatsa@intel.com
No need to unmask PSR interruption if PSR is not enabled; better to move
the call to intel_psr_enable_source().
v2: Renamed intel_psr_irq_control() to psr_irq_control() (Lucas)
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190820223325.27490-3-jose.souza@intel.com
According to the PSR2_CTL definition in BSpec there is only one instance
of PSR2_CTL. Platforms with gen < 12 and an EDP transcoder only support
PSR2 on TRANSCODER_EDP, while on TGL PSR2 is only supported by
TRANSCODER_A.
Since BDW, PSR is allowed on any port, but we need to restrict it by
transcoder.
v8: Renamed _psr2_supported_in_trans() to psr2_supported() (Lucas)
v9: Renamed psr2_supported() to transcoder_has_psr2() (Ville)
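The resulting helper is essentially:

static bool transcoder_has_psr2(struct drm_i915_private *dev_priv,
				enum transcoder trans)
{
	/*
	 * Until TGL, PSR2 lives only in the eDP transcoder; from TGL it is
	 * only supported on transcoder A.
	 */
	if (INTEL_GEN(dev_priv) >= 12)
		return trans == TRANSCODER_A;
	else
		return trans == TRANSCODER_EDP;
}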
BSpec: 7713
BSpec: 20584
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Anshuman Gupta <anshuman.gupta@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190820223325.27490-2-jose.souza@intel.com
PSR registers are a mess: some have the full address while others just
have the additional offset from psr_mmio_base.
For BDW+ psr_mmio_base is nothing more than TRANSCODER_EDP_OFFSET +
0x800, and using it makes it more difficult for people coming from
BSpec with a PSR register address or PSR register name, as i915 also
doesn't match the BSpec names.
For HSW psr_mmio_base is _DDI_BUF_CTL_A + 0x800 and PSR registers are
only available on DDIA.
Another reason to make them relative to the transcoder is that since
BDW every transcoder has PSR registers, so in theory it should be
possible to have PSR enabled on a non-eDP transcoder.
So for BDW+ we can use _TRANS2() to get the register offset of any
PSR register in any transcoder, while for HSW we have _HSW_PSR_ADJ
that calculates the register offset for the single PSR instance.
Note that we are already guarded against trying to enable PSR on a
port other than DDIA on HSW by the 'if (dig_port->base.port != PORT_A)'
check in intel_psr_compute_config(); this check should only be valid
for HSW and will be changed in the future.
PSR2 registers and PSR_EVENT were added after Haswell, so that is why
_PSR_ADJ() is not used in some macros.
The only registers that cannot be relative to the transcoder are
PSR_IMR and PSR_IIR, which are not relative to anything, so they are
kept hardcoded. That changed for TGL but it will be handled in another
patch.
Also remove BDW_EDP_PSR_BASE from GVT because it is not used and it is
the only PSR register that GVT has.
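Sketch of the macro scheme (offsets follow the description above;
names simplified):

/* BDW+: PSR lives at transcoder offset + 0x800, HSW: _DDI_BUF_CTL_A + 0x800 */
#define _SRD_CTL_A		0x60800
#define _SRD_CTL_EDP		0x6f800
#define _PSR_ADJ(tran, reg)	(_TRANS2(tran, reg) - dev_priv->hsw_psr_mmio_adjust)
#define EDP_PSR_CTL(tran)	_MMIO(_PSR_ADJ(tran, _SRD_CTL_A))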
v5:
- Macros changed to be more explicit about HSW (Dhinakaran)
- Squashed with the patch that added the tran parameter to the
macros (Dhinakaran)
v6:
- Checking for interruption errors after module reload in the
transcoder that will be used (Dhinakaran)
- Using lowercase to the registers offsets
v7:
- Removing IS_HASWELL() from registers macros(Jani)
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190820223325.27490-1-jose.souza@intel.com
You have to cut it off at the neck, otherwise it just reappears in the
next merge, like it did in commit 3f866026f0ce ("Merge drm/drm-next
into drm-intel-next-queued")
References: 3f866026f0ce ("Merge drm/drm-next into drm-intel-next-queued")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822065917.18988-1-chris@chris-wilson.co.uk
Avoid calling i915_vma_put_fence() by using our alternate paths that
bind a secondary vma avoiding the original fenced vma. For the few
instances where we need to release the fence (i.e. on binding when the
GGTT range becomes invalid), replace the put_fence with a revoke_fence.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822061557.18402-1-chris@chris-wilson.co.uk
Since we want to revoke the ggtt vma from only under the ggtt->mutex, we
need to move protection of the userfault tracking from the struct_mutex
to the ggtt->mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822060914.2671-2-chris@chris-wilson.co.uk
We can reduce the locking for fence registers from the dev->struct_mutex
to a local mutex. We could introduce a mutex for the sole purpose of
tracking the fence acquisition, except there is a little bit of overlap
with the fault tracking, so use the i915_ggtt.mutex as it covers both.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190822060914.2671-1-chris@chris-wilson.co.uk
We need the rename of reservation_object to dma_resv.
The solution on this merge came from linux-next:
From: Stephen Rothwell <sfr@canb.auug.org.au>
Date: Wed, 14 Aug 2019 12:48:39 +1000
Subject: [PATCH] drm: fix up fallout from "dma-buf: rename reservation_object to dma_resv"
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
---
drivers/gpu/drm/i915/gt/intel_engine_pool.c | 8 ++++----
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pool.c b/drivers/gpu/drm/i915/gt/intel_engine_pool.c
index 03d90b49584a..4cd54c569911 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pool.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pool.c
@@ -43,12 +43,12 @@ static int pool_active(struct i915_active *ref)
{
struct intel_engine_pool_node *node =
container_of(ref, typeof(*node), active);
- struct reservation_object *resv = node->obj->base.resv;
+ struct dma_resv *resv = node->obj->base.resv;
int err;
- if (reservation_object_trylock(resv)) {
- reservation_object_add_excl_fence(resv, NULL);
- reservation_object_unlock(resv);
+ if (dma_resv_trylock(resv)) {
+ dma_resv_add_excl_fence(resv, NULL);
+ dma_resv_unlock(resv);
}
err = i915_gem_object_pin_pages(node->obj);
which is a simplified version from a previous one which had:
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Most importantly, per-process address spaces on GPUs that are capable of
providing proper isolation have finished baking. This is the base for
our softpin implementation, which allows us to support the texture
descriptor buffers used by GC7000 series GPUs without a major UAPI
extension/rework.
Shortlog of notable changes:
- code cleanup from Fabio
- fix performance counters on GC880 and GC2000 GPUs from Christian
- drmP.h header removal from Sam
- per process address space support on MMUv2 GPUs from me
- softpin support from me
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Lucas Stach <l.stach@pengutronix.de>
Link: https://patchwork.freedesktop.org/patch/msgid/1565946875.2641.73.camel@pengutronix.de