Add device pointer to vidi_context and remove platform_device pointer.
vidi_context doesn't need to contain a platform_device object.
Instead, this patch simplifies the driver by replacing the platform_device
pointer with a device pointer.
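The shape of the change, as a hedged sketch (context lines elided; the exact
exynos_drm_vidi.c diff may differ slightly):

	 struct vidi_context {
	 	...
	-	struct platform_device		*pdev;
	+	struct device			*dev;
	 	...
	 };

Call sites that previously went through &ctx->pdev->dev can then use ctx->dev
directly.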
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Use the DRM_DEV_DEBUG* macros instead of DRM_DEBUG to print out
debug messages.
This patch just cleans up the use of the debug log macros by changing
them to DRM_DEV_DEBUG*.
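As a hedged illustration of the conversion (the exact call sites and strings
differ per driver), a device-less debug print becomes a device-aware one,
where ctx->dev stands for whatever struct device pointer the driver context
already carries:

	-	DRM_DEBUG_KMS("vblank wait timed out.\n");
	+	DRM_DEV_DEBUG_KMS(ctx->dev, "vblank wait timed out.\n");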
Signed-off-by: Inki Dae <inki.dae@samsung.com>
This patch removes unnecessary messages from the fimd_clear_channels
and decon_clear_channels functions, which print out just the function
name.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
- Add the amdgpu specific bits for timeline support
- Add internal interfaces for xgmi pstate support
- DC Z ordering fixes for planes
- Add support for NV12 planes in DC
- Add colorspace properties for planes in DC
- eDP optimizations if the GOP driver already initialized eDP
- DC bandwidth validation tracing support
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Alex Deucher <alexdeucher@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419150034.3473-1-alexander.deucher@amd.com
Merge tag 'drm/tegra/for-5.2-rc1' of git://anongit.freedesktop.org/tegra/linux into drm-next
drm/tegra: Changes for v5.2-rc1
This contains a fix for the usage of shared resets that previously
generated a WARN on boot. In addition, there's a fix for CPU cache
maintenance of GEM buffers allocated using get_pages().
(airlied: contains a merge from a shared tegra tree)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Thierry Reding <thierry.reding@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190418151447.9430-1-thierry.reding@gmail.com
Merge tag 'drm-misc-next-2019-04-18' of git://anongit.freedesktop.org/drm/drm-misc into drm-next
drm-misc-next for v5.2:
UAPI Changes:
- Document which feature flags belong to which command in virtio_gpu.h
- Make the FB_DAMAGE_CLIPS available for atomic userspace only, it's useless for legacy.
Cross-subsystem Changes:
- Add device tree bindings for lg,acx467akm-7 panel and ST-Ericsson Multi Channel Display Engine MCDE
- Add parameters to the device tree bindings for tfp410
- iommu/io-pgtable: Add ARM Mali midgard MMU page table format
- dma-buf: Only do a 64-bits seqno compare when driver explicitly asks for it, else wraparound.
- Use the 64-bits compare for dma-fence-chains
Core Changes:
- Make the fb conversion functions use __iomem dst.
- Rename drm_client_add to drm_client_register
- Move intel_fb_initial_config to core.
- Add a drm_gem_objects_lookup helper
- Add drm_gem_fence_array helpers, and use it in lima.
- Add drm_format_helper.c to kerneldoc.
Driver Changes:
- Add panfrost driver for mali midgard/bifrost.
- Convert bochs to use the simple display type.
- Small fixes to sun4i, tinydrm, ti-fp410.
- Fix aspeed's Kconfig options.
- Make some symbols/functions static in lima, sun4i and meson.
- Add a driver for the lg,acx467akm-7 panel.
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/737ad994-213d-45b5-207a-b99d795acd21@linux.intel.com
UAPI Changes:
- uAPI "Fixes:" patch for the upcoming kernel 5.1, included here too
We have an Ack from the media folks (only current user) for this
late tweak
Cross-subsystem Changes:
- ALSA: hda: Fix racy display power access (Takashi, Chris)
Driver Changes:
- DDI and MIPI-DSI clocks fixes for Icelake (Vandita)
- Fix Icelake frequency change/locking (RPS) (Mika)
- Temporarily disable ppGTT read-only bit on Icelake (Mika)
- Add missing Icelake W/As (Mika)
- Enable 12 deep CSB status FIFO on Icelake (Mika)
- Inherit more Icelake code for Elkhartlake (Bob, Jani)
- Handle catastrophic error on engine reset (Mika)
- Shortcut readiness to reset check (Mika)
- Regression fix for GEM_BUSY causing us to report a mixed uabi-class request as not busy (Chris)
- Revert back to max link rate and lane count on eDP (Jani)
- Fix pipe BPP readout for BXT/GLK DSI (Ville)
- Set DP min_bpp to 8*3 for non-RGB output formats (Ville)
- Enable coarse preemption boundaries for Gen8 (Chris)
- Do not enable FEC without DSC (Ville)
- Restore correct BXT DDI latency optim setting calculation (Ville)
- Always reset context's RING registers to avoid running workload twice during reset (Chris)
- Set GPU wedged on driver unload (Janusz)
- Consolidate two similar barries from timeline into one (Chris)
- Only reset the pinned kernel contexts on resume (Chris)
- Wakeref tracking improvements (Chris, Imre)
- Lockdep fixes for shrinker interactions (Chris)
- Bump ready tasks ahead of busywaits in prep of semaphore use (Chris)
- Huge step in splitting display code into fine grained files (Jani)
- Refactor the IRQ init/reset macros for code saving (Paulo)
- Convert IRQ initialization code to uncore MMIO access (Paulo)
- Convert workarounds code to use uncore MMIO access (Chris)
- Nuke drm_crtc_state and use intel_atomic_state instead (Manasi)
- Update SKL clock-gating WA (Radhakrishna, Ville)
- Isolate GuC reset code flow (Chris)
- Expose force_dsc_enable through debugfs (Manasi)
- Header standalone compile testing framework (Jani)
- Code cleanups to reduce driver footprint (Chris)
- PSR code fixes and cleanups (Jose)
- Sparse and kerneldoc updates (Chris)
- Suppress spurious combo PHY B warning (Ville)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190418080426.GA6409@jlahtine-desk.ger.corp.intel.com
Fixes the clock-gating issue when pipe scaling is enabled.
(Lineage #2006604312)
V2: Fix typo in headline(Chris)
Handle the non double buffered nature of the register(Ville)
V3: Fix checkpatch warning. BAT failure for V2 on gen3 looks unrelated.
V4: Split the icl and skl wa's(Ville)
V5: Split the checks for icl and skl(Ville)
V6: Correct the flipped checks in intel_pre_plane_update(Ville)
V7: Use enum for pipe and extend the WA for plane scalers(Ville)
V8: Eliminate the redundant use of pch_pfit(Ville)
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Clint Taylor <clinton.a.taylor@intel.com>
Cc: Aditya Swarup <aditya.swarup@intel.com>
Signed-off-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190417185901.14833-1-radhakrishna.sripada@intel.com
The spec has changed since skl_max_plane_width() was written.
Now the SKL limits are lower than what they were initially, and
GLK and ICL have different limits. Update the code to match the
spec.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190418195907.23912-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Fix the order of lane, port parameters passed to the register macro.
Note that this was already partly fixed by commit
37fc7845df ("drm/i915: Call MG_DP_MODE() macro with the right parameters order")
While at it simplify things by using the macro directly instead of an
unnecessary redirection via an array.
v2:
- Add a note to the commit message about simplifying things. (José)
Fixes: 58106b7d81 ("drm/i915: Make MG PHY macros semantically consistent")
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Aditya Swarup <aditya.swarup@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419071026.32370-1-imre.deak@intel.com
Add CONFIG_DRM_MSM_GPU_STATE to conditionally compile Adreno GPU state
code depending on the availability of the dependencies.
Reported-by: Hulk Robot <hulkci@huawei.com>
Reported-by: YueHaibing <yuehaibing@huawei.com>
Fixes: 1707add815 ("drm/msm/a6xx: Add a6xx gpu state")
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Describe the zap-shader node that defines a reserved memory region
to store the zap shader.
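A hedged devicetree example of the new node (unit address and phandle name
are illustrative):

	gpu@5000000 {
		...
		zap-shader {
			memory-region = <&gpu_zap_shader_region>;
		};
	};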
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
The a6xx GPU powers on in secure mode which restricts what memory it can
write to. To get out of secure mode the GPU driver can write to
REG_A6XX_RBBM_SECVID_TRUST_CNTL but on targets that are "secure" that
register region is blocked and writes will cause the system to go down.
For those targets we need to execute a special sequence that involves
loading a special shader that clears the GPU registers and using a PM4
sequence to pull the GPU out of secure mode. Add support for loading the zap
shader and executing the secure sequence. For targets that do not support
SCM or the specific SCM sequence this should fail and we would fall back
to writing the register.
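The resulting boot-time flow looks roughly like the sketch below
(illustrative, not the exact a6xx_hw_init() hunk; function names are assumed
from the a5xx equivalents):

	ret = a6xx_zap_shader_init(gpu);
	if (!ret) {
		/* Zap shader loaded and ran: ask the CP to leave secure mode */
		OUT_PKT7(gpu->rb[0], CP_SET_SECURE_MODE, 1);
		OUT_RING(gpu->rb[0], 0);
		a6xx_flush(gpu, gpu->rb[0]);
	} else if (ret == -ENODEV) {
		/*
		 * No SCM/zap support on this target: fall back to poking the
		 * register directly (fatal on "secure" targets).
		 */
		gpu_write(gpu, REG_A6XX_RBBM_SECVID_TRUST_CNTL, 0x0);
	}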
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
a5xx and a6xx both share (mostly) the same code to load the zap shader and
bring the GPU out of secure mode. Move the formerly 5xx specific code to
adreno to make it available for a6xx too.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Add documentation for the interconnect and interconnect-names bindings
for the GPU node as detailed by bindings/interconnect/interconnect.txt.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Georgi Djakov <georgi.djakov@linaro.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
When we are called to relieve mempressure via the shrinker, the only way
we can make progress is either by discarding unwanted pages (those
objects that userspace has marked MADV_DONTNEED) or by reclaiming the
dirty objects via swap. As we know that is the only way to make further
progress, we can initiate the writeback as we invalidate the objects.
This means the objects we put onto the inactive anon lru list are
already marked for reclaim+writeback and so will trigger a wait upon the
writeback inside direct reclaim, greatly improving the success rate of
direct reclaim on i915 objects.
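For reference, a hedged sketch of what kicking writeback on one page of a
shmemfs-backed object looks like (obj and the page index n come from the
surrounding loop; the real i915 helper differs in details):

	struct address_space *mapping = file_inode(obj->base.filp)->i_mapping;
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_NONE,
		.nr_to_write = SWAP_CLUSTER_MAX,
		.range_start = 0,
		.range_end = LLONG_MAX,
		.for_reclaim = 1,
	};
	struct page *page = find_lock_page(mapping, n);

	if (page) {
		if (clear_page_dirty_for_io(page)) {
			SetPageReclaim(page);
			/* ->writepage() unlocks the page itself */
			mapping->a_ops->writepage(page, &wbc);
		} else {
			unlock_page(page);
		}
		put_page(page);
	}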
The corollary is that we may start a slow swap on opportunistic
mempressure from the likes of the compaction + migration kthreads. This
is limited by those threads only being allowed to shrink idle pages, but
also that if we reactivate the page before it is swapped out by gpu
activity, we only pay the cost of repinning the page. The cost is most
felt when an object is reused after mempressure, which hopefully
excludes the latency sensitive tasks (as we are just extending the
impact of swap thrashing to them).
Apparently this is not the first time we've had this idea. Back in
commit 5537252b6b ("drm/i915: Invalidate our pages under memory
pressure") we wanted to start writeback but settled on invalidate after
Hugh Dickins warned us about a possibility of a deadlock within shmemfs
if we started writeback from shrink_slab. Looking at the callchain,
using writeback from i915_gem_shrink should be equivalent to the pageout
also employed by shrink_slab, i.e. it should not be any riskier afaict.
v2: Leave mmappings intact. At this point, the only mmappings of our
objects will be via CPU mmaps on the shmemfs filp, which are
out-of-scope for our LRU tracking. Instead leave those pages to the
inactive anon LRU page list for aging and pageout as normal.
v3: Be selective on which paths trigger writeback, in particular
excluding paths shrinking just to reclaim vm space (e.g. mmap, vmap
reapers) and avoid starting writeback on the entire process space from
within the pm freezer.
References: https://bugs.freedesktop.org/show_bug.cgi?id=108686
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Michal Hocko <mhocko@suse.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190420115539.29081-1-chris@chris-wilson.co.uk
GPU reset is now available with GuC enabled, so re-enable our check that
this reset is usable from atomic context.
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-6-fernando.pacheco@intel.com
We have now prepared the guc reset paths to avoid taking struct_mutex, or
any other lock, and so it is now safe to re-enable.
References: fe62365f9f ("drm/i915/guc: Disable global reset")
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-5-fernando.pacheco@intel.com
Currently we pin the GuC or HuC firmware image just before uploading.
Perma-pin during uC initialization instead and use the range reserved at
the top of the address space.
Moving the firmware resulted in needing to:
- use an additional pinning for the rsa signature which will be used
during HuC auth as addresses above GUC_GGTT_TOP do not map through GTT.
v2: Remove call to set to gtt domain
Do not restore fw gtt mapping unconditionally
Separate out pin/unpin functions and drop usage of pin/unpin
Use uc_fw init/fini functions to bind/unbind fw object
v3: Bind is only needed during xfer (Chris)
Remove attempts to bind outside of xfer (Chris)
Mark fw bind/unbind static
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-4-fernando.pacheco@intel.com
GuC and HuC depend on struct_mutex for device reinitialization. Moving
away from this dependency requires perma-pinning the firmware images in
GGTT. The upper portion of the GuC address space has a sizeable hole
(several MB) that is inaccessible by GuC. Reserve this range within GGTT
as it can comfortably hold GuC/HuC firmware images.
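A hedged sketch of the reservation during GGTT init (the node's field name is
illustrative):

	struct drm_mm_node *node = &ggtt->uc_fw;	/* illustrative field */

	node->start = GUC_GGTT_TOP;
	node->size = ggtt->vm.total - GUC_GGTT_TOP;
	node->color = I915_COLOR_UNEVICTABLE;

	/* GGTT init runs before anything else can allocate from vm.mm */
	ret = drm_mm_reserve_node(&ggtt->vm.mm, node);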
v2: Reserve node rather than insert (Chris)
Simpler determination of node start/size (Daniele)
Move reserve/release out to intel_guc.* files
v3: Reserve starting at GUC_GGTT_TOP only and bail if this
fails (Chris)
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-3-fernando.pacheco@intel.com
The uC firmware init function is called during GuC/HuC init early phases.
Rename to include "_early" and properly reflect which phase we are at.
The uC firmware fini function is cleaning up the state set/created on
firmware fetch. Replace "_fini" with "_cleanup_fetch".
v2: also rename uC fw fini function
Signed-off-by: Fernando Pacheco <fernando.pacheco@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419230015.18121-2-fernando.pacheco@intel.com
If we know that the user cannot access the GGTT, by virtue of having a
segregated memory area, we can skip clearing the unused entries as they
cannot be accessed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419201207.5477-1-chris@chris-wilson.co.uk
This is to work around problems with libva and vainfo.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190417112525.16848-1-christian.koenig@amd.com
An interesting discussion regarding "hybrid interrupt polling" for NVMe
came to the conclusion that the ideal busyspin before sleeping was half
of the expected request latency (and better if it was already halfway
through that request). This suggested that we too should look again at
our tradeoff between spinning and waiting. Currently, our spin simply
tries to hide the cost of enabling the interrupt, which is good to avoid
penalising nop requests (i.e. test throughput) and not much else.
Studying real world workloads suggests that a spin of up to 500us can
dramatically boost performance, but the suggestion is that this is not
from avoiding interrupt latency per-se, but from secondary effects of
sleeping such as allowing the CPU to reduce its cstate and context switch away.
In a truly hybrid interrupt polling scheme, we would aim to sleep until
just before the request completed and then wake up in advance of the
interrupt and do a quick poll to handle completion. This is tricky for
ourselves at the moment as we are not recording request times, and since
we allow preemption, our requests are not on as nicely ordered a
timeline as IO. However, the idea is interesting, for it will certainly
help us decide when busyspinning is worthwhile.
v2: Expose the spin setting via Kconfig options for easier adjustment
and testing.
v3: Don't get caught sneaking in a change to the busyspin parameters.
v4: Explain more about the "hybrid interrupt polling" scheme that we
want to migrate towards.
Suggested-by: Sagar Kamble <sagar.a.kamble@intel.com>
References: http://events.linuxfoundation.org/sites/events/files/slides/lemoal-nvme-polling-vault-2017-final_0.pdf
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sagar Kamble <sagar.a.kamble@intel.com>
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Sagar Kamble <sagar.a.kamble@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419182625.11186-1-chris@chris-wilson.co.uk
First loop does copy_from_user() without the table lock held and
just stores the handle. Second loop looks up buffer objects with the
table_lock held without potentially blocking or faulting. This lets us
clean up a bunch of custom, non-faulting copy_from_user() code.
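A hedged sketch of the two-pass pattern (names are illustrative, not the
exact msm_gem_submit code):

	/* Pass 1: fault-prone copies, no locks held */
	for (i = 0; i < args->nr_bos; i++) {
		struct drm_msm_gem_submit_bo submit_bo;
		void __user *userptr =
			u64_to_user_ptr(args->bos + (i * sizeof(submit_bo)));

		if (copy_from_user(&submit_bo, userptr, sizeof(submit_bo)))
			return -EFAULT;

		submit->bos[i].handle = submit_bo.handle;
		submit->bos[i].flags = submit_bo.flags;
	}

	/* Pass 2: resolve handles under table_lock; nothing here can fault */
	spin_lock(&file->table_lock);
	for (i = 0; i < args->nr_bos; i++) {
		struct drm_gem_object *obj =
			idr_find(&file->object_idr, submit->bos[i].handle);

		if (!obj) {
			ret = -EINVAL;
			break;
		}
		drm_gem_object_get(obj);
		submit->bos[i].obj = to_msm_bo(obj);
	}
	spin_unlock(&file->table_lock);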
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Now that we don't have the mmap_sem lock inversion, we don't need to
jump through this particular hoop anymore.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
We use a llist and a worker to delay the object cleanup. This avoids
taking mmap_sem and struct_mutex in the wrong order when calling
drm_gem_object_put_unlocked() from drm_gem_mmap().
Fixes lockdep problem with copy_from_user() in msm_ioctl_gem_submit().
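A hedged sketch of the deferred-free shape (member and helper names are
illustrative, not the exact msm symbols):

	static void free_objects_work(struct work_struct *work)
	{
		struct msm_drm_private *priv =
			container_of(work, struct msm_drm_private, free_work);
		struct llist_node *node = llist_del_all(&priv->free_list);
		struct msm_gem_object *obj, *tmp;

		mutex_lock(&priv->dev->struct_mutex);
		llist_for_each_entry_safe(obj, tmp, node, freed)
			free_object(obj);	/* the old synchronous teardown */
		mutex_unlock(&priv->dev->struct_mutex);
	}

	void msm_gem_free_object(struct drm_gem_object *gem_obj)
	{
		struct msm_gem_object *obj = to_msm_bo(gem_obj);
		struct msm_drm_private *priv = gem_obj->dev->dev_private;

		/* queue the work only when the list transitions from empty */
		if (llist_add(&obj->freed, &priv->free_list))
			queue_work(priv->wq, &priv->free_work);
	}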
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
The HFI tasklet was removed in df0dff1 ("drm/msm/a6xx: Poll for HFI
responses") but the tasklet_struct was accidentally left behind.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Currently if the GMU resume function fails all we try to do is clear the
BOOT_SLUMBER oob which usually times out and ends up in a cycle of death.
If the resume function fails at any point remove any RPMh votes that might
have been added and try to shut down the GMU hardware cleanly.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Now that the GX domain is sorted we can wire up a working GMU reset.
If a GMU hang was detected then try to forcefully shut down the GMU
in the power down sequence which should ensure that it can recover
normally on the next power up.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
99.999% of the time during normal operation the GMU is responsible
for power and clock control on the GX domain and the CPU remains
blissfully unaware. However, there is one situation where the CPU
needs to get involved:
The power sequencing rules dictate that the GX needs to be turned
off before the CX so that the CX can be turned on before the GX
during power up. During normal operation when the CPU is taking
down the CX domain a stop command is sent to the GMU which turns
off the GX domain and then the CPU handles the CX domain.
But if the GMU happened to be unresponsive while the GX domain was
left on, then the CPU will need to step in and turn off the GX domain
before resetting the CX and rebooting the GMU. This unfortunately
means that the CPU needs to be marginally aware of the GX domain
even though it is expected to usually keep its hands off.
To support this we create a semi-disabled GX power domain that
does nothing to the hardware on power up but tries to shut it
down normally on power down. In this method the reference counting
is correct and we can step in with the pm_runtime_put() at the right
time during the failure path.
This patch sets up the connection to the GX power domain and does
the magic to "enable" and disable it at the right points.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
The GMU should have two power domains defined: "cx" and "gx". "cx" is the
actual power domain for the device and "gx" will be attached at runtime
to manage reference counting on the GPU device in case of a GMU crash.
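A hedged devicetree example (unit address and gpucc indices are illustrative,
assuming an sdm845-style setup):

	gmu@506a000 {
		compatible = "qcom,adreno-gmu-630.2", "qcom,adreno-gmu";
		...
		power-domains = <&gpucc GPU_CX_GDSC>, <&gpucc GPU_GX_GDSC>;
		power-domain-names = "cx", "gx";
	};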
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
The GMU code currently has some misguided code to try to work around
a hardware quirk that requires the power domains on the GPU be
collapsed in a certain order. Upcoming patches will do this the
right way so get rid of the unused and unwanted regulator
code.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Add the capability to query information from a submit queue.
The first available parameter is for querying the number of GPU faults
(hangs) that can be attributed to the queue.
This is useful for implementing context robustness. A user context can
regularly query the number of faults to see if it is responsible for any
and if so it can invalidate itself.
This is also helpful for testing by confirming to the user driver if a
particular command stream caused a fault (or not as the case may be).
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
For KHR_robustness, userspace wants to know two things: the count of GPU
faults globally, and the count of faults attributed to a given context.
This patch provides the former, and the next patch provides the latter.
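Userspace consumption would look roughly like the sketch below (assuming the
parameter is exposed as MSM_PARAM_FAULTS; fd and last_global_faults come from
the caller):

	struct drm_msm_param req = {
		.pipe = MSM_PIPE_3D0,
		.param = MSM_PARAM_FAULTS,
	};

	if (drmCommandWriteRead(fd, DRM_MSM_GET_PARAM, &req, sizeof(req)) == 0 &&
	    req.value != last_global_faults) {
		/* some context on this GPU faulted since the last check */
		last_global_faults = req.value;
	}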
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
For now it always returns '0' (false), but once the iommu work is in
place to enable per-process pagetables we can update the value returned.
Userspace needs to know this to make an informed decision about exposing
KHR_robustness.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
For consistency (and elegance!), add intel_device_info.has_rps.
The immediate boon is that RPS support is now emitted along the other
capabilities in the debug log and after errors.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419134836.5626-1-chris@chris-wilson.co.uk
The driver does not currently support unbinding from a device which is
in use. Since open file descriptors may still be pointing into kernel
memory where the device structures used to be, entirely correct kernel
panics protect the driver from being unbound as we should not be
unbinding it before those dangling pointers have been made safe.
According to the documentation found inside drivers/gpu/drm/drm_drv.c,
drm_dev_unplug() should be used instead of drm_dev_unregister() in
order to make a device inaccessible to users as soon as it is unplugged.
Follow that advice to make those possibly dangling pointers safe,
protected by DRM layer from a user who is otherwise left pointing into
possibly reused kernel memory after the driver has been unbound from
the device. Once done, also cancel inflight operations immediately by
calling i915_gem_set_wedged().
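In code terms the unload path becomes, roughly (a sketch, not the exact i915
hunk):

	/* make the device inaccessible to userspace before teardown ... */
	drm_dev_unplug(&dev_priv->drm);
	/* ... and cancel anything still in flight on the GPU */
	i915_gem_set_wedged(dev_priv);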
Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190405130235.7707-2-janusz.krzysztofik@linux.intel.com
The call to of_get_child_by_name returns a node pointer with refcount
incremented thus it must be explicitly decremented after the last
usage.
Detected by coccinelle with the following warnings:
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:57:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 47, but without a corresponding object release within this function.
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:66:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 47, but without a corresponding object release within this function.
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:118:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 47, but without a corresponding object release within this function.
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:57:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 51, but without a corresponding object release within this function.
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:66:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 51, but without a corresponding object release within this function.
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:118:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 51, but without a corresponding object release within this function.
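The fix those warnings point at is roughly the following (illustrative, not
the exact a5xx hunk; error paths that bail out after the lookup need the same
treatment):

	struct device_node *np;

	np = of_get_child_by_name(dev->of_node, "zap-shader");
	if (!np)
		return -ENODEV;

	/* ... use the node ... */

	of_node_put(np);	/* drop the reference taken by the lookup */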
Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Jordan Crouse <jcrouse@codeaurora.org>
Cc: Mamta Shukla <mamtashukla555@gmail.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sharat Masetty <smasetty@codeaurora.org>
Cc: linux-arm-msm@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org (open list)
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
The patch ("OPP: Add support for parsing the 'opp-level' property")
adds an API enabling a cleaner way to read the opp-level. Let's use
the new API.
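A hedged sketch of reading a level through the new helper (error handling
trimmed; the exact a6xx/GMU code differs):

	struct dev_pm_opp *opp;
	unsigned int level;

	opp = dev_pm_opp_find_freq_exact(dev, freq, true);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	level = dev_pm_opp_get_level(opp);	/* value of the "opp-level" property */
	dev_pm_opp_put(opp);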
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Iterate and assign HW intf block to physical encoders
in encoder modeset. Move all the HW block assignments
to encoder modeset to allow easy switching to state-based
resource management.
Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/1550107156-17625-7-git-send-email-jsanka@codeaurora.org
Signed-off-by: Rob Clark <robdclark@chromium.org>
After resource allocation, iterate and populate mixer/ctl
hw blocks in encoder modeset thereby centralizing all
the resource mapping to the CRTC. This change is made
for easy switching to state based allocation using
private objects later in this series.
Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/1550107156-17625-6-git-send-email-jsanka@codeaurora.org
Signed-off-by: Rob Clark <robdclark@chromium.org>