Commit Graph

14 Commits

Author SHA1 Message Date
Jordan Crouse
84c6127580 drm/msm/gpu: Map the ringbuffer in the iova at create time
For reasons that I'm sure made perfect sense at the time, we were
opting to defer the iova alloc / pin on the ringbuffer until HW
init time, so when we moved to iova reference counting we ended
up adding a reference every time the hardware started.
Not that it mattered (the ring is always around), but it did
make the debug output look odd. Allocate and pin the iova at
create time instead.
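
The create-time path then looks roughly like this (a sketch, assuming
the driver's msm_gem_kernel_new() helper; the flag and size constants
are illustrative):

  /* Sketch: allocate, map and pin the ring BO once, at create time,
   * so HW init no longer takes a new iova reference on every start */
  ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ,
          MSM_BO_WC, gpu->aspace, &ring->bo, &ring->iova);
  if (IS_ERR(ring->start))
          return PTR_ERR(ring->start);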

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-12-11 13:07:03 -05:00
Jordan Crouse
0815d7749a drm/msm: Add a name field for gem objects
For debugging purposes it is useful to assign descriptions
to buffers so that we know what they are used for. Add
a field to the buffer object and use that to name the various
kernel-side allocations, which ends up looking like this
in /d/dri/X/gem:

   flags       id ref  offset   kaddr            size     madv      name
   00040000: I  0 ( 1) 00000000 0000000070b79eca 00004096           memptrs
      vmas: [gpu: 01000000,mapped,inuse=1]
   00020000: I  0 ( 1) 00000000 0000000031ed4074 00032768           ring0
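
Under the hood this is a small char array on the object plus a varargs
setter; callers tag their allocations along these lines (a sketch; the
helper name reflects my reading of the patch):

   msm_gem_object_set_name(ring->bo, "ring%d", ring->id);
   msm_gem_object_set_name(memptrs_bo, "memptrs");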

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-12-11 13:06:59 -05:00
Jordan Crouse
1e29dff004 drm/msm: Add a common function to free kernel buffer objects
Buffer objects allocated with msm_gem_kernel_new() are mostly
freed the same way, so we can save a few lines of code with a
common function.
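
Roughly what the common helper boils down to (a sketch; the exact
signature and the locked/unlocked split are my reading of the patch):

  void msm_gem_kernel_put(struct drm_gem_object *bo,
                  struct msm_gem_address_space *aspace, bool locked)
  {
          if (IS_ERR_OR_NULL(bo))
                  return;

          msm_gem_put_vaddr(bo);
          msm_gem_unpin_iova(bo, aspace);

          /* callers differ in whether struct_mutex is already held */
          if (locked)
                  drm_gem_object_put(bo);
          else
                  drm_gem_object_put_unlocked(bo);
  }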

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-12-11 13:05:30 -05:00
Steve Kowalik
dc9a9b3205 drm/msm: Replace gem_object deprecated functions
drm_gem_object_{reference,unreference,unreference_unlocked} are
deprecated functions that merely alias the get/put functions.
Switch to the new names.
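
The substitution is purely mechanical, e.g.:

  - drm_gem_object_reference(obj);
  + drm_gem_object_get(obj);
  - drm_gem_object_unreference_unlocked(obj);
  + drm_gem_object_put_unlocked(obj);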

Signed-off-by: Steve Kowalik <steven@wedontsleep.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2018-02-20 10:41:21 -05:00
Jordan Crouse
b1fc2839d2 drm/msm: Implement preemption for A5XX targets
Implement preemption for A5XX targets - this allows multiple
ringbuffers at different priorities, with automatic preemption
of a lower-priority ringbuffer when a higher-priority one is ready.
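
At a very high level the scheduling decision looks something like the
sketch below; ring_empty() and preempt_trigger() are hypothetical
stand-ins for the real helpers, not the actual A5XX code:

  /* Sketch: rings are ordered by priority (rb[0] highest); find the
   * first one with work queued and switch if it isn't current */
  struct msm_ringbuffer *ring = NULL;
  int i;

  for (i = 0; i < gpu->nr_rings; i++) {
          if (!ring_empty(gpu->rb[i])) {
                  ring = gpu->rb[i];
                  break;
          }
  }

  if (ring && ring != gpu->cur_ring)
          preempt_trigger(gpu, ring);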

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28 11:01:38 -04:00
Jordan Crouse
4c7085a5d5 drm/msm: Shadow current pointer in the ring until command is complete
Add a shadow pointer to track the current command being written into
the ring, and don't commit it as 'cur' until the command is submitted.
Because 'cur' is used to construct the software copy of the wptr, this
ensures that somebody peeking in on the ring doesn't assume that a
command is in flight while it is still being written. This isn't a
huge deal with a single ring (though technically the hangcheck could
prematurely assume the system is busy when it isn't), but it will be
rather important for preemption, where the decision to preempt is
based on a non-empty ringbuffer. Without a shadow, an aggressive
preemption scheme could see a non-empty ringbuffer and switch to it
before the CPU is done writing the command - and boom.

Even though preemption won't be supported for all targets, because of
the way the code is organized it is simpler to make this generic for
all targets. The extra load for non-preemption targets should be
minimal.
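
In code the shadow amounts to something like this (a sketch of the
emit path; the real OUT_RING() may differ in detail):

  /* Sketch: emit through ring->next; observers only see ring->cur */
  static inline void OUT_RING(struct msm_ringbuffer *ring, uint32_t data)
  {
          if (ring->next == ring->end)
                  ring->next = ring->start;
          *(ring->next++) = data;
  }

  /* ...and only once the whole command is written, at submit time: */
  ring->cur = ring->next;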

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28 11:01:37 -04:00
Jordan Crouse
f97decac5f drm/msm: Support multiple ringbuffers
Add the infrastructure to support the idea of multiple ringbuffers.
Assign each ringbuffer an id and use that as an index for the various
ring-specific operations.

The biggest delta is to support legacy fences. Each ring gets its own
fence sequence numbers, but the legacy functions expect a globally
unique integer. To handle this we return a unique identifier for each
submission but map it to a specific ring/sequence under the covers.
Newer users use a dma_fence pointer anyway, so they don't care about
the actual sequence ID or ring.

The actual mechanics for multiple ringbuffers are very target specific
so this code just allows for the possibility but still only defines
one ringbuffer for each target family.
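
A sketch of the legacy-fence mapping idea (illustrative only; the
struct below is not the patch's actual representation):

  /* Sketch: userspace gets one opaque 32-bit id per submission;
   * internally it resolves to a (ring, per-ring seqno) pair */
  struct fence_id {
          struct msm_ringbuffer *ring;   /* ring the submit went to */
          uint32_t seqno;                /* sequence within that ring */
  };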

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-10-28 11:01:36 -04:00
Jordan Crouse
8223286d62 drm/msm: Add a helper function for in-kernel buffer allocations
Nearly all of the in-kernel buffer allocations create a buffer object,
kernel virtual address, and GPU iova at the same time. Add a helper
function to handle the details.
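
The helper collapses the allocate/vmap/pin triple into one call;
typical usage looks something like this (signature as I read the
patch; variable names are illustrative):

  /* Sketch: one call returns the kernel vaddr and fills in BO + iova */
  ptr = msm_gem_kernel_new(drm, size, MSM_BO_UNCACHED,
          gpu->aspace, &bo, &iova);
  if (IS_ERR(ptr))
          return PTR_ERR(ptr);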

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
[dropped msm_fbdev conversion to new helper, since it interferes with
display-handover work, where we want to separate allocation and mapping]
Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-08-22 13:19:17 -04:00
Sushmita Susheelendra
0e08270a1f drm/msm: Separate locking of buffer resources from struct_mutex
Buffer-object-specific resources like pages, domains, and the sg list
need not be protected with struct_mutex; they can be protected
with a buffer-object-level lock. This simplifies locking and
makes it easier to avoid potential recursive locking scenarios
for SVM involving mmap_sem and struct_mutex. It also removes
unnecessary serialization when creating buffer objects, and
between buffer object creation and GPU command submission.
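
Conceptually, the lock moves onto the object itself (a trimmed sketch;
the real struct has many more fields):

  struct msm_gem_object {
          struct drm_gem_object base;
          struct mutex lock;      /* protects pages, vmas, sgt, vaddr */
          struct page **pages;
          struct sg_table *sgt;
          void *vaddr;
  };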

Signed-off-by: Sushmita Susheelendra <ssusheel@codeaurora.org>
[robclark: squash in handling new locking for shrinker]
Signed-off-by: Rob Clark <robdclark@gmail.com>
2017-06-17 08:03:07 -04:00
Jordan Crouse
88b333b0ed drm/msm: Ensure that the hardware write pointer is valid
Currently the value written to CP_RB_WPTR is calculated on the fly as
(rb->next - rb->start). But as the code is designed, rb->next is
wrapped before writing the commands, so if a series of commands
happens to fit perfectly in the ringbuffer, rb->next ends up equal to
rb->size / 4, resulting in an out-of-bounds address being written to
CP_RB_WPTR.

The easiest way to fix this is to mask WPTR when writing it to the
hardware; that keeps the hardware happy, the rest of the ringbuffer
math continues to work, and there isn't any point in upsetting
anything.
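
The fix amounts to a modulo, which the is_power_of_2() check on the
ring size lets the compiler fold into a cheap mask (a sketch; the
register write helper and name are illustrative):

  /* Sketch: rb->size is in bytes, so the ring holds size/4 dwords;
   * the modulo keeps WPTR in-bounds when next sits one past the end */
  uint32_t wptr = (rb->next - rb->start) % (rb->size / 4);

  gpu_write(gpu, REG_AXXX_CP_RB_WPTR, wptr);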

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
[squash in is_power_of_2() check]
Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-12-29 15:02:58 -05:00
Rob Clark
18f23049f6 drm/msm: change gem->vmap() to get/put
Before we can add vmap shrinking, we really need to know which
vmap'ings are currently in use. So switch to a get/put interface.
The put functions are stubbed for now.
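
The interface change, sketched (function names as introduced here, to
the best of my knowledge):

  /* Sketch: balanced get/put brackets every vaddr use, so a future
   * shrinker can tell which vmap'ings are safe to tear down */
  ptr = msm_gem_get_vaddr(obj);
  if (IS_ERR(ptr))
          return PTR_ERR(ptr);

  memcpy(ptr, cmds, len);        /* ... use the mapping ... */

  msm_gem_put_vaddr(obj);        /* stubbed for now */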

Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-07-16 10:09:07 -04:00
Rob Clark
69a834c28f drm/msm: deal with exhausted vmap space better
Some, but not all, callers of obj->vmap() would check the return
value with IS_ERR(). So let's actually return an error if vmap()
fails, and fix up the call-sites that were not handling this properly.
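
The call-site pattern after the fixup is the usual one (a sketch):

  ptr = msm_gem_vaddr(obj);      /* may now return an ERR_PTR() */
  if (IS_ERR(ptr)) {
          ret = PTR_ERR(ptr);
          goto fail;
  }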

Signed-off-by: Rob Clark <robdclark@gmail.com>
2016-06-04 14:45:48 -04:00
Rob Clark
774449ebcb drm/msm: fix locking inconsistencies in gpu->destroy()
In error paths, this was being called without struct_mutex held,
leading to panics like:

  msm 1a00000.qcom,mdss_mdp: No memory protection without IOMMU
  Kernel panic - not syncing: BUG!
  CPU: 0 PID: 1409 Comm: cat Not tainted 4.0.0-dirty #4
  Hardware name: Qualcomm Technologies, Inc. APQ 8016 SBC (DT)
  Call trace:
  [<ffffffc000089c78>] dump_backtrace+0x0/0x118
  [<ffffffc000089da0>] show_stack+0x10/0x20
  [<ffffffc0006686d4>] dump_stack+0x84/0xc4
  [<ffffffc0006678b4>] panic+0xd0/0x210
  [<ffffffc0003e1ce4>] drm_gem_object_free+0x5c/0x60
  [<ffffffc000402870>] adreno_gpu_cleanup+0x60/0x80
  [<ffffffc0004035a0>] a3xx_destroy+0x20/0x70
  [<ffffffc0004036f4>] a3xx_gpu_init+0x84/0x108
  [<ffffffc0004018b8>] adreno_load_gpu+0x58/0x190
  [<ffffffc000419dac>] msm_open+0x74/0x88
  [<ffffffc0003e0a48>] drm_open+0x168/0x400
  [<ffffffc0003e7210>] drm_stub_open+0xa8/0x118
  [<ffffffc0001a0e84>] chrdev_open+0x94/0x198
  [<ffffffc000199f88>] do_dentry_open+0x208/0x310
  [<ffffffc00019a4c4>] vfs_open+0x44/0x50
  [<ffffffc0001aa26c>] do_last.isra.14+0x2c4/0xc10
  [<ffffffc0001aac38>] path_openat+0x80/0x5e8
  [<ffffffc0001ac354>] do_filp_open+0x2c/0x98
  [<ffffffc00019b60c>] do_sys_open+0x13c/0x228
  [<ffffffc00019b72c>] SyS_openat+0xc/0x18
  CPU1: stopping

But there isn't any particularly good reason to hold struct_mutex for
teardown, so just standardize on calling it without the mutex held,
and use the _unlocked() versions for GEM obj unref'ing.
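
So teardown paths end up with the pattern below (a sketch; the field
name is illustrative):

  /* Sketch: teardown runs without struct_mutex held, so GEM refs
   * are dropped with the _unlocked variant */
  if (adreno_gpu->memptrs_bo)
          drm_gem_object_unreference_unlocked(adreno_gpu->memptrs_bo);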

Signed-off-by: Rob Clark <robdclark@gmail.com>
2015-05-15 09:28:27 -04:00
Rob Clark
7198e6b031 drm/msm: add a3xx gpu support
Add initial support for a3xx 3d core.

With the hardware that I've seen to date, we can have:
 + zero, one, or two z180 2d cores
 + a3xx or a2xx 3d core, which share a common CP (the firmware
   for the CP seems to implement some different PM4 packet types
   but the basics of cmdstream submission are the same)

This means that the eventual complete "class" hierarchy, once
support for all past and present hw is in place, becomes:
 + msm_gpu
   + adreno_gpu
     + a3xx_gpu
     + a2xx_gpu
   + z180_gpu

This commit splits out the parts that will eventually be common
between a2xx/a3xx into adreno_gpu, and the parts that are even
common to z180 into msm_gpu.
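
In C the hierarchy is plain struct embedding; a trimmed sketch (fields
are illustrative):

  struct adreno_gpu {
          struct msm_gpu base;        /* "inherits" the common core */
  };

  struct a3xx_gpu {
          struct adreno_gpu base;     /* a3xx specializes adreno */
  };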

Note that there is no cmdstream validation required.  All memory access
from the GPU is via IOMMU/MMU.  So as long as you don't map silly things
to the GPU, there isn't much damage that the GPU can do.

Signed-off-by: Rob Clark <robdclark@gmail.com>
2013-08-24 14:57:18 -04:00