Currently the IOMMU code calls pm_runtime_get/put on the GPU or display
device before doing an IOMMU operation. This was because the IOMMU
driver usually didn't do power control of its own, and since the
hardware shared the same clocks and power as the respective multimedia
device this was an easy way to make sure that power was available.
Now two things have changed. First, the SMMU devices can do their own
power control, and more importantly, bringing up the a6xx GPU isn't as
easy as turning on some clocks: to bring the GPU up we need the GMU,
which itself needs the IOMMU, so we have a chicken-and-egg problem.
Luckily this is easily fixed by removing the pm_runtime calls from the
functions and letting the device link to the IOMMU device handle the magic.
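As a minimal sketch of the mechanism (assuming consumer/supplier device
pointers gpu_dev and smmu_dev; in practice the link is created by the
IOMMU layer rather than open-coded in drm/msm):

	#include <linux/device.h>

	/* With DL_FLAG_PM_RUNTIME, runtime-PM state propagates from the
	 * consumer (GPU) to the supplier (SMMU), so the IOMMU map/unmap
	 * paths no longer need explicit pm_runtime_get/put calls.
	 */
	static int link_gpu_to_smmu(struct device *gpu_dev,
				    struct device *smmu_dev)
	{
		struct device_link *link;

		link = device_link_add(gpu_dev, smmu_dev,
				       DL_FLAG_PM_RUNTIME |
				       DL_FLAG_AUTOREMOVE_CONSUMER);
		return link ? 0 : -ENODEV;
	}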
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Tested-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
The scatter-gather table doesn't need to be passed to the MMU unmap
function.
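A sketch of the interface change (signatures approximated from context,
not copied from the patch):

	/* before: the sg table was passed but never needed for unmap */
	int (*unmap)(struct msm_mmu *mmu, uint64_t iova,
		     struct sg_table *sgt, unsigned len);

	/* after: unmapping only needs the iova and the length */
	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, unsigned len);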
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The error check on ret for a negative return value always fails because
the return value of iommu_map_sg() is unsigned and can never be
negative.
Detected with Coccinelle:
drivers/gpu/drm/msm/msm_iommu.c:69:9-12: WARNING: Unsigned expression
compared with zero: ret < 0
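A sketch of the corrected check: iommu_map_sg() returns the number of
bytes mapped (a size_t, 0 on failure), so compare against the expected
length instead of testing for a negative value (variable names
assumed):

	size_t mapped;

	mapped = iommu_map_sg(domain, iova, sgt->sgl, sgt->nents, prot);
	if (mapped < len)
		return -ENOMEM;	/* nothing, or only part of the range, mapped */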
Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
CC: Rob Clark <robdclark@gmail.com>
CC: David Airlie <airlied@linux.ie>
CC: Julia Lawall <julia.lawall@lip6.fr>
CC: linux-arm-msm@vger.kernel.org
CC: dri-devel@lists.freedesktop.org
CC: freedreno@lists.freedesktop.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Sean Paul <seanpaul@chromium.org>
This significantly simplifies things, since iommu_unmap() can unmap an
entire iova range in a single call.
(If backporting to a downstream kernel you might need to revert this.
Or at least double-check the older iommu implementation.)
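A sketch of the simplified unmap path, assuming domain/iova/len are in
scope (the iommu driver splits the range into pages internally):

	/* one call unmaps the whole range; no per-sg-entry loop needed */
	size_t unmapped = iommu_unmap(domain, iova, len);

	if (unmapped != len)
		dev_warn(dev, "unmapped %zu of %zu bytes\n",
			 unmapped, (size_t)len);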
Signed-off-by: Rob Clark <robdclark@gmail.com>
For a5xx the GPU is 64-bit, so we need to change iova to 64 bits
everywhere. On the display side, iova is still 32-bit, so it can ignore
the upper bits. (Although all the armv8 devices have an IOMMU that can
map a 64-bit PA to a 32-bit iova.)
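At the interface level the effect is just a wider iova type (a sketch,
names approximated):

	/* was: uint32_t iova -- too narrow for the a5xx address space */
	int (*map)(struct msm_mmu *mmu, uint64_t iova,
		   struct sg_table *sgt, unsigned len, int prot);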
Signed-off-by: Rob Clark <robdclark@gmail.com>
The msm_iommu_map/unmap funcs have debug prints to show the list of
VA:PA mappings. Use the correct variable to print the VAs.
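A sketch of the kind of fix involved (format string and variable names
approximated, not copied from the patch):

	/* the VA column must print the iova, not some other local */
	VERB("map[%d]: %08x %08x(%x)", i, iova, pa, bytes);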
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The u32 type used to pass physical addresses to iommu_map() can't
accommodate 64-bit addresses. Move to dma_addr_t to ensure wrong
addresses aren't handed to the IOMMU driver.
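A sketch of the type change in the map loop (surrounding code assumed):

	/* was: u32 pa = ...; a u32 silently truncates >32-bit addresses */
	dma_addr_t pa = sg_phys(sg) - sg->offset;

	ret = iommu_map(domain, da, pa, bytes, prot);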
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Avoid casts from pointers to fixed-size integers, which make the
compiler warn. Print virtual memory addresses using %p instead. Also
turn a couple of %d/%x specifiers into %zu/%zd/%zx to avoid further
warnings due to mismatched format strings.
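Constructed examples of both kinds of fixes (not copied from the
patch):

	/* pointers: print with %p instead of casting to a fixed-size int */
	dev_dbg(dev, "vaddr=%p\n", vaddr);	/* not: %x with (u32)vaddr */

	/* size_t/ssize_t values: use %zu/%zd/%zx instead of %d/%x */
	dev_dbg(dev, "mapped %zu bytes\n", size);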
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
87e956e9 changed the fault handler to return -ENOSYS, which causes the
iommu driver to print out a huge splat. Which wouldn't be quite so bad
if nothing ever faulted. But it seems like some EXA composite operations
generate quite a lot of (seemingly harmless) faults. That is probably a
userspace problem, but the huge increase in verbosity from iommu fault
dumps makes things kind of unusable.
We probably should actually log *some* message (not conditional on
drm.debug). But ratelimit it.
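A minimal sketch of a ratelimited handler (the signature matches what
iommu_set_fault_handler() expects; the message text is illustrative):

	static int msm_fault_handler(struct iommu_domain *domain,
				     struct device *dev, unsigned long iova,
				     int flags, void *arg)
	{
		pr_warn_ratelimited("*** fault: iova=%08lx, flags=%d\n",
				    iova, flags);
		return 0;	/* handled: the iommu driver stays quiet */
	}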
Signed-off-by: Rob Clark <robdclark@gmail.com>
Downstream kernel IOMMU had a non-standard way of dealing with multiple
devices and multiple ports/contexts. We don't need that on the upstream
kernel, so rip out the crazy.
Note that we have to move the pinning of the ringbuffer to after the
IOMMU is attached. No idea how that managed to work properly on the
downstream kernel.
For now, I am leaving the IOMMU port name stuff in place, to simplify
things for folks trying to backport the latest drm/msm to device
kernels. Once we no longer have to care about pre-DT kernels, we can
drop this and instead backport the upstream IOMMU driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
If probe fails after IOMMU is attached, we need to detach in order to
clean up properly. Before this change, IOMMU faults would occur if the
probe failed (-EPROBE_DEFER).
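A sketch of the error path, with names assumed rather than taken from
the patch:

	ret = mmu->funcs->attach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
	if (ret)
		goto fail;

	ret = later_probe_step(dev);	/* hypothetical step that may defer */
	if (ret)
		goto fail_detach;

	return 0;

fail_detach:
	/* without this, the stale attach causes IOMMU faults on re-probe */
	mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
fail:
	return ret;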
Signed-off-by: Stephane Viau <sviau@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Use the mm.h definition.
Cc: David Airlie <airlied@linux.ie>
Cc: Rob Clark <robdclark@gmail.com>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add support for adreno 330. Not too much is different: just a few
changes in initial configuration, plus setting the OCMEM base.
Userspace support is already in upstream mesa.
Note that the existing DT code is simply using the bindings from the
downstream android kernel, to simplify porting of this driver to
existing devices. These do not constitute any committed/stable DT ABI.
The addition of proper DT bindings will be a subsequent patch, at which
point (as best as possible) I will try to support either upstream
bindings or what is found in the downstream android kernel, so that
existing device DT files can be used.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add a VRAM carveout that is used for systems which do not have an
IOMMU. The VRAM carveout uses CMA. The arch code must set up a CMA pool
for the device (preferably in highmem; a 256m-512m VRAM pool in lowmem
is not cool). The user can configure the VRAM pool size using the
msm.vram module param.
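The parameter looks roughly like this (a sketch; the default value is
illustrative):

	static char *vram = "16m";
	MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
	module_param(vram, charp, 0);

Booting with msm.vram=64m on the kernel command line would then select
a 64 MiB pool.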
Technically, the abstraction of IOMMU behind msm_mmu is not strictly
needed, but it simplifies the GEM code a bit, and will be useful later
when I add support for a2xx devices with GPUMMU, so I decided to keep
this part.
It appears to be possible to configure the GPU to restrict access to
addresses within the VRAM pool, but this is not done yet. So for now
the GPU will refuse to load if there is no MMU of some sort. Once
address-based limits are supported and tested to confirm that we aren't
giving the GPU access to arbitrary memory, this restriction can be
lifted.
Signed-off-by: Rob Clark <robdclark@gmail.com>