Commit Graph

6 Commits

Ritesh Harjani
006f841db1 arm: dma-iommu: Clean up redundant variable
mapping->size can be derived from mapping->bits << PAGE_SHIFT,
which makes mapping->size redundant.

Clean this up.

Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-05-20 13:43:26 +02:00
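
A minimal user-space sketch of the simplification described above: the mapping size need not be stored separately, because it can always be recomputed from the number of bitmap bits. The structure is a trimmed-down stand-in, not the kernel's dma_iommu_mapping, and PAGE_SHIFT here assumes the common 4 KiB page size.

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumes 4 KiB pages */

/* simplified stand-in for the real dma_iommu_mapping structure */
struct mapping_sketch {
	size_t bits;	/* number of IOVA pages tracked by the bitmap */
	/* size_t size; -- redundant: always equal to bits << PAGE_SHIFT */
};

static size_t mapping_size(const struct mapping_sketch *m)
{
	return m->bits << PAGE_SHIFT;	/* derive the size on demand */
}

int main(void)
{
	struct mapping_sketch m = { .bits = 32768 };	/* 128 MiB of IOVA space */

	printf("mapping covers %zu bytes\n", mapping_size(&m));
	return 0;
}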
Marek Szyprowski
68efd7d2fb arm: dma-mapping: remove order parameter from arm_iommu_create_mapping()
The 'order' parameter of the IOMMU-aware dma-mapping implementation was
introduced mainly as a hack to reduce the size of the bitmap used for
tracking the IO virtual address space. Now that the bitmap can be resized
dynamically, this hack is no longer needed and can be removed without any
impact on the client devices. This makes the parameters of
arm_iommu_create_mapping() much easier to understand: the 'size'
parameter now means the maximum supported IO address space size.

The code allocates (resizes) the bitmap in chunks, ensuring that a single
chunk is not larger than a single memory page, to avoid unreliable
allocations larger than PAGE_SIZE in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-02-28 11:55:18 +01:00
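
A hedged sketch of how platform setup code might call the reworked interface, where 'size' is simply the maximum supported IO address space size and no 'order' argument is passed any more. The helper name, bus pointer, base address and 128 MiB size are illustrative; the release call on the error path is assumed to be the usual counterpart of creating a mapping.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <asm/dma-iommu.h>

/* hypothetical setup helper: 128 MiB of IO address space at 0x80000000 */
static int example_setup_iommu_mapping(struct device *dev)
{
	struct dma_iommu_mapping *mapping;
	int ret;

	mapping = arm_iommu_create_mapping(&platform_bus_type,
					   0x80000000, SZ_128M);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	ret = arm_iommu_attach_device(dev, mapping);
	if (ret)
		arm_iommu_release_mapping(mapping);

	return ret;
}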
Andreas Herrmann
4d852ef8c2 arm: dma-mapping: Add support to extend DMA IOMMU mappings
Instead of using just one bitmap to keep track of IO virtual addresses
(handed out for IOMMU use), introduce an array of bitmaps. This allows
us to extend existing mappings when running out of IOVA space in the
initial mapping.

If there is not enough space in the mapping to service an IO virtual
address allocation request, __alloc_iova() tries to extend the mapping
-- by allocating another bitmap -- and makes another allocation
attempt using the freshly allocated bitmap.

This allows ARM IOMMU drivers to start with a decent initial size when
a dma_iommu_mapping is created and still avoid running out of IO
virtual addresses for the mapping.

Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[mszyprow: removed the extensions parameter of the arm_iommu_create_mapping()
 function, which will be modified in the next patch anyway, and also removed
 some debug messages about extending the bitmap]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2014-02-28 11:55:18 +01:00
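
To make the extension scheme concrete, below is a small user-space model of the strategy described above: an array of fixed-size bitmaps that grows by one bitmap whenever a request cannot be satisfied, followed by a retry. It is a deliberately simplified illustration (one byte per bit, no locking, no alignment handling), not the kernel's __alloc_iova().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BITS_PER_BITMAP 1024	/* pages tracked per extension */
#define MAX_EXTENSIONS	8

struct iova_space {
	unsigned char *bitmaps[MAX_EXTENSIONS];	/* one byte per page, for simplicity */
	int nr_bitmaps;
};

/* Try to find and reserve 'count' consecutive free pages in one bitmap. */
static int find_free_run(unsigned char *bm, int count)
{
	for (int start = 0; start + count <= BITS_PER_BITMAP; start++) {
		int i;

		for (i = 0; i < count && !bm[start + i]; i++)
			;
		if (i == count) {
			memset(bm + start, 1, count);	/* mark pages as used */
			return start;
		}
	}
	return -1;
}

/* Allocate 'count' pages, extending the space with a new bitmap if needed. */
static long alloc_iova_pages(struct iova_space *s, int count)
{
	for (int i = 0; i < s->nr_bitmaps; i++) {
		int start = find_free_run(s->bitmaps[i], count);

		if (start >= 0)
			return (long)i * BITS_PER_BITMAP + start;
	}

	if (s->nr_bitmaps == MAX_EXTENSIONS)
		return -1;	/* address space exhausted */

	s->bitmaps[s->nr_bitmaps] = calloc(BITS_PER_BITMAP, 1);
	if (!s->bitmaps[s->nr_bitmaps])
		return -1;
	s->nr_bitmaps++;

	return alloc_iova_pages(s, count);	/* retry using the fresh bitmap */
}

int main(void)
{
	struct iova_space s = { .nr_bitmaps = 0 };

	printf("first allocation at page %ld\n", alloc_iova_pages(&s, 16));
	printf("second allocation at page %ld\n", alloc_iova_pages(&s, 1020));
	return 0;
}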
Hiroshi Doyu
6fe3675803 ARM: dma-mapping: Add arm_iommu_detach_device()
The counterpart of arm_iommu_attach_device().

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25 15:30:41 +01:00
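
A short sketch of the attach/detach pairing this commit completes. The teardown helper and its calling context are hypothetical, and releasing the mapping afterwards is assumed to mirror the earlier create/attach sequence.

#include <linux/device.h>
#include <asm/dma-iommu.h>

/*
 * Hypothetical teardown path for a device that was previously attached
 * with arm_iommu_attach_device(dev, mapping).
 */
static void example_teardown_iommu_mapping(struct device *dev,
					   struct dma_iommu_mapping *mapping)
{
	/* detach the device from its per-mapping IO address space ... */
	arm_iommu_detach_device(dev);
	/* ... and drop the mapping obtained from arm_iommu_create_mapping() */
	arm_iommu_release_mapping(mapping);
}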
Laurent Pinchart
3e3a182328 ARM: iommu: Include linux/kref.h in asm/dma-iommu.h
The dma_iommu_mapping structure defined in asm/dma-iommu.h embeds a
struct kref, so include the appropriate header file.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25 15:30:40 +01:00
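
For illustration, a trimmed-down header fragment showing why the include is needed: embedding a struct kref by value requires the full type definition from linux/kref.h, so the header cannot rely on a forward declaration or on indirect inclusion by its users. The fields shown are a sketch, not the complete asm/dma-iommu.h structure.

#include <linux/kref.h>		/* struct kref is embedded by value below */
#include <linux/types.h>	/* dma_addr_t */

/* simplified sketch of the structure in asm/dma-iommu.h */
struct dma_iommu_mapping_sketch {
	dma_addr_t	base;	/* start of the IO address space */
	struct kref	kref;	/* embedded object: the full definition is needed */
};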
Marek Szyprowski
4ce63fcd91 ARM: dma-mapping: add support for IOMMU mapper
This patch adds a complete implementation of the DMA-mapping API for
devices which have IOMMU support.

This implementation tries to optimize dma address space usage by remapping
all possible physical memory chunks into a single dma address space chunk.

DMA address space is managed on top of the bitmap stored in the
dma_iommu_mapping structure, which is kept in device->archdata. Platform setup
code has to initialize the parameters of the dma address space (base address,
size, allocation precision order) with the arm_iommu_create_mapping() function.
To reduce the size of the bitmap, all allocations are aligned to the
specified order of base 4 KiB pages.

The dma_alloc_* functions allocate physical memory in chunks, each with
the alloc_pages() function, to avoid failing if the physical memory gets
fragmented. In the worst case the allocated buffer is composed of 4 KiB page
chunks.

The dma_map_sg() function minimizes the total number of dma address space
chunks by merging physical memory chunks into one larger dma address
space chunk. If the requested chunk (scatter list entry) boundaries
match physical page boundaries, most dma_map_sg() requests will
result in creating only one chunk in the dma address space.

dma_map_page() simply creates a mapping for the given page(s) in the dma
address space.

All dma functions also perform the required cache operations, like their
counterparts in the arm linear physical memory mapping implementation.

This patch contains code and fixes kindly provided by:
- Krishna Reddy <vdumpa@nvidia.com>,
- Andrzej Pietrasiewicz <andrzej.p@samsung.com>,
- Hiroshi DOYU <hdoyu@nvidia.com>

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
2012-05-21 15:06:23 +02:00
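
As a usage sketch for the interface this commit introduces, platform setup code would create a mapping and attach a device roughly as below. The helper name, bus pointer, base address, size and order values are illustrative only; note that the 'order' argument shown here is the one removed again by the later commit further up this list.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <asm/dma-iommu.h>

/*
 * Hypothetical platform setup for the initial version of the mapper:
 * base address, size and allocation precision order are passed to
 * arm_iommu_create_mapping() as described in the message above.
 */
static int example_platform_iommu_init(struct device *dev)
{
	struct dma_iommu_mapping *mapping;

	/* 128 MiB IO address space; allocations aligned to 2^4 pages = 64 KiB */
	mapping = arm_iommu_create_mapping(&platform_bus_type,
					   0x80000000, SZ_128M, 4);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	return arm_iommu_attach_device(dev, mapping);
}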
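
And a hedged sketch of a driver using the standard DMA-mapping API on top of this mapper: once the device is attached, ordinary dma_alloc_coherent() and dma_map_sg() calls are served by the IOMMU-aware implementation. The buffer size, scatterlist handling and error paths are illustrative only.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/sizes.h>

static int example_dma_usage(struct device *dev,
			     struct scatterlist *sgl, int nents)
{
	dma_addr_t dma_handle;
	void *cpu_addr;
	int mapped;

	/*
	 * Physically this buffer may be assembled from many page-sized
	 * chunks, but it appears as one contiguous range in the IO
	 * address space of the device.
	 */
	cpu_addr = dma_alloc_coherent(dev, SZ_1M, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/*
	 * With page-aligned scatterlist entries, the mapper can merge the
	 * entries into a single IO address space chunk.
	 */
	mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	if (!mapped) {
		dma_free_coherent(dev, SZ_1M, cpu_addr, dma_handle);
		return -EIO;
	}

	/* ... program the device with dma_handle and sg_dma_address(sgl) ... */

	dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
	dma_free_coherent(dev, SZ_1M, cpu_addr, dma_handle);
	return 0;
}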