mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git
synced 2024-11-26 03:40:55 +07:00
df05c6f6e0
This patch adds the DMA_ATTR_ALLOC_SINGLE_PAGES attribute to the DMA-mapping
subsystem.  This attribute can be used as a hint to the DMA-mapping subsystem
that it's likely not worth it to try to allocate large pages behind the
scenes.  Large pages are likely to make an IOMMU TLB work more efficiently but
may not be worth it.  See the Documentation contained in this patch for more
details about this attribute and when to use it.

Note that the name of the hint (DMA_ATTR_ALLOC_SINGLE_PAGES) is loosely based
on the name MADV_NOHUGEPAGE.  Just as there is MADV_NOHUGEPAGE vs.
MADV_HUGEPAGE we could also add an "opposite" attribute to
DMA_ATTR_ALLOC_SINGLE_PAGES.  Without having the "opposite" attribute the lack
of DMA_ATTR_ALLOC_SINGLE_PAGES means "use your best judgement about whether to
use small pages or large pages".

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
129 lines
5.7 KiB
Plaintext

DMA attributes
==============

This document describes the semantics of the DMA attributes that are
defined in linux/dma-attrs.h.

DMA_ATTR_WRITE_BARRIER
----------------------

DMA_ATTR_WRITE_BARRIER is a (write) barrier attribute for DMA.  DMA
to a memory region with the DMA_ATTR_WRITE_BARRIER attribute forces
all pending DMA writes to complete, and thus provides a mechanism to
strictly order DMA from a device across all intervening busses and
bridges.  This barrier is not specific to a particular type of
interconnect; it applies to the system as a whole, and so its
implementation must account for the idiosyncrasies of the system all
the way from the DMA device to memory.

As an example of a situation where DMA_ATTR_WRITE_BARRIER would be
useful, suppose that a device does a DMA write to indicate that data is
ready and available in memory.  The DMA of the "completion indication"
could race with the data DMA.  Mapping the memory used for completion
indications with DMA_ATTR_WRITE_BARRIER would prevent the race.
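
A minimal sketch of that example, assuming a hypothetical driver whose device
DMA-writes a 32-bit completion word after the data; the function name and
buffer are illustrative, only the attribute handling follows the pattern
described above:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>

  /* Map the 4-byte completion word so that the device's DMA write to it
   * is ordered after all of its earlier data writes. */
  static dma_addr_t map_completion_word(struct device *dev, u32 *status)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
          return dma_map_single_attrs(dev, status, sizeof(*status),
                                      DMA_FROM_DEVICE, &attrs);
  }

The returned handle would still be checked with dma_mapping_error() before
being handed to the device.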

DMA_ATTR_WEAK_ORDERING
----------------------

DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
may be weakly ordered, that is, reads and writes may pass each other.

Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
those that do not will simply ignore the attribute and exhibit default
behavior.
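
As a hedged illustration (the scatterlist and function below are
hypothetical), a driver that can tolerate reordering on a bulk data buffer
could pass the attribute when mapping; platforms without support simply
ignore it:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/scatterlist.h>

  /* Map a scatter-gather list while allowing the interconnect to reorder
   * the resulting reads and writes on platforms that support it. */
  static int map_bulk_buffer(struct device *dev, struct scatterlist *sgl,
                             int nents)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_WEAK_ORDERING, &attrs);
          return dma_map_sg_attrs(dev, sgl, nents, DMA_TO_DEVICE, &attrs);
  }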

DMA_ATTR_WRITE_COMBINE
----------------------

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
buffered to improve performance.

Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
those that do not will simply ignore the attribute and exhibit default
behavior.
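
A short sketch, assuming a hypothetical buffer that the CPU fills with mostly
sequential writes (for example a frame of pixel data); everything except the
attribute handling is illustrative:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/gfp.h>

  /* Allocate a buffer the CPU will fill with streaming writes; where
   * write-combining is implemented, those writes get batched. */
  static void *alloc_frame_buffer(struct device *dev, size_t size,
                                  dma_addr_t *dma_handle)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
          return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL, &attrs);
  }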

DMA_ATTR_NON_CONSISTENT
-----------------------

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
consistent or non-consistent memory as it sees fit.  By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.
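
A rough sketch of what "having the sync points in the driver" can look like,
assuming the platform may return non-consistent memory and implements
dma_cache_sync() for it; the buffer, size and function names are hypothetical:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/errno.h>
  #include <linux/gfp.h>

  static void *buf;               /* hypothetical driver buffer */
  static dma_addr_t buf_dma;

  /* Ask for memory that may be non-consistent; the driver promises to
   * sync it explicitly around each DMA transfer. */
  static int alloc_noncoherent_buf(struct device *dev, size_t size)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
          buf = dma_alloc_attrs(dev, size, &buf_dma, GFP_KERNEL, &attrs);
          return buf ? 0 : -ENOMEM;
  }

  /* One of the required sync points: the CPU has finished writing and
   * the buffer is about to be handed to the device. */
  static void give_buf_to_device(struct device *dev, size_t size)
  {
          dma_cache_sync(dev, buf, size, DMA_TO_DEVICE);
  }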

DMA_ATTR_NO_KERNEL_MAPPING
--------------------------

DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
virtual mapping for the allocated buffer.  On some architectures creating
such a mapping is a non-trivial task and consumes very limited resources
(like kernel virtual address space or DMA consistent address space).
Buffers allocated with this attribute can only be passed to user space
by calling dma_mmap_attrs().  By using this API, you are guaranteeing
that you won't dereference the pointer returned by dma_alloc_attrs().  You
can treat it as a cookie that must be passed to dma_mmap_attrs() and
dma_free_attrs().  Make sure that both of these also get this attribute
set on each call.

Since it is optional for platforms to implement
DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
attribute and exhibit default behavior.
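
A sketch of the allocate / mmap / free flow described above, with the same
attribute (and the same cookie) passed to all three calls; the global
variables and sizes are only for illustration:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/errno.h>
  #include <linux/gfp.h>
  #include <linux/mm.h>

  static void *cookie;            /* opaque, never dereferenced */
  static dma_addr_t cookie_dma;
  static DEFINE_DMA_ATTRS(no_map_attrs);

  static int alloc_user_buffer(struct device *dev, size_t size)
  {
          dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &no_map_attrs);
          cookie = dma_alloc_attrs(dev, size, &cookie_dma, GFP_KERNEL,
                                   &no_map_attrs);
          return cookie ? 0 : -ENOMEM;
  }

  static int mmap_user_buffer(struct device *dev, struct vm_area_struct *vma,
                              size_t size)
  {
          /* Same attribute again: the cookie is only valid with it. */
          return dma_mmap_attrs(dev, vma, cookie, cookie_dma, size,
                                &no_map_attrs);
  }

  static void free_user_buffer(struct device *dev, size_t size)
  {
          dma_free_attrs(dev, size, cookie, cookie_dma, &no_map_attrs);
  }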

DMA_ATTR_SKIP_CPU_SYNC
----------------------

By default the dma_map_{single,page,sg} family of functions transfers a
given buffer from the CPU domain to the device domain.  Some advanced use
cases might require sharing a buffer between more than one device.  This
requires having a mapping created separately for each device and is
usually performed by calling dma_map_{single,page,sg} more than once for
the given buffer, with a device pointer for each device taking part in
the buffer sharing.  The first call transfers the buffer from the 'CPU'
domain to the 'device' domain, which synchronizes the CPU caches for the
given region (usually it means that the cache has been flushed or
invalidated depending on the DMA direction).  However, subsequent calls
to dma_map_{single,page,sg}() for other devices will perform exactly the
same synchronization operation on the CPU cache.  CPU cache
synchronization might be a time-consuming operation, especially if the
buffers are large, so it is highly recommended to avoid it if possible.

DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
the CPU cache for the given buffer, assuming that it has already been
transferred to the 'device' domain.  This attribute can also be used with
the dma_unmap_{single,page,sg} family of functions to force the buffer to
stay in the device domain after releasing a mapping for it.  Use this
attribute with care!
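
A sketch of the two-device sharing case described above (dev1, dev2 and the
buffer are hypothetical): the first mapping performs the cache maintenance,
the second one skips it:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/errno.h>

  /* Map one buffer for two devices: only the first mapping synchronizes
   * the CPU cache, the second reuses that work. */
  static int map_shared_buffer(struct device *dev1, struct device *dev2,
                               void *buf, size_t size,
                               dma_addr_t *addr1, dma_addr_t *addr2)
  {
          DEFINE_DMA_ATTRS(attrs);

          *addr1 = dma_map_single(dev1, buf, size, DMA_TO_DEVICE);
          if (dma_mapping_error(dev1, *addr1))
                  return -ENOMEM;

          dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
          *addr2 = dma_map_single_attrs(dev2, buf, size, DMA_TO_DEVICE,
                                        &attrs);
          if (dma_mapping_error(dev2, *addr2)) {
                  dma_unmap_single(dev1, *addr1, size, DMA_TO_DEVICE);
                  return -ENOMEM;
          }
          return 0;
  }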

DMA_ATTR_FORCE_CONTIGUOUS
-------------------------

By default the DMA-mapping subsystem is allowed to assemble the buffer
allocated by the dma_alloc_attrs() function from individual pages if it
can be mapped as a contiguous chunk into the device's DMA address space.
Specifying this attribute forces the allocated buffer to be contiguous in
physical memory as well.
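
A brief sketch for a hypothetical device that needs the backing memory to be
physically contiguous; only the attribute handling is meaningful here:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/gfp.h>

  /* Allocate a buffer that is contiguous in physical memory, not just in
   * the device's DMA address space. */
  static void *alloc_phys_contig(struct device *dev, size_t size,
                                 dma_addr_t *dma_handle)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_FORCE_CONTIGUOUS, &attrs);
          return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL, &attrs);
  }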

DMA_ATTR_ALLOC_SINGLE_PAGES
---------------------------

This is a hint to the DMA-mapping subsystem that it's probably not worth
the time to try to allocate memory in a way that gives better TLB
efficiency (AKA it's not worth trying to build the mapping out of larger
pages).  You might want to specify this if:
- You know that the accesses to this memory won't thrash the TLB.
  You might know that the accesses are likely to be sequential or
  that they aren't sequential but it's unlikely you'll ping-pong
  between many addresses that are likely to be in different physical
  pages.
- You know that the penalty of TLB misses while accessing the
  memory will be small enough to be inconsequential.  If you are
  doing a heavy operation like decryption or decompression this
  might be the case.
- You know that the DMA mapping is fairly transitory.  If you expect
  the mapping to have a short lifetime then it may be worth it to
  optimize allocation (avoid coming up with large pages) instead of
  getting the slight performance win of larger pages.
Setting this hint doesn't guarantee that you won't get huge pages, but it
means that we won't try quite as hard to get them.

NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
though ARM64 patches will likely be posted soon.
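
A sketch of a short-lived allocation that opts out of the large-page effort;
the device, buffer size and function name are hypothetical:

  #include <linux/dma-mapping.h>
  #include <linux/dma-attrs.h>
  #include <linux/gfp.h>

  /* Short-lived buffer: not worth spending allocation time chasing larger
   * pages just to save a few IOMMU TLB misses. */
  static void *alloc_transitory_buffer(struct device *dev, size_t size,
                                       dma_addr_t *dma_handle)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, &attrs);
          return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL, &attrs);
  }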