Convert a 0 error return code to a negative one, as returned elsewhere in the
function.
A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
identifier ret;
expression e,e1,e2,e3,e4,x;
@@
(
if (\(ret != 0\|ret < 0\) || ...) { ... return ...; }
|
ret = 0
)
... when != ret = e1
*x = \(kmalloc\|kzalloc\|kcalloc\|devm_kzalloc\|ioremap\|ioremap_nocache\|devm_ioremap\|devm_ioremap_nocache\)(...);
... when != x = e2
when != ret = e3
*if (x == NULL || ...)
{
... when != ret = e4
* return ret;
}
// </smpl>
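As an illustration, the class of bug this finds looks like the following
(hypothetical driver code; example_setup() and example_priv are made up):

    static int example_probe(struct platform_device *pdev)
    {
        struct example_priv *priv;    /* hypothetical private data */
        int ret;

        ret = example_setup(pdev);    /* hypothetical; returns 0 on success */
        if (ret < 0)
            return ret;

        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;    /* fixed: was 'return ret;', i.e. 0 */

        return 0;
    }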
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Ensure all queued descriptors are freed when the channel is released, so
that we don't leak memory.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The free function says the pl08x lock should be taken before calling
it. However, the DMA pool allocation/freeing is already properly
locked. The only thing that would need this is pool_ctr, which
happens to be a write-only variable.
Let's get rid of this, and eliminate any need for additional locking
here.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This function is now unnecessary; we can move its internals inline
instead.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that we're converted to use the generic vchan support, we can fix
the residue return from tx_status to be compliant with dmaengine. This
returns the number of bytes remaining for the _specified_ cookie, not
the number of bytes in all pending transfers on the channel.
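A minimal sketch of the intended semantics (the names here are illustrative,
not the driver's actual helpers):

    /* Residue is reported for the one descriptor matching the queried
     * cookie, not summed across everything pending on the channel. */
    static u32 example_residue(struct example_chan *c, dma_cookie_t cookie)
    {
        struct example_desc *d = example_find_desc(c, cookie);  /* hypothetical */

        return d ? d->bytes_left : 0;  /* 0 once the cookie has completed */
    }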
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert to use the virtual dma channel done list, tasklet, and
descriptor freeing.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert to use the virtual dma channel submitted/issued descriptor
lists rather than our own private lists, and use the virtual dma
channel support functions to manage these lists.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Initialize the vchan struct, and use the provided spinlock rather than
our own.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert PL08x to use the virt-dma structures.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rework the physical channel allocation mechanism to only allocate
physical channels to virtual channels when they're about to be used.
This eliminates all the complexity with holding channels while
descriptors are being prepared, which is completely unnecessary.
This also brings this driver to a state where the generic virtual DMA
code can be used with this driver, and opens up the possibility of
properly scheduling and prioritising physical DMA channels to
virtual DMA channels.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than waiting for the tasklet to run, we can start the next
descriptor from interrupt context, as soon as we know that the
previous descriptor has completed.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Our behaviour wasn't correct; issue_pending is supposed to be called
before any submitted descriptors are available for processing by the
DMA engine. Split the pend_list in two, one for submitted descriptors
and another list for issued descriptors.
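A minimal sketch of the resulting flow (the list names are illustrative;
the driver's actual naming may differ):

    static void example_issue_pending(struct dma_chan *chan)
    {
        struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
        unsigned long flags;

        spin_lock_irqsave(&plchan->lock, flags);
        /* Submitted descriptors become visible to the hardware path only
         * here, when they move from the submitted to the issued list. */
        list_splice_tail_init(&plchan->pend_list, &plchan->issued_list);
        spin_unlock_irqrestore(&plchan->lock, flags);
    }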
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than code the de-queue of the txd several times, move that into
the start_txd function. Rename this to better illustrate what it's
now doing, and call this function when starting a delayed memcpy().
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As we no longer need to pass a descriptor to prep_phy_channel(), we
don't need to keep track of the descriptor which is waiting for a
channel to become available. So let's get rid of it.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the DMA request muxing into the slave prepare code and txd
release/completion code. This means we only hold the DMA request
mux while there are descriptors waiting to be started or in
progress.
This leaves txd->direction as a write-only variable; remove it.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert PL08x to use a list of completed descriptors rather than
merely relying upon a single pointer. This makes it possible to
schedule the tasklet for other purposes, and makes our behaviour
similar to virt-dma.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Keep track of the number of descriptors currently using a MUX setting
on a per-channel basis. This allows us to know when we have descriptors
queued somewhere which have been assigned a DMA request signal.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the signal handling out of the physical channel structure into
the virtual channel structure, where it belongs, as it has more
to do with the virtual channel than the physical one.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Split the DMA request mux signal handling from the physical channel
allocation code. The physical channel has very little to do with the
DMA request input which will be used, so these should be two separate
operations.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Get rid of the unnecessary checks in dma_slave_config utilizing
the DMA direction. This allows us to move the computation of
cctl to the prepare function.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ignore the direction argument in dma_slave_config, and configure both
directions independently. We still check that the configuration for
the intended direction is valid; this check is just for debugging at
present and will eventually be dropped.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Extract the functionality from dma_slave_config to generate the cctl
values for a given bus width and burst size. This allows us to use
this elsewhere in the driver, namely the prepare functions.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the bus and transfer increment selection to the DMA prepare
function rather than the slave configuration function.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As we now store the dma_slave_config in pl08x_dma_chan, we don't need
to store this separately. Use the one in dma_slave_config directly.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a dma_slave_config struct to struct pl08x_dma_chan, and move the
src_addr/dst_addr arguments into this struct. This is a step away
from using the dma_slave_config's direction member.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the driver private data structures into the driver itself, rather
than having them exposed to everyone in a header file.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Try to avoid dereferencing the DMA engine's channel struct in these
platform helpers; instead, pass a pointer to the channel data into
get_signal(), and the returned signal number to put_signal().
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Circular buffers are not handled in this way; we have a separate API
call now to set up circular buffers. So let's not mislead people with
this bool.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The pl08x_driver_data spinlock is only ever initialized. Nothing else
uses it. Let's get rid of it.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
db8196df4 (dmaengine: move drivers to dma_transfer_direction) missed
fixing up the "DMA_NONE" case.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The runtime PM support conflicts with the generic AMBA bus PM, and also
causes a potential deadlock with the PL011 driver as it results in
interrupts being enabled beneath a spinlock.
I don't presently see any solution to this other than by removing the
runtime PM support entirely from the DMA engine driver. Alternative
suggestions welcome.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
sg->length may or may not contain the length of the dma region to transfer,
depending on the architecture - sg_dma_len(sg) always will though. For the
architectures which use the drivers modified by this patch it is probably the
case that sg->length contains the dma transfer length, but to be consistent
and future proof, change them to use sg_dma_len.
To quote Russell King:
sg->length is meaningless to something performing DMA.
In cases where sg_dma_len(sg) and sg->length are the same storage, then
there's no problem. But scatterlists _can_ (and on some architectures do) get
split - especially when you have an IOMMU which can allow you to combine a
scatterlist into fewer entries.
So, anything using sg->length for the size of a scatterlist's DMA transfer
_after_ a call to dma_map_sg() is almost certainly buggy.
The patch has been generated using the following coccinelle patch:
<smpl>
@@
struct scatterlist *sg;
expression X;
@@
-sg[X].length
+sg_dma_len(&sg[X])
@@
struct scatterlist *sg;
@@
-sg->length
+sg_dma_len(sg)
</smpl>
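For example, a driver iterating a mapped scatterlist should look roughly like
this (program_transfer() stands in for the driver's transfer setup):

    int i, nents;
    struct scatterlist *sg;

    nents = dma_map_sg(dev, sgl, sg_len, DMA_TO_DEVICE);
    for_each_sg(sgl, sg, nents, i)
        /* sg_dma_len(sg), not sg->length: the mapped DMA length may
         * differ from the CPU-side length after dma_map_sg(). */
        program_transfer(sg_dma_address(sg), sg_dma_len(sg));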
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
dmaengine drivers should always use sg_dma_address instead of sg_phys to get the
addresses for the transfer from a sg element.
To quote Russell King:
sg_phys(sg) of course has nothing to do with DMA addresses. It's the
physical address _to the CPU_ of the memory associated with the scatterlist
entry. That may, or may not have the same value for the DMA engine,
particularly if IOMMUs are involved.
And if these drivers are used on ARM, they must be fixed, sooner rather
than later. There are patches in the works which will mean we end up
with IOMMU support in the DMA mapping layer, which means everything I've
said above will become reality.
The patch has been generated using the following coccinelle patch:
<smpl>
@@
struct scatterlist *sg;
@@
-sg_phys(sg)
+sg_dma_address(sg)
</smpl>
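In other words (illustrative fragment):

    /* sg_phys() is the CPU physical address of the entry's memory; the
     * device-visible address after dma_map_sg() may differ, e.g. behind
     * an IOMMU, so a DMA engine must use sg_dma_address() instead. */
    dma_addr_t dev_addr = sg_dma_address(sg);  /* correct for the device */
    phys_addr_t cpu_addr = sg_phys(sg);        /* CPU view only */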
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
The Nomadik PL080 variant has some extra protection bits that
may be set, so we need to check these bits to see if the
channels are actually available for the DMAengine to use.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Alim Akhtar <alim.akhtar@gmail.com>
Cc: Alessandro Rubini <rubini@gnudd.com>
Reviewed-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
When a client calls pl08x_control with DMA_TERMINATE_ALL, it is correct
to terminate and release the phy channel currently in use (if one is in use),
but the phychan_hold counter must also be reset (otherwise it could get
trapped in an unbalanced state).
Signed-off-by: Davide Ciminaghi <ciminaghi@gnudd.com>
Reviewed-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
For some reason I can't figure out, we're reading the PL080_INT_STATUS
register instead of PL080_TC_STATUS when checking for the terminal
count. The PL080_INT_STATUS is a logical OR between the error and
terminal count status register and may not report what we want it
to, especially if there is an error and a terminal count at the same
time and the former is not lowered in time for the check in the TC
register. Make sure we read what we're actually interested in.
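The essence of the fix, sketched with the pl08x register definitions:

    /* Read the dedicated terminal-count status rather than the combined
     * error|TC status reported by PL080_INT_STATUS. */
    u32 tc = readl(pl08x->base + PL080_TC_STATUS);  /* was PL080_INT_STATUS */

    if (tc & BIT(ch->id))
        handle_tc(ch);  /* hypothetical: channel really hit terminal count */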
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Viresh Kumar <viresh.kumar@st.com>
Cc: Alim Akhtar <alim.akhtar@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Add context parameter to device_prep_slave_sg() and device_prep_dma_cyclic()
interfaces to allow passing client/target specific information associated
with the data transfer.
Modify all affected DMA engine drivers.
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Provide a common function to initialize a channel's cookie values.
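The helper amounts to roughly the following (DMA_MIN_COOKIE being the
smallest valid cookie value):

    static inline void dma_cookie_init(struct dma_chan *chan)
    {
        chan->cookie = DMA_MIN_COOKIE;
        chan->completed_cookie = DMA_MIN_COOKIE;
    }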
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Now that we have the completed cookie in the dma_chan structure, we
can consolidate the tx_status functions by providing a function to set
the txstate structure and return the DMA status. We also provide
a separate helper to set the residue for cookies which are still in
progress.
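Roughly, the common status helper looks like this (dma_set_residue() being
the separate residue helper mentioned above):

    static inline enum dma_status dma_cookie_status(struct dma_chan *chan,
        dma_cookie_t cookie, struct dma_tx_state *state)
    {
        dma_cookie_t used = chan->cookie;
        dma_cookie_t complete = chan->completed_cookie;

        if (state) {
            state->last = complete;
            state->used = used;
            state->residue = 0;  /* dma_set_residue() fills this in later */
        }
        return dma_async_is_complete(cookie, complete, used);
    }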
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Provide a common function to do the cookie mechanics for completing
a DMA descriptor.
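Sketched, the completion helper is along these lines:

    static inline void dma_cookie_complete(struct dma_async_tx_descriptor *tx)
    {
        BUG_ON(tx->cookie < DMA_MIN_COOKIE);
        tx->chan->completed_cookie = tx->cookie;
        tx->cookie = 0;  /* descriptor no longer owns a live cookie */
    }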
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Everyone deals with assigning DMA cookies in the same way (it's part of
the API so they should be), so let's consolidate the common code into a
helper function to avoid this duplication.
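The consolidated assignment is roughly:

    static inline dma_cookie_t dma_cookie_assign(struct dma_async_tx_descriptor *tx)
    {
        struct dma_chan *chan = tx->chan;
        dma_cookie_t cookie = chan->cookie + 1;

        if (cookie < DMA_MIN_COOKIE)
            cookie = DMA_MIN_COOKIE;  /* wrap, skipping reserved values */
        tx->cookie = chan->cookie = cookie;

        return cookie;
    }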
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Add a local private header file to contain definitions and declarations
which should only be used by DMA engine drivers.
We also fix linux/dmaengine.h to use LINUX_DMAENGINE_H to guard against
multiple inclusion.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Every DMA engine implementation declares a last completed dma cookie
in their private dma channel structures. This is pointless, and
forces driver-specific code. Move this out into the common dma_chan
structure.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
[imx-sdma.c & mxs-dma.c]
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Flow controller information is now passed via the DMA_SLAVE_CONFIG option.
This patch changes the pl08x driver to use device_fc from there instead of
the platform data.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Rewrite a duplicated test to test the correct value.
The semantic match that finds this problem is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@@
expression E;
@@
(
* E
|| ... || E
|
* E
&& ... && E
)
// </smpl>
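For instance, the match would flag a test of this shape (hypothetical code,
not the actual site fixed here):

    if (!bd.srcbus.addr || !bd.srcbus.addr)  /* bug: same value tested twice */
        return -EINVAL;
    if (!bd.srcbus.addr || !bd.dstbus.addr)  /* the intended test */
        return -EINVAL;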
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
Currently, if plchan->phychan is true, we return immediately from
prep_phy_chan(). We must configure txd->ccfg and increment phychan_hold before
returning. Otherwise, the request line number wouldn't be configured in this txd.
Reported-by: Rajeev Kumar <rajeev-dlh.kumar@st.com>
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
In pl08x_free_txd(), check that the pool allocation succeeded before freeing it.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Until now, sg_len greater than one was not supported. This patch adds support
for that.
Note: if the peripheral is the flow controller, sg_len still can't be greater
than one.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Before converting the dma channel to our private data structure, first
check that the channel is indeed one which our driver registered. We
do this by ensuring that the underlying device is bound to our driver.
This avoids potential oopses if we try to reference 'plchan->name'
against a foreign driver's dma channel.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
At least on SPEAr platforms there is one peripheral, JPEG, which can be the
flow controller for a DMA transfer. Currently the DMA controller driver
doesn't support peripheral flow controller configurations.
This patch adds device_fc field in struct pl08x_channel_data, which will be used
only for slave transfers and is not used in case of mem2mem transfers.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For DMA transfers between a peripheral and memory, we shouldn't reduce the
peripheral's access width at all, as that may be a strict requirement. But we
can always reduce the width of the memory access, with some compromise in
performance. Thus, we must select the peripheral, not memory, as the master.
This also rearranges the code to make it shorter.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently lli_len is aligned to the min of the two bus widths, which looks to
be incorrect. Instead it should be aligned to the max of both widths.
Let's say total_size = 441 bytes.
MIN: let's check whether min() suits:
CASE 1: srcwidth = 1, dstwidth = 4
min(src, dst) = 1
i.e. we program the transfer size in the control reg to 441.
Now, up to 440 bytes everything is fine, but the DMAC can't transfer the last
byte to the dst, as its width is 4.
CASE 2: srcwidth = 4, dstwidth = 1
min(src, dst) = 1
i.e. we program the transfer size in the control reg to 110 (data transferred = 110 * srcwidth).
So, here too 1 byte is left over, but on the source side.
MAX: let's check whether max() suits:
CASE 3: srcwidth = 1, dstwidth = 4
max(src, dst) = 4
Aligned size is 440.
i.e. we program the transfer size in the control reg to 440.
Now, all 440 bytes will be transferred without any issue.
CASE 4: srcwidth = 4, dstwidth = 1
max(src, dst) = 4
Aligned size is 440.
i.e. we program the transfer size in the control reg to 110 (data transferred = 110 * srcwidth).
Again, all 440 bytes will be transferred without any issue.
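A sketch of the resulting rule, using kernel helpers:

    /* Align the LLI length down to the larger of the two bus widths so
     * that neither side is left with a partial transfer beat. */
    size_t width = max(bd.srcbus.buswidth, bd.dstbus.buswidth);  /* 4 above */
    size_t lli_len = rounddown(total_size, width);               /* 441 -> 440 */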
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Code for creating single-byte LLIs is present in several places. Create a
routine to avoid this redundancy.
Also, we don't need one LLI per single-byte transfer; a single LLI can do all
of the single-byte transfers.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
max_bytes_per_lli = bd.srcbus.buswidth * PL080_CONTROL_TRANSFER_SIZE_MASK;
This has been confirmed by ARM support. Below is a summary of the mail
exchange with them:
[Viresh] What is the total data to be transferred when the source and
destination bus widths are different? Suppose the source bus width is 2 bytes
and the destination is 4 bytes. In order to transfer 80 bytes, what should
the value of the TransferSize field in the control reg be: 40 or 20?
[David from ARM] The value that is programmed into the TransferSize field should
be the number of <SourceWidth> transfers needed to achieve the required data
transfer.
So, to transfer 80 bytes, with a Source Width of 2, the TransferSize field
should be programmed with:
Total transfer size
------------------- = 40
<source width>
[Viresh] Will this change if source is 4 bytes and dest is 2?
[David] Yes - the calculation then becomes:
Total transfer size
------------------- = 20
<source width>
Also, max_bytes_per_lli must be calculated after fixing the src and dest
widths, not before. So move this code to the correct place.
This patch also removes max_bytes_per_lli from an earlier print message, as at
that point max_bytes_per_lli is still unknown.
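So the programmed value is simply (illustrative):

    /* TransferSize counts source-width beats, not bytes: */
    u32 transfer_size = total_bytes / bd.srcbus.buswidth;  /* 80 / 2 = 40 */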
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The PL080 manual says: "Bursts do not cross the 1KB address boundary".
We can program the controller to cross the 1 KB boundary on a burst, and the
controller takes care of this boundary condition by itself.
The following is the discussion with ARM technical support (David):
[Viresh] The manual says: "Bursts do not cross the 1KB address boundary".
What does that actually mean? The maximum size transferable with a single LLI
is 4095 * 4 = 16380 bytes, roughly 16KB. So, if we don't have the src/dest
address aligned to the burst size, we can't use an LLI this big.
[David] There is a difference between bursts describing the total data
transferred by the DMA controller and AHB bursts. Bursts described by the
programmable parameters in the PL080 have no direct connection with the bursts
that are seen on the AHB bus.
The statement that "Bursts do not cross the 1KB address boundary" in the TRM is
referring to AHB bursts, where this limitation is a requirement of the AHB spec.
You can still issue bursts within the PL080 that are in excess of 1KB. The
PL080 will make sure that its bursts are broken down into legal AHB bursts which
will be formatted to ensure that no AHB burst crosses a 1KB boundary.
Based on the above discussion, this patch removes all code related to the
1 KB boundary, as we are not required to handle it in the driver.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently, if an error interrupt occurs, nothing is done in the interrupt
handler (beyond clearing the interrupts). We must somehow indicate to the user
that the DMA is over, whether due to an ERR interrupt or a TC interrupt.
So, this patch just schedules the existing tasklet, with a print showing which
channels an error interrupt has occurred on.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We have just executed the following in pl08x_get_phy_channel():
ch->signal = -1;
We don't have to check "ch->signal < 0", as it will always be true.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Simply writing 1 to bit 0 is sufficient; there is no need to read and clear
bits. Also, per the manual, bits 3-31 of the DMACConfiguration register are
"read undefined, write as 0",
so we must not rely on values read from bits 3-31 of this register.
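In other words, the enable can be a single write (a sketch using the pl080
register definitions):

    /* Bit 0 is the enable and bits 3-31 are "write as 0", so write the
     * register outright rather than read-modify-write: */
    writel(PL080_CONFIG_ENABLE, pl08x->base + PL080_CONFIG);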
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Insert notifiers for the runtime PM API. With this, the runtime PM layer
kicks into action where used.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For 8 memory and 16 slave channels, 35 lines are printed at boot, which is
too much; most of this output is more useful for debugging. So move a few of
these prints from dev_info to dev_dbg. Now only 3 lines are printed at boot.
This also rearranges one of the debug messages to fit into two lines.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A similar comment is also present above the routine pl08x_choose_master_bus().
Keep only one of them, and rewrite it to convey its message clearly.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As mentioned in Documentation/CodingStyle,
The preferred form for passing a size of a struct is the following:
p = kmalloc(sizeof(*p), ...);
The alternative form where struct name is spelled out hurts readability and
introduces an opportunity for a bug when the pointer variable type is changed
but the corresponding sizeof that is passed to a memory allocator is not.
This patch replaces sizeof(struct xyz) with sizeof(*ptr) at several places in the driver.
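For example (illustrative, using one of the driver's structures):

    struct pl08x_txd *txd;

    txd = kzalloc(sizeof(struct pl08x_txd), GFP_KERNEL);  /* before */
    txd = kzalloc(sizeof(*txd), GFP_KERNEL);              /* after  */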
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The header files included in the driver are not in alphabetical order;
rearrange them alphabetically.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There were a few formatting-related issues in the code. This patch fixes
them. The fixes include:
- remove extra blank lines
- align code to 80 columns
- combine several lines into one
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Something changed during the 3.1 merge window in the include files
which now causes the pl08x DMA engine driver to fail to build. Fix
this by adding the now necessary dma-mapping.h include:
drivers/dma/amba-pl08x.c: In function 'pl08x_unmap_buffers':
drivers/dma/amba-pl08x.c:1524: error: implicit declaration of function 'dma_unmap_single'
drivers/dma/amba-pl08x.c:1527: error: implicit declaration of function 'dma_unmap_page'
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The pl08x_width() function does not handle the remaining
DMA_SLAVE_BUSWIDTH_xxxx enum values, which causes gcc to emit the warnings
below:
drivers/dma/amba-pl08x.c: In function 'pl08x_width':
drivers/dma/amba-pl08x.c:1119: warning: enumeration value
'DMA_SLAVE_BUSWIDTH_UNDEFINED' not handled in switch
drivers/dma/amba-pl08x.c:1119: warning: enumeration value
'DMA_SLAVE_BUSWIDTH_8_BYTES' not handled in switch
This patch adds a default case which returns an error.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that we have separate cctl values for M>P and P>M transfers, we can
avoid calculating the cctl value each time we prepare a transaction.
Move the bus selection and increment setting to the slave configuration
and initialization functions.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Store the source/destination cctl values into the channel structure.
This moves us towards being able to avoid a configuration call each
time we use the channel.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Store the source/destination slave address separately into the channel
structure. This moves us towards being able to avoid a configuration
call each time we use the channel.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Clean up debugging when setting up the LLI list. This reduces the
amount of output while preserving the information, and makes it easier
to read.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Avoid re-selecting the LLI bus each time we create an LLI. Move it out
of the LLI setup loops.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
PL08X_WQ_PERIODMIN and PL08X_MAX_ALLOCS are not used, remove them.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Make Primecell driver probe functions take a const pointer to their
ID tables. Drivers should never modify their ID tables in their
probe handler.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If a transfer is initiated from memory to a peripheral, then data is
fetched and the channel is marked busy. This busy status persists until
the HALT bit is set and the queued data has been transferred to the
peripheral. Waiting indefinitely after setting the HALT bit results in
system lockups. Timeout this operation, and print an error when this
happens.
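A sketch of the bounded wait (the timeout value and the error message are
illustrative):

    u32 val = readl(ch->base + PL080_CH_CONFIG);
    unsigned long timeout = jiffies + msecs_to_jiffies(1000);

    writel(val | PL080_CONFIG_HALT, ch->base + PL080_CH_CONFIG);

    /* busy persists until the queued data reaches the peripheral */
    while (pl08x_phy_channel_busy(ch)) {
        if (time_after(jiffies, timeout)) {
            pr_err("pl08x: channel %u failed to halt\n", ch->id);
            break;
        }
        cpu_relax();
    }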
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
If we try to pause a channel when terminating a transfer, we could end
up spinning for it to become inactive indefinitely, and can result in
an uninterruptible wait requiring a reset to recover from.
Terminating a transfer is supposed to take effect immediately, but may
result in data loss.
To make this clear, rename the function to pl08x_terminate_phy_chan().
Also, make sure it is always consistently called - with the spinlock
held and IRQs disabled, and ensure that the TC and ERR interrupt status
is always cleared.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Clean up the formatting of comments, and remove some which no longer make
sense.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[fix conflict with 96a608a4]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Prevent dma_set_runtime_config() being used to alter the configuration
supplied by the platform for memcpy channel configuration. No one
should be trying to change this configuration.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
There are cases in dma_set_runtime_config() where we fail to perform
the requested action - and we just issue a KERN_ERR message in that
case. We have the facility to return an error to the caller, so that
is what we should do.
When we encounter an error due to invalid parameters, we should not
modify driver state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The PL08x driver holds on to the channel lock with interrupts disabled
between the prepare and the subsequent submit API functions. This
means that the locking state when the prepare function returns is
dependent on whether it succeeds or not.
It did this to ensure that the physical channel wasn't released, and
because it used to add the descriptor onto the pending list at prepare time
rather than submit time.
Now that we have reorganized the code to remove those reasons, we can
safely release the spinlock at the end of preparation and reacquire
it in our submit function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Introduce 'phychan_hold' to hold on to physical DMA channels while we're
preparing a new descriptor for it. This will be incremented when we
allocate a physical channel and set the MUX registers during the
preparation of the TXD, and will only be decremented when the TXD is
submitted.
This prevents the physical channel being given up before the new TXD
is placed on the queue.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Don't place TXDs on the pending list when they're prepared - place
them on the list when they're ready to be submitted. Also, only
place memcpy requests in the wait state when they're submitted and
don't have a physical channel associated.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This 'desc_list' is actually a list of pending descriptors, so name
it after its function (pending list) rather than what it contains
(descriptors).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The DMA engine API requires DMA engine implementations to unmap buffers
passed into the non-slave DMA methods unless the relevant completion
flag is set. We aren't doing this, so implement this facility.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Like other DMA engine drivers do, store the passed flags into the
async_tx structure, so they can be checked when the operation
completes.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
We only need to store the dma address.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Don't alter any txd->srcbus or txd->dstbus values while building the
LLI list. This allows us to see the original dma_addr_t values passed
in via the prep_memcpy() method.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The number of bytes we want to fill into any LLI is the minimum of:
- number of bytes remaining in the transfer
- number of bytes we can transfer in a single LLI
- number of bytes we can transfer without overflowing the source boundary
- number of bytes we can transfer without overflowing the destination boundary
The minimum of the first two is already calculated (target_len). We
limit the boundary calculations to this number of bytes, which will
then give us the number of bytes we can place into this LLI.
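In code, the per-LLI byte count becomes roughly (boundary_bytes() standing in
for the boundary helper):

    /* min of: bytes remaining, max bytes per LLI (this is target_len),
     * and the bytes left before each bus crosses its boundary. */
    target_len = min(bd.remainder, max_bytes_per_lli);
    lli_len = min(target_len, boundary_bytes(bd.srcbus.addr));  /* hypothetical */
    lli_len = min(lli_len, boundary_bytes(bd.dstbus.addr));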
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
pl08x_pre_boundary() was unsafe with addresses towards the top of
memory space:
    boundary = ((addr >> PL08X_BOUNDARY_SHIFT) + 1)
            << PL08X_BOUNDARY_SHIFT;
This can overflow a 32-bit number, producing zero. When it does:
    if (boundary < addr + len)
        return boundary - addr;
    else
        return len;
results in (boundary - addr) returning a large positive value. Also, if
addr + len overflows, this calculation fails as well.
We can fix this trivially as the only thing we're actually interested
in is the value of the least significant PL08X_BOUNDARY_SHIFT bits:
    boundary_len = PL08X_BOUNDARY_SIZE -
        (addr & (PL08X_BOUNDARY_SIZE - 1));
gives us the number of bytes before 'addr' becomes a multiple of
PL08X_BOUNDARY_SIZE. We can then just take the min() of the two
calculated lengths.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
We don't need pl08x_fill_lli_for_desc() to return num_llis + 1 as
we know that's what it always does. We can just pass in num_llis
and use post-increment in the caller.
This makes the code slightly easier to read.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Calling the callback handler with spinlocks in the tasklet held leads
to deadlock when dmaengine functions are called:
BUG: spinlock lockup on CPU#0, sh/417, c1870a08
Backtrace:
...
[<c017b408>] (do_raw_spin_lock+0x0/0x154) from [<c02c4b98>] (_raw_spin_lock_irqsave+0x54/0x60)
[<c02c4b44>] (_raw_spin_lock_irqsave+0x0/0x60) from [<c01f5828>] (pl08x_prep_channel_resources+0x718/0x8b4)
[<c01f5110>] (pl08x_prep_channel_resources+0x0/0x8b4) from [<c01f5bb4>] (pl08x_prep_slave_sg+0x120/0x19c)
[<c01f5a94>] (pl08x_prep_slave_sg+0x0/0x19c) from [<c01be7a0>] (pl011_dma_tx_refill+0x164/0x224)
[<c01be63c>] (pl011_dma_tx_refill+0x0/0x224) from [<c01bf1c8>] (pl011_dma_tx_callback+0x7c/0xc4)
[<c01bf14c>] (pl011_dma_tx_callback+0x0/0xc4) from [<c01f4d34>] (pl08x_tasklet+0x60/0x368)
[<c01f4cd4>] (pl08x_tasklet+0x0/0x368) from [<c004d978>] (tasklet_action+0xa0/0x100)
Dan quoted the documentation:
> 2/ Completion callback routines cannot submit new operations. This
> results in recursion in the synchronous case and spin_locks being
> acquired twice in the asynchronous case.
but then followed up to say:
> I should clarify, this is the async_memcpy() api requirement which is
> not used outside of md/raid5. DMA drivers can and do allow new
> submissions from callbacks, and the ones that do so properly move the
> callback outside of the driver lock.
So let's fix it by moving the callback out of the spinlocked region.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>