Currently the handler ignores the channel 0 interrupt and thus doesn't ack
it properly. This is done in order to allow sdma_run_channel0() to poll
on the irq status bit, as this function may be called in atomic context,
but needs to know when the channel has finished.
This mostly works, as the polling happens under a spinlock, disabling IRQs
on the local CPU and leaving only a very small race window for a spurious
IRQ to happen if the handler is executed on another CPU in an SMP system.
Still, this is clearly suboptimal.
This behavior turns into a real problem on an RT system, where the spinlock
doesn't disable IRQs on the local CPU. Not acking the IRQ in the handler
in such a setup is very likely to drown the CPU in an IRQ storm, leaving
it unable to make any progress in the polling loop, leading to the IRQ
never being acked.
Fix this by properly acknowledging the channel 0 IRQ in the handler.
As the IRQ status bit can no longer be used to poll for the channel
completion, switch over to using the SDMA_H_STATSTOP register for this
purpose, where bit 0 is cleared by the hardware when the channel is done.
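A minimal sketch of the resulting polling loop, assuming SDMA_H_STATSTOP is
the register offset already defined by the driver and that bit 0 corresponds
to channel 0; the timeout handling is purely illustrative:

	#include <linux/bitops.h>
	#include <linux/delay.h>
	#include <linux/errno.h>
	#include <linux/io.h>

	/* Poll bit 0 of SDMA_H_STATSTOP until the hardware clears it, instead
	 * of waiting for the (now acked) channel 0 IRQ status bit. Never
	 * sleeps, so it stays safe in atomic context. */
	static int sdma_run_channel0_poll(void __iomem *regs)
	{
		unsigned int timeout = 500;

		while (readl(regs + SDMA_H_STATSTOP) & BIT(0)) {
			if (!timeout--)
				return -ETIMEDOUT;
			udelay(1);
		}

		return 0;
	}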
Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In case of error, the function platform_device_register_full()
returns ERR_PTR() and never returns NULL. The NULL test in the
return value check should be replaced with IS_ERR().
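A hedged illustration of the corrected check; 'pdevinfo' stands in for the
driver's struct platform_device_info, and IS_ERR()/PTR_ERR() come from
<linux/err.h>:

	struct platform_device *pdev;

	pdev = platform_device_register_full(&pdevinfo);
	if (IS_ERR(pdev))	/* ERR_PTR() on failure, never NULL */
		return PTR_ERR(pdev);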
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The new mv_xor_v2 driver supports the XOR engines found in the 64-bit
Marvell ARM SoCs of the Armada 7K and Armada 8K family. This XOR
engine is a completely new hardware block, entirely different from the
one used on previous Marvell Armada platforms, which use the existing
mv_xor driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The newly added zynqmp_dma driver produces a warning on 32-bit architectures
when dma_addr_t is 64-bit wide:
drivers/dma/xilinx/zynqmp_dma.c: In function 'zynqmp_dma_config_sg_ll_desc':
drivers/dma/xilinx/zynqmp_dma.c:321:9: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
^
drivers/dma/xilinx/zynqmp_dma.c:321:29: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
This changes the cast to the more appropriate uintptr_t.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In cyclic DMA mode the tail BD segment needs to be linked back to the
head BD segment so that the BDs are processed cyclically.
The current driver does this only for the TX channel; the same is needed
for the RX channel as well.
This patch fixes that.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a driver for the ZynqMP DMA engine used in the Zynq
UltraScale+ MPSoC. This DMA controller supports memory-to-memory
and I/O-to-I/O buffer transfers.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cookies corresponding to pending transfers have a residue value equal to
the full size of the corresponding descriptor. The driver miscomputes
that and uses the size of the active descriptor instead. Fix it.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
[geert: Also check desc.active list]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The running descriptor pointer is set to NULL upon freeing resources.
Otherwise, rcar_dmac_issue_pending() might not start new transfers.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The documentation states one should make sure both DE and TE are cleared
before starting a transaction. This patch extends the current warning to
look at both DE and TE.
Based on previous work from Muhammad Hamza Farooq.
Suggested-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The hardware might have completed the transfer while the interrupt
handler has not yet had a chance to run. If rcar_dmac_chan_get_residue(),
which reads HW registers, finds that there is no residue, return
DMA_COMPLETE.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
[Niklas: add explanation in commit message]
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current driver assumes that the child node channel name is either
"xlnx,axi-vdma-mm2s-channel" or "xlnx,axi-vdma-s2mm-channel",
which is confusing for users of the AXI DMA and CDMA cores.
This patch fixes the issue by using different channel
names for the AXI DMA and AXI CDMA child nodes.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Support for AXI DMA and CDMA has been added to the existing VDMA
driver, so the driver is no longer VDMA specific.
This patch renames the driver and DT binding doc to xilinx_dma
and updates the Kconfig description for all of the DMAs.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the AXI DMA multi-channel mode.
Multi-channel mode enables the DMA to connect to multiple masters
and slaves on the streaming side.
In multi-channel mode the AXI DMA supports 2D transfers.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bam_dma driver gained runtime PM support, but that causes build
warnings whenever CONFIG_PM is disabled:
drivers/dma/qcom/bam_dma.c:1324:12: error: 'bam_dma_runtime_resume' defined but not used [-Werror=unused-function]
static int bam_dma_runtime_resume(struct device *dev)
^~~~~~~~~~~~~~~~~~~~~~
drivers/dma/qcom/bam_dma.c:1315:12: error: 'bam_dma_runtime_suspend' defined but not used [-Werror=unused-function]
static int bam_dma_runtime_suspend(struct device *dev)
This removes the incomplete #ifdef guard and instead marks all
four PM functions as __maybe_unused, which avoids this kind of
warning.
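A minimal sketch of the pattern, with placeholder bodies; SET_RUNTIME_PM_OPS()
only wires the callbacks up when runtime PM is enabled, and __maybe_unused
keeps the compiler quiet when it is not:

	#include <linux/device.h>
	#include <linux/pm.h>

	static int __maybe_unused bam_dma_runtime_suspend(struct device *dev)
	{
		return 0;	/* placeholder: clock handling lives here */
	}

	static int __maybe_unused bam_dma_runtime_resume(struct device *dev)
	{
		return 0;	/* placeholder */
	}

	static const struct dev_pm_ops bam_dma_pm_ops = {
		SET_RUNTIME_PM_OPS(bam_dma_runtime_suspend,
				   bam_dma_runtime_resume, NULL)
	};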
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 7d2545599f ("dmaengine: qcom-bam-dma: Add pm_runtime support")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When building this driver on arm64, we get a harmless type
mismatch warning:
drivers/dma/bcm2835-dma.c: In function 'bcm2835_dma_fill_cb_chain_with_sg':
include/linux/kernel.h:743:17: warning: comparison of distinct pointer types lacks a cast
(void) (&_min1 == &_min2); \
^
drivers/dma/bcm2835-dma.c:409:21: note: in expansion of macro 'min'
cb->cb->length = min(len, max_len);
This changes the type of the 'len' variable to size_t, which
avoids the problem.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 388cc7a281 ("dmaengine: bcm2835: add slave_sg support to bcm2835-dma")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Return IRQ_NONE in the interrupt handler when it is called but no IRQs are
pending. This allows the system to recover in case of an interrupt storm
e.g. due to a wrong interrupt configuration setup.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Propagate errors returned by platform_get_irq() to the driver core. This
will enable proper probe deferring for the driver in case the IRQ provider
has not been registered yet.
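A hedged sketch of the probe-time pattern this enables:

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;	/* propagates -EPROBE_DEFER and other errors to the core */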
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add MODULE_DEVICE_TABLE() for the axi-dmac driver. This allows the driver
to be loaded on demand when built as a module.
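A minimal sketch of the addition; the compatible string is only an example and
may not match the driver's exact table:

	#include <linux/mod_devicetable.h>
	#include <linux/module.h>

	static const struct of_device_id axi_dmac_of_match_table[] = {
		{ .compatible = "adi,axi-dmac-1.00.a" },	/* example entry */
		{ }
	};
	MODULE_DEVICE_TABLE(of, axi_dmac_of_match_table);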
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add pm_runtime support for BAM DMA so that the clock is enabled only
while a transaction is in progress, which helps save power.
Signed-off-by: Pramod Gurav <pramod.gurav@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 71f7e6cc55 ('dmaengine: tegra20-apb-dma: Only calculate residue
if txstate exists') changed the tegra_dma_tx_status() function to only
calculate the residue if there is a valid 'txstate' pointer for storing
the residue. Although this makes sense, it changed the behaviour of
tegra_dma_tx_status() such that if the 'txstate' pointer is not valid,
we return whatever state is reported by dma_cookie_status() and no
longer look up the DMA descriptor and return its state.
Please note that dma_cookie_status() will either return DMA_COMPLETE or
DMA_IN_PROGRESS. However, if dma_cookie_status() returns DMA_IN_PROGRESS
the actual status could be DMA_ERROR which will only be seen from
checking the descriptor status. Therefore, even if 'txstate' is not
valid, still check to see if there is a valid descriptor for the cookie
in question and if so return the descriptor state. Finally, ensure the
residue is still not calculated if the 'txstate' is not valid.
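A condensed sketch of the resulting flow; find_desc_for_cookie() and
tegra_dma_residue() are hypothetical helpers standing in for the driver's
actual lookup and residue code, while dma_cookie_status() and
dma_set_residue() are the dmaengine core helpers:

	static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
						   dma_cookie_t cookie,
						   struct dma_tx_state *txstate)
	{
		struct tegra_dma_desc *dma_desc;
		enum dma_status ret;

		ret = dma_cookie_status(dc, cookie, txstate);
		if (ret == DMA_COMPLETE)
			return ret;

		/* Look the cookie up even when txstate is NULL, so a DMA_ERROR
		 * recorded in the descriptor is not hidden behind DMA_IN_PROGRESS. */
		dma_desc = find_desc_for_cookie(dc, cookie);		/* hypothetical */
		if (!dma_desc)
			return ret;

		if (txstate)	/* compute the residue only when it can be stored */
			dma_set_residue(txstate, tegra_dma_residue(dma_desc));	/* hypothetical */

		return dma_desc->dma_status;
	}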
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The calculation of the DMA residue for the Tegra APB DMA is duplicated
in two places in the tegra_dma_tx_status() function. Remove this
duplicated code by moving the calculation to the end of the function and
performing it only if a valid descriptor was found.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Correct the grammar in the debug message when no descriptor is found.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
mbr_ds is an integer, don't use %pad to print it.
Fixes: commit 268914f4e7 ("dmaengine: at_xdmac: use %pad format string for dma_addr_t")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The omap_dmaxbar_init() function is not exported or declared outside
the driver, so make it static to fix the following sparse warning:
drivers/dma/ti-dma-crossbar.c:455:5: warning: symbol 'omap_dmaxbar_init' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To allow other code to safely read the DMA Channel Status Register (where
the register attribute for the Channel Error, Descriptor Time Out and
Descriptor Done fields is read-clear), export hsu_dma_get_status().
hsu_dma_irq() is renamed to hsu_dma_do_irq() and requires the Status
Register value to be passed in.
Signed-off-by: Chuah, Kim Tatt <kim.tatt.chuah@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If kzalloc() fails it will issue its own error message, including
a dump_stack(), so remove the site-specific error messages.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point calculating the residue if there is
no txstate to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in calculating the residue if state does not
exist to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point calculating the residue if there is
no txstate to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Doing so saves a few lines of code in the driver.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There is no point in calculating the residue if there is no
txstate to store the value.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
It is useful to print the error code as part of the error
message.
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently fsl-edma doesn't clk_disable_unprepare()
its clocks on error paths. This patch adds a
fsl_disable_clocks() helper for this, and also makes sure
that only the clocks which were actually enabled are disabled
when an error is encountered while enabling them.
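A hedged sketch of the helper and its use on the error path; the muxclk array
and DMAMUX_NR are stand-ins for the driver's actual clock bookkeeping, and the
clk_* calls come from <linux/clk.h>:

	static void fsl_disable_clocks(struct fsl_edma_engine *fsl_edma, int nr_clocks)
	{
		int i;

		/* Only undo the clocks that were actually prepared and enabled. */
		for (i = 0; i < nr_clocks; i++)
			clk_disable_unprepare(fsl_edma->muxclk[i]);
	}

	/* probe error path: */
	for (i = 0; i < DMAMUX_NR; i++) {
		ret = clk_prepare_enable(fsl_edma->muxclk[i]);
		if (ret) {
			fsl_disable_clocks(fsl_edma, i);	/* clocks 0..i-1 only */
			return ret;
		}
	}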
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The AXI CDMA is a soft IP, which can be programmed to support
32-bit addressing or greater than 32-bit addressing.
When the AXI CDMA IP is configured for a 32-bit address space
in simple DMA mode, the source/destination buffer address is
specified by a single register (18h for the source buffer address and
20h for the destination buffer address). When configured in SG mode,
the current descriptor and tail descriptor are specified by a
single register (08h for curdesc, 10h for tail desc).
When the AXI CDMA core is configured for an address space greater
than 32 bits, each buffer address or descriptor address is specified by
a combination of two registers:
the first register specifies the LSB 32 bits of the address,
while the next register specifies the MSB 32 bits.
For example, 08h specifies the LSB 32 bits while 0Ch
specifies the MSB 32 bits of the first start address.
So we need to program two registers at a time.
This patch adds the 64 bit addressing support to the axicdma
IP in the driver.
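A minimal sketch of the register-pair write, assuming 'base' points at the
channel register block; lower_32_bits()/upper_32_bits() are the standard
kernel helpers:

	#include <linux/io.h>
	#include <linux/kernel.h>

	/* For >32-bit configurations, write the LSB register first and the MSB
	 * register at the next offset (e.g. 08h/0Ch for the current descriptor). */
	static inline void xilinx_write64(void __iomem *base, u32 reg, u64 addr)
	{
		writel(lower_32_bits(addr), base + reg);
		writel(upper_32_bits(addr), base + reg + 4);
	}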
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The AXI DMA is a soft IP, which can be programmed to support
32-bit addressing or greater than 32-bit addressing.
When the AXI DMA IP is configured for a 32-bit address space
in simple DMA mode, the buffer address is specified by a single register
(18h for the MM2S channel and 48h for the S2MM channel). When configured
in SG mode, the current descriptor and tail descriptor are specified by a
single register (08h for curdesc, 10h for tail desc for the MM2S channel,
and 38h for curdesc, 40h for tail desc for S2MM).
When the AXI DMA core is configured for an address space greater
than 32 bits, each buffer address or descriptor address is specified by
a combination of two registers:
the first register specifies the LSB 32 bits of the address,
while the next register specifies the MSB 32 bits.
For example, 48h specifies the LSB 32 bits while 4Ch
specifies the MSB 32 bits of the first start address.
So we need to program two registers at a time.
This patch adds the 64 bit addressing support for the axidma
IP in the driver.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
There are some places where whitespace is used in very funky ways. Fix
the most serious ones to make the code easier on the eye.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The newly added xilinx_dma_prep_dma_cyclic() function sometimes causes
a gcc warning about the 'segment' variable possibly being used
uninitialized, in case we never run into the inner loop of the function:
dma/xilinx/xilinx_vdma.c: In function 'xilinx_dma_prep_dma_cyclic':
dma/xilinx/xilinx_vdma.c:1808:23: error: 'segment' may be used uninitialized in this function [-Werror=maybe-uninitialized]
segment->hw.control |= XILINX_DMA_BD_SOP;
This can only happen if the period len is zero (which would cause other
problems earlier), or if the buffer is shorter than a period. Neither
of them should ever happen, but by adding an explicit check for these two
cases, we can abort in a more controlled way, and the compiler is
able to see that we never use uninitialized data.
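An illustrative form of the early check (the exact placement and error
handling in the driver may differ):

	if (!period_len || buf_len < period_len)
		return NULL;	/* invalid cyclic setup; also guarantees the inner loop runs */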
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the compilation warning below.
drivers/dma/xilinx/xilinx_vdma.c: In function 'xilinx_dma_prep_dma_cyclic':
drivers/dma/xilinx/xilinx_vdma.c:1808:23: warning: 'segment' may be used
uninitialized in this function [-Wmaybe-uninitialized]
segment->hw.control |= XILINX_DMA_BD_SOP;
The start of packet (SOP) should be set on the first segment in the
descriptor chain, not on the last segment of the chain.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bcm2835_dma_prep_dma_memcpy() function is not exported
outside the driver, so make it static to avoid the following
warning:
drivers/dma/bcm2835-dma.c:616:32: warning: symbol 'bcm2835_dma_prep_dma_memcpy' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The at_xdmac_init_used_desc() and at_xdmac_prep_dma_memset()
functions are not exported outside the driver, so make them
static to avoid the following warnings:
drivers/dma/at_xdmac.c:459:6: warning: symbol 'at_xdmac_init_used_desc' was not declared. Should it be static?
drivers/dma/at_xdmac.c:1205:32: warning: symbol 'at_xdmac_prep_dma_memset' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The sirfsoc_dmadata structs are not used outside the driver, so
remove build warnings by making them static. Fixes:
drivers/dma/sirf-dma.c:1129:24: warning: symbol 'sirfsoc_dmadata_a6' was not declared. Should it be static?
drivers/dma/sirf-dma.c:1134:24: warning: symbol 'sirfsoc_dmadata_a7v1' was not declared. Should it be static?
drivers/dma/sirf-dma.c:1139:24: warning: symbol 'sirfsoc_dmadata_a7v2' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix warning due to d40_width_to_bits() not being used outside
this file. Fixes:
drivers/dma/ste_dma40_ll.c:13:4: warning: symbol 'd40_width_to_bits' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver limits the physical number of paRAM slots to be used by channels.
If the transfer needs more slots (more SGs), the transfer is broken up
into smaller chunks. When a chunk is finished, the driver rewrites the
physical slots and continues the transfer. This setup takes some time,
and we might miss DMA events. If the intermediate set's completion uses
early completion (the interrupt fires when the last slot is issued to
the TPTC, not when the transfer is finished by the TPTC), we have a bit
more time to update the paRAM slots and are less likely to miss events.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Upon booting, I occasionally spotted some BUGs triggered by the internal
DMA test routine executed upon driver probing. This was detected by
SLUB_DEBUG ("Freechain corrupt" or "Redzone overwritten"). Tracking
this down located a problem: 0 is passed as the offset to dma_map_page(),
but kmalloc(), especially when used with SLUB_DEBUG, may return a
non page-aligned address.
This patch fixes this issue by passing the correct offset in
dma_map_page().
Tested on a custom Armada XP board.
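A hedged illustration of the fix in the self-test path; the buffer and device
variables are placeholders, and offset_in_page()/virt_to_page() come from
<linux/mm.h>:

	void *src = kmalloc(PAGE_SIZE, GFP_KERNEL);
	dma_addr_t src_dma;

	/* kmalloc() may return a non page-aligned address (notably with
	 * SLUB_DEBUG), so pass the real page offset instead of 0. */
	src_dma = dma_map_page(dma_dev, virt_to_page(src),
			       offset_in_page(src), PAGE_SIZE, DMA_TO_DEVICE);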
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Remove the space before the "err_free_dma:" label in mv_xor_channel_add().
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_pool_zalloc() combines dma_pool_alloc() and memset(0);
this patch updates the driver to use dma_pool_zalloc().
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for AXI DMA cyclic dma mode.
In cyclic mode, DMA fetches and processes the same
BDs without interruption. The DMA continues to fetch and process
until it is stopped or reset.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Due to the way the CUBC register is updated, a double flush is needed to
compute an accurate residue. The first flush aims to get the data out of
the DMA FIFO, and the second one ensures that we won't report data which
is not yet in memory.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
An unexpected value of CUBC can lead to a corrupted residue. A more
complex sequence is needed to detect an inaccurate value for NCA or CUBC.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Having descriptors aligned on 64 bits allows updating CNDA and CUBC in an
atomic way.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For each descriptor, in addition to the memory used by the descriptors
structure itself, the driver allocates a list of chunks as well as a
buffer for hardware descriptors. Descriptors themselves are preallocated,
and allocation of the chunks and buffer is performed the first time the
descriptor is used. The memory isn't freed when the transfer is completed,
as the chunks and buffer will be needed again when the descriptor is
reused internally, so the driver keeps the memory around.
If only a few descriptors are used concurrently, the current
list_add_tail() implementation will result in all preallocated descriptors
being used before going back to the first one, and will thus allocate
chunks and a buffer for all preallocated descriptors. Using list_add()
will put the complete descriptor at the head of the list of available
descriptors, so the next transfer will be more likely to reuse a
descriptor that already has associated memory instead of one that has
never been used before.
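A hedged sketch of the one-line change this describes (lock and list member
names are illustrative):

	spin_lock_irqsave(&chan->lock, flags);
	/* was list_add_tail(): putting the descriptor at the head reuses
	 * already-allocated chunks and hardware descriptor memory first */
	list_add(&desc->node, &chan->desc.free);
	spin_unlock_irqrestore(&chan->lock, flags);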
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Most users of IS_ERR_VALUE() in the kernel are wrong, as they
pass an 'int' into a function that takes an 'unsigned long'
argument. This happens to work because the type is sign-extended
on 64-bit architectures before it gets converted into an
unsigned type.
However, anything that passes an 'unsigned short' or 'unsigned int'
argument into IS_ERR_VALUE() is guaranteed to be broken, as are
8-bit integers and types that are wider than 'unsigned long'.
Andrzej Hajda has already fixed a lot of the worst abusers that
were causing actual bugs, but it would be nice to prevent any
users that are not passing 'unsigned long' arguments.
This patch changes all users of IS_ERR_VALUE() that I could find
on 32-bit ARM randconfig builds and x86 allmodconfig. For the
moment, this doesn't change the definition of IS_ERR_VALUE()
because there are probably still architecture specific users
elsewhere.
Almost all the warnings I got are for files that are better off
using 'if (err)' or 'if (err < 0)'.
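A hedged illustration of the typical conversion for such a caller:

	int err = do_something();	/* hypothetical call returning 0 or a negative errno */

	/* before: only works because the 'int' is sign-extended */
	if (IS_ERR_VALUE(err))
		return err;

	/* after: a plain integer error code only needs a sign test */
	if (err < 0)
		return err;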
The only legitimate user I could find that we get a warning for
is the (32-bit only) freescale fman driver, so I did not remove
the IS_ERR_VALUE() there but changed the type to 'unsigned long'.
For 9pfs, I just worked around one user whose calling conventions
are so obscure that I did not dare change the behavior.
I was using this definition for testing:
#define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))
which ends up making all 16-bit or wider types work correctly with
the most plausible interpretation of what IS_ERR_VALUE() was supposed
to return according to its users, but also causes a compile-time
warning for any users that do not pass an 'unsigned long' argument.
I suggested this approach earlier this year, but back then we ended
up deciding to just fix the users that are obviously broken. After
the initial warning that caused me to get involved in the discussion
(fs/gfs2/dir.c) showed up again in the mainline kernel, Linus
asked me to send the whole thing again.
[ Updated the 9p parts as per Al Viro - Linus ]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.org/lkml/2016/1/7/363
Link: https://lkml.org/lkml/2016/5/27/486
Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> # For nvmem part
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This time round the update brings in following changes:
- New tegra driver for ADMA device
- Support for the Xilinx AXI Direct Memory Access Engine and the Xilinx AXI
Central Direct Memory Access Engine, plus a few updates to this driver.
- New cyclic capability for sun6i and a few updates.
- Slave-sg support in bcm2835.
- Updates to many drivers like designware, hsu, mv_xor, pxa, edma,
qcom_hidma & bam.
Merge tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time round the update brings in following changes:
- new tegra driver for ADMA device
- support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI
Central Direct Memory Access Engine and few updates to this driver
- new cyclic capability to sun6i and few updates
- slave-sg support in bcm2835
- updates to many drivers like designware, hsu, mv_xor, pxa, edma,
qcom_hidma & bam"
* tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (84 commits)
dmaengine: ioatdma: disable relaxed ordering for ioatdma
dmaengine: of_dma: approximate an average distribution
dmaengine: core: Use IS_ENABLED() instead of checking for built-in or module
dmaengine: edma: Re-evaluate errors when ccerr is triggered w/o error event
dmaengine: qcom_hidma: add support for object hierarchy
dmaengine: qcom_hidma: add debugfs hooks
dmaengine: qcom_hidma: implement lower level hardware interface
dmaengine: vdma: Add clock support
Documentation: DT: vdma: Add clock support for dmas
dmaengine: vdma: Add config structure to differentiate dmas
MAINTAINERS: Update Tegra DMA maintainers
dmaengine: tegra-adma: Add support for Tegra210 ADMA
Documentation: DT: Add binding documentation for NVIDIA ADMA
dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
Documentation: DT: vdma: update binding doc for AXI CDMA
dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
Documentation: DT: vdma: update binding doc for AXI DMA
dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
dmaengine: slave means at least one of DMA_SLAVE, DMA_CYCLIC
dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC
...
ioatdma is in snoop mode by default. According to the spec, relaxed
ordering does not do anything in snoop mode. However, it causes hangs or
significant performance degradation when tested with NTB. Disable it in
the driver, since some BIOSes do not configure it correctly.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the following DT description would result in dmac0 always
being tried first and dmac1 second if dmac0 was unavailable. This
results in heavier use of dmac0 than of dmac1. This patch adds an
approximate average distribution over the two nodes, lessening the load
on any one of them.
i2c6: i2c@e60b0000 {
	...
	dmas = <&dmac0 0x77>, <&dmac0 0x78>,
	       <&dmac1 0x77>, <&dmac1 0x78>;
	dma-names = "tx", "rx", "tx", "rx";
	...
};
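A heavily hedged sketch of one way to approximate an average distribution:
start the search at a pseudo-random index among the 'count' matching
"dma-names" entries instead of always at the first one. of_dma_try_request()
is a hypothetical helper standing in for the actual per-entry request logic:

	#include <linux/random.h>

	start = prandom_u32() % count;
	for (i = 0; i < count; i++) {
		index = (start + i) % count;
		chan = of_dma_try_request(np, name, index);	/* hypothetical */
		if (!IS_ERR(chan))
			return chan;
	}
	return ERR_PTR(-ENODEV);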
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The IS_ENABLED() macro checks if a Kconfig symbol has been enabled either
built-in or as a module; use that macro instead of open coding the same check.
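A small illustration, using a hypothetical CONFIG_FOO symbol:

	/* open-coded check being replaced */
	#if defined(CONFIG_FOO) || defined(CONFIG_FOO_MODULE)
	...
	#endif

	/* equivalent, via IS_ENABLED() from <linux/kconfig.h> */
	#if IS_ENABLED(CONFIG_FOO)
	...
	#endif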
Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the ccerr handler is called but the error registers indicate no error
events, we need to command the eDMA to re-evaluate the errors. Otherwise we
can receive a flood of error interrupts.
Reported-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In order to create a relationship model between the channels and the
management object, we are adding support for object hierarchy to the
drivers. This patch simplifies the userspace application development.
We will not have to traverse different firmware paths based on device
tree or ACPI based kernels.
No matter what flavor of kernel is used, objects will be represented as
platform devices.
The new layout is as follows:
hidmam_10: hidma-mgmt@0x5A000000 {
	compatible = "qcom,hidma-mgmt-1.0";
	...
	hidma_10: hidma@0x5a010000 {
		compatible = "qcom,hidma-1.0";
		...
	}
}
The hidma_mgmt_init detects each instance of the hidma-mgmt-1.0 objects
in device tree and calls into the channel driver to create platform devices
for each child of the management object.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add debugfs hooks for debugging the execution behavior of the DMA
channel. The debugfs hooks get initialized by the probe function and
uninitialized by the remove function.
A stats file is created in debugfs. The stats file will show the
information about each HIDMA channel as well as each asynchronous job
queued and completed at a given time.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch implements the hardware hooks for the HIDMA channel driver.
The main functions of interest are:
- hidma_ll_init
- hidma_ll_request
- hidma_ll_queue_request
- hidma_ll_hw_start
The OS layer calls the hidma_ll_init function during probe to set up the
hardware. At this point, the number of supported descriptors is also
given. On each request, a descriptor is allocated from the free pool and
filled in with the transfer parameters. Multiple requests can be queued
into the hardware via the OS interface. When the client is ready for the
requests to be executed, the start method is called.
Completions are delivered through callbacks from a tasklet.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add basic clock support for the AXI DMAs.
The clocks are requested at probe and released at remove.
Reviewed-by: Shubhrajyoti Datta <shubhraj@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds a config structure to the driver to differentiate the
AXI DMAs and to make it easier to add more features (clock support etc.)
to these DMAs.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for the Tegra210 Audio DMA controller that is used for
transferring data between system memory and the Audio sub-system.
The driver only supports cyclic transfers because this is being solely
used for audio.
This driver is based upon the work by Dara Ramesh <dramesh@nvidia.com>.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the AXI Central Direct Memory Access
(AXI CDMA) core to the existing vdma driver. The AXI CDMA is a
soft Xilinx IP core that provides high-bandwidth
Direct Memory Access (DMA) between a memory-mapped
source address and a memory-mapped destination address.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the AXI Direct Memory Access (AXI DMA)
core in the existing vdma driver. The AXI DMA core is a
soft Xilinx IP core that provides high-bandwidth
direct memory access between memory and AXI4-Stream
type target peripherals.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch renames the xilinx_vdma_ prefix to xilinx_dma
for the APIs and masks that will be shared between the three DMA
IP cores.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When checking for capabilities, recognize slave support when either the
DMA_SLAVE or the DMA_CYCLIC bit is set. If we don't do that, the user
can't get properly working DMA support from engines that set only one of
the mentioned bits.
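A hedged sketch of the capability test, using the standard dma_has_cap()
helper and cap_mask field from <linux/dmaengine.h>:

	/* An engine that provides only cyclic transfers is still usable as a
	 * slave engine for capability purposes. */
	if (!(dma_has_cap(DMA_SLAVE, device->cap_mask) ||
	      dma_has_cap(DMA_CYCLIC, device->cap_mask)))
		return -ENXIO;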
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Armada 3700 SoC uses the mv_xor driver but no longer selects the
PLAT_ORION symbol. This commit extends the dependency of the mv_xor
driver to the more modern SoCs only compatible with ARCH_MVEBU, which
allows using it with the Armada 3700 SoC.
At the same time it also adds a COMPILE_TEST dependency, allowing wider
test coverage.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The Armada 3700 SoC comprises a single XOR engine compliant with the ones
used in older Marvell SoCs like Armada XP or 38x. The only thing that
needs modification is the Mbus configuration, which has to be done on two
levels: globally and in the device. The former is inherited from the
bootloader; the latter can be opened in a default way, leaving
arbitration to the bus controller. Hence a filled mbus_dram_target_info
structure is not needed.
The patch "dmaengine: mv_xor: optimize performance by using a subset
of the XOR channels" introduced a limitation on the use of XOR engines
and channels versus the number of available CPUs. Those constraints do
not, however, fit the Armada 3700 architecture with its two possible CPUs
and a single dual-channel engine. Hence this commit adds an adjustment
for setting the maximum available channels.
This patch enables XOR access to DRAM by opening a default window to the
4GB space with a specific attribute.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the main difference between the legacy XOR engine and the newer
one is the way the engine modes are set up (either in the descriptor or
through the controller registers). In order to take the new generation of
the XOR engine for the ARM64 SoCs into account, we need to identify the
engines by type; the engine setup is then selected depending on that
type.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Fix two warnings which appear when building for a 64-bit target:
drivers/dma/mv_xor.c: In function ‘mv_xor_prep_dma_xor’:
drivers/dma/mv_xor.c:480:3: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 6 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
"%s src_cnt: %d len: %u dest %pad flags: %ld\n",
^
drivers/dma/mv_xor.c: In function ‘mv_xor_probe’:
drivers/dma/mv_xor.c:1223:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
op_in_desc = (int)of_id->data;
^
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression d,e;
statement S;
@@
d =
- dma_pool_alloc
+ dma_pool_zalloc
(...);
if (!d) S
- memset(d, 0, sizeof(*d));
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The transfer direction is now checked in set_config.
There is no need to check it twice.
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Some DMA clients, such as audio, don't set the maxburst size and bus width
on the memory side when starting DMA transfers.
This patch prevents such transfers from being aborted by providing system
default values for the missing parameters.
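A hedged sketch of filling in memory-side defaults; the particular values are
illustrative, not the driver's actual choices:

	/* MEM_TO_DEV case: the source is the memory side. */
	if (!sconfig->src_maxburst)
		sconfig->src_maxburst = 8;			/* illustrative default */
	if (sconfig->src_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
		sconfig->src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;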
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We pass struct dw_dma_chip to dw_dma_probe() anyway, thus we may use it to
pass the platform data as well.
While here, constify the source of the platform data.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>