The newer and better JZ4780 driver is now used to provide DMA
functionality on the JZ4740.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Tested-by: Artur Rojek <contact@artur-rojek.eu>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
When a DMA client driver does not set the DMA_PREP_INTERRUPT flag, either
because it does not want to use interrupts for DMA completion or because it
cannot rely on DMA interrupts due to executing the memcpy while interrupts
are disabled, it will poll the status of the transfer.
Since we cannot tell from any EDMA register that the transfer is completed
(we can only know that the paRAM set has been sent to the TPTC for
processing), we need to check the residue of the transfer: if it is 0, then
the transfer is completed.
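A minimal driver-side sketch of that idea, not the actual edma code: the residue calculation itself is only stubbed out here, and dma_cookie_status()/dma_set_residue() are the dmaengine core helpers from drivers/dma/dmaengine.h.

static u32 sketch_calc_residue(struct dma_chan *chan, dma_cookie_t cookie)
{
	/* Stand-in for the driver-specific residue calculation. */
	return 0;
}

static enum dma_status sketch_tx_status(struct dma_chan *chan,
					dma_cookie_t cookie,
					struct dma_tx_state *txstate)
{
	enum dma_status ret = dma_cookie_status(chan, cookie, txstate);
	u32 residue;

	if (ret == DMA_COMPLETE || !txstate)
		return ret;

	residue = sketch_calc_residue(chan, cookie);
	dma_set_residue(txstate, residue);

	/* No register reports completion directly, so a residue of 0 is
	 * taken as "transfer done" when the client polls. */
	if (!residue)
		ret = DMA_COMPLETE;

	return ret;
}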
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20190716082655.1620-4-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For memcpy we never stored the start address of the transfer for the pset,
which rendered the memcpy residue calculation completely broken.
In the edma_residue() function we also need to do some corrections to the
calculations:
Instead of waiting for all EDMA channels to be idle (in a busy system it can
take a few iterations to hit a point when all queues are idle), wait for the
event pending on the given channel (SH_ER for hardware-synchronized channels,
SH_ESR for manually triggered channels).
If the position returned by EDMA is 0, it implies that the last paRAM set
has been consumed and we are at the closing dummy set, thus we can conclude
that the transfer is completed and we can return 0 as residue.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
[vkoul: fixed typo in commit log]
Link: https://lore.kernel.org/r/20190716082655.1620-3-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Introduce defines for getting the array index and the bit number within the
64-bit array register pairs.
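A hedged sketch of what such defines look like (the names here are illustrative, not the driver's actual macros): a channel number is split into the index of the 32-bit word within the register pair and the bit position inside that word.

/* Illustrative only: which 32-bit array register a channel lives in, and
 * which bit within that register corresponds to the channel. */
#define SKETCH_REG_ARRAY_INDEX(ch)	((ch) >> 5)	/* ch / 32 */
#define SKETCH_REG_ARRAY_BIT(ch)	((ch) & 0x1f)	/* ch % 32 */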
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20190716082655.1620-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a DMA client driver does not set the DMA_PREP_INTERRUPT flag, either
because it does not want to use interrupts for DMA completion or because it
cannot rely on DMA interrupts due to executing the memcpy while interrupts
are disabled, it will poll the status of the transfer.
If interrupts are enabled, the cookie will be marked completed in the
interrupt handler, so only check for HW completion when polling is really
needed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20190716082459.1222-3-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The tx_status is most likely going to be asked for the current transfer, so
check that first, then fall back to looking up not-yet-started transfers.
In this way the code is a bit more readable and in most cases we avoid
running vchan_find_desc() all the time.
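A minimal sketch of that lookup order, assuming a virt-dma based channel: struct sketch_chan and the two *_residue()/*_len() helpers are hypothetical stand-ins for the driver's own types and calculations, while vchan_find_desc() is the real helper from drivers/dma/virt-dma.h.

struct sketch_chan {
	struct virt_dma_chan vc;	/* embeds the virt-dma channel */
	struct virt_dma_desc *running;	/* descriptor currently on the HW */
};

static u32 sketch_residue(struct sketch_chan *c, dma_cookie_t cookie)
{
	struct virt_dma_desc *vd;
	unsigned long flags;
	u32 residue = 0;

	spin_lock_irqsave(&c->vc.lock, flags);
	if (c->running && c->running->tx.cookie == cookie) {
		/* Most common case: the cookie belongs to the running
		 * transfer, no list walk needed. */
		residue = running_residue(c);		/* hypothetical */
	} else {
		vd = vchan_find_desc(&c->vc, cookie);
		if (vd)
			residue = desc_total_len(vd);	/* hypothetical */
	}
	spin_unlock_irqrestore(&c->vc.lock, flags);

	return residue;
}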
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20190716082459.1222-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current driver works perfectly fine on every generation of the
JZ47xx SoCs, except on the JZ4740.
There, when hardware descriptors are chained together (with the LINK
bit set), the next descriptor isn't automatically fetched as it should be;
instead, an interrupt is raised, even if the TIE (Transfer Interrupt
Enable) bit is cleared. When that happens, the DMA transfer seems to be
stopped (it doesn't chain), and it's uncertain how many bytes have
actually been transferred.
Until somebody smarter than me can figure out how to make chained
descriptors work on the JZ4740, we now disable chained descriptors on
that particular SoC.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20190714215504.10877-1-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If CONFIG_PM is not set, the build warns:
drivers/dma/tegra210-adma.c:747:12: warning: tegra_adma_runtime_resume defined but not used [-Wunused-function]
static int tegra_adma_runtime_resume(struct device *dev)
drivers/dma/tegra210-adma.c:715:12: warning: tegra_adma_runtime_suspend defined but not used [-Wunused-function]
static int tegra_adma_runtime_suspend(struct device *dev)
Mark the two functions as __maybe_unused.
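An illustration of the attribute placement (the function bodies are elided; only the declarations matter for silencing -Wunused-function):

/* __maybe_unused tells the compiler the definition may legitimately be
 * unreferenced, e.g. when CONFIG_PM is disabled and the dev_pm_ops that
 * would reference it compile away. */
static int __maybe_unused tegra_adma_runtime_suspend(struct device *dev)
{
	/* ... body unchanged ... */
	return 0;
}

static int __maybe_unused tegra_adma_runtime_resume(struct device *dev)
{
	/* ... body unchanged ... */
	return 0;
}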
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Fixes: 3145d73e69 ("dmaengine: tegra210-adma: remove PM_CLK dependency")
Fixes: f46b195799 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/20190709083258.57112-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
clang-9 points out that there are two variables that, depending on the
configuration, may only be used in an ARRAY_SIZE() expression but are
otherwise never referenced:
drivers/dma/ste_dma40.c:145:12: error: variable 'd40_backup_regs' is not needed and will not be emitted [-Werror,-Wunneeded-internal-declaration]
static u32 d40_backup_regs[] = {
^
drivers/dma/ste_dma40.c:214:12: error: variable 'd40_backup_regs_chan' is not needed and will not be emitted [-Werror,-Wunneeded-internal-declaration]
static u32 d40_backup_regs_chan[] = {
Mark these __maybe_unused to shut up the warning.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20190712091357.744515-1-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When building with 'make C=1', sparse reports an endianness bug:
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:60:30: warning: cast removes address space of expression
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: warning: incorrect type in argument 1 (different address spaces)
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: expected void const volatile [noderef] <asn:2>*addr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: got void *[assigned] ptr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: warning: incorrect type in argument 1 (different address spaces)
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: expected void const volatile [noderef] <asn:2>*addr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: got void *[assigned] ptr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: warning: incorrect type in argument 1 (different address spaces)
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: expected void const volatile [noderef] <asn:2>*addr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: got void *[assigned] ptr
The current code is clearly wrong, as it passes an endian-swapped word
into a register function where it gets swapped again. Just pass the variables
directly into lower_32_bits()/upper_32_bits().
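For reference, a small hedged illustration of those helpers (the value is arbitrary): lower_32_bits()/upper_32_bits() take the 64-bit value as-is, so no byte swapping should be done beforehand.

#include <linux/kernel.h>

u64 llp = 0x1122334455667788ULL;	/* arbitrary example value */
u32 lo = lower_32_bits(llp);		/* 0x55667788 */
u32 hi = upper_32_bits(llp);		/* 0x11223344 */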
Fixes: 7e4b8a4fbe ("dmaengine: Add Synopsys eDMA IP version 0 support")
Link: https://lore.kernel.org/lkml/20190617131820.2470686-1-arnd@arndb.de/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/20190722124457.1093886-3-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The new driver mixes up dma_addr_t and __iomem pointers, which results
in warnings on some 32-bit architectures, like:
drivers/dma/dw-edma/dw-edma-v0-core.c: In function '__dw_regs':
drivers/dma/dw-edma/dw-edma-v0-core.c:28:9: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
return (struct dw_edma_v0_regs __iomem *)dw->rg_region.vaddr;
Make it use __iomem pointers consistently here, and avoid using dma_addr_t
for __iomem tokens altogether.
A small complication here is the debugfs code, which passes an __iomem
token as the private data for debugfs files, requiring the use of
extra __force.
Fixes: 7e4b8a4fbe ("dmaengine: Add Synopsys eDMA IP version 0 support")
Link: https://lore.kernel.org/lkml/20190617131918.2518727-1-arnd@arndb.de/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20190722124457.1093886-2-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Putting large constant data on the stack causes unnecessary overhead
and stack usage:
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:285:6: error: stack frame size of 1376 bytes in function 'dw_edma_v0_debugfs_on' [-Werror,-Wframe-larger-than=]
Mark the variable 'static const' in order for the compiler to move it
into the .rodata section where it does no such harm.
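A hedged illustration of the pattern (the struct and entries are made up, not the dw-edma table): 'static const' data lives in .rodata, while a plain local array of the same size would be rebuilt on the stack on every call.

#include <linux/kernel.h>
#include <linux/seq_file.h>

static void sketch_dump_regs(struct seq_file *s)
{
	static const struct {
		const char *name;
		u32 offset;
	} regs[] = {
		{ "ctrl",   0x000 },
		{ "status", 0x004 },
		/* ... */
	};
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(regs); i++)
		seq_printf(s, "%s: 0x%03x\n", regs[i].name, regs[i].offset);
}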
Fixes: 305aebeff8 ("dmaengine: Add Synopsys eDMA IP version 0 debugfs support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/20190722124457.1093886-1-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
During enabling of the RPi 4, we found out that the driver doesn't provide
a helpful error message in case setting the DMA mask fails. So add one.
Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Link: https://lore.kernel.org/r/1563297318-4900-1-git-send-email-wahrenst@gmx.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With the audio driver no longer referring to this function, it
can be made private to the dmaengine driver itself, and the
header file removed.
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20190722081705.2084961-2-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- Add support in dmaengine core to do device node checks for DT devices
and update bunch of drivers to use that and remove open coding from
drivers
- New driver/driver support for new hardware, namely:
- MediaTek UART APDMA
- Freescale i.mx7ulp edma2
- Synopsys eDMA IP core version 0
- Allwinner H6 DMA
- Updates to axi-dma and support for interleaved cyclic transfers
- Greg's debugfs return value check removals on drivers
- Updates to stm32-dma, hsu, dw, pl330, tegra drivers
* tag 'dmaengine-5.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (68 commits)
dmaengine: Revert "dmaengine: fsl-edma: add i.mx7ulp edma2 version support"
dmaengine: at_xdmac: check for non-empty xfers_list before invoking callback
Documentation: dmaengine: clean up description of dmatest usage
dmaengine: tegra210-adma: remove PM_CLK dependency
dmaengine: fsl-edma: add i.mx7ulp edma2 version support
dt-bindings: dma: fsl-edma: add new i.mx7ulp-edma
dmaengine: fsl-edma-common: version check for v2 instead
dmaengine: fsl-edma-common: move dmamux register to another single function
dmaengine: fsl-edma: add drvdata for fsl-edma
dmaengine: Revert "dmaengine: fsl-edma: support little endian for edma driver"
dmaengine: rcar-dmac: Reject zero-length slave DMA requests
dmaengine: dw: Enable iDMA 32-bit on Intel Elkhart Lake
dmaengine: dw-edma: fix semicolon.cocci warnings
dmaengine: sh: usb-dmac: Use [] to denote a flexible array member
dmaengine: dmatest: timeout value of -1 should specify infinite wait
dmaengine: dw: Distinguish ->remove() between DW and iDMA 32-bit
dmaengine: fsl-edma: support little endian for edma driver
dmaengine: hsu: Revert "set HSU_CH_MTSR to memory width"
dmagengine: pl330: add code to get reset property
dt-bindings: pl330: document the optional resets property
...
Merge tag 'mtd/for-5.3' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux
Pull MTD updates from Miquel Raynal:
"This contains the following changes for MTD:
MTD core changes:
- New Hyperbus framework
- New _is_locked (concat) implementation
- Various cleanups
NAND core changes:
- use longest matching pattern in ->exec_op() default parser
- export NAND operation tracer
- add flag to indicate panic_write in MTD
- use kzalloc() instead of kmalloc() and memset()
Raw NAND controller drivers changes:
- brcmnand:
- fix BCH ECC layout for large page NAND parts
- fallback to detected ecc-strength, ecc-step-size
- when oops in progress use pio and interrupt polling
- code refactor code to introduce helper functions
- add support for v7.3 controller
- FSMC:
- use nand_op_trace for operation tracing
- GPMI:
- move all driver code into single file
- various cleanups (including dmaengine changes)
- use runtime PM to manage clocks
- implement exec_op
- MTK:
- correct low level time calculation of r/w cycle
- improve data sampling timing for read cycle
- add validity check for CE# pin setting
- fix wrongly assigned OOB buffer pointer issue
- re-license MTK NAND driver as Dual MIT/GPL
- STM32:
- manage the get_irq error case
- increase DMA completion timeouts
Raw NAND chips drivers changes:
- Macronix: add read-retry support
Onenand driver changes:
- add support for 8Gb datasize chips
- avoid fall-through warnings
SPI-NAND changes:
- define macros for page-read ops with three-byte addresses
- add support for two-byte device IDs and then for GigaDevice
GD5F1GQ4UFxxG
- add initial support for Paragon PN26G0xA
- handle the case where the last page read has bitflips
SPI-NOR core changes:
- add support for the mt25ql02g and w25q16jv flashes
- print error in case of jedec read id fails
- is25lp256: add post BFPT fix to correct the addr_width
SPI NOR controller drivers changes:
- intel-spi: Add support for Intel Elkhart Lake SPI serial flash
- smt32: remove the driver as the driver was replaced by spi-stm32-qspi.c
- cadence-quadspi: add reset control"
* tag 'mtd/for-5.3' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (60 commits)
mtd: concat: implement _is_locked mtd operation
mtd: concat: refactor concat_lock/concat_unlock
mtd: abi: do not use C++ style comments in uapi header
mtd: afs: remove unneeded NULL check
mtd: rawnand: stm32_fmc2: increase DMA completion timeouts
mtd: rawnand: Use kzalloc() instead of kmalloc() and memset()
mtd: hyperbus: Add driver for TI's HyperBus memory controller
mtd: spinand: read returns badly if the last page has bitflips
mtd: spinand: Add initial support for Paragon PN26G0xA
mtd: rawnand: mtk: Re-license MTK NAND driver as Dual MIT/GPL
mtd: rawnand: gpmi: remove double assignment to block_size
dt-bindings: mtd: brcmnand: Add brcmnand, brcmnand-v7.3 support
mtd: rawnand: brcmnand: Add support for v7.3 controller
mtd: rawnand: brcmnand: Refactored code to introduce helper functions
mtd: rawnand: brcmnand: When oops in progress use pio and interrupt polling
mtd: Add flag to indicate panic_write
mtd: rawnand: Add Macronix NAND read retry support
mtd: onenand: Avoid fall-through warnings
mtd: spinand: Add support for GigaDevice GD5F1GQ4UFxxG
mtd: spinand: Add support for two-byte device IDs
...
This reverts commit 7144afd025 ("dmaengine: fsl-edma: add i.mx7ulp
edma2 version support") as this fails to build with module option due to
usage of of_irq_count() which is not an exported symbol as kernel
drivers are *not* expected to use it (rightly so).
Signed-off-by: Vinod Koul <vkoul@kernel.org>
A tx descriptor retrieved from an empty xfers_list may not have valid
pointers to the callback functions.
Avoid calling dmaengine_desc_get_callback_invoke() if xfers_list is empty.
Signed-off-by: Raag Jadav <raagjadav@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One slot is left unused in the circular FIFO to differentiate the
'full' and 'empty' cases, so take that into account when counting
the completed descriptors.
Fixes the issue reported here:
https://lkml.org/lkml/2019/6/18/669
Cc: stable@vger.kernel.org
Reported-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It is possible for an IRQ triggered by channel0 to be received after the
clocks are disabled once the firmware has been loaded during sdma probe. If
that happens, clearing it by writing to SDMA_H_INTR won't work and the
kernel will hang processing infinite interrupts. We actually don't need an
interrupt triggered on channel0, since the current code polls
SDMA_H_STATSTOP to know when channel0 is done rather than relying on the
interrupt, so just clear BD_INTR to disable the channel0 interrupt and
avoid the above case.
This issue was introduced by commit 1d069bfa3c ("dmaengine: imx-sdma:
ack channel 0 IRQ in the interrupt handler"), which didn't take care of
the above case.
Fixes: 1d069bfa3c ("dmaengine: imx-sdma: ack channel 0 IRQ in the interrupt handler")
Cc: stable@vger.kernel.org #5.0+
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Reported-by: Sven Van Asbroeck <thesven73@gmail.com>
Tested-by: Sven Van Asbroeck <thesven73@gmail.com>
Reviewed-by: Michael Olbrich <m.olbrich@pengutronix.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If probe() fails anywhere beyond the point where
sdma_get_firmware() is called, then a kernel oops may occur.
Problematic sequence of events:
1. probe() calls sdma_get_firmware(), which schedules the
firmware callback to run when firmware becomes available,
using the sdma instance structure as the context
2. probe() encounters an error, which deallocates the
sdma instance structure
3. firmware becomes available, firmware callback is
called with deallocated sdma instance structure
4. use after free - kernel oops !
Solution: only attempt to load firmware when we're certain
that probe() will succeed. This guarantees that the firmware
callback's context will remain valid.
Note that the remove() path is unaffected by this issue: the
firmware loader will increment the driver module's use count,
ensuring that the module cannot be unloaded while the
firmware callback is pending or running.
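A hedged sketch of the resulting probe() ordering (heavily simplified, with the setup steps and local variables elided; sdma_get_firmware() is the driver function named above): the asynchronous firmware request is issued only once nothing after it can fail.

static int sdma_probe(struct platform_device *pdev)
{
	struct sdma_engine *sdma = NULL; /* set up in the elided steps below */
	const char *fw_name = NULL;	 /* from platform data / DT, elided */
	int ret;

	/* ... allocate the sdma instance, map registers, request the IRQ,
	 * register the dmaengine device; any failure here simply unwinds,
	 * because no firmware callback has been scheduled yet ... */

	/* Last step: only now schedule the asynchronous firmware load, so
	 * the callback's context (the sdma instance) can no longer be
	 * freed by a failing probe(). */
	ret = sdma_get_firmware(sdma, fw_name);
	if (ret)
		dev_warn(&pdev->dev, "failed to get firmware\n");

	return 0;
}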
Signed-off-by: Sven Van Asbroeck <TheSven73@gmail.com>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
[vkoul: fixed braces for if condition]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The "pending" variable was a u32 but we cast it to an unsigned long
pointer when we do the for_each_set_bit() loop. The problem is that on
big endian 64bit systems that results in an out of bounds read.
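A minimal sketch of the safe pattern (the register offset and the per-channel handler are hypothetical): keep the bitmap in an unsigned long so for_each_set_bit() only ever reads the variable it was given.

#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/io.h>

static irqreturn_t sketch_irq_handler(int irq, void *dev_id)
{
	void __iomem *base = dev_id;
	unsigned long pending;	/* not u32: for_each_set_bit() expects it */
	unsigned int i;

	pending = readl(base + 0x10);		/* hypothetical pending register */
	for_each_set_bit(i, &pending, 32)
		sketch_handle_chan_irq(i);	/* hypothetical helper */

	return IRQ_HANDLED;
}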
Fixes: 4e4106f5e9 ("dmaengine: jz4780: Fix transfers being ACKed too soon")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Tegra ADMA does not use pm-clk interface now and hence the dependency
is removed from Kconfig.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add edma2 for i.mx7ulp as version v3, since v2 has already been used by
mcf-edma.
The big changes compared to v1 are as follows:
1. Only one dmamux.
2. An additional clock, dma_clk, besides the dmamux clock.
3. 16 independent interrupts instead of only one interrupt for
all channels.
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The upcoming v3 i.mx7ulp edma is based on v1, so change the version
check logic to check for v2 instead.
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Prepare for edma v2 on i.mx7ulp, whose dmamux register is 32-bit. No
functional change.
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are some differences between vf610 and the upcoming i.mx7ulp. Put
such differences into static driver data so they can be distinguished
easily at the driver level, and change mcf-edma accordingly.
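A hedged sketch of the pattern (the field names and values are illustrative, not the driver's actual drvdata): per-variant constants live in a static structure that the OF match table points at, and of_device_get_match_data() returns the right one in probe().

#include <linux/mod_devicetable.h>

struct sketch_edma_drvdata {
	u32 version;
	u32 dmamuxs;		/* number of DMAMUX blocks */
	bool has_dmaclk;	/* extra "dma" clock on newer parts */
};

static const struct sketch_edma_drvdata vf610_data = {
	.version	= 1,
	.dmamuxs	= 2,
	.has_dmaclk	= false,
};

static const struct of_device_id sketch_edma_dt_ids[] = {
	{ .compatible = "fsl,vf610-edma", .data = &vf610_data },
	{ /* sentinel */ }
};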
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This reverts commit 002905eca5.
Commit 002905eca5 ("dmaengine: fsl-edma: support little endian for edma
driver") incorrectly assumed that there was no little-endian support
in the driver.
This causes hangs on Vybrid, so revert it so that Vybrid systems
can boot again.
Reported-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Fabio Estevam <festevam@gmail.com>
Tested-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The gpmi driver's performance suffers from NAND operations being split
into multiple small DMA transfers. This was forced by the NAND layer
in the former days, but now with exec_op we can use the controller as
intended.
With this patch gpmi_nfc_exec_op becomes the main entry point to NAND
operations. Here all instructions are collected and chained as separate
DMA transfers. In the end the whole chain is fired and waited on to
finish. gpmi_nfc_exec_op only does the hardware operations; bad block
marker swapping and buffer scrambling are done by the callers. It's worth
noting that the nand_*_op functions always take the buffer lengths for
the data that the NAND chip actually transfers. When doing BCH we have
to calculate the net data size from the raw data size in some places.
This patch has been tested with 2048/64 and 2048/128 byte NAND on
i.MX6q. mtd_oobtest, mtd_subpagetest and mtd_speedtest run without
errors. nandbiterrs, nandpagetest and nandsubpagetest userspace tests
from mtdutils run without errors and UBIFS can successfully be mounted.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
The mxs dma driver uses the flags parameter of dmaengine_prep_slave_sg() for
custom flags, but still uses the dmaengine-specific names for them.
Do a little bit better and at least give the flag a custom name.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
The mxs dma driver can do PIO transfers. A pointer to the PIO words
to transfer is passed in the struct scatterlist * argument of
dmaengine_prep_slave_sg(). It's quite ugly and non-obvious to cast
u32 * to struct scatterlist * each time dmaengine_prep_slave_sg() is
called, so add a static inline wrapper function to be called by the user,
along with a description of what is going on.
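A minimal sketch of such a wrapper (the name and the kernel-doc wording are illustrative, not necessarily the driver's final API): the ugly cast happens in exactly one documented place.

#include <linux/dmaengine.h>

/*
 * sketch_mxs_dma_prep_pio - prepare a PIO "transfer" on an mxs DMA channel
 *
 * The mxs DMA controller reuses the scatterlist argument to carry raw PIO
 * words instead of a real scatterlist, hence the cast below.
 */
static inline struct dma_async_tx_descriptor *
sketch_mxs_dma_prep_pio(struct dma_chan *chan, u32 *pio, unsigned int npio,
			enum dma_transfer_direction dir, unsigned long flags)
{
	return dmaengine_prep_slave_sg(chan, (struct scatterlist *)pio,
				       npio, dir, flags);
}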
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
The mxs dma driver insists on having the DMA_PREP_INTERRUPT flag set
on all but the first transfer. There's no need to let the user set this
flag; the driver can set it internally whenever it needs it. Drop
handling of this flag from the driver.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
While the .device_prep_slave_sg() callback rejects empty scatterlists,
it still accepts single-entry scatterlists with a zero-length segment.
These may happen if a driver calls dmaengine_prep_slave_single() with a
zero len parameter. The corresponding DMA request will never complete,
leading to messages like:
rcar-dmac e7300000.dma-controller: Channel Address Error happen
and DMA timeouts.
Although requesting a zero-length DMA request is a driver bug, rejecting
it early eases debugging. Note that the .device_prep_dma_memcpy()
callback already rejects requests to copy zero bytes.
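A minimal sketch of such an early rejection (not the rcar-dmac code itself; the helper name is made up), assuming a prep_slave_sg-style callback that would otherwise go on to build the descriptor:

#include <linux/scatterlist.h>

/* Reject requests that can never complete on the hardware. */
static bool sketch_slave_sg_is_valid(struct scatterlist *sgl,
				     unsigned int sg_len)
{
	if (!sgl || !sg_len)
		return false;
	/* dmaengine_prep_slave_single() with len == 0 ends up here as a
	 * single-entry scatterlist with a zero-length segment. */
	if (sg_len == 1 && sg_dma_len(sgl) == 0)
		return false;
	return true;
}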
Reported-by: Eugeniu Rosca <erosca@de.adit-jv.com>
Analyzed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The Intel Elkhart Lake OSE (Offload Service Engine) provides a few DMA
controllers to the host. Enable them in the driver.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Flexible array members should be denoted using [] instead of [0], else
gcc will not warn when they are no longer at the end of the structure.
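For illustration only (a made-up struct, not from the driver):

struct sketch_hw_desc {
	u32 nr_words;
	u32 words[];	/* flexible array member: [] instead of [0] lets gcc
			 * warn if a field is ever added after it */
};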
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The dmatest module parameter 'timeout' is documented as accepting a
-1 to mean "infinite timeout". However, an infinite timeout is not
advised, nor possible since the module parameter is an unsigned int,
which won't accept a negative value. Change the parameter
comment to reflect current behavior, which allows values from 0 up to
4294967295 (0xFFFFFFFF).
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the same way as done for ->probe(), call ->remove() based on
the type of the hardware.
While it works now due to the equivalence of the two removal functions,
this might change in the future.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The commit
080edf75d3 ("dmaengine: hsu: set HSU_CH_MTSR to memory width")
was submitted by mistake. Further investigation shows that the original
code does a better job, since the memory-side transfer size has never
been configured by DMA users.
As per the latest revision of the documentation: "Channel minimum transfer
size (CHnMTSR)... For IOSF UART, the maximum value that can be programmed is
64 and the minimum value that can be programmed is 1."
This reverts commit 080edf75d3.
Fixes: 080edf75d3 ("dmaengine: hsu: set HSU_CH_MTSR to memory width")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not see http www gnu org
licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The DMA controller on some SoCs can be held in reset, and thus requires
the reset signal(s) to be deasserted. Most SoCs will have just one reset
signal, but others, e.g. Arria10/Stratix10, have an additional reset
signal referred to as the OCP.
Add code to get the reset property from the device tree and to deassert
and assert it.
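A minimal sketch of how such an optional reset is typically obtained and deasserted in probe(), using the generic reset-controller API; this shows the general pattern, not necessarily the exact pl330 change, and the helper name is made up.

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

static int sketch_get_and_deassert_reset(struct platform_device *pdev)
{
	struct reset_control *rst;

	rst = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);	/* also handles -EPROBE_DEFER */

	/* An optional reset returns a dummy control when the property is
	 * absent, so this stays safe on SoCs without the reset lines. */
	return reset_control_deassert(rst);
}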
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The registers for the AXI DMAC are detailed at:
https://wiki.analog.com/resources/fpga/docs/axi_dmac#register_map
This change adds regmap support for these registers, in case someone wants
more direct access to them via this interface.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
[vkoul: fixed code style issue]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a partial transfer is received, the driver should not submit any more
segments to the hardware, as they will be ignored/unused until a new
transfer start operation is done.
This change implements this by adding a new flag on the AXI DMAC
descriptor. This flag is set to true if there was a partial transfer in
a previously completed segment. When that flag is true, the TLAST flag is
added to the submitted segment, signaling the controller to stop
receiving more segments.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Starting with version 4.2.a, the AXI DMAC controller can report partial
transfers that have been issued.
This change implements computing DMA residue information for transfers,
based on that reported information.
The way this is done is to dequeue the partial transfers from the FIFO of
partial transfers, store the partial length in the correct segment and
descriptor, and compute the residue before submitting the DMA cookie to the
DMA framework.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This allows each virtual channel to store information about each transfer
that completed, i.e. which transfer succeeded (or which failed) and if
there was any residue data on each (completed) transfer.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, because there is no need to save the file dentry, remove the
variables that were saving them as they were never even being used once
set.
Cc: Sinan Kaya <okaya@kernel.org>
Cc: Andy Gross <agross@kernel.org>
Cc: David Brown <david.brown@linaro.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-arm-msm@vger.kernel.org
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, because there is no need to save the file dentry, remove the
variable that was saving it as it was never even being used once set.
Cc: Daniel Mack <daniel@zonque.org>
Cc: Haojian Zhuang <haojian.zhuang@gmail.com>
Cc: Robert Jarzmik <robert.jarzmik@free.fr>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
No need to check the return value of debugfs_create_file(), so no need
to provide a fake "cast away" of the return value either.
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, because there is no need to save the file dentry, remove the
variable that was saving it as it was never even being used once set.
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
No need to check the return value of debugfs_create_file(), so no need
to provide a fake "cast away" of the return value either.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If CONFIG_PCI_MSI is not set, building with CONFIG_DW_EDMA
fails:
drivers/dma/dw-edma/dw-edma-core.c: In function dw_edma_irq_request:
drivers/dma/dw-edma/dw-edma-core.c:784:21: error: implicit declaration of function pci_irq_vector; did you mean rcu_irq_enter? [-Werror=implicit-function-declaration]
err = request_irq(pci_irq_vector(to_pci_dev(dev), 0),
^~~~~~~~~~~~~~
Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: e63d79d1ff ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The change replaces the old license information in the comment header with
the new SPDX license specifier.
It also bumps the year range from 2013-2015 to 2013-2019, which reflects
recent changes that were added to the driver.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Synopsys eDMA IP is normally distributed along with Synopsys PCIe
EndPoint IP (depending on the use and licensing agreement).
This IP requires some basic configurations, such as:
- eDMA registers BAR
- eDMA registers offset
- eDMA registers size
- eDMA linked list memory BAR
- eDMA linked list memory offset
- eDMA linked list memory size
- eDMA data memory BAR
- eDMA data memory offset
- eDMA data memory size
- eDMA version
- eDMA mode
- IRQs available for eDMA
As a working example, PCIe glue-logic will attach to a Synopsys PCIe
EndPoint IP prototype kit (Vendor ID = 0x16c3, Device ID = 0xedda),
which has a built-in eDMA IP with this default configuration:
- eDMA registers BAR = 0
- eDMA registers offset = 0x00001000 (4 Kbytes)
- eDMA registers size = 0x00002000 (8 Kbytes)
- eDMA linked list memory BAR = 2
- eDMA linked list memory offset = 0x00000000 (0 Kbytes)
- eDMA linked list memory size = 0x00800000 (8 Mbytes)
- eDMA data memory BAR = 2
- eDMA data memory offset = 0x00800000 (8 Mbytes)
- eDMA data memory size = 0x03800000 (56 Mbytes)
- eDMA version = 0
- eDMA mode = EDMA_MODE_UNROLL
- IRQs = 1
This driver can be compiled as built-in or as an external module.
To enable this driver, just select the DW_EDMA_PCIE option in the kernel
configuration; it requires and automatically selects the DW_EDMA
option too.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for the eDMA IP version 0 driver for both register maps (legacy
and unroll).
The legacy register mapping was the initial implementation, which consisted
of having all channel registers multiplexed; they could be switched at any
time (which could lead to a race condition) through a viewport register,
with access to only one channel available at a time.
This register mapping is not very effective or efficient in a multithreaded
environment, which led to the development of the unroll register mapping,
which makes all channel registers accessible at any time by spreading the
registers of all channels with an offset between them.
This version supports a maximum of 16 independent channels (8 write +
8 read), which can run simultaneously.
It implements a scatter-gather transfer through a linked list, where the
size of the linked list depends on the allocated memory divided equally
among all channels.
Each linked-list descriptor can transfer from 1 byte to 4 Gbytes and is
DWORD-aligned.
Both SAR (Source Address Register) and DAR (Destination Address Register)
are byte-aligned.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add the Synopsys PCIe Endpoint eDMA IP core driver to the kernel.
This IP is generally distributed with Synopsys PCIe Endpoint IP (depending
on the use and licensing agreement).
This core driver initializes and configures the eDMA IP using the
vma-helper functions and the dmaengine subsystem.
This driver can be compiled as built-in or as an external module.
To enable this driver, just select the DW_EDMA option in the kernel
configuration; it requires and automatically selects the DMA_ENGINE and
DMA_VIRTUAL_CHANNELS options too.
In order to transfer data from point A to B as fast as possible this IP
requires a dedicated memory space containing a linked list of elements.
All elements of this linked list are contiguous and each one describes a
data transfer (source and destination addresses, length and a control
variable).
For the sake of simplicity, let's assume a memory space for channel write
0 which allows about 42 elements.
+---------+
| Desc #0 |-+
+---------+ |
            V
      +----------+
      | Chunk #0 |-+
      |  CB = 1  | |  +----------+  +-----+  +-----------+  +-----+
      +----------+ +->| Burst #0 |->| ... |->| Burst #41 |->| llp |
            |         +----------+  +-----+  +-----------+  +-----+
            V
      +----------+
      | Chunk #1 |-+
      |  CB = 0  | |  +-----------+  +-----+  +-----------+  +-----+
      +----------+ +->| Burst #42 |->| ... |->| Burst #83 |->| llp |
            |         +-----------+  +-----+  +-----------+  +-----+
            V
      +----------+
      | Chunk #2 |-+
      |  CB = 1  | |  +-----------+  +-----+  +------------+  +-----+
      +----------+ +->| Burst #84 |->| ... |->| Burst #125 |->| llp |
            |         +-----------+  +-----+  +------------+  +-----+
            V
      +----------+
      | Chunk #3 |-+
      |  CB = 0  | |  +------------+  +-----+  +------------+  +-----+
      +----------+ +->| Burst #126 |->| ... |->| Burst #129 |->| llp |
                      +------------+  +-----+  +------------+  +-----+
Legend:
- Linked list, also known as Chunk
- Linked list element, also known as Burst
- CB, also known as Change Bit: a control bit (typically toggled) that
allows to easily identify and differentiate between the current linked
list and the previous or the next one.
- LLP: a special element that indicates the end of the linked list
element stream and also informs that the next CB should be toggled.
On every last Burst of a Chunk (Burst #41, Burst #83, Burst #125 or
even Burst #129) some flags are set in its control variable (the RIE and
LIE bits) that will trigger the "done" interrupt.
In the interrupt callback it is decided whether to recycle the linked
list memory space by writing a new set of Burst elements (if there are
still Chunks to transfer) or whether the transfer is considered completed
(if there are no Chunks available to transfer).
In scatter-gather transfer mode, the client submits a scatter-gather
list of n (in this case 130) elements, which is divided into multiple
Chunks; each Chunk has a limited number of Bursts (in this case 42), and
after transferring all Bursts an interrupt is triggered, which allows the
dedicated linked list memory to be recycled with the new information for
the next Chunk and its associated Bursts, repeating the whole cycle again.
In cyclic transfer mode, the client submits a buffer pointer, its length
and a number of repetitions; in this case each Burst corresponds directly
to one repetition.
Each Burst can describe a data transfer from point A (source) to point
B (destination) with a length from 1 byte up to 4 GB. Since the
dedicated memory space where the linked list resides is limited, the
whole set of n Burst elements is organized in several Chunks, which are
used later to recycle the dedicated memory space to initiate a new
sequence of data transfers.
The whole transfer is considered completed when all Bursts have been
transferred.
Currently this IP has a well-known register map, which includes
support for legacy and unroll modes. Legacy mode is the version of this
register map that has a multiplexer register that allows switching the
registers between all write and read channels, while the unroll mode
repeats the registers of all write and read channels with an offset
between them. This register map is called v0.
The IP team is creating a new register map more suitable to the latest
PCIe features, which will very likely change the register map; that
version will be called v1. As soon as this new version is released by
the IP team, support for it will be included in this driver.
Logically, patches 1, 2 and 3 should be squashed into a single
patch, but for the sake of simplicity of review, the work was divided
into these 3 patches.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull yet more SPDX updates from Greg KH:
"Another round of SPDX header file fixes for 5.2-rc4
These are all more "GPL-2.0-or-later" or "GPL-2.0-only" tags being
added, based on the text in the files. We are slowly chipping away at
the 700+ different ways people tried to write the license text. All of
these were reviewed on the spdx mailing list by a number of different
people.
We now have over 60% of the kernel files covered with SPDX tags:
$ ./scripts/spdxcheck.py -v 2>&1 | grep Files
Files checked: 64533
Files with SPDX: 40392
Files with errors: 0
I think the majority of the "easy" fixups are now done, it's now the
start of the longer-tail of crazy variants to wade through"
* tag 'spdx-5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (159 commits)
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 450
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 449
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 448
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 446
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 445
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 444
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 443
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 442
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 441
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 440
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 438
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 437
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 436
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 435
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 434
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 433
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 432
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 431
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 430
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 429
...
Merge tag 'dmaengine-fix-5.2-rc4' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
- jz4780 transfer fix for acking descriptors early
- fsl-qdma: clean registers on error
- dw-axi-dmac: null pointer dereference fix
- mediatek-cqdma: fix sleeping in atomic context
- tegra210-adma: fix a bunch of issues like crashing in driver probe,
channel FIFO configuration etc.
- sprd: Fixes for possible crash on descriptor status, block length
overflow. For 2-stage transfer fix incorrect start, configuration and
interrupt handling.
* tag 'dmaengine-fix-5.2-rc4' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: sprd: Add interrupt support for 2-stage transfer
dmaengine: sprd: Fix the right place to configure 2-stage transfer
dmaengine: sprd: Fix block length overflow
dmaengine: sprd: Fix the incorrect start for 2-stage destination channels
dmaengine: sprd: Add validation of current descriptor in irq handler
dmaengine: sprd: Fix the possible crash when getting descriptor status
dmaengine: tegra210-adma: Fix spelling
dmaengine: tegra210-adma: Fix channel FIFO configuration
dmaengine: tegra210-adma: Fix crash during probe
dmaengine: mediatek-cqdma: sleeping in atomic context
dmaengine: dw-axi-dmac: fix null dereference when pointer first is null
dmaengine: fsl-qdma: Add improvement
dmaengine: jz4780: Fix transfers being ACKed too soon
Add 8250 UART APDMA support for the MediaTek UART. If the MediaTek UART is
enabled by SERIAL_8250_MT6577, this driver can be enabled to offload
the UART device's byte transfers.
Signed-off-by: Long Cheng <long.cheng@mediatek.com>
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 101 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190531190113.822954939@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details the full gnu general public license is included in
this distribution in the file called copying
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope [that] it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details the full gnu general public license is included in
this distribution in the file called copying
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 57 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.515993066@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details the full gnu general public license is included in
this distribution in the file called copying
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 39 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.397680977@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 263 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.208660670@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 and
only version 2 as published by the free software foundation this
program is distributed in the hope that it will be useful but
without any warranty without even the implied warranty of
merchantability or fitness for a particular purpose see the gnu
general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 294 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141900.825281744@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
H6 DMA has more than 32 supported DRQs, which means that the configuration
register is slightly rearranged. It also needs an additional clock to be
enabled.
Add support for it.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
H6 DMA has its mode fields in a different position than any other currently
supported DMA controller.
Add a quirk for that.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
H6 DMA has more than 32 possible DRQs, which means that the current maximum
of 31 DRQs is not enough anymore.
Add a quirk which sets the source and destination DRQ numbers.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The H6 DMA controller needs an additional mbus clock to be enabled.
Add a quirk for it and handle it accordingly.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Apparently the driver was never tested with the DMA_PREP_INTERRUPT flag
unset, since in that case it completely disables interrupt handling instead
of merely skipping the callback invocations, leaving the channel in an
unusable state.
The flag is always set by all in-kernel drivers that use the APB DMA, so
let's error out in the opposite case for consistency. It won't be difficult
to support that case properly if it is ever needed.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When an error occurs we should clear the error register and then return.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
[vkoul: change patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The CMD field of the source/destination descriptor format should hold the
lower part of the corresponding struct fsl_qdma_engine member's data address.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not see http www gnu org
licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 228 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190528171438.107155473@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
license terms gnu general public license gpl version 2
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 161 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190528170027.447718015@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
licensed under the gpl 2
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 135 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190528170026.071193225@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 655 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070034.575739538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details the full
gnu general public license is in this distribution in the file
called copying
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190524100843.594454135@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The `copy_align` property is a generic property that describes the alignment
requirement for DMA memcpy & sg ops.
It serves mostly an informational purpose and can be used in DMA tests to
know what alignment to expect.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Starting with version 4.1.a the AXI-DMAC is capable of reporting the
required length alignment.
The LSBs that are required to be set for alignment will always read back as
set from the transfer length register. It is not possible to clear them by
writing a 0. This means the driver can discover the length alignment
requirement by writing 0 to that register and reading back the value.
Since the DMA can have a length alignment requirement that differs from the
address alignment requirement, track both of them independently.
For older versions of the peripheral assume that the length alignment
requirement is equal to the address alignment requirement.
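A minimal standalone sketch of the discovery trick described above, using a
stubbed register instead of the real readl()/writel() accessors (the stub
value and function names are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Pretend transfer-length register: the hardware forces the LSBs that
 * encode the required alignment to read back as set (here 8 bytes). */
static uint32_t length_reg_read_after_zero_write(void)
{
	return 0x7;
}

int main(void)
{
	/* "Write 0, read back": the stuck LSBs encode the length alignment. */
	uint32_t readback = length_reg_read_after_zero_write();
	uint32_t length_align = readback + 1;	/* 0x7 + 1 = 8 bytes */

	printf("length alignment: %u bytes\n", (unsigned)length_align);
	return 0;
}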
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a user tries to request a DMA channel via __dma_request_channel(), the
core does not validate whether it is the correct DMA device to request from,
which forces each DMA engine driver to validate the correct device node in
its filter function where necessary.
Thus we can add the matching device node validation in the DMA engine core
and remove all of the device node validation from the drivers.
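A hedged, self-contained sketch of such a core-side check (simplified types
standing in for the real dmaengine structures; helper names are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct device_node { const char *name; };
struct dma_device  { struct device_node *of_node; };
struct dma_chan    { struct dma_device *device; };

typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *param);

/* Core-side check: if the caller asked for a specific device node, reject
 * channels of any other controller before the driver filter even runs. */
static bool chan_matches(struct dma_chan *chan, struct device_node *np,
			 dma_filter_fn fn, void *param)
{
	if (np && chan->device->of_node != np)
		return false;
	return fn ? fn(chan, param) : true;
}

int main(void)
{
	struct device_node a = { "dma@a" }, b = { "dma@b" };
	struct dma_device dev_a = { &a };
	struct dma_chan chan = { &dev_a };

	printf("match a: %d, match b: %d\n",
	       chan_matches(&chan, &a, NULL, NULL),
	       chan_matches(&chan, &b, NULL, NULL));
	return 0;
}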
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Based on 1 normalized pattern(s):
this is free software you can redistribute it and or modify it under
the terms of the gnu general public license as published by the free
software foundation either version 2 of the license or at your
option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 14 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520170857.915677517@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We get a compiler warning about variable ‘tail_desc’ being set but not used:
drivers/dma/xilinx/xilinx_dma.c:1102:42: warning:
variable ‘tail_desc’ set but not used [-Wunused-but-set-variable]
struct xilinx_dma_tx_descriptor *desc, *tail_desc;
So remove it.
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For 2-stage transfers, some users like audio still need a transaction
interrupt to be notified when the 2-stage transfer is completed. Thus we
should enable the 2-stage transfer interrupt to support this feature.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Move the 2-stage configuration before configuring the link-list mode,
since we will use some 2-stage configuration to fill the link-list
configuration.
Signed-off-by: Eric Long <eric.long@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The maximum value of the block length is 0xffff, so if the configured
transfer length is more than 0xffff, the block length will overflow and lead
to a configuration error.
Thus we can set the block length to the maximum burst length to avoid this
issue, since the maximum burst length will never be a value larger than
0xffff.
Signed-off-by: Eric Long <eric.long@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The 2-stage destination channel will be triggered by the source channel
automatically, which means we should not trigger it by a software request.
Signed-off-by: Eric Long <eric.long@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a user terminates a DMA channel to free all its descriptors, a
transaction interrupt may be triggered at the same time. In that case we
should not handle the interrupt; validate whether 'schan->cur_desc' has been
set to NULL to avoid crashing the kernel.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We will get a NULL virtual descriptor from vchan_find_desc() when the
descriptor has been submitted, which will crash the kernel when getting the
descriptor status.
In this case the descriptor has been submitted for processing but has not
completed yet, which means it is held on the 'vc->desc_submitted' list. So
we cannot get the currently processing descriptor via vchan_find_desc(), but
the pointer 'schan->cur_desc' points to the currently processing descriptor;
use 'schan->cur_desc' to get its status and avoid this issue.
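A minimal sketch of that fallback (simplified structures that only stand in
for the driver's; the helper names are illustrative, not the driver's API):

#include <stddef.h>
#include <stdio.h>

struct desc { unsigned int residue; };

struct chan {
	struct desc *cur_desc;	/* descriptor currently being processed */
};

/* Stand-in for vchan_find_desc(): NULL once the cookie has been issued. */
static struct desc *find_submitted(struct chan *c, int cookie)
{
	(void)c; (void)cookie;
	return NULL;
}

/* Prefer the submitted-list lookup, fall back to the in-flight descriptor. */
static unsigned int get_residue(struct chan *c, int cookie)
{
	struct desc *d = find_submitted(c, cookie);

	if (!d)
		d = c->cur_desc;
	return d ? d->residue : 0;
}

int main(void)
{
	struct desc cur = { .residue = 512 };
	struct chan c = { .cur_desc = &cur };

	printf("residue: %u\n", get_residue(&c, 1));
	return 0;
}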
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or any
later version this program is distributed in the hope that it will
be useful but without any warranty without even the implied warranty
of merchantability or fitness for a particular purpose see the gnu
general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 50 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.917228456@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
the code contained herein is licensed under the gnu general public
license you may obtain a copy of the gnu general public license
version 2 or later at the following locations
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 4 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.707528683@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not see http www gnu org licenses
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details [based]
[from] [clk] [highbank] [c] you should have received a copy of the
gnu general public license along with this program if not see http
www gnu org licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 355 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154041.837383322@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details the full
gnu general public license is included in this distribution in the
file called copying
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 9 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154041.244154651@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Based on 1 normalized pattern(s):
licensed under gplv2 or later
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 118 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154040.961286471@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit ded1f3db4c ("dmaengine: tegra210-adma: prepare for supporting
newer Tegra chips") removed the default settings for the DMA channel RX and
TX FIFO sizes and this is breaking DMA transfers. The intention was to
move the default settings to the chip specific data structure because
this commit was preparing for adding support for Tegra186 where the
fields for the FIFO CTRL register are slightly different.
Fix the configuration of the FIFO sizes by adding default values for
the FIFO CTRL register for both Tegra210 and Tegra186 and store the
values in the chip specific structure.
Fixes: ded1f3db4c ("dmaengine: tegra210-adma: prepare for supporting newer Tegra chips")
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit f33e7bb3eb ("dmaengine: tegra210-adma: restore channel status")
added support to save and restore the DMA channel registers when runtime
suspending the ADMA. This change is causing the kernel to crash when
probing the ADMA, if the device is probed deferred when looking up the
channel interrupts. The crash occurs because not all of the channel base
addresses have been setup at this point and in the clean-up path of the
probe, pm_runtime_suspend() is called invoking its callback which
expects all the channel base addresses to be initialised.
Although this could be fixed by simply checking for a NULL address, on
further review of the driver it seems more appropriate that we only call
pm_runtime_get_sync() after all the channel interrupts and base
addresses have been configured. Therefore, fix this crash by moving the
calls to pm_runtime_enable(), pm_runtime_get_sync() and
tegra_adma_init() after the DMA channels have been initialised.
Fixes: f33e7bb3eb ("dmaengine: tegra210-adma: restore channel status")
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The mtk_cqdma_poll_engine_done() function takes a true/false parameter
where true means it's called from atomic context. There are a couple
places where it was set to false but it's actually in atomic context
so it should be true.
All the callers for mtk_cqdma_hard_reset() are holding a spin_lock and
in mtk_cqdma_free_chan_resources() we take a spin_lock before calling
the mtk_cqdma_poll_engine_done() function.
Fixes: b1f01e48df ("dmaengine: mediatek: Add MediaTek Command-Queue DMA controller for MT6765 SoC")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the unlikely event that axi_desc_get returns a null desc in the
very first iteration of the while-loop the error exit path ends
up calling axi_desc_put on a null pointer 'first' and this causes
a null pointer dereference. Fix this by adding a null check on
pointer 'first' before calling axi_desc_put.
Addresses-Coverity: ("Explicit null dereference")
Fixes: 1fe20f1b84 ("dmaengine: Introduce DW AXI DMAC driver")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add SPDX license identifiers to all Make/Kconfig files which:
- Have no license information of any form
These files fall under the project license, GPL v2 only. The resulting SPDX
license identifier is:
GPL-2.0-only
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add SPDX license identifiers to all files which:
- Have no license information of any form
- Have MODULE_LICENCE("GPL*") inside which was used in the initial
scan/conversion to ignore the file
These files fall under the project license, GPL v2 only. The resulting SPDX
license identifier is:
GPL-2.0-only
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The AXI-DMAC supports different types of interface for the data source and
destination ports. Typically one of those ports is a memory-mapped
interface while the other is some kind of streaming interface.
The information about which kind of interface is used for each port is
encoded in the devicetree.
It is also possible in the driver to detect whether a port supports
memory-mapped transfers or not. For streaming interfaces the address
register is read-only and will always return 0. So in order to check if a
port supports memory-mapped transfers, write a non-zero value to the
corresponding address register and check that the value read back is still
non-zero.
This allows detecting mismatches between the devicetree description and the
actual hardware configuration.
Unfortunately it is not possible to autodetect the interface types since
there is no method to distinguish between the different streaming ports. So
the best thing that can be done is to error out when a memory mapped port
is described in the devicetree but none is detected in the hardware.
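A small standalone sketch of that probe sequence, with a fake address
register standing in for the real MMIO accesses (the variable and function
names are made up, not the driver's):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool port_is_memory_mapped = true;	/* what the synthesized HW provides */
static uint32_t addr_reg;			/* fake DMAC address register       */

/* Streaming ports ignore writes and always read back 0. */
static void addr_write(uint32_t v) { addr_reg = port_is_memory_mapped ? v : 0; }
static uint32_t addr_read(void)    { return addr_reg; }

/* Write a non-zero value and check the read-back, as described above. */
static bool detect_mm_port(void)
{
	bool mm;

	addr_write(0xffffffff);
	mm = addr_read() != 0;
	addr_write(0);		/* restore the register */
	return mm;
}

int main(void)
{
	printf("port memory-mapped: %d\n", detect_mm_port());
	return 0;
}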
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The TLAST flag is used by the DMAC HDL controller to signal to the
controller that the following segment (to be submitted) is the last one (in
a series of segments).
A receiver DMA (typically another DMAC) can read this parameter (from the
transfer), and terminate the transfer earlier. A typical use-case for this,
is when the receiver expects a certain amount of segments, but for some
reason (e.g. an ADC capture which can have an unknown number of digital
samples) the number of actual segments is smaller. The receiver would read
this flag, and then the DMAC would finish.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DMAC HDL core supports interleaved & cyclic transfers.
An example use-case for this mode is when the controller is used as a
video DMA.
This change sets the `cyclic` field to true, so that when the IRQ comes and
the `axi_dmac_transfer_done()` callback is called (from the interrupt
handler) the proper `vchan_cyclic_callback()` is called. This way the
DMAEngine framework will process data correctly for interleaved + cyclic
transfers.
This doesn't fix anything. It's an enhancement to the driver.
Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit c6504be539 ("dmaengine: stm32-dma: Fix unsigned variable compared
with zero") duplicated the call to platform_get_irq.
So remove the first call to platform_get_irq.
Fixes: c6504be539 ("dmaengine: stm32-dma: Fix unsigned variable compared with zero")
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When an error occurs we should clear the error register and then return.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a multi-descriptor DMA transfer is in progress, the "IRQ pending"
flag will apparently be set for that channel as soon as the last
descriptor loads, way before the IRQ actually happens. This behaviour
has been observed on the JZ4725B, but maybe other SoCs are affected.
In the case where another DMA transfer is running into completion on a
separate channel, the IRQ handler would then run the completion handler
for our previous channel even if the transfer didn't actually finish.
Fix this by checking in the completion handler that we're indeed done;
if not the interrupted DMA transfer will simply be resumed.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Use an SPDX license identifier instead of plain license text in the header.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
SUDMAC driver was introduced in v3.10 but was never integrated for use
by any platform. As it is unused remove it.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- Updates to stm32 dma residue calculations
- Interleave dma capability to axi-dmac and support for ZynqMP arch
- Rework of channel assignment for rcar dma
- Debugfs for pl330 driver
- Support for Tegra186/Tegra194, refactoring for new chips and support
for pause/resume
- Updates to axi-dmac, bcm2835, fsl-edma, idma64, imx-sdma, rcar-dmac,
stm32-dma etc
- dev_get_drvdata() updates on few drivers
* tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (34 commits)
dmaengine: tegra210-adma: restore channel status
dmaengine: tegra210-dma: free dma controller in remove()
dmaengine: tegra210-adma: add pause/resume support
dmaengine: tegra210-adma: add support for Tegra186/Tegra194
Documentation: DT: Add compatibility binding for Tegra186
dmaengine: tegra210-adma: prepare for supporting newer Tegra chips
dmaengine: at_xdmac: remove a stray bottom half unlock
dmaengine: fsl-edma: Adjust indentation
dmaengine: fsl-edma: Fix typo in Vybrid name
dmaengine: stm32-dma: fix residue calculation in stm32-dma
dmaengine: nbpfaxi: Use dev_get_drvdata()
dmaengine: bcm-sba-raid: Use dev_get_drvdata()
dmaengine: stm32-dma: Fix unsigned variable compared with zero
dmaengine: stm32-dma: use platform_get_irq()
dmaengine: rcar-dmac: Update copyright information
dmaengine: imx-sdma: Only check ratio on parts that support 1:1
dmaengine: xgene-dma: fix spelling mistake "descripto" -> "descriptor"
dmaengine: idma64: Move driver name to the header
dmaengine: bcm2835: Drop duplicate capability setting.
dmaengine: pl330: _stop: clear interrupt status
...
Merge tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull mmiowb removal from Will Deacon:
"Remove Mysterious Macro Intended to Obscure Weird Behaviours (mmiowb())
Remove mmiowb() from the kernel memory barrier API and instead, for
architectures that need it, hide the barrier inside spin_unlock() when
MMIO has been performed inside the critical section.
The only relatively recent changes have been addressing review
comments on the documentation, which is in a much better shape thanks
to the efforts of Ben and Ingo.
I was initially planning to split this into two pull requests so that
you could run the coccinelle script yourself, however it's been plain
sailing in linux-next so I've just included the whole lot here to keep
things simple"
* tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (23 commits)
docs/memory-barriers.txt: Update I/O section to be clearer about CPU vs thread
docs/memory-barriers.txt: Fix style, spacing and grammar in I/O section
arch: Remove dummy mmiowb() definitions from arch code
net/ethernet/silan/sc92031: Remove stale comment about mmiowb()
i40iw: Redefine i40iw_mmiowb() to do nothing
scsi/qla1280: Remove stale comment about mmiowb()
drivers: Remove explicit invocations of mmiowb()
drivers: Remove useless trailing comments from mmiowb() invocations
Documentation: Kill all references to mmiowb()
riscv/mmiowb: Hook up mmwiob() implementation to asm-generic code
powerpc/mmiowb: Hook up mmwiob() implementation to asm-generic code
ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
mips/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
sh/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
m68k/io: Remove useless definition of mmiowb()
nds32/io: Remove useless definition of mmiowb()
x86/io: Remove useless definition of mmiowb()
arm64/io: Remove useless definition of mmiowb()
ARM/io: Remove useless definition of mmiowb()
mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors
...
The status of the ADMA channel registers is not saved and restored during
system suspend. If the system enters suspend during active playback, this
results in a wrong state of the channel registers during system resume and
playback fails to resume properly. Fix this by saving the following channel
registers in runtime suspend and restoring them during runtime resume.
* ADMA_CH_LOWER_SRC_ADDR
* ADMA_CH_LOWER_TRG_ADDR
* ADMA_CH_FIFO_CTRL
* ADMA_CH_CONFIG
* ADMA_CH_CTRL
* ADMA_CH_CMD
* ADMA_CH_TC
Runtime PM calls will be invoked during the system resume path if a playback
or capture needs to be resumed. Hence the above changes work fine for the
system suspend case.
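A toy sketch of the save/restore pattern for the register list above, with an
array standing in for the channel MMIO (not the driver's actual runtime PM
callbacks; readl()/writel() are replaced by plain array accesses):

#include <stdint.h>
#include <stdio.h>

enum {
	CH_SRC_ADDR, CH_TRG_ADDR, CH_FIFO_CTRL, CH_CONFIG,
	CH_CTRL, CH_CMD, CH_TC, CH_NUM_REGS
};

static uint32_t hw_regs[CH_NUM_REGS];	/* stand-in for the channel MMIO */

struct adma_chan {
	uint32_t saved[CH_NUM_REGS];
};

static void chan_suspend(struct adma_chan *c)
{
	for (int r = 0; r < CH_NUM_REGS; r++)
		c->saved[r] = hw_regs[r];	/* readl() in the real driver  */
}

static void chan_resume(struct adma_chan *c)
{
	for (int r = 0; r < CH_NUM_REGS; r++)
		hw_regs[r] = c->saved[r];	/* writel() in the real driver */
}

int main(void)
{
	struct adma_chan c;

	hw_regs[CH_CONFIG] = 0xabcd;
	chan_suspend(&c);
	hw_regs[CH_CONFIG] = 0;		/* register content lost in suspend */
	chan_resume(&c);
	printf("CH_CONFIG restored: 0x%x\n", (unsigned)hw_regs[CH_CONFIG]);
	return 0;
}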
Fixes: f46b195799 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
During an audio playback session it is observed that audio goes off after a
few seconds of continuous pause and play. No audio is heard even when the
playback is resumed.
The reason for the above is that currently the ADMA driver does not handle
DMA_PAUSE/DMA_RESUME and the relevant callbacks for dma_device are not
implemented. This patch implements the device_pause and device_resume
callbacks for the device. During pause the TRANSFER_PAUSE bit of the DMA
channel control register is set and the same is cleared during resume.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add Tegra186 specific macro defines and chip_data structure for chip
specific information. New compatibility is added to select relevant
chip details. There is no major change for Tegra194 and hence it can
use the same chip data.
The bits in the BURST_SIZE field of the ADMA CH_CONFIG register are
encoded differently on Tegra186 and Tegra194 compared with Tegra210.
On Tegra210 the bits are encoded as follows ...
1 = WORD_1
2 = WORDS_2
3 = WORDS_4
4 = WORDS_8
5 = WORDS_16
Whereas on Tegra186 and Tegra194 the bits are encoded as ...
0 = WORD_1
1 = WORDS_2
2 = WORDS_3
3 = WORDS_4
4 = WORDS_5
...
15 = WORDS_16
Add helper functions for generating the correct burst size.
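As a hedged illustration of those tables, helpers along these lines (the
function names are made up, not the driver's) could map a burst size in
words to the per-chip register value:

#include <stdio.h>

/* Tegra210: 1 -> WORD_1, 2 -> WORDS_2, 3 -> WORDS_4, 4 -> WORDS_8, 5 -> WORDS_16 */
static unsigned int tegra210_burst_value(unsigned int words)
{
	unsigned int v = 1;

	while ((1u << (v - 1)) < words && v < 5)
		v++;
	return v;
}

/* Tegra186/Tegra194: value = words - 1, i.e. 0 -> 1 word ... 15 -> 16 words */
static unsigned int tegra186_burst_value(unsigned int words)
{
	if (words > 16)
		words = 16;
	return words - 1;
}

int main(void)
{
	printf("210: 8 words -> %u, 186: 8 words -> %u\n",
	       tegra210_burst_value(8), tegra186_burst_value(8));
	return 0;
}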
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is a preparatory patch to add support for Tegra186 and Tegra194 chips.
The following changes are necessary to make the driver code generic.
* chip_data structure is enhanced to have chip specific details and
following are the additions to the structure
* Offset addresses for ADMA global and channel registers
* Offset values for Tx and Rx channel selection
* Maximum supported Tx and Rx channels
* Tx and Rx channel request mask
* ADMA channel register space size
* Make use of above chip_data to generalise the driver code
Support for Tegra186 and Tegra194 will be added in subsequent patches of
the series.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We switched this code from spin_lock_bh() to vanilla spin_lock() but
there was one stray spin_unlock_bh() that was overlooked. This
patch converts it to spin_unlock() as well.
Fixes: d8570d018f ("dmaengine: at_xdmac: move spin_lock_bh to spin_lock in tasklet")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix indentation and remove unneeded space after 'return' keyword. This
fixes checkpatch warning:
WARNING: Statements should start on a tabstop
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In double buffer mode, during residue calculation, the DMA can
automatically switch to the next transfer. Indeed, the CT bit that gives
the position in the double buffer may have been updated by the hardware
during the calculation.
In this case the SxNDTR register value cannot be trusted.
If a transition is detected, we consider that the DMA has switched to
the beginning of the next sg.
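A standalone sketch of that transition check, with plain variables standing
in for the hardware register reads (the structure and field names are
illustrative only):

#include <stdint.h>
#include <stdio.h>

struct fake_dma {
	unsigned int ct;	/* current target buffer bit (0 or 1)         */
	uint32_t ndtr;		/* remaining data items in the current buffer */
};

/* Read the residue for the current sg; if the hardware switched buffers
 * (CT changed) while NDTR was being read, NDTR cannot be trusted, so
 * report the full length of the next sg instead. */
static uint32_t read_residue(const struct fake_dma *hw, uint32_t sg_len)
{
	unsigned int ct_before = hw->ct;
	uint32_t ndtr = hw->ndtr;
	unsigned int ct_after = hw->ct;

	if (ct_before != ct_after)
		return sg_len;	/* transition: assume start of next sg */
	return ndtr;
}

int main(void)
{
	struct fake_dma hw = { .ct = 0, .ndtr = 128 };

	printf("residue: %u\n", (unsigned)read_residue(&hw, 4096));
	return 0;
}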
Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()") used the
unsigned variable irq to store the result and later check for negative
errors, so update the code to use a signed variable for this.
Fixes: f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_resource(pdev, IORESOURCE_IRQ) is not recommended for
requesting IRQ resources, as they may not be ready yet. Using
platform_get_irq() instead is preferred for getting an IRQ even if it was
not retrieved earlier.
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On the imx8mq B0 chip the AHB/SDMA clock ratio 2:1 can't be supported:
since the SDMA clock has to be increased to 250 MHz and AHB can't reach
500 MHz, use a 1:1 ratio instead.
To limit this change to the imx8mq for now, this patch also adds an
imx8mq-sdma compatible string.
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Acked-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a spelling mistake in a chan_dbg message, fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are two drivers that rely on the iDMA 64-bit driver name to match.
Instead of duplicating the string in both of them, dedicate a header file
and share it between the users.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch makes _stop() instruct the DMAC to immediately terminate
execution of a thread (DMAKILL), then clear the interrupt status, and
finally stop generating interrupts for DMA_SEV, to guarantee that the next
DMA start is clean. Otherwise, an interrupt may be left over from the
previous run and cause problems on the next start.
We can reproduce the problem as follows:
DMASEV: modifies the event-interrupt resource; if INTEN configures the
event to function as an interrupt, the DMAC sets irq<event_num> HIGH to
generate the interrupt. Writing INTCLR clears the interrupt.
DMA EXECUTING INSTRUCTS DMA TERMINATE
| |
| |
... _stop
| |
| spin_lock_irqsave
DMASEV |
| |
| mask INTEN
| |
| DMAKILL
| |
| spin_unlock_irqrestore
In the above case an interrupt is left pending, and if we unmask INTEN the
DMAC will set irq<event_num> HIGH and generate an interrupt.
To fix this, do as follows:
DMA EXECUTING INSTRUCTS DMA TERMINATE
| |
| |
... _stop
| |
| spin_lock_irqsave
DMASEV |
| |
| DMAKILL
| |
| clear INTCLR
| mask INTEN
| |
| spin_unlock_irqrestore
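A standalone, pseudocode-style sketch of the corrected ordering shown in the
second diagram (stand-in register variables and helper names, not the pl330
driver itself):

#include <stdint.h>
#include <stdio.h>

static uint32_t inten, intclr;		/* stand-ins for the DMAC registers */
static uint32_t irq_pending = 1;	/* event raised by a racing DMASEV  */

static void dmakill(void) { /* thread terminated */ }
static void clear_irq(int ev) { intclr |= 1u << ev; irq_pending = 0; }
static void mask_event(int ev) { inten &= ~(1u << ev); }

static void stop_channel(int ev)
{
	/* spin_lock_irqsave(...) in the real driver */
	dmakill();		/* terminate execution of the thread  */
	clear_irq(ev);		/* drop the interrupt left by DMASEV  */
	mask_event(ev);		/* only now is it safe to mask INTEN  */
	/* spin_unlock_irqrestore(...) */
}

int main(void)
{
	stop_channel(0);
	printf("pending after stop: %u\n", (unsigned)irq_pending);
	return 0;
}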
Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since device_prep_interleaved_dma() is already implemented, the
DMA_INTERLEAVE capability should be set.
Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In 2D transfers (for the AXI DMAC), the number of frames (numf) represents
Y_LENGTH, and the length of a frame is X_LENGTH. 2D transfers are useful
for video transfers where screen resolutions ( X * Y ) are typically
aligned for X, but not for Y.
There is no requirement for Y_LENGTH to be aligned to the bus-width (or
anything), and this is also true for AXI DMAC.
Checking the Y_LENGTH for alignment causes false errors when initiating DMA
transfers. This change fixes this by checking only that the Y_LENGTH is
non-zero.
Fixes: 0e3b67b348 ("dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller")
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Some synthesis time configuration parameters of the DMA controller can be
inferred from the hardware itself.
Use this information as it is more reliable than the information specified
in the devicetree, which might be outdated if the HDL project has changed.
Deprecate the devicetree properties that can be inferred from the hardware
itself.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The tx_status poll in the rcar_dmac driver reads the status register
which indicates which chunk is busy (DMACHCRB). Afterwards the point
inside the chunk is read from DMATCRB. It is possible that the chunk
has changed between the two reads. The result is a non-monotonous
increase of the residue. Fix this by introducing a 'safe read' logic.
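One possible shape of such a 'safe read', sketched standalone with canned
register values (the real driver reads DMACHCRB and DMATCRB; the names and
data here are made up):

#include <stdint.h>
#include <stdio.h>

static uint32_t chcrb_reads[] = { 2, 3, 3, 3 };	/* chunk index changes once */
static uint32_t tcrb = 100;
static unsigned int i;

static uint32_t read_chcrb(void) { return chcrb_reads[i++]; }
static uint32_t read_tcrb(void)  { return tcrb; }

/* Re-read the chunk index around TCRB until both reads agree. */
static void safe_read(uint32_t *chunk, uint32_t *remaining)
{
	uint32_t before, after;

	do {
		before = read_chcrb();
		*remaining = read_tcrb();
		after = read_chcrb();
	} while (before != after);

	*chunk = after;
}

int main(void)
{
	uint32_t chunk, remaining;

	safe_read(&chunk, &remaining);
	printf("chunk %u, remaining %u\n", (unsigned)chunk, (unsigned)remaining);
	return 0;
}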
Fixes: 73a47bd0da ("dmaengine: rcar-dmac: use TCRB instead of TCR for residue")
Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With a cyclic DMA, a residue of 0 is not an indication of a completed DMA.
In the case of cyclic DMA make sure that dma_set_residue() is called, so
that a residue of 0 is forwarded correctly to the caller.
Fixes: 3544d28788 ("dmaengine: rcar-dmac: use result of updated get_residue in tx_status")
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Signed-off-by: Yao Lihua <ylhuajnu@outlook.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The commit af19b7ce76 ("mmc: bcm2835: Avoid possible races on
data requests") introduces a possible circular locking dependency,
which is triggered by swapping to the sdhost interface.
So instead of reintroducing the race condition, we can also avoid this
situation by using GFP_NOWAIT for the allocation of the DMA buffer
descriptors.
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Fixes: af19b7ce76 ("mmc: bcm2835: Avoid possible races on data requests")
Link: http://lists.infradead.org/pipermail/linux-rpi-kernel/2019-March/008615.html
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The overflow error flag (ROI: Request Overflow Error) is only relevant
when the channel handles a peripheral-synchronized transfer, not in the
case of a memory-to-memory transfer where there is no hardware request
signal.
Remove the use of this interrupt source in such a case. The decision is
based on the first descriptor, which holds the configuration for the whole
linked-list transfer.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Complement the identification of errors with stopping the channel and
dumping the descriptor that led to the error case.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Even if this case shouldn't happen when the controller is properly
programmed, it's still better to avoid dumping a kernel Oops for it.
As the sequence may happen only for debugging purposes, log the error and
just finish the tasklet call.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
mmiowb() is now implied by spin_unlock() on architectures that require
it, so there is no reason to call it from driver code. This patch was
generated using coccinelle:
@mmiowb@
@@
- mmiowb();
and invoked as:
$ for d in drivers include/linux/qed sound; do \
spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done
NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
spin_unlock(). However, pairing each mmiowb() removal in this patch with
the corresponding call to spin_unlock() is not at all trivial, so there
is a small chance that this change may regress any drivers incorrectly
relying on mmiowb() to order MMIO writes between CPUs using lock-free
synchronisation. If you've ended up bisecting to this commit, you can
reintroduce the mmiowb() calls using wmb() instead, which should restore
the old behaviour on all architectures other than some esoteric ia64
systems.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This reverts commit 906b40b246 ("dmaengine: stm32-mdma: Add a check on
read_u32_array").
As stated by the bindings, "st,ahb-addr-masks" is optional.
The check inserted by that commit makes the property mandatory and prevents
MDMA from being probed when the property is not present.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXI DMAC driver is currently supported also on the Xilinx ZynqMP
architecture. This change allows this driver to be enabled & used on it as
well.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current way we find a waiting virtual channel for the next transfer,
at the time one physical channel becomes free, is not really fair.
More in detail, in case there is more than one channel waiting at a time,
by just going through the arrays of memcpy and slave channels and stopping
as soon as the state matches the waiting state, channels with high indexes
can be penalized.
Whenever the DMA engine is substantially overloaded so that we constantly
get several channels waiting, channels with the highest indexes might not
be served for a substantial time, which in the worst case might hang a
task that waits for a DMA transfer to complete.
This patch makes physical channel re-assignment more fair by storing the
time in jiffies when a channel is put into the waiting state. Whenever a
physical channel has to be re-assigned, this time is used to select the
channel that has been waiting for the longest time.
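A self-contained sketch of that selection policy (plain C with a made-up
vchan struct; 'waiting_since' plays the role of the stored jiffies value):

#include <stdint.h>
#include <stdio.h>

#define NUM_VCHAN 4

struct vchan {
	int waiting;			/* 1 if waiting for a physical channel */
	uint64_t waiting_since;		/* "jiffies" when it started waiting   */
};

/* Return the index of the waiting channel with the oldest timestamp. */
static int pick_longest_waiting(const struct vchan *vc, int n)
{
	int best = -1;

	for (int i = 0; i < n; i++) {
		if (!vc[i].waiting)
			continue;
		if (best < 0 || vc[i].waiting_since < vc[best].waiting_since)
			best = i;
	}
	return best;
}

int main(void)
{
	struct vchan vc[NUM_VCHAN] = {
		{ 0, 0 }, { 1, 1200 }, { 1, 1000 }, { 1, 1100 },
	};

	printf("serve channel %d first\n", pick_longest_waiting(vc, NUM_VCHAN));
	return 0;
}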
Signed-off-by: Jean-Nicolas Graux <jean-nicolas.graux@st.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Guion <nicolas.guion@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The axi-dmac driver currently rejects transfers with segments that are
larger than what the hardware can handle.
Re-work the driver so that these large segments are split into multiple
segments instead, where each segment is smaller than or equal to the
maximum segment size.
This allows the driver to handle transfers with segments of arbitrary size.
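A toy sketch of the splitting loop (the maximum segment size here is a
hypothetical constant; the real limit comes from the hardware):

#include <stdint.h>
#include <stdio.h>

#define MAX_SEGMENT 0x10000u	/* hypothetical hardware limit */

/* Split one large segment into hardware-sized chunks; returns the count. */
static unsigned int split_segment(uint64_t addr, uint64_t len)
{
	unsigned int nsegs = 0;

	while (len) {
		uint64_t seg = len > MAX_SEGMENT ? MAX_SEGMENT : len;

		printf("seg %u: addr 0x%llx len 0x%llx\n", nsegs,
		       (unsigned long long)addr, (unsigned long long)seg);
		addr += seg;
		len -= seg;
		nsegs++;
	}
	return nsegs;
}

int main(void)
{
	printf("%u segments\n", split_segment(0x1000, 0x28000));
	return 0;
}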
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Bogdan Togorean <bogdan.togorean@analog.com>
Signed-off-by: Alexandru Ardelean <alex.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds debugfs interface to show the relationship between
DMA threads (hardware resource for transferring data) and DMA
channel ID of DMA slave.
Typically, PL330 has more slaves than DMA threads, so sometimes PL330
cannot allocate DMA threads for all slaves even if the user specifies a DMA
channel ID in the devicetree. This interface is useful for checking whether
DMA threads have been allocated or not.
Below is an output sample:
$ sudo cat /sys/kernel/debug/ff1f0000.dmac
PL330 physical channels:
THREAD: CHANNEL:
-------- -----
0 8
1 9
2 11
3 12
4 14
5 15
6 10
7 --
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If the driver is active until late suspend, where runtime PM cannot run,
force suspend is essential in such a case to put the device into a low
power state. Thus pm_runtime_force_suspend and pm_runtime_force_resume are
used as system sleep callbacks during system-wide PM transitions.
Late system sleep callbacks are used to ensure, for instance, that the
sound core has suspended any on-going activity, including stopping the
ADMA if active, before we attempt to suspend the ADMA.
Suggested-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The adma driver is using the pm_clk_*() interface for managing clock
resources. With this it is observed that the clocks always remain ON. This
happens on Tegra devices which use the BPMP co-processor to manage clock
resources, where clocks are enabled during the prepare phase. This is
necessary because clock requests to the BPMP are always blocking. When the
pm_clk_*() interface is used on such Tegra devices, the clock prepare count
is not balanced until the remove call of the driver and hence the clocks
are always seen ON. Thus this patch replaces pm_clk_*() with the
devm_clk_*() framework.
Suggested-by: Mohan Kumar D <mkumard@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Intel IOMMU, when enabled, tries to find the domain of a device, assuming
it's a PCI one, during DMA operations such as mapping or unmapping. Since
we are splitting the actual PCI device into a couple of children via the
MFD framework (see drivers/mfd/intel-lpss.c for details), the DMA device
appears to be a platform one, and thus not the actual device that performs
DMA. In such a situation the IOMMU can't find or allocate a proper domain
for its operations. As a result, all DMA operations fail.
In order to fix this, supply the parent of the platform device
to the DMA engine framework and fix the filter functions accordingly.
We may rely on the fact that the parent is a real PCI device, because no
other configuration is present in the wild.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [for tty parts]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- dmatest updates for modularizing common struct and code
- remove SG support for VDMA xilinx IP and updates to driver
- Update to dw driver to support Intel iDMA controllers multi-block
support
- tegra updates for proper reporting of residue
- Add Snow Ridge ioatdma device id and support for IOATDMA v3.4
- struct_size() usage and useless LIST_HEAD cleanups in subsystem.
- qDMA controller driver for Layerscape SoCs
- stm32-dma PM Runtime support
- And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
bcm2835, qcom_hidma etc
* tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (81 commits)
dmaengine: imx-sdma: fix consistent dma test failures
dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
dmaengine: imx-sdma: add clock ratio 1:1 check
dmaengine: dmatest: move test data alloc & free into functions
dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()
dmaengine: dmatest: wrap src & dst data into a struct
dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
dmaengine: ioatdma: Add Snow Ridge ioatdma device id
dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dmaengine: mv_xor: Use correct device for DMA API
Documentation :dmaengine: clarify DMA desc. pointer after submission
Documentation: dmaengine: fix dmatest.rst warning
dmaengine: k3dma: Add support for dma-channel-mask
dmaengine: k3dma: Delete axi_config
dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
Documentation: bindings: dma: Add binding for dma-channel-mask
Documentation: bindings: k3dma: Extend the k3dma driver binding to support hisi-asp
...
Merge tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- Use match_string() instead of reimplementing it (Andy Shevchenko)
- Enable SERR# forwarding for all bridges (Bharat Kumar Gogada)
- Use Latency Tolerance Reporting if already enabled by platform (Bjorn
Helgaas)
- Save/restore LTR info for suspend/resume (Bjorn Helgaas)
- Fix DPC use of uninitialized data (Dongdong Liu)
- Probe bridge window attributes only once at enumeration-time to fix
device accesses during rescan (Bjorn Helgaas)
- Return BAR size (not "size -1 ") from pci_size() to simplify code (Du
Changbin)
- Use config header type (not class code) identify bridges more
reliably (Honghui Zhang)
- Work around Intel Denverton incorrect Trace Hub BAR size reporting
(Alexander Shishkin)
- Reorder pciehp cached state/hardware state updates to avoid missed
interrupts (Mika Westerberg)
- Turn ibmphp semaphores into completions or mutexes (Arnd Bergmann)
- Mark expected switch fall-through (Mathieu Malaterre)
- Use of_node_name_eq() for node name comparisons (Rob Herring)
- Add ACS and pciehp quirks for HXT SD4800 (Shunyong Yang)
- Consolidate Rohm Vendor ID definitions (Andy Shevchenko)
- Use u32 (not __u32) for things not exposed to userspace (Logan
Gunthorpe)
- Fix locking semantics of bus and slot reset interfaces (Alex
Williamson)
- Update PCIEPORTBUS Kconfig help text (Hou Zhiqiang)
- Allow portdrv to claim subtractive decode Ports so PCIe services will
work for them (Honghui Zhang)
- Report PCIe links that become degraded at run-time (Alexandru
Gagniuc)
- Blacklist Gigabyte X299 Root Port power management to fix Thunderbolt
hotplug (Mika Westerberg)
- Revert runtime PM suspend/resume callbacks that broke PME on network
cable plug (Mika Westerberg)
- Disable Data Link State Changed interrupts to prevent wakeup
immediately after suspend (Mika Westerberg)
- Extend altera to support Stratix 10 (Ley Foon Tan)
- Allow building altera driver on ARM64 (Ley Foon Tan)
- Replace Douglas with Tom Joseph as Cadence PCI host/endpoint
maintainer (Lorenzo Pieralisi)
- Add DT support for R-Car RZ/G2E (R8A774C0) (Fabrizio Castro)
- Add dra72x/dra74x/dra76x SoC compatible strings (Kishon Vijay Abraham I)
- Enable x2 mode support for dra72x/dra74x/dra76x SoC (Kishon Vijay
Abraham I)
- Configure dra7xx PHY to PCIe mode (Kishon Vijay Abraham I)
- Simplify dwc (remove unnecessary header includes, name variables
consistently, reduce inverted logic, etc) (Gustavo Pimentel)
- Add i.MX8MQ support (Andrey Smirnov)
- Add message to help debug dwc MSI-X mask bit errors (Gustavo
Pimentel)
- Work around imx7d PCIe PLL erratum (Trent Piepho)
- Don't assert qcom reset GPIO during probe (Bjorn Andersson)
- Skip dwc MSI init if MSIs have been disabled (Lucas Stach)
- Use memcpy_fromio()/memcpy_toio() instead of plain memcpy() in PCI
endpoint framework (Wen Yang)
- Add interface to discover supported endpoint features to replace a
bitfield that wasn't flexible enough (Kishon Vijay Abraham I)
- Implement the new supported-feature interface for designware-plat,
dra7xx, rockchip, cadence (Kishon Vijay Abraham I)
- Fix issues with 64-bit BAR in endpoints (Kishon Vijay Abraham I)
- Add layerscape endpoint mode support (Xiaowei Bao)
- Remove duplicate struct hv_vp_set in favor of struct hv_vpset (Maya
Nakamura)
- Rework hv_irq_unmask() to use cpumask_to_vpset() instead of
open-coded reimplementation (Maya Nakamura)
- Align Hyper-V struct retarget_msi_interrupt arguments (Maya Nakamura)
- Fix mediatek MMIO size computation to enable full size of available
MMIO space (Honghui Zhang)
- Fix mediatek DMA window size computation to allow endpoint DMA access
to full DRAM address range (Honghui Zhang)
- Fix mvebu prefetchable BAR regression caused by common bridge
emulation that assumed all bridges had prefetchable windows (Thomas
Petazzoni)
- Make advk_pci_bridge_emul_ops static (Wei Yongjun)
- Configure MPS settings for VMD root ports (Jon Derrick)
* tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (92 commits)
PCI: Update PCIEPORTBUS Kconfig help text
PCI: Fix "try" semantics of bus and slot reset
PCI/LINK: Report degraded links via link bandwidth notification
dt-bindings: PCI: altera: Add altr,pcie-root-port-2.0
PCI: altera: Enable driver on ARM64
PCI: altera: Add Stratix 10 PCIe support
PCI/PME: Fix possible use-after-free on remove
PCI: aardvark: Make symbol 'advk_pci_bridge_emul_ops' static
PCI: dwc: skip MSI init if MSIs have been explicitly disabled
PCI: hv: Refactor hv_irq_unmask() to use cpumask_to_vpset()
PCI: hv: Replace hv_vp_set with hv_vpset
PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt
PCI: mediatek: Enlarge PCIe2AHB window size to support 4GB DRAM
PCI: mediatek: Fix memory mapped IO range size computation
PCI: dwc: Remove superfluous shifting in definitions
PCI: dwc: Make use of GENMASK/FIELD_PREP
PCI: dwc: Make use of BIT() in constant definitions
PCI: dwc: Share code for dw_pcie_rd/wr_other_conf()
PCI: dwc: Make use of IS_ALIGNED()
PCI: imx6: Add code to request/control "pcie_aux" clock for i.MX8MQ
...
Patch series "Replace all open encodings for NUMA_NO_NODE", v3.
All these places for replacement were found by running the following
grep patterns on the entire kernel code. Please let me know if this
might have missed some instances. This might also have replaced some
false positives. I would appreciate suggestions, inputs and review.
1. git grep "nid == -1"
2. git grep "node == -1"
3. git grep "nid = -1"
4. git grep "node = -1"
This patch (of 2):
At present there are multiple places where an invalid node number is
encoded as -1. Even though this is implicitly understood, it is always
better to have a macro for it. Replace these open encodings of an invalid
node number with the global macro NUMA_NO_NODE. This helps remove
NUMA-related assumptions like 'invalid node' from various places,
redirecting them to a common definition.
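A minimal illustration of the substitution (hypothetical snippet, not taken verbatim from any of the converted files):
#include <linux/numa.h>

/* before: open-coded invalid node */
if (nid == -1)
        nid = first_online_node;

/* after: use the common definition */
if (nid == NUMA_NO_NODE)
        nid = first_online_node;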
Link: http://lkml.kernel.org/r/1545127933-10711-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> [ixgbe]
Acked-by: Jens Axboe <axboe@kernel.dk> [mtip32xx]
Acked-by: Vinod Koul <vkoul@kernel.org> [dmaengine.c]
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Acked-by: Doug Ledford <dledford@redhat.com> [drivers/infiniband]
Cc: Joseph Qi <jiangqi903@gmail.com>
Cc: Hans Verkuil <hverkuil@xs4all.nl>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Without the copy being aligned, sdma1 fails ~10% of the time.
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On i.mx8mq, there are two sdma instances, and the common dma framework
will get a channel dynamically from any available sdma instance whether
it's the first sdma device or the second sdma device. Some IPs like
SAI only work with sdma2 not sdma1. To make sure the sdma channel is from
the correct sdma device, use the node pointer to match.
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Daniel Baluta <daniel.baluta@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On the i.mx8 mscale B0 chip, an AHB/SDMA clock ratio of 2:1 can't be
supported, since the SDMA clock has to be increased to 250MHz while AHB
can't reach 500MHz, so use a 1:1 ratio instead.
Based on NXP commit MLK-16841-1 by Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch starts to take advantage of the `dmatest_data` struct by moving
the common allocation & free-ing bits into functions.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is just a cosmetic change, since this variable gets used quite a bit
inside the dmatest_func() routine.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This change wraps the data for the source & destination buffers into a
`struct dmatest_data`. The rename patterns are:
* src_cnt -> src->cnt
* dst_cnt -> dst->cnt
* src_off -> src->off
* dst_off -> dst->off
* thread->srcs -> src->aligned
* thread->usrcs -> src->raw
* thread->dsts -> dst->aligned
* thread->udsts -> dst->raw
The intent is to make a function that moves duplicate parts of the code
into common alloc & free functions, which will unclutter the
`dmatest_func()` function.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
IOATDMA 3.4 supports the PCIe LTR mechanism through non-standard
registers. LTR needs to be set up in order not to suffer a performance
impact and to provide proper power management. The channel is set to active
when it is allocated, and to passive when it's freed.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Adding support for new feature on ioatdma 3.4 hardware that provides
descriptor pre-fetching in order to reduce small DMA latencies.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add Snowridge Xeon-D ioatdma PCI device id. Also applies for Icelake
SP Xeon. This introduces ioatdma v3.4 platform. Also bumping driver version
to 5.0 since we are adding additional code for 3.4 support.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We will describe the slave id in the DMA cell specifier instead of the DMA
channel id, thus we should save the slave id from the DMA engine translation
function and remove the channel id validation.
Meanwhile we no longer need to set a default slave id in
sprd_dma_alloc_chan_resources(), so remove it.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Using dma_dev->dev for mappings before it's assigned with the correct
device is unlikely to work as expected, and with future dma-direct
changes, passing a NULL device may end up crashing entirely. I don't
know enough about this hardware or the mv_xor_prep_dma_interrupt()
operation to implement the appropriate error-handling logic that would
have revealed those dma_map_single() calls failing on arm64 for as long
as the driver has been enabled there, but moving the assignment earlier
will at least make the current code operate as intended.
Fixes: 22843545b2 ("dma: mv_xor: Add support for DMA_INTERRUPT")
Reported-by: John David Anglin <dave.anglin@bell.net>
Tested-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
dmaengine fixes for v5.0-rc6
- Fix in at_xdmac for wrongful channel state
- Fix for imx driver for wrong callback invocation
- Fix to bcm driver for interrupt race & transaction abort.
- Fix in dmatest to abort in mapping error
Merge tag 'dmaengine-fix-5.0-rc6' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
- Fix in at_xdmac for wrongful channel state
- Fix for imx driver for wrong callback invocation
- Fix to bcm driver for interrupt race & transaction abort.
- Fix in dmatest to abort in mapping error
* tag 'dmaengine-fix-5.0-rc6' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: dmatest: Abort test in case of mapping error
dmaengine: bcm2835: Fix abort of transactions
dmaengine: bcm2835: Fix interrupt race on RT
dmaengine: imx-dma: fix wrong callback invoke
dmaengine: at_xdmac: Fix wrongfull report of a channel as in use
Add dma-channel-mask as a property for k3dma; it defines the
available DMA channels that a non-secure mode driver can use.
One sample usage of this is in the Hi3660 SoC. DMA channel 0 is
reserved for lpm3, a coprocessor for power management, so as a
result any request in the kernel (which runs on the main processor
and in non-secure mode) should start from at least channel 1.
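As a rough sketch of how such a mask could gate channel registration (variable names hypothetical, not the actual k3dma code):
u32 chan_mask = ~0;     /* default: every channel usable */

of_property_read_u32(np, "dma-channel-mask", &chan_mask);

for (i = 0; i < dma_channels; i++) {
        if (!(chan_mask & BIT(i)))
                continue;       /* e.g. channel 0 reserved for lpm3 */
        /* register channel i with the dmaengine core */
}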
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Tanglei Han <hantanglei@huawei.com>
Cc: Zhuangluan Su <suzhuangluan@hisilicon.com>
Cc: Ryan Grachek <ryan@edited.us>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: Guodong Xu <guodong.xu@linaro.org>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Li Yu <liyu65@hisilicon.com>
[jstultz: Reworked to use a channel mask]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Axi_config controls whether DMA resources can be accessed in non-secure
mode, such as by the Linux kernel. The register should be set in the
bootloader stage and depends on the device.
Thus, this patch removes axi_config from k3dma driver.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Tanglei Han <hantanglei@huawei.com>
Cc: Zhuangluan Su <suzhuangluan@hisilicon.com>
Cc: Ryan Grachek <ryan@edited.us>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: dmaengine@vger.kernel.org
Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Li Yu <liyu65@hisilicon.com>
Signed-off-by: Guodong Xu <guodong.xu@linaro.org>
[jstultz: Minor tweaks to commit message]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On the hi3660 hardware there are two (at least) DMA controllers,
the DMA-P (Peripheral DMA) and the DMA-A (Audio DMA). The
two blocks are similar, but have some slight differences. This
resulted in the vendor implementing two separate drivers, which,
after review, have been condensed so that the existing k3dma
driver can be re-used.
Thus, this patch adds support for the new "hisi-pcm-asp-dma-1.0"
compatible string in the binding.
One difference with the DMA-A controller is that it does not
need to initialize a clock, so we skip this by adding and using
soc data flags.
With the above, this driver supports both the k3 and hisi_asp DMA
hardware.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Zhuangluan Su <suzhuangluan@hisilicon.com>
Cc: Ryan Grachek <ryan@edited.us>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: dmaengine@vger.kernel.org
Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Youlin Wang <wwx575822@notesmail.huawei.com>
Signed-off-by: Tanglei Han <hantanglei@huawei.com>
[jstultz: Reworked to use of_match_data, commit msg improvements]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Otherwise 64-bit PPC builds fail with undefined references
to these accessors.
Cc: Peng Ma <peng.ma@nxp.com>
Cc: Wen He <wen.he_1@nxp.com>
Fixes: 68997fff94afa ("dmaengine: fsldma: Adding macro FSL_DMA_IN/OUT implement for ARM platform")
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Remove an outdated comment claiming the driver only supports cyclic
transactions. The driver has been supporting other transaction types
for more than two years.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The BCM2835 DMA driver deletes a channel from a list upon termination
without having added it to a list first. Moreover that operation is
protected by a spinlock which isn't taken anywhere else. These appear
to be remnants of an older version of the driver which accidentally
got mainlined. Remove the dead code.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Per section 4.2.1.1 of the BCM2835 ARM Peripherals spec, control blocks
"must start at a 256 bit aligned address":
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
This rule is currently satisfied only by accident because struct
bcm2835_dma_cb has a size of 256 bits and the DMA pool API happens to
allocate blocks consecutively. It seems safer to be explicit and tell
the DMA pool allocator about the required alignment.
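A minimal sketch of passing that alignment to the DMA pool allocator (pool name and device assumed for illustration):
struct dma_pool *pool;

/* control blocks must start on a 256-bit (32-byte) boundary */
pool = dma_pool_create("bcm2835-dma-cb", dev,
                       sizeof(struct bcm2835_dma_cb),
                       32 /* align, bytes */, 0 /* no boundary */);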
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
bcm2835_dma_abort() returns an int but bcm2835_dma_terminate_all() (its
sole caller) does not evaluate the return value. Change the return type
to void.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are multiple issues with bcm2835_dma_abort() (which is called on
termination of a transaction):
* The algorithm to abort the transaction first pauses the channel by
clearing the ACTIVE flag in the CS register, then waits for the PAUSED
flag to clear. Page 49 of the spec documents the latter as follows:
"Indicates if the DMA is currently paused and not transferring data.
This will occur if the active bit has been cleared [...]"
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
So the function is entering an infinite loop because it is waiting for
PAUSED to clear which is always set due to the function having cleared
the ACTIVE flag. The only thing that's saving it from itself is the
upper bound of 10000 loop iterations.
The code comment says that the intention is to "wait for any current
AXI transfer to complete", so the author probably wanted to check the
WAITING_FOR_OUTSTANDING_WRITES flag instead. Amend the function
accordingly.
* The CS register is only read at the beginning of the function. It
needs to be read again after pausing the channel and before checking
for outstanding writes, otherwise writes which were issued between
the register read at the beginning of the function and pausing the
channel may not be waited for.
* The function seeks to abort the transfer by writing 0 to the NEXTCONBK
register and setting the ABORT and ACTIVE flags. Thereby, the 0 in
NEXTCONBK is sought to be loaded into the CONBLK_AD register. However
experimentation has shown this approach to not work: The CONBLK_AD
register remains the same as before and the CS register contains
0x00000030 (PAUSED | DREQ_STOPS_DMA). In other words, the control
block is not aborted but merely paused and it will be resumed once the
next DMA transaction is started. That is absolutely not the desired
behavior.
A simpler approach is to set the channel's RESET flag instead. This
reliably zeroes the NEXTCONBK as well as the CS register. It requires
less code and only a single MMIO write. This is also what popular
user space DMA drivers do, e.g.:
https://github.com/metachris/RPIO/blob/master/source/c_pwm/pwm.c
Note that the spec is contradictory whether the NEXTCONBK register
is writeable at all. On the one hand, page 41 claims:
"The value loaded into the NEXTCONBK register can be overwritten so
that the linked list of Control Block data structures can be
dynamically altered. However it is only safe to do this when the DMA
is paused."
On the other hand, page 40 specifies:
"Only three registers in each channel's register set are directly
writeable (CS, CONBLK_AD and DEBUG). The other registers (TI,
SOURCE_AD, DEST_AD, TXFR_LEN, STRIDE & NEXTCONBK), are automatically
loaded from a Control Block data structure held in external memory."
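A hedged sketch of the reset-based abort described above (register and flag names taken from the BCM2835 spec; a simplification, not the exact driver code):
/* pause the channel, then reset it: RESET zeroes NEXTCONBK and CS */
u32 cs = readl(chan_base + BCM2835_DMA_CS);

writel(cs & ~BCM2835_DMA_ACTIVE, chan_base + BCM2835_DMA_CS);
writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS);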
Fixes: 96286b5766 ("dmaengine: Add support for BCM2835")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.14+
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Cc: Clive Messer <clive.m.messer@gmail.com>
Cc: Matthias Reichl <hias@horus.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If IRQ handlers are threaded (either because CONFIG_PREEMPT_RT_BASE is
enabled or "threadirqs" was passed on the command line) and if system
load is sufficiently high that wakeup latency of IRQ threads degrades,
SPI DMA transactions on the BCM2835 occasionally break like this:
ks8851 spi0.0: SPI transfer timed out
bcm2835-dma 3f007000.dma: DMA transfer could not be terminated
ks8851 spi0.0 eth2: ks8851_rdfifo: spi_sync() failed
The root cause is an assumption made by the DMA driver which is
documented in a code comment in bcm2835_dma_terminate_all():
/*
* Stop DMA activity: we assume the callback will not be called
* after bcm_dma_abort() returns (even if it does, it will see
* c->desc is NULL and exit.)
*/
That assumption falls apart if the IRQ handler bcm2835_dma_callback() is
threaded: A client may terminate a descriptor and issue a new one
before the IRQ handler had a chance to run. In fact the IRQ handler may
miss an *arbitrary* number of descriptors. The result is the following
race condition:
1. A descriptor finishes, its interrupt is deferred to the IRQ thread.
2. A client calls dma_terminate_async() which sets channel->desc = NULL.
3. The client issues a new descriptor. Because channel->desc is NULL,
bcm2835_dma_issue_pending() immediately starts the descriptor.
4. Finally the IRQ thread runs and writes BCM2835_DMA_INT to the CS
register to acknowledge the interrupt. This clears the ACTIVE flag,
so the newly issued descriptor is paused in the middle of the
transaction. Because channel->desc is not NULL, the IRQ thread
finalizes the descriptor and tries to start the next one.
I see two possible solutions: The first is to call synchronize_irq()
in bcm2835_dma_issue_pending() to wait until the IRQ thread has
finished before issuing a new descriptor. The downside of this approach
is unnecessary latency if clients desire rapidly terminating and
re-issuing descriptors and don't have any use for an IRQ callback.
(The SPI TX DMA channel is a case in point.)
A better alternative is to make the IRQ thread recognize that it has
missed descriptors and avoid finalizing the newly issued descriptor.
So first of all, set the ACTIVE flag when acknowledging the interrupt.
This keeps a newly issued descriptor running.
If the descriptor was finished, the channel remains idle despite the
ACTIVE flag being set. However the ACTIVE flag can then no longer be
used to check whether the channel is idle, so instead check whether
the register containing the current control block address is zero
and finalize the current descriptor only if so.
That way, there is no impact on latency and throughput if the client
doesn't care for the interrupt: Only minimal additional overhead is
introduced for non-cyclic descriptors as one further MMIO read is
necessary per interrupt to check for idleness of the channel. Cyclic
descriptors are sped up slightly by removing one MMIO write per
interrupt.
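A simplified sketch of the resulting interrupt-handler logic (names assumed from the driver; locking and error handling omitted):
/* ack the interrupt but keep ACTIVE set so a freshly issued
 * descriptor keeps running */
writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE, chan_base + BCM2835_DMA_CS);

/* the channel is idle only if no control block is loaded; only then
 * finalize the current descriptor */
if (c->desc && !readl(chan_base + BCM2835_DMA_ADDR))
        vchan_cookie_complete(&c->desc->vd);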
Fixes: 96286b5766 ("dmaengine: Add support for BCM2835")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.14+
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Cc: Clive Messer <clive.m.messer@gmail.com>
Cc: Matthias Reichl <hias@horus.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Once the "ld_queue" list is not empty, next descriptor will migrate
into "ld_active" list. The "desc" variable will be overwritten
during that transition. And later the dmaengine_desc_get_callback_invoke()
will use it as an argument. As result we invoke wrong callback.
That behaviour was in place since:
commit fcaaba6c71 ("dmaengine: imx-dma: fix callback path in tasklet").
But after commit 4cd13c21b2 ("softirq: Let ksoftirqd do its job")
things got worse, since possible delay between tasklet_schedule()
from DMA irq handler and actual tasklet function execution got bigger.
And that gave more time for new DMA request to be submitted and
to be put into "ld_queue" list.
It has been noticed that DMA issue is causing problems for "mxc-mmc"
driver. While stressing the system with heavy network traffic and
writing/reading to/from sd card simultaneously the timeout may happen:
10013000.sdhci: mxcmci_watchdog: read time out (status = 0x30004900)
That often lead to file system corruption.
Signed-off-by: Leonid Iziumtsev <leonid.iziumtsev@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Cc: stable@vger.kernel.org
This mapping needs to be created in order for slave dma transfers
to work on systems with SMMU. The implementation mostly mimics the
one in pl330 dma driver, authored by Robin Murphy.
Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
atchan->status variable is used to store two different information:
- pass channel interrupts status from interrupt handler to tasklet;
- channel information like whether it is cyclic or paused;
This causes a bug when device_terminate_all() is called
(AT_XDMAC_CHAN_IS_CYCLIC cleared on atchan->status) and then a late End
of Block interrupt arrives (AT_XDMAC_CIS_BIS), which sets bit 0 of
atchan->status. Bit 0 is also used for AT_XDMAC_CHAN_IS_CYCLIC, so when
a new descriptor for a cyclic transfer is created, the driver reports
the channel as in use:
if (test_and_set_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status)) {
dev_err(chan2dev(chan), "channel currently used\n");
return NULL;
}
This patch fixes the bug by adding a separate struct member to keep
the interrupt status separate from the channel status bits.
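A minimal sketch of the separation (struct trimmed down to the two fields of interest):
struct at_xdmac_chan {
        unsigned long   status;         /* AT_XDMAC_CHAN_IS_* state bits */
        u32             irq_status;     /* raw CIS bits saved by the IRQ handler */
};

/* the interrupt handler stores the hardware status separately */
atchan->irq_status = at_xdmac_chan_read(atchan, AT_XDMAC_CIS);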
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Move the Rohm Vendor ID to pci_ids.h instead of defining it in several
drivers.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
This commit fixes the issue that the USB-DMAC hangs silently after the
system resumes on R-Car Gen3, hence renesas_usbhs will not work correctly
when using the USB-DMAC for bulk transfer, e.g. ethernet or serial
gadgets.
The issue can be reproduced by these steps:
1. modprobe g_serial
2. Suspend and resume system.
3. connect a usb cable to host side
4. Transfer data from Host to Target
5. cat /dev/ttyGS0 (Target side)
6. echo "test" > /dev/ttyACM0 (Host side)
The 'cat' will not output anything. However, the system can still work
normally.
Currently, USB-DMAC driver does not have system sleep callbacks hence
this driver relies on the PM core to force runtime suspend/resume to
suspend and reinitialize USB-DMAC during system resume. After
the commit 17218e0092 ("PM / genpd: Stop/start devices without
pm_runtime_force_suspend/resume()"), PM core will not force
runtime suspend/resume anymore so this issue happens.
To solve this, make system suspend/resume explicit by using
pm_runtime_force_{suspend,resume}() as the system sleep callbacks.
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() is used to make sure the USB-DMAC is
suspended after and initialized before renesas_usbhs.
Signed-off-by: Phuong Nguyen <phuong.nguyen.xw@renesas.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
[shimoda: revise the commit log and add Cc tag]
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Pass ->dev to the dma_alloc_coherent() API. We need this
because dma_alloc_coherent() makes use of the dev parameter
and receiving NULL will result in a crash.
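For illustration, the point is simply which device pointer reaches the allocator (a hedged sketch, not the exact imx-sdma code):
void *buf;
dma_addr_t phys;

/* dev must be the real DMA device; passing NULL can crash with dma-direct */
buf = dma_alloc_coherent(dev, PAGE_SIZE, &phys, GFP_KERNEL);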
Signed-off-by: Andy Duan <fugang.duan@nxp.com>
Signed-off-by: Daniel Baluta <daniel.baluta@nxp.com>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The warning got introduced by commit 930507c183 ("arm64: add basic
Kconfig symbols for i.MX8"), since it enabled the driver for arm64. The
warning hasn't been seen before because size_t was 'unsigned int' when
built on arm32.
../drivers/dma/imx-dma.c: In function ‘imxdma_sg_next’:
../include/linux/kernel.h:846:29: warning: comparison of distinct pointer types lacks a cast
(!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
^~
../include/linux/kernel.h:860:4: note: in expansion of macro ‘__typecheck’
(__typecheck(x, y) && __no_side_effects(x, y))
^~~~~~~~~~~
../include/linux/kernel.h:870:24: note: in expansion of macro ‘__safe_cmp’
__builtin_choose_expr(__safe_cmp(x, y), \
^~~~~~~~~~
../include/linux/kernel.h:879:19: note: in expansion of macro ‘__careful_cmp’
#define min(x, y) __careful_cmp(x, y, <)
^~~~~~~~~~~~~
../drivers/dma/imx-dma.c:288:8: note: in expansion of macro ‘min’
now = min(d->len, sg_dma_len(sg));
^~~
Rework the code to use min_t with size_t, which returns the minimum of
two values using the specified type.
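The resulting expression, as described above, looks roughly like:
now = min_t(size_t, d->len, sg_dma_len(sg));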
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/dma/xilinx/xilinx_dma.c: In function 'xilinx_vdma_start_transfer':
drivers/dma/xilinx/xilinx_dma.c:1104:33: warning:
variable 'tail_segment' set but not used [-Wunused-but-set-variable]
It has not been used since commit b8349172b4 ("dmaengine: xilinx_dma: Drop SG support
for VDMA IP").
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When dma_cookie_complete() is called in hidma_process_completed(),
dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status(). Then,
hidma_txn_is_success() will be called to use channel cookie
mchan->last_success to do additional DMA status check. Current code
assigns mchan->last_success after dma_cookie_complete(). This causes
a race condition where dma_cookie_status() returns DMA_COMPLETE before
mchan->last_success is assigned correctly. The race will cause
hidma_tx_status() to return DMA_ERROR even though the transaction is
actually a success. Moreover, in the async_tx case, it will cause a
timeout panic in async_tx_quiesce().
Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for
transaction
...
Call trace:
[<ffff000008089994>] dump_backtrace+0x0/0x1f4
[<ffff000008089bac>] show_stack+0x24/0x2c
[<ffff00000891e198>] dump_stack+0x84/0xa8
[<ffff0000080da544>] panic+0x12c/0x29c
[<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx]
[<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx]
[<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456]
[<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456]
[<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456]
[<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456]
[<ffff000008736538>] md_thread+0x108/0x168
[<ffff0000080fb1cc>] kthread+0x10c/0x138
[<ffff000008084d34>] ret_from_fork+0x10/0x18
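A simplified sketch of the reordering in the completion path (names assumed from the driver):
/* record the last successful cookie *before* marking it complete, so a
 * concurrent hidma_tx_status() never sees DMA_COMPLETE with a stale
 * last_success */
if (llstat == DMA_COMPLETE)
        mchan->last_success = last_cookie;
dma_cookie_complete(&mdesc->desc);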
Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In async_tx_test_ack(), it uses flags in struct dma_async_tx_descriptor
to check the ACK status. As hidma reuses the descriptor in a free list
when hidma_prep_dma_*(memcpy/memset) is called, the flag will stay ACKed
if the descriptor has been used before. This will cause a BUG_ON in
async_tx_quiesce().
kernel BUG at crypto/async_tx/async_tx.c:282!
Internal error: Oops - BUG: 0 1 SMP
...
task: ffff8017dd3ec000 task.stack: ffff8017dd3e8000
PC is at async_tx_quiesce+0x54/0x78 [async_tx]
LR is at async_trigger_callback+0x98/0x110 [async_tx]
This patch initializes flags in dma_async_tx_descriptor by the flags
passed from the caller when hidma_prep_dma_*(memcpy/memset) is called.
Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Field name ststus_hi should be spelled as status_hi.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The context is loaded only once before the channel starts running, but
currently sdma_config_channel() and dma_prep_* duplicate the
sdma_load_context() call. Refine this so the context is loaded only once
before the channel runs and reloaded after the channel is terminated.
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We already need to zero out memory for dma_alloc_coherent(), as such
using dma_zalloc_coherent() is superfluous. Phase it out.
This change was generated with the following Coccinelle SmPL patch:
@ replace_dma_zalloc_coherent @
expression dev, size, data, handle, flags;
@@
-dma_zalloc_coherent(dev, size, handle, flags)
+dma_alloc_coherent(dev, size, handle, flags)
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
[hch: re-ran the script on the latest tree]
Signed-off-by: Christoph Hellwig <hch@lst.de>
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);
This issue was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch updates license to use SPDX-License-Identifier
instead of verbose license text.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Intel iDMA 32-bit doesn't have a concept of bus masters and thus
there is no need to set up any kind of masters in the CTL_LO register.
Moreover, the burst size for memory-to-memory transfers is not what it
says; we need a corrected list of possible sizes. Note that the size of
8 items, each of them up to 4 bytes, is chosen because of the maximum of
1/2 FIFO, which is 64 bytes on Intel Merrifield.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For Intel iDMA 32-bit the channel can be drained on suspend.
We need to reset the bit on resume to restore the status quo.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Here is a rather big refactoring that should have been done
in the first place, when Intel iDMA 32-bit support appeared.
It splits the operations that differ between the Synopsys DesignWare
and Intel iDMA 32-bit controllers.
No functional change intended.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
All known devices that use DT for configuration support
memory-to-memory transfers, so enable it by default.
The remaining two cases, i.e. Intel Quark and PPC460ex, instantiate the DMA
driver and use its channels exclusively for hardware, which means there is
no channel available for any other purpose anyway.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit a9ddb575d6
("dmaengine: dw_dmac: Enhance device tree support")
introduced the is_private property without a clear understanding of what it means.
First of all, documentation defines DMA_PRIVATE capability as
Documentation/crypto/async-tx-api.txt:
The DMA_PRIVATE capability flag is used to tag dma devices that should not be
used by the general-purpose allocator. It can be set at initialization time
if it is known that a channel will always be private. Alternatively,
it is set when dma_request_channel() finds an unused "public" channel.
A couple caveats to note when implementing a driver and consumer:
1/ Once a channel has been privately allocated it will no longer be
considered by the general-purpose allocator even after a call to
dma_release_channel().
2/ Since capabilities are specified at the device level a dma_device with
multiple channels will either have all channels public, or all channels
private.
Documentation/driver-api/dmaengine/provider.rst:
- DMA_PRIVATE
The devices only supports slave transfers, and as such isn't available
for async transfers.
The capability was introduced by commit 59b5ec2144
("dmaengine: introduce dma_request_channel and private channels")
and some of that code has not changed since.
Taking into consideration above and the fact that on all known platforms
Synopsys DesignWare DMA engine is attached to serve slave transfers,
the DMA_PRIVATE capability must be enabled for this device unconditionally.
Otherwise, as rightfully noticed in drivers/dma/at_xdmac.c:
/*
* Without DMA_PRIVATE the driver is not able to allocate more than
* one channel, second allocation fails in private_candidate.
*/
because of the caveats mentioned in the documentation excerpts above.
So, remove the conditional around DMA_PRIVATE, followed by removal of the leftovers.
If someone wonders, DMA_PRIVATE can be left unset if and only if all the
channels of the DMA controller are supposed to serve memory-to-memory-like
operations. For example, EP93xx has two controllers, one of which can only
perform memory-to-memory transfers.
Note, this change doesn't affect dmatest to be able to test such controllers.
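In code terms the change boils down to setting the capability unconditionally at probe time (a sketch under the assumptions above, with names taken from the dw_dmac driver):
/* dw_dmac serves slave transfers on all known platforms, so always mark
 * the controller private instead of gating it on a property */
dma_cap_set(DMA_PRIVATE, dw->dma.cap_mask);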
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (maintainer:SERIAL DRIVERS)
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
xilinx_vdma_start_transfer() is used only for the VDMA IP, yet it contains
code conditional on the has_sg variable. has_sg is set only when the HW
supports SG mode, which is never true for the VDMA IP.
This patch drops the never-taken branches.
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXIDMA and CDMA HW can be either the direct-access or the
scatter-gather version. These are SW incompatible.
The driver can handle both versions: a DT property was used to
tell the driver whether to assume the HW is in scatter-gather mode.
This patch makes the driver autodetect this information; the DT
property is not required anymore.
No changes for VDMA.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXI-DMA IP supports a configurable (c_sg_length_width) buffer length
register width, hence read the buffer length (xlnx,sg-length-width) DT
property and ensure that the driver doesn't program a buffer length
exceeding the supported limit. For VDMA and CDMA there is no change.
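A hedged sketch of reading the property and deriving the maximum programmable length (default value and field names hypothetical):
u32 len_width = 23;     /* hypothetical default when the property is absent */

of_property_read_u32(node, "xlnx,sg-length-width", &len_width);

/* never program a length the buffer length register cannot hold */
chan->max_buffer_len = GENMASK(len_width - 1, 0);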
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com> [rebase, reword]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Whenever a single or cyclic transaction is prepared, the driver
may eventually split it over several SG descriptors in order
to deal with the HW maximum transfer length.
This could end up with DMA operations starting from a misaligned
address, which seems fatal for the HW if the DRE (Data Realignment
Engine) is not enabled.
This patch adjusts the transfer size in order to make sure
all operations start from an aligned address.
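A hedged sketch of the idea (helper name hypothetical; the real driver expresses alignment differently):
/* shrink the chunk so the next chunk starts on an aligned address */
static size_t copy_size_aligned(dma_addr_t addr, size_t remaining,
                                size_t max_len, size_t align)
{
        size_t copy = min_t(size_t, remaining, max_len);

        if (copy < remaining)
                copy -= (addr + copy) & (align - 1);

        return copy;
}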
Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch removes a bit of duplicated code by introducing a new
function that implements calculations for DMA copy size, and
prepares for changes to the copy size calculation that will
happen in following patches.
Suggested-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add some trace-points to the driver to allow for debugging via the
trace pipe.
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The name field is used for "apbdma.%d" which is rarely going to be
more than 10 bytes, so reduce the size from 30 to 12. This is only
being used by the interrupt registration, so is not critical to the
operation of the driver either.
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The use of Dma is annoying, since it is an acronym so should be all
upper case. Fix this throughout the driver.
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The buffer byte request length and counter are declared as signed integers
but the values should never be below zero, so make these unsigned integers
instead.
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The dma_desc->bytes_transferred counter tracks the number of bytes
moved by the DMA channel. This is then used to calculate the information
passed back in the tegra_dma_tx_status callback, which is usually
fine.
When the DMA channel is configured as continuous, the bytes_transferred
counter will increase over time and eventually overflow to become negative,
so the residue count will become invalid and the ALSA sound-dma code will
report invalid hardware pointer values to the application. This results in
some users becoming confused about the playout position and putting audio
data in the wrong place.
To fix this issue, always ensure the bytes_transferred field is modulo the
size of the request. We only do this for the case of the cyclic transfer
done ISR as anyone attempting to move 2GiB of DMA data in one transfer
is unlikely.
Note, we don't fix the issue that we should /never/ transfer a negative
number of bytes so we could make those fields unsigned.
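A minimal sketch of the wrap-around handling in the cyclic-transfer-done ISR (field names assumed from the description):
/* keep the running counter bounded by the request size so the derived
 * residue can never go negative */
dma_desc->bytes_transferred += sgreq->req_len;
dma_desc->bytes_transferred %= dma_desc->bytes_requested;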
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
To avoid false FIFO error detection, check that the FIFO Error interrupt is
enabled before raising any errors.
This prevents spurious FIFO errors where there shouldn't be any.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In stm32_mdma_probe, after reading the property "st,ahb-addr-masks", the
second call is not checked for failure. This time-of-check to time-of-use
error on "count" is fixed and sent upstream.
Signed-off-by: Aditya Pakki <pakki001@umn.edu>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>