When a partial transfer is received, the driver should not submit any more
segments to the hardware, as they will be ignored/unused until a new
transfer start operation is done.
This change implements this by adding a new flag on the AXI DMAC
descriptor. This flag is set to true if there was a partial transfer in
a previously completed segment. When that flag is true, the TLAST flag is
added to the submitted segment, signaling the controller to stop
receiving more segments.
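The logic boils down to something like the following sketch; the structure
and flag names are illustrative only, not the driver's actual definitions:

  #include <stdbool.h>

  /* Illustrative descriptor state, not the real axi_dmac structures. */
  struct sketch_axi_dmac_desc {
          bool have_partial_xfer;   /* a completed segment was partial */
          unsigned int num_submitted;
          unsigned int num_sgs;
  };

  #define SKETCH_FLAG_TLAST 0x1     /* placeholder for the TLAST control bit */

  static unsigned int sketch_segment_flags(const struct sketch_axi_dmac_desc *desc)
  {
          unsigned int flags = 0;

          /*
           * Once a partial transfer has been seen, every further segment that
           * gets submitted is marked as the last one, so the controller stops
           * accepting segments until a new transfer is started.
           */
          if (desc->have_partial_xfer ||
              desc->num_submitted + 1 == desc->num_sgs)
                  flags |= SKETCH_FLAG_TLAST;

          return flags;
  }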
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Starting with version 4.2.a, the AXI DMAC controller can report partial
transfers that have been issued.
This change implements computing DMA residue information for transfers,
based on that reported information.
This is done by dequeuing the partial transfers from the FIFO of partial
transfers, storing the partial length in the correct segment & descriptor,
and computing the residue before submitting the DMA cookie to the DMA
framework.
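The residue computation amounts to summing what has not been transferred
yet; a rough sketch with illustrative names (not the driver's actual
structures):

  #include <stdbool.h>

  /* Illustrative per-segment bookkeeping. */
  struct sketch_segment {
          unsigned int len;          /* programmed length */
          unsigned int partial_len;  /* reported length, 0 if fully done */
          bool completed;
  };

  static unsigned int sketch_desc_residue(const struct sketch_segment *segs,
                                          unsigned int num_segs)
  {
          unsigned int residue = 0;
          unsigned int i;

          for (i = 0; i < num_segs; i++) {
                  if (!segs[i].completed)
                          residue += segs[i].len;           /* nothing moved yet */
                  else if (segs[i].partial_len)
                          residue += segs[i].len - segs[i].partial_len;
          }

          return residue;
  }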
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This allows each virtual channel to store information about each completed
transfer, i.e. which transfer succeeded (or failed) and whether there was
any residue data for each completed transfer.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, because there is no need to save the file dentry, remove the
variables that were saving them as they were never even being used once
set.
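For illustration, the before/after pattern looks roughly like this; the
file name, data pointer and fops below are generic placeholders, not this
driver's:

  /* Before: saving and checking the dentry, which is never needed. */
  dentry = debugfs_create_file("state", 0444, parent, dev, &state_fops);
  if (!dentry)
          return -ENOMEM;

  /* After: just create the file; debugfs copes with failures internally. */
  debugfs_create_file("state", 0444, parent, dev, &state_fops);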
Cc: Sinan Kaya <okaya@kernel.org>
Cc: Andy Gross <agross@kernel.org>
Cc: David Brown <david.brown@linaro.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-arm-msm@vger.kernel.org
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, because there is no need to save the file dentry, remove the
variable that was saving it as it was never even being used once set.
Cc: Daniel Mack <daniel@zonque.org>
Cc: Haojian Zhuang <haojian.zhuang@gmail.com>
Cc: Robert Jarzmik <robert.jarzmik@free.fr>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
No need to check the return value of debugfs_create_file(), so no need
to provide a fake "cast away" of the return value either.
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, because there is no need to save the file dentry, remove the
variable that was saving it as it was never even being used once set.
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
No need to check the return value of debugfs_create_file(), so no need
to provide a fake "cast away" of the return value either.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If CONFIG_PCI_MSI is not set, building with CONFIG_DW_EDMA
fails:
drivers/dma/dw-edma/dw-edma-core.c: In function 'dw_edma_irq_request':
drivers/dma/dw-edma/dw-edma-core.c:784:21: error: implicit declaration of function 'pci_irq_vector'; did you mean 'rcu_irq_enter'? [-Werror=implicit-function-declaration]
  err = request_irq(pci_irq_vector(to_pci_dev(dev), 0),
                    ^~~~~~~~~~~~~~
Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: e63d79d1ff ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The change replaces the old license information in the comment header with
the new SPDX license specifier.
It also bumps the year range from 2013-2015 to 2013-2019, which reflects
recent changes that were added to the driver.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Synopsys eDMA IP is normally distributed along with Synopsys PCIe
EndPoint IP (depending on the use and licensing agreement).
This IP requires some basic configurations, such as:
- eDMA registers BAR
- eDMA registers offset
- eDMA registers size
- eDMA linked list memory BAR
- eDMA linked list memory offset
- eDMA linked list memory size
- eDMA data memory BAR
- eDMA data memory offset
- eDMA data memory size
- eDMA version
- eDMA mode
- IRQs available for eDMA
As a working example, PCIe glue-logic will attach to a Synopsys PCIe
EndPoint IP prototype kit (Vendor ID = 0x16c3, Device ID = 0xedda),
which has a built-in eDMA IP with this default configuration:
- eDMA registers BAR = 0
- eDMA registers offset = 0x00001000 (4 Kbytes)
- eDMA registers size = 0x00002000 (8 Kbytes)
- eDMA linked list memory BAR = 2
- eDMA linked list memory offset = 0x00000000 (0 Kbytes)
- eDMA linked list memory size = 0x00800000 (8 Mbytes)
- eDMA data memory BAR = 2
- eDMA data memory offset = 0x00800000 (8 Mbytes)
- eDMA data memory size = 0x03800000 (56 Mbytes)
- eDMA version = 0
- eDMA mode = EDMA_MODE_UNROLL
- IRQs = 1
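One way to picture the configuration above is as a plain data block that
the PCIe glue driver fills in and hands to the core; the structure below
is only illustrative, not the driver's actual definition:

  /* Illustrative only: one record per eDMA resource region. */
  struct sketch_edma_region {
          unsigned int bar;      /* PCI BAR the region lives in */
          unsigned long off;     /* offset inside that BAR */
          unsigned long sz;      /* size of the region */
  };

  struct sketch_edma_pcie_data {
          struct sketch_edma_region rg;  /* registers:   BAR 0, 0x1000,   8 KiB */
          struct sketch_edma_region ll;  /* linked list: BAR 2, 0x0,      8 MiB */
          struct sketch_edma_region dt;  /* data:        BAR 2, 0x800000, 56 MiB */
          unsigned int version;          /* 0 */
          unsigned int mode;             /* unroll */
          unsigned int irqs;             /* 1 */
  };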
This driver can be compiled as built-in or as an external module.
To enable it, select the DW_EDMA_PCIE option in the kernel configuration;
it requires and automatically selects the DW_EDMA option too.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for the eDMA IP version 0 driver for both register maps (legacy
and unroll).
The legacy register mapping was the initial implementation, in which all
channel registers are multiplexed behind a view port register that can be
changed at any time (which can lead to a race condition), so only one
channel is accessible at a time.
This register mapping is neither effective nor efficient in a multithreaded
environment, which led to the development of the unroll register mapping,
in which the registers of all channels are accessible at any time because
each channel's registers are laid out with an offset between them.
This version supports a maximum of 16 independent channels (8 write +
8 read), which can run simultaneously.
It implements scatter-gather transfers through a linked list, where the
size of the linked list depends on the allocated memory divided equally
among all channels.
Each linked list descriptor can transfer from 1 byte to 4 Gbytes and is
DWORD-aligned.
Both SAR (Source Address Register) and DAR (Destination Address Register)
are byte-aligned.
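The practical difference between the two register maps can be sketched as
follows; the offsets and helper are illustrative, not the actual map:

  /*
   * Illustrative only.  In legacy mode a view port register selects which
   * channel the shared register window refers to, so concurrent accesses to
   * different channels must be serialized.  In unroll mode each channel's
   * registers are simply replicated at a fixed stride.
   */
  #define SKETCH_VIEWPORT_SEL   0x000   /* legacy: channel select register */
  #define SKETCH_CH_REGS        0x100   /* legacy: shared channel window */
  #define SKETCH_UNROLL_BASE    0x200   /* unroll: first channel's registers */
  #define SKETCH_UNROLL_STRIDE  0x200   /* unroll: offset between channels */

  static unsigned long sketch_ch_reg(bool unroll, unsigned int ch,
                                     unsigned long reg)
  {
          if (unroll)
                  return SKETCH_UNROLL_BASE + ch * SKETCH_UNROLL_STRIDE + reg;

          /*
           * Legacy: the caller must first write 'ch' to SKETCH_VIEWPORT_SEL
           * (under a lock) and then access the shared window.
           */
          return SKETCH_CH_REGS + reg;
  }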
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add Synopsys PCIe Endpoint eDMA IP core driver to kernel.
This IP is generally distributed with Synopsys PCIe Endpoint IP (depending
on the use and licensing agreement).
This core driver initializes and configures the eDMA IP using vma-helpers
functions and the dma-engine subsystem.
This driver can be compiled as built-in or as an external module.
To enable it, select the DW_EDMA option in the kernel configuration; it
requires and automatically selects the DMA_ENGINE and DMA_VIRTUAL_CHANNELS
options too.
In order to transfer data from point A to B as fast as possible, this IP
requires a dedicated memory space containing a linked list of elements.
All elements of this linked list are contiguous, and each one describes a
data transfer (source and destination addresses, length and a control
variable).
For the sake of simplicity, let's assume a memory space for write channel
0 which holds about 42 elements.
+---------+
| Desc #0 |-+
+---------+ |
V
+----------+
| Chunk #0 |-+
| CB = 1 | | +----------+ +-----+ +-----------+ +-----+
+----------+ +->| Burst #0 |->| ... |->| Burst #41 |->| llp |
| +----------+ +-----+ +-----------+ +-----+
V
+----------+
| Chunk #1 |-+
| CB = 0 | | +-----------+ +-----+ +-----------+ +-----+
+----------+ +->| Burst #42 |->| ... |->| Burst #83 |->| llp |
| +-----------+ +-----+ +-----------+ +-----+
V
+----------+
| Chunk #2 |-+
| CB = 1 | | +-----------+ +-----+ +------------+ +-----+
+----------+ +->| Burst #84 |->| ... |->| Burst #125 |->| llp |
| +-----------+ +-----+ +------------+ +-----+
V
+----------+
| Chunk #3 |-+
| CB = 0 | | +------------+ +-----+ +------------+ +-----+
+----------+ +->| Burst #126 |->| ... |->| Burst #129 |->| llp |
+------------+ +-----+ +------------+ +-----+
Legend:
- Linked list, also known as Chunk
- Linked list element, also known as Burst
- CB, also known as Change Bit: a control bit (typically toggled) that
  makes it easy to identify and differentiate between the current linked
  list and the previous or the next one
- LLP: a special element that indicates the end of the linked list element
  stream and also informs that the next CB should be toggled
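The Chunk/Burst organization shown above maps onto a pair of simple
structures; the definitions below are purely illustrative:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* One Burst = one linked list element = one contiguous data transfer. */
  struct sketch_burst {
          uint64_t sar;   /* source address */
          uint64_t dar;   /* destination address */
          uint32_t len;   /* 1 byte .. 4 GB */
  };

  /* One Chunk = one linked list that fits in the dedicated memory space. */
  struct sketch_chunk {
          bool cb;                        /* Change Bit, toggled per Chunk */
          size_t num_bursts;              /* e.g. up to 42 in this example */
          struct sketch_burst *bursts;    /* terminated by an LLP element in HW */
  };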
On the last Burst of each Chunk (Burst #41, #83, #125 and even #129),
flags are set in the control variable (the RIE and LIE bits) that trigger
a "done" interrupt.
In the interrupt callback it is decided whether to recycle the linked list
memory space by writing a new set of Burst elements (if there are still
Chunks to transfer) or to consider the transfer completed (if no Chunks
remain to be transferred).
In scatter-gather transfer mode, the client submits a scatter-gather list
of n elements (130 in this case), which is divided into multiple Chunks;
each Chunk holds a limited number of Bursts (42 in this case). After all
Bursts of a Chunk have been transferred, an interrupt is triggered, which
allows the dedicated linked list memory to be recycled with the
information for the next Chunk and its associated Bursts, and the whole
cycle repeats.
In cyclic transfer mode, the client submits a buffer pointer, its length
and a number of repetitions; in this case each Burst corresponds directly
to one repetition.
Each Burst describes a data transfer from point A (source) to point B
(destination) with a length that can be from 1 byte up to 4 GB. Since the
dedicated memory space where the linked list resides is limited, the n
Burst elements are organized into several Chunks, which are used later to
recycle the dedicated memory space and initiate a new sequence of data
transfers.
The whole transfer is considered complete when all Bursts have been
transferred.
Currently this IP has a well-known register map, which includes support
for the legacy and unroll modes. Legacy mode is the version of this
register map that has a multiplexer register used to switch the register
view between all write and read channels, while unroll mode repeats the
registers of all write and read channels with an offset between them.
This register map is called v0.
The IP team is creating a new register map better suited to the latest
PCIe features, which will very likely change the register map; that
version will be called v1. As soon as this new version is released by the
IP team, support for it will be included in this driver.
Logically, patches 1, 2 and 3 should be squashed into a single patch, but
for ease of review they have been split into these three patches.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add 8250 UART APDMA support for the MediaTek UART. If the MediaTek UART
is enabled by SERIAL_8250_MT6577, this driver can be enabled to offload
moving bytes for the UART device.
Signed-off-by: Long Cheng <long.cheng@mediatek.com>
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
H6 DMA has more than 32 supported DRQs, which means that the configuration
register is slightly rearranged. It also needs an additional clock to be
enabled.
Add support for it.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
H6 DMA has its mode fields in a different position than any other
currently supported DMA controller.
Add a quirk for that.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
H6 DMA has more than 32 possible DRQs, which means that the current
maximum of 31 DRQs is not enough anymore.
Add a quirk which will set source and destination DRQ number.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The H6 DMA controller needs an additional mbus clock to be enabled.
Add a quirk for it and handle it accordingly.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Apparently the driver was never tested with the DMA_PREP_INTERRUPT flag
unset, since unsetting it completely disables interrupt handling instead
of merely skipping the callback invocations, putting the channel into an
unusable state.
The flag is always set by all kernel drivers that use the APB DMA, so
let's error out in the other case for consistency. It won't be difficult
to support that case properly if it is ever needed.
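The resulting check in the preparation path is trivial; a minimal
kernel-style fragment of the idea, where chan_dev and the error message
are placeholders rather than the driver's actual code:

  /* Reject unsupported transfers up front instead of breaking the channel. */
  if (!(flags & DMA_PREP_INTERRUPT)) {
          dev_err(chan_dev, "DMA_PREP_INTERRUPT is mandatory for this driver\n");
          return NULL;
  }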
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When an error occurs, we should clear the error register and then return.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
[vkoul: change patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The CMD of the source/destination descriptor format should be placed in
the lower part of the struct fsl_qdma_engine data address.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The `copy_align` property is a generic property that describes the
alignment for DMA memcpy & sg ops.
It serves a mostly informational purpose and can be used in DMA tests to
pass along what alignment to expect.
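Advertising it is a one-line assignment in the controller setup path; a
minimal sketch, where dma_dev stands for the driver's struct dma_device
and a 4-byte alignment is assumed purely for illustration:

  /* Illustrative: report 4-byte alignment for memcpy/sg operations. */
  dma_dev->copy_align = DMAENGINE_ALIGN_4_BYTES;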
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Starting with version 4.1.a the AXI-DMAC is capable of reporting the
required length alignment.
The LSBs that are required to be set for alignment will always read back as
set from the transfer length register. It is not possible to clear them by
writing a 0. This means the driver can discover the length alignment
requirement by writing 0 to that register and reading back the value.
Since the DMA can have a length alignment requirement that is different
from the address alignment requirement, track both of them independently.
For older versions of the peripheral, assume that the length alignment
requirement is equal to the address alignment requirement.
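The discovery sequence is tiny; a sketch of the idea, where the register
handle is a placeholder rather than the driver's actual code:

  /*
   * Write 0 to the transfer-length register; the LSBs that must be set for
   * alignment read back as set, so "read-back value + 1" is the required
   * length alignment in bytes (e.g. a read-back of 0x3 means 4 bytes).
   */
  static unsigned int sketch_detect_length_align(void __iomem *len_reg)
  {
          writel(0, len_reg);
          return readl(len_reg) + 1;
  }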
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let the DMA engine core do the device node validation instead of drivers.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a user tries to request a DMA channel via __dma_request_channel(),
the core does not validate whether it is the correct DMA device being
requested, which forces each DMA engine driver to validate the correct
device node in its filter function where necessary.
Thus we can add the matching device node validation to the DMA engine
core and remove all of the device node validation from the drivers.
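Conceptually the core-side check reduces to a device-node comparison; a
rough sketch with stand-in types, not the dmaengine core's actual code:

  #include <stdbool.h>

  struct sketch_device_node;    /* stand-in for struct device_node */

  /*
   * Accept the channel if no node was requested, or if the channel's
   * device node matches the one being asked for.
   */
  static bool sketch_node_match(const struct sketch_device_node *chan_np,
                                const struct sketch_device_node *want_np)
  {
          return !want_np || chan_np == want_np;
  }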
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We get a compiler warning about the variable ‘tail_desc’ being set but not used:
drivers/dma/xilinx/xilinx_dma.c:1102:42: warning:
variable ‘tail_desc’ set but not used [-Wunused-but-set-variable]
struct xilinx_dma_tx_descriptor *desc, *tail_desc;
So remove it.
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXI-DMAC supports different types of interface for the data source and
destination ports. Typically one of those ports is a memory-mapped
interface while the other is some kind of streaming interface.
The information about which kind of interface is used for each port is
encoded in the devicetree.
It is also possible for the driver to detect whether a port supports
memory-mapped transfers or not. For streaming interfaces the address
register is read-only and will always return 0. So, in order to check
whether a port supports memory-mapped transfers, write a non-zero value
to the corresponding address register and check that the value read back
is still non-zero.
This allows detecting mismatches between the devicetree description and
the actual hardware configuration.
Unfortunately it is not possible to autodetect the interface types, since
there is no method to distinguish between the different streaming ports.
So the best thing that can be done is to error out when a memory-mapped
port is described in the devicetree but none is detected in the hardware.
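The probe described above comes down to a write/read-back test; a hedged
sketch, with the register handle as a placeholder:

  /*
   * Streaming ports have a read-only address register that always reads 0,
   * so a non-zero value surviving the write means the port is memory-mapped.
   */
  static bool sketch_port_is_mem_mapped(void __iomem *addr_reg)
  {
          writel(0xffffffff, addr_reg);
          return readl(addr_reg) != 0;
  }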
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The TLAST flag is used to signal to the DMAC HDL controller that the
following segment (to be submitted) is the last one (in a series of
segments).
A receiver DMA (typically another DMAC) can read this parameter (from the
transfer) and terminate the transfer earlier. A typical use-case for this
is when the receiver expects a certain number of segments, but for some
reason (e.g. an ADC capture which can have an unknown number of digital
samples) the number of actual segments is smaller. The receiver would read
this flag, and then the DMAC would finish.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DMAC HDL core supports interleaved & cyclic transfers.
An example use-case for this mode is when the controller is used as a
video DMA.
This change sets the `cyclic` field to true, so that when the IRQ arrives
and the `axi_dmac_transfer_done()` callback is called (from the interrupt
handler), the proper `vchan_cyclic_callback()` is invoked. This way the
DMAEngine framework will process data correctly for interleaved + cyclic
transfers.
This doesn't fix anything. It's an enhancement to the driver.
Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit c6504be539 ("dmaengine: stm32-dma: Fix unsigned variable compared
with zero") duplicated the call to platform_get_irq.
So remove the first call to platform_get_irq.
Fixes: c6504be539 ("dmaengine: stm32-dma: Fix unsigned variable compared with zero")
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Use an SPDX license identifier instead of plain text in the header.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The SUDMAC driver was introduced in v3.10 but was never integrated for use
by any platform. As it is unused, remove it.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- Updates to stm32 dma residue calculations
- Interleave dma capability to axi-dmac and support for ZynqMP arch
- Rework of channel assignment for rcar dma
- Debugfs for pl330 driver
- Support for Tegra186/Tegra194, refactoring for new chips and support
for pause/resume
- Updates to axi-dmac, bcm2835, fsl-edma, idma64, imx-sdma, rcar-dmac,
stm32-dma etc
- dev_get_drvdata() updates on few drivers
* tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (34 commits)
dmaengine: tegra210-adma: restore channel status
dmaengine: tegra210-dma: free dma controller in remove()
dmaengine: tegra210-adma: add pause/resume support
dmaengine: tegra210-adma: add support for Tegra186/Tegra194
Documentation: DT: Add compatibility binding for Tegra186
dmaengine: tegra210-adma: prepare for supporting newer Tegra chips
dmaengine: at_xdmac: remove a stray bottom half unlock
dmaengine: fsl-edma: Adjust indentation
dmaengine: fsl-edma: Fix typo in Vybrid name
dmaengine: stm32-dma: fix residue calculation in stm32-dma
dmaengine: nbpfaxi: Use dev_get_drvdata()
dmaengine: bcm-sba-raid: Use dev_get_drvdata()
dmaengine: stm32-dma: Fix unsigned variable compared with zero
dmaengine: stm32-dma: use platform_get_irq()
dmaengine: rcar-dmac: Update copyright information
dmaengine: imx-sdma: Only check ratio on parts that support 1:1
dmaengine: xgene-dma: fix spelling mistake "descripto" -> "descriptor"
dmaengine: idma64: Move driver name to the header
dmaengine: bcm2835: Drop duplicate capability setting.
dmaengine: pl330: _stop: clear interrupt status
...
Merge tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull mmiowb removal from Will Deacon:
"Remove Mysterious Macro Intended to Obscure Weird Behaviours (mmiowb())
Remove mmiowb() from the kernel memory barrier API and instead, for
architectures that need it, hide the barrier inside spin_unlock() when
MMIO has been performed inside the critical section.
The only relatively recent changes have been addressing review
comments on the documentation, which is in a much better shape thanks
to the efforts of Ben and Ingo.
I was initially planning to split this into two pull requests so that
you could run the coccinelle script yourself, however it's been plain
sailing in linux-next so I've just included the whole lot here to keep
things simple"
* tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (23 commits)
docs/memory-barriers.txt: Update I/O section to be clearer about CPU vs thread
docs/memory-barriers.txt: Fix style, spacing and grammar in I/O section
arch: Remove dummy mmiowb() definitions from arch code
net/ethernet/silan/sc92031: Remove stale comment about mmiowb()
i40iw: Redefine i40iw_mmiowb() to do nothing
scsi/qla1280: Remove stale comment about mmiowb()
drivers: Remove explicit invocations of mmiowb()
drivers: Remove useless trailing comments from mmiowb() invocations
Documentation: Kill all references to mmiowb()
riscv/mmiowb: Hook up mmwiob() implementation to asm-generic code
powerpc/mmiowb: Hook up mmwiob() implementation to asm-generic code
ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
mips/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
sh/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
m68k/io: Remove useless definition of mmiowb()
nds32/io: Remove useless definition of mmiowb()
x86/io: Remove useless definition of mmiowb()
arm64/io: Remove useless definition of mmiowb()
ARM/io: Remove useless definition of mmiowb()
mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors
...
The status of the ADMA channel registers is not saved and restored during
system suspend. If the system enters suspend during active playback, this
results in a wrong channel register state on system resume, and playback
fails to resume properly. Fix this by saving the following channel
registers in runtime suspend and restoring them during runtime resume.
* ADMA_CH_LOWER_SRC_ADDR
* ADMA_CH_LOWER_TRG_ADDR
* ADMA_CH_FIFO_CTRL
* ADMA_CH_CONFIG
* ADMA_CH_CTRL
* ADMA_CH_CMD
* ADMA_CH_TC
Runtime PM calls will be invoked in the system resume path if a playback
or capture needs to be resumed. Hence the above changes work fine for the
system suspend case.
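The save/restore itself is a simple loop over the listed registers; a
sketch with placeholder offsets and names, not Tegra's actual register
map:

  /* Placeholder offsets, ordered so that CMD is restored last. */
  static const unsigned int sketch_ch_regs[] = {
          0x00 /* LOWER_SRC_ADDR */, 0x04 /* LOWER_TRG_ADDR */,
          0x08 /* FIFO_CTRL */, 0x0c /* CONFIG */, 0x10 /* CTRL */,
          0x14 /* TC */, 0x18 /* CMD */,
  };

  static void sketch_ch_save(void __iomem *ch_base, u32 *shadow)
  {
          unsigned int i;

          for (i = 0; i < ARRAY_SIZE(sketch_ch_regs); i++)
                  shadow[i] = readl(ch_base + sketch_ch_regs[i]);
  }

  static void sketch_ch_restore(void __iomem *ch_base, const u32 *shadow)
  {
          unsigned int i;

          for (i = 0; i < ARRAY_SIZE(sketch_ch_regs); i++)
                  writel(shadow[i], ch_base + sketch_ch_regs[i]);
  }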
Fixes: f46b195799 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
During an audio playback session it is observed that audio goes off after
a few seconds of continuous pause and play, and no audio is heard even
when playback is resumed.
The reason for the above is that the ADMA driver currently does not handle
DMA_PAUSE/DMA_RESUME, and the relevant callbacks for dma_device are not
implemented. This patch implements the device_pause and device_resume
callbacks for the device. During pause the TRANSFER_PAUSE bit of the DMA
channel control register is set, and the same bit is cleared during
resume.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add Tegra186-specific macro defines and a chip_data structure for
chip-specific information. A new compatible string is added to select the
relevant chip details. There is no major change for Tegra194, hence it can
use the same chip data.
The bits in the BURST_SIZE field of the ADMA CH_CONFIG register are
encoded differently on Tegra186 and Tegra194 compared with Tegra210.
On Tegra210 the bits are encoded as follows ...
1 = WORD_1
2 = WORDS_2
3 = WORDS_4
4 = WORDS_8
5 = WORDS_16
Whereas on Tegra186 and Tegra194 the bits are encoded as ...
0 = WORD_1
1 = WORDS_2
2 = WORDS_3
3 = WORDS_4
4 = WORDS_5
...
15 = WORDS_16
Add helper functions for generating the correct burst size.
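For example, helpers converting a burst size in words to the register
field value could look like this (names are illustrative, not the
driver's actual functions):

  /* Illustrative only: burst size in words -> BURST_SIZE field value. */
  static unsigned int sketch_tegra210_burst(unsigned int words)
  {
          /* Tegra210: 1->WORD_1, 2->WORDS_2, 3->WORDS_4, 4->WORDS_8, 5->WORDS_16 */
          unsigned int val = 1;

          while (words > 1) {
                  words >>= 1;
                  val++;
          }
          return val;     /* i.e. log2(words) + 1 for power-of-two bursts */
  }

  static unsigned int sketch_tegra186_burst(unsigned int words)
  {
          /* Tegra186/Tegra194: N words -> N - 1 (0->WORD_1 ... 15->WORDS_16) */
          return words - 1;
  }

With these, a 16-word burst maps to 5 on Tegra210 and to 15 on
Tegra186/Tegra194, matching the tables above.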
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is a preparatory patch to add support for the Tegra186 and Tegra194
chips. The following changes are necessary to make the driver code generic:
* The chip_data structure is enhanced to hold chip-specific details; the
  following are the additions to the structure:
  * Offset addresses for ADMA global and channel registers
  * Offset values for Tx and Rx channel selection
  * Maximum supported Tx and Rx channels
  * Tx and Rx channel request mask
  * ADMA channel register space size
* Make use of the above chip_data to generalise the driver code
Support for Tegra186 and Tegra194 will be added in subsequent patches of
the series.
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We switched this code from spin_lock_bh() to vanilla spin_lock() but
there was one stray spin_unlock_bh() that was overlooked. This
patch converts it to spin_unlock() as well.
Fixes: d8570d018f ("dmaengine: at_xdmac: move spin_lock_bh to spin_lock in tasklet")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix indentation and remove an unneeded space after the 'return' keyword.
This fixes the checkpatch warning:
WARNING: Statements should start on a tabstop
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>