The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these is a flexible array member[1][2],
introduced in C99:
struct foo {
	int stuff;
	struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.
Also, notice that dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
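For illustration, a hedged sketch of how such a structure is typically allocated; struct_size() from <linux/overflow.h> keeps the size arithmetic overflow-safe and works the same way for flexible array members as for the zero-length variant (the variable names below are made up):

	size_t count = 16;		/* illustrative number of trailing elements */
	struct foo *instance;

	/* header plus 'count' trailing struct boo elements, overflow-safe */
	instance = kmalloc(struct_size(instance, array, count), GFP_KERNEL);
	if (!instance)
		return -ENOMEM;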
This issue was found with the help of Coccinelle.
[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 7649773293 ("cxgb3/l2t: Fix undefined behaviour")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20200213003703.GA4177@embeddedor.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
idxd_config_bus_probe() calls try_module_get() but never calls module_put()
when it fails. Thus with every failed attempt, the ref count goes up. Add
module_put() in failure paths.
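A hedged sketch of the balanced pattern (the helper name and ownership details are illustrative, not the exact idxd code):

	static int idxd_config_bus_probe(struct device *dev)
	{
		int rc;

		if (!try_module_get(THIS_MODULE))
			return -ENXIO;

		rc = configure_and_enable_device(dev);	/* illustrative helper */
		if (rc < 0) {
			module_put(THIS_MODULE);	/* drop the reference on failure */
			return rc;
		}

		return 0;
	}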
Fixes: c52ca47823 ("dmaengine: idxd: add configuration component of driver")
Reported-by: Jerry Chen <jerry.t.chen@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/158144296730.41381.12134210685456322434.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/idxd/cdev.c: In function idxd_cdev_open:
drivers/dma/idxd/cdev.c:77:20: warning:
variable idxd_cdev set but not used [-Wunused-but-set-variable]
Commit 42d279f913 ("dmaengine: idxd: add char driver to
expose submission portal to userland") introduced this warning.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20200210151855.55044-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/idxd/sysfs.c: In function engine_group_id_store:
drivers/dma/idxd/sysfs.c:419:29: warning: variable group set but not used [-Wunused-but-set-variable]
It is not used, so remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20200211135335.55924-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
No need to use goto to jump over the
	return chan ? chan : ERR_PTR(-EPROBE_DEFER);
We can just invert the check and return right there.
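A hedged sketch of what the inverted check can look like (context simplified, lookup name illustrative):

	chan = find_candidate(dev, name);	/* illustrative lookup */
	if (IS_ERR_OR_NULL(chan))
		return chan ? chan : ERR_PTR(-EPROBE_DEFER);

	/* carry on with the channel that was found */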
Do not fail the channel request if the chan->name allocation fails, but
print a warning about it.
Change the dev_err to dev_warn if sysfs_create_link() fails as it is not
fatal.
Only attempt to remove the DMA_SLAVE_NAME symlink if it is created - or it
was attempted to be created.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200131093859.3311-2-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit 71723a96b8 ("dmaengine: Create symlinks between DMA channels and
slaves") changed the dma_request_chan() function flow in such a way that
it always returns EPROBE_DEFER in case of channels that cannot be found.
This breaks the operation of the devices which have optional DMA channels
as it puts their drivers in endless deferred probe loop. Fix this by
propagating the proper error value.
Fixes: 71723a96b8 ("dmaengine: Create symlinks between DMA channels and slaves")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200130070834.17537-1-m.szyprowski@samsung.com
[vkoul: fix typo in patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
- remove ioremap_nocache given that it is equivalent to
ioremap everywhere
Merge tag 'ioremap-5.6' of git://git.infradead.org/users/hch/ioremap
Pull ioremap updates from Christoph Hellwig:
"Remove the ioremap_nocache API (plus wrappers) that are always
identical to ioremap"
* tag 'ioremap-5.6' of git://git.infradead.org/users/hch/ioremap:
remove ioremap_nocache and devm_ioremap_nocache
MIPS: define ioremap_nocache to ioremap
- Core:
- Support for dynamic channels
- Removal of various slave wrappers
- Make few slave request APIs as private to dmaengine
- Symlinks between channels and slaves
- Support for hotplug of controllers
- Support for metadata_ops for dma_async_tx_descriptor
- Reporting DMA cached data amount
- Virtual dma channel locking updates
- New drivers/device/feature support:
- Driver for Intel data accelerators
- Driver for TI K3 UDMA
- Driver for PLX DMA engine
- Driver for hisilicon Kunpeng DMA engine
- eDMA support for QorIQ LS1028A in fsl edma driver
- Support for cyclic dma in sun4i driver
- Support for X1830 in JZ4780 driver
Merge tag 'dmaengine-5.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have a bunch of core changes to support dynamic channels,
hotplug of controllers, new apis for metadata ops etc along with new
drivers for Intel data accelerators, TI K3 UDMA, PLX DMA engine and
hisilicon Kunpeng DMA engine. Also usual assorted updates to drivers.
Core:
- Support for dynamic channels
- Removal of various slave wrappers
- Make few slave request APIs as private to dmaengine
- Symlinks between channels and slaves
- Support for hotplug of controllers
- Support for metadata_ops for dma_async_tx_descriptor
- Reporting DMA cached data amount
- Virtual dma channel locking updates
New drivers/device/feature support:
- Driver for Intel data accelerators
- Driver for TI K3 UDMA
- Driver for PLX DMA engine
- Driver for hisilicon Kunpeng DMA engine
- eDMA support for QorIQ LS1028A in fsl edma driver
- Support for cyclic dma in sun4i driver
- Support for X1830 in JZ4780 driver"
* tag 'dmaengine-5.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (62 commits)
dmaengine: Create symlinks between DMA channels and slaves
dmaengine: hisilicon: Add Kunpeng DMA engine support
dmaengine: idxd: add char driver to expose submission portal to userland
dmaengine: idxd: connect idxd to dmaengine subsystem
dmaengine: idxd: add descriptor manipulation routines
dmaengine: idxd: add sysfs ABI for idxd driver
dmaengine: idxd: add configuration component of driver
dmaengine: idxd: Init and probe for Intel data accelerators
dmaengine: add support to dynamic register/unregister of channels
dmaengine: break out channel registration
x86/asm: add iosubmit_cmds512() based on MOVDIR64B CPU instruction
dmaengine: ti: k3-udma: fix spelling mistake "limted" -> "limited"
dmaengine: s3c24xx-dma: fix spelling mistake "to" -> "too"
dmaengine: Move dma_get_{,any_}slave_channel() to private dmaengine.h
dmaengine: Remove dma_request_slave_channel_compat() wrapper
dmaengine: Remove dma_device_satisfies_mask() wrapper
dt-bindings: fsl-imx-sdma: Add i.MX8MM/i.MX8MN/i.MX8MP compatible string
dmaengine: zynqmp_dma: fix burst length configuration
dmaengine: sun4i: Add support for cyclic requests with dedicated DMA
dmaengine: fsl-qdma: fix duplicated argument to &&
...
Currently it is not easy to find out which DMA channels are in use, and
which slave devices are using which channels.
Fix this by creating two symlinks between the DMA channel and the actual
slave device when a channel is requested:
1. A "slave" symlink from DMA channel to slave device,
2. A "dma:<name>" symlink slave device to DMA channel.
When the channel is released, the symlinks are removed again.
The latter requires keeping track of the slave device and the channel
name in the dma_chan structure.
Note that this is limited to channel request functions for requesting an
exclusive slave channel that take a device pointer (dma_request_chan()
and dma_request_slave_channel*()).
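A hedged sketch of how the two symlinks could be created when a slave channel is requested; the helper name and error handling are illustrative, and the real code lives in the channel request path rather than in a standalone function:

	static int dma_chan_create_slave_links(struct dma_chan *chan,
					       struct device *slave,
					       const char *name)
	{
		int ret;

		/* "slave" link: from the DMA channel to the slave device */
		ret = sysfs_create_link(&chan->dev->device.kobj, &slave->kobj,
					"slave");
		if (ret)
			return ret;

		/* "dma:<name>" link: from the slave device to the DMA channel */
		chan->name = kasprintf(GFP_KERNEL, "dma:%s", name);
		if (!chan->name)
			return -ENOMEM;
		chan->slave = slave;

		return sysfs_create_link(&slave->kobj, &chan->dev->device.kobj,
					 chan->name);
	}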
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Link: https://lore.kernel.org/r/20200117153056.31363-1-geert+renesas@glider.be
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds a driver for the HiSilicon Kunpeng DMA engine. This DMA
engine, which is a PCIe iEP, offers 30 channels; each channel has a send
queue, a complete queue and an interrupt to help do tasks. This DMA engine
can do memory copy between memory blocks or between memory and device
buffers.
Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
Signed-off-by: Zhenfa Qiu <qiuzhenfa@hisilicon.com>
Link: https://lore.kernel.org/r/1579155057-80523-1-git-send-email-wangzhou1@hisilicon.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Create a char device region that will allow acquisition of user portals in
order to allow applications to submit DMA operations. A char device will be
created per work queue that gets exposed. The work queue type "user"
is used to mark a work queue for a user char device. For example, if
work queue 0 of DSA device 0 is marked for a char device, then a device
node /dev/dsa/wq0.0 will be created.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965026985.73301.976523230037106742.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add plumbing for the dmaengine subsystem connection. The driver registers
a DMA device per DSA device. The channels are dynamically registered when
a work queue is configured to be of "kernel:dmaengine" type.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965026376.73301.13867988830650740445.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The device is left unconfigured when the driver is loaded. Various
components are configured via the driver sysfs attributes. Once
configuration is done, the device can be enabled by writing the device name
to the bind attribute of the device driver sysfs. Disabling can be done
similarly. The individual work queues can also be enabled and disabled
through the bind/unbind attributes. A constructed hierarchy is created
through the struct device framework in order to provide appropriate
configuration points and device state and status. This hierarchy is
presented off the virtual DSA bus.
i.e. /sys/bus/dsa/...
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965024585.73301.6431413676230150589.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The idxd driver introduces the Intel Data Stream Accelerator [1] that will
be available on future Intel Xeon CPUs. One of the kernel access
points for the driver is through the dmaengine subsystem. It will initially
provide the DMA copy service to the kernel.
Some of the main features introduced with this accelerator
are: shared virtual memory (SVM) support, and descriptor submission using
the Intel CPU instructions movdir64b and enqcmds. There will be additional
accelerator devices that share the same driver with variations to
capabilities.
This commit introduces the probe and initialization component of the
driver.
[1]: https://software.intel.com/en-us/download/intel-data-streaming-accelerator-preliminary-architecture-specification
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/157965023991.73301.6186843973135311580.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The functions dma_get_slave_channel() and dma_get_any_slave_channel()
are called from DMA engine drivers only. Hence move their declarations
from the public header file <linux/dmaengine.h> to the private header
file drivers/dma/dmaengine.h.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20200121093311.28639-4-geert+renesas@glider.be
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit aa1e6f1a38 ("dmaengine: kill struct dma_client and
supporting infrastructure") removed the last user of the
dma_device_satisfies_mask() wrapper.
Remove the wrapper, and rename __dma_device_satisfies_mask() to
dma_device_satisfies_mask(), to get rid of one more function starting
with a double underscore.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20200121093311.28639-2-geert+renesas@glider.be
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since the dma engine expects the burst length register content as a
power-of-2 value, the burst length needs to be converted first.
Additionally add a burst length range check to avoid corrupting unrelated
register bits.
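A hedged sketch of the conversion; the limit, mask and shift names are illustrative, not the exact zynqmp_dma register layout:

	/* burst length must be a supported power of two; store its log2 */
	if (!is_power_of_2(burst) || burst > ZYNQMP_DMA_MAX_BURST)	/* illustrative limit */
		return -EINVAL;

	val &= ~ZYNQMP_DMA_BURST_LEN_MASK;				/* illustrative mask */
	val |= ilog2(burst) << ZYNQMP_DMA_BURST_LEN_SHIFT;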
Signed-off-by: Matthias Fend <matthias.fend@wolfvision.net>
Link: https://lore.kernel.org/r/20200115102249.24398-1-matthias.fend@wolfvision.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently the cyclic transfers can be used only with normal DMAs. They
can be used by pcm_dmaengine module, which is required for implementing
sound with sun4i-hdmi encoder. This is so because the controller can
accept audio only from a dedicated DMA.
This patch enables them, following the existing style for the
scatter/gather type transfers.
Signed-off-by: Stefan Mavrodiev <stefan@olimex.com>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://lore.kernel.org/r/20200110141140.28527-2-stefan@olimex.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is a duplicated argument to && in the function
fsl_qdma_free_chan_resources, which looks like a typo; the pointer
fsl_queue->desc_pool also needs a NULL check. Fix both.
Detected with coccinelle.
Fixes: b092529e0a ("dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs")
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Reviewed-by: Peng Ma <peng.ma@nxp.com>
Tested-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20200120125843.34398-1-chenzhou10@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the following warnings by making these symbols static:
drivers/dma/ti/k3-psil-j721e.c:62:16: warning: symbol 'j721e_src_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-j721e.c:172:16: warning: symbol 'j721e_dst_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-j721e.c:216:20: warning: symbol 'j721e_ep_map' was not declared. Should it be static?
CC drivers/dma/ti/k3-psil-j721e.o
drivers/dma/ti/k3-psil-am654.c:52:16: warning: symbol 'am654_src_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-am654.c:127:16: warning: symbol 'am654_dst_ep_map' was not declared. Should it be static?
drivers/dma/ti/k3-psil-am654.c:169:20: warning: symbol 'am654_ep_map' was not declared. Should it be static?
Reported-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20200121070104.4393-1-peter.ujfalusi@ti.com
[vkoul: updated patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Certain users can not use the DMAengine API right now due to missing
features in the core; a prime example is networking.
These users can use the glue layer interface to avoid misuse of the
DMAengine API, and when the core gains the needed features they can be
converted to use the generic API.
The most prominent features the glue layer clients are depending on:
- most PSI-L native peripherals use extra rflow ranges on a receive channel
and, depending on the peripheral's configuration, packets from a single
free descriptor ring are going to be received into different receive rings
- it is also possible to have different free descriptor rings per rflow
and an rflow can also support 4 additional free descriptor rings based
on the size of the incoming packet
- out of order completion of descriptors on a channel
- when we have several queues to handle different priority packets the
descriptors will be completed 'out-of-order'
- the notion of prep_slave_sg is not matching with what the streaming type
of operation is demanding for networking
- Streaming type of operation
- Ability to fill the free descriptor ring with descriptors in
anticipation of incoming traffic and when a packet arrives UDMAP will
form a packet and give it to the client driver
- the descriptors are not backed with exact size data buffers as we don't
know the size of the packet we will receive, but as a generic pool of
buffers to be used by the receive channel
- NAPI type of operation (polling instead of interrupt driven transfer)
- without this we can not sustain gigabit speeds and we need to support NAPI
- not to limit this to networking, but other high performance operations
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-12-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Split patch for review containing: defines, structs, io and low level
functions and interrupt callbacks.
DMA driver for
Texas Instruments K3 NAVSS Unified DMA – Peripheral Root Complex (UDMA-P)
The UDMA-P is intended to perform similar (but significantly upgraded) functions
as the packet-oriented DMA used on previous SoC devices. The UDMA-P module
supports the transmission and reception of various packet types. The UDMA-P is
architected to facilitate the segmentation and reassembly of SoC DMA data
structure compliant packets to/from smaller data blocks that are natively
compatible with the specific requirements of each connected peripheral. Multiple
Tx and Rx channels are provided within the DMA which allow multiple segmentation
or reassembly operations to be ongoing. The DMA controller maintains state
information for each of the channels which allows packet segmentation and
reassembly operations to be time division multiplexed between channels in order
to share the underlying DMA hardware. An external DMA scheduler is used to
control the ordering and rate at which this multiplexing occurs for Transmit
operations. The ordering and rate of Receive operations is indirectly controlled
by the order in which blocks are pushed into the DMA on the Rx PSI-L interface.
The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
channels. Channels in the UDMA-P can be configured to be either Packet-Based or
Third-Party channels on a channel by channel basis.
The initial driver supports:
- MEM_TO_MEM (TR mode)
- DEV_TO_MEM (Packet / TR mode)
- MEM_TO_DEV (Packet / TR mode)
- Cyclic (Packet / TR mode)
- Metadata for descriptors
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-11-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In K3 architecture the DMA operates within threads. One end of the thread
is UDMAP, the other is on the peripheral side.
The UDMAP channel configuration depends on the needs of the remote
endpoint and it can differ from peripheral to peripheral.
This patch adds a database for am654 and j721e and a small API to fetch the
PSI-L endpoint configuration from the database, which should only be used by
the DMA driver(s).
Another API is added for native peripherals to make it possible to pass a new
configuration for the threads they are using, which is needed to handle
changes caused, for example, by different firmware loaded for the peripheral.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-9-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA hardware can have a big cache or FIFO, and the amount of data sitting in
the DMA fabric can be of interest to the clients.
For example, in audio we want to know the delay in the data flow, and in case
the DMA has a significantly large FIFO/cache, it can affect the latency/delay.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-6-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The metadata is best described as side-band data or parameters traveling
alongside the data DMA'd by the DMA engine. It is data
which is understood by the peripheral and the peripheral driver only; the
DMA engine sees it only as a data block and does not interpret it in any
way.
The metadata can be different per descriptor as it is a parameter for the
data being transferred.
If the DMA supports per descriptor metadata it can implement the attach,
get_ptr/set_len callbacks.
Client drivers must only use either attach or get_ptr/set_len to avoid
misconfiguration.
Client drivers can check whether a given metadata mode is supported by the
channel at probe time with
dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT);
dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_ENGINE);
and, based on this information, can use either mode (see the sketch after
the wrapper list below).
Wrappers are also added for the metadata_ops.
To be used in DESC_METADATA_CLIENT mode:
dmaengine_desc_attach_metadata()
To be used in DESC_METADATA_ENGINE mode:
dmaengine_desc_get_metadata_ptr()
dmaengine_desc_set_metadata_len()
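A hedged sketch of a client using DESC_METADATA_CLIENT mode; the buffer, metadata and length variables are illustrative:

	struct dma_async_tx_descriptor *desc;
	int ret;

	if (!dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT))
		return -EINVAL;

	desc = dmaengine_prep_slave_single(chan, buf_dma, len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	/* attach the side-band data before submitting the descriptor */
	ret = dmaengine_desc_attach_metadata(desc, my_metadata, my_metadata_len);
	if (ret)
		return ret;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);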
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Tero Kristo <t-kristo@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Link: https://lore.kernel.org/r/20191223110458.30766-5-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On prep, a spin lock is taken and the next entry in the circular buffer
is filled. On submit, the valid bit is set in the hardware descriptor
and the lock is released.
The DMA engine is started (if it's not already running) when the client
calls dma_async_issue_pending().
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20200103212021.2881-4-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Allocate DMA coherent memory for the ring of DMA descriptors and
program the appropriate hardware registers.
A tasklet is created which is triggered on an interrupt to process
all the finished requests. Additionally, any remaining descriptors
are aborted when the hardware is removed or the resources freed.
Use an RCU pointer to synchronize PCI device unbind.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20200103212021.2881-3-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Some PLX Switches can expose DMA engines via extra PCI functions
on the upstream port. Each function will have one DMA channel.
This patch is just the core PCI driver skeleton and dma
engine registration.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20200103212021.2881-2-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, when the call to dev_get_platdata returns null, the driver issues
a warning and then later dereferences the null pointer. Avoid this issue
by returning -ENODEV instead when the platform data is null, and
change the warning to an appropriate error message.
Addresses-Coverity: ("Dereference after null check")
Fixes: 211010aeb0 ("dmaengine: ti: omap-dma: Pass sdma auxdata to driver and use it")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
ioremap has provided non-cached semantics by default since the Linux 2.6
days, so remove the additional ioremap_nocache interface.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
For omap2, we need to block idle if SDMA is busy. Let's do this with a
cpu notifier and remove the custom call.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
With the legacy IRQ handling gone, we can now start allocating channels
directly in the dmaengine driver for device tree based SoCs.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
We can now start passing sdma auxdata to the dmaengine driver to start
removing the platform based sdma init.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
We can move the global priority register configuration to the dmaengine
driver and configure it based on the of_device_id match data.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
If dma_alloc_coherent() returns NULL in ioat_alloc_ring(), ring
allocation must not proceed.
Until now, if the first call to dma_alloc_coherent() in
ioat_alloc_ring() returned NULL, the processing could proceed, failing
with NULL-pointer dereferencing further down the line.
Signed-off-by: Alexander Barabash <alexander.barabash@dell.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/75e9c0e84c3345d693c606c64f8b9ab5@x13pwhopdag1307.AMER.DELL.COM
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The current descriptor is not on any list of the virtual DMA channel.
If sdma_terminate_all() is called while a descriptor is currently
in flight, then this descriptor is never freed. We have to call
vchan_terminate_vdesc() on this descriptor to re-add it to the lists.
Now that we also free the currently running descriptor we can (and
actually have to) remove the current descriptor from its list also
for the cyclic case.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
Tested-by: Robin Gong <yibin.gong@nxp.com>
Link: https://lore.kernel.org/r/20191216105328.15198-10-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In sdma_tx_status() we must first find the current sdma_desc. In cyclic
mode we assume that this can always be found with vchan_find_desc().
This is true because we do not remove the current descriptor from the
desc_issued list:
	/*
	 * Do not delete the node in desc_issued list in cyclic mode, otherwise
	 * the desc allocated will never be freed in vchan_dma_desc_free_list
	 */
	if (!(sdmac->flags & IMX_DMA_SG_LOOP))
		list_del(&vd->node);
We will change this in the next step, so check if the current descriptor is
the desired one also for the cyclic case.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20191216105328.15198-9-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Rename sdma_disable_channel_async() after the hook it implements, like
done for all other functions in the SDMA driver.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20191216105328.15198-8-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_dma_desc_free_list() basically open codes vchan_vdesc_fini() in its
loop body. Call it directly rather than duplicating the code.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-7-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
All list operations are protected by &vc->lock. As vchan_vdesc_fini()
is called unlocked add the missing locking around the list operations.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-6-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_vdesc_fini() shouldn't be called under a spin_lock. This is done
in two places, once in vchan_terminate_vdesc() and once in
vchan_synchronize(). Instead of freeing the vdesc right away, collect
the aborted vdescs on a separate list and free them along with the other
vdescs. The terminated descs are also freed in vchan_synchronize as done
before this patch.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-5-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_dma_desc_free_list() basically open codes vchan_vdesc_fini() in
the loop body. One difference is an additional debug message. As this
isn't overly useful remove it.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191216105328.15198-4-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Originally freeing descriptors was split into a locked and an unlocked
part. The locked part in vchan_get_all_descriptors() collected all
descriptors on a separate list_head. This was done to allow iterating
over that new list in vchan_dma_desc_free_list() without a lock held.
This became broken in 13bb26ae88 ("dmaengine: virt-dma: don't always
free descriptor upon completion"). With this commit
vchan_dma_desc_free_list() no longer exclusively operates on the
separate list, but starts to put descriptors which can be reused back on
&vc->desc_allocated. This list operation should have been locked, but
wasn't.
In the meantime drivers started to call vchan_dma_desc_free_list() with
their lock held so that we now have the situation that
vchan_dma_desc_free_list() is called locked from some drivers and
unlocked from others.
To clean this up we have to do two things:
1. Add missing locking in vchan_dma_desc_free_list()
2. Make sure drivers call vchan_dma_desc_free_list() unlocked
This needs to be done atomically, so in this patch the locking is added
and all drivers are fixed.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Green Wan <green.wan@sifive.com>
Tested-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20191216105328.15198-3-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_vdesc_fini() can't be called locked. Instead, call
vchan_terminate_vdesc() which delays the freeing of the descriptor to
vchan_synchronize().
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Link: https://lore.kernel.org/r/20191216105328.15198-2-s.hauer@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The error log for a dma_channel_table_init() failure printed a mere
"initialization failure", which is not a very helpful message, so print
additional details like the function name and error code.
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We call dma_device_put() and module_put() after invoking
.device_free_chan_resources callback, but we should also take care of
router devices and invoke this after .route_free callback. So move it
after .route_free
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Don't allocate memory using the devm infrastructure and instead call
kfree with the new dmaengine device_release call back. This ensures
the structures are available until the last reference is dropped.
We also need to ensure we call ioat_shutdown() in ioat_remove() so
that all the channels are quiesced and further transactions fail.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20191216190120.21374-6-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Adding a reference count helps drivers to properly implement the unbind
while in use case.
References are taken and put every time a channel is allocated or freed.
Once the final reference is put, the device is removed from the
dma_device_list and a release callback function is called to signal
the driver to free the memory.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-5-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
So it can be called by a release function which is needed higher up in
the code. No functional changes intended.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-4-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The module reference is taken to ensure the callbacks still exist
when they are called. If the channel holds the last reference to the
module, the module can disappear before device_free_chan_resources() is
called and would cause a call into free'd memory.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-3-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
dma_chan_to_owner() dereferences the driver from the struct device to
obtain the owner and call module_[get|put](). However, if the backing
device is unbound before the dma_device is unregistered, the driver
will be cleared and this will cause a NULL pointer dereference.
Instead, store a pointer to the owner module in the dma_device struct
so the module reference can be properly put when the channel is put, even
if the backing device was destroyed first.
This change helps to support a safer unbind of DMA engines.
If the dma_device is unregistered in the driver's remove function,
there's no guarantee that there are no existing clients and a users
action may trigger the WARN_ONCE in dma_async_device_unregister()
which is unlikely to leave the system in a consistent state.
Instead, a better approach is to allow the backing driver to go away
and fail any subsequent requests to it.
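A hedged sketch of the idea (simplified; the owner field is set while the backing driver is still bound, and later lookups no longer touch the driver):

	/* at dma_async_device_register() time */
	device->owner = device->dev->driver->owner;

	/* resolve the owner from the dma_device, not the backing driver */
	static struct module *dma_chan_to_owner(struct dma_chan *chan)
	{
		return chan->device->owner;
	}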
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/20191216190120.21374-2-logang@deltatee.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
vchan_vdesc_fini() frees up 'vd', so the access to vd->tx_result is
via already freed memory.
Move the vchan_vdesc_fini() call after invoking the callback to avoid this.
Fixes: 09d5b702b0 ("dmaengine: virt-dma: store result on dma descriptor")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20191220131100.21804-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In some cases we seem to submit two transactions in a row, which
causes us to lose track of the first. If we then cancel the
request, we may still get an interrupt, which traverses a null
ds_run value.
So try to avoid starting a new transaction if the ds_run value
is set.
While this patch avoids the null pointer crash, I've had some
reports of the k3dma driver still getting confused, which
suggests the ds_run/ds_done value handling still isn't quite
right. However, I've not run into an issue recently with it
so I think this patch is worth pushing upstream to avoid the
crash.
Signed-off-by: John Stultz <john.stultz@linaro.org>
[add ss tag]
Link: https://lore.kernel.org/r/20191218190906.6641-1-john.stultz@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix to return negative error code -ENOMEM from the error handling
case instead of 0, as done elsewhere in this function.
Fixes: 2a03c13145 ("dmaengine: ti: edma: add missed operations")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191212114622.127322-1-weiyongjun1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With old DMA code disabled for handling DMA requests for device tree based
SoCs, we can move omap3 specific context save and restore to the dmaengine
driver.
Let's do this by adding cpu_pm notifier handling to save and restore context,
and enable it based on device tree match data. This way we can use the match
data to configure more SoC specific features later on too.
Note that we only clear the channels in use while the platform code also
clears reserved channels 0 and 1 on high-security SoCs. Based on testing
on n900, this is not needed though and the system idles just fine.
With the dmaengine driver handling context save and restore, we must now
remove the old custom calls for context save and restore.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Vinod Koul <vkoul@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
It turns out that the JZ4725B displays the same buggy behaviour as the
JZ4740 that was described in commit f4c255f1a7 ("dmaengine: dma-jz4780:
Break descriptor chains on JZ4740").
Work around it by using the same workaround previously used for the
JZ4740.
Fixes: f4c255f1a7 ("dmaengine: dma-jz4780: Break descriptor
chains on JZ4740")
Cc: <stable@vger.kernel.org>
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20191210165545.59690-1-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver misses checking the result of devm_regmap_init_mmio().
Add a check to fix it.
Fixes: fc15be39a8 ("dmaengine: axi-dmac: add regmap support")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Reviewed-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Link: https://lore.kernel.org/r/20191209085711.16001-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It has turned out that it's in general a good idea for dmaengines to allow
DMA requests during the entire dpm_suspend() phase. Therefore, convert the
pl330 driver into using SET_LATE_SYSTEM_SLEEP_PM_OPS.
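A hedged sketch of the dev_pm_ops change; the callback names are illustrative:

	static const struct dev_pm_ops pl330_pm = {
		/* run the suspend callback late so clients can still issue DMA
		 * requests during the rest of the dpm_suspend() phase
		 */
		SET_LATE_SYSTEM_SLEEP_PM_OPS(pl330_suspend, pl330_resume)
	};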
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20191205143746.24873-3-ulf.hansson@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Let's drop the boilerplate code in the system suspend/resume callbacks and
convert to use pm_runtime_force_suspend|resume(). This change also has a
nice side effect, as pm_runtime_force_resume() may decide to leave the
device in low power state, when that is feasible, thus avoiding to waste
both time and energy during system resume.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20191205143746.24873-2-ulf.hansson@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver forgets to call pm_runtime_disable and pm_runtime_put_sync in
probe failure and remove.
Add the calls and modify probe failure handling to fix it.
To simplify the fix, the patch adjusts the calling order and merges checks
for devm_kcalloc.
Fixes: 2b6b3b7420 ("ARM/dmaengine: edma: Merge the two drivers under drivers/dma/")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191124052855.6472-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Adjust indentation from spaces to tab (+optional two spaces) as in
coding style with command like:
$ sed -e 's/^ /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Link: https://lore.kernel.org/r/1574306348-29212-1-git-send-email-krzk@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The place where the macro SF_PDMA_REG_BASE() is defined causes kernel-doc
to pick up the wrong function declaration. Move it to the header file.
Signed-off-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20191118143554.16129-2-green.wan@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There are several comments starting with "/**" that are not meant as
function comments. This causes kernel-doc to parse the wrong string.
Replace "/**" with "/*" to fix them.
Signed-off-by: Green Wan <green.wan@sifive.com>
Link: https://lore.kernel.org/r/20191118143554.16129-1-green.wan@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When devm_kcalloc fails, the driver forgets to call edma_free_slot.
Replace the direct return with the failure handler to fix it.
Fixes: 1be5336bc7 ("dmaengine: edma: New device tree binding")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Link: https://lore.kernel.org/r/20191118073802.28424-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver calls of_dma_controller_register in probe but does not free
it in remove.
Add the call to fix it.
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Link: https://lore.kernel.org/r/20191115083153.12334-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver calls of_dma_controller_register in probe but does not free
it in remove.
Add the call to fix it.
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Link: https://lore.kernel.org/r/20191115083100.12220-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The Spreadtrum Audio compress offload mode will use 2-stage DMA transfer
to save power. That means we can request 2 dma channels, one for source
channel, and another one for destination channel. Once the source channel's
transaction is done, it will trigger the destination channel's transaction
automatically by hardware signal.
In this case, the source channel will transfer data from the IRAM buffer to
the DSP fifo for decoding/encoding; once the IRAM buffer is emptied by the
completed transfer, the destination channel will start to transfer data from
the DDR buffer to the IRAM buffer. Since the destination channel will use
link-list mode to fill the IRAM data, the IRAM buffer is allocated as 32K,
and the DDR buffer can be as large as 2M, we would need lots of link-list
nodes to do a cyclic transfer. Instead of wasting lots of link-list memory,
we can use the wrap address support to reduce the number of link-list nodes:
when the transfer address reaches the wrap address, the transfer address
will jump to the wrap_to address specified by the wrap_to register, and only
2 link-list nodes are needed to do a cyclic transfer from DDR to IRAM.
Thus this patch adds wrap address to support this case.
[Baolin Wang changes the commit message]
Signed-off-by: Eric Long <eric.long@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Link: https://lore.kernel.org/r/85a5484bc1f3dd53ce6f92700ad8b35f30a0b096.1571812029.git.baolin.wang@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the probe method dmam_pool_create is used. Therefore, there is no
need to explicitly call dmam_pool_destroy in the remove method, as this
will be automatically taken care of by devres.
Signed-off-by: Satendra Singh Thakur <sst2005@gmail.com>
Link: https://lore.kernel.org/r/20191109113609.6159-1-sst2005@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently, when devm_request_irq fails, the function
dma_async_device_unregister gets called. This doesn't free
the resources allocated by of_dma_controller_register.
Therefore, call of_dma_controller_free for this purpose.
Signed-off-by: Satendra Singh Thakur <sst2005@gmail.com>
Link: https://lore.kernel.org/r/20191109113523.6067-1-sst2005@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_irq() prints the error message, so the caller need not do so;
remove the error print for platform_get_irq() in this driver.
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20191106163128.1980714-2-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_irq() prints the error message, so the caller need not do so;
remove the error print for platform_get_irq() in this driver.
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20191106163128.1980714-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The remove callback misses disabling and unpreparing jzdma->clk.
Add a call to clk_disable_unprepare to fix it.
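A hedged sketch of the remove path (the field names and cleanup order are illustrative):

	static int jz4780_dma_remove(struct platform_device *pdev)
	{
		struct jz4780_dma_dev *jzdma = platform_get_drvdata(pdev);

		of_dma_controller_free(pdev->dev.of_node);
		free_irq(jzdma->irq, jzdma);
		dma_async_device_unregister(&jzdma->dma_device);
		clk_disable_unprepare(jzdma->clk);	/* balance the enable from probe */

		return 0;
	}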
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Acked-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20191104161622.11758-1-hslester96@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for AXI Multichannel Direct Memory Access (AXI MCDMA)
core, which is a soft Xilinx IP core that provides high-bandwidth
direct memory access between memory and AXI4-Stream target peripherals.
The AXI MCDMA core provides scatter-gather interface with multiple
independent transmit and receive channels. The driver supports
device_prep_slave_sg slave transfer mode.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571763622-29281-7-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Extend dma_config structure to store irq routine handle. It enables runtime
handler selection based on xdma_ip_type and serves as preparatory patch for
adding MCDMA IP support.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Suggested-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/1571763622-29281-6-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXI DMA multichannel support is deprecated in the IP and it is no
longer actively supported. For multichannel support, refer to the AXI
multichannel direct memory access IP product guide (PG228) and the MCDMA
driver. In line with that, remove the axidma multichannel support from
the driver.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571763622-29281-5-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Like paRAM slots, channels could be used by other cores and in this case
we need to make sure that the driver does not alter these channels.
Handle the generic dma-channel-mask property to mark channels in a bitmap
which can not be used by Linux and convert the legacy rsv_chans if it is
provided by platform_data.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20191025073056.25450-4-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Clang warns:
drivers/dma/fsl-dpaa2-qdma/dpdmai.c:148:25: warning: variable 'cfg' is
uninitialized when used within its own initialization [-Wuninitialized]
DPDMAI_CMD_CREATE(cmd, cfg);
~~~~~~~~~~~~~~~~~~~~~~~^~~~
drivers/dma/fsl-dpaa2-qdma/dpdmai.c:42:24: note: expanded from macro
'DPDMAI_CMD_CREATE'
typeof(_cfg) (cfg) = (_cfg); \
~~~ ^~~~
1 warning generated.
Looking at the preprocessed source, we can see that this is true.
int dpdmai_create(struct fsl_mc_io *mc_io, u32 cmd_flags,
		  const struct dpdmai_cfg *cfg, u16 *token)
{
	struct fsl_mc_command cmd = { 0 };
	int err;

	cmd.header = mc_encode_cmd_header((((0x90E) << 4) | 0), cmd_flags, 0);
	do {
		typeof(cmd)(cmd) = (cmd);
		typeof(cfg)(cfg) = (cfg);
		((cmd).params[0] |= mc_enc((8), (8), (cfg)->priorities[0]));
		((cmd).params[0] |= mc_enc((16), (8), (cfg)->priorities[1]));
	} while (0);
I cannot see a good reason to create another version of cfg when the
parameter one will work perfectly fine and cmd can just be used as is.
Remove them to fix this warning.
Fixes: f2835adf8a ("dmaengine: fsl-dpaa2-qdma: Add the DPDMAI(Data Path DMA Interface) support")
Link: https://github.com/ClangBuiltLinux/linux/issues/746
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lore.kernel.org/r/20191022171648.37732-1-natechancellor@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_irq_byname() might return -errno, which later would be cast
to an unsigned int and used in IRQ handling code, leading to usage of a
wrong ID and errors about a wrong irq_base.
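A hedged sketch of the needed check; the IRQ name and field are illustrative:

	ret = platform_get_irq_byname(pdev, "edma-tx");	/* illustrative IRQ name */
	if (ret < 0)
		return ret;	/* don't let a -errno value be used as an IRQ number */
	fsl_chan->txirq = ret;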
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Peng Ma <peng.ma@nxp.com>
Tested-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20191004150826.6656-1-krzk@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Yegor Yefremov <yegorslists@googlemail.com> reported that musb and ftdi
uart can fail for the first open of the uart unless connected using
a hub.
This is because the first dma call done by musb_ep_program() must wait
if cppi41 is PM runtime suspended. Otherwise musb_ep_program() continues
with other non-dma packets before the DMA transfer is started causing at
least ftdi uarts to fail to receive data.
Let's fix the issue by waking up cppi41 with PM runtime calls added to
cppi41_dma_prep_slave_sg() and return NULL if still idled. This way we
have musb_ep_program() continue with PIO until cppi41 is awake.
Fixes: fdea2d09b9 ("dmaengine: cppi41: Add basic PM runtime support")
Reported-by: Yegor Yefremov <yegorslists@googlemail.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Cc: stable@vger.kernel.org # v4.9+
Link: https://lore.kernel.org/r/20191023153138.23442-1-tony@atomide.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Whenever we reset the channel, we need to clear desc_pendingcount
along with desc_submitcount. Otherwise when a new transaction is
submitted, the irq coalesce level could be programmed to an incorrect
value in the axidma case.
This behavior can be observed when terminating pending transactions
with xilinx_dma_terminate_all() and then submitting new transactions
without releasing and requesting the channel.
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-8-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The driver should not run out of tx segments in normal operation. But,
if the user attempts to prepare a transaction with a large sg list,
the driver may not have enough free segments to accommodate the request.
Log a message at the debug level to inform the user in case they are
experiencing issues.
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-7-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Take advantage of dmaengine_desc_get_callback_invoke which allows either
a callback or callback_result to be specified. This can be useful when
using the AXI DMA to transfer unknown quantities of data, where the residue
contained in the result can be used to calculate the number of bytes
transferred.
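A hedged sketch of a client using callback_result; the context structure and names are illustrative:

	struct my_rx_ctx {
		size_t requested_len;
		size_t received_len;
	};

	static void my_rx_complete(void *param, const struct dmaengine_result *result)
	{
		struct my_rx_ctx *ctx = param;

		if (result->result != DMA_TRANS_NOERROR)
			return;				/* aborted or failed transfer */

		/* residue is the number of bytes NOT transferred */
		ctx->received_len = ctx->requested_len - result->residue;
	}

	/* hook it up on the prepared descriptor instead of ->callback */
	desc->callback_result = my_rx_complete;
	desc->callback_param = &rx_ctx;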
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-6-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Introduce a function that can calculate residues for IPs that support it:
AXI DMA and CDMA.
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-5-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The dma api provides a single interface to get the appropriate callback
and invoke it directly. Prefer using it.
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-3-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In descriptor cleanup, the call to desc_callback_valid can be safely
removed as both callback pointers, i.e. callback_result and callback,
are checked in invoke() anyway. There is not much benefit in having
redundant checks.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Reviewed-by: Appana Durga Kedareswara rao <appana.durga.rao@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-2-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Driver for the Socionext Milbeaut HDMAC controller. The controller has
up to 8 floating channels, which need a predefined slave-id to work
from a set of slaves.
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
Link: https://lore.kernel.org/r/20191015033359.14925-1-jassisinghbrar@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
bam_dma_terminate_all() will leak resources if any of the transactions are
committed to the hardware (present in the desc fifo), and not complete.
Since bam_dma_terminate_all() does not cause the hardware to be updated,
the hardware will still operate on any previously committed transactions.
This can cause memory corruption if the memory for the transaction has been
reassigned, and will cause a sync issue between the BAM and its client(s).
Fix this by properly updating the hardware in bam_dma_terminate_all().
Fixes: e7c0fe2a5c ("dmaengine: add Qualcomm BAM dma driver")
Signed-off-by: Jeffrey Hugo <jeffrey.l.hugo@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20191017152606.34120-1-jeffrey.l.hugo@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DPAA2 (Data Path Acceleration Architecture 2) qDMA supports
virtualized channels by allowing DMA jobs to be enqueued into
different work queues. The core can initiate a DMA transaction by
preparing a frame descriptor (FD) for each DMA job and enqueuing
this job through a hardware portal. DPAA2 components can also
prepare a FD and enqueue a DMA job through a hardware portal.
The qDMA prefetches DMA jobs through DPAA2 hardware portal. It
then schedules and dispatches to internal DMA hardware engines,
which generate read and write requests. Both qDMA source data and
destination data can be either contiguous or non-contiguous using
one or more scatter/gather tables.
The qDMA supports global bandwidth flow control where all DMA
transactions are stalled if the bandwidth threshold has been reached.
Also supported are transaction based read throttling.
Add NXP dppa2 qDMA to support some of Layerscape SoCs.
such as: LS1088A, LS208xA, LX2, etc.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20190930020440.7754-2-peng.ma@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The MC (Management Complex) exports the DPDMAI (Data Path DMA Interface)
object as an interface to operate the DPAA2 (Data Path Acceleration
Architecture 2) qDMA engine. The DPDMAI enables sending frame-based
requests to the qDMA and receiving back confirmation responses on
transaction completion, utilizing the DPAA2 QBMan (Queue Manager and
Buffer Manager hardware) infrastructure. The DPDMAI object provides up
to two priorities for processing qDMA requests.
The following list summarizes the DPDMAI main features and capabilities:
1. Supports up to two scheduling priorities for processing
service requests.
- Each DPDMAI transmit queue is mapped to one of two service
priorities, allowing further prioritization in hardware between
requests from different DPDMAI objects.
2. Supports up to two receive queues for incoming transaction
completion confirmations.
- Each DPDMAI receive queue is mapped to one of two receive
priorities, allowing further prioritization between other
interfaces when associating the DPDMAI receive queues to DPIO
or DPCON(Data Path Concentrator) objects.
3. Supports different scheduling options for processing received
packets:
- Queues can be configured either in 'parked' mode (default),
attached to a DPIO object, or attached to a DPCON object.
4. Allows interaction with one or more DPIO objects for
dequeueing/enqueueing frame descriptors(FD) and for
acquiring/releasing buffers.
5. Supports enable, disable, and reset operations.
Add dpdmai to support platforms with the DPAA2 qDMA engine.
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Link: https://lore.kernel.org/r/20190930020440.7754-1-peng.ma@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If we terminate the channel to free all descriptors associated with it,
we will leak the memory of the current descriptor if it is not completed,
since it has already been deleted from the desc_issued list but has not
yet been added to the desc_completed list.
Thus, when freeing the descriptors associated with one channel, we should
check whether the current descriptor is completed; if not, we should free
it as well to avoid this issue.
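A hedged sketch of the shape of such a fix, using illustrative names for the
channel's in-flight descriptor (assuming a virt-dma based driver that keeps
the running descriptor in schan->cur_desc):
        struct virt_dma_desc *cur_vd = NULL;
        unsigned long flags;

        spin_lock_irqsave(&schan->vc.lock, flags);
        if (schan->cur_desc) {
                /* The descriptor being executed is on neither list, so it
                 * must be freed explicitly or it leaks. */
                cur_vd = &schan->cur_desc->vd;
                schan->cur_desc = NULL;
        }
        spin_unlock_irqrestore(&schan->vc.lock, flags);

        if (cur_vd)
                sprd_dma_free_desc(cur_vd);
        vchan_free_chan_resources(&schan->vc);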
Fixes: 9b3b8171f7 ("dmaengine: sprd: Add Spreadtrum DMA driver")
Reported-by: Zhenfang Wang <zhenfang.wang@unisoc.com>
Tested-by: Zhenfang Wang <zhenfang.wang@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Link: https://lore.kernel.org/r/170dbbc6d5366b6fa974ce2d366652e23a334251.1570609788.git.baolin.wang@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In vdma_channel_set_config, clear the delay, frame count and master mask
before updating them with new values. This avoids programming an incorrect
state when the input parameters differ from the defaults.
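A hedged sketch of the clear-before-set pattern (the mask and shift names
are placeholders, not necessarily the driver's exact DMACR field macros):
        u32 dmacr = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);

        /* Drop the old delay and frame-count fields before ORing in the
         * caller-supplied values, so stale bits cannot survive. */
        dmacr &= ~(XILINX_DMA_DMACR_FRAMECNT_MASK | XILINX_DMA_DMACR_DELAY_MASK);
        dmacr |= cfg->coalesc << XILINX_DMA_DMACR_FRAMECNT_SHIFT;
        dmacr |= cfg->delay << XILINX_DMA_DMACR_DELAY_SHIFT;
        dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, dmacr);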
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Acked-by: Appana Durga Kedareswara rao <appana.durga.rao@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/1569495060-18117-3-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the DMA probe, the driver checks the devm_clk_get() return value and
prints an error message in the failing case. However, for -EPROBE_DEFER
this message is confusing, so avoid it.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/1569495060-18117-5-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Replace the chain of platform_get_resource() and devm_ioremap_resource()
with devm_platform_ioremap_resource(). It simplifies the flow and there
is no functional change.
Fixes the below Coccinelle warning:
WARNING: Use devm_platform_ioremap_resource for xdev -> regs
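The transformation is the usual one; roughly, as a sketch matching the
warning's xdev->regs:
        /* Before */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        xdev->regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(xdev->regs))
                return PTR_ERR(xdev->regs);

        /* After */
        xdev->regs = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(xdev->regs))
                return PTR_ERR(xdev->regs);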
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1569495060-18117-4-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Don't populate the array 'handler' on the stack but instead make it
static const. Makes the object code smaller by 80 bytes.
Before:
text data bss dec hex filename
38225 9084 64 47373 b90d drivers/dma/iop-adma.o
After:
text data bss dec hex filename
38081 9148 64 47293 b8bd drivers/dma/iop-adma.o
(gcc version 9.2.1, amd64)
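A hedged illustration of the change (the handler names are the iop-adma
interrupt handlers as best recalled; treat them as illustrative):
        /* Before: the array is rebuilt on the stack on every call */
        irq_handler_t handler[] = { iop_adma_eot_handler,
                                    iop_adma_eoc_handler,
                                    iop_adma_err_handler };

        /* After: one read-only copy in .rodata, smaller object code */
        static const irq_handler_t handler[] = { iop_adma_eot_handler,
                                                 iop_adma_eoc_handler,
                                                 iop_adma_err_handler };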
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20190905163726.19690-1-colin.king@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
On some platforms the clock can be a fixed-rate, always-running one, and
there is no need to do anything with it.
In order to support those platforms, switch to using the optional clock API.
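A hedged sketch of the switch (the "hclk" clock name is assumed from the
dw platform driver and may differ):
        /* Before: a platform without the clock fails to probe */
        chip->clk = devm_clk_get(chip->dev, "hclk");
        if (IS_ERR(chip->clk))
                return PTR_ERR(chip->clk);

        /* After: a missing clock yields NULL, and clk_prepare_enable(NULL)
         * is a no-op, so fixed-rate/always-on platforms just work */
        chip->clk = devm_clk_get_optional(chip->dev, "hclk");
        if (IS_ERR(chip->clk))
                return PTR_ERR(chip->clk);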
Fixes: f8d9ddbc28 ("dmaengine: dw: platform: Enable iDMA 32-bit on Intel Elkhart Lake")
Depends-on: 60b8f0ddf1 ("clk: Add (devm_)clk_get_optional() functions")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20190924085116.83683-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Illegal memory will be touched if SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3
(41) exceeds the size of struct sdma_script_start_addrs (40), causing
memory corruption, for example of a slob block header, so that the
kernel traps into a while() loop forever in slob_free(). Please refer
to the below code piece in imx-sdma.c:
        for (i = 0; i < sdma->script_number; i++)
                if (addr_arr[i] > 0)
                        saddr_arr[i] = addr_arr[i]; /* memory corruption here */
That issue was introduced by commit a572460be9 ("dmaengine: imx-sdma: Add
support for version 3 firmware") because SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3
(38->41, 3 scripts added) does not align with the number of scripts added
to sdma_script_start_addrs (2 scripts).
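One way to guard the copy, sketched under the assumption that the loop
above runs against a firmware-provided script count:
        /* Never copy more entries than struct sdma_script_start_addrs can
         * hold, whatever the firmware header claims. */
        if (sdma->script_number >
            sizeof(struct sdma_script_start_addrs) / sizeof(s32)) {
                dev_err(sdma->dev,
                        "script number %d exceeds fw regions, limiting\n",
                        sdma->script_number);
                sdma->script_number =
                        sizeof(struct sdma_script_start_addrs) / sizeof(s32);
        }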
Fixes: a572460be9 ("dmaengine: imx-sdma: Add support for version 3 firmware")
Cc: stable@vger.kernel
Link: https://www.spinics.net/lists/arm-kernel/msg754895.html
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Reported-by: Jurgen Lambrecht <J.Lambrecht@TELEVIC.com>
Link: https://lore.kernel.org/r/1569347584-3478-1-git-send-email-yibin.gong@nxp.com
[vkoul: update the patch title]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation by using a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Link: https://lore.kernel.org/r/85de79fa-1ca5-a1e5-0296-9e8a2066f134@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation by using a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: https://lore.kernel.org/r/d36b6a6c-2e3d-8d68-6ddc-969a377ca3b2@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation by using a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: https://lore.kernel.org/r/366e776c-8760-eeb7-c248-7380c9f4fd34@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation a bit by using
a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: https://lore.kernel.org/r/c7e3bbae-44fa-9019-18ee-c6cdfd7c2a14@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation by using a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: https://lore.kernel.org/r/aaed7862-49bb-e368-3e7b-5cc2c3d915b1@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation a bit by using
a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: https://lore.kernel.org/r/5dd19f28-349a-4957-ea3a-6aebbd7c97e2@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Simplify this function implementation by using a known wrapper function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Link: https://lore.kernel.org/r/377247f3-b53a-a9d9-66c7-4b8515de3809@web.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
From Tegra186 onwards an OUTSTANDING_REQUESTS field is added in the channel
configuration register (bits 7:4), which defines the maximum number of reads
from the source and writes to the destination that may be outstanding at
any given point of time. This field must be programmed with a value
between 1 and 8. A value of 0 will prevent any transfers from happening.
Thus add a 'has_outstanding_reqs' bool member to the chip data structure. It
is set to false for Tegra210, since the field is not applicable, and to true
for Tegra186, where the channel configuration is updated with the maximum
number of outstanding requests.
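A hedged sketch of the shape of the change; the macro below is illustrative
for bits 7:4 of the channel configuration register, not necessarily the
driver's exact name:
        #define ADMA_CH_CONFIG_OUTSTANDING_REQS(reqs)   (((reqs) & 0xf) << 4)

        struct tegra_adma_chip_data {
                ...
                bool    has_outstanding_reqs;
        };

        /* When programming the channel configuration: */
        if (cdata->has_outstanding_reqs)
                ch_config |= ADMA_CH_CONFIG_OUTSTANDING_REQS(8);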
Fixes: 433de642a7 ("dmaengine: tegra210-adma: add support for Tegra186/Tegra194")
Cc: stable@vger.kernel.org
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/r/1568626513-16541-1-git-send-email-spujar@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch adds support for the dma-channel-mask property, so that some
DMA channels can be reserved (left unused by this driver) for some reason,
for example because a heterogeneous CPU uses them.
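A hedged sketch of the consuming side in a driver's OF parsing (variable
names are illustrative; the property is a bitmask of usable channels):
        u32 channels_mask;

        /* Default to "all channels usable" when the property is absent. */
        if (of_property_read_u32(np, "dma-channel-mask", &channels_mask))
                channels_mask = GENMASK(n_channels - 1, 0);

        /* Channels whose bit is cleared are skipped during registration. */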
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/1568010892-17606-5-git-send-email-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch uses devm_platform_ioremap_resource() instead of using
platform_get_resource() and devm_ioremap_resource() together, to
simplify the code.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Link: https://lore.kernel.org/r/1568010892-17606-4-git-send-email-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since the memory mapping of the DMAC will change in the future,
this patch uses of_data values instead of a macro to calculate
each channel's base offset.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Link: https://lore.kernel.org/r/1568010892-17606-3-git-send-email-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We set the link-list pointer register to point to the next link-list
configuration's physical address, which lets the hardware load the DMA
configuration from the link-list node automatically.
But the link-list node's physical address can be larger than 32 bits,
and the Spreadtrum DMA driver currently only supports 32-bit physical
addresses, which may cause an incorrect DMA configuration to be loaded
when starting the link-list transfer mode. According to the DMA
datasheet, we can use the SRC_BLK_STEP register (bit 28 - bit 31) to
save the high bits of the link-list node's physical address to fix
this issue.
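A hedged sketch with illustrative field names, stashing the high address
bits in SRC_BLK_STEP[31:28] when each node is set up:
        /* llist_phys is the physical address of the next link-list node. */
        hw->llist_ptr = lower_32_bits(llist_phys);
        hw->src_blk_step |= (upper_32_bits(llist_phys) & 0xf) << 28;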
Fixes: 4ac6954647 ("dmaengine: sprd: Support DMA link-list mode")
Signed-off-by: Zhenfang Wang <zhenfang.wang@unisoc.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Link: https://lore.kernel.org/r/eadfe9295499efa003e1c344e67e2890f9d1d780.1568267061.git.baolin.wang@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
drivers/dma/ti/edma.c: In function edma_probe:
drivers/dma/ti/edma.c:2252:11: warning:
variable off set but not used [-Wunused-but-set-variable]
'off' is not used now, so remove it.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20190905060249.23928-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Replace the chain of platform_get_resource() and devm_ioremap_resource()
with devm_platform_ioremap_resource().
This allows removing the local variable for (struct resource *), and
results in one fewer function call.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Link: https://lore.kernel.org/r/20190905034133.29514-1-yamada.masahiro@socionext.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'mips_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS updates from Paul Burton:
"Main MIPS changes:
- boot_mem_map is removed, providing a nice cleanup made possible by
the recent removal of bootmem.
- Some fixes to atomics, in general providing compiler barriers for
smp_mb__{before,after}_atomic plus fixes specific to Loongson CPUs
or MIPS32 systems using cmpxchg64().
- Conversion to the new generic VDSO infrastructure courtesy of
Vincenzo Frascino.
- Removal of undefined behavior in set_io_port_base(), fixing the
behavior of some MIPS kernel configurations when built with recent
clang versions.
- Initial MIPS32 huge page support, functional on at least Ingenic
SoCs.
- pte_special() is now supported for some configurations, allowing
among other things generic fast GUP to be used.
- Miscellaneous fixes & cleanups.
And platform specific changes:
- Major improvements to Ingenic SoC support from Paul Cercueil,
mostly enabled by the inclusion of the new TCU (timer-counter unit)
drivers he's spent a very patient year or so working on. Plus some
fixes for X1000 SoCs from Zhou Yanjie.
- Netgear R6200 v1 systems are now supported by the bcm47xx platform.
- DT updates for BMIPS, Lantiq & Microsemi Ocelot systems"
* tag 'mips_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (89 commits)
MIPS: Detect bad _PFN_SHIFT values
MIPS: Disable pte_special() for MIPS32 with RiXi
MIPS: ralink: deactivate PCI support for SOC_MT7621
mips: compat: vdso: Use legacy syscalls as fallback
MIPS: Drop Loongson _CACHE_* definitions
MIPS: tlbex: Remove cpu_has_local_ebase
MIPS: tlbex: Simplify r3k check
MIPS: Select R3k-style TLB in Kconfig
MIPS: PCI: refactor ioc3 special handling
mips: remove ioremap_cachable
mips/atomic: Fix smp_mb__{before,after}_atomic()
mips/atomic: Fix loongson_llsc_mb() wreckage
mips/atomic: Fix cmpxchg64 barriers
MIPS: Octeon: remove duplicated include from dma-octeon.c
firmware: bcm47xx_nvram: Allow COMPILE_TEST
firmware: bcm47xx_nvram: Correct size_t printf format
MIPS: Treat Loongson Extensions as ASEs
MIPS: Remove dev_err() usage after platform_get_irq()
MIPS: dts: mscc: describe the PTP ready interrupt
MIPS: dts: mscc: describe the PTP register range
...
Merge tag 'dmaengine-5.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- Move Dmaengine DT bindings to YAML and convert Allwinner to schema.
- FSL dma device_synchronize implementation
- DW split acpi and of helpers and updates to driver and support for
Elkhart Lake
- Move filter fn as private for omap-dma and edma drivers and
improvements to these drivers
- Mark expected switch fall-through in couple of drivers
- Renames of shdma and nbpfaxi binding document
- Minor updates to bunch of drivers
* tag 'dmaengine-5.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (55 commits)
dmaengine: ti: edma: Use bitmap_set() instead of open coded edma_set_bits()
dmaengine: ti: edma: Only reset region0 access registers
dmaengine: ti: edma: Do not reset reserved paRAM slots
dmaengine: iop-adma.c: fix printk format warning
dmaengine: stm32-dma: Use struct_size() helper
dt-bindings: dmaengine: dma-common: Fix the dma-channel-mask property
dmanegine: ioat/dca: Use struct_size() helper
dmaengine: iop-adma: remove set but not used variable 'slots_per_op'
dmaengine: dmatest: Add support for completion polling
dmaengine: ti: omap-dma: Remove variable override in omap_dma_tx_status()
dmaengine: ti: omap-dma: Remove 'Assignment in if condition'
dmaengine: ti: edma: Remove 'Assignment in if condition'
dmaengine: dw: platform: Split OF helpers to separate module
dmaengine: dw: platform: Split ACPI helpers to separate module
dmaengine: dw: platform: Move handle check to dw_dma_acpi_controller_register()
dmaengine: dw: platform: Switch to acpi_dma_controller_register()
dmaengine: dw: platform: Use devm_platform_ioremap_resource()
dmaengine: dw: platform: Enable iDMA 32-bit on Intel Elkhart Lake
dmaengine: dw: platform: Use struct dw_dma_chip_pdata
dmaengine: dw: Export struct dw_dma_chip_pdata for wider use
...
Merge tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull ARM SoC platform updates from Arnd Bergmann:
"The main change this time around is a cleanup of some of the oldest
platforms based on the XScale and ARM9 CPU cores, which are between 10
and 20 years old.
The Kendin/Micrel/Microchip KS8695, Winbond/Nuvoton W90x900 and Intel
IOP33x/IOP13xx platforms are removed after we determined that nobody
is using them any more.
The TI Davinci and NXP LPC32xx platforms on the other hand are still
in active use and are converted to the ARCH_MULTIPLATFORM build,
meaning that we can compile a kernel that works on these along with
most other ARMv5 platforms. Changes toward that goal are also merged
for IOP32x, but additional work is needed to complete this. Patches
for the remaining ARMv5 platforms have started but need more work and
some testing.
Support for the new ASpeed AST2600 gets added, this is based on the
Cortex-A7 ARMv7 core, and is a newer version of the existing ARMv5 and
ARMv6 chips in the same family.
Other changes include a cleanup of the ST-Ericsson ux500 platform and
the move of the TI Davinci platform to a new clocksource driver"
[ The changes had marked INTEL_IOP_ADMA and USB_LPC32XX as being
buildable on other platforms through COMPILE_TEST, but that causes new
warnings that I most definitely do not want to see during the merge
window as that could hide other issues.
So the COMPILE_TEST option got disabled for them again - Linus ]
* tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (61 commits)
ARM: multi_v5_defconfig: make DaVinci part of the ARM v5 multiplatform build
ARM: davinci: support multiplatform build for ARM v5
arm64: exynos: Enable exynos-chipid driver
ARM: OMAP2+: Delete an unnecessary kfree() call in omap_hsmmc_pdata_init()
ARM: OMAP2+: move platform-specific asm-offset.h to arch/arm/mach-omap2
ARM: davinci: dm646x: Fix a typo in the comment
ARM: davinci: dm646x: switch to using the clocksource driver
ARM: davinci: dm644x: switch to using the clocksource driver
ARM: aspeed: Enable SMP boot
ARM: aspeed: Add ASPEED AST2600 architecture
ARM: aspeed: Select timer in each SoC
dt-bindings: arm: cpus: Add ASPEED SMP
ARM: imx: stop adjusting ar8031 phy tx delay
mailmap: map old company name to new one @microchip.com
MAINTAINERS: at91: remove the TC entry
MAINTAINERS: at91: Collect all pinctrl/gpio drivers in same entry
ARM: at91: move platform-specific asm-offset.h to arch/arm/mach-at91
MAINTAINERS: Extend patterns for Samsung SoC, Security Subsystem and clock drivers
ARM: s3c64xx: squash samsung_usb_phy.h into setup-usb-phy.c
ARM: debug-ll: Add support for r7s9210
...
The BCM2835 DMA controller is capable of synthesizing zeroes instead of
copying them from a source address. The feature is enabled by setting
the SRC_IGNORE bit in the Transfer Information field of a Control Block:
"Do not perform source reads.
In addition, destination writes will zero all the write strobes.
This is used for fast cache fill operations."
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
The feature is only available on 8 of the 16 channels. The others are
so-called "lite" channels with a limited feature set and performance.
Enable the feature if a cyclic transaction copies from the zero page.
This reduces traffic on the memory bus.
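A hedged sketch of the check in the cyclic prep path (names illustrative;
this assumes the driver keeps a pre-mapped zero page for the purpose):
        /* Source is the zero page: let the engine synthesize zeroes instead
         * of reading them across the memory bus. */
        if (buf_addr == od->zero_page)
                info |= BCM2835_DMA_S_IGNORE;   /* TI.SRC_IGNORE */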
A forthcoming use case is the BCM2835 SPI driver, which will cyclically
copy from the zero page to the TX FIFO. The idea to use SRC_IGNORE was
taken from an ancient GitHub conversation between Martin and Noralf:
https://github.com/msperl/spi-bcm2835/issues/13#issuecomment-98180451
Tested-by: Nuno Sá <nuno.sa@analog.com>
Tested-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Acked-by: Stefan Wahren <wahrenst@gmx.net>
Acked-by: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Kauer <florian.kauer@koalo.de>
Link: https://lore.kernel.org/r/b2286c904408745192e4beb3de3c88f73e4a7210.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <broonie@kernel.org>
Document the BCM2835 DMA driver's device data structure so that upcoming
commits may add further members with proper kerneldoc.
Tested-by: Nuno Sá <nuno.sa@analog.com>
Tested-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Acked-by: Stefan Wahren <wahrenst@gmx.net>
Acked-by: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Kauer <florian.kauer@koalo.de>
Link: https://lore.kernel.org/r/78648f80f67d97bb7beecc1b9be6b6e4a45bc1d8.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <broonie@kernel.org>
The DMA engine API requires DMA drivers to explicitly allow that
descriptors are prepared once and reused multiple times. Only a
single driver makes use of this functionality so far (pxa_dma.c,
to speed up pxa_camera.c).
We're about to add another use case for reusable descriptors in
the BCM2835 SPI driver, so allow that in the BCM2835 DMA driver.
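The opt-in itself is a single flag on the dma_device at registration time;
a minimal sketch, assuming the driver's usual od->ddev naming:
        od->ddev.descriptor_reuse = true;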
Tested-by: Nuno Sá <nuno.sa@analog.com>
Tested-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Acked-by: Stefan Wahren <wahrenst@gmx.net>
Acked-by: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Kauer <florian.kauer@koalo.de>
Cc: Robert Jarzmik <robert.jarzmik@free.fr>
Link: https://lore.kernel.org/r/bfc98a38225bbec4158440ad06cb9eee675e3e6f.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <broonie@kernel.org>
The BCM2835 DMA driver currently requests an interrupt from the
controller regardless of whether or not the client has passed in the
DMA_PREP_INTERRUPT flag. This causes unnecessary overhead for cyclic
transactions which do not need an interrupt after each period.
We're about to add such a use case, namely cyclic clearing of the SPI
controller's RX FIFO, so amend the DMA driver to request an interrupt
only if DMA_PREP_INTERRUPT was passed in. Ignore the period_len for
such transactions and set it to the buffer length to make the driver's
calculations work.
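A hedged sketch of the prep_dma_cyclic logic (the interrupt-enable constant
is illustrative):
        if (flags & DMA_PREP_INTERRUPT) {
                /* Client wants a callback per period. */
                info |= BCM2835_DMA_INT_EN;
        } else {
                /* No interrupts needed: treat the whole buffer as one
                 * period so the descriptor math still works. */
                period_len = buf_len;
        }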
Tested-by: Nuno Sá <nuno.sa@analog.com>
Tested-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
Acked-by: Stefan Wahren <wahrenst@gmx.net>
Acked-by: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Kauer <florian.kauer@koalo.de>
Link: https://lore.kernel.org/r/73cf37be56eb4cbe6f696057c719f3a38cbaf26e.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <broonie@kernel.org>
Merge tag 'dmaengine-fix-5.3' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
"Some late fixes for drivers:
- memory leak in ti crossbar dma driver
- cleanup of omap dma probe
- Fix for link list configuration in sprd dma driver
- Handling fixed for DMACHCLR if iommu is mapped in rcar dma"
* tag 'dmaengine-fix-5.3' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: rcar-dmac: Fix DMACHCLR handling if iommu is mapped
dmaengine: sprd: Fix the DMA link-list configuration
dmaengine: ti: omap-dma: Add cleanup in omap_dma_probe()
dmaengine: ti: dma-crossbar: Fix a memory leak bug
The commit 20c169aceb ("dmaengine: rcar-dmac: clear pertinence
number of channels") forgets to clear the last channel by
DMACHCLR in rcar_dmac_init() (and doesn't need to clear the first
channel) if iommu is mapped to the device. So, this patch fixes it
by using "channels_mask" bitfield.
Note that the hardware and driver don't support more than 32 bits
in DMACHCLR register anyway, so this patch should reject more than
32 channels in rcar_dmac_parse_of().
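A hedged sketch of the init path using the new bitfield (register accessor
and register name as best recalled from the rcar-dmac driver):
        /* Clear only the channels this instance actually owns. */
        rcar_dmac_write(dmac, RCAR_DMACHCLR, dmac->channels_mask);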
Fixes: 20c169aceb ("dmaengine: rcar-dmac: clear pertinence number of channels")
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/1567424643-26629-1-git-send-email-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For the Spreadtrum DMA link-list mode, when the DMA engine gets a slave
hardware request, it triggers the engine to load the DMA configuration
from the link-list memory automatically. But before the slave hardware
request arrives, the slave will get an incorrect residue, because the
first node, which is only used to trigger the link-list, was configured
with the last source address and destination address.
Thus we should make sure the first node is configured with the start
source address and destination address, which fixes this issue.
Fixes: 4ac6954647 ("dmaengine: sprd: Support DMA link-list mode")
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Link: https://lore.kernel.org/r/77868edb7aff9d5cb12ac3af8827ef2e244441a6.1567150471.git.baolin.wang@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix printk format warning in iop-adma.c (seen on x86_64) by using
%pad:
../drivers/dma/iop-adma.c:118:12: warning: format ‘%x’ expects argument of type ‘unsigned int’, but argument 6 has type ‘dma_addr_t {aka long long unsigned int}’ [-Wformat=]
Fixes: c211092313 ("dmaengine: driver for the iop32x, iop33x, and iop13xx raid engines")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Link: https://lore.kernel.org/r/1803541f-98a6-7cce-b050-ff1e9a333ab2@infradead.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct stm32_dma_desc {
        ...
        struct stm32_dma_sg_req sg_req[];
};
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes.
So, replace the following function:
static struct stm32_dma_desc *stm32_dma_alloc_desc(u32 num_sgs)
{
        return kzalloc(sizeof(struct stm32_dma_desc) +
                       sizeof(struct stm32_dma_sg_req) * num_sgs, GFP_NOWAIT);
}
with:
kzalloc(struct_size(desc, sg_req, num_sgs), GFP_NOWAIT)
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20190830161423.GA3483@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct ioat_dca_priv {
        ...
        struct ioat_dca_slot req_slots[0];
};
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes.
So, replace the following form:
sizeof(*ioatdca) + (sizeof(struct ioat_dca_slot) * slots)
with:
struct_size(ioatdca, req_slots, slots)
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20190828184015.GA4273@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/dma/iop-adma.c: In function iop_adma_tx_submit:
drivers/dma/iop-adma.c:367:6: warning:
variable slots_per_op set but not used [-Wunused-but-set-variable]
It is never used, so can be removed.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20190821121908.7468-1-yuehaibing@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With the polled parameter, the DMA drivers can be tested to check whether
they work correctly when no completion is requested (no DMA_PREP_INTERRUPT
flag and no callback provided).
If polled mode is selected then use dma_sync_wait() to execute the test
iteration instead of relying on the completion callback.
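A hedged sketch of the two completion paths in the test loop (simplified
relative to the real dmatest internals):
        if (polled) {
                /* No DMA_PREP_INTERRUPT, no callback: poll the cookie. */
                status = dma_sync_wait(chan, cookie);
        } else {
                /* Normal path: the completion callback wakes us up. */
                wait_for_completion_timeout(&done, msecs_to_jiffies(timeout));
                status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
        }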
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20190731071438.24075-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>