In the quest to remove all stack VLA usage from the kernel[1], this
switches to using a pre-allocated scratch register space, set up with
all other allocations.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The vertical flip state is exported in xilinx_vdma_config and, depending
on the IP configuration (c_enable_vert_flip), the vertical flip state is
programmed into hardware.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The d->chans[] array has d->dma_requests elements so the > should be
>= here.
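A minimal sketch of the corrected bounds check, with the surrounding context reconstructed from the description above rather than copied from the driver source:

  /* d->chans[] holds exactly d->dma_requests entries, so the request
   * index must be strictly below that count. */
  if (request >= d->dma_requests)
          return NULL;
  c = &d->chans[request];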
Fixes: 8e6152bc66 ("dmaengine: Add hisilicon k3 DMA engine driver")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The reported residue is already calculated in BURST unit granularity, so
advertise this capability properly to other devices in the system.
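The field and enum below are real dmaengine API names; where the assignment sits in the driver is illustrative (pd being the driver's struct dma_device):

  /* tell clients that tx_status() residue is rounded to burst size */
  pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;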
Fixes: aee4d1fac8 ("dmaengine: pl330: improve pl330_tx_status() function")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
rcar-dmac has 2 types of interrupt: 1) an error IRQ (shared by all
channels), 2) an IRQ for each channel.
If an error happens on some channels, the error IRQ is handled by 1),
and "all" channels are restarted.
In this design, the error handling itself becomes a problem for the
users of the non-error channels.
This patch removes the 1) handler and handles the error IRQ in 2)
Signed-off-by: Magnus Damm <damm+renesas@opensource.se>
[Kuninori: updated patch to adjust DMACHCR/DMAOR]
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Nguyen Viet Dung <nv-dung@jinso.co.jp>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It seems that starting with Skylake Xeon, channel reset clears the
completion address register. Make sure the completion address register is
set again after reset.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Commit 0198d7bb8a ("ASoC: omap-mcbsp: Convert to use the sdma-pcm
instead of omap-pcm") resulted in broken audio playback on OMAP1510
(discovered on Amstrad Delta).
When running on OMAP1510, omap-pcm used to obtain DMA offset from
snd_dmaengine_pcm_pointer_no_residue() based on DMA interrupt triggered
software calculations instead of snd_dmaengine_pcm_pointer() which
depended on residue value calculated from omap_dma_get_src_pos().
A similar code path is still available in the now-used
sound/soc/soc-generic-dmaengine-pcm.c, but it is not triggered.
It was verified already before that omap_get_dma_src_pos() from
arch/arm/plat-omap/dma.c didn't work correctly for OMAP1510 - see
commit 1bdd741991 ("ASoC: OMAP: fix OMAP1510 broken PCM pointer
callback") for details. Apparently the same applies to its successor,
omap_dma_get_src_pos() from drivers/dma/ti/omap-dma.c.
On the other hand, snd_dmaengine_pcm_pointer_no_residue() is described
as deprecated and discouraged for use in new drivers because of its
unreliable accuracy. However, it seems the only working option for
OMAP1510 now, as long as a software-calculated residue is not
implemented as an OMAP1510 fallback in omap-dma.
Using snd_dmaengine_pcm_pointer_no_residue() code path instead of
snd_dmaengine_pcm_pointer() in sound/soc/soc-generic-dmaengine-pcm.c
can be triggered in two ways:
- by passing pcm->flags |= SND_DMAENGINE_PCM_FLAG_NO_RESIDUE from
sound/soc/omap/sdma-pcm.c,
- by passing dma_caps.residue_granularity =
DMA_RESIDUE_GRANULARITY_DESCRIPTOR from DMA engine.
Let's do the latter.
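A hedged sketch of that choice, using the real dmaengine field and enum; 'dd' stands for the driver's struct dma_device and the exact placement in omap-dma is illustrative:

  /* Report descriptor-level residue granularity so that
   * soc-generic-dmaengine-pcm falls back to the no-residue pointer path. */
  dd->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;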
Signed-off-by: Janusz Krzysztofik <jmkrzyszt@gmail.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Matching what the former drcmr value of -1 meant, add this as a default
for each channel, i.e. by default no requestor line is used.
This is specifically used for network drivers smc91x and smc911x, and
needed for their port to slave maps.
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Vinod Koul <vkoul@kernel.org>
In order to remove the specific knowledge of the dma mapping from PXA
drivers, add a default slave map for pxa architectures.
This won't impact the MMP architecture, but is aimed only at PXA boards.
This is the first step, and once all drivers are converted,
pxad_filter_fn() will be made static, and the DMA resources removed from
device.c.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Vinod Koul <vkoul@kernel.org>
As files move around, their previous links break. Fix the
references for them.
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Jonathan Corbet <corbet@lwn.net>
- Updates to sprd, bam_dma, stm drivers.
- Removal of VLAs in dmatest.
- Move TI drivers to their own subdir.
- Switch to SPDX tags for imx/mxs dma drivers.
- Simplify getting .drvdata on a bunch of drivers by Wolfram Sang.
Merge tag 'dmaengine-4.18-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
- updates to sprd, bam_dma, stm drivers
- remove VLAs in dmatest
- move TI drivers to their own subdir
- switch to SPDX tags for imx/mxs dma drivers
- simplify getting .drvdata on a bunch of drivers by Wolfram Sang
* tag 'dmaengine-4.18-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (32 commits)
dmaengine: sprd: Add Spreadtrum DMA configuration
dmaengine: sprd: Optimize the sprd_dma_prep_dma_memcpy()
dmaengine: imx-dma: Switch to SPDX identifier
dmaengine: mxs-dma: Switch to SPDX identifier
dmaengine: imx-sdma: Switch to SPDX identifier
dmaengine: usb-dmac: Document R8A7799{0,5} bindings
dmaengine: qcom: bam_dma: fix some doc warnings.
dmaengine: qcom: bam_dma: fix invalid assignment warning
dmaengine: sprd: fix an NULL vs IS_ERR() bug
dmaengine: sprd: Use devm_ioremap_resource() to map memory
dmaengine: sprd: Fix potential NULL dereference in sprd_dma_probe()
dmaengine: pl330: flush before wait, and add dev burst support.
dmaengine: axi-dmac: Request IRQ with IRQF_SHARED
dmaengine: stm32-mdma: fix spelling mistake: "avalaible" -> "available"
dmaengine: rcar-dmac: Document R-Car D3 bindings
dmaengine: sprd: Move DMA request mode and interrupt type into head file
dmaengine: sprd: Define the DMA data width type
dmaengine: sprd: Define the DMA transfer step type
dmaengine: ti: New directory for Texas Instruments DMA drivers
dmaengine: shdmac: Change platform check to CONFIG_ARCH_RENESAS
...
- Use overflow helpers in 2-factor allocators (Kees, Rasmus)
- Introduce overflow test module (Rasmus, Kees)
- Introduce saturating size helper functions (Matthew, Kees)
- Treewide use of struct_size() for allocators (Kees)
Merge tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull overflow updates from Kees Cook:
"This adds the new overflow checking helpers and adds them to the
2-factor argument allocators. And this adds the saturating size
helpers and does a treewide replacement for the struct_size() usage.
Additionally this adds the overflow testing modules to make sure
everything works.
I'm still working on the treewide replacements for allocators with
"simple" multiplied arguments:
*alloc(a * b, ...) -> *alloc_array(a, b, ...)
and
*zalloc(a * b, ...) -> *calloc(a, b, ...)
as well as the more complex cases, but that's separable from this
portion of the series. I expect to have the rest sent before -rc1
closes; there are a lot of messy cases to clean up.
Summary:
- Introduce arithmetic overflow test helper functions (Rasmus)
- Use overflow helpers in 2-factor allocators (Kees, Rasmus)
- Introduce overflow test module (Rasmus, Kees)
- Introduce saturating size helper functions (Matthew, Kees)
- Treewide use of struct_size() for allocators (Kees)"
* tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
treewide: Use struct_size() for devm_kmalloc() and friends
treewide: Use struct_size() for vmalloc()-family
treewide: Use struct_size() for kmalloc()-family
device: Use overflow helpers for devm_kmalloc()
mm: Use overflow helpers in kvmalloc()
mm: Use overflow helpers in kmalloc_array*()
test_overflow: Add memory allocation overflow tests
overflow.h: Add allocation size calculation helpers
test_overflow: Report test failures
test_overflow: macrofy some more, do more tests for free
lib: add runtime test of check_*_overflow functions
compiler.h: enable builtin overflow checkers and add fallback code
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kmalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = kmalloc(struct_size(instance, entry, count), GFP_KERNEL);
This patch makes the changes for kmalloc()-family (and kvmalloc()-family)
uses. It was done via automatic conversion with manual review for the
"CHECKME" non-standard cases noted below, using the following Coccinelle
script:
// pkey_cache = kmalloc(sizeof *pkey_cache + tprops->pkey_tbl_len *
// sizeof *pkey_cache->table, GFP_KERNEL);
@@
identifier alloc =~ "kmalloc|kzalloc|kvmalloc|kvzalloc";
expression GFP;
identifier VAR, ELEMENT;
expression COUNT;
@@
- alloc(sizeof(*VAR) + COUNT * sizeof(*VAR->ELEMENT), GFP)
+ alloc(struct_size(VAR, ELEMENT, COUNT), GFP)
// mr = kzalloc(sizeof(*mr) + m * sizeof(mr->map[0]), GFP_KERNEL);
@@
identifier alloc =~ "kmalloc|kzalloc|kvmalloc|kvzalloc";
expression GFP;
identifier VAR, ELEMENT;
expression COUNT;
@@
- alloc(sizeof(*VAR) + COUNT * sizeof(VAR->ELEMENT[0]), GFP)
+ alloc(struct_size(VAR, ELEMENT, COUNT), GFP)
// Same pattern, but can't trivially locate the trailing element name,
// or variable name.
@@
identifier alloc =~ "kmalloc|kzalloc|kvmalloc|kvzalloc";
expression GFP;
expression SOMETHING, COUNT, ELEMENT;
@@
- alloc(sizeof(SOMETHING) + COUNT * sizeof(ELEMENT), GFP)
+ alloc(CHECKME_struct_size(&SOMETHING, ELEMENT, COUNT), GFP)
Signed-off-by: Kees Cook <keescook@chromium.org>
- replace the force_dma flag with a dma_configure bus method.
(Nipun Gupta, although one patch is incorrectly attributed to me
due to a git rebase bug)
- use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)
- remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
right thing for bounce buffering.
- move dma-debug initialization to common code, and apply a few cleanups
to the dma-debug code.
- cleanup the Kconfig mess around swiotlb selection
- swiotlb comment fixup (Yisheng Xie)
- a trivial swiotlb fix. (Dan Carpenter)
- support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)
- add a new generic dma-noncoherent dma_map_ops implementation and use
it for arc, c6x and nds32.
- improve scatterlist validity checking in dma-debug. (Robin Murphy)
- add a struct device quirk to limit the dma-mask to 32-bit due to
bridge/system issues, and switch x86 to use it instead of a local
hack for VIA bridges.
- handle devices without a dma_mask more gracefully in the dma-direct
code.
Merge tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- replace the force_dma flag with a dma_configure bus method. (Nipun
Gupta, although one patch is incorrectly attributed to me due to a
git rebase bug)
- use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)
- remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
right thing for bounce buffering.
- move dma-debug initialization to common code, and apply a few
cleanups to the dma-debug code.
- cleanup the Kconfig mess around swiotlb selection
- swiotlb comment fixup (Yisheng Xie)
- a trivial swiotlb fix. (Dan Carpenter)
- support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)
- add a new generic dma-noncoherent dma_map_ops implementation and use
it for arc, c6x and nds32.
- improve scatterlist validity checking in dma-debug. (Robin Murphy)
- add a struct device quirk to limit the dma-mask to 32-bit due to
bridge/system issues, and switch x86 to use it instead of a local
hack for VIA bridges.
- handle devices without a dma_mask more gracefully in the dma-direct
code.
* tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping: (48 commits)
dma-direct: don't crash on device without dma_mask
nds32: use generic dma_noncoherent_ops
nds32: implement the unmap_sg DMA operation
nds32: consolidate DMA cache maintainance routines
x86/pci-dma: switch the VIA 32-bit DMA quirk to use the struct device flag
x86/pci-dma: remove the explicit nodac and allowdac option
x86/pci-dma: remove the experimental forcesac boot option
Documentation/x86: remove a stray reference to pci-nommu.c
core, dma-direct: add a flag 32-bit dma limits
dma-mapping: remove unused gfp_t parameter to arch_dma_alloc_attrs
dma-debug: check scatterlist segments
c6x: use generic dma_noncoherent_ops
arc: use generic dma_noncoherent_ops
arc: fix arc_dma_{map,unmap}_page
arc: fix arc_dma_sync_sg_for_{cpu,device}
arc: simplify arc_dma_sync_single_for_{cpu,device}
dma-mapping: provide a generic dma-noncoherent implementation
dma-mapping: simplify Kconfig dependencies
riscv: add swiotlb support
riscv: only enable ZONE_DMA32 for 64-bit
...
This patch adds the 'device_config' and 'device_prep_slave_sg' interfaces
for users to configure DMA, as well as a 'struct sprd_dma_config' structure
to save the Spreadtrum DMA configuration for each DMA channel.
Signed-off-by: Eric Long <eric.long@spreadtrum.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is a preparation patch: we can use the default DMA configuration to
implement the device_prep_dma_memcpy() interface instead of issuing
sprd_dma_config().
A new sprd_dma_config() function will be introduced together with the
device_prep_slave_sg() interface in a following patch, so remove the
obsolete sprd_dma_config() first.
Signed-off-by: Eric Long <eric.long@spreadtrum.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Building kernel with W=1 throws up below warnings:
bam_dma.c:459: warning: Function parameter or member 'dir'
not described in 'bam_chan_init_hw'
bam_dma.c:697: warning: Function parameter or member 'chan'
not described in 'bam_dma_terminate_all'
bam_dma.c:697: warning: Excess function parameter 'bchan'
description in 'bam_dma_terminate_all'
bam_dma.c:964: warning: Function parameter or member 'bchan'
not described in 'bam_start_dma'
Fix these!
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Building kernel with W=1 throws below invalid assignment warnings.
bam_dma.c:676:44: warning: invalid assignment: +=
bam_dma.c:676:44: left side has type unsigned long
bam_dma.c:676:44: right side has type restricted __le16
bam_dma.c:921:41: warning: invalid assignment: +=
bam_dma.c:921:41: left side has type unsigned long
bam_dma.c:921:41: right side has type restricted __le16
Fix them!
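A hedged sketch of the kind of change the warnings call for (structure and field names are illustrative; le16_to_cpu() is the real helper):

  /* convert the little-endian descriptor size to CPU byte order
   * before accumulating it into an unsigned long */
  total += le16_to_cpu(desc[i].size);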
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Disabling runtime PM at probe is not sufficient to get BAM working
on remotely controlled instances: pm_runtime_get_sync() would return
-EACCES in such cases.
So check whether runtime PM is enabled before returning an error from the bam functions.
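A minimal sketch of such a guard, assuming a small local helper (the helper name is illustrative); pm_runtime_enabled() and pm_runtime_get_sync() are the real runtime-PM calls:

  static int bam_pm_get(struct device *dev)
  {
          /* on remotely controlled BAMs runtime PM is disabled, so do
           * not treat the missing callbacks as an error */
          if (!pm_runtime_enabled(dev))
                  return 0;

          return pm_runtime_get_sync(dev);
  }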
Fixes: 5b4a68952a ("dmaengine: qcom: bam_dma: disable runtime pm on remote controlled")
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We recently cleaned this code up but we need to update the error
handling as well. The devm_ioremap_resource() returns error pointers on
error, never NULL.
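A sketch of the corrected pattern (variable names illustrative):

  base = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(base))                /* never NULL, only ERR_PTR() values */
          return PTR_ERR(base);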
Fixes: e7f063ae1a ("dmaengine: sprd: Use devm_ioremap_resource() to map memory")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Instead of checking the return value of platform_get_resource(), we can
use devm_ioremap_resource(), which performs the NULL pointer check and
requests the memory region.
Suggested-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
platform_get_resource() may fail and return NULL, so we should check
its return value to avoid a NULL pointer dereference a bit later in the
code.
This is detected by Coccinelle semantic patch.
@@
expression pdev, res, n, t, e, e1, e2;
@@
res = platform_get_resource(pdev, t, n);
+ if (!res)
+ return -EINVAL;
... when != res == NULL
e = devm_ioremap_nocache(e1, res->start, e2);
Fixes: 9b3b8171f7 ("dmaengine: sprd: Add Spreadtrum DMA driver")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
With each bus implementing its own DMA configuration callback, there is no
need for bus to explicitly set the force_dma flag. Modify the
of_dma_configure function to accept an input parameter which specifies if
implicit DMA configuration is required when it is not described by the
firmware.
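The resulting prototype, as a sketch of what is described above (callers pass true when a default DMA setup should be applied even without a firmware description):

  int of_dma_configure(struct device *dev, struct device_node *np,
                       bool force_dma);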
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com> # PCI parts
Reviewed-by: Rob Herring <robh@kernel.org>
[hch: tweaked the changelog a bit]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Do DMAFLUSHP _before_ the first DMAWFP to ensure the controller
and peripheral agree on the DMA request state before the first
transfer. Add support for burst transfers to/from peripherals. In the new
scheme, the controller does as many burst transfers as it can then
transfers the remaining dregs with either single transfers for
peripherals, or with a reduced size burst for memory-to-memory transfers.
Signed-off-by: Frank Mori Hess <fmh6jj@gmail.com>
Tested-by: Frank Mori Hess <fmh6jj@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Request IRQ with IRQF_SHARED flag to enable setups with multiple
instances of the core sharing a single IRQ line.
This works out since the IRQ handler already checks if there is
an actual IRQ pending and returns IRQ_NONE otherwise.
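A hedged sketch of the request (the handler name is illustrative; devm_request_irq() and IRQF_SHARED are the real API):

  ret = devm_request_irq(&pdev->dev, irq, axi_dmac_interrupt_handler,
                         IRQF_SHARED, dev_name(&pdev->dev), dmac);
  if (ret)
          return ret;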
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Moritz Fischer <mdf@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Trivial fix to spelling mistake in dev_err error message text and make
channel plural.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch will move the Spreadtrum DMA request mode and interrupt type
into one header file for users to configure.
Signed-off-by: Eric Long <eric.long@spreadtrum.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Define the DMA data width type to make code more readable.
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Define the DMA transfer step type to make code more readable.
Signed-off-by: Eric Long <eric.long@spreadtrum.com>
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Since commit 9b5ba0df4e ("ARM: shmobile: Introduce ARCH_RENESAS"),
CONFIG_ARCH_RENESAS is a more appropriate platform check than the legacy
CONFIG_ARCH_SHMOBILE, hence use the former.
Renesas SuperH SH-Mobile SoCs are still covered by the CONFIG_CPU_SH4
check, just like before support for Renesas ARM SoCs was added.
Instead of blindly changing all the #ifdefs, switch the main code block
in sh_dmae_probe() to IS_ENABLED(), as this allows removing all the
remaining #ifdefs.
This will allow dropping ARCH_SHMOBILE on ARM in the near future.
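A sketch of the resulting shape (the condition is reconstructed from the text above, not copied from the driver):

  if (IS_ENABLED(CONFIG_CPU_SH4) || IS_ENABLED(CONFIG_ARCH_RENESAS)) {
          /* SuperH / Renesas ARM specific probe setup, now compiled
           * (and dead-code eliminated) instead of hidden behind #ifdef */
  }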
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Both buffer Transfer Length (TLEN if any) and transfer size have to be
aligned on burst size (burst beats*bus width).
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
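A sketch of the pattern applied by this series ('foo_dev' is a placeholder type):

  struct foo_dev *fd = dev_get_drvdata(dev);
  /* instead of: platform_get_drvdata(to_platform_device(dev)) */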
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We should get drvdata from struct device directly. Going via
platform_device is an unneeded step back and forth.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There's an ongoing effort to remove VLAs from the kernel
(https://lkml.org/lkml/2018/3/7/621) to eventually turn on -Wvla.
The test already pre-allocates some buffers with kmalloc, so turn
the two VLAs into pre-allocated kmalloc buffers.
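A hedged sketch of the approach (array names loosely follow the test code; error handling elided):

  dma_addr_t *srcs = kmalloc_array(src_cnt, sizeof(dma_addr_t), GFP_KERNEL);
  dma_addr_t *dma_pq = kmalloc_array(dst_cnt, sizeof(dma_addr_t), GFP_KERNEL);
  if (!srcs || !dma_pq)
          goto err_free;        /* free whatever was allocated */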
Signed-off-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This time we have a couple of new drivers along with updates to existing drivers.
- new drivers for the DesignWare AXI DMAC and the MediaTek High-Speed DMA controller
- stm32 dma and qcom bam dma driver updates
- norandom test option for dmatest
Merge tag 'dmaengine-4.17-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have couple of new drivers along with updates to drivers:
- new drivers for the DesignWare AXI DMAC and MediaTek High-Speed DMA
controllers
- stm32 dma and qcom bam dma driver updates
- norandom test option for dmatest"
* tag 'dmaengine-4.17-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (30 commits)
dmaengine: stm32-dma: properly mask irq bits
dmaengine: stm32-dma: fix max items per transfer
dmaengine: stm32-dma: fix DMA IRQ status handling
dmaengine: stm32-dma: Improve memory burst management
dmaengine: stm32-dma: fix typo and reported checkpatch warnings
dmaengine: stm32-dma: fix incomplete configuration in cyclic mode
dmaengine: stm32-dma: threshold manages with bitfield feature
dt-bindings: stm32-dma: introduce DMA features bitfield
dt-bindings: rcar-dmac: Document r8a77470 support
dmaengine: rcar-dmac: Fix too early/late system suspend/resume callbacks
dmaengine: dw-axi-dmac: fix spelling mistake: "catched" -> "caught"
dmaengine: edma: Check the memory allocation for the memcpy dma device
dmaengine: at_xdmac: fix rare residue corruption
dmaengine: mediatek: update MAINTAINERS entry with MediaTek DMA driver
dmaengine: mediatek: Add MediaTek High-Speed DMA controller for MT7622 and MT7623 SoC
dt-bindings: dmaengine: Add MediaTek High-Speed DMA controller bindings
dt-bindings: Document the Synopsys DW AXI DMA bindings
dmaengine: Introduce DW AXI DMAC driver
dmaengine: pl330: fix a race condition in case of threaded irqs
dmaengine: imx-sdma: fix pagefault when channel is disabled during interrupt
...
A single register of the controller holds the information for four dma
channels.
The function stm32_dma_irq_status() doesn't mask the relevant bits after
the shift, thus adjacent channel's status is also reported in the returned
value.
Fixed by masking the value before returning it.
Similarly, the function stm32_dma_irq_clear() doesn't mask the input value
before shifting it, thus an incorrect input value could disable the
interrupts of adjacent channels.
Fixed by masking the input value before using it.
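A hedged sketch of the status-read fix (the shift helper and mask name are illustrative):

  /* keep only the flag bits that belong to this channel */
  status = (isr >> chan_flag_shift(chan->id)) & CHAN_FLAG_MASK;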
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Having 0 in the item counter register is valid and stands for "no or
ended transfer". Therefore a valid transfer starts from @+0 to @+0xFFFE,
leading to unaligned scatter-gather at the boundary. Thus it's safer to
round this value down to the FIFO size (16 bytes).
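A hedged sketch (the constant name is illustrative; the 16-byte FIFO size is per the text above):

  /* never report a residue that splits a FIFO burst */
  residue = round_down(residue, STM32_DMA_FIFO_SIZE);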
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Update the way the Transfer Complete and Half Transfer Complete statuses
are acknowledged. Even if HTI is not enabled, its status is shown when
reading the registers; the driver has to clear it gently and not raise an
error.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch improves the memory burst capability by using the best burst
size according to the buffer size transferred from/to memory.
From now on, the memory burst is not necessarily the same as the
peripheral burst, and the FIFO threshold is managed directly by this
driver in order to fit the computed memory burst.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When in cyclic mode, the configuration is updated after having started the
DMA hardware (STM32_DMA_SCR_EN) leading to incomplete configuration of
SMxAR registers.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Hugues Fruchet <hugues.fruchet@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
From now on, a DMA bitfield manages the DMA FIFO threshold.
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If serial console wake-up is enabled ("echo enabled >
/sys/.../ttySC0/power/wakeup"), and any serial input is received while
the system is suspended, serial port input no longer works after system
resume.
Note that:
1) The system can still be woken up using the serial console,
2) Serial port input keeps working if the system is woken up in some
other way (e.g. Wake-on-LAN or gpio-keys), and no serial input was
received while suspended.
To fix this, replace SET_LATE_SYSTEM_SLEEP_PM_OPS() by
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(), as the callbacks installed by the
former run too early during suspend and too late during resume.
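A sketch of the resulting dev_pm_ops (callback names illustrative; the macro is the real PM core helper):

  static const struct dev_pm_ops rcar_dmac_pm = {
          SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(rcar_dmac_sleep_suspend,
                                        rcar_dmac_sleep_resume)
  };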
Reported-by: RVC test team via Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Fixes: 1131b0a4af ("dmaengine: rcar-dmac: Make DMAC reinit during system resume explicit")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Trivial fix to spelling mistake in dev_err error message text
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the allocation fails then disable the memcpy support.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Despite the efforts made to correctly read the NDA and CUBC registers,
the order in which the registers are read could sometimes lead to an
inconsistent state.
Re-using the timeline from the comments, this following timing of
registers reads could lead to reading NDA with value "@desc2" and
CUBC with value "MAX desc1":
INITD -------- ------------
|____________________|
_______________________ _______________
NDA @desc2 \/ @desc3
_______________________/\_______________
__________ ___________ _______________
CUBC 0 \/ MAX desc1 \/ MAX desc2
__________/\___________/\_______________
| | | |
Events:(1)(2) (3)(4)
(1) check_nda = @desc2
(2) initd = 1
(3) cur_ubc = MAX desc1
(4) cur_nda = @desc2
This is allowed by the condition ((check_nda == cur_nda) && initd),
despite cur_ubc and cur_nda being in the precise state we don't want.
This error leads to incorrect residue computation.
Fix it by inversing the order in which CUBC and INITD are read. This
makes sure that NDA and CUBC are always read together either _before_
INITD goes to 0 or _after_ it is back at 1.
The case where NDA is read before INITD is at 0 and CUBC is read after
INITD is back at 1 will be rejected by check_nda and cur_nda being
different.
Fixes: 53398f4888 ("dmaengine: at_xdmac: fix residue corruption")
Cc: stable@vger.kernel.org
Signed-off-by: Maxime Jayat <maxime.jayat@mobile-devices.fr>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The MediaTek High-Speed DMA controller (HSDMA) on the MT7622 and MT7623
SoCs has a single ring dedicated to memory-to-memory transfers through
ring-based descriptor management.
Even though there is only one physical ring available inside the HSDMA,
the driver can easily be extended to support multiple virtual channels
processing simultaneously by means of DMA_VIRTUAL_CHANNELS.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bitfield dma_inuse is allocated with a size of dma_requests bits,
thus a valid bit address ranges from 0 to (dma_requests - 1).
When find_first_zero_bit() fails, it returns dma_requests, which is an
invalid address.
Using such address for the following set_bit() is incorrect and, if
dma_requests is a multiple of BITS_PER_LONG, it will cause a buffer
overflow.
Currently this driver is only used by stm32h743.dtsi, where the safe
value dma_requests=16 does not trigger the buffer overflow.
Fixed by checking the return value of find_first_zero_bit() _before_
using it.
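A minimal sketch of that check (field names follow the text above; the error path is illustrative):

  chan_id = find_first_zero_bit(dmamux->dma_inuse, dmamux->dma_requests);
  if (chan_id == dmamux->dma_requests)
          return ERR_PTR(-ENOMEM);        /* no free request line */
  set_bit(chan_id, dmamux->dma_inuse);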
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds support for the DW AXI DMAC controller.
DW AXI DMAC is a part of HSDK development board from Synopsys.
In this driver implementation only DMA_MEMCPY transfers are
supported.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On the CP110 components which are present on the Armada 7K/8K SoC we need
to explicitly enable the clock for the registers. However, it is not
needed for the AP8xx component, which is why this clock is optional.
With this patch both clocks now have a name, but in order to stay backward
compatible the name of the first clock is not used. This allows the clock
to still be used with a device tree based on the old binding.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a spinlock and an 'enabled' boolean to the channel descriptor, to
avoid using buffer descriptors in interrupt context when
sdma_disable_channel() is called in the meantime.
Signed-off-by: Thierry Bultel <tbultel@pixelsurmer.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes an issue that a race condition happens between a client
driver and the rcar-dmac driver:
- The rcar_dmac_isr_transfer_end() is called.
- The done list appears, and desc.running is the next active list.
- rcar_dmac_chan_get_residue() is called by a client driver before
rcar_dmac_isr_channel_thread() is called.
- The rcar_dmac_chan_get_residue() will not find any descriptors.
- And, the following WARNING happens:
WARN(1, "No descriptor for cookie!");
The sh-sci driver with HSCIF (921,600bps) on R-Car H3 can cause this
situation.
So, this patch checks the done list in rcar_dmac_chan_get_residue()
and returns zero if the done list contains the argument cookie.
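A hedged sketch of the added check (list and field names are illustrative):

  /* a cookie already on the done list has fully completed */
  list_for_each_entry(desc, &chan->desc.done, node) {
          if (cookie == desc->async_tx.cookie)
                  return 0;
  }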
Tested-by: Nguyen Viet Dung <dung.nguyen.aj@renesas.com>
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A remotely controlled BAM instance should not do any power management
from the CPU side, as the CPU cannot reliably tell whether the BAM is
busy or not. Disable it for such instances.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The BAM_DESC_CNT_TRSHLD register is a global register which can only be
written when the BAM is in master mode, so check the mode of operation
before writing it.
Without this check the SoC's xPU would catch such an access and crash the
system. First noticed on DB820c while testing SLIMBus BAM.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When Linux is the master of the BAM, it can directly read registers to
learn the number of supported channels; however, when it is remotely
controlled, reading these registers would trigger a crash if the BAM is
not yet initialized or powered up on the remote side.
This patch allows driver to read num-channels and num-ees from Device Tree
for remotely controlled BAM.
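A hedged sketch (property names per the text above; struct field names are illustrative):

  if (bdev->controlled_remotely) {
          ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
                                     &bdev->num_channels);
          if (ret)
                  dev_err(bdev->dev, "num-channels unspecified in dt\n");
  }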
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the BAM is remotely controlled it does not sound correct to control
its clock on the Linux side. Make it optional, so that it is not mandatory
for remotely controlled BAM instances.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
According to the R-Car Gen3 Rev.0.80 manual, DMATCR can be set to
16,777,215 at maximum. So, this patch fixes the max_chunk_size for
safety on all SoCs. Otherwise, a system may hang if DMATCR
is set to 0 on R-Car Gen3.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Existing option noverify disables both random src/dst address offset
setup and data verification. Sometimes, we need to control random
src/dst address setup and verification separately, such as disabling
random to make sure that test covers addresses in all interleaving
banks, but data verification is still performed.
This patch adds option norandom to disable random offset setup. Option
noverify has been changed to disable data verification only.
Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Signed-off-by: Yang Shunyong <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pull ARM updates from Russell King:
- StrongARM SA1111 updates to modernise and remove cruft
- Add StrongARM gpio drivers for board GPIOs
- Verify size of zImage is what we expect to avoid issues with
appended DTB
- nommu updates from Vladimir Murzin
- page table read-write-execute checking from Jinbum Park
- Broadcom Brahma-B15 cache updates from Florian Fainelli
- Avoid failure with kprobes test caused by inappropriately
placed kprobes
- Remove __memzero optimisation (which was incorrectly being
used directly by some drivers)
* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (32 commits)
ARM: 8745/1: get rid of __memzero()
ARM: 8744/1: don't discard memblock for kexec
ARM: 8743/1: bL_switcher: add MODULE_LICENSE tag
ARM: 8742/1: Always use REFCOUNT_FULL
ARM: 8741/1: B15: fix unused label warnings
ARM: 8740/1: NOMMU: Make sure we do not hold stale data in mem[] array
ARM: 8739/1: NOMMU: Setup VBAR/Hivecs for secondaries cores
ARM: 8738/1: Disable CONFIG_DEBUG_VIRTUAL for NOMMU
ARM: 8737/1: mm: dump: add checking for writable and executable
ARM: 8736/1: mm: dump: make the page table dumping seq_file
ARM: 8735/1: mm: dump: make page table dumping reusable
ARM: sa1100/neponset: add GPIO drivers for control and modem registers
ARM: sa1100/assabet: add BCR/BSR GPIO driver
ARM: 8734/1: mm: idmap: Mark variables as ro_after_init
ARM: 8733/1: hw_breakpoint: Mark variables as __ro_after_init
ARM: 8732/1: NOMMU: Allow userspace to access background MPU region
ARM: 8727/1: MAINTAINERS: Update brcmstb entries to cover B15 code
ARM: 8728/1: B15: Register reboot notifier for KEXEC
ARM: 8730/1: B15: Add suspend/resume hooks
ARM: 8726/1: B15: Add CPU hotplug awareness
...
This cycle we have a small update:
- updates to xilinx and zynqmp dma controllers
- update residue calculation for the rcar controller
- more RSTify fixes for documentation
- Add support for race-free transfer termination and update its users
- Support for a new rev of hidma, with new APIs added to get device
match data in ACPI/OF
- Random updates to a bunch of other drivers
Merge tag 'dmaengine-4.16-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time is smallish update with updates mainly to drivers:
- updates to xilinx and zynqmp dma controllers
- update reside calculation for rcar controller
- more RSTify fixes for documentation
- add support for race free transfer termination and updating for
users for that
- support for new rev of hidma with addition new APIs to get device
match data in ACPI/OF
- random updates to bunch of other drivers"
* tag 'dmaengine-4.16-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (47 commits)
dmaengine: dmatest: fix container_of member in dmatest_callback
dmaengine: stm32-dmamux: Remove unnecessary platform_get_resource() error check
dmaengine: sprd: statify 'sprd_dma_prep_dma_memcpy'
dmaengine: qcom_hidma: simplify DT resource parsing
dmaengine: xilinx_dma: Free BD consistent memory
dmaengine: xilinx_dma: Fix warning variable prev set but not used
dmaengine: xilinx_dma: properly configure the SG mode bit in the driver for cdma
dmaengine: doc: format struct fields using monospace
dmaengine: doc: fix bullet list formatting
dmaengine: ti-dma-crossbar: Fix event mapping for TPCC_EVT_MUX_60_63
dmaengine: cppi41: Fix channel queues array size check
dmaengine: imx-sdma: Add MODULE_FIRMWARE
dmaengine: xilinx_dma: Fix typos
dmaengine: xilinx_dma: Differentiate probe based on the ip type
dmaengine: xilinx_dma: fix style issues from checkpatch
dmaengine: xilinx_dma: Fix kernel doc warnings
dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario
dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
dmaengine: zynqmp_dma: Fix race condition in the probe
...
Pull RCU updates from Ingo Molnar:
"The main RCU changes in this cycle were:
- Updates to use cond_resched() instead of cond_resched_rcu_qs()
where feasible (currently everywhere except in kernel/rcu and in
kernel/torture.c). Also a couple of fixes to avoid sending IPIs to
offline CPUs.
- Updates to simplify RCU's dyntick-idle handling.
- Updates to remove almost all uses of smp_read_barrier_depends() and
read_barrier_depends().
- Torture-test updates.
- Miscellaneous fixes"
* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
torture: Save a line in stutter_wait(): while -> for
torture: Eliminate torture_runnable and perf_runnable
torture: Make stutter less vulnerable to compilers and races
locking/locktorture: Fix num reader/writer corner cases
locking/locktorture: Fix rwsem reader_delay
torture: Place all torture-test modules in one MAINTAINERS group
rcutorture/kvm-build.sh: Skip build directory check
rcutorture: Simplify functions.sh include path
rcutorture: Simplify logging
rcutorture/kvm-recheck-*: Improve result directory readability check
rcutorture/kvm.sh: Support execution from any directory
rcutorture/kvm.sh: Use consistent help text for --qemu-args
rcutorture/kvm.sh: Remove unused variable, `alldone`
rcutorture: Remove unused script, config2frag.sh
rcutorture/configinit: Fix build directory error message
rcutorture: Preempt RCU-preempt readers more vigorously
torture: Reduce #ifdefs for preempt_schedule()
rcu: Remove have_rcu_nocb_mask from tree_plugin.h
rcu: Add comment giving debug strategy for double call_rcu()
tracing, rcu: Hide trace event rcu_nocb_wake when not used
...
The type of arg passed to dmatest_callback is struct dmatest_done.
It refers to test_done in struct dmatest_thread, not done_wait.
Fixes: 6f6a23a213 ("dmaengine: dmatest: move callback wait ...")
Signed-off-by: Yang Shunyong <shunyong.yang@hxt-semitech.com>
Acked-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The __memzero assembly code is almost identical to memset's except for
two orr instructions. The runtime performance of __memzero(p, n) and
memset(p, 0, n) is accordingly almost identical.
However, the memset() macro used to guard against a zero length and to
call __memzero at compile time when the fill value is a constant zero
interferes with compiler optimizations.
Arnd found that the test against a zero length brings up some new
warnings with gcc v8:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82103
And successively removing the test against a zero length and the call
to __memzero optimization produces the following kernel sizes for
defconfig with gcc 6:
text data bss dec hex filename
12248142 6278960 413588 18940690 1210312 vmlinux.orig
12244474 6278960 413588 18937022 120f4be vmlinux.no_zero_test
12239160 6278960 413588 18931708 120dffc vmlinux.no_memzero
So it is probably not worth keeping __memzero around given that the
compiler can do a better job at inlining trivial memset(p,0,n) on its
own. And the memset code already handles a zero length just fine.
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
* pm-core: (29 commits)
dmaengine: rcar-dmac: Make DMAC reinit during system resume explicit
PM / runtime: Allow no callbacks in pm_runtime_force_suspend|resume()
PM / runtime: Check ignore_children in pm_runtime_need_not_resume()
PM / runtime: Rework pm_runtime_force_suspend/resume()
PM / wakeup: Print warn if device gets enabled as wakeup source during sleep
PM / core: Propagate wakeup_path status flag in __device_suspend_late()
PM / core: Re-structure code for clearing the direct_complete flag
PM: i2c-designware-platdrv: Optimize power management
PM: i2c-designware-platdrv: Use DPM_FLAG_SMART_PREPARE
PM / mfd: intel-lpss: Use DPM_FLAG_SMART_SUSPEND
PCI / PM: Use SMART_SUSPEND and LEAVE_SUSPENDED flags for PCIe ports
PM / wakeup: Add device_set_wakeup_path() helper to control wakeup path
PM / core: Assign the wakeup_path status flag in __device_prepare()
PM / wakeup: Do not fail dev_pm_attach_wake_irq() unnecessarily
PM / core: Direct DPM_FLAG_LEAVE_SUSPENDED handling
PM / core: Direct DPM_FLAG_SMART_SUSPEND optimization
PM / core: Add helpers for subsystem callback selection
PM / wakeup: Drop redundant check from device_init_wakeup()
PM / wakeup: Drop redundant check from device_set_wakeup_enable()
PM / wakeup: only recommend "call"ing device_init_wakeup() once
...
The current (empty) system sleep callbacks rely on the PM core to force
a runtime resume to reinitialize the DMAC registers during system
resume. Without a reinitialization, e.g. SCIF DMA will hang silently
after a system resume on R-Car Gen3.
Make this explicit by using pm_runtime_force_{suspend,resume}() as the
system sleep callbacks instead. Use SET_LATE_SYSTEM_SLEEP_PM_OPS() as
DMA engines must be initialized before all DMA slave devices.
Fixes: 17218e0092 "PM / genpd: Stop/start devices without pm_runtime_force_suspend/resume()"
Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Sparse warns that 'sprd_dma_prep_dma_memcpy' should be static so make it
static.
drivers/dma/sprd-dma.c:713:32: warning:
symbol 'sprd_dma_prep_dma_memcpy' was not declared. Should it be static?
Reviewed-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The hidma driver open codes populating address and IRQ resources from DT.
We have standard functions of_address_to_resource and of_irq_to_resource
for this, so use them instead.
The DT binding states each child should have 2 addresses and 1 IRQ, so we
can simplify the logic and do a fixed size resource allocation. Using the
standard of_address_to_resource will also do any address translation which
was missing.
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below sparse warning in the driver
drivers/dma/xilinx/xilinx_dma.c: In function ‘xilinx_vdma_dma_prep_interleaved’:
drivers/dma/xilinx/xilinx_dma.c:1614:43: warning: variable ‘prev’ set but not used [-Wunused-but-set-variable]
struct xilinx_vdma_tx_segment *segment, *prev = NULL;
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the hardware is configured for Scatter-Gather (SG) mode and is idle,
the SG mode bit in the control register must be set to 0 and then back
to 1 by the software, to force the CDMA SG engine to use a new value
written to the CURDESC_PNTR register; failure to do so could result in
errors from the dmaengine.
This patch updates the driver accordingly.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pull RCU updates from Paul E. McKenney:
- Updates to use cond_resched() instead of cond_resched_rcu_qs()
where feasible (currently everywhere except in kernel/rcu and
in kernel/torture.c). Also a couple of fixes to avoid sending
IPIs to offline CPUs.
- Updates to simplify RCU's dyntick-idle handling.
- Updates to remove almost all uses of smp_read_barrier_depends()
and read_barrier_depends().
- Miscellaneous fixes.
- Torture-test updates.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Register layout of a typical TPCC_EVT_MUX_M_N register is such that the
lowest numbered event is at the lowest byte address and highest numbered
event at highest byte address. But TPCC_EVT_MUX_60_63 register layout is
different, in that the lowest numbered event is at the highest address
and highest numbered event is at the lowest address. Therefore, modify
ti_am335x_xbar_write() to handle TPCC_EVT_MUX_60_63 register
accordingly.
Signed-off-by: Vignesh R <vigneshr@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The test should be >= ARRAY_SIZE() instead of > ARRAY_SIZE().
Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This avoids the following error when using an initramfs on wandboard quad:
Direct firmware load for imx/sdma/sdma-imx6q.bin failed with error -2
Signed-off-by: Nicolas Chauvet <kwizart@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
some typos in comments, so fix them up
/s/enusres/ensures
/s/descripotrs/descriptors
/s/Submited/Submitted
/s/pollling/polling
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch updates the probe banner info based on the ip probed.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below checkpatch error:
ERROR: open brace '{' following function definitions go on the next line
+static int xilinx_dma_child_probe(struct xilinx_dma_device *xdev,
+ struct device_node *node) {
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the kernel doc warnings
in the driver.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As per the axi dmaengine spec, the software must not move the tail
pointer to a location that has not been updated (the next descriptor
field of the h/w descriptor should always point to a valid address).
When the user submits multiple descriptors on the receive side, with the
current driver flow the next descriptor field of the last buffer
descriptor points to an invalid location, resulting in invalid data or
errors from the axidma dmaengine.
This patch fixes the issue by creating a buffer descriptor chain during
channel allocation itself and using those buffer descriptors for the
subsequent dma operations.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The VDMA engine's default frame buffer configuration is circular mode;
in this mode the dmaengine continuously circles through the h/w
configured fstore frame buffers.
When the vdma h/w is configured for more than one frame, for example n
frames, and the user submits fewer than n frames before triggering the
dmaengine using the issue_pending API, the circular default
configuration makes the h/w try to write/read from an invalid frame
buffer, resulting in errors from the vdma dmaengine.
This patch fixes the issue by enabling park mode as the default frame
buffer mode configuration in s/w, so that the driver can handle all
cases of "k" frames where n%k==0 (n is a multiple of k) by simply
replicating the frame pointers.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add a variable for checking the channel idle state to ensure that a dma
descriptor is not submitted while the dmaengine is already in progress.
This avoids polling a bit in the status register to know the dma state
in the driver hot path.
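A minimal sketch of the hot-path change (the idle flag is per the description above; the rest is illustrative):

  if (!chan->idle)
          return;                 /* hardware still busy, do not submit */
  chan->idle = false;
  /* ... program the descriptor chain and start the transfer ... */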
Reviewed-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In case the interrupt property is not present, the driver tries to free
an invalid irq. This patch fixes it by adding a check before freeing
the irq.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below issues:
--> Need to clear the channel data count register when an overflow
interrupt occurs.
--> Reduce the log level from _info to _dbg when an overflow
interrupt occurs.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below warning
drivers/dma/xilinx/zynqmp_dma.c: In function 'zynqmp_dma_handle_ovfl_int':
drivers/dma/xilinx/zynqmp_dma.c:522:6: warning: variable 'val' set but not used [-Wunused-but-set-variable]
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch fixes the below kernel doc warnings
drivers/dma/xilinx/zynqmp_dma.c:552: info: Scanning doc for
zynqmp_dma_device_config
drivers/dma/xilinx/zynqmp_dma.c:558: warning: No description found for
return value of 'zynqmp_dma_device_config'
drivers/dma/xilinx/zynqmp_dma.c:649: info: Scanning doc for
zynqmp_dma_free_descriptors
drivers/dma/xilinx/zynqmp_dma.c:653: warning: No description found for
parameter 'chan'
drivers/dma/xilinx/zynqmp_dma.c:653: warning: Excess function parameter
'dchan' description in 'zynqmp_dma_free_descriptors'
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds runtime pm support in the driver.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Previously enabled clks are only disabled if clk_prepare_enable() fails.
However, there are other error paths where the previously enabled
clocks are not disabled.
To fix the problem, fsl_disable_clocks() now takes the number of clocks
that shall be disabled + unprepared. For existing calls where all clocks
were already successfully prepared + enabled, DMAMUX_NR is passed to
disable + unprepare all clocks.
In error paths where only some clocks were successfully prepared +
enabled, the loop counter is passed, in order to disable + unprepare
all successfully prepared + enabled clocks.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Andreas Platschek <andreas.platschek@opentech.at>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
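A minimal sketch of the scheme described above, with a simplified signature (the real fsl_disable_clocks() takes the driver's private structure; DMAMUX_NR's value and the plain clock array are assumptions for illustration):

#include <linux/clk.h>

#define DMAMUX_NR	2	/* number of DMAMUX clocks (assumed value) */

/* Disable + unprepare only the first @nr_clocks clocks. */
static void fsl_disable_clocks(struct clk *muxclk[], int nr_clocks)
{
	int i;

	for (i = 0; i < nr_clocks; i++)
		clk_disable_unprepare(muxclk[i]);
}

static int fsl_enable_clocks(struct clk *muxclk[])
{
	int i, ret;

	for (i = 0; i < DMAMUX_NR; i++) {
		ret = clk_prepare_enable(muxclk[i]);
		if (ret) {
			/* roll back only the clocks enabled so far */
			fsl_disable_clocks(muxclk, i);
			return ret;
		}
	}
	/* later error paths call fsl_disable_clocks(muxclk, DMAMUX_NR) */
	return 0;
}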
The destination event channel register has been relocated from
offset 0x28 to offset 0x40. Update the code accordingly.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for probing the newer HW and also organize MSI capable hardware
into an array for maintenance reasons.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver misses interrupts if two requests are queued up at the same
time while the interrupt handler is servicing a request that was just
delivered.
The ISR clears the interrupt at the end, but it could be clearing the
interrupt for an outstanding event. Therefore, the second interrupt never
arrives.
Clear the interrupt first and then check for completions.
Also, make sure that request start and interrupt clear do not overlap in
time by using a spinlock.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the error path of jz4740_dma_probe(), call clk_disable_unprepare() to
clean up.
Found by Linux Driver Verification project (linuxtesting.org).
Fixes: 25ce6c35fe ("MIPS: jz4740: Remove custom DMA API")
Signed-off-by: Tobias Jordan <Tobias.Jordan@elektrobit.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Trivial fix to spelling mistake in dev_err error message text.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations to the DMATEST can produce
undesirable results (e.g., corrupted spinlocks).
Commit a9df21e34b ("dmaengine: dmatest: warn user when dma test times
out") attempted to WARN the user that the stack was likely corrupted but
did not fix the actual issue.
This patch fixes the issue by pushing the wait queue and callback
structs into the thread structure. If a failure occurs due to a timeout,
dmaengine_terminate_all will force the callback to safely call
wake_up_all() without the possibility of using a freed pointer.
Cc: stable@vger.kernel.org
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Fixes: adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Suggested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that READ_ONCE() implies smp_read_barrier_depends(), the
__cleanup() and ioat_abort_descs() functions no longer need their
smp_read_barrier_depends() calls, which this commit removes.
It is actually not entirely clear why this driver ever included
smp_read_barrier_depends() given that it appears to be x86-only and
given that smp_read_barrier_depends() has no effect whatsoever except
on DEC Alpha.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <dmaengine@vger.kernel.org>
Fix ptr_ret.cocci warnings:
drivers/dma/mic_x100_dma.c:483:1-3: WARNING: PTR_ERR_OR_ZERO can be used
Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR
Generated by: scripts/coccinelle/api/ptr_ret.cocci
Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
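For reference, the transformation suggested by ptr_ret.cocci looks roughly like this (the variable name is illustrative, not the exact one in mic_x100_dma.c):

#include <linux/err.h>

static int debugfs_result(void *dbg_dir)
{
	/*
	 * Before:
	 *	if (IS_ERR(dbg_dir))
	 *		return PTR_ERR(dbg_dir);
	 *	return 0;
	 */
	return PTR_ERR_OR_ZERO(dbg_dir);
}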
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Implement the device_synchronize callback to make sure that the terminated
descriptor is freed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid a race with vchan_complete, use the race-free way to terminate
the running transfer.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Even with the introduced vchan_synchronize() we can face race when
terminating a cyclic transfer.
If the terminate_all is called after the interrupt handler called
vchan_cyclic_callback(), but before the vchan_complete tasklet is called:
vc->cyclic is set to the cyclic descriptor, but the descriptor itself was
freed up in the driver's terminate_all() callback.
When vchan_complete() is executed it will try to fetch the vc->cyclic
vdesc, but the pointer now points to memory that has already been freed,
leading to a (hard to reproduce) kernel crash.
In order to fix this, drivers should:
- call vchan_terminate_vdesc() from their terminate_all callback instead
of calling their free_desc function to free up the descriptor.
- implement the device_synchronize callback and call vchan_synchronize().
This way we can make sure that the descriptor is only freed up after the
vchan callback has been executed in a safe manner.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
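A minimal sketch of the two-step scheme described above, using the virt-dma helpers; the foo_chan structure and its fields are purely illustrative, not any particular driver's:

#include <linux/dmaengine.h>
#include <linux/spinlock.h>
#include "virt-dma.h"		/* drivers/dma/virt-dma.h */

struct foo_chan {
	struct virt_dma_chan vc;
	struct virt_dma_desc *active;	/* descriptor currently on the HW */
};

static int foo_terminate_all(struct dma_chan *chan)
{
	struct foo_chan *fc = container_of(chan, struct foo_chan, vc.chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&fc->vc.lock, flags);
	/* ... stop the hardware here ... */
	if (fc->active) {
		/* do NOT free it here; vchan_complete/synchronize will */
		vchan_terminate_vdesc(fc->active);
		fc->active = NULL;
	}
	vchan_get_all_descriptors(&fc->vc, &head);
	spin_unlock_irqrestore(&fc->vc.lock, flags);
	vchan_dma_desc_free_list(&fc->vc, &head);

	return 0;
}

static void foo_synchronize(struct dma_chan *chan)
{
	struct foo_chan *fc = container_of(chan, struct foo_chan, vc.chan);

	vchan_synchronize(&fc->vc);	/* waits for the vchan tasklet */
}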
The vchan_vdesc_fini() can be used to free or reuse a given descriptor
after it has been marked as completed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
_xt_ is being dereferenced before it is null checked, hence there is a
potential null pointer dereference.
Fix this by moving the pointer dereference after _xt_ has been null
checked.
This issue was detected with the help of Coccinelle.
Fixes: 4483320e24 ("dmaengine: Use Pointer xt after NULL check.")
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the last test in 'ioat_dma_self_test()' fails, we must release all
the allocated resources and not just part of them.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
SYS/RT/Audio DMAC includes independent data buffers for reading
and writing. Therefore, the read transfer counter and write transfer
counter have different values.
TCR indicates read counter, and TCRB indicates write counter.
The relationship is like below.
TCR TCRB
[SOURCE] -> [DMAC] -> [SINK]
In the MEM_TO_DEV direction, what really matters is how much data has
been written to the device. If the DMA is interrupted between read and
write, then, the data doesn't end up in the destination, so shouldn't
be counted. TCRB is thus the register we should use in this case.
In the DEV_TO_MEM direction, the situation is more complex. Both the
read and write side are important. What matters from a data consumer
point of view is how much data has been written to memory.
On the other hand, if the transfer is interrupted between read and
write, we'll end up losing data. It can also be important to report.
In the MEM_TO_MEM direction, what matters is of course how much data
has been written to memory from data consumer point of view.
Here, because read and write have independent data buffers, it will
take a while for TCR and TCRB to become equal. Thus we should check
TCRB in this case, too.
Thus, in all cases we should check TCRB instead of TCR.
Without this patch, sound capture has noise after PulseAudio support
(= 07b7acb51d ("ASoC: rsnd: update pointer more accurate")), because
the recorder uses a wrong residue counter, which indicates the amount
transferred from the sound device while in reality the data has not yet
been put into memory, and the recorder records it anyway.
However, because the DMAC buffers data until it reaches a transferable
size, TCRB might not be updated.
For example, if the consumer doesn't know how much data it will receive,
it requests a large enough size from the DMAC, but in reality it might
receive very little data. In such a case, the DMAC just buffers it until
it reaches a transferable size, and TCRB is not updated.
This buffered data will be transferred once the CHCR::DE bit is cleared,
which happens in rcar_dmac_chan_halt(), in other words when the consumer
calls dmaengine_terminate_all().
Because of this behavior, the driver needs to flush the buffered data
when it returns the "residue" (= dmaengine_tx_status()).
Otherwise, the consumer might calculate wrong values if it calls
dmaengine_tx_status() and dmaengine_terminate_all() consecutively.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Tested-by: Ryo Kodama <ryo.kodama.vz@renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The DMAC reads data from the source device and buffers it until it
reaches a transferable size for the sink device. Because of this
behavior, the DMAC may be holding buffered data.
The CHCR DE bit controls DMA transfer enable/disable.
If the DE bit is cleared during a data transfer, or during buffering,
the buffered data is flushed if the source device is a peripheral device
(the buffered data is discarded if the source device is memory).
Because of this behavior, the driver should ensure that the DE bit is
actually 0 after clearing it.
This patch adds a new rcar_dmac_chcr_de_barrier() and calls it after
CHCR register accesses.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Tested-by: Ryo Kodama <ryo.kodama.vz@renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This allows a DMA client to issue a non-flow-controlled TX. In particular
it is needed for the fuse driver, which reads fuse registers using APBDMA
to work around a HW bug that results in a hang when CPU and DMA access
the fuse peripheral simultaneously.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Updates for this cycle include:
- New driver for Spreadtrum dma controller, ST MDMA and DMAMUX controllers
- PM support for IMG MDC drivers
- Updates to bcm-sba-raid driver and improvements to sun6i driver
- Subsystem conversion for:
- timers to use timer_setup()
- remove usage of PCI pool API
- usage of %p format specifier
- Minor updates to bunch of drivers
Merge tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"Updates for this cycle include:
- new driver for Spreadtrum dma controller, ST MDMA and DMAMUX
controllers
- PM support for IMG MDC drivers
- updates to bcm-sba-raid driver and improvements to sun6i driver
- subsystem conversion for:
- timers to use timer_setup()
- remove usage of PCI pool API
- usage of %p format specifier
- minor updates to bunch of drivers"
* tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (49 commits)
dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
dmaengine: dmatest: warn user when dma test times out
dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
dmaengine: stm32_mdma: activate pack/unpack feature
dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
MAINTAINERS: Step down from a co-maintaner of DW DMAC driver
dmaengine: pch_dma: Replace PCI pool old API
dmaengine: Convert timers to use timer_setup()
dmaengine: sprd: Add Spreadtrum DMA driver
dt-bindings: dmaengine: Add Spreadtrum SC9860 DMA controller
dmaengine: sun6i: Retrieve channel count/max request from devicetree
dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
dmaengine: bcm-sba-raid: Use common GPL comment header
dmaengine: bcm-sba-raid: Use only single mailbox channel
dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
dmaengine: pl330: fix descriptor allocation fail
dmaengine: rcar-dmac: use TCRB instead of TCR for residue
dmaengine: sun6i: Add support for Allwinner A64 and compatibles
arm64: allwinner: a64: Add devicetree binding for DMA controller
...
The 0x1f mask used is only valid for the am335x family of SoCs; other
families using this type of crossbar might have a different number of
selectable events. In the case of the am43xx family, for example, a 0x3f
mask should have been used.
Instead of trying to handle each family's mask, just use the u8 type to
store the mux value, since the event offsets are aligned to byte offsets.
Fixes: 42dbdcc6bf ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit adfa543e73 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations to the DMATEST can produce
undesirable results (e.g., corrupted spinlocks). Ideally, this would be
cleaned up in the thread handler, but at the very least, the kernel
is left in a very precarious scenario that can lead to some long debug
sessions when the crash comes later.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This reverts commit 847449f23d: ("dmaengine: rcar-dmac: use TCRB instead
of TCR for residue") as it breaks small serial console.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
If the source and destination bus widths differ, the MDMA pack/unpack
feature has to be activated to handle the alignment.
This pack/unpack feature requires both the source/destination addresses
and the buffer length to be aligned on the bus width.
Fixes: a4ffb13c89 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since commit 3cab1e7112 ("lib/vsprintf: refactor duplicate code
to special_hex_number()") %pad doesn't need 0x prefix so drop that.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since commit 3cab1e7112 ("lib/vsprintf: refactor duplicate code
to special_hex_number()") %pad doesn't need 0x prefix so drop that.
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
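For illustration (device and variable names assumed), %pad takes the address of a dma_addr_t and already prints a 0x-prefixed value, so the explicit prefix in the format string is dropped:

#include <linux/device.h>
#include <linux/types.h>

static void show_desc_addr(struct device *dev, dma_addr_t phys)
{
	/* was: dev_dbg(dev, "descriptor at 0x%pad\n", &phys); */
	dev_dbg(dev, "descriptor at %pad\n", &phys);
}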
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging was:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.) Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
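For reference, the added identifier is the first line of each file, with the comment style depending on the file type (the filenames in the comments are hypothetical):

// SPDX-License-Identifier: GPL-2.0
/* foo.c: C source files got the C++-style comment form. */

/* SPDX-License-Identifier: GPL-2.0 */
/* foo.h: headers keep the classic block comment form. */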
The PCI pool API is deprecated. This commit replaces the PCI pool old
API by the appropriate function with the DMA pool API.
Signed-off-by: Romain Perier <romain.perier@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
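Roughly, the conversion maps each pci_pool_* call to its dma_pool_* counterpart, passing &pdev->dev instead of the pci_dev; pool name, element size and alignment below are illustrative:

#include <linux/gfp.h>
#include <linux/pci.h>
#include <linux/dmapool.h>

static void *desc_alloc(struct pci_dev *pdev, struct dma_pool **poolp,
			dma_addr_t *phys)
{
	void *desc;

	/* was: pci_pool_create("pch_dma_desc_pool", pdev, 64, 4, 0) */
	*poolp = dma_pool_create("pch_dma_desc_pool", &pdev->dev, 64, 4, 0);
	if (!*poolp)
		return NULL;

	/* was: pci_pool_alloc(pool, GFP_ATOMIC, phys) */
	desc = dma_pool_alloc(*poolp, GFP_ATOMIC, phys);
	if (!desc) {
		/* was: pci_pool_destroy(pool) */
		dma_pool_destroy(*poolp);
		*poolp = NULL;
	}
	/* freeing later: dma_pool_free(pool, desc, *phys) replaces pci_pool_free() */
	return desc;
}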
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
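A minimal sketch of the conversion pattern (struct and function names are made up for illustration):

#include <linux/timer.h>

struct foo_chan {
	struct timer_list watchdog;
	int busy;
};

/* New-style callback: receives the timer, not an unsigned long cookie. */
static void foo_watchdog(struct timer_list *t)
{
	struct foo_chan *chan = from_timer(chan, t, watchdog);

	chan->busy = 0;
}

static void foo_init(struct foo_chan *chan)
{
	/* was: setup_timer(&chan->watchdog, foo_watchdog, (unsigned long)chan); */
	timer_setup(&chan->watchdog, foo_watchdog, 0);
}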
This patch adds the DMA controller driver for Spreadtrum SC9860 platform.
Signed-off-by: Baolin Wang <baolin.wang@spreadtrum.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To avoid introduction of a new compatible for each small SoC/DMA controller
variation, move the definition of the channel count to the devicetree.
The number of vchans is no longer explicit, but limited by the highest
port/DMA request number. The result is a slight overallocation for SoCs
with a sparse port mapping.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
By default, we build the Broadcom SBA RAID driver as a loadable module
for iProc SoCs so that the kernel image is a little smaller and we load
the SBA RAID driver only when required.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch makes the comment header of Broadcom SBA RAID driver
similar to the GPL comment header used across Broadcom driver
sources.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Each mailbox channel used by Broadcom SBA RAID driver is
a separate HW ring.
Currently, Broadcom SBA RAID driver creates one DMA channel
using one or more mailbox channels. When we are using more
than one mailbox channels for a DMA channel, the sba_request
are distributed evenly among multiple mailbox channels which
results in sba_request being completed out-of-order.
The above described out-of-order completion of sba_request
breaks the dma_async_is_complete() API because it assumes
DMA cookies are completed in an orderly fashion.
To ensure correct behaviour of the dma_async_is_complete() API,
this patch updates the Broadcom SBA RAID driver to use only a
single mailbox channel. If additional mailbox channels are
specified in DT then those will be ignored.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As per the documentation in drivers/dma/dmaengine.h, the
dma_cookie_complete() API should be called with the lock
held.
This patch ensures that Broadcom SBA RAID driver calls
the dma_cookie_complete() API with reqs_lock held.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
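The change boils down to taking reqs_lock around the completion, along these lines (a simplified sketch; the surrounding sba_request handling is omitted and the helper name is invented):

#include <linux/dmaengine.h>
#include <linux/spinlock.h>
#include "dmaengine.h"	/* internal drivers/dma/dmaengine.h for dma_cookie_complete() */

static void sba_complete_tx(spinlock_t *reqs_lock,
			    struct dma_async_tx_descriptor *tx)
{
	unsigned long flags;

	spin_lock_irqsave(reqs_lock, flags);
	dma_cookie_complete(tx);	/* must run with the lock held */
	spin_unlock_irqrestore(reqs_lock, flags);
}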
The patch edf10919 [dmaengine: altera: fix spinlock usage] missed
changing 2 occurrences of spin_unlock_bh() to spin_unlock_irqrestore().
This patch fixes this by moving to the IRQ-safe call in the error
paths as well.
Fixes: edf10919 (dmaengine: altera: fix spinlock usage)
Signed-off-by: Stefan Roese <sr@denx.de>
Reviewed-by: Sylvain Lesne <lesne@alse-fr.com>
[add fixes tag and fix typo in log]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
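The resulting pattern in the error paths is the usual balanced irqsave/irqrestore pair, roughly as below (descriptor handling elided, function name illustrative):

#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static int msgdma_queue_desc(spinlock_t *lock, bool hw_full)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(lock, flags);
	if (hw_full) {
		ret = -EBUSY;	/* error path must not fall back to spin_unlock_bh() */
		goto out_unlock;
	}
	/* ... push the descriptor to the hardware ... */
out_unlock:
	spin_unlock_irqrestore(lock, flags);
	return ret;
}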
If two concurrent threads call pl330_get_desc() when DMAC descriptor
pool is empty it is possible that allocation for one of threads will fail
with message:
kernel: dma-pl330 20078000.dma-controller: pl330_get_desc:2469 ALERT!
Here is how that can happen. Thread A calls pl330_get_desc() to get a
descriptor. If the DMAC descriptor pool is empty, pl330_get_desc()
allocates a new descriptor in the shared pool using add_desc() and then
takes the newly allocated descriptor using pluck_desc(). At the same time
thread B calls pluck_desc() and takes the newly allocated descriptor. In
that case descriptor allocation for thread A will fail.
Using an on-stack pool for the new descriptor avoids the described issue.
The patch modifies pl330_get_desc() to use an on-stack pool for allocating
new descriptors.
Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
SYS/RT/Audio DMAC includes independent data buffers for reading
and writing. Therefore, the read transfer counter and write transfer
counter have different values.
TCR indicates read counter, and TCRB indicates write counter.
The relationship is like below.
TCR TCRB
[SOURCE] -> [DMAC] -> [SINK]
In the MEM_TO_DEV direction, what really matters is how much data has
been written to the device. If the DMA is interrupted between read and
write, then, the data doesn't end up in the destination, so shouldn't
be counted. TCRB is thus the register we should use in this case.
In the DEV_TO_MEM direction, the situation is more complex. Both the
read and write side are important. What matters from a data consumer
point of view is how much data has been written to memory.
On the other hand, if the transfer is interrupted between read and
write, we'll end up losing data. It can also be important to report.
In the MEM_TO_MEM direction, what matters is of course how much data
has been written to memory from data consumer point of view.
Here, because read and write have independent data buffers, it will
take a while for TCR and TCRB to become equal. Thus we should check
TCRB in this case, too.
Thus, in all cases we should check TCRB instead of TCR.
Without this patch, sound capture has noise after PulseAudio support
(= 07b7acb51d ("ASoC: rsnd: update pointer more accurate")), because
the recorder uses a wrong residue counter, which indicates the amount
transferred from the sound device while in reality the data has not yet
been put into memory, and the recorder records it anyway.
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
[Kuninori: added detail information in log]
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The A64 SoC has the same dma engine as the H3 (sun8i), with a
reduced amount of physical channels. To allow future reuse of the
compatible, leave the channel count etc. in the config data blank
and retrieve it from the devicetree.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Preparatory patch: If the same compatible is used for different SoCs which
have a common register layout, but different number of channels, the
channel count can no longer be stored in the config. Store it in the
device structure instead.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The H3 supports burst lengths of 1, 4, 8 and 16 transfers, each with
a width of 1, 2, 4 or 8 bytes.
The register value for the width is log2-encoded; change the
conversion function to provide the correct value for width == 8.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The current code mixes three distinct operations when transforming
the slave config to register settings:
1. special handling of DMA_SLAVE_BUSWIDTH_UNDEFINED, maxburst == 0
2. range checking
3. conversion of raw to register values
As the range checks depend on the specific SoC, move these out of the
conversion to distinct operations.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
For the H3, the burst length field offsets in the channel configuration
register differ from earlier SoC generations.
Using the A31 register macros actually configured the H3 controller
to always do bursts of length 1, which, although working, leads to higher
bus utilisation.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The A83T uses a compatible string different from the A23, but requires
the same clock autogating register setting.
The H3 also requires setting the clock autogating register, but has
the register at a different offset.
Add three suitable callbacks for the existing controller generations
and set them in the controller config structure.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add runtime PM support to disable the clock when the h/w is not in use.
The existing clock_prepare_enable is removed from probe() as the clock
is no longer permanently enabled.
Signed-off-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add suspend / resume handling using suspend_late and resume_early, and
check that all channels are idle before suspending.
DMA drivers should use suspend_late / resume_early to ensure that all
DMA client devices are suspended before the DMA device itself, and that
client devices are resumed after the DMA device. This avoids suspending
the DMA device while transactions are still active.
It is the responsibility of client drivers to terminate all DMA
transactions in their suspend handlers, so there should be no active
transactions by the time suspend_late is called.
There's no need to save and restore registers for MDC during suspend /
resume, as all transactions will be terminated as a result of the
suspend, and all required registers are programmed anyway at the start
of any new transactions following resume.
Signed-off-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the of_device_get_match_data() helper instead of open coding.
Note that when used with DT, there's always a valid match.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
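The simplification looks roughly like this (the helper wrapping it is invented for illustration):

#include <linux/of_device.h>

static const void *get_chip_data(struct device *dev)
{
	/*
	 * was:
	 *	const struct of_device_id *match = of_match_device(table, dev);
	 *	data = match->data;
	 */
	return of_device_get_match_data(dev);
}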
Set the device's max_burst to 16777215 (EN is a 24-bit unsigned value) so
clients can take this into consideration when setting up the transfer.
During slave transfer preparation check if the requested maxburst is valid.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
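A sketch of the two parts, advertising the limit and rejecting an out-of-range maxburst during slave-transfer preparation (the macro and function names are illustrative):

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>

#define FOO_MAX_BURST	16777215U	/* EN is a 24-bit unsigned value */

static void foo_advertise_caps(struct dma_device *ddev)
{
	ddev->max_burst = FOO_MAX_BURST;	/* exposed via dma_get_slave_caps() */
}

static int foo_check_maxburst(struct device *dev, u32 maxburst)
{
	if (maxburst > FOO_MAX_BURST) {
		dev_err(dev, "maxburst %u exceeds the %u limit\n",
			maxburst, FOO_MAX_BURST);
		return -EINVAL;
	}
	return 0;
}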
Set the device's max_burst to 32767 (CIDX is a 16-bit signed value) so
clients can take this into consideration when setting up the transfer.
During slave transfer preparation check if the requested maxburst is valid.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
hwdesc is being initialized to desc->hwdesc but this is never read
as hwdesc is overwritten in a for-loop. Remove the redundant
initialization and move the declaration of hwdesc into the for-loop.
Cleans up clang warning:
Value stored to 'hwdesc' during its initialization is never read
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Without CONFIG_OF we get a build warning:
warning: (STM32_MDMA) selects DMA_OF which has unmet direct dependencies (DMADEVICES && OF)
This adds a dependency on CONFIG_OF. Since this means
we no longer need to select 'DMA_OF', I'm dropping that line
as well.
Fixes: a4ffb13c89 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pointer print was using an explicit cast and printing as %x, which
causes the below warning on some arches, so print using the %p format
specifier.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds the driver for the STM32 MDMA controller.
Master Direct memory access (MDMA) is used in order to provide high-speed
data transfer between memory and memory or between peripherals and memory.
MDMA controller provides a master AXI interface for main memory and
peripheral registers access (system access port) and a master AHB
interface only for Cortex-M7 TCM memory access (TCM access port).
MDMA works in conjunction with the standard DMA controllers (DMA1 or DMA2).
It offers up to 64 channels, each dedicated to managing memory access
requests from one of the DMA stream memory buffer or other peripherals
(w/ integrated FIFO).
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since this lock is acquired in both process and IRQ context, failing
to disable IRQs when trying to acquire the lock in process context can
lead to deadlocks.
Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 6084fc2ec4 ("dmaengine: altera: Use macros instead of structs
to describe the registers") introduced a minus sign before a register
offset.
This leads to soft-locks of the DMA controller, since reading the last
status byte is required to pop the response from the FIFO. Failing to
do so will lead to a full FIFO, which means that the DMA controller
will stop processing descriptors.
Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add DMA filters for the sa11x0 DMA channels. This will allow us to
migrate away from directly using the DMA filter function in drivers.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch implements the STM32 DMAMUX driver.
The DMAMUX request multiplexer allows routing a DMA request line between
the peripherals and the DMA controllers of the product. The routing
function is ensured by a programmable multi-channel DMA request line
multiplexer. Each channel selects a unique DMA request line,
unconditionally or synchronously with events from its DMAMUX
synchronization inputs. The DMAMUX may also be used as a DMA request
generator from programmable events on its input trigger signals.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The bam dmaengine has a circular FIFO to which we
add hw descriptors that describe the transaction.
The FIFO has space for about 4096 hw descriptors.
Currently we add one descriptor, wait for it to
complete with an interrupt, and then add the next pending
descriptor. In this way the FIFO is underutilized,
since only one descriptor is processed at a time, although
there is space in the FIFO for the BAM to process more.
Instead, keep adding descriptors to the FIFO till it is full;
that allows the BAM to continue to work on the next descriptor
immediately after signalling the completion interrupt for the
previous descriptor.
Also, when the client has not set DMA_PREP_INTERRUPT for
a descriptor, do not configure the BAM to trigger an interrupt
upon completion of that descriptor. This way we get an interrupt
only for the descriptor for which DMA_PREP_INTERRUPT was
requested, and there we signal completion of all the previously
completed descriptors. So we still do callbacks for all requested
descriptors, but the number of interrupts is reduced.
CURRENT:
------ ------- ---------------
|DES 0| |DESC 1| |DESC 2 + INT |
------ ------- ---------------
| | |
| | |
INTERRUPT: (INT) (INT) (INT)
CALLBACK: (CB) (CB) (CB)
MTD_SPEEDTEST READ PAGE: 3560 KiB/s
MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
IOZONE READ: 2456 KB/s
IOZONE WRITE: 1230 KB/s
bam dma interrupts (after tests): 96508
CHANGE:
------ ------- ---------------
|DES 0| |DESC 1| |DESC 2 + INT |
------ ------- ---------------
|
|
(INT)
(CB for 0, 1, 2)
MTD_SPEEDTEST READ PAGE: 3860 KiB/s
MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
IOZONE READ: 2677 KB/s
IOZONE WRITE: 1308 KB/s
bam dma interrupts (after tests): 58806
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When looking for an unused xbar_out lane we should also protect the
set_bit() call with the same mutex, to protect against concurrent threads
picking the same ID.
Fixes: ec9bfa1e1a ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The usage of of_device_get_match_data() reduces the code size a bit.
Furthermore, it prevents an improbable dereference when
of_match_device() returns NULL.
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Memory to memory transfers do not have any special alignment needs
regarding the acnt array size, but if one of the areas is in a memory
mapped region (like PCIe memory), we need to make sure that the acnt
array size is aligned with the memcpy parameters.
Before the "dmaengine: edma: Optimize memcpy operation" change the memcpy
was set up in a different way: acnt == number of bytes in a word based on
__ffs(src | dest | len), with bcnt and ccnt looping the necessary number
of words to complete the transfer.
Instead of reverting the commit we can fix it to make sure that the ACNT
size is aligned to the transfer.
Fixes: df6694f803 (dmaengine: edma: Optimize memcpy operation)
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The driver already supports DMA_DEV_TO_DEV in sdma_config(),
DMA_SLAVE_BUSWIDTH_2_BYTES and DMA_SLAVE_BUSWIDTH_1_BYTE in
sdma_prep_slave_sg(). So this patch adds them to the lists.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The enum xdma_ip_type is only used inside the Xilinx DMA driver and not
exported to any consumers (nor should it be). So move it from the global
header to driver file itself.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When running in software cyclic mode the driver currently does not go back
to the first segment once the last segment has been reached. Effectively
making the transfer non-cyclic.
Fix this by going back to the first segment once the last segment has been
reached for cyclic transfers.
Special care need to be taken to avoid a segment from being submitted
multiple times concurrently, which could happen for transfers with a number
of segments that is smaller than the DMA controller's internal queue.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In hardware cyclic mode the submitted segment is repeated. This means
hardware cyclic mode can only be used if the transfer has a single segment.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
- Removal of DMA_SG support as we have no users for this feature
- New driver for Altera / Intel mSGDMA IP core
- Support for memset in dmatest and qcom_hidma driver
- Update for non cyclic mode in k3dma, bunch of update in bam_dma, bcm sba-raid
- Constify device ids across drivers
Merge tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This one features the usual updates to the drivers and one good part
of removing DMA_SG from core as it has no users.
Summary:
- Remove DMA_SG support as we have no users for this feature
- New driver for Altera / Intel mSGDMA IP core
- Support for memset in dmatest and qcom_hidma driver
- Update for non cyclic mode in k3dma, bunch of update in bam_dma,
bcm sba-raid
- Constify device ids across drivers"
* tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (52 commits)
dmaengine: sun6i: support V3s SoC variant
dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
dmaengine: rcar-dmac: document R8A77970 bindings
dmaengine: xilinx_dma: Fix error code format specifier
dmaengine: altera: Use macros instead of structs to describe the registers
dmaengine: ti-dma-crossbar: Fix dra7 reserve function
dmaengine: pl330: constify amba_id
dmaengine: pl08x: constify amba_id
dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
dmaengine: bcm-sba-raid: Add debugfs support
dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
dmaengine: bcm-sba-raid: Increase number of free sba_request
dmaengine: bcm-sba-raid: Allow arbitrary number free sba_request
dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
...
Allwinner V3s has a DMA engine similar to the ones from A31, but with
fewer channels and DRQs.
Add support for it.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Originally we enable a special gate bit when the compatible indicates
A23/33.
But according to BSP sources and user manuals, more SoCs will need this
gate bit.
So make it a common quirk configured in the config struct.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
'err' is a signed int and error codes are typically negative numbers, so
use '%d' instead of '%u' to format the error code in the error message.
Fixes: ba16db36b5 ("dmaengine: vdma: Add clock support")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch moves from a struct declaration for the DMA controller
registers to macros with offsets to the base address. This is mainly
done to remove the sparse warnings, since the function parameter of
ioread32/iowrite32 is "void __iomem *" instead of a pointer to struct
members. With this patch applied, no sparse warning is seen anymore.
Please note that the struct for the descriptors is still kept in place,
as the code largely accesses the struct members as internal variables
before the complete struct is copied into the descriptor FIFO of the
DMA controller.
Additionally this patch also removes two warnings "variable xxx set but
not used" seen when compiling with "W=1". The registers need to be read
to flush the response FIFO, but nothing needs to be done with them. So
the code is correct here and the warning is a false one.
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
DMA crossbar uses 'xbar->dma_inuse' variable to manage allocated routes.
Each bit represents respective DMA channel. If the channel is free, bit
is set to '0', if channel is allocated, bit should be set to '1'.
In reserve function, the bits for requested DMA channels are cleared, so
they are not really reserved, but freed and become ready for allocation.
Signed-off-by: Alexander Smirnov <asmirnov@ilbers.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
amba_id entries are not supposed to change at runtime. All functions
work with const amba_id, so mark the non-const structs as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
amba_id entries are not supposed to change at runtime. All functions
work with const amba_id, so mark the non-const structs as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The SBA_REQUEST_STATE_COMPLETED state was added to keep track
of sba_request which got completed but cannot be freed because
underlying Async Tx descriptor was not ACKed by DMA client.
Instead of above, we can free the sba_request with non-ACKed
Async Tx descriptor and sba_alloc_request() will ensure that
it always allocates sba_request with ACKed Async Tx descriptor.
This alternate approach makes SBA_REQUEST_STATE_COMPLETED state
redundant hence this patch removes it.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We should explicitly ACK the mailbox message because after
sending the message we can know the send status via the error
attribute of brcm_message.
This will also help SBA-RAID to use the "txdone_ack" method
whenever the mailbox controller supports it.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds debugfs support to report stats via debugfs,
which in turn will help debug hang or error situations.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The SBA_REQUEST_STATE_RECEIVED state is now redundant because
received sba_request are immediately freed or moved to the completed
list in sba_process_received_request().
This patch removes redundant SBA_REQUEST_STATE_RECEIVED state.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>