Commit Graph

4087 Commits

Sameer Pujar
94dc8f4ed4 dmaengine: tegra210-adma: add pause/resume support
During an audio playback session it is observed that audio goes off after
a few seconds of continuous pause and play. No audio is heard even when
playback is resumed.

The reason for the above is that the ADMA driver currently does not handle
DMA_PAUSE/DMA_RESUME: the relevant callbacks for the dma_device are not
implemented. This patch implements the device_pause and device_resume
callbacks for the device. During pause the TRANSFER_PAUSE bit of the DMA
channel control register is set, and it is cleared during resume.
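
For illustration, a minimal sketch of such callbacks, assuming a hypothetical
register offset, bit position and channel struct rather than the real
tegra210-adma definitions:

#include <linux/bits.h>
#include <linux/dmaengine.h>
#include <linux/io.h>
#include <linux/kernel.h>

#define ADMA_CH_CTRL                    0x24    /* assumed channel control offset */
#define ADMA_CH_CTRL_TRANSFER_PAUSE     BIT(1)  /* assumed pause bit */

struct adma_chan {                              /* minimal assumed layout */
        struct dma_chan chan;
        void __iomem *regs;
};

static int adma_pause(struct dma_chan *dc)
{
        struct adma_chan *c = container_of(dc, struct adma_chan, chan);
        u32 ctrl = readl(c->regs + ADMA_CH_CTRL);

        /* set TRANSFER_PAUSE so the channel stops transferring data */
        writel(ctrl | ADMA_CH_CTRL_TRANSFER_PAUSE, c->regs + ADMA_CH_CTRL);
        return 0;
}

static int adma_resume(struct dma_chan *dc)
{
        struct adma_chan *c = container_of(dc, struct adma_chan, chan);
        u32 ctrl = readl(c->regs + ADMA_CH_CTRL);

        /* clear TRANSFER_PAUSE so the transfer continues */
        writel(ctrl & ~ADMA_CH_CTRL_TRANSFER_PAUSE, c->regs + ADMA_CH_CTRL);
        return 0;
}

In probe() such callbacks would be hooked up via dma_dev->device_pause =
adma_pause and dma_dev->device_resume = adma_resume.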

Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 16:13:42 +05:30
Sameer Pujar
433de642a7 dmaengine: tegra210-adma: add support for Tegra186/Tegra194
Add Tegra186-specific macro defines and a chip_data structure for
chip-specific information. A new compatible string is added to select the
relevant chip details. There is no major change for Tegra194, hence it can
use the same chip data.

The bits in the BURST_SIZE field of the ADMA CH_CONFIG register are
encoded differently on Tegra186 and Tegra194 compared with Tegra210.
On Tegra210 the bits are encoded as follows ...

 1 = WORD_1
 2 = WORDS_2
 3 = WORDS_4
 4 = WORDS_8
 5 = WORDS_16

Whereas on Tegra186 and Tegra194 the bits are encoded as ...

 0 = WORD_1
 1 = WORDS_2
 2 = WORDS_3
 3 = WORDS_4
 4 = WORDS_5
 ...
 15 = WORDS_16

Add helper functions for generating the correct burst size.
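
A hedged sketch of what such helpers could look like, following the two
tables above (helper names and the rounding of non-power-of-two bursts are
illustrative, not the exact driver code):

#include <linux/kernel.h>
#include <linux/log2.h>

static unsigned int tegra210_adma_burst(unsigned int words)
{
        /* 1 -> 1 (WORD_1), 2 -> 2, 4 -> 3, 8 -> 4, 16 -> 5 (WORDS_16) */
        words = clamp(words, 1U, 16U);
        return ilog2(rounddown_pow_of_two(words)) + 1;
}

static unsigned int tegra186_adma_burst(unsigned int words)
{
        /* 1 -> 0 (WORD_1), 2 -> 1, ..., 16 -> 15 (WORDS_16): words - 1 */
        return clamp(words, 1U, 16U) - 1;
}

The chip_data structure can then carry a function pointer so the common code
picks the right encoding per chip.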

Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 16:13:41 +05:30
Sameer Pujar
ded1f3db4c dmaengine: tegra210-adma: prepare for supporting newer Tegra chips
This is a preparatory patch to add support for Tegra186 and Tegra194 chips.
The following changes are necessary to make the driver code generic.
 * chip_data structure is enhanced to have chip specific details and
   following are the additions to the structure
   * Offset addresses for ADMA global and channel registers
   * Offset values for Tx and Rx channel selection
   * Maximum supported Tx and Rx channels
   * Tx and Rx channel request mask
   * ADMA channel register space size
 * Make use of above chip_data to generalise the driver code

Support for Tegra186 and Tegra194 will be added in subsequent patches of
the series.

Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 16:13:41 +05:30
Dan Carpenter
0b515abb6b dmaengine: at_xdmac: remove a stray bottom half unlock
We switched this code from spin_lock_bh() to vanilla spin_lock() but
there was one stray spin_unlock_bh() that was overlooked.  This
patch converts it to spin_unlock() as well.

Fixes: d8570d018f ("dmaengine: at_xdmac: move spin_lock_bh to spin_lock in tasklet")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 16:11:02 +05:30
Krzysztof Kozlowski
e095189a54 dmaengine: fsl-edma: Adjust indentation
Fix indentation and remove unneeded space after 'return' keyword.  This
fixes checkpatch warning:
    WARNING: Statements should start on a tabstop

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 15:50:26 +05:30
Krzysztof Kozlowski
32685552fd dmaengine: fsl-edma: Fix typo in Vybrid name
Fix typo in comment for Vybrid SoC family.

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 15:50:26 +05:30
Arnaud Pouliquen
2a4885abf5 dmaengine: stm32-dma: fix residue calculation in stm32-dma
In double-buffer mode, during residue calculation, the DMA can
automatically switch to the next transfer. Indeed the CT bit that
gives the position in the double buffer may have been updated by the
hardware during the calculation.
In this case the SxNDTR register value cannot be trusted.
If a transition is detected, we consider that the DMA has switched to
the beginning of the next sg.

Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 15:46:58 +05:30
Kefeng Wang
66c30aa679 dmaengine: nbpfaxi: Use dev_get_drvdata()
Use dev_get_drvdata() directly.

Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-29 10:47:15 +05:30
Kefeng Wang
95d47fb71d dmaengine: bcm-sba-raid: Use dev_get_drvdata()
Use dev_get_drvdata() directly.

Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-29 10:47:15 +05:30
Vinod Koul
c6504be539 dmaengine: stm32-dma: Fix unsigned variable compared with zero
Commit f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()") used
the unsigned variable irq to store the result and later check for negative
errors, so update the code to use a signed variable for this.

Fixes: f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-29 09:59:07 +05:30
Fabien Dessenne
f4fd2ec08f dmaengine: stm32-dma: use platform_get_irq()
platform_get_resource(pdev, IORESOURCE_IRQ) is not recommended for
requesting IRQ resources, as they may not be ready yet. Use
platform_get_irq() instead, which is preferred for getting an IRQ even if
it was not retrieved earlier.
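
A sketch of the preferred pattern (the helper name and the channel type are
placeholders, not the exact driver code):

#include <linux/interrupt.h>
#include <linux/platform_device.h>

struct stm32_dma_chan;  /* per-channel context, as defined by the driver */

static int stm32_dma_request_chan_irq(struct platform_device *pdev,
                                      struct stm32_dma_chan *chan,
                                      unsigned int index,
                                      irq_handler_t handler)
{
        int irq = platform_get_irq(pdev, index);

        /*
         * platform_get_irq() maps the IRQ on demand; a negative value
         * (possibly -EPROBE_DEFER) means it is not available yet.
         */
        if (irq < 0)
                return irq;

        return devm_request_irq(&pdev->dev, irq, handler, 0,
                                dev_name(&pdev->dev), chan);
}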

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 22:33:34 +05:30
Shun-Chih Yu
5bb5c3a3ac dmaengine: mediatek-cqdma: fix wrong register usage in mtk_cqdma_start
This patch fixes a wrong register usage in mtk_cqdma_start(). The
destination register should be MTK_CQDMA_DST2 instead.

Fixes: b1f01e48df ("dmaengine: mediatek: Add MediaTek Command-Queue DMA controller for MT6765 SoC")
Signed-off-by: Shun-Chih Yu <shun-chih.yu@mediatek.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 17:26:38 +05:30
Hiroyuki Yokoyama
8a6061c34a dmaengine: rcar-dmac: Update copyright information
Update copyright and string for Gen3.

Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 17:24:26 +05:30
Angus Ainslie (Purism)
941acd566b dmaengine: imx-sdma: Only check ratio on parts that support 1:1
On the i.MX8MQ B0 chip, an AHB/SDMA clock ratio of 2:1 can't be supported:
since the SDMA clock has to be increased to 250 MHz and AHB can't reach
500 MHz, 1:1 is used instead.

To limit this change to the i.MX8MQ for now, this patch also adds an
imx8mq-sdma compatible string.

Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Acked-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 17:18:21 +05:30
Colin Ian King
9e1630b809 dmaengine: xgene-dma: fix spelling mistake "descripto" -> "descriptor"
There is a spelling mistake in a chan_dbg message, fix it.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 16:57:15 +05:30
Andy Shevchenko
ffcfc20f74 dmaengine: idma64: Move driver name to the header
There are two drivers that rely on the iDMA 64-bit driver name to match.
Instead of duplicating the string in both of them, dedicate a header file
and share it between users.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 16:55:23 +05:30
Michal Suchanek
c7266d26dc dmaengine: bcm2835: Drop duplicate capability setting.
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Acked-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 16:53:08 +05:30
Sugar Zhang
2da254cc79 dmaengine: pl330: _stop: clear interrupt status
This patch issues DMAKILL to instruct the DMAC to immediately terminate
execution of a thread, then clears the interrupt status and, finally,
stops generating interrupts for DMA_SEV, to guarantee that the next DMA
start is clean. Otherwise an interrupt may be left over for the next
start and cause problems.

We can reproduce the problem as follows:

DMASEV modifies the event-interrupt resource: if INTEN configures it to
function as an interrupt, the DMAC will drive irq<event_num> HIGH to
generate an interrupt, and writing INTCLR clears the interrupt.

	DMA EXECUTING INSTRUCTS		DMA TERMINATE
		|				|
		|				|
	       ...			      _stop
		|				|
		|			spin_lock_irqsave
	     DMASEV				|
		|				|
		|			    mask INTEN
		|				|
		|			     DMAKILL
		|				|
		|			spin_unlock_irqrestore

In the above case an interrupt was left over, and if we unmask INTEN,
the DMAC will drive irq<event_num> HIGH to generate an interrupt.

To fix this, do as follows:

	DMA EXECUTING INSTRUCTS		DMA TERMINATE
		|				|
		|				|
	       ...			      _stop
		|				|
		|			spin_lock_irqsave
	     DMASEV				|
		|				|
		|			     DMAKILL
		|				|
		|			   clear INTCLR
		|			    mask INTEN
		|				|
		|			spin_unlock_irqrestore

Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 16:51:37 +05:30
Dragos Bogdan
9a05045d2a dmaengine: axi-dmac: Enable DMA_INTERLEAVE capability
Since device_prep_interleaved_dma() is already implemented, the
DMA_INTERLEAVE capability should be set.

Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-24 11:08:38 +05:30
Alexandru Ardelean
648865a79d dmaengine: axi-dmac: Don't check the number of frames for alignment
In 2D transfers (for the AXI DMAC), the number of frames (numf) represents
Y_LENGTH, and the length of a frame is X_LENGTH. 2D transfers are useful
for video transfers where screen resolutions (X * Y) are typically
aligned for X, but not for Y.

There is no requirement for Y_LENGTH to be aligned to the bus-width (or
anything), and this is also true for AXI DMAC.

Checking the Y_LENGTH for alignment causes false errors when initiating DMA
transfers. This change fixes this by checking only that the Y_LENGTH is
non-zero.

Fixes: 0e3b67b348 ("dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller")
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-24 11:08:38 +05:30
Lars-Peter Clausen
56009f0d2f dmaengine: axi-dmac: Infer synthesis configuration parameters hardware
Some synthesis time configuration parameters of the DMA controller can be
inferred from the hardware itself.

Use this information as it is more reliable than the information specified
in the devicetree, which might be outdated if the HDL project got changed.

Deprecate the devicetree properties that can be inferred from the hardware
itself.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-24 11:04:09 +05:30
Achim Dahlhoff
6e7da74775 dmaengine: sh: rcar-dmac: Fix glitch in dmaengine_tx_status
The tx_status poll in the rcar_dmac driver reads the status register
which indicates which chunk is busy (DMACHCRB). Afterwards the position
inside the chunk is read from DMATCRB. It is possible that the chunk
has changed between the two reads. The result is a non-monotonic
increase of the residue. Fix this by introducing a 'safe read' logic.
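
The idea of the 'safe read' logic, as a hedged sketch (the register offsets
below are illustrative; the driver uses its own channel accessors):

#include <linux/io.h>

#define RCAR_DMACHCRB   0x1c    /* illustrative offset: busy-chunk status */
#define RCAR_DMATCRB    0x38    /* illustrative offset: in-chunk count */

static u32 rcar_dmac_read_tcrb_safe(void __iomem *chan_base)
{
        u32 chcrb_before, chcrb_after, tcrb;

        /*
         * Re-read DMACHCRB after sampling DMATCRB and retry if the busy
         * chunk changed in between, so the two values stay consistent
         * and the reported residue stays monotonic.
         */
        do {
                chcrb_before = readl(chan_base + RCAR_DMACHCRB);
                tcrb = readl(chan_base + RCAR_DMATCRB);
                chcrb_after = readl(chan_base + RCAR_DMACHCRB);
        } while (chcrb_before != chcrb_after);

        return tcrb;
}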

Fixes: 73a47bd0da ("dmaengine: rcar-dmac: use TCRB instead of TCR for residue")
Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-23 10:45:34 +05:30
Dirk Behme
907bd68a2e dmaengine: sh: rcar-dmac: With cyclic DMA residue 0 is valid
With a cyclic DMA, a residue of 0 is not an indication of a completed
DMA. In the case of cyclic DMA, make sure that dma_set_residue() is called,
so that a residue of 0 is forwarded correctly to the caller.
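
A simplified sketch of the intended tx_status behaviour (the residue helper
below is assumed, and the actual driver code differs):

#include <linux/dmaengine.h>
#include "../dmaengine.h"       /* dma_cookie_status(), drivers/dma private header */

/* assumed residue helper, standing in for the driver's own */
static unsigned int rcar_dmac_get_residue(struct dma_chan *chan,
                                          dma_cookie_t cookie);

static enum dma_status rcar_dmac_tx_status_sketch(struct dma_chan *chan,
                                                  dma_cookie_t cookie,
                                                  struct dma_tx_state *txstate)
{
        enum dma_status status = dma_cookie_status(chan, cookie, txstate);

        if (status == DMA_COMPLETE || !txstate)
                return status;

        /*
         * A residue of 0 only means "done" for non-cyclic transfers; for
         * cyclic ones it is a valid in-flight value, so always forward it
         * through dma_set_residue() instead of skipping the call.
         */
        dma_set_residue(txstate, rcar_dmac_get_residue(chan, cookie));

        return status;
}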

Fixes: 3544d28788 ("dmaengine: rcar-dmac: use result of updated get_residue in tx_status")
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Signed-off-by: Yao Lihua <ylhuajnu@outlook.com>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-23 10:45:26 +05:30
Stefan Wahren
f147384774 dmaengine: bcm2835: Avoid GFP_KERNEL in device_prep_slave_sg
The commit af19b7ce76 ("mmc: bcm2835: Avoid possible races on
data requests") introduces a possible circular locking dependency,
which is triggered by swapping to the sdhost interface.

So instead of reintroducing the race condition, we can avoid this
situation by using GFP_NOWAIT for the allocation of the DMA buffer
descriptors.
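
A minimal sketch of the idea (the helper name is hypothetical; the point is
the GFP_NOWAIT flag in the prep path):

#include <linux/dmapool.h>
#include <linux/gfp.h>

struct bcm2835_dma_cb;  /* hardware control block, defined by the driver */

static struct bcm2835_dma_cb *bcm2835_dma_alloc_cb(struct dma_pool *pool,
                                                   dma_addr_t *paddr)
{
        /*
         * device_prep_slave_sg() may run in atomic context or under locks,
         * so fail fast with GFP_NOWAIT instead of sleeping as GFP_KERNEL
         * might do.
         */
        return dma_pool_alloc(pool, GFP_NOWAIT, paddr);
}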

Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Fixes: af19b7ce76 ("mmc: bcm2835: Avoid possible races on data requests")
Link: http://lists.infradead.org/pipermail/linux-rpi-kernel/2019-March/008615.html
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-23 10:43:54 +05:30
Nicolas Ferre
38a829a389 dmaengine: at_xdmac: only monitor overflow errors for peripheral xfer
The overflow error flag (ROI: Request Overflow Error) is only relevant
when the channel handles a peripheral synchronized transfer, not in the
case of a memory-to-memory transfer where there is no hardware request
signal.

Remove the use of this interrupt source in such a case. It's based on
the first descriptor which holds the configuration for the whole
linked list transfer.

Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-23 10:38:56 +05:30
Nicolas Ferre
223a4f4cfe dmaengine: at_xdmac: enhance channel errors handling in tasklet
Complement the identification of errors with stopping the channel and
dumping the descriptor that led to the error case.

Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-23 10:38:56 +05:30
Nicolas Ferre
e2c114c06d dmaengine: at_xdmac: remove BUG_ON macro in tasklet
Even if this case shouldn't happen when the controller is properly
programmed, it's still better to avoid dumping a kernel oops for it.
As the sequence may happen only for debugging purposes, log the error and
just finish the tasklet call.

Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-23 10:38:55 +05:30
Will Deacon
fb24ea52f7 drivers: Remove explicit invocations of mmiowb()
mmiowb() is now implied by spin_unlock() on architectures that require
it, so there is no reason to call it from driver code. This patch was
generated using coccinelle:

	@mmiowb@
	@@
	- mmiowb();

and invoked as:

$ for d in drivers include/linux/qed sound; do \
spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done

NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
spin_unlock(). However, pairing each mmiowb() removal in this patch with
the corresponding call to spin_unlock() is not at all trivial, so there
is a small chance that this change may regress any drivers incorrectly
relying on mmiowb() to order MMIO writes between CPUs using lock-free
synchronisation. If you've ended up bisecting to this commit, you can
reintroduce the mmiowb() calls using wmb() instead, which should restore
the old behaviour on all architectures other than some esoteric ia64
systems.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-08 12:01:02 +01:00
Pierre-Yves MORDRET
9dfec7ca0b dmaengine: stm32-mdma: Revert "dmaengine: stm32-mdma: Add a check on read_u32_array"
This reverts commit 906b40b246 ("dmaengine: stm32-mdma: Add a check on
read_u32_array")

As stated by the bindings, "st,ahb-addr-masks" is optional.
The check inserted by that commit makes the property mandatory and
prevents the MDMA from being probed when the property is not present.
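
A sketch of how the optional property can be handled without failing the
probe (the helper name is hypothetical):

#include <linux/of.h>

static int stm32_mdma_get_ahb_masks(struct device_node *np, u32 *masks,
                                    unsigned int max_masks)
{
        int count = of_property_count_u32_elems(np, "st,ahb-addr-masks");
        int ret;

        /* the property is optional: absence is not an error */
        if (count <= 0)
                return 0;

        if (count > max_masks)
                count = max_masks;

        ret = of_property_read_u32_array(np, "st,ahb-addr-masks", masks,
                                         count);
        if (ret)
                return ret;

        return count;   /* number of masks actually read */
}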

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 21:56:54 +05:30
Michael Hennerich
23b846396b dmaengine: axi-dmac: extend support for ZynqMP arch
The AXI DMAC is also used on the Xilinx ZynqMP architecture. This change
allows the driver to be enabled and used on it as well.

Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 21:52:28 +05:30
Jeff Xie
f177a43121 dmaengine: xgene-dma: move spin_lock_bh to spin_lock in tasklet
It is unnecessary to call spin_lock_bh in a tasklet.

Signed-off-by: Jeff Xie <chongguiguzi@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 21:52:28 +05:30
Jean-Nicolas Graux
2ff25c1c32 dmaengine: pl08x: be fair when re-assigning physical channel
Current way we find a waiting virtual channel for the next transfer
at the time one physical channel becomes free is not really fair.

More in details, in case there is more than one channel waiting at a time,
by just going through the arrays of memcpy and slave channels and stopping
as soon as state match waiting state, channels with high indexes can be
penalized.

Whenever dma engine is substantially overloaded so that we constantly
get several channels waiting, channels with highest indexes might not
be served for a substantial time which in the worse case, might hang
task that wait for dma transfer to complete.

This patch makes physical channel re-assignment more fair by storing
time in jiffies when a channel is put in waiting state. Whenever a
physical channel has to be re-assigned, this time is used to select
channel that is waiting for the longest time.
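
The selection logic, as a hedged sketch (the waiter struct below stands in
for the driver's per-channel state and the jiffies timestamp added by this
patch):

#include <linux/jiffies.h>

struct pl08x_waiter {
        int state;                      /* e.g. PL08X_CHAN_WAITING */
        unsigned long waiting_at;       /* jiffies when it started waiting */
};

#define PL08X_CHAN_WAITING      1       /* placeholder value */

static struct pl08x_waiter *pick_oldest_waiter(struct pl08x_waiter *chans,
                                               unsigned int nr)
{
        struct pl08x_waiter *oldest = NULL;
        unsigned int i;

        for (i = 0; i < nr; i++) {
                if (chans[i].state != PL08X_CHAN_WAITING)
                        continue;

                /* time_before() is wrap-around safe for jiffies values */
                if (!oldest || time_before(chans[i].waiting_at,
                                           oldest->waiting_at))
                        oldest = &chans[i];
        }

        return oldest;
}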

Signed-off-by: Jean-Nicolas Graux <jean-nicolas.graux@st.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Guion <nicolas.guion@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 21:52:28 +05:30
Lars-Peter Clausen
921234e0c5 dmaengine: axi-dmac: Split too large segments
The axi-dmac driver currently rejects transfers with segments that are
larger than what the hardware can handle.

Re-work the driver so that these large segments are split into multiple
segments instead, where each segment is smaller than or equal to the
maximum segment size.

This allows the driver to handle transfers with segments of arbitrary size.
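
The splitting itself boils down to something like the following sketch (the
segment record and function name are illustrative; the driver fills its own
descriptor entries):

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct dmac_seg {
        dma_addr_t addr;
        size_t len;
};

static int split_into_segments(struct dmac_seg *segs, unsigned int max_segs,
                               dma_addr_t addr, size_t len, size_t max_seg_len)
{
        unsigned int n = 0;

        while (len) {
                /* each emitted segment is at most max_seg_len bytes long */
                size_t chunk = min(len, max_seg_len);

                if (n == max_segs)
                        return -EINVAL;

                segs[n].addr = addr;
                segs[n].len = chunk;
                addr += chunk;
                len -= chunk;
                n++;
        }

        return n;       /* number of hardware segments produced */
}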

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Bogdan Togorean <bogdan.togorean@analog.com>
Signed-off-by: Alexandru Ardelean <alex.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 21:52:28 +05:30
Katsuhiro Suzuki
b45aef3aef dmaengine: pl330: introduce debugfs interface
This patch adds a debugfs interface to show the relationship between
DMA threads (the hardware resources for transferring data) and the DMA
channel IDs of the DMA slaves.

Typically, the PL330 has more slaves than DMA threads, so sometimes the
PL330 cannot allocate DMA threads for all slaves even if a user
specifies a DMA channel ID in the devicetree. This interface is useful
for checking whether DMA threads are allocated or not.

Below is an output sample:

$ sudo cat /sys/kernel/debug/ff1f0000.dmac
PL330 physical channels:
THREAD:         CHANNEL:
--------        -----
0               8
1               9
2               11
3               12
4               14
5               15
6               10
7               --

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 21:52:28 +05:30
Sameer Pujar
74fca241e6 dmaengine: tegra210-adma: update system sleep callbacks
If the driver is active until late suspend, where runtime PM cannot run,
a forced suspend is essential in that case to put the device into a
low-power state. Thus pm_runtime_force_suspend and pm_runtime_force_resume
are used as system sleep callbacks during system-wide PM transitions.
Late system sleep callbacks are used to ensure, for instance, that the
sound core has suspended any ongoing activity, including stopping the
ADMA if active, before we attempt to suspend the ADMA.
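
The resulting dev_pm_ops would look roughly like this (the runtime-PM
callback names are assumed; the SET_* macros are standard kernel PM helpers):

#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int tegra_adma_runtime_suspend(struct device *dev);
static int tegra_adma_runtime_resume(struct device *dev);

static const struct dev_pm_ops tegra_adma_dev_pm_ops = {
        SET_RUNTIME_PM_OPS(tegra_adma_runtime_suspend,
                           tegra_adma_runtime_resume, NULL)
        /* run late, after subsystems such as sound have quiesced the ADMA */
        SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                     pm_runtime_force_resume)
};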

Suggested-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 10:23:55 +05:30
Sameer Pujar
f6ed6491d5 dmaengine: tegra210-adma: use devm_clk_*() helpers
The ADMA driver is using the pm_clk_*() interface for managing clock
resources. With this it is observed that the clocks always remain ON. This
happens on Tegra devices which use the BPMP co-processor to manage clock
resources, where clocks are enabled during the prepare phase. This is
necessary because clock calls to the BPMP are always blocking. When the
pm_clk_*() interface is used on such Tegra devices, the clock prepare count
is not balanced until the remove call happens for the driver, and hence the
clocks are always seen ON. Thus this patch replaces pm_clk_*() with the
devm_clk_*() framework.
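
A sketch of the devm_clk_*() pattern (the clock name and struct layout are
assumptions):

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

struct tegra_adma {             /* minimal assumed layout */
        struct clk *ahub_clk;
};

static int tegra_adma_get_clk(struct platform_device *pdev,
                              struct tegra_adma *tdma)
{
        tdma->ahub_clk = devm_clk_get(&pdev->dev, "d_audio");   /* assumed name */
        return PTR_ERR_OR_ZERO(tdma->ahub_clk);
}

static int tegra_adma_runtime_resume(struct device *dev)
{
        struct tegra_adma *tdma = dev_get_drvdata(dev);

        /* prepare and enable together so the prepare count stays balanced */
        return clk_prepare_enable(tdma->ahub_clk);
}

static int tegra_adma_runtime_suspend(struct device *dev)
{
        struct tegra_adma *tdma = dev_get_drvdata(dev);

        clk_disable_unprepare(tdma->ahub_clk);
        return 0;
}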

Suggested-by: Mohan Kumar D <mkumard@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Sameer Pujar <spujar@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-25 10:23:42 +05:30
Andy Shevchenko
5ba846b1ee dmaengine: idma64: Use actual device for DMA transfers
The Intel IOMMU, when enabled, tries to find the domain of the device,
assuming it's a PCI one, during DMA operations such as mapping or
unmapping. Since we are splitting the actual PCI device into a couple of
children via the MFD framework (see drivers/mfd/intel-lpss.c for details),
the DMA device appears to be a platform one, and thus not the actual one
that performs DMA. In such a situation the IOMMU can't find or allocate
a proper domain for its operations. As a result, all DMA operations fail.

In order to fix this, supply the parent of the platform device
to the DMA engine framework and fix the filter functions accordingly.

We may rely on the fact that the parent is a real PCI device, because no
other configuration is present in the wild.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [for tty parts]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-03-21 19:48:26 +05:30
Linus Torvalds
31ef489a02 dmaengine updates for v5.1-rc1
- dmatest updates for modularizing common struct and code
  - remove SG support for VDMA xilinx IP and updates to driver
  - Update to dw driver to support Intel iDMA controllers
    multi-block support
  - tegra updates for proper reporting of residue
  - Add Snow Ridge ioatdma device id and support for IOATDMA v3.4
  - struct_size() usage and useless LIST_HEAD cleanups in subsystem.
  - qDMA controller driver for Layerscape SoCs
  - stm32-dma PM Runtime support
  - And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
    bcm2835, qcom_hidma etc

Merge tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:

 - dmatest updates for modularizing common struct and code

 - remove SG support for VDMA xilinx IP and updates to driver

 - Update to dw driver to support Intel iDMA controllers multi-block
   support

 - tegra updates for proper reporting of residue

 - Add Snow Ridge ioatdma device id and support for IOATDMA v3.4

 - struct_size() usage and useless LIST_HEAD cleanups in subsystem.

 - qDMA controller driver for Layerscape SoCs

 - stm32-dma PM Runtime support

 - And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
   bcm2835, qcom_hidma etc

* tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (81 commits)
  dmaengine: imx-sdma: fix consistent dma test failures
  dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
  dmaengine: imx-sdma: add clock ratio 1:1 check
  dmaengine: dmatest: move test data alloc & free into functions
  dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()
  dmaengine: dmatest: wrap src & dst data into a struct
  dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
  dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
  dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
  dmaengine: ioatdma: Add Snow Ridge ioatdma device id
  dmaengine: sprd: Change channel id to slave id for DMA cell specifier
  dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier
  dmaengine: mv_xor: Use correct device for DMA API
  Documentation :dmaengine: clarify DMA desc. pointer after submission
  Documentation: dmaengine: fix dmatest.rst warning
  dmaengine: k3dma: Add support for dma-channel-mask
  dmaengine: k3dma: Delete axi_config
  dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
  Documentation: bindings: dma: Add binding for dma-channel-mask
  Documentation: bindings: k3dma: Extend the k3dma driver binding to support hisi-asp
  ...
2019-03-14 09:11:54 -07:00
Vinod Koul
feb59d77a4 Merge branch 'topic/xilinx' into for-linus 2019-03-12 12:05:47 +05:30
Vinod Koul
42cb6e07c5 Merge branch 'topic/tegra' into for-linus 2019-03-12 12:05:43 +05:30
Vinod Koul
a74e7952bf Merge branch 'topic/stm' into for-linus 2019-03-12 12:05:39 +05:30
Vinod Koul
3de78f4f43 Merge branch 'topic/sh' into for-linus 2019-03-12 12:05:35 +05:30
Vinod Koul
1602a33570 Merge branch 'topic/mv' into for-linus 2019-03-12 12:04:16 +05:30
Vinod Koul
989e3af3af Merge branch 'topic/k3dma' into for-linus 2019-03-12 12:04:09 +05:30
Vinod Koul
84054481ee Merge branch 'topic/imx' into for-linus 2019-03-12 12:04:01 +05:30
Vinod Koul
79074168de Merge branch 'topic/fsl' into for-linus 2019-03-12 12:03:55 +05:30
Vinod Koul
278489c2e1 Merge branch 'topic/dw' into for-linus 2019-03-12 12:03:47 +05:30
Vinod Koul
5c196f5efa Merge branch 'topic/brcm' into for-linus 2019-03-12 12:03:42 +05:30
Linus Torvalds
2901752c14 pci-v5.1-changes

Merge tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

 - Use match_string() instead of reimplementing it (Andy Shevchenko)

 - Enable SERR# forwarding for all bridges (Bharat Kumar Gogada)

 - Use Latency Tolerance Reporting if already enabled by platform (Bjorn
   Helgaas)

 - Save/restore LTR info for suspend/resume (Bjorn Helgaas)

 - Fix DPC use of uninitialized data (Dongdong Liu)

 - Probe bridge window attributes only once at enumeration-time to fix
   device accesses during rescan (Bjorn Helgaas)

 - Return BAR size (not "size -1 ") from pci_size() to simplify code (Du
   Changbin)

 - Use config header type (not class code) to identify bridges more
   reliably (Honghui Zhang)

 - Work around Intel Denverton incorrect Trace Hub BAR size reporting
   (Alexander Shishkin)

 - Reorder pciehp cached state/hardware state updates to avoid missed
   interrupts (Mika Westerberg)

 - Turn ibmphp semaphores into completions or mutexes (Arnd Bergmann)

 - Mark expected switch fall-through (Mathieu Malaterre)

 - Use of_node_name_eq() for node name comparisons (Rob Herring)

 - Add ACS and pciehp quirks for HXT SD4800 (Shunyong Yang)

 - Consolidate Rohm Vendor ID definitions (Andy Shevchenko)

 - Use u32 (not __u32) for things not exposed to userspace (Logan
   Gunthorpe)

 - Fix locking semantics of bus and slot reset interfaces (Alex
   Williamson)

 - Update PCIEPORTBUS Kconfig help text (Hou Zhiqiang)

 - Allow portdrv to claim subtractive decode Ports so PCIe services will
   work for them (Honghui Zhang)

 - Report PCIe links that become degraded at run-time (Alexandru
   Gagniuc)

 - Blacklist Gigabyte X299 Root Port power management to fix Thunderbolt
   hotplug (Mika Westerberg)

 - Revert runtime PM suspend/resume callbacks that broke PME on network
   cable plug (Mika Westerberg)

 - Disable Data Link State Changed interrupts to prevent wakeup
   immediately after suspend (Mika Westerberg)

 - Extend altera to support Stratix 10 (Ley Foon Tan)

 - Allow building altera driver on ARM64 (Ley Foon Tan)

 - Replace Douglas with Tom Joseph as Cadence PCI host/endpoint
   maintainer (Lorenzo Pieralisi)

 - Add DT support for R-Car RZ/G2E (R8A774C0) (Fabrizio Castro)

 - Add dra72x/dra74x/dra76x SoC compatible strings (Kishon Vijay Abraham I)

 - Enable x2 mode support for dra72x/dra74x/dra76x SoC (Kishon Vijay
   Abraham I)

 - Configure dra7xx PHY to PCIe mode (Kishon Vijay Abraham I)

 - Simplify dwc (remove unnecessary header includes, name variables
   consistently, reduce inverted logic, etc) (Gustavo Pimentel)

 - Add i.MX8MQ support (Andrey Smirnov)

 - Add message to help debug dwc MSI-X mask bit errors (Gustavo
   Pimentel)

 - Work around imx7d PCIe PLL erratum (Trent Piepho)

 - Don't assert qcom reset GPIO during probe (Bjorn Andersson)

 - Skip dwc MSI init if MSIs have been disabled (Lucas Stach)

 - Use memcpy_fromio()/memcpy_toio() instead of plain memcpy() in PCI
   endpoint framework (Wen Yang)

 - Add interface to discover supported endpoint features to replace a
   bitfield that wasn't flexible enough (Kishon Vijay Abraham I)

 - Implement the new supported-feature interface for designware-plat,
   dra7xx, rockchip, cadence (Kishon Vijay Abraham I)

 - Fix issues with 64-bit BAR in endpoints (Kishon Vijay Abraham I)

 - Add layerscape endpoint mode support (Xiaowei Bao)

 - Remove duplicate struct hv_vp_set in favor of struct hv_vpset (Maya
   Nakamura)

 - Rework hv_irq_unmask() to use cpumask_to_vpset() instead of
   open-coded reimplementation (Maya Nakamura)

 - Align Hyper-V struct retarget_msi_interrupt arguments (Maya Nakamura)

 - Fix mediatek MMIO size computation to enable full size of available
   MMIO space (Honghui Zhang)

 - Fix mediatek DMA window size computation to allow endpoint DMA access
   to full DRAM address range (Honghui Zhang)

 - Fix mvebu prefetchable BAR regression caused by common bridge
   emulation that assumed all bridges had prefetchable windows (Thomas
   Petazzoni)

 - Make advk_pci_bridge_emul_ops static (Wei Yongjun)

 - Configure MPS settings for VMD root ports (Jon Derrick)

* tag 'pci-v5.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (92 commits)
  PCI: Update PCIEPORTBUS Kconfig help text
  PCI: Fix "try" semantics of bus and slot reset
  PCI/LINK: Report degraded links via link bandwidth notification
  dt-bindings: PCI: altera: Add altr,pcie-root-port-2.0
  PCI: altera: Enable driver on ARM64
  PCI: altera: Add Stratix 10 PCIe support
  PCI/PME: Fix possible use-after-free on remove
  PCI: aardvark: Make symbol 'advk_pci_bridge_emul_ops' static
  PCI: dwc: skip MSI init if MSIs have been explicitly disabled
  PCI: hv: Refactor hv_irq_unmask() to use cpumask_to_vpset()
  PCI: hv: Replace hv_vp_set with hv_vpset
  PCI: hv: Add __aligned(8) to struct retarget_msi_interrupt
  PCI: mediatek: Enlarge PCIe2AHB window size to support 4GB DRAM
  PCI: mediatek: Fix memory mapped IO range size computation
  PCI: dwc: Remove superfluous shifting in definitions
  PCI: dwc: Make use of GENMASK/FIELD_PREP
  PCI: dwc: Make use of BIT() in constant definitions
  PCI: dwc: Share code for dw_pcie_rd/wr_other_conf()
  PCI: dwc: Make use of IS_ALIGNED()
  PCI: imx6: Add code to request/control "pcie_aux" clock for i.MX8MQ
  ...
2019-03-09 14:57:08 -08:00
Anshuman Khandual
98fa15f34c mm: replace all open encodings for NUMA_NO_NODE
Patch series "Replace all open encodings for NUMA_NO_NODE", v3.

All these places for replacement were found by running the following
grep patterns on the entire kernel code.  Please let me know if this
might have missed some instances.  This might also have replaced some
false positives.  I will appreciate suggestions, inputs and review.

1. git grep "nid == -1"
2. git grep "node == -1"
3. git grep "nid = -1"
4. git grep "node = -1"

This patch (of 2):

At present there are multiple places where an invalid node number is
encoded as -1.  Even though implicitly understood, it is always better to
have macros for this.  Replace these open encodings for an invalid node
number with the global macro NUMA_NO_NODE.  This helps remove NUMA-related
assumptions like 'invalid node' from various places, redirecting them to a
common definition.

Link: http://lkml.kernel.org/r/1545127933-10711-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>	[ixgbe]
Acked-by: Jens Axboe <axboe@kernel.dk>			[mtip32xx]
Acked-by: Vinod Koul <vkoul@kernel.org>			[dmaengine.c]
Acked-by: Michael Ellerman <mpe@ellerman.id.au>		[powerpc]
Acked-by: Doug Ledford <dledford@redhat.com>		[drivers/infiniband]
Cc: Joseph Qi <jiangqi903@gmail.com>
Cc: Hans Verkuil <hverkuil@xs4all.nl>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05 21:07:14 -08:00
Angus Ainslie (Purism)
a3711d49be dmaengine: imx-sdma: fix consistent dma test failures
Without the copy being aligned, sdma1 fails ~10% of the time.

Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 23:26:05 +05:30
Angus Ainslie (Purism)
de7b7dca87 dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
On i.mx8mq, there are two sdma instances, and the common dma framework
will get a channel dynamically from any available sdma instance whether
it's the first sdma device or the second sdma device. Some IPs like
SAI only work with sdma2 not sdma1. To make sure the sdma channel is from
the correct sdma device, use the node pointer to match.

Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Daniel Baluta <daniel.baluta@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 23:25:51 +05:30
Angus Ainslie (Purism)
25aaa75df1 dmaengine: imx-sdma: add clock ratio 1:1 check
On the i.MX8 MScale B0 chip, an AHB/SDMA clock ratio of 2:1 can't be
supported: since the SDMA clock has to be increased to 250 MHz and AHB
can't reach 500 MHz, 1:1 is used instead.

Based on NXP commit MLK-16841-1 by Robin Gong <yibin.gong@nxp.com>

Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 23:25:42 +05:30
Alexandru Ardelean
3b6679f91e dmaengine: dmatest: move test data alloc & free into functions
This patch starts to take advantage of the `dmatest_data` struct by moving
the common allocation & freeing bits into functions.

Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 23:13:50 +05:30
Alexandru Ardelean
41d00bb7a6 dmaengine: dmatest: add short-hand buf_size var in dmatest_func()
This is just a cosmetic change, since this variable gets used quite a bit
inside the dmatest_func() routine.

Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 23:13:50 +05:30
Alexandru Ardelean
361deb7243 dmaengine: dmatest: wrap src & dst data into a struct
This change wraps the data for the source & destination buffers into a
`struct dmatest_data`. The rename patterns are:
 * src_cnt -> src->cnt
 * dst_cnt -> dst->cnt
 * src_off -> src->off
 * dst_off -> dst->off
 * thread->srcs -> src->aligned
 * thread->usrcs -> src->raw
 * thread->dsts -> dst->aligned
 * thread->udsts -> dst->raw

The intent is to make a function that moves duplicate parts of the code
into common alloc & free functions, which will unclutter the
`dmatest_func()` function.
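
For reference, a sketch of the wrapper implied by the renames above (the
exact definition in dmatest.c may differ slightly):

#include <linux/types.h>

struct dmatest_data {
        u8              **raw;          /* unaligned buffers as allocated */
        u8              **aligned;      /* buffers aligned for the transfer */
        unsigned int    cnt;            /* number of buffers */
        unsigned int    off;            /* offset into each buffer */
};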

Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 23:13:50 +05:30
Dave Jiang
528314b503 dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
IOATDMA 3.4 supports the PCIe LTR mechanism via non-standard registers.
These need to be set up in order to not suffer a performance impact and to
provide proper power management. The channel is set to active when it is
allocated, and to passive when it's freed.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 12:18:38 +05:30
Dave Jiang
e0100d4090 dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
Add support for a new feature on ioatdma 3.4 hardware that provides
descriptor pre-fetching in order to reduce small DMA latencies.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 12:18:38 +05:30
Dave Jiang
11e31e281b dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
IOATDMA v3.4 does not support DCA. Disable it.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 12:18:38 +05:30
Dave Jiang
4d75873f81 dmaengine: ioatdma: Add Snow Ridge ioatdma device id
Add the Snow Ridge Xeon-D ioatdma PCI device ID. This also applies to Ice
Lake SP Xeon, and introduces the ioatdma v3.4 platform. Also bump the
driver version to 5.0 since we are adding additional code for 3.4 support.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 12:18:38 +05:30
Baolin Wang
ffb5be7c70 dmaengine: sprd: Change channel id to slave id for DMA cell specifier
We will describe the slave id in the DMA cell specifier instead of the DMA
channel id; thus we should save the slave id from the DMA engine translation
function and remove the channel id validation.

Meanwhile, we do not need to set a default slave id in
sprd_dma_alloc_chan_resources(), so remove it.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-25 12:11:19 +05:30
Robin Murphy
3e5daee5ec dmaengine: mv_xor: Use correct device for DMA API
Using dma_dev->dev for mappings before it's assigned with the correct
device is unlikely to work as expected, and with future dma-direct
changes, passing a NULL device may end up crashing entirely. I don't
know enough about this hardware or the mv_xor_prep_dma_interrupt()
operation to implement the appropriate error-handling logic that would
have revealed those dma_map_single() calls failing on arm64 for as long
as the driver has been enabled there, but moving the assignment earlier
will at least make the current code operate as intended.

Fixes: 22843545b2 ("dma: mv_xor: Add support for DMA_INTERRUPT")
Reported-by: John David Anglin <dave.anglin@bell.net>
Tested-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-19 17:40:35 +05:30
Linus Torvalds
68d94a8424 dmaengine-fix-5.0-rc6
dmaengine fixes for v5.0-rc6
 
  - Fix in at_xdmac for wrongful channel state
  - Fix for imx driver for wrong callback invocation
  - Fix to bcm driver for interrupt race & transaction abort.
  - Fix in dmatest to abort in mapping error

Merge tag 'dmaengine-fix-5.0-rc6' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine fixes from Vinod Koul:
 - Fix in at_xdmac for wrongful channel state
 - Fix for imx driver for wrong callback invocation
 - Fix to bcm driver for interrupt race & transaction abort.
 - Fix in dmatest to abort in mapping error

* tag 'dmaengine-fix-5.0-rc6' of git://git.infradead.org/users/vkoul/slave-dma:
  dmaengine: dmatest: Abort test in case of mapping error
  dmaengine: bcm2835: Fix abort of transactions
  dmaengine: bcm2835: Fix interrupt race on RT
  dmaengine: imx-dma: fix wrong callback invoke
  dmaengine: at_xdmac: Fix wrongfull report of a channel as in use
2019-02-10 10:39:37 -08:00
Andy Shevchenko
6454368a80 dmaengine: dmatest: Abort test in case of mapping error
In case of a mapping error the DMA addresses are invalid and continuing
will corrupt system memory or potentially something else.

[  222.480310] dmatest: dma0chan7-copy0: summary 1 tests, 3 failures 6 iops 349 KB/s (0)
...
[  240.912725] check: Corrupted low memory at 00000000c7c75ac9 (2940 phys) = 5656000000000000
[  240.921998] check: Corrupted low memory at 000000005715a1cd (2948 phys) = 279f2aca5595ab2b
[  240.931280] check: Corrupted low memory at 000000002f4024c0 (2950 phys) = 5e5624f349e793cf
...

Abort any test if mapping failed.
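
A hypothetical helper in the spirit of the fix:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int dmatest_map_buf(struct device *dev, void *buf, size_t len,
                           enum dma_data_direction dir, dma_addr_t *addr)
{
        *addr = dma_map_single(dev, buf, len, dir);

        /* an invalid address must never be handed to the DMA engine */
        if (dma_mapping_error(dev, *addr))
                return -ENOMEM;

        return 0;
}

On failure the caller unmaps whatever it already mapped, counts the failure
and aborts the current test iteration instead of continuing.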

Fixes: 4076e755db ("dmatest: convert to dmaengine_unmap_data")
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 14:34:22 +05:30
Li Yu
c4994a98fa dmaengine: k3dma: Add support for dma-channel-mask
Add dma-channel-mask as a property for k3dma; it defines the available
DMA channels which a non-secure-mode driver can use.

One sample usage of this is in the Hi3660 SoC. DMA channel 0 is
reserved for lpm3, which is a coprocessor for power management. As a
result, any request in the kernel (which runs on the main processor and
in non-secure mode) should start from at least channel 1.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Tanglei Han <hantanglei@huawei.com>
Cc: Zhuangluan Su <suzhuangluan@hisilicon.com>
Cc: Ryan Grachek <ryan@edited.us>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: Guodong Xu <guodong.xu@linaro.org>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Li Yu <liyu65@hisilicon.com>
[jstultz: Reworked to use a channel mask]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 14:30:57 +05:30
Li Yu
1200e070d6 dmaengine: k3dma: Delete axi_config
axi_config controls whether DMA resources can be accessed in non-secure
mode, such as from the Linux kernel. The register should be set at the
bootloader stage and depends on the device.

Thus, this patch removes axi_config from the k3dma driver.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Tanglei Han <hantanglei@huawei.com>
Cc: Zhuangluan Su <suzhuangluan@hisilicon.com>
Cc: Ryan Grachek <ryan@edited.us>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: dmaengine@vger.kernel.org
Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Li Yu <liyu65@hisilicon.com>
Signed-off-by: Guodong Xu <guodong.xu@linaro.org>
[jstultz: Minor tweaks to commit message]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 14:30:50 +05:30
Youlin Wang
d4bdc39f5b dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
On the hi3660 hardware there are two (at least) DMA controllers,
the DMA-P (Peripheral DMA) and the DMA-A (Audio DMA). The
two blocks are similar, but have some slight differences. This
resulted in the vendor implementing two separate drivers which, after
review, have been condensed to re-use the existing k3dma driver.

Thus, this patch adds support for the new "hisi-pcm-asp-dma-1.0"
compatible string in the binding.

One difference with the DMA-A controller is that it does not need to
initialize a clock, so we skip this by adding and using SoC data flags.

After the above, this driver supports both the k3 and hisi_asp DMA
hardware.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Zhuangluan Su <suzhuangluan@hisilicon.com>
Cc: Ryan Grachek <ryan@edited.us>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: dmaengine@vger.kernel.org
Acked-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Youlin Wang <wwx575822@notesmail.huawei.com>
Signed-off-by: Tanglei Han <hantanglei@huawei.com>
[jstultz: Reworked to use of_match_data, commit msg improvements]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 14:30:40 +05:30
Vinod Koul
6d66c8d1a0 Merge branch 'fix/brcm' into fixes 2019-02-04 12:57:56 +05:30
Scott Wood
6175f6a7eb dmaengine: fsldma: Add 64-bit I/O accessors for powerpc64
Otherwise 64-bit PPC builds fail with undefined references
to these accessors.

Cc: Peng Ma <peng.ma@nxp.com>
Cc: Wen He <wen.he_1@nxp.com>
Fixes: 68997fff94afa (" dmaengine: fsldma: Adding macro FSL_DMA_IN/OUT implement for ARM platform")
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:56:54 +05:30
Lukas Wunner
37c22cabf2 dmaengine: bcm2835: Drop outdated comment on supported transactions
Remove an outdated comment claiming the driver only supports cyclic
transactions.  The driver has been supporting other transaction types
for more than two years.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:41:36 +05:30
Lukas Wunner
efdffc1aaf dmaengine: bcm2835: Drop gratuitous list deletion
The BCM2835 DMA driver deletes a channel from a list upon termination
without having added it to a list first.  Moreover that operation is
protected by a spinlock which isn't taken anywhere else.  These appear
to be remnants of an older version of the driver which accidentally
got mainlined.  Remove the dead code.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:41:32 +05:30
Lukas Wunner
603fe86be1 dmaengine: bcm2835: Enforce control block alignment
Per section 4.2.1.1 of the BCM2835 ARM Peripherals spec, control blocks
"must start at a 256 bit aligned address":
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

This rule is currently satisfied only by accident because struct
bcm2835_dma_cb has a size of 256 bit and the DMA pool API happens to
allocate blocks consecutively.  It seems safer to be explicit and tell
the DMA pool allocator about the required alignment.
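
A sketch of being explicit about it (the pool name is illustrative; the
control block layout follows the spec's eight 32-bit words):

#include <linux/dmapool.h>
#include <linux/types.h>

struct bcm2835_dma_cb {
        u32 info, src, dst, length, stride, next, pad[2];
};

static struct dma_pool *bcm2835_dma_create_cb_pool(struct device *dev)
{
        /* 256-bit (32-byte) alignment, stated explicitly as the 4th argument */
        return dma_pool_create("bcm2835-dma-cb", dev,
                               sizeof(struct bcm2835_dma_cb), 32, 0);
}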

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:41:28 +05:30
Lukas Wunner
3e05ada043 dmaengine: bcm2835: Return void from abort of transactions
bcm2835_dma_abort() returns an int but bcm2835_dma_terminate_all() (its
sole caller) does not evaluate the return value. Change the return type
to void.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:41:18 +05:30
Lukas Wunner
9e528c799d dmaengine: bcm2835: Fix abort of transactions
There are multiple issues with bcm2835_dma_abort() (which is called on
termination of a transaction):

* The algorithm to abort the transaction first pauses the channel by
  clearing the ACTIVE flag in the CS register, then waits for the PAUSED
  flag to clear.  Page 49 of the spec documents the latter as follows:

  "Indicates if the DMA is currently paused and not transferring data.
   This will occur if the active bit has been cleared [...]"
   https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

  So the function is entering an infinite loop because it is waiting for
  PAUSED to clear which is always set due to the function having cleared
  the ACTIVE flag.  The only thing that's saving it from itself is the
  upper bound of 10000 loop iterations.

  The code comment says that the intention is to "wait for any current
  AXI transfer to complete", so the author probably wanted to check the
  WAITING_FOR_OUTSTANDING_WRITES flag instead.  Amend the function
  accordingly.

* The CS register is only read at the beginning of the function.  It
  needs to be read again after pausing the channel and before checking
  for outstanding writes, otherwise writes which were issued between
  the register read at the beginning of the function and pausing the
  channel may not be waited for.

* The function seeks to abort the transfer by writing 0 to the NEXTCONBK
  register and setting the ABORT and ACTIVE flags.  Thereby, the 0 in
  NEXTCONBK is sought to be loaded into the CONBLK_AD register.  However
  experimentation has shown this approach to not work:  The CONBLK_AD
  register remains the same as before and the CS register contains
  0x00000030 (PAUSED | DREQ_STOPS_DMA).  In other words, the control
  block is not aborted but merely paused and it will be resumed once the
  next DMA transaction is started.  That is absolutely not the desired
  behavior.

  A simpler approach is to set the channel's RESET flag instead.  This
  reliably zeroes the NEXTCONBK as well as the CS register.  It requires
  less code and only a single MMIO write.  This is also what popular
  user space DMA drivers do, e.g.:
  https://github.com/metachris/RPIO/blob/master/source/c_pwm/pwm.c

  Note that the spec is contradictory whether the NEXTCONBK register
  is writeable at all.  On the one hand, page 41 claims:

  "The value loaded into the NEXTCONBK register can be overwritten so
  that the linked list of Control Block data structures can be
  dynamically altered. However it is only safe to do this when the DMA
  is paused."

  On the other hand, page 40 specifies:

  "Only three registers in each channel's register set are directly
  writeable (CS, CONBLK_AD and DEBUG). The other registers (TI,
  SOURCE_AD, DEST_AD, TXFR_LEN, STRIDE & NEXTCONBK), are automatically
  loaded from a Control Block data structure held in external memory."

Fixes: 96286b5766 ("dmaengine: Add support for BCM2835")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.14+
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Cc: Clive Messer <clive.m.messer@gmail.com>
Cc: Matthias Reichl <hias@horus.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:41:13 +05:30
Lukas Wunner
f7da7782ab dmaengine: bcm2835: Fix interrupt race on RT
If IRQ handlers are threaded (either because CONFIG_PREEMPT_RT_BASE is
enabled or "threadirqs" was passed on the command line) and if system
load is sufficiently high that wakeup latency of IRQ threads degrades,
SPI DMA transactions on the BCM2835 occasionally break like this:

ks8851 spi0.0: SPI transfer timed out
bcm2835-dma 3f007000.dma: DMA transfer could not be terminated
ks8851 spi0.0 eth2: ks8851_rdfifo: spi_sync() failed

The root cause is an assumption made by the DMA driver which is
documented in a code comment in bcm2835_dma_terminate_all():

/*
 * Stop DMA activity: we assume the callback will not be called
 * after bcm_dma_abort() returns (even if it does, it will see
 * c->desc is NULL and exit.)
 */

That assumption falls apart if the IRQ handler bcm2835_dma_callback() is
threaded: A client may terminate a descriptor and issue a new one
before the IRQ handler had a chance to run. In fact the IRQ handler may
miss an *arbitrary* number of descriptors. The result is the following
race condition:

1. A descriptor finishes, its interrupt is deferred to the IRQ thread.
2. A client calls dma_terminate_async() which sets channel->desc = NULL.
3. The client issues a new descriptor. Because channel->desc is NULL,
   bcm2835_dma_issue_pending() immediately starts the descriptor.
4. Finally the IRQ thread runs and writes BCM2835_DMA_INT to the CS
   register to acknowledge the interrupt. This clears the ACTIVE flag,
   so the newly issued descriptor is paused in the middle of the
   transaction. Because channel->desc is not NULL, the IRQ thread
   finalizes the descriptor and tries to start the next one.

I see two possible solutions: The first is to call synchronize_irq()
in bcm2835_dma_issue_pending() to wait until the IRQ thread has
finished before issuing a new descriptor. The downside of this approach
is unnecessary latency if clients desire rapidly terminating and
re-issuing descriptors and don't have any use for an IRQ callback.
(The SPI TX DMA channel is a case in point.)

A better alternative is to make the IRQ thread recognize that it has
missed descriptors and avoid finalizing the newly issued descriptor.
So first of all, set the ACTIVE flag when acknowledging the interrupt.
This keeps a newly issued descriptor running.

If the descriptor was finished, the channel remains idle despite the
ACTIVE flag being set. However the ACTIVE flag can then no longer be
used to check whether the channel is idle, so instead check whether
the register containing the current control block address is zero
and finalize the current descriptor only if so.

That way, there is no impact on latency and throughput if the client
doesn't care for the interrupt: Only minimal additional overhead is
introduced for non-cyclic descriptors as one further MMIO read is
necessary per interrupt to check for idleness of the channel. Cyclic
descriptors are sped up slightly by removing one MMIO write per
interrupt.
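
In pseudo-form the resulting interrupt handler looks roughly like the
sketch below (a minimal illustration only, with locking and the cyclic
case omitted; the CONBLK_AD offset macro, the chan_base field and the
finalize helper are assumed names, not the exact driver code):

static irqreturn_t bcm2835_dma_callback(int irq, void *data)
{
	struct bcm2835_chan *c = data;

	/* Acknowledge the interrupt with ACTIVE set so that a descriptor
	 * issued after a termination is not paused mid-transaction. */
	writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE,
	       c->chan_base + BCM2835_DMA_CS);

	/* ACTIVE no longer indicates idleness; the channel is idle only
	 * when no control block address (CONBLK_AD) is loaded. */
	if (c->desc && readl(c->chan_base + BCM2835_DMA_ADDR) == 0)
		bcm2835_dma_finalize_desc(c);	/* assumed helper */

	return IRQ_HANDLED;
}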

Fixes: 96286b5766 ("dmaengine: Add support for BCM2835")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.14+
Cc: Frank Pavlic <f.pavlic@kunbus.de>
Cc: Martin Sperl <kernel@martin.sperl.org>
Cc: Florian Meier <florian.meier@koalo.de>
Cc: Clive Messer <clive.m.messer@gmail.com>
Cc: Matthias Reichl <hias@horus.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Florian Kauer <florian.kauer@koalo.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:40:45 +05:30
Leonid Iziumtsev
341198eda7 dmaengine: imx-dma: fix wrong callback invoke
Once the "ld_queue" list is not empty, next descriptor will migrate
into "ld_active" list. The "desc" variable will be overwritten
during that transition. And later the dmaengine_desc_get_callback_invoke()
will use it as an argument. As result we invoke wrong callback.
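
A minimal sketch of the problematic sequence in the tasklet (simplified,
with names only roughly following the driver):

/* the finished descriptor is taken from ld_active ... */
desc = list_first_entry(&imxdmac->ld_active, struct imxdma_desc, node);

if (!list_empty(&imxdmac->ld_queue)) {
	/* ... but while starting the next transfer the queued descriptor
	 * migrates to ld_active and 'desc' is overwritten */
	desc = list_first_entry(&imxdmac->ld_queue,
				struct imxdma_desc, node);
	list_move_tail(imxdmac->ld_queue.next, &imxdmac->ld_active);
}

/* ... so this invokes the callback of the *new* descriptor */
dmaengine_desc_get_callback_invoke(&desc->desc, NULL);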

That behaviour has been in place since
commit fcaaba6c71 ("dmaengine: imx-dma: fix callback path in tasklet").
But after commit 4cd13c21b2 ("softirq: Let ksoftirqd do its job")
things got worse, since the possible delay between tasklet_schedule()
in the DMA irq handler and the actual tasklet execution got bigger.
That gave more time for a new DMA request to be submitted and
put into the "ld_queue" list.

It has been noticed that this DMA issue causes problems for the
"mxc-mmc" driver. While stressing the system with heavy network traffic
and simultaneously reading from and writing to the SD card, a timeout
may happen:

10013000.sdhci: mxcmci_watchdog: read time out (status = 0x30004900)

That often leads to file system corruption.

Signed-off-by: Leonid Iziumtsev <leonid.iziumtsev@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Cc: stable@vger.kernel.org
2019-02-04 12:35:12 +05:30
Laurentiu Tudor
0fa89f972d dmaengine: fsl-edma: dma map slave device address
This mapping needs to be created in order for slave DMA transfers
to work on systems with an SMMU. The implementation mostly mimics the
one in the pl330 dma driver, authored by Robin Murphy.
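
Conceptually the change boils down to obtaining a DMA address for the
slave FIFO register via dma_map_resource() instead of using its raw
physical address (a sketch only; the field and variable names are
assumptions, and dma_unmap_resource() is called when the channel is
released):

/* create an IOMMU mapping so the eDMA can reach the slave register */
fsl_chan->dma_dev_addr = dma_map_resource(chan->device->dev,
					  cfg->src_addr, cfg->src_addr_width,
					  DMA_BIDIRECTIONAL, 0);
if (dma_mapping_error(chan->device->dev, fsl_chan->dma_dev_addr))
	return -EIO;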

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-04 12:32:53 +05:30
Codrin Ciubotariu
dc3f595b66 dmaengine: at_xdmac: Fix wrongful report of a channel as in use
The atchan->status variable is used to store two different pieces of
information:
 - the channel interrupt status passed from the interrupt handler to
   the tasklet;
 - channel state such as whether it is cyclic or paused.

This causes a bug when device_terminate_all() is called (clearing
AT_XDMAC_CHAN_IS_CYCLIC in atchan->status) and a late End of Block
interrupt (AT_XDMAC_CIS_BIS) then arrives and sets bit 0 of
atchan->status. Bit 0 is also used for AT_XDMAC_CHAN_IS_CYCLIC, so when
a new descriptor for a cyclic transfer is created, the driver reports
the channel as in use:

if (test_and_set_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status)) {
	dev_err(chan2dev(chan), "channel currently used\n");
	return NULL;
}

This patch fixes the bug by adding a separate struct member that keeps
the interrupt status apart from the channel state bits.
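
Roughly, the idea is the following (a sketch only; the new member name
and the comments are illustrative, not the exact driver code):

struct at_xdmac_chan {
	/* ... */
	unsigned long	status;		/* channel state: cyclic, paused, ... */
	u32		irq_status;	/* CIS bits saved by the irq handler */
	/* ... */
};

/* interrupt handler:  atchan->irq_status = pending;
 * tasklet:            reads atchan->irq_status instead of atchan->status
 */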

Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-02-02 15:55:26 +05:30
Andy Shevchenko
0ce26a1c31 PCI: Move Rohm Vendor ID to generic list
Move the Rohm Vendor ID to pci_ids.h instead of defining it in several
drivers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
2019-02-01 17:24:52 -06:00
Phuong Nguyen
d9140a0da4 dmaengine: usb-dmac: Make DMAC system sleep callbacks explicit
This commit fixes the issue that the USB-DMAC hangs silently after the
system resumes on R-Car Gen3, which means renesas_usbhs will not work
correctly when using the USB-DMAC for bulk transfer, e.g. with ethernet
or serial gadgets.

The issue can be reproduced by these steps:
 1. modprobe g_serial
 2. Suspend and resume system.
 3. connect a usb cable to host side
 4. Transfer data from Host to Target
 5. cat /dev/ttyGS0 (Target side)
 6. echo "test" > /dev/ttyACM0 (Host side)

The 'cat' will not output anything. However, the system otherwise still
works normally.

Currently, the USB-DMAC driver does not have system sleep callbacks,
hence it relies on the PM core to force runtime suspend/resume in order
to suspend and reinitialize the USB-DMAC during system resume. After
commit 17218e0092 ("PM / genpd: Stop/start devices without
pm_runtime_force_suspend/resume()"), the PM core will not force
runtime suspend/resume anymore, so this issue happens.

To solve this, make system suspend/resume explicit by using
pm_runtime_force_{suspend,resume}() as the system sleep callbacks.
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() is used to make sure the USB-DMAC is
suspended after and initialized before renesas_usbhs.
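
A sketch of the resulting PM ops wiring (symbol names are illustrative;
only the sleep callbacks are the point here):

static const struct dev_pm_ops usb_dmac_pm = {
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				      pm_runtime_force_resume)
	/* existing runtime PM callbacks remain unchanged */
};

static struct platform_driver usb_dmac_driver = {
	.driver = {
		.name	= "usb-dmac",
		.pm	= &usb_dmac_pm,
	},
	/* probe/remove unchanged */
};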

Signed-off-by: Phuong Nguyen <phuong.nguyen.xw@renesas.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
[shimoda: revise the commit log and add Cc tag]
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 16:29:52 +05:30
Andy Duan
ceaf522651 dmaengine: imx-sdma: pass ->dev to dma_alloc_coherent() API
Pass ->dev to the dma_alloc_coherent() API. We need this
because dma_alloc_coherent() makes use of the dev parameter,
and passing NULL results in a crash.
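
For example (a sketch; the buffer and size variables are illustrative,
and 'sdma->dev' stands for the SDMA device's struct device):

/* before: buf = dma_alloc_coherent(NULL, size, &phys, GFP_KERNEL); */
buf = dma_alloc_coherent(sdma->dev, size, &phys, GFP_KERNEL);
if (!buf)
	return -ENOMEM;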

Signed-off-by: Andy Duan <fugang.duan@nxp.com>
Signed-off-by: Daniel Baluta <daniel.baluta@nxp.com>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 16:25:31 +05:30
Vinod Koul
452fd6dc86 dmaengine: imx-dma: change return of 'imxdma_sg_next' to void
The return value of function 'imxdma_sg_next' is not checked anywhere,
so make its return type void.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 11:57:26 +05:30
Vinod Koul
da5035f377 dmaengine: imx-dma: change variable 'now' type to size_t
'now' is used to hold a size, so it is better to change the variable
type to size_t.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 11:42:44 +05:30
Anders Roxell
9227ab5643 dmaengine: imx-dma: fix warning comparison of distinct pointer types
The warning got introduced by commit 930507c183 ("arm64: add basic
Kconfig symbols for i.MX8"), since the driver then got enabled for
arm64. The warning hasn't been seen before because size_t was
'unsigned int' when built on arm32.

../drivers/dma/imx-dma.c: In function ‘imxdma_sg_next’:
../include/linux/kernel.h:846:29: warning: comparison of distinct pointer types lacks a cast
   (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
                             ^~
../include/linux/kernel.h:860:4: note: in expansion of macro ‘__typecheck’
   (__typecheck(x, y) && __no_side_effects(x, y))
    ^~~~~~~~~~~
../include/linux/kernel.h:870:24: note: in expansion of macro ‘__safe_cmp’
  __builtin_choose_expr(__safe_cmp(x, y), \
                        ^~~~~~~~~~
../include/linux/kernel.h:879:19: note: in expansion of macro ‘__careful_cmp’
 #define min(x, y) __careful_cmp(x, y, <)
                   ^~~~~~~~~~~~~
../drivers/dma/imx-dma.c:288:8: note: in expansion of macro ‘min’
  now = min(d->len, sg_dma_len(sg));
        ^~~

Rework the code to use min_t() and pass in size_t, so the minimum of
the two values is computed using the specified type.
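
With that, the line from the warning above becomes:

	now = min_t(size_t, d->len, sg_dma_len(sg));

min_t() casts both arguments to the given type before comparing, which
avoids the distinct-pointer-types check done by min().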

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 11:15:10 +05:30
YueHaibing
c2be36ac21 dmaengine: xilinx_dma: remove set but not used variable 'tail_segment'
Fixes gcc '-Wunused-but-set-variable' warning:

drivers/dma/xilinx/xilinx_dma.c: In function 'xilinx_vdma_start_transfer':
drivers/dma/xilinx/xilinx_dma.c:1104:33: warning:
 variable 'tail_segment' set but not used [-Wunused-but-set-variable]

It has not been used since commit b8349172b4 ("dmaengine: xilinx_dma:
Drop SG support for VDMA IP").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 10:53:29 +05:30
Gustavo A. R. Silva
48b02a85fe dmaengine: axi-dmac: Use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 10:51:38 +05:30
Gustavo A. R. Silva
3c215fd868 dmaengine: timb_dma: Use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 10:50:07 +05:30
Gustavo A. R. Silva
863326a6ee dmaengine: tegra210-adma: Use struct_size() in devm_kzalloc()
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 10:48:23 +05:30
Vinod Koul
73bf95f57b Merge branch 'topic/qcom' into for-linus 2019-01-20 10:48:05 +05:30
Shunyong Yang
546c054755 dmaengine: qcom_hidma: assign channel cookie correctly
When dma_cookie_complete() is called in hidma_process_completed(),
dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status(). Then,
hidma_txn_is_success() will be called and uses the channel cookie
mchan->last_success to do an additional DMA status check. The current
code assigns mchan->last_success after dma_cookie_complete(). This
causes a race condition in which dma_cookie_status() returns
DMA_COMPLETE before mchan->last_success is assigned correctly. The race
causes hidma_tx_status() to return DMA_ERROR even though the transaction
actually succeeded. Moreover, in the async_tx case, it will cause a
timeout panic in async_tx_quiesce().

 Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for
 transaction
 ...
 Call trace:
 [<ffff000008089994>] dump_backtrace+0x0/0x1f4
 [<ffff000008089bac>] show_stack+0x24/0x2c
 [<ffff00000891e198>] dump_stack+0x84/0xa8
 [<ffff0000080da544>] panic+0x12c/0x29c
 [<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx]
 [<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx]
 [<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456]
 [<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456]
 [<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456]
 [<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456]
 [<ffff000008736538>] md_thread+0x108/0x168
 [<ffff0000080fb1cc>] kthread+0x10c/0x138
 [<ffff000008084d34>] ret_from_fork+0x10/0x18
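
The fix is essentially an ordering change in the completion path
(a minimal sketch with locking and error handling omitted; 'mdesc'
stands for the completed descriptor and the success check is
illustrative):

/* record the cookie before marking it complete, so hidma_tx_status()
 * can never observe DMA_COMPLETE with a stale mchan->last_success */
if (transfer_succeeded)
	mchan->last_success = mdesc->desc.cookie;
dma_cookie_complete(&mdesc->desc);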

Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 10:43:34 +05:30
Shunyong Yang
875aac8a46 dmaengine: qcom_hidma: initialize tx flags in hidma_prep_dma_*
In async_tx_test_ack(), the flags in struct dma_async_tx_descriptor
are used to check the ACK status. As hidma reuses descriptors from a
free list when hidma_prep_dma_*(memcpy/memset) is called, the flag will
remain ACKed if the descriptor has been used before. This will cause a
BUG_ON in async_tx_quiesce().

  kernel BUG at crypto/async_tx/async_tx.c:282!
  Internal error: Oops - BUG: 0 1 SMP
  ...
  task: ffff8017dd3ec000 task.stack: ffff8017dd3e8000
  PC is at async_tx_quiesce+0x54/0x78 [async_tx]
  LR is at async_trigger_callback+0x98/0x110 [async_tx]

This patch initializes the flags in dma_async_tx_descriptor with the
flags passed from the caller when hidma_prep_dma_*(memcpy/memset) is
called.
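
In the prep routines this amounts to the following (a sketch; 'mdesc'
is the descriptor taken from the free list, assuming its embedded
dma_async_tx_descriptor is reachable as mdesc->desc):

/* a reused descriptor may still carry DMA_CTRL_ACK from its previous
 * transaction; re-initialise it from the caller's flags */
mdesc->desc.flags = flags;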

Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-20 10:43:34 +05:30
Andy Shevchenko
bdcb2c5d5e dmaengine: dw-axi-dmac: Fix trivia typo
Field name ststus_hi should be spelled as status_hi.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-08 22:36:18 +05:30
Robin Gong
ad0d92d7ba dmaengine: imx-sdma: refine to load context only once
The context needs to be loaded only once before the channel starts
running, but currently sdma_config_channel() and the dma_prep_*
functions both duplicate the sdma_load_context() call. Refine this so
the context is loaded only once before the channel runs and is reloaded
after the channel has been terminated.
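
One way to express this is a per-channel flag, roughly as sketched
below ('context_loaded' is an assumed member name):

static int sdma_load_context(struct sdma_channel *sdmac)
{
	if (sdmac->context_loaded)
		return 0;

	/* ... program the channel context as before ... */

	sdmac->context_loaded = true;
	return 0;
}

/* and the terminate path clears the flag so that the context is
 * reloaded the next time the channel is set up:
 *	sdmac->context_loaded = false;
 */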

Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-08 22:33:27 +05:30
Luis Chamberlain
750afb08ca cross-tree: phase out dma_zalloc_coherent()
dma_alloc_coherent() already zeroes out the memory it returns, so
using dma_zalloc_coherent() is superfluous. Phase it out.

This change was generated with the following Coccinelle SmPL patch:

@ replace_dma_zalloc_coherent @
expression dev, size, data, handle, flags;
@@

-dma_zalloc_coherent(dev, size, handle, flags)
+dma_alloc_coherent(dev, size, handle, flags)

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
[hch: re-ran the script on the latest tree]
Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-01-08 07:58:37 -05:00
Gustavo A. R. Silva
de1fa4f61b dmaengine: fsl-edma: use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 18:08:17 +05:30
Gustavo A. R. Silva
d3d70373f6 dmaengine: tegra-apb: Use struct_size() in devm_kzalloc()
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 18:07:58 +05:30
Gustavo A. R. Silva
edd3c38999 dmaengine: qcom: bam_dma: use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 18:07:21 +05:30
Gustavo A. R. Silva
55f53b9c17 dmaengine: st_fdma: use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 18:05:22 +05:30
Gustavo A. R. Silva
ed414d5803 dmaengine: dma-jz4780: Use struct_size() in devm_kzalloc()
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This issue was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 18:05:17 +05:30
Gustavo A. R. Silva
5fde600537 dmaengine: bcm2835: Use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 18:05:17 +05:30