Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

Pull slave-dmaengine updates from Vinod Koul:
 "For dmaengine contributions we have:
   - designware cleanup by Andy
   - my series moving device_control users to dmaengine_xxx APIs for
     later removal of device_control API
   - minor fixes spread over drivers mainly mv_xor, pl330, mmp, imx-sdma
     etc"

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (60 commits)
  serial: atmel: add missing dmaengine header
  dmaengine: remove FSLDMA_EXTERNAL_START
  dmaengine: freescale: remove FSLDMA_EXTERNAL_START control method
  carma-fpga: move to fsl_dma_external_start()
  carma-fpga: use dmaengine_xxx() API
  dmaengine: freescale: add and export fsl_dma_external_start()
  dmaengine: add dmaengine_prep_dma_sg() helper
  video: mx3fb: use dmaengine_terminate_all() API
  serial: sh-sci: use dmaengine_terminate_all() API
  net: ks8842: use dmaengine_terminate_all() API
  mtd: sh_flctl: use dmaengine_terminate_all() API
  mtd: fsmc_nand: use dmaengine_terminate_all() API
  V4L2: mx3_camera: use dmaengine_pause() API
  dmaengine: coh901318: use dmaengine_terminate_all() API
  pata_arasan_cf: use dmaengine_terminate_all() API
  dmaengine: edma: check for echan->edesc => NULL in edma_dma_pause()
  dmaengine: dw: export probe()/remove() and Co to users
  dmaengine: dw: enable and disable controller when needed
  dmaengine: dw: always export dw_dma_{en,dis}able
  dmaengine: dw: introduce dw_dma_on() helper
  ...
Linus Torvalds 2014-10-18 18:11:04 -07:00
commit 52d589a01d
50 changed files with 833 additions and 690 deletions

View File

@ -0,0 +1,62 @@
QCOM ADM DMA Controller
Required properties:
- compatible: must contain "qcom,adm" for IPQ/APQ8064 and MSM8960
- reg: Address range for DMA registers
- interrupts: Should contain one interrupt shared by all channels
- #dma-cells: must be <2>. First cell denotes the channel number. Second cell
denotes CRCI (client rate control interface) flow control assignment.
- clocks: Should contain the core clock and interface clock.
- clock-names: Must contain "core" for the core clock and "iface" for the
interface clock.
- resets: Must contain an entry for each entry in reset-names.
- reset-names: Must include the following entries:
- clk
- c0
- c1
- c2
- qcom,ee: indicates the security domain identifier used in the secure world.
Example:
adm_dma: dma@18300000 {
	compatible = "qcom,adm";
	reg = <0x18300000 0x100000>;
	interrupts = <0 170 0>;
	#dma-cells = <2>;
	clocks = <&gcc ADM0_CLK>, <&gcc ADM0_PBUS_CLK>;
	clock-names = "core", "iface";
	resets = <&gcc ADM0_RESET>,
		 <&gcc ADM0_C0_RESET>,
		 <&gcc ADM0_C1_RESET>,
		 <&gcc ADM0_C2_RESET>;
	reset-names = "clk", "c0", "c1", "c2";
	qcom,ee = <0>;
};
DMA clients must use the format described in dma.txt, using a three-cell
specifier for each channel.
Each dmas request consists of 3 cells:
1. phandle pointing to the DMA controller
2. channel number
3. CRCI assignment, if applicable. If no CRCI flow control is required, use 0.
The CRCI is used for flow control. It identifies the peripheral device that
is the source/destination for the transferred data.
Example:
spi4: spi@1a280000 {
	status = "ok";
	spi-max-frequency = <50000000>;
	pinctrl-0 = <&spi_pins>;
	pinctrl-names = "default";
	cs-gpios = <&qcom_pinmux 20 0>;
	dmas = <&adm_dma 6 9>,
	       <&adm_dma 5 10>;
	dma-names = "rx", "tx";
};

View File

@ -0,0 +1,65 @@
The Xilinx AXI DMA engine performs transfers between memory and AXI4-Stream
target devices. It can be configured with one or two channels; when two
channels are present, one transmits to the device and the other receives
from it.
Required properties:
- compatible: Should be "xlnx,axi-dma-1.00.a"
- #dma-cells: Should be <1>, see "dmas" property below
- reg: Should contain DMA registers location and length.
- dma-channel child node: Should have at least one channel and can have up to
  two channels per device. This node specifies the properties of each
  DMA channel (see child node properties below).
Optional properties:
- xlnx,include-sg: Tells whether the hardware is configured for
  Scatter-Gather mode.
Required child node properties:
- compatible: It should be either "xlnx,axi-dma-mm2s-channel" or
"xlnx,axi-dma-s2mm-channel".
- interrupts: Should contain per channel DMA interrupts.
- xlnx,datawidth: Should contain the stream data width; valid values are
  {32, 64, ..., 1024}.
Optional child node properties:
- xlnx,include-dre: Tells whether the hardware is configured for the Data
  Realignment Engine.
Example:
++++++++
axi_dma_0: axidma@40400000 {
	compatible = "xlnx,axi-dma-1.00.a";
	#dma-cells = <1>;
	reg = < 0x40400000 0x10000 >;
	dma-channel@40400000 {
		compatible = "xlnx,axi-dma-mm2s-channel";
		interrupts = < 0 59 4 >;
		xlnx,datawidth = <0x40>;
	};
	dma-channel@40400030 {
		compatible = "xlnx,axi-dma-s2mm-channel";
		interrupts = < 0 58 4 >;
		xlnx,datawidth = <0x40>;
	};
};
* DMA client
Required properties:
- dmas: a list of <[DMA device phandle] [Channel ID]> pairs,
where Channel ID is '0' for write/tx and '1' for read/rx
channel.
- dma-names: a list of DMA channel names, one per "dmas" entry
Example:
++++++++
dmatest_0: dmatest@0 {
	compatible = "xlnx,axi-dma-test-1.00.a";
	dmas = <&axi_dma_0 0
		&axi_dma_0 1>;
	dma-names = "dma0", "dma1";
};

View File

@ -98,7 +98,7 @@ The slave DMA usage consists of following steps:
unsigned long flags);
The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device.
If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
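
A minimal sketch of that ordering, using hypothetical driver variables
(chan, sgl, sg_len, desc):

	/* map against the DMA device, not the peripheral's own device */
	nents = dma_map_sg(chan->device->dev, sgl, sg_len, DMA_TO_DEVICE);
	if (!nents)
		return -ENOMEM;

	desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT);
	if (!desc)
		goto unmap;	/* dma_unmap_sg() with the same arguments */

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	/* call dma_unmap_sg() only after the completion callback has run */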
@ -195,5 +195,5 @@ Further APIs:
Note:
Not all DMA engine drivers can return reliable information for
a running DMA channel. It is recommended that DMA engine users
pause or stop (via dmaengine_terminate_all) the channel before
pause or stop (via dmaengine_terminate_all()) the channel before
using this API.
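
For example (hypothetical usage; assumes the channel supports pause and
resume):

	struct dma_tx_state state;
	enum dma_status status;

	dmaengine_pause(chan);
	status = dmaengine_tx_status(chan, cookie, &state);
	/* state.residue is now stable for this cookie */
	dmaengine_resume(chan);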

View File

@ -8062,7 +8062,7 @@ SYNOPSYS DESIGNWARE DMAC DRIVER
M: Viresh Kumar <viresh.linux@gmail.com>
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
S: Maintained
F: include/linux/dw_dmac.h
F: include/linux/platform_data/dma-dw.h
F: drivers/dma/dw/
SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER

View File

@ -7,7 +7,7 @@
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/dw_dmac.h>
#include <linux/platform_data/dma-dw.h>
#include <linux/fb.h>
#include <linux/init.h>
#include <linux/platform_device.h>
@ -1356,10 +1356,10 @@ at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
goto fail;
slave->sdata.dma_dev = &dw_dmac0_device.dev;
slave->sdata.cfg_hi = (DWC_CFGH_SRC_PER(0)
| DWC_CFGH_DST_PER(1));
slave->sdata.cfg_lo &= ~(DWC_CFGL_HS_DST_POL
| DWC_CFGL_HS_SRC_POL);
slave->sdata.src_id = 0;
slave->sdata.dst_id = 1;
slave->sdata.src_master = 1;
slave->sdata.dst_master = 0;
data->dma_slave = slave;
@ -2052,8 +2052,7 @@ at32_add_device_ac97c(unsigned int id, struct ac97c_platform_data *data,
/* Check if DMA slave interface for capture should be configured. */
if (flags & AC97C_CAPTURE) {
rx_dws->dma_dev = &dw_dmac0_device.dev;
rx_dws->cfg_hi = DWC_CFGH_SRC_PER(3);
rx_dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL);
rx_dws->src_id = 3;
rx_dws->src_master = 0;
rx_dws->dst_master = 1;
}
@ -2061,8 +2060,7 @@ at32_add_device_ac97c(unsigned int id, struct ac97c_platform_data *data,
/* Check if DMA slave interface for playback should be configured. */
if (flags & AC97C_PLAYBACK) {
tx_dws->dma_dev = &dw_dmac0_device.dev;
tx_dws->cfg_hi = DWC_CFGH_DST_PER(4);
tx_dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL);
tx_dws->dst_id = 4;
tx_dws->src_master = 0;
tx_dws->dst_master = 1;
}
@ -2134,8 +2132,7 @@ at32_add_device_abdac(unsigned int id, struct atmel_abdac_pdata *data)
dws = &data->dws;
dws->dma_dev = &dw_dmac0_device.dev;
dws->cfg_hi = DWC_CFGH_DST_PER(2);
dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL);
dws->dst_id = 2;
dws->src_master = 0;
dws->dst_master = 1;

View File

@ -1,7 +1,7 @@
#ifndef __MACH_ATMEL_MCI_H
#define __MACH_ATMEL_MCI_H
#include <linux/dw_dmac.h>
#include <linux/platform_data/dma-dw.h>
/**
* struct mci_dma_data - DMA data for MCI interface

View File

@ -420,7 +420,7 @@ dma_xfer(struct arasan_cf_dev *acdev, dma_addr_t src, dma_addr_t dest, u32 len)
/* Wait for DMA to complete */
if (!wait_for_completion_timeout(&acdev->dma_completion, TIMEOUT)) {
chan->device->device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dev_err(acdev->host->dev, "wait_for_completion_timeout\n");
return -ETIMEDOUT;
}
@ -928,8 +928,7 @@ static int arasan_cf_suspend(struct device *dev)
struct arasan_cf_dev *acdev = host->ports[0]->private_data;
if (acdev->dma_chan)
acdev->dma_chan->device->device_control(acdev->dma_chan,
DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(acdev->dma_chan);
cf_exit(acdev);
return ata_host_suspend(host, PMSG_SUSPEND);

View File

@ -270,7 +270,7 @@ config IMX_SDMA
select DMA_ENGINE
help
Support the i.MX SDMA engine. This engine is integrated into
Freescale i.MX25/31/35/51/53 chips.
Freescale i.MX25/31/35/51/53/6 chips.
config IMX_DMA
tristate "i.MX DMA support"

View File

@ -2156,7 +2156,7 @@ coh901318_free_chan_resources(struct dma_chan *chan)
spin_unlock_irqrestore(&cohc->lock, flags);
chan->device->device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
}

View File

@ -938,7 +938,7 @@ static int cppi41_dma_probe(struct platform_device *pdev)
if (!glue_info)
return -EINVAL;
cdd = kzalloc(sizeof(*cdd), GFP_KERNEL);
cdd = devm_kzalloc(&pdev->dev, sizeof(*cdd), GFP_KERNEL);
if (!cdd)
return -ENOMEM;
@ -959,10 +959,8 @@ static int cppi41_dma_probe(struct platform_device *pdev)
cdd->qmgr_mem = of_iomap(dev->of_node, 3);
if (!cdd->usbss_mem || !cdd->ctrl_mem || !cdd->sched_mem ||
!cdd->qmgr_mem) {
ret = -ENXIO;
goto err_remap;
}
!cdd->qmgr_mem)
return -ENXIO;
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
@ -989,7 +987,7 @@ static int cppi41_dma_probe(struct platform_device *pdev)
cppi_writel(USBSS_IRQ_PD_COMP, cdd->usbss_mem + USBSS_IRQ_ENABLER);
ret = request_irq(irq, glue_info->isr, IRQF_SHARED,
ret = devm_request_irq(&pdev->dev, irq, glue_info->isr, IRQF_SHARED,
dev_name(dev), cdd);
if (ret)
goto err_irq;
@ -1009,7 +1007,6 @@ static int cppi41_dma_probe(struct platform_device *pdev)
err_of:
dma_async_device_unregister(&cdd->ddev);
err_dma_reg:
free_irq(irq, cdd);
err_irq:
cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
cleanup_chans(cdd);
@ -1023,8 +1020,6 @@ static int cppi41_dma_probe(struct platform_device *pdev)
iounmap(cdd->ctrl_mem);
iounmap(cdd->sched_mem);
iounmap(cdd->qmgr_mem);
err_remap:
kfree(cdd);
return ret;
}
@ -1036,7 +1031,7 @@ static int cppi41_dma_remove(struct platform_device *pdev)
dma_async_device_unregister(&cdd->ddev);
cppi_writel(0, cdd->usbss_mem + USBSS_IRQ_CLEARR);
free_irq(cdd->irq, cdd);
devm_free_irq(&pdev->dev, cdd->irq, cdd);
cleanup_chans(cdd);
deinit_cppi41(&pdev->dev, cdd);
iounmap(cdd->usbss_mem);
@ -1045,7 +1040,6 @@ static int cppi41_dma_remove(struct platform_device *pdev)
iounmap(cdd->qmgr_mem);
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
kfree(cdd);
return 0;
}

View File

@ -11,7 +11,6 @@
*/
#include <linux/bitops.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
@ -37,24 +36,6 @@
* support descriptor writeback.
*/
static inline bool is_request_line_unset(struct dw_dma_chan *dwc)
{
return dwc->request_line == (typeof(dwc->request_line))~0;
}
static inline void dwc_set_masters(struct dw_dma_chan *dwc)
{
struct dw_dma *dw = to_dw_dma(dwc->chan.device);
struct dw_dma_slave *dws = dwc->chan.private;
unsigned char mmax = dw->nr_masters - 1;
if (!is_request_line_unset(dwc))
return;
dwc->src_master = min_t(unsigned char, mmax, dwc_get_sms(dws));
dwc->dst_master = min_t(unsigned char, mmax, dwc_get_dms(dws));
}
#define DWC_DEFAULT_CTLLO(_chan) ({ \
struct dw_dma_chan *_dwc = to_dw_dma_chan(_chan); \
struct dma_slave_config *_sconfig = &_dwc->dma_sconfig; \
@ -155,13 +136,11 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
*/
BUG_ON(!dws->dma_dev || dws->dma_dev != dw->dma.dev);
cfghi = dws->cfg_hi;
cfglo |= dws->cfg_lo & ~DWC_CFGL_CH_PRIOR_MASK;
cfghi |= DWC_CFGH_DST_PER(dws->dst_id);
cfghi |= DWC_CFGH_SRC_PER(dws->src_id);
} else {
if (dwc->direction == DMA_MEM_TO_DEV)
cfghi = DWC_CFGH_DST_PER(dwc->request_line);
else if (dwc->direction == DMA_DEV_TO_MEM)
cfghi = DWC_CFGH_SRC_PER(dwc->request_line);
cfghi |= DWC_CFGH_DST_PER(dwc->dst_id);
cfghi |= DWC_CFGH_SRC_PER(dwc->src_id);
}
channel_writel(dwc, CFG_LO, cfglo);
@ -939,6 +918,26 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
return NULL;
}
bool dw_dma_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma_slave *dws = param;
if (!dws || dws->dma_dev != chan->device->dev)
return false;
/* We have to copy data since dws can be temporary storage */
dwc->src_id = dws->src_id;
dwc->dst_id = dws->dst_id;
dwc->src_master = dws->src_master;
dwc->dst_master = dws->dst_master;
return true;
}
EXPORT_SYMBOL_GPL(dw_dma_filter);
/*
* Fix sconfig's burst size according to dw_dmac. We need to convert them as:
* 1 -> 0, 4 -> 1, 8 -> 2, 16 -> 3.
@ -967,10 +966,6 @@ set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig));
dwc->direction = sconfig->direction;
/* Take the request line from slave_id member */
if (is_request_line_unset(dwc))
dwc->request_line = sconfig->slave_id;
convert_burst(&dwc->dma_sconfig.src_maxburst);
convert_burst(&dwc->dma_sconfig.dst_maxburst);
@ -1099,6 +1094,31 @@ static void dwc_issue_pending(struct dma_chan *chan)
spin_unlock_irqrestore(&dwc->lock, flags);
}
/*----------------------------------------------------------------------*/
static void dw_dma_off(struct dw_dma *dw)
{
int i;
dma_writel(dw, CFG, 0);
channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
cpu_relax();
for (i = 0; i < dw->dma.chancnt; i++)
dw->chan[i].initialized = false;
}
static void dw_dma_on(struct dw_dma *dw)
{
dma_writel(dw, CFG, DW_CFG_DMA_EN);
}
static int dwc_alloc_chan_resources(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
@ -1123,7 +1143,10 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan)
* doesn't mean what you think it means), and status writeback.
*/
dwc_set_masters(dwc);
/* Enable controller here if needed */
if (!dw->in_use)
dw_dma_on(dw);
dw->in_use |= dwc->mask;
spin_lock_irqsave(&dwc->lock, flags);
i = dwc->descs_allocated;
@ -1182,7 +1205,6 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
list_splice_init(&dwc->free_list, &list);
dwc->descs_allocated = 0;
dwc->initialized = false;
dwc->request_line = ~0;
/* Disable interrupts */
channel_clear_bit(dw, MASK.XFER, dwc->mask);
@ -1190,6 +1212,11 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
spin_unlock_irqrestore(&dwc->lock, flags);
/* Disable controller in case it was a last user */
dw->in_use &= ~dwc->mask;
if (!dw->in_use)
dw_dma_off(dw);
list_for_each_entry_safe(desc, _desc, &list, desc_node) {
dev_vdbg(chan2dev(chan), " freeing descriptor %p\n", desc);
dma_pool_free(dw->desc_pool, desc, desc->txd.phys);
@ -1460,24 +1487,6 @@ EXPORT_SYMBOL(dw_dma_cyclic_free);
/*----------------------------------------------------------------------*/
static void dw_dma_off(struct dw_dma *dw)
{
int i;
dma_writel(dw, CFG, 0);
channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
cpu_relax();
for (i = 0; i < dw->dma.chancnt; i++)
dw->chan[i].initialized = false;
}
int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
{
struct dw_dma *dw;
@ -1495,13 +1504,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dw->regs = chip->regs;
chip->dw = dw;
dw->clk = devm_clk_get(chip->dev, "hclk");
if (IS_ERR(dw->clk))
return PTR_ERR(dw->clk);
err = clk_prepare_enable(dw->clk);
if (err)
return err;
dw_params = dma_read_byaddr(chip->regs, DW_PARAMS);
autocfg = dw_params >> DW_PARAMS_EN & 0x1;
@ -1604,7 +1606,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
channel_clear_bit(dw, CH_EN, dwc->mask);
dwc->direction = DMA_TRANS_NONE;
dwc->request_line = ~0;
/* Hardware configuration */
if (autocfg) {
@ -1659,8 +1660,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dw->dma.device_tx_status = dwc_tx_status;
dw->dma.device_issue_pending = dwc_issue_pending;
dma_writel(dw, CFG, DW_CFG_DMA_EN);
err = dma_async_device_register(&dw->dma);
if (err)
goto err_dma_register;
@ -1673,7 +1672,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
err_dma_register:
free_irq(chip->irq, dw);
err_pdata:
clk_disable_unprepare(dw->clk);
return err;
}
EXPORT_SYMBOL_GPL(dw_dma_probe);
@ -1695,46 +1693,27 @@ int dw_dma_remove(struct dw_dma_chip *chip)
channel_clear_bit(dw, CH_EN, dwc->mask);
}
clk_disable_unprepare(dw->clk);
return 0;
}
EXPORT_SYMBOL_GPL(dw_dma_remove);
void dw_dma_shutdown(struct dw_dma_chip *chip)
int dw_dma_disable(struct dw_dma_chip *chip)
{
struct dw_dma *dw = chip->dw;
dw_dma_off(dw);
clk_disable_unprepare(dw->clk);
return 0;
}
EXPORT_SYMBOL_GPL(dw_dma_shutdown);
EXPORT_SYMBOL_GPL(dw_dma_disable);
#ifdef CONFIG_PM_SLEEP
int dw_dma_suspend(struct dw_dma_chip *chip)
int dw_dma_enable(struct dw_dma_chip *chip)
{
struct dw_dma *dw = chip->dw;
dw_dma_off(dw);
clk_disable_unprepare(dw->clk);
dw_dma_on(dw);
return 0;
}
EXPORT_SYMBOL_GPL(dw_dma_suspend);
int dw_dma_resume(struct dw_dma_chip *chip)
{
struct dw_dma *dw = chip->dw;
clk_prepare_enable(dw->clk);
dma_writel(dw, CFG, DW_CFG_DMA_EN);
return 0;
}
EXPORT_SYMBOL_GPL(dw_dma_resume);
#endif /* CONFIG_PM_SLEEP */
EXPORT_SYMBOL_GPL(dw_dma_enable);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller core driver");

View File

@ -8,63 +8,16 @@
* published by the Free Software Foundation.
*/
#ifndef _DW_DMAC_INTERNAL_H
#define _DW_DMAC_INTERNAL_H
#ifndef _DMA_DW_INTERNAL_H
#define _DMA_DW_INTERNAL_H
#include <linux/device.h>
#include <linux/dw_dmac.h>
#include <linux/dma/dw.h>
#include "regs.h"
/**
* struct dw_dma_chip - representation of DesignWare DMA controller hardware
* @dev: struct device of the DMA controller
* @irq: irq line
* @regs: memory mapped I/O space
* @dw: struct dw_dma that is filed by dw_dma_probe()
*/
struct dw_dma_chip {
struct device *dev;
int irq;
void __iomem *regs;
struct dw_dma *dw;
};
int dw_dma_disable(struct dw_dma_chip *chip);
int dw_dma_enable(struct dw_dma_chip *chip);
/* Export to the platform drivers */
int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata);
int dw_dma_remove(struct dw_dma_chip *chip);
extern bool dw_dma_filter(struct dma_chan *chan, void *param);
void dw_dma_shutdown(struct dw_dma_chip *chip);
#ifdef CONFIG_PM_SLEEP
int dw_dma_suspend(struct dw_dma_chip *chip);
int dw_dma_resume(struct dw_dma_chip *chip);
#endif /* CONFIG_PM_SLEEP */
/**
* dwc_get_dms - get destination master
* @slave: pointer to the custom slave configuration
*
* Returns destination master in the custom slave configuration if defined, or
* default value otherwise.
*/
static inline unsigned int dwc_get_dms(struct dw_dma_slave *slave)
{
return slave ? slave->dst_master : 0;
}
/**
* dwc_get_sms - get source master
* @slave: pointer to the custom slave configuration
*
* Returns source master in the custom slave configuration if defined, or
* default value otherwise.
*/
static inline unsigned int dwc_get_sms(struct dw_dma_slave *slave)
{
return slave ? slave->src_master : 1;
}
#endif /* _DW_DMAC_INTERNAL_H */
#endif /* _DMA_DW_INTERNAL_H */

View File

@ -82,7 +82,7 @@ static int dw_pci_suspend_late(struct device *dev)
struct pci_dev *pci = to_pci_dev(dev);
struct dw_dma_chip *chip = pci_get_drvdata(pci);
return dw_dma_suspend(chip);
return dw_dma_disable(chip);
};
static int dw_pci_resume_early(struct device *dev)
@ -90,7 +90,7 @@ static int dw_pci_resume_early(struct device *dev)
struct pci_dev *pci = to_pci_dev(dev);
struct dw_dma_chip *chip = pci_get_drvdata(pci);
return dw_dma_resume(chip);
return dw_dma_enable(chip);
};
#endif /* CONFIG_PM_SLEEP */
@ -108,6 +108,10 @@ static const struct pci_device_id dw_pci_id_table[] = {
{ PCI_VDEVICE(INTEL, 0x0f06), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x0f40), (kernel_ulong_t)&dw_pci_pdata },
/* Braswell */
{ PCI_VDEVICE(INTEL, 0x2286), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x22c0), (kernel_ulong_t)&dw_pci_pdata },
/* Haswell */
{ PCI_VDEVICE(INTEL, 0x9c60), (kernel_ulong_t)&dw_pci_pdata },
{ }

View File

@ -25,72 +25,49 @@
#include "internal.h"
struct dw_dma_of_filter_args {
struct dw_dma *dw;
unsigned int req;
unsigned int src;
unsigned int dst;
};
static bool dw_dma_of_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma_of_filter_args *fargs = param;
/* Ensure the device matches our channel */
if (chan->device != &fargs->dw->dma)
return false;
dwc->request_line = fargs->req;
dwc->src_master = fargs->src;
dwc->dst_master = fargs->dst;
return true;
}
static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct dw_dma *dw = ofdma->of_dma_data;
struct dw_dma_of_filter_args fargs = {
.dw = dw,
struct dw_dma_slave slave = {
.dma_dev = dw->dma.dev,
};
dma_cap_mask_t cap;
if (dma_spec->args_count != 3)
return NULL;
fargs.req = dma_spec->args[0];
fargs.src = dma_spec->args[1];
fargs.dst = dma_spec->args[2];
slave.src_id = dma_spec->args[0];
slave.dst_id = dma_spec->args[0];
slave.src_master = dma_spec->args[1];
slave.dst_master = dma_spec->args[2];
if (WARN_ON(fargs.req >= DW_DMA_MAX_NR_REQUESTS ||
fargs.src >= dw->nr_masters ||
fargs.dst >= dw->nr_masters))
if (WARN_ON(slave.src_id >= DW_DMA_MAX_NR_REQUESTS ||
slave.dst_id >= DW_DMA_MAX_NR_REQUESTS ||
slave.src_master >= dw->nr_masters ||
slave.dst_master >= dw->nr_masters))
return NULL;
dma_cap_zero(cap);
dma_cap_set(DMA_SLAVE, cap);
/* TODO: there should be a simpler way to do this */
return dma_request_channel(cap, dw_dma_of_filter, &fargs);
return dma_request_channel(cap, dw_dma_filter, &slave);
}
#ifdef CONFIG_ACPI
static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct acpi_dma_spec *dma_spec = param;
struct dw_dma_slave slave = {
.dma_dev = dma_spec->dev,
.src_id = dma_spec->slave_id,
.dst_id = dma_spec->slave_id,
.src_master = 1,
.dst_master = 0,
};
if (chan->device->dev != dma_spec->dev ||
chan->chan_id != dma_spec->chan_id)
return false;
dwc->request_line = dma_spec->slave_id;
dwc->src_master = dwc_get_sms(NULL);
dwc->dst_master = dwc_get_dms(NULL);
return true;
return dw_dma_filter(chan, &slave);
}
static void dw_dma_acpi_controller_register(struct dw_dma *dw)
@ -201,10 +178,17 @@ static int dw_probe(struct platform_device *pdev)
chip->dev = dev;
err = dw_dma_probe(chip, pdata);
chip->clk = devm_clk_get(chip->dev, "hclk");
if (IS_ERR(chip->clk))
return PTR_ERR(chip->clk);
err = clk_prepare_enable(chip->clk);
if (err)
return err;
err = dw_dma_probe(chip, pdata);
if (err)
goto err_dw_dma_probe;
platform_set_drvdata(pdev, chip);
if (pdev->dev.of_node) {
@ -219,6 +203,10 @@ static int dw_probe(struct platform_device *pdev)
dw_dma_acpi_controller_register(chip->dw);
return 0;
err_dw_dma_probe:
clk_disable_unprepare(chip->clk);
return err;
}
static int dw_remove(struct platform_device *pdev)
@ -228,14 +216,18 @@ static int dw_remove(struct platform_device *pdev)
if (pdev->dev.of_node)
of_dma_controller_free(pdev->dev.of_node);
return dw_dma_remove(chip);
dw_dma_remove(chip);
clk_disable_unprepare(chip->clk);
return 0;
}
static void dw_shutdown(struct platform_device *pdev)
{
struct dw_dma_chip *chip = platform_get_drvdata(pdev);
dw_dma_shutdown(chip);
dw_dma_disable(chip);
clk_disable_unprepare(chip->clk);
}
#ifdef CONFIG_OF
@ -261,7 +253,10 @@ static int dw_suspend_late(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct dw_dma_chip *chip = platform_get_drvdata(pdev);
return dw_dma_suspend(chip);
dw_dma_disable(chip);
clk_disable_unprepare(chip->clk);
return 0;
}
static int dw_resume_early(struct device *dev)
@ -269,7 +264,8 @@ static int dw_resume_early(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct dw_dma_chip *chip = platform_get_drvdata(pdev);
return dw_dma_resume(chip);
clk_prepare_enable(chip->clk);
return dw_dma_enable(chip);
}
#endif /* CONFIG_PM_SLEEP */
@ -281,7 +277,7 @@ static const struct dev_pm_ops dw_dev_pm_ops = {
static struct platform_driver dw_driver = {
.probe = dw_probe,
.remove = dw_remove,
.shutdown = dw_shutdown,
.driver = {
.name = "dw_dmac",
.pm = &dw_dev_pm_ops,

View File

@ -11,7 +11,6 @@
#include <linux/interrupt.h>
#include <linux/dmaengine.h>
#include <linux/dw_dmac.h>
#define DW_DMA_MAX_NR_CHANNELS 8
#define DW_DMA_MAX_NR_REQUESTS 16
@ -132,6 +131,18 @@ struct dw_dma_regs {
/* Bitfields in DWC_PARAMS */
#define DWC_PARAMS_MBLK_EN 11 /* multi block transfer */
/* bursts size */
enum dw_dma_msize {
DW_DMA_MSIZE_1,
DW_DMA_MSIZE_4,
DW_DMA_MSIZE_8,
DW_DMA_MSIZE_16,
DW_DMA_MSIZE_32,
DW_DMA_MSIZE_64,
DW_DMA_MSIZE_128,
DW_DMA_MSIZE_256,
};
/* Bitfields in CTL_LO */
#define DWC_CTLL_INT_EN (1 << 0) /* irqs enabled? */
#define DWC_CTLL_DST_WIDTH(n) ((n)<<1) /* bytes per element */
@ -161,20 +172,35 @@ struct dw_dma_regs {
#define DWC_CTLH_DONE 0x00001000
#define DWC_CTLH_BLOCK_TS_MASK 0x00000fff
/* Bitfields in CFG_LO. Platform-configurable bits are in <linux/dw_dmac.h> */
/* Bitfields in CFG_LO */
#define DWC_CFGL_CH_PRIOR_MASK (0x7 << 5) /* priority mask */
#define DWC_CFGL_CH_PRIOR(x) ((x) << 5) /* priority */
#define DWC_CFGL_CH_SUSP (1 << 8) /* pause xfer */
#define DWC_CFGL_FIFO_EMPTY (1 << 9) /* pause xfer */
#define DWC_CFGL_HS_DST (1 << 10) /* handshake w/dst */
#define DWC_CFGL_HS_SRC (1 << 11) /* handshake w/src */
#define DWC_CFGL_LOCK_CH_XFER (0 << 12) /* scope of LOCK_CH */
#define DWC_CFGL_LOCK_CH_BLOCK (1 << 12)
#define DWC_CFGL_LOCK_CH_XACT (2 << 12)
#define DWC_CFGL_LOCK_BUS_XFER (0 << 14) /* scope of LOCK_BUS */
#define DWC_CFGL_LOCK_BUS_BLOCK (1 << 14)
#define DWC_CFGL_LOCK_BUS_XACT (2 << 14)
#define DWC_CFGL_LOCK_CH (1 << 15) /* channel lockout */
#define DWC_CFGL_LOCK_BUS (1 << 16) /* busmaster lockout */
#define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */
#define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */
#define DWC_CFGL_MAX_BURST(x) ((x) << 20)
#define DWC_CFGL_RELOAD_SAR (1 << 30)
#define DWC_CFGL_RELOAD_DAR (1 << 31)
/* Bitfields in CFG_HI. Platform-configurable bits are in <linux/dw_dmac.h> */
/* Bitfields in CFG_HI */
#define DWC_CFGH_FCMODE (1 << 0)
#define DWC_CFGH_FIFO_MODE (1 << 1)
#define DWC_CFGH_PROTCTL(x) ((x) << 2)
#define DWC_CFGH_DS_UPD_EN (1 << 5)
#define DWC_CFGH_SS_UPD_EN (1 << 6)
#define DWC_CFGH_SRC_PER(x) ((x) << 7)
#define DWC_CFGH_DST_PER(x) ((x) << 11)
/* Bitfields in SGR */
#define DWC_SGR_SGI(x) ((x) << 0)
@ -221,9 +247,10 @@ struct dw_dma_chan {
bool nollp;
/* custom slave configuration */
unsigned int request_line;
unsigned char src_master;
unsigned char dst_master;
u8 src_id;
u8 dst_id;
u8 src_master;
u8 dst_master;
/* configuration passed via DMA_SLAVE_CONFIG */
struct dma_slave_config dma_sconfig;
@ -250,11 +277,11 @@ struct dw_dma {
void __iomem *regs;
struct dma_pool *desc_pool;
struct tasklet_struct tasklet;
struct clk *clk;
/* channels */
struct dw_dma_chan *chan;
u8 all_chan_mask;
u8 in_use;
/* hardware configuration */
unsigned char nr_masters;

View File

@ -288,7 +288,7 @@ static int edma_slave_config(struct edma_chan *echan,
static int edma_dma_pause(struct edma_chan *echan)
{
/* Pause/Resume only allowed with cyclic mode */
if (!echan->edesc->cyclic)
if (!echan->edesc || !echan->edesc->cyclic)
return -EINVAL;
edma_pause(echan->ch_num);

View File

@ -36,7 +36,7 @@
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/fsldma.h>
#include "dmaengine.h"
#include "fsldma.h"
@ -367,6 +367,20 @@ static void fsl_chan_toggle_ext_start(struct fsldma_chan *chan, int enable)
chan->feature &= ~FSL_DMA_CHAN_START_EXT;
}
int fsl_dma_external_start(struct dma_chan *dchan, int enable)
{
struct fsldma_chan *chan;
if (!dchan)
return -EINVAL;
chan = to_fsl_chan(dchan);
fsl_chan_toggle_ext_start(chan, enable);
return 0;
}
EXPORT_SYMBOL_GPL(fsl_dma_external_start);
static void append_ld_queue(struct fsldma_chan *chan, struct fsl_desc_sw *desc)
{
struct fsl_desc_sw *tail = to_fsl_desc(chan->ld_pending.prev);
@ -998,15 +1012,6 @@ static int fsl_dma_device_control(struct dma_chan *dchan,
chan->set_request_count(chan, size);
return 0;
case FSLDMA_EXTERNAL_START:
/* make sure the channel supports external start */
if (!chan->toggle_ext_start)
return -ENXIO;
chan->toggle_ext_start(chan, arg);
return 0;
default:
return -ENXIO;
}

View File

@ -1334,7 +1334,7 @@ static void sdma_load_firmware(const struct firmware *fw, void *context)
release_firmware(fw);
}
static int __init sdma_get_firmware(struct sdma_engine *sdma,
static int sdma_get_firmware(struct sdma_engine *sdma,
const char *fw_name)
{
int ret;
@ -1448,7 +1448,7 @@ static struct dma_chan *sdma_xlate(struct of_phandle_args *dma_spec,
return dma_request_channel(mask, sdma_filter_fn, &data);
}
static int __init sdma_probe(struct platform_device *pdev)
static int sdma_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id =
of_match_device(sdma_dt_ids, &pdev->dev);
@ -1603,6 +1603,8 @@ static int __init sdma_probe(struct platform_device *pdev)
sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
dma_set_max_seg_size(sdma->dma_device.dev, 65535);
platform_set_drvdata(pdev, sdma);
ret = dma_async_device_register(&sdma->dma_device);
if (ret) {
dev_err(&pdev->dev, "unable to register\n");
@ -1640,7 +1642,27 @@ static int __init sdma_probe(struct platform_device *pdev)
static int sdma_remove(struct platform_device *pdev)
{
return -EBUSY;
struct sdma_engine *sdma = platform_get_drvdata(pdev);
struct resource *iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
int irq = platform_get_irq(pdev, 0);
int i;
dma_async_device_unregister(&sdma->dma_device);
kfree(sdma->script_addrs);
free_irq(irq, sdma);
iounmap(sdma->regs);
release_mem_region(iores->start, resource_size(iores));
/* Kill the tasklet */
for (i = 0; i < MAX_DMA_CHANNELS; i++) {
struct sdma_channel *sdmac = &sdma->channel[i];
tasklet_kill(&sdmac->tasklet);
}
kfree(sdma);
platform_set_drvdata(pdev, NULL);
dev_info(&pdev->dev, "Removed...\n");
return 0;
}
static struct platform_driver sdma_driver = {
@ -1650,13 +1672,10 @@ static struct platform_driver sdma_driver = {
},
.id_table = sdma_devtypes,
.remove = sdma_remove,
.probe = sdma_probe,
};
static int __init sdma_module_init(void)
{
return platform_driver_probe(&sdma_driver, sdma_probe);
}
module_init(sdma_module_init);
module_platform_driver(sdma_driver);
MODULE_AUTHOR("Sascha Hauer, Pengutronix <s.hauer@pengutronix.de>");
MODULE_DESCRIPTION("i.MX SDMA driver");

View File

@ -148,10 +148,16 @@ static void mmp_tdma_chan_set_desc(struct mmp_tdma_chan *tdmac, dma_addr_t phys)
tdmac->reg_base + TDCR);
}
static void mmp_tdma_enable_irq(struct mmp_tdma_chan *tdmac, bool enable)
{
if (enable)
writel(TDIMR_COMP, tdmac->reg_base + TDIMR);
else
writel(0, tdmac->reg_base + TDIMR);
}
static void mmp_tdma_enable_chan(struct mmp_tdma_chan *tdmac)
{
/* enable irq */
writel(TDIMR_COMP, tdmac->reg_base + TDIMR);
/* enable dma chan */
writel(readl(tdmac->reg_base + TDCR) | TDCR_CHANEN,
tdmac->reg_base + TDCR);
@ -163,9 +169,6 @@ static void mmp_tdma_disable_chan(struct mmp_tdma_chan *tdmac)
writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
tdmac->reg_base + TDCR);
/* disable irq */
writel(0, tdmac->reg_base + TDIMR);
tdmac->status = DMA_COMPLETE;
}
@ -434,6 +437,10 @@ static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
i++;
}
/* enable interrupt */
if (flags & DMA_PREP_INTERRUPT)
mmp_tdma_enable_irq(tdmac, true);
tdmac->buf_len = buf_len;
tdmac->period_len = period_len;
tdmac->pos = 0;
@ -455,6 +462,8 @@ static int mmp_tdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
switch (cmd) {
case DMA_TERMINATE_ALL:
mmp_tdma_disable_chan(tdmac);
/* disable interrupt */
mmp_tdma_enable_irq(tdmac, false);
break;
case DMA_PAUSE:
mmp_tdma_pause_chan(tdmac);

View File

@ -45,19 +45,18 @@ static void mv_xor_issue_pending(struct dma_chan *chan);
#define mv_chan_to_devp(chan) \
((chan)->dmadev.dev)
static void mv_desc_init(struct mv_xor_desc_slot *desc, unsigned long flags)
static void mv_desc_init(struct mv_xor_desc_slot *desc,
dma_addr_t addr, u32 byte_count,
enum dma_ctrl_flags flags)
{
struct mv_xor_desc *hw_desc = desc->hw_desc;
hw_desc->status = (1 << 31);
hw_desc->status = XOR_DESC_DMA_OWNED;
hw_desc->phy_next_desc = 0;
hw_desc->desc_command = (1 << 31);
}
static void mv_desc_set_byte_count(struct mv_xor_desc_slot *desc,
u32 byte_count)
{
struct mv_xor_desc *hw_desc = desc->hw_desc;
/* Enable end-of-descriptor interrupts only for DMA_PREP_INTERRUPT */
hw_desc->desc_command = (flags & DMA_PREP_INTERRUPT) ?
XOR_DESC_EOD_INT_EN : 0;
hw_desc->phy_dest_addr = addr;
hw_desc->byte_count = byte_count;
}
@ -75,20 +74,6 @@ static void mv_desc_clear_next_desc(struct mv_xor_desc_slot *desc)
hw_desc->phy_next_desc = 0;
}
static void mv_desc_set_dest_addr(struct mv_xor_desc_slot *desc,
dma_addr_t addr)
{
struct mv_xor_desc *hw_desc = desc->hw_desc;
hw_desc->phy_dest_addr = addr;
}
static int mv_chan_memset_slot_count(size_t len)
{
return 1;
}
#define mv_chan_memcpy_slot_count(c) mv_chan_memset_slot_count(c)
static void mv_desc_set_src_addr(struct mv_xor_desc_slot *desc,
int index, dma_addr_t addr)
{
@ -123,17 +108,12 @@ static u32 mv_chan_get_intr_cause(struct mv_xor_chan *chan)
return intr_cause;
}
static int mv_is_err_intr(u32 intr_cause)
{
if (intr_cause & ((1<<4)|(1<<5)|(1<<6)|(1<<7)|(1<<8)|(1<<9)))
return 1;
return 0;
}
static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan)
{
u32 val = ~(1 << (chan->idx * 16));
u32 val;
val = XOR_INT_END_OF_DESC | XOR_INT_END_OF_CHAIN | XOR_INT_STOPPED;
val = ~(val << (chan->idx * 16));
dev_dbg(mv_chan_to_devp(chan), "%s, val 0x%08x\n", __func__, val);
writel_relaxed(val, XOR_INTR_CAUSE(chan));
}
@ -144,17 +124,6 @@ static void mv_xor_device_clear_err_status(struct mv_xor_chan *chan)
writel_relaxed(val, XOR_INTR_CAUSE(chan));
}
static int mv_can_chain(struct mv_xor_desc_slot *desc)
{
struct mv_xor_desc_slot *chain_old_tail = list_entry(
desc->chain_node.prev, struct mv_xor_desc_slot, chain_node);
if (chain_old_tail->type != desc->type)
return 0;
return 1;
}
static void mv_set_mode(struct mv_xor_chan *chan,
enum dma_transaction_type type)
{
@ -206,11 +175,6 @@ static char mv_chan_is_busy(struct mv_xor_chan *chan)
return (state == 1) ? 1 : 0;
}
static int mv_chan_xor_slot_count(size_t len, int src_cnt)
{
return 1;
}
/**
* mv_xor_free_slots - flags descriptor slots for reuse
* @slot: Slot to free
@ -222,7 +186,7 @@ static void mv_xor_free_slots(struct mv_xor_chan *mv_chan,
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d slot %p\n",
__func__, __LINE__, slot);
slot->slots_per_op = 0;
slot->slot_used = 0;
}
@ -236,13 +200,11 @@ static void mv_xor_start_new_chain(struct mv_xor_chan *mv_chan,
{
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: sw_desc %p\n",
__func__, __LINE__, sw_desc);
if (sw_desc->type != mv_chan->current_type)
mv_set_mode(mv_chan, sw_desc->type);
/* set the hardware chain */
mv_chan_set_next_descriptor(mv_chan, sw_desc->async_tx.phys);
mv_chan->pending += sw_desc->slot_cnt;
mv_chan->pending++;
mv_xor_issue_pending(&mv_chan->dmachan);
}
@ -263,8 +225,6 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
desc->async_tx.callback_param);
dma_descriptor_unmap(&desc->async_tx);
if (desc->group_head)
desc->group_head = NULL;
}
/* run dependent operations */
@ -377,19 +337,16 @@ static void mv_xor_tasklet(unsigned long data)
}
static struct mv_xor_desc_slot *
mv_xor_alloc_slots(struct mv_xor_chan *mv_chan, int num_slots,
int slots_per_op)
mv_xor_alloc_slot(struct mv_xor_chan *mv_chan)
{
struct mv_xor_desc_slot *iter, *_iter, *alloc_start = NULL;
LIST_HEAD(chain);
int slots_found, retry = 0;
struct mv_xor_desc_slot *iter, *_iter;
int retry = 0;
/* start search from the last allocated descriptor
* if a contiguous allocation can not be found start searching
* from the beginning of the list
*/
retry:
slots_found = 0;
if (retry == 0)
iter = mv_chan->last_used;
else
@ -399,55 +356,29 @@ mv_xor_alloc_slots(struct mv_xor_chan *mv_chan, int num_slots,
list_for_each_entry_safe_continue(
iter, _iter, &mv_chan->all_slots, slot_node) {
prefetch(_iter);
prefetch(&_iter->async_tx);
if (iter->slots_per_op) {
if (iter->slot_used) {
/* give up after finding the first busy slot
* on the second pass through the list
*/
if (retry)
break;
slots_found = 0;
continue;
}
/* start the allocation if the slot is correctly aligned */
if (!slots_found++)
alloc_start = iter;
/* pre-ack descriptor */
async_tx_ack(&iter->async_tx);
if (slots_found == num_slots) {
struct mv_xor_desc_slot *alloc_tail = NULL;
struct mv_xor_desc_slot *last_used = NULL;
iter = alloc_start;
while (num_slots) {
int i;
iter->slot_used = 1;
INIT_LIST_HEAD(&iter->chain_node);
iter->async_tx.cookie = -EBUSY;
mv_chan->last_used = iter;
mv_desc_clear_next_desc(iter);
/* pre-ack all but the last descriptor */
async_tx_ack(&iter->async_tx);
return iter;
list_add_tail(&iter->chain_node, &chain);
alloc_tail = iter;
iter->async_tx.cookie = 0;
iter->slot_cnt = num_slots;
iter->xor_check_result = NULL;
for (i = 0; i < slots_per_op; i++) {
iter->slots_per_op = slots_per_op - i;
last_used = iter;
iter = list_entry(iter->slot_node.next,
struct mv_xor_desc_slot,
slot_node);
}
num_slots -= slots_per_op;
}
alloc_tail->group_head = alloc_start;
alloc_tail->async_tx.cookie = -EBUSY;
list_splice(&chain, &alloc_tail->tx_list);
mv_chan->last_used = last_used;
mv_desc_clear_next_desc(alloc_start);
mv_desc_clear_next_desc(alloc_tail);
return alloc_tail;
}
}
if (!retry++)
goto retry;
@ -464,7 +395,7 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
{
struct mv_xor_desc_slot *sw_desc = to_mv_xor_slot(tx);
struct mv_xor_chan *mv_chan = to_mv_xor_chan(tx->chan);
struct mv_xor_desc_slot *grp_start, *old_chain_tail;
struct mv_xor_desc_slot *old_chain_tail;
dma_cookie_t cookie;
int new_hw_chain = 1;
@ -472,30 +403,24 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
"%s sw_desc %p: async_tx %p\n",
__func__, sw_desc, &sw_desc->async_tx);
grp_start = sw_desc->group_head;
spin_lock_bh(&mv_chan->lock);
cookie = dma_cookie_assign(tx);
if (list_empty(&mv_chan->chain))
list_splice_init(&sw_desc->tx_list, &mv_chan->chain);
list_add_tail(&sw_desc->chain_node, &mv_chan->chain);
else {
new_hw_chain = 0;
old_chain_tail = list_entry(mv_chan->chain.prev,
struct mv_xor_desc_slot,
chain_node);
list_splice_init(&grp_start->tx_list,
&old_chain_tail->chain_node);
if (!mv_can_chain(grp_start))
goto submit_done;
list_add_tail(&sw_desc->chain_node, &mv_chan->chain);
dev_dbg(mv_chan_to_devp(mv_chan), "Append to last desc %pa\n",
&old_chain_tail->async_tx.phys);
/* fix up the hardware chain */
mv_desc_set_next_desc(old_chain_tail, grp_start->async_tx.phys);
mv_desc_set_next_desc(old_chain_tail, sw_desc->async_tx.phys);
/* if the channel is not busy */
if (!mv_chan_is_busy(mv_chan)) {
@ -510,9 +435,8 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
}
if (new_hw_chain)
mv_xor_start_new_chain(mv_chan, grp_start);
mv_xor_start_new_chain(mv_chan, sw_desc);
submit_done:
spin_unlock_bh(&mv_chan->lock);
return cookie;
@ -533,8 +457,9 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
while (idx < num_descs_in_pool) {
slot = kzalloc(sizeof(*slot), GFP_KERNEL);
if (!slot) {
printk(KERN_INFO "MV XOR Channel only initialized"
" %d descriptor slots", idx);
dev_info(mv_chan_to_devp(mv_chan),
"channel only initialized %d descriptor slots",
idx);
break;
}
virt_desc = mv_chan->dma_desc_pool_virt;
@ -544,7 +469,6 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
slot->async_tx.tx_submit = mv_xor_tx_submit;
INIT_LIST_HEAD(&slot->chain_node);
INIT_LIST_HEAD(&slot->slot_node);
INIT_LIST_HEAD(&slot->tx_list);
dma_desc = mv_chan->dma_desc_pool;
slot->async_tx.phys = dma_desc + idx * MV_XOR_SLOT_SIZE;
slot->idx = idx++;
@ -567,52 +491,12 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
return mv_chan->slots_allocated ? : -ENOMEM;
}
static struct dma_async_tx_descriptor *
mv_xor_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long flags)
{
struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan);
struct mv_xor_desc_slot *sw_desc, *grp_start;
int slot_cnt;
dev_dbg(mv_chan_to_devp(mv_chan),
"%s dest: %pad src %pad len: %u flags: %ld\n",
__func__, &dest, &src, len, flags);
if (unlikely(len < MV_XOR_MIN_BYTE_COUNT))
return NULL;
BUG_ON(len > MV_XOR_MAX_BYTE_COUNT);
spin_lock_bh(&mv_chan->lock);
slot_cnt = mv_chan_memcpy_slot_count(len);
sw_desc = mv_xor_alloc_slots(mv_chan, slot_cnt, 1);
if (sw_desc) {
sw_desc->type = DMA_MEMCPY;
sw_desc->async_tx.flags = flags;
grp_start = sw_desc->group_head;
mv_desc_init(grp_start, flags);
mv_desc_set_byte_count(grp_start, len);
mv_desc_set_dest_addr(sw_desc->group_head, dest);
mv_desc_set_src_addr(grp_start, 0, src);
sw_desc->unmap_src_cnt = 1;
sw_desc->unmap_len = len;
}
spin_unlock_bh(&mv_chan->lock);
dev_dbg(mv_chan_to_devp(mv_chan),
"%s sw_desc %p async_tx %p\n",
__func__, sw_desc, sw_desc ? &sw_desc->async_tx : NULL);
return sw_desc ? &sw_desc->async_tx : NULL;
}
static struct dma_async_tx_descriptor *
mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
unsigned int src_cnt, size_t len, unsigned long flags)
{
struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan);
struct mv_xor_desc_slot *sw_desc, *grp_start;
int slot_cnt;
struct mv_xor_desc_slot *sw_desc;
if (unlikely(len < MV_XOR_MIN_BYTE_COUNT))
return NULL;
@ -624,20 +508,13 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
__func__, src_cnt, len, &dest, flags);
spin_lock_bh(&mv_chan->lock);
slot_cnt = mv_chan_xor_slot_count(len, src_cnt);
sw_desc = mv_xor_alloc_slots(mv_chan, slot_cnt, 1);
sw_desc = mv_xor_alloc_slot(mv_chan);
if (sw_desc) {
sw_desc->type = DMA_XOR;
sw_desc->async_tx.flags = flags;
grp_start = sw_desc->group_head;
mv_desc_init(grp_start, flags);
/* the byte count field is the same as in memcpy desc*/
mv_desc_set_byte_count(grp_start, len);
mv_desc_set_dest_addr(sw_desc->group_head, dest);
sw_desc->unmap_src_cnt = src_cnt;
sw_desc->unmap_len = len;
mv_desc_init(sw_desc, dest, len, flags);
while (src_cnt--)
mv_desc_set_src_addr(grp_start, src_cnt, src[src_cnt]);
mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]);
}
spin_unlock_bh(&mv_chan->lock);
dev_dbg(mv_chan_to_devp(mv_chan),
@ -646,6 +523,35 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
return sw_desc ? &sw_desc->async_tx : NULL;
}
static struct dma_async_tx_descriptor *
mv_xor_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long flags)
{
/*
* A MEMCPY operation is identical to an XOR operation with only
* a single source address.
*/
return mv_xor_prep_dma_xor(chan, dest, &src, 1, len, flags);
}
static struct dma_async_tx_descriptor *
mv_xor_prep_dma_interrupt(struct dma_chan *chan, unsigned long flags)
{
struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan);
dma_addr_t src, dest;
size_t len;
src = mv_chan->dummy_src_addr;
dest = mv_chan->dummy_dst_addr;
len = MV_XOR_MIN_BYTE_COUNT;
/*
* We implement the DMA_INTERRUPT operation as a minimum sized
* XOR operation with a single dummy source address.
*/
return mv_xor_prep_dma_xor(chan, dest, &src, 1, len, flags);
}
static void mv_xor_free_chan_resources(struct dma_chan *chan)
{
struct mv_xor_chan *mv_chan = to_mv_xor_chan(chan);
@ -733,18 +639,16 @@ static void mv_dump_xor_regs(struct mv_xor_chan *chan)
static void mv_xor_err_interrupt_handler(struct mv_xor_chan *chan,
u32 intr_cause)
{
if (intr_cause & (1 << 4)) {
dev_dbg(mv_chan_to_devp(chan),
"ignore this error\n");
return;
if (intr_cause & XOR_INT_ERR_DECODE) {
dev_dbg(mv_chan_to_devp(chan), "ignoring address decode error\n");
return;
}
dev_err(mv_chan_to_devp(chan),
"error on chan %d. intr cause 0x%08x\n",
dev_err(mv_chan_to_devp(chan), "error on chan %d. intr cause 0x%08x\n",
chan->idx, intr_cause);
mv_dump_xor_regs(chan);
BUG();
WARN_ON(1);
}
static irqreturn_t mv_xor_interrupt_handler(int irq, void *data)
@ -754,7 +658,7 @@ static irqreturn_t mv_xor_interrupt_handler(int irq, void *data)
dev_dbg(mv_chan_to_devp(chan), "intr cause %x\n", intr_cause);
if (mv_is_err_intr(intr_cause))
if (intr_cause & XOR_INTR_ERRORS)
mv_xor_err_interrupt_handler(chan, intr_cause);
tasklet_schedule(&chan->irq_tasklet);
@ -1041,6 +945,10 @@ static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
dma_free_coherent(dev, MV_XOR_POOL_SIZE,
mv_chan->dma_desc_pool_virt, mv_chan->dma_desc_pool);
dma_unmap_single(dev, mv_chan->dummy_src_addr,
MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
dma_unmap_single(dev, mv_chan->dummy_dst_addr,
MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
list_for_each_entry_safe(chan, _chan, &mv_chan->dmadev.channels,
device_node) {
@ -1070,6 +978,16 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
dma_dev = &mv_chan->dmadev;
/*
* These source and destination dummy buffers are used to implement
* a DMA_INTERRUPT operation as a minimum-sized XOR operation.
* Hence, we only need to map the buffers at initialization-time.
*/
mv_chan->dummy_src_addr = dma_map_single(dma_dev->dev,
mv_chan->dummy_src, MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
mv_chan->dummy_dst_addr = dma_map_single(dma_dev->dev,
mv_chan->dummy_dst, MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
/* allocate coherent memory for hardware descriptors
* note: writecombine gives slightly better performance, but
* requires that we explicitly flush the writes
@ -1094,6 +1012,8 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
dma_dev->dev = &pdev->dev;
/* set prep routines based on capability */
if (dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask))
dma_dev->device_prep_dma_interrupt = mv_xor_prep_dma_interrupt;
if (dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask))
dma_dev->device_prep_dma_memcpy = mv_xor_prep_dma_memcpy;
if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) {
@ -1116,7 +1036,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
mv_chan_unmask_interrupts(mv_chan);
mv_set_mode(mv_chan, DMA_MEMCPY);
mv_set_mode(mv_chan, DMA_XOR);
spin_lock_init(&mv_chan->lock);
INIT_LIST_HEAD(&mv_chan->chain);

View File

@ -23,17 +23,22 @@
#include <linux/dmaengine.h>
#include <linux/interrupt.h>
#define USE_TIMER
#define MV_XOR_POOL_SIZE PAGE_SIZE
#define MV_XOR_SLOT_SIZE 64
#define MV_XOR_THRESHOLD 1
#define MV_XOR_MAX_CHANNELS 2
#define MV_XOR_MIN_BYTE_COUNT SZ_128
#define MV_XOR_MAX_BYTE_COUNT (SZ_16M - 1)
/* Values for the XOR_CONFIG register */
#define XOR_OPERATION_MODE_XOR 0
#define XOR_OPERATION_MODE_MEMCPY 2
#define XOR_DESCRIPTOR_SWAP BIT(14)
#define XOR_DESC_DMA_OWNED BIT(31)
#define XOR_DESC_EOD_INT_EN BIT(31)
#define XOR_CURR_DESC(chan) (chan->mmr_high_base + 0x10 + (chan->idx * 4))
#define XOR_NEXT_DESC(chan) (chan->mmr_high_base + 0x00 + (chan->idx * 4))
#define XOR_BYTE_COUNT(chan) (chan->mmr_high_base + 0x20 + (chan->idx * 4))
@ -48,7 +53,24 @@
#define XOR_INTR_MASK(chan) (chan->mmr_base + 0x40)
#define XOR_ERROR_CAUSE(chan) (chan->mmr_base + 0x50)
#define XOR_ERROR_ADDR(chan) (chan->mmr_base + 0x60)
#define XOR_INTR_MASK_VALUE 0x3F5
#define XOR_INT_END_OF_DESC BIT(0)
#define XOR_INT_END_OF_CHAIN BIT(1)
#define XOR_INT_STOPPED BIT(2)
#define XOR_INT_PAUSED BIT(3)
#define XOR_INT_ERR_DECODE BIT(4)
#define XOR_INT_ERR_RDPROT BIT(5)
#define XOR_INT_ERR_WRPROT BIT(6)
#define XOR_INT_ERR_OWN BIT(7)
#define XOR_INT_ERR_PAR BIT(8)
#define XOR_INT_ERR_MBUS BIT(9)
#define XOR_INTR_ERRORS (XOR_INT_ERR_DECODE | XOR_INT_ERR_RDPROT | \
XOR_INT_ERR_WRPROT | XOR_INT_ERR_OWN | \
XOR_INT_ERR_PAR | XOR_INT_ERR_MBUS)
#define XOR_INTR_MASK_VALUE (XOR_INT_END_OF_DESC | XOR_INT_END_OF_CHAIN | \
XOR_INT_STOPPED | XOR_INTR_ERRORS)
#define WINDOW_BASE(w) (0x50 + ((w) << 2))
#define WINDOW_SIZE(w) (0x70 + ((w) << 2))
@ -97,10 +119,9 @@ struct mv_xor_chan {
struct list_head all_slots;
int slots_allocated;
struct tasklet_struct irq_tasklet;
#ifdef USE_TIMER
unsigned long cleanup_time;
u32 current_on_last_cleanup;
#endif
char dummy_src[MV_XOR_MIN_BYTE_COUNT];
char dummy_dst[MV_XOR_MIN_BYTE_COUNT];
dma_addr_t dummy_src_addr, dummy_dst_addr;
};
/**
@ -110,16 +131,10 @@ struct mv_xor_chan {
* @completed_node: node on the mv_xor_chan.completed_slots list
* @hw_desc: virtual address of the hardware descriptor chain
* @phys: hardware address of the hardware descriptor chain
* @group_head: first operation in a transaction
* @slot_cnt: total slots used in an transaction (group of operations)
* @slots_per_op: number of slots per operation
* @slot_used: slot in use or not
* @idx: pool index
* @unmap_src_cnt: number of xor sources
* @unmap_len: transaction bytecount
* @tx_list: list of slots that make up a multi-descriptor transaction
* @async_tx: support for the async_tx api
* @xor_check_result: result of zero sum
* @crc32_result: result crc calculation
*/
struct mv_xor_desc_slot {
struct list_head slot_node;
@ -127,23 +142,9 @@ struct mv_xor_desc_slot {
struct list_head completed_node;
enum dma_transaction_type type;
void *hw_desc;
struct mv_xor_desc_slot *group_head;
u16 slot_cnt;
u16 slots_per_op;
u16 slot_used;
u16 idx;
u16 unmap_src_cnt;
u32 value;
size_t unmap_len;
struct list_head tx_list;
struct dma_async_tx_descriptor async_tx;
union {
u32 *xor_check_result;
u32 *crc32_result;
};
#ifdef USE_TIMER
unsigned long arrival_time;
struct timer_list timeout;
#endif
};
/*
@ -189,9 +190,4 @@ struct mv_xor_desc {
#define mv_hw_desc_slot_idx(hw_desc, idx) \
((void *)(((unsigned long)hw_desc) + ((idx) << 5)))
#define MV_XOR_MIN_BYTE_COUNT (128)
#define XOR_MAX_BYTE_COUNT ((16 * 1024 * 1024) - 1)
#define MV_XOR_MAX_BYTE_COUNT XOR_MAX_BYTE_COUNT
#endif

View File

@ -1367,17 +1367,10 @@ static int pl330_submit_req(struct pl330_thread *thrd,
struct pl330_dmac *pl330 = thrd->dmac;
struct _xfer_spec xs;
unsigned long flags;
void __iomem *regs;
unsigned idx;
u32 ccr;
int ret = 0;
/* No Req or Unacquired Channel or DMAC */
if (!desc || !thrd || thrd->free)
return -EINVAL;
regs = thrd->dmac->base;
if (pl330->state == DYING
|| pl330->dmac_tbd.reset_chan & (1 << thrd->id)) {
dev_info(thrd->dmac->ddma.dev, "%s:%d\n",
@ -2755,8 +2748,10 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
list_del(&pch->chan.device_node);
/* Flush the channel */
pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
pl330_free_chan_resources(&pch->chan);
if (pch->thread) {
pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
pl330_free_chan_resources(&pch->chan);
}
}
probe_err2:
pl330_del(pl330);
@ -2782,8 +2777,10 @@ static int pl330_remove(struct amba_device *adev)
list_del(&pch->chan.device_node);
/* Flush the channel */
pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
pl330_free_chan_resources(&pch->chan);
if (pch->thread) {
pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
pl330_free_chan_resources(&pch->chan);
}
}
pl330_del(pl330);

View File

@ -117,7 +117,7 @@ static void audmapp_start_xfer(struct shdma_chan *schan,
audmapp_write(auchan, chcr, PDMACHCR);
}
static void audmapp_get_config(struct audmapp_chan *auchan, int slave_id,
static int audmapp_get_config(struct audmapp_chan *auchan, int slave_id,
u32 *chcr, dma_addr_t *dst)
{
struct audmapp_device *audev = to_dev(auchan);
@ -131,20 +131,22 @@ static void audmapp_get_config(struct audmapp_chan *auchan, int slave_id,
if (!pdata) { /* DT */
*chcr = ((u32)slave_id) << 16;
auchan->shdma_chan.slave_id = (slave_id) >> 8;
return;
return 0;
}
/* non-DT */
if (slave_id >= AUDMAPP_SLAVE_NUMBER)
return;
return -ENXIO;
for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++)
if (cfg->slave_id == slave_id) {
*chcr = cfg->chcr;
*dst = cfg->dst;
break;
return 0;
}
return -ENXIO;
}
static int audmapp_set_slave(struct shdma_chan *schan, int slave_id,
@ -153,8 +155,11 @@ static int audmapp_set_slave(struct shdma_chan *schan, int slave_id,
struct audmapp_chan *auchan = to_chan(schan);
u32 chcr;
dma_addr_t dst;
int ret;
audmapp_get_config(auchan, slave_id, &chcr, &dst);
ret = audmapp_get_config(auchan, slave_id, &chcr, &dst);
if (ret < 0)
return ret;
if (try)
return 0;

View File

@ -862,7 +862,6 @@ static int sun6i_dma_probe(struct platform_device *pdev)
{
struct sun6i_dma_dev *sdc;
struct resource *res;
struct clk *mux, *pll6;
int ret, i;
sdc = devm_kzalloc(&pdev->dev, sizeof(*sdc), GFP_KERNEL);
@ -886,28 +885,6 @@ static int sun6i_dma_probe(struct platform_device *pdev)
return PTR_ERR(sdc->clk);
}
mux = clk_get(NULL, "ahb1_mux");
if (IS_ERR(mux)) {
dev_err(&pdev->dev, "Couldn't get AHB1 Mux\n");
return PTR_ERR(mux);
}
pll6 = clk_get(NULL, "pll6");
if (IS_ERR(pll6)) {
dev_err(&pdev->dev, "Couldn't get PLL6\n");
clk_put(mux);
return PTR_ERR(pll6);
}
ret = clk_set_parent(mux, pll6);
clk_put(pll6);
clk_put(mux);
if (ret) {
dev_err(&pdev->dev, "Couldn't reparent AHB1 on PLL6\n");
return ret;
}
sdc->rstc = devm_reset_control_get(&pdev->dev, NULL);
if (IS_ERR(sdc->rstc)) {
dev_err(&pdev->dev, "No reset controller specified\n");

View File

@ -1365,7 +1365,6 @@ static const struct of_device_id xilinx_vdma_of_ids[] = {
static struct platform_driver xilinx_vdma_driver = {
.driver = {
.name = "xilinx-vdma",
.owner = THIS_MODULE,
.of_match_table = xilinx_vdma_of_ids,
},
.probe = xilinx_vdma_probe,

View File

@ -415,10 +415,8 @@ static void mx3_stop_streaming(struct vb2_queue *q)
struct mx3_camera_buffer *buf, *tmp;
unsigned long flags;
if (ichan) {
struct dma_chan *chan = &ichan->dma_chan;
chan->device->device_control(chan, DMA_PAUSE, 0);
}
if (ichan)
dmaengine_pause(&ichan->dma_chan);
spin_lock_irqsave(&mx3_cam->lock, flags);

View File

@ -16,6 +16,7 @@
#include <linux/completion.h>
#include <linux/miscdevice.h>
#include <linux/dmaengine.h>
#include <linux/fsldma.h>
#include <linux/interrupt.h>
#include <linux/highmem.h>
#include <linux/kernel.h>
@ -518,23 +519,22 @@ static noinline int fpga_program_dma(struct fpga_dev *priv)
config.direction = DMA_MEM_TO_DEV;
config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
config.dst_maxburst = fpga_fifo_size(priv->regs) / 2 / 4;
ret = chan->device->device_control(chan, DMA_SLAVE_CONFIG,
(unsigned long)&config);
ret = dmaengine_slave_config(chan, &config);
if (ret) {
dev_err(priv->dev, "DMA slave configuration failed\n");
goto out_dma_unmap;
}
ret = chan->device->device_control(chan, FSLDMA_EXTERNAL_START, 1);
ret = fsl_dma_external_start(chan, 1);
if (ret) {
dev_err(priv->dev, "DMA external control setup failed\n");
goto out_dma_unmap;
}
/* setup and submit the DMA transaction */
tx = chan->device->device_prep_dma_sg(chan,
table.sgl, num_pages,
vb->sglist, vb->sglen, 0);
tx = dmaengine_prep_dma_sg(chan, table.sgl, num_pages,
vb->sglist, vb->sglen, 0);
if (!tx) {
dev_err(priv->dev, "Unable to prep DMA transaction\n");
ret = -ENOMEM;

View File

@ -605,7 +605,7 @@ static int dma_xfer(struct fsmc_nand_data *host, void *buffer, int len,
wait_for_completion_timeout(&host->dma_access_complete,
msecs_to_jiffies(3000));
if (ret <= 0) {
chan->device->device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dev_err(host->dev, "wait_for_completion_timeout\n");
if (!ret)
ret = -ETIMEDOUT;

View File

@ -395,7 +395,7 @@ static int flctl_dma_fifo0_transfer(struct sh_flctl *flctl, unsigned long *buf,
msecs_to_jiffies(3000));
if (ret <= 0) {
chan->device->device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dev_err(&flctl->pdev->dev, "wait_for_completion_timeout\n");
}

View File

@ -875,13 +875,11 @@ static void ks8842_stop_dma(struct ks8842_adapter *adapter)
tx_ctl->adesc = NULL;
if (tx_ctl->chan)
tx_ctl->chan->device->device_control(tx_ctl->chan,
DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(tx_ctl->chan);
rx_ctl->adesc = NULL;
if (rx_ctl->chan)
rx_ctl->chan->device->device_control(rx_ctl->chan,
DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(rx_ctl->chan);
if (sg_dma_address(&rx_ctl->sg))
dma_unmap_single(adapter->dev, sg_dma_address(&rx_ctl->sg),

View File

@ -157,7 +157,6 @@ static struct dma_async_tx_descriptor *
pxa2xx_spi_dma_prepare_one(struct driver_data *drv_data,
enum dma_transfer_direction dir)
{
struct pxa2xx_spi_master *pdata = drv_data->master_info;
struct chip_data *chip = drv_data->cur_chip;
enum dma_slave_buswidth width;
struct dma_slave_config cfg;
@ -184,7 +183,6 @@ pxa2xx_spi_dma_prepare_one(struct driver_data *drv_data,
cfg.dst_addr = drv_data->ssdr_physical;
cfg.dst_addr_width = width;
cfg.dst_maxburst = chip->dma_burst_size;
cfg.slave_id = pdata->tx_slave_id;
sgt = &drv_data->tx_sgt;
nents = drv_data->tx_nents;
@ -193,7 +191,6 @@ pxa2xx_spi_dma_prepare_one(struct driver_data *drv_data,
cfg.src_addr = drv_data->ssdr_physical;
cfg.src_addr_width = width;
cfg.src_maxburst = chip->dma_burst_size;
cfg.slave_id = pdata->rx_slave_id;
sgt = &drv_data->rx_sgt;
nents = drv_data->rx_nents;
@ -210,14 +207,6 @@ pxa2xx_spi_dma_prepare_one(struct driver_data *drv_data,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
}
static bool pxa2xx_spi_dma_filter(struct dma_chan *chan, void *param)
{
const struct pxa2xx_spi_master *pdata = param;
return chan->chan_id == pdata->tx_chan_id ||
chan->chan_id == pdata->rx_chan_id;
}
bool pxa2xx_spi_dma_is_possible(size_t len)
{
return len <= MAX_DMA_LEN;
@ -321,12 +310,12 @@ int pxa2xx_spi_dma_setup(struct driver_data *drv_data)
return -ENOMEM;
drv_data->tx_chan = dma_request_slave_channel_compat(mask,
pxa2xx_spi_dma_filter, pdata, dev, "tx");
pdata->dma_filter, pdata->tx_param, dev, "tx");
if (!drv_data->tx_chan)
return -ENODEV;
drv_data->rx_chan = dma_request_slave_channel_compat(mask,
pxa2xx_spi_dma_filter, pdata, dev, "rx");
pdata->dma_filter, pdata->rx_param, dev, "rx");
if (!drv_data->rx_chan) {
dma_release_channel(drv_data->tx_chan);
drv_data->tx_chan = NULL;

View File

@ -10,42 +10,87 @@
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-dw.h>
enum {
PORT_CE4100,
PORT_BYT,
PORT_BSW0,
PORT_BSW1,
PORT_BSW2,
};
struct pxa_spi_info {
enum pxa_ssp_type type;
int port_id;
int num_chipselect;
int tx_slave_id;
int tx_chan_id;
int rx_slave_id;
int rx_chan_id;
unsigned long max_clk_rate;
/* DMA channel request parameters */
void *tx_param;
void *rx_param;
};
static struct dw_dma_slave byt_tx_param = { .dst_id = 0 };
static struct dw_dma_slave byt_rx_param = { .src_id = 1 };
static struct dw_dma_slave bsw0_tx_param = { .dst_id = 0 };
static struct dw_dma_slave bsw0_rx_param = { .src_id = 1 };
static struct dw_dma_slave bsw1_tx_param = { .dst_id = 6 };
static struct dw_dma_slave bsw1_rx_param = { .src_id = 7 };
static struct dw_dma_slave bsw2_tx_param = { .dst_id = 8 };
static struct dw_dma_slave bsw2_rx_param = { .src_id = 9 };
static bool lpss_dma_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_slave *dws = param;
if (dws->dma_dev != chan->device->dev)
return false;
chan->private = dws;
return true;
}
static struct pxa_spi_info spi_info_configs[] = {
[PORT_CE4100] = {
.type = PXA25x_SSP,
.port_id = -1,
.num_chipselect = -1,
.tx_slave_id = -1,
.tx_chan_id = -1,
.rx_slave_id = -1,
.rx_chan_id = -1,
.max_clk_rate = 3686400,
},
[PORT_BYT] = {
.type = LPSS_SSP,
.port_id = 0,
.num_chipselect = 1,
.tx_slave_id = 0,
.tx_chan_id = 0,
.rx_slave_id = 1,
.rx_chan_id = 1,
.max_clk_rate = 50000000,
.tx_param = &byt_tx_param,
.rx_param = &byt_rx_param,
},
[PORT_BSW0] = {
.type = LPSS_SSP,
.port_id = 0,
.num_chipselect = 1,
.max_clk_rate = 50000000,
.tx_param = &bsw0_tx_param,
.rx_param = &bsw0_rx_param,
},
[PORT_BSW1] = {
.type = LPSS_SSP,
.port_id = 1,
.num_chipselect = 1,
.max_clk_rate = 50000000,
.tx_param = &bsw1_tx_param,
.rx_param = &bsw1_rx_param,
},
[PORT_BSW2] = {
.type = LPSS_SSP,
.port_id = 2,
.num_chipselect = 1,
.max_clk_rate = 50000000,
.tx_param = &bsw2_tx_param,
.rx_param = &bsw2_rx_param,
},
};
@ -59,6 +104,7 @@ static int pxa2xx_spi_pci_probe(struct pci_dev *dev,
struct ssp_device *ssp;
struct pxa_spi_info *c;
char buf[40];
struct pci_dev *dma_dev;
ret = pcim_enable_device(dev);
if (ret)
@ -73,11 +119,29 @@ static int pxa2xx_spi_pci_probe(struct pci_dev *dev,
memset(&spi_pdata, 0, sizeof(spi_pdata));
spi_pdata.num_chipselect = (c->num_chipselect > 0) ?
c->num_chipselect : dev->devfn;
spi_pdata.tx_slave_id = c->tx_slave_id;
spi_pdata.tx_chan_id = c->tx_chan_id;
spi_pdata.rx_slave_id = c->rx_slave_id;
spi_pdata.rx_chan_id = c->rx_chan_id;
spi_pdata.enable_dma = c->rx_slave_id >= 0 && c->tx_slave_id >= 0;
dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
if (c->tx_param) {
struct dw_dma_slave *slave = c->tx_param;
slave->dma_dev = &dma_dev->dev;
slave->src_master = 1;
slave->dst_master = 0;
}
if (c->rx_param) {
struct dw_dma_slave *slave = c->rx_param;
slave->dma_dev = &dma_dev->dev;
slave->src_master = 1;
slave->dst_master = 0;
}
spi_pdata.dma_filter = lpss_dma_filter;
spi_pdata.tx_param = c->tx_param;
spi_pdata.rx_param = c->rx_param;
spi_pdata.enable_dma = c->rx_param && c->tx_param;
ssp = &spi_pdata.ssp;
ssp->phys_base = pci_resource_start(dev, 0);
@ -128,6 +192,9 @@ static void pxa2xx_spi_pci_remove(struct pci_dev *dev)
static const struct pci_device_id pxa2xx_spi_pci_devices[] = {
{ PCI_VDEVICE(INTEL, 0x2e6a), PORT_CE4100 },
{ PCI_VDEVICE(INTEL, 0x0f0e), PORT_BYT },
{ PCI_VDEVICE(INTEL, 0x228e), PORT_BSW0 },
{ PCI_VDEVICE(INTEL, 0x2290), PORT_BSW1 },
{ PCI_VDEVICE(INTEL, 0x22ac), PORT_BSW2 },
{ },
};
MODULE_DEVICE_TABLE(pci, pxa2xx_spi_pci_devices);
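
Channel acquisition now goes through the generic filter path instead of matching fixed chan_id numbers. A hedged sketch of how the setup code ends up using these hooks (the mask setup is the usual dmaengine boilerplate; error handling trimmed):

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* lpss_dma_filter() accepts only channels of the DMA device named
	 * in the dw_dma_slave and stashes it in chan->private for dw_dmac */
	chan = dma_request_slave_channel_compat(mask, pdata->dma_filter,
						pdata->tx_param, dev, "tx");
	if (!chan)
		return -ENODEV;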

View File

@ -1062,8 +1062,6 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
pdata->num_chipselect = 1;
pdata->enable_dma = true;
pdata->tx_chan_id = -1;
pdata->rx_chan_id = -1;
return pdata;
}

View File

@ -16,13 +16,13 @@
#include <linux/dmaengine.h>
struct uart_8250_dma {
/* Filter function */
dma_filter_fn fn;
/* Parameter to the filter function */
void *rx_param;
void *tx_param;
int rx_chan_id;
int tx_chan_id;
struct dma_slave_config rxconf;
struct dma_slave_config txconf;

View File

@ -216,10 +216,7 @@ static void dw8250_set_termios(struct uart_port *p, struct ktermios *termios,
static bool dw8250_dma_filter(struct dma_chan *chan, void *param)
{
struct dw8250_data *data = param;
return chan->chan_id == data->dma.tx_chan_id ||
chan->chan_id == data->dma.rx_chan_id;
return false;
}
static void dw8250_setup_port(struct uart_8250_port *up)
@ -399,8 +396,6 @@ static int dw8250_probe(struct platform_device *pdev)
if (!IS_ERR(data->rst))
reset_control_deassert(data->rst);
data->dma.rx_chan_id = -1;
data->dma.tx_chan_id = -1;
data->dma.rx_param = data;
data->dma.tx_param = data;
data->dma.fn = dw8250_dma_filter;

View File

@ -25,6 +25,9 @@
#include <asm/byteorder.h>
#include <asm/io.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-dw.h>
#include "8250.h"
/*
@ -1349,6 +1352,9 @@ ce4100_serial_setup(struct serial_private *priv,
#define PCI_DEVICE_ID_INTEL_BYT_UART1 0x0f0a
#define PCI_DEVICE_ID_INTEL_BYT_UART2 0x0f0c
#define PCI_DEVICE_ID_INTEL_BSW_UART1 0x228a
#define PCI_DEVICE_ID_INTEL_BSW_UART2 0x228c
#define BYT_PRV_CLK 0x800
#define BYT_PRV_CLK_EN (1 << 0)
#define BYT_PRV_CLK_M_VAL_SHIFT 1
@ -1414,7 +1420,13 @@ byt_set_termios(struct uart_port *p, struct ktermios *termios,
static bool byt_dma_filter(struct dma_chan *chan, void *param)
{
return chan->chan_id == *(int *)param;
struct dw_dma_slave *dws = param;
if (dws->dma_dev != chan->device->dev)
return false;
chan->private = dws;
return true;
}
static int
@ -1422,35 +1434,57 @@ byt_serial_setup(struct serial_private *priv,
const struct pciserial_board *board,
struct uart_8250_port *port, int idx)
{
struct pci_dev *pdev = priv->dev;
struct device *dev = port->port.dev;
struct uart_8250_dma *dma;
struct dw_dma_slave *tx_param, *rx_param;
struct pci_dev *dma_dev;
int ret;
dma = devm_kzalloc(port->port.dev, sizeof(*dma), GFP_KERNEL);
dma = devm_kzalloc(dev, sizeof(*dma), GFP_KERNEL);
if (!dma)
return -ENOMEM;
switch (priv->dev->device) {
tx_param = devm_kzalloc(dev, sizeof(*tx_param), GFP_KERNEL);
if (!tx_param)
return -ENOMEM;
rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL);
if (!rx_param)
return -ENOMEM;
switch (pdev->device) {
case PCI_DEVICE_ID_INTEL_BYT_UART1:
dma->rx_chan_id = 3;
dma->tx_chan_id = 2;
case PCI_DEVICE_ID_INTEL_BSW_UART1:
rx_param->src_id = 3;
tx_param->dst_id = 2;
break;
case PCI_DEVICE_ID_INTEL_BYT_UART2:
dma->rx_chan_id = 5;
dma->tx_chan_id = 4;
case PCI_DEVICE_ID_INTEL_BSW_UART2:
rx_param->src_id = 5;
tx_param->dst_id = 4;
break;
default:
return -EINVAL;
}
dma->rxconf.slave_id = dma->rx_chan_id;
rx_param->src_master = 1;
rx_param->dst_master = 0;
dma->rxconf.src_maxburst = 16;
dma->txconf.slave_id = dma->tx_chan_id;
tx_param->src_master = 1;
tx_param->dst_master = 0;
dma->txconf.dst_maxburst = 16;
dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 0));
rx_param->dma_dev = &dma_dev->dev;
tx_param->dma_dev = &dma_dev->dev;
dma->fn = byt_dma_filter;
dma->rx_param = &dma->rx_chan_id;
dma->tx_param = &dma->tx_chan_id;
dma->rx_param = rx_param;
dma->tx_param = tx_param;
ret = pci_default_setup(priv, board, port, idx);
port->port.iotype = UPIO_MEM;
@ -1893,6 +1927,20 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
.subdevice = PCI_ANY_ID,
.setup = pci_default_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_UART1,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_UART2,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
/*
* ITE
*/
@ -5192,6 +5240,14 @@ static struct pci_device_id serial_pci_tbl[] = {
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BSW_UART1,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BSW_UART2,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
/*
* Intel Quark x1000

View File

@ -37,6 +37,7 @@
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/atmel_pdc.h>
#include <linux/atmel_serial.h>
#include <linux/uaccess.h>

View File

@ -1403,7 +1403,7 @@ static void work_fn_rx(struct work_struct *work)
unsigned long flags;
int count;
chan->device->device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dev_dbg(port->dev, "Read %zu bytes with cookie %d\n",
sh_desc->partial, sh_desc->cookie);

View File

@ -461,8 +461,7 @@ static void sdc_disable_channel(struct mx3fb_info *mx3_fbi)
spin_unlock_irqrestore(&mx3fb->lock, flags);
mx3_fbi->txd->chan->device->device_control(mx3_fbi->txd->chan,
DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(mx3_fbi->txd->chan);
mx3_fbi->txd = NULL;
mx3_fbi->cookie = -EINVAL;
}

include/linux/dma/dw.h (new file, 64 lines)
View File

@ -0,0 +1,64 @@
/*
* Driver for the Synopsys DesignWare DMA Controller
*
* Copyright (C) 2007 Atmel Corporation
* Copyright (C) 2010-2011 ST Microelectronics
* Copyright (C) 2014 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _DMA_DW_H
#define _DMA_DW_H
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-dw.h>
struct dw_dma;
/**
* struct dw_dma_chip - representation of DesignWare DMA controller hardware
* @dev: struct device of the DMA controller
* @irq: irq line
* @regs: memory mapped I/O space
* @clk: hclk clock
* @dw: struct dw_dma that is filled in by dw_dma_probe()
*/
struct dw_dma_chip {
struct device *dev;
int irq;
void __iomem *regs;
struct clk *clk;
struct dw_dma *dw;
};
/* Export to the platform drivers */
int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata);
int dw_dma_remove(struct dw_dma_chip *chip);
/* DMA API extensions */
struct dw_desc;
struct dw_cyclic_desc {
struct dw_desc **desc;
unsigned long periods;
void (*period_callback)(void *param);
void *period_callback_param;
};
struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
dma_addr_t buf_addr, size_t buf_len, size_t period_len,
enum dma_transfer_direction direction);
void dw_dma_cyclic_free(struct dma_chan *chan);
int dw_dma_cyclic_start(struct dma_chan *chan);
void dw_dma_cyclic_stop(struct dma_chan *chan);
dma_addr_t dw_dma_get_src_addr(struct dma_chan *chan);
dma_addr_t dw_dma_get_dst_addr(struct dma_chan *chan);
#endif /* _DMA_DW_H */
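
This header is what lets bus glue (PCI or platform) drive the controller core directly: fill a struct dw_dma_chip and hand it to dw_dma_probe(). A minimal sketch of the platform-glue pattern, assuming a platform device pdev with one MMIO window and one IRQ (clock handling and error paths abbreviated):

	struct dw_dma_chip *chip;
	struct resource *mem;
	int ret;

	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;

	chip->dev = &pdev->dev;
	chip->irq = platform_get_irq(pdev, 0);
	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	chip->regs = devm_ioremap_resource(&pdev->dev, mem);
	if (IS_ERR(chip->regs))
		return PTR_ERR(chip->regs);

	/* a NULL pdata asks the core to autoconfigure from the hardware
	 * parameter registers, on IP revisions that expose them */
	ret = dw_dma_probe(chip, NULL);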

View File

@ -199,15 +199,12 @@ enum dma_ctrl_flags {
* configuration data in statically from the platform). An additional
* argument of struct dma_slave_config must be passed in with this
* command.
* @FSLDMA_EXTERNAL_START: this command will put the Freescale DMA controller
* into external start mode.
*/
enum dma_ctrl_cmd {
DMA_TERMINATE_ALL,
DMA_PAUSE,
DMA_RESUME,
DMA_SLAVE_CONFIG,
FSLDMA_EXTERNAL_START,
};
/**
@ -307,7 +304,9 @@ enum dma_slave_buswidth {
* struct dma_slave_config - dma slave channel runtime config
* @direction: whether the data shall go in or out on this slave
* channel, right now. DMA_MEM_TO_DEV and DMA_DEV_TO_MEM are
* legal values.
* legal values. DEPRECATED, drivers should use the direction argument
* to the device_prep_slave_sg and device_prep_dma_cyclic functions or
* the dir field in the dma_interleaved_template structure.
* @src_addr: this is the physical address where DMA slave data
* should be read (RX), if the source is memory this argument is
* ignored.
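
In practice the deprecation means a slave driver keeps the address and width fields in dma_slave_config but passes the direction with each transfer. A hedged sketch of the preferred form (chan, sgl and nents stand in for a driver's own channel and mapped scatterlist):

	struct dma_async_tx_descriptor *desc;

	/* the direction travels with the prep call, not with the config */
	desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_DEV_TO_MEM,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -ENOMEM;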
@ -755,6 +754,16 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
return chan->device->device_prep_interleaved_dma(chan, xt, flags);
}
static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(
struct dma_chan *chan,
struct scatterlist *dst_sg, unsigned int dst_nents,
struct scatterlist *src_sg, unsigned int src_nents,
unsigned long flags)
{
return chan->device->device_prep_dma_sg(chan, dst_sg, dst_nents,
src_sg, src_nents, flags);
}
static inline int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
{
if (!chan || !caps)
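
A hedged usage sketch for the new dmaengine_prep_dma_sg() wrapper; both scatterlists are assumed to be already mapped with dma_map_sg() for this channel's device:

	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_sg(chan, dst_sg, dst_nents,
				   src_sg, src_nents, DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);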

View File

@ -1,111 +0,0 @@
/*
* Driver for the Synopsys DesignWare DMA Controller
*
* Copyright (C) 2007 Atmel Corporation
* Copyright (C) 2010-2011 ST Microelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef DW_DMAC_H
#define DW_DMAC_H
#include <linux/dmaengine.h>
/**
* struct dw_dma_slave - Controller-specific information about a slave
*
* @dma_dev: required DMA master device. Deprecated.
* @bus_id: name of this device channel, not just a device name since
* devices may have more than one channel e.g. "foo_tx"
* @cfg_hi: Platform-specific initializer for the CFG_HI register
* @cfg_lo: Platform-specific initializer for the CFG_LO register
* @src_master: src master for transfers on allocated channel.
* @dst_master: dest master for transfers on allocated channel.
*/
struct dw_dma_slave {
struct device *dma_dev;
u32 cfg_hi;
u32 cfg_lo;
u8 src_master;
u8 dst_master;
};
/**
* struct dw_dma_platform_data - Controller configuration parameters
* @nr_channels: Number of channels supported by hardware (max 8)
* @is_private: The device channels should be marked as private and not for
* use by the general purpose DMA channel allocator.
* @chan_allocation_order: Allocate channels starting from 0 or 7
* @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.
* @block_size: Maximum block size supported by the controller
* @nr_masters: Number of AHB masters supported by the controller
* @data_width: Maximum data width supported by hardware per AHB master
* (0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
*/
struct dw_dma_platform_data {
unsigned int nr_channels;
bool is_private;
#define CHAN_ALLOCATION_ASCENDING 0 /* zero to seven */
#define CHAN_ALLOCATION_DESCENDING 1 /* seven to zero */
unsigned char chan_allocation_order;
#define CHAN_PRIORITY_ASCENDING 0 /* chan0 highest */
#define CHAN_PRIORITY_DESCENDING 1 /* chan7 highest */
unsigned char chan_priority;
unsigned short block_size;
unsigned char nr_masters;
unsigned char data_width[4];
};
/* bursts size */
enum dw_dma_msize {
DW_DMA_MSIZE_1,
DW_DMA_MSIZE_4,
DW_DMA_MSIZE_8,
DW_DMA_MSIZE_16,
DW_DMA_MSIZE_32,
DW_DMA_MSIZE_64,
DW_DMA_MSIZE_128,
DW_DMA_MSIZE_256,
};
/* Platform-configurable bits in CFG_HI */
#define DWC_CFGH_FCMODE (1 << 0)
#define DWC_CFGH_FIFO_MODE (1 << 1)
#define DWC_CFGH_PROTCTL(x) ((x) << 2)
#define DWC_CFGH_SRC_PER(x) ((x) << 7)
#define DWC_CFGH_DST_PER(x) ((x) << 11)
/* Platform-configurable bits in CFG_LO */
#define DWC_CFGL_LOCK_CH_XFER (0 << 12) /* scope of LOCK_CH */
#define DWC_CFGL_LOCK_CH_BLOCK (1 << 12)
#define DWC_CFGL_LOCK_CH_XACT (2 << 12)
#define DWC_CFGL_LOCK_BUS_XFER (0 << 14) /* scope of LOCK_BUS */
#define DWC_CFGL_LOCK_BUS_BLOCK (1 << 14)
#define DWC_CFGL_LOCK_BUS_XACT (2 << 14)
#define DWC_CFGL_LOCK_CH (1 << 15) /* channel lockout */
#define DWC_CFGL_LOCK_BUS (1 << 16) /* busmaster lockout */
#define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */
#define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */
/* DMA API extensions */
struct dw_cyclic_desc {
struct dw_desc **desc;
unsigned long periods;
void (*period_callback)(void *param);
void *period_callback_param;
};
struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
dma_addr_t buf_addr, size_t buf_len, size_t period_len,
enum dma_transfer_direction direction);
void dw_dma_cyclic_free(struct dma_chan *chan);
int dw_dma_cyclic_start(struct dma_chan *chan);
void dw_dma_cyclic_stop(struct dma_chan *chan);
dma_addr_t dw_dma_get_src_addr(struct dma_chan *chan);
dma_addr_t dw_dma_get_dst_addr(struct dma_chan *chan);
#endif /* DW_DMAC_H */

include/linux/fsldma.h (new file, 13 lines)
View File

@ -0,0 +1,13 @@
/*
* This is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef FSL_DMA_H
#define FSL_DMA_H
/* fsl dma API for external start */
int fsl_dma_external_start(struct dma_chan *dchan, int enable);
#endif
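
With the FSLDMA_EXTERNAL_START control command gone, carma-fpga (converted earlier in this diff) reaches the Freescale-specific external-start mode through this export instead. A sketch of the call pattern, mirroring the fpga_program_dma() hunk above; the caller must already know the channel belongs to the fsldma driver:

	/* arm the fsldma external start gate before submitting */
	ret = fsl_dma_external_start(chan, 1);
	if (ret) {
		dev_err(dev, "DMA external control setup failed\n");
		goto out_dma_unmap;
	}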

View File

@ -0,0 +1,59 @@
/*
* Driver for the Synopsys DesignWare DMA Controller
*
* Copyright (C) 2007 Atmel Corporation
* Copyright (C) 2010-2011 ST Microelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _PLATFORM_DATA_DMA_DW_H
#define _PLATFORM_DATA_DMA_DW_H
#include <linux/device.h>
/**
* struct dw_dma_slave - Controller-specific information about a slave
*
* @dma_dev: required DMA master device. Deprecated.
* @src_id: src request line
* @dst_id: dst request line
* @src_master: src master for transfers on allocated channel.
* @dst_master: dest master for transfers on allocated channel.
*/
struct dw_dma_slave {
struct device *dma_dev;
u8 src_id;
u8 dst_id;
u8 src_master;
u8 dst_master;
};
/**
* struct dw_dma_platform_data - Controller configuration parameters
* @nr_channels: Number of channels supported by hardware (max 8)
* @is_private: The device channels should be marked as private and not for
* use by the general purpose DMA channel allocator.
* @chan_allocation_order: Allocate channels starting from 0 or 7
* @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.
* @block_size: Maximum block size supported by the controller
* @nr_masters: Number of AHB masters supported by the controller
* @data_width: Maximum data width supported by hardware per AHB master
* (0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
*/
struct dw_dma_platform_data {
unsigned int nr_channels;
bool is_private;
#define CHAN_ALLOCATION_ASCENDING 0 /* zero to seven */
#define CHAN_ALLOCATION_DESCENDING 1 /* seven to zero */
unsigned char chan_allocation_order;
#define CHAN_PRIORITY_ASCENDING 0 /* chan0 highest */
#define CHAN_PRIORITY_DESCENDING 1 /* chan7 highest */
unsigned char chan_priority;
unsigned short block_size;
unsigned char nr_masters;
unsigned char data_width[4];
};
#endif /* _PLATFORM_DATA_DMA_DW_H */
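
A hedged sketch of a board or glue driver filling in this slave descriptor; foo_tx_dma and the request-line numbers are hypothetical, real values come from the SoC documentation, and .dma_dev is assigned at runtime once the controller device is known:

	static struct dw_dma_slave foo_tx_dma = {
		/* hypothetical handshake interface numbers */
		.src_id		= 0,
		.dst_id		= 4,
		/* memory on master 0, peripheral on master 1 */
		.src_master	= 0,
		.dst_master	= 1,
	};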

View File

@ -23,6 +23,8 @@
#define PXA2XX_CS_ASSERT (0x01)
#define PXA2XX_CS_DEASSERT (0x02)
struct dma_chan;
/* device.platform_data for SSP controller devices */
struct pxa2xx_spi_master {
u32 clock_enable;
@ -30,10 +32,9 @@ struct pxa2xx_spi_master {
u8 enable_dma;
/* DMA engine specific config */
int rx_chan_id;
int tx_chan_id;
int rx_slave_id;
int tx_slave_id;
bool (*dma_filter)(struct dma_chan *chan, void *param);
void *tx_param;
void *rx_param;
/* For non-PXA arches */
struct ssp_device ssp;

View File

@ -10,7 +10,7 @@
#ifndef __INCLUDE_SOUND_ATMEL_ABDAC_H
#define __INCLUDE_SOUND_ATMEL_ABDAC_H
#include <linux/dw_dmac.h>
#include <linux/platform_data/dma-dw.h>
/**
* struct atmel_abdac_pdata - board specific ABDAC configuration

View File

@ -10,7 +10,7 @@
#ifndef __INCLUDE_SOUND_ATMEL_AC97C_H
#define __INCLUDE_SOUND_ATMEL_AC97C_H
#include <linux/dw_dmac.h>
#include <linux/platform_data/dma-dw.h>
#define AC97C_CAPTURE 0x01
#define AC97C_PLAYBACK 0x02

View File

@ -9,7 +9,6 @@
*/
#include <linux/clk.h>
#include <linux/bitmap.h>
#include <linux/dw_dmac.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/init.h>
@ -25,6 +24,9 @@
#include <sound/pcm_params.h>
#include <sound/atmel-abdac.h>
#include <linux/platform_data/dma-dw.h>
#include <linux/dma/dw.h>
/* DAC register offsets */
#define DAC_DATA 0x0000
#define DAC_CTRL 0x0008

View File

@ -31,7 +31,8 @@
#include <sound/atmel-ac97c.h>
#include <sound/memalloc.h>
#include <linux/dw_dmac.h>
#include <linux/platform_data/dma-dw.h>
#include <linux/dma/dw.h>
#include <mach/cpu.h>

View File

@ -34,7 +34,8 @@ struct mmp_dma_data {
SNDRV_PCM_INFO_MMAP_VALID | \
SNDRV_PCM_INFO_INTERLEAVED | \
SNDRV_PCM_INFO_PAUSE | \
SNDRV_PCM_INFO_RESUME)
SNDRV_PCM_INFO_RESUME | \
SNDRV_PCM_INFO_NO_PERIOD_WAKEUP)
static struct snd_pcm_hardware mmp_pcm_hardware[] = {
{