
Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

Pull slave-dma updates from Vinod Koul:
 "Some notable changes are:
   - new driver for AMBA AXI NBPF by Guennadi
   - new driver for sun6i controller by Maxime
   - pl330 driver fixes from Lars
   - sh-dma updates and fixes from Laurent, Geert and Kuninori
   - Documentation updates from Geert
   - driver fixes and updates spread over dw, edma, freescale, mpc512x,
     etc."

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (72 commits)
  dmaengine: sun6i: depends on RESET_CONTROLLER
  dma: at_hdmac: fix invalid remaining bytes detection
  dmaengine: nbpfaxi: don't build this driver where it cannot be used
  dmaengine: nbpf_error_get_channel() can be static
  dma: pl08x: Use correct specifier for size_t values
  dmaengine: Remove the context argument to the prep_dma_cyclic operation
  dmaengine: nbpfaxi: convert to tasklet
  dmaengine: nbpfaxi: fix a theoretical race
  dmaengine: add a driver for AMBA AXI NBPF DMAC IP cores
  dmaengine: add device tree binding documentation for the nbpfaxi driver
  dmaengine: edma: Do not register second device when booted with DT
  dmaengine: edma: Do not change the error code returned from edma_alloc_slot
  dmaengine: rcar-dmac: Add device tree bindings documentation
  dmaengine: shdma: Allocate cyclic sg list dynamically
  dmaengine: shdma: Make channel filter ignore unrelated devices
  dmaengine: sh: Rework Kconfig and Makefile
  dmaengine: sun6i: Fix memory leaks
  dmaengine: sun6i: Free the interrupt before killing the tasklet
  dmaengine: sun6i: Remove switch statement from buswidth convertion routine
  dmaengine: of: kconfig: select DMA_ENGINE when DMA_OF is selected
  ...
Linus Torvalds 2014-08-11 07:14:01 -07:00
commit c7a19c795b
59 changed files with 4031 additions and 984 deletions


@ -47,6 +47,7 @@ The full ID of peripheral types can be found below.
20 ASRC
21 ESAI
22 SSI Dual FIFO (needs firmware ver >= 2)
23 Shared ASRC
The third cell specifies the transfer priority as below.


@ -0,0 +1,29 @@
* Freescale MPC512x and MPC8308 DMA Controller
The DMA controller in Freescale MPC512x and MPC8308 SoCs can move
blocks of memory contents between memory and peripherals or
from memory to memory.
Refer to "Generic DMA Controller and DMA request bindings" in
the dma/dma.txt file for a more detailed description of the binding.
Required properties:
- compatible: should be "fsl,mpc5121-dma" or "fsl,mpc8308-dma";
- reg: should contain the DMA controller registers location and length;
- interrupts: interrupt specifier for the DMA controller; the syntax of an
   interrupt client node is described in the interrupt-controller/interrupts.txt file.
- #dma-cells: the length of the DMA specifier, must be <1>.
Each channel of this DMA controller has a peripheral request line; the
assignment is fixed in hardware. The one cell in the dmas property of a
client device represents the channel number.
Example:
	dma0: dma@14000 {
		compatible = "fsl,mpc5121-dma";
		reg = <0x14000 0x1800>;
		interrupts = <65 0x8>;
		#dma-cells = <1>;
	};
DMA clients must use the format described in dma/dma.txt file.


@ -0,0 +1,61 @@
* Renesas "Type-AXI" NBPFAXI* DMA controllers
* DMA controller
Required properties:
- compatible: must be one of
"renesas,nbpfaxi64dmac1b4"
"renesas,nbpfaxi64dmac1b8"
"renesas,nbpfaxi64dmac1b16"
"renesas,nbpfaxi64dmac4b4"
"renesas,nbpfaxi64dmac4b8"
"renesas,nbpfaxi64dmac4b16"
"renesas,nbpfaxi64dmac8b4"
"renesas,nbpfaxi64dmac8b8"
"renesas,nbpfaxi64dmac8b16"
- #dma-cells: must be 2: the first integer is the terminal number to which
  this slave is connected, the second is a bitmask of flags with the
  following bits defined:
#define NBPF_SLAVE_RQ_HIGH 1
#define NBPF_SLAVE_RQ_LOW 2
#define NBPF_SLAVE_RQ_LEVEL 4
Optional properties:
You can use dma-channels and dma-requests as described in dma.txt, although
they won't be used; this information is derived from the compatible string.
Example:
	dma: dma-controller@48000000 {
		compatible = "renesas,nbpfaxi64dmac8b4";
		reg = <0x48000000 0x400>;
		interrupts = <0 12 0x4
			      0 13 0x4
			      0 14 0x4
			      0 15 0x4
			      0 16 0x4
			      0 17 0x4
			      0 18 0x4
			      0 19 0x4>;
		#dma-cells = <2>;
		dma-channels = <8>;
		dma-requests = <8>;
	};
* DMA client
Required properties:
dmas and dma-names are required, as described in dma.txt.
Example:
#include <dt-bindings/dma/nbpfaxi.h>
...
	dmas = <&dma 0 (NBPF_SLAVE_RQ_HIGH | NBPF_SLAVE_RQ_LEVEL)
		&dma 1 (NBPF_SLAVE_RQ_HIGH | NBPF_SLAVE_RQ_LEVEL)>;
	dma-names = "rx", "tx";


@ -0,0 +1,29 @@
* R-Car Audio DMAC peri peri Device Tree bindings
Required properties:
- compatible: should be "renesas,rcar-audmapp"
- #dma-cells: should be <1>, see "dmas" property below
Example:
	audmapp: audio-dma-pp@0xec740000 {
		compatible = "renesas,rcar-audmapp";
		#dma-cells = <1>;
		reg = <0 0xec740000 0 0x200>;
	};
* DMA client
Required properties:
- dmas: a list of <[DMA multiplexer phandle] [SRS/DRS value]> pairs,
where SRS/DRS values are fixed handles, specified in the SoC
manual as the value that would be written into the PDMACHCR.
- dma-names: a list of DMA channel names, one per "dmas" entry
Example:
	dmas = <&audmapp 0x2d00
		&audmapp 0x3700>;
	dma-names = "src0_ssiu0",
		    "dvc0_ssiu0";


@ -0,0 +1,98 @@
* Renesas R-Car DMA Controller Device Tree bindings
Renesas R-Car Generation 2 SoCs have multiple multi-channel DMA
controller instances named DMAC capable of serving multiple clients. Channels
can be dedicated to specific clients or shared between a large number of
clients.
Each DMA client is connected to one dedicated port of the DMAC, identified by
an 8-bit port number called the MID/RID. A DMA controller can thus serve up to
256 clients in total. When the number of hardware channels is lower than the
number of clients to be served, channels must be shared between multiple DMA
clients. The association of DMA clients to DMAC channels is fully dynamic and
not described in these device tree bindings.
Required Properties:
- compatible: must contain "renesas,rcar-dmac"
- reg: base address and length of the registers block for the DMAC
- interrupts: interrupt specifiers for the DMAC, one for each entry in
interrupt-names.
- interrupt-names: one entry per channel, named "ch%u", where %u is the
channel number ranging from zero to the number of channels minus one.
- clocks: a list of phandle + clock-specifier pairs, one for each entry
in clock-names.
- clock-names: must contain "fck" for the functional clock.
- #dma-cells: must be <1>, the cell specifies the MID/RID of the DMAC port
connected to the DMA client
- dma-channels: number of DMA channels
Example: R8A7790 (R-Car H2) SYS-DMACs
	dmac0: dma-controller@e6700000 {
		compatible = "renesas,rcar-dmac";
		reg = <0 0xe6700000 0 0x20000>;
		interrupts = <0 197 IRQ_TYPE_LEVEL_HIGH
			      0 200 IRQ_TYPE_LEVEL_HIGH
			      0 201 IRQ_TYPE_LEVEL_HIGH
			      0 202 IRQ_TYPE_LEVEL_HIGH
			      0 203 IRQ_TYPE_LEVEL_HIGH
			      0 204 IRQ_TYPE_LEVEL_HIGH
			      0 205 IRQ_TYPE_LEVEL_HIGH
			      0 206 IRQ_TYPE_LEVEL_HIGH
			      0 207 IRQ_TYPE_LEVEL_HIGH
			      0 208 IRQ_TYPE_LEVEL_HIGH
			      0 209 IRQ_TYPE_LEVEL_HIGH
			      0 210 IRQ_TYPE_LEVEL_HIGH
			      0 211 IRQ_TYPE_LEVEL_HIGH
			      0 212 IRQ_TYPE_LEVEL_HIGH
			      0 213 IRQ_TYPE_LEVEL_HIGH
			      0 214 IRQ_TYPE_LEVEL_HIGH>;
		interrupt-names = "error",
				  "ch0", "ch1", "ch2", "ch3",
				  "ch4", "ch5", "ch6", "ch7",
				  "ch8", "ch9", "ch10", "ch11",
				  "ch12", "ch13", "ch14";
		clocks = <&mstp2_clks R8A7790_CLK_SYS_DMAC0>;
		clock-names = "fck";
		#dma-cells = <1>;
		dma-channels = <15>;
	};

	dmac1: dma-controller@e6720000 {
		compatible = "renesas,rcar-dmac";
		reg = <0 0xe6720000 0 0x20000>;
		interrupts = <0 220 IRQ_TYPE_LEVEL_HIGH
			      0 216 IRQ_TYPE_LEVEL_HIGH
			      0 217 IRQ_TYPE_LEVEL_HIGH
			      0 218 IRQ_TYPE_LEVEL_HIGH
			      0 219 IRQ_TYPE_LEVEL_HIGH
			      0 308 IRQ_TYPE_LEVEL_HIGH
			      0 309 IRQ_TYPE_LEVEL_HIGH
			      0 310 IRQ_TYPE_LEVEL_HIGH
			      0 311 IRQ_TYPE_LEVEL_HIGH
			      0 312 IRQ_TYPE_LEVEL_HIGH
			      0 313 IRQ_TYPE_LEVEL_HIGH
			      0 314 IRQ_TYPE_LEVEL_HIGH
			      0 315 IRQ_TYPE_LEVEL_HIGH
			      0 316 IRQ_TYPE_LEVEL_HIGH
			      0 317 IRQ_TYPE_LEVEL_HIGH
			      0 318 IRQ_TYPE_LEVEL_HIGH>;
		interrupt-names = "error",
				  "ch0", "ch1", "ch2", "ch3",
				  "ch4", "ch5", "ch6", "ch7",
				  "ch8", "ch9", "ch10", "ch11",
				  "ch12", "ch13", "ch14";
		clocks = <&mstp2_clks R8A7790_CLK_SYS_DMAC1>;
		clock-names = "fck";
		#dma-cells = <1>;
		dma-channels = <15>;
	};


@ -35,9 +35,11 @@ Required properties:
Each dmas request consists of 4 cells:
1. A phandle pointing to the DMA controller
2. Device Type
2. Device signal number, the signal line for single and burst requests
connected from the device to the DMA40 engine
3. The DMA request line number (only when 'use fixed channel' is set)
4. A 32bit mask specifying; mode, direction and endianness [NB: This list will grow]
4. A 32bit mask specifying; mode, direction and endianness
[NB: This list will grow]
0x00000001: Mode:
Logical channel when unset
Physical channel when set
@ -54,6 +56,74 @@ Each dmas request consists of 4 cells:
Normal priority when unset
High priority when set
Existing signal numbers for the DB8500 ASIC. Unless specified, the signals are
bidirectional, i.e. the same for RX and TX operations:
0: SPI controller 0
1: SD/MMC controller 0 (unused)
2: SD/MMC controller 1 (unused)
3: SD/MMC controller 2 (unused)
4: I2C port 1
5: I2C port 3
6: I2C port 2
7: I2C port 4
8: Synchronous Serial Port SSP0
9: Synchronous Serial Port SSP1
10: Multi-Channel Display Engine MCDE RX
11: UART port 2
12: UART port 1
13: UART port 0
14: Multirate Serial Port MSP2
15: I2C port 0
16: USB OTG in/out endpoints 7 & 15
17: USB OTG in/out endpoints 6 & 14
18: USB OTG in/out endpoints 5 & 13
19: USB OTG in/out endpoints 4 & 12
20: SLIMbus or HSI channel 0
21: SLIMbus or HSI channel 1
22: SLIMbus or HSI channel 2
23: SLIMbus or HSI channel 3
24: Multimedia DSP SXA0
25: Multimedia DSP SXA1
26: Multimedia DSP SXA2
27: Multimedia DSP SXA3
28: SD/MM controller 2
29: SD/MM controller 0
30: MSP port 1 on DB8500 v1, MSP port 3 on DB8500 v2
31: MSP port 0 or SLIMbus channel 0
32: SD/MM controller 1
33: SPI controller 2
34: i2c3 RX2 TX2
35: SPI controller 1
36: USB OTG in/out endpoints 3 & 11
37: USB OTG in/out endpoints 2 & 10
38: USB OTG in/out endpoints 1 & 9
39: USB OTG in/out endpoints 8
40: SPI controller 3
41: SD/MM controller 3
42: SD/MM controller 4
43: SD/MM controller 5
44: Multimedia DSP SXA4
45: Multimedia DSP SXA5
46: SLIMbus channel 8 or Multimedia DSP SXA6
47: SLIMbus channel 9 or Multimedia DSP SXA7
48: Crypto Accelerator 1
49: Crypto Accelerator 1 TX or Hash Accelerator 1 TX
50: Hash Accelerator 1 TX
51: memcpy TX (to be used by the DMA driver for memcpy operations)
52: SLIMbus or HSI channel 4
53: SLIMbus or HSI channel 5
54: SLIMbus or HSI channel 6
55: SLIMbus or HSI channel 7
56: memcpy (to be used by the DMA driver for memcpy operations)
57: memcpy (to be used by the DMA driver for memcpy operations)
58: memcpy (to be used by the DMA driver for memcpy operations)
59: memcpy (to be used by the DMA driver for memcpy operations)
60: memcpy (to be used by the DMA driver for memcpy operations)
61: Crypto Accelerator 0
62: Crypto Accelerator 0 TX or Hash Accelerator 0 TX
63: Hash Accelerator 0 TX
Example:
uart@80120000 {


@ -0,0 +1,45 @@
Allwinner A31 DMA Controller
This driver follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Must be "allwinner,sun6i-a31-dma"
- reg: Should contain the registers base address and length
- interrupts: Should contain a reference to the interrupt used by this device
- clocks: Should contain a reference to the parent AHB clock
- resets: Should contain a reference to the reset controller asserting
this device in reset
- #dma-cells : Should be 1, a single cell holding a line request number
Example:
	dma: dma-controller@01c02000 {
		compatible = "allwinner,sun6i-a31-dma";
		reg = <0x01c02000 0x1000>;
		interrupts = <0 50 4>;
		clocks = <&ahb1_gates 6>;
		resets = <&ahb1_rst 6>;
		#dma-cells = <1>;
	};
Clients:
DMA clients connected to the A31 DMA controller must use the format
described in the dma.txt file, using a two-cell specifier for each
channel: a phandle plus one integer cell.
The two cells in order are:
1. A phandle pointing to the DMA controller.
2. The port ID as specified in the datasheet
Example:
	spi2: spi@01c6a000 {
		compatible = "allwinner,sun6i-a31-spi";
		reg = <0x01c6a000 0x1000>;
		interrupts = <0 67 4>;
		clocks = <&ahb1_gates 22>, <&spi2_clk>;
		clock-names = "ahb", "mod";
		dmas = <&dma 25>, <&dma 25>;
		dma-names = "rx", "tx";
		resets = <&ahb1_rst 22>;
	};


@ -84,31 +84,32 @@ The slave DMA usage consists of following steps:
the given transaction.
Interface:
struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_data_direction direction,
unsigned long flags);
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_data_direction direction);
struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
struct dma_chan *chan, struct dma_interleaved_template *xt,
unsigned long flags);
The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device. So,
normal setup should look like this:
The scatterlist must be mapped using the DMA struct device.
If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
called using the DMA struct device, too.
So, normal setup should look like this:
nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len);
if (nr_sg == 0)
/* error */
desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
direction, flags);
desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);
Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted. Some DMA engine
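A minimal sketch of that submission step (the completion handler and its
argument are placeholders, not part of the API):

	desc->callback = xfer_done;	/* hypothetical completion handler */
	desc->callback_param = my_dev;	/* handed back to the handler */

	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);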
@ -188,7 +189,7 @@ Further APIs:
description of this API.
This can be used in conjunction with dma_async_is_complete() and
the cookie returned from 'descriptor->submit()' to check for
the cookie returned from dmaengine_submit() to check for
completion of a specific DMA transaction.
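For example (a sketch; 'cookie' is the value returned by dmaengine_submit(),
and the handler name is hypothetical):

	dma_cookie_t last, used;

	/* fills 'last'/'used', which can also be fed to dma_async_is_complete() */
	if (dma_async_is_tx_complete(chan, cookie, &last, &used) == DMA_COMPLETE)
		handle_completion();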
Note:


@ -1414,6 +1414,34 @@ void edma_clear_event(unsigned channel)
}
EXPORT_SYMBOL(edma_clear_event);
/*
* edma_assign_channel_eventq - move given channel to desired eventq
* Arguments:
* channel - channel number
* eventq_no - queue to move the channel
*
* Can be used to move a channel to a selected event queue.
*/
void edma_assign_channel_eventq(unsigned channel, enum dma_event_q eventq_no)
{
unsigned ctlr;
ctlr = EDMA_CTLR(channel);
channel = EDMA_CHAN_SLOT(channel);
if (channel >= edma_cc[ctlr]->num_channels)
return;
/* default to low priority queue */
if (eventq_no == EVENTQ_DEFAULT)
eventq_no = edma_cc[ctlr]->default_queue;
if (eventq_no >= edma_cc[ctlr]->num_tc)
return;
map_dmach_queue(ctlr, channel, eventq_no);
}
EXPORT_SYMBOL(edma_assign_channel_eventq);
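/*
 * Example use (controller/channel numbers are purely illustrative):
 * route channel 12 on controller 0 onto event queue 1.
 *
 *	edma_assign_channel_eventq(EDMA_CTLR_CHAN(0, 12), EVENTQ_1);
 */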
static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
struct edma *edma_cc)
{
@ -1470,7 +1498,8 @@ static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
queue_priority_map[i][1] = -1;
pdata->queue_priority_mapping = queue_priority_map;
pdata->default_queue = 0;
/* Default queue has the lowest priority */
pdata->default_queue = i - 1;
return 0;
}


@ -498,6 +498,7 @@
compatible = "fsl,mpc5121-dma";
reg = <0x14000 0x1800>;
interrupts = <65 0x8>;
#dma-cells = <1>;
};
};


@ -25,7 +25,7 @@
* Define the default configuration for dual address memory-memory transfer.
* The 0x400 value represents auto-request, external->external.
*/
#define RS_DUAL (DM_INC | SM_INC | 0x400 | TS_INDEX2VAL(XMIT_SZ_32BIT))
#define RS_DUAL (DM_INC | SM_INC | RS_AUTO | TS_INDEX2VAL(XMIT_SZ_32BIT))
static unsigned long dma_find_base(unsigned int chan)
{


@ -13,17 +13,17 @@
#ifndef DMA_REGISTER_H
#define DMA_REGISTER_H
/* DMA register */
#define SAR 0x00
#define DAR 0x04
#define TCR 0x08
#define CHCR 0x0C
#define DMAOR 0x40
/* DMA registers */
#define SAR 0x00 /* Source Address Register */
#define DAR 0x04 /* Destination Address Register */
#define TCR 0x08 /* Transfer Count Register */
#define CHCR 0x0C /* Channel Control Register */
#define DMAOR 0x40 /* DMA Operation Register */
/* DMAOR definitions */
#define DMAOR_AE 0x00000004
#define DMAOR_AE 0x00000004 /* Address Error Flag */
#define DMAOR_NMIF 0x00000002
#define DMAOR_DME 0x00000001
#define DMAOR_DME 0x00000001 /* DMA Master Enable */
/* Definitions for the SuperH DMAC */
#define REQ_L 0x00000000
@ -34,18 +34,20 @@
#define ACK_W 0x00020000
#define ACK_H 0x00000000
#define ACK_L 0x00010000
#define DM_INC 0x00004000
#define DM_DEC 0x00008000
#define DM_FIX 0x0000c000
#define SM_INC 0x00001000
#define SM_DEC 0x00002000
#define SM_FIX 0x00003000
#define DM_INC 0x00004000 /* Destination addresses are incremented */
#define DM_DEC 0x00008000 /* Destination addresses are decremented */
#define DM_FIX 0x0000c000 /* Destination address is fixed */
#define SM_INC 0x00001000 /* Source addresses are incremented */
#define SM_DEC 0x00002000 /* Source addresses are decremented */
#define SM_FIX 0x00003000 /* Source address is fixed */
#define RS_IN 0x00000200
#define RS_OUT 0x00000300
#define RS_AUTO 0x00000400 /* Auto Request */
#define RS_ERS 0x00000800 /* DMA extended resource selector */
#define TS_BLK 0x00000040
#define TM_BUR 0x00000020
#define CHCR_DE 0x00000001
#define CHCR_TE 0x00000002
#define CHCR_IE 0x00000004
#define CHCR_DE 0x00000001 /* DMA Enable */
#define CHCR_TE 0x00000002 /* Transfer End Flag */
#define CHCR_IE 0x00000004 /* Interrupt Enable */
#endif
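
With the named bits above, a slave CHCR value can be composed without magic
numbers; for example, the "DM_FIX | SM_INC | 0x800 | ..." pattern used by the
slave configurations below becomes:

	u32 chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT);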


@ -30,62 +30,62 @@ static const struct sh_dmae_slave_config sh7722_dmae_slaves[] = {
{
.slave_id = SHDMA_SLAVE_SCIF0_TX,
.addr = 0xffe0000c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x21,
}, {
.slave_id = SHDMA_SLAVE_SCIF0_RX,
.addr = 0xffe00014,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x22,
}, {
.slave_id = SHDMA_SLAVE_SCIF1_TX,
.addr = 0xffe1000c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x25,
}, {
.slave_id = SHDMA_SLAVE_SCIF1_RX,
.addr = 0xffe10014,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x26,
}, {
.slave_id = SHDMA_SLAVE_SCIF2_TX,
.addr = 0xffe2000c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x29,
}, {
.slave_id = SHDMA_SLAVE_SCIF2_RX,
.addr = 0xffe20014,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2a,
}, {
.slave_id = SHDMA_SLAVE_SIUA_TX,
.addr = 0xa454c098,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xb1,
}, {
.slave_id = SHDMA_SLAVE_SIUA_RX,
.addr = 0xa454c090,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xb2,
}, {
.slave_id = SHDMA_SLAVE_SIUB_TX,
.addr = 0xa454c09c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xb5,
}, {
.slave_id = SHDMA_SLAVE_SIUB_RX,
.addr = 0xa454c094,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xb6,
}, {
.slave_id = SHDMA_SLAVE_SDHI0_TX,
.addr = 0x04ce0030,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_16BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc1,
}, {
.slave_id = SHDMA_SLAVE_SDHI0_RX,
.addr = 0x04ce0030,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_16BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc2,
},
};


@ -36,122 +36,122 @@ static const struct sh_dmae_slave_config sh7724_dmae_slaves[] = {
{
.slave_id = SHDMA_SLAVE_SCIF0_TX,
.addr = 0xffe0000c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x21,
}, {
.slave_id = SHDMA_SLAVE_SCIF0_RX,
.addr = 0xffe00014,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x22,
}, {
.slave_id = SHDMA_SLAVE_SCIF1_TX,
.addr = 0xffe1000c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x25,
}, {
.slave_id = SHDMA_SLAVE_SCIF1_RX,
.addr = 0xffe10014,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x26,
}, {
.slave_id = SHDMA_SLAVE_SCIF2_TX,
.addr = 0xffe2000c,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x29,
}, {
.slave_id = SHDMA_SLAVE_SCIF2_RX,
.addr = 0xffe20014,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2a,
}, {
.slave_id = SHDMA_SLAVE_SCIF3_TX,
.addr = 0xa4e30020,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2d,
}, {
.slave_id = SHDMA_SLAVE_SCIF3_RX,
.addr = 0xa4e30024,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2e,
}, {
.slave_id = SHDMA_SLAVE_SCIF4_TX,
.addr = 0xa4e40020,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x31,
}, {
.slave_id = SHDMA_SLAVE_SCIF4_RX,
.addr = 0xa4e40024,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x32,
}, {
.slave_id = SHDMA_SLAVE_SCIF5_TX,
.addr = 0xa4e50020,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x35,
}, {
.slave_id = SHDMA_SLAVE_SCIF5_RX,
.addr = 0xa4e50024,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_8BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x36,
}, {
.slave_id = SHDMA_SLAVE_USB0D0_TX,
.addr = 0xA4D80100,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0x73,
}, {
.slave_id = SHDMA_SLAVE_USB0D0_RX,
.addr = 0xA4D80100,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0x73,
}, {
.slave_id = SHDMA_SLAVE_USB0D1_TX,
.addr = 0xA4D80120,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0x77,
}, {
.slave_id = SHDMA_SLAVE_USB0D1_RX,
.addr = 0xA4D80120,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0x77,
}, {
.slave_id = SHDMA_SLAVE_USB1D0_TX,
.addr = 0xA4D90100,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xab,
}, {
.slave_id = SHDMA_SLAVE_USB1D0_RX,
.addr = 0xA4D90100,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xab,
}, {
.slave_id = SHDMA_SLAVE_USB1D1_TX,
.addr = 0xA4D90120,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xaf,
}, {
.slave_id = SHDMA_SLAVE_USB1D1_RX,
.addr = 0xA4D90120,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_32BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xaf,
}, {
.slave_id = SHDMA_SLAVE_SDHI0_TX,
.addr = 0x04ce0030,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_16BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc1,
}, {
.slave_id = SHDMA_SLAVE_SDHI0_RX,
.addr = 0x04ce0030,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_16BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc2,
}, {
.slave_id = SHDMA_SLAVE_SDHI1_TX,
.addr = 0x04cf0030,
.chcr = DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL(XMIT_SZ_16BIT),
.chcr = DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc9,
}, {
.slave_id = SHDMA_SLAVE_SDHI1_RX,
.addr = 0x04cf0030,
.chcr = DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL(XMIT_SZ_16BIT),
.chcr = DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xca,
},
};


@ -123,28 +123,28 @@ static const struct sh_dmae_slave_config sh7757_dmae0_slaves[] = {
{
.slave_id = SHDMA_SLAVE_SDHI_TX,
.addr = 0x1fe50030,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc5,
},
{
.slave_id = SHDMA_SLAVE_SDHI_RX,
.addr = 0x1fe50030,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc6,
},
{
.slave_id = SHDMA_SLAVE_MMCIF_TX,
.addr = 0x1fcb0034,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xd3,
},
{
.slave_id = SHDMA_SLAVE_MMCIF_RX,
.addr = 0x1fcb0034,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_32BIT),
.mid_rid = 0xd7,
},
@ -154,56 +154,56 @@ static const struct sh_dmae_slave_config sh7757_dmae1_slaves[] = {
{
.slave_id = SHDMA_SLAVE_SCIF2_TX,
.addr = 0x1f4b000c,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x21,
},
{
.slave_id = SHDMA_SLAVE_SCIF2_RX,
.addr = 0x1f4b0014,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x22,
},
{
.slave_id = SHDMA_SLAVE_SCIF3_TX,
.addr = 0x1f4c000c,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x29,
},
{
.slave_id = SHDMA_SLAVE_SCIF3_RX,
.addr = 0x1f4c0014,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2a,
},
{
.slave_id = SHDMA_SLAVE_SCIF4_TX,
.addr = 0x1f4d000c,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x41,
},
{
.slave_id = SHDMA_SLAVE_SCIF4_RX,
.addr = 0x1f4d0014,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x42,
},
{
.slave_id = SHDMA_SLAVE_RSPI_TX,
.addr = 0xfe480004,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc1,
},
{
.slave_id = SHDMA_SLAVE_RSPI_RX,
.addr = 0xfe480004,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_16BIT),
.mid_rid = 0xc2,
},
@ -213,70 +213,70 @@ static const struct sh_dmae_slave_config sh7757_dmae2_slaves[] = {
{
.slave_id = SHDMA_SLAVE_RIIC0_TX,
.addr = 0x1e500012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x21,
},
{
.slave_id = SHDMA_SLAVE_RIIC0_RX,
.addr = 0x1e500013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x22,
},
{
.slave_id = SHDMA_SLAVE_RIIC1_TX,
.addr = 0x1e510012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x29,
},
{
.slave_id = SHDMA_SLAVE_RIIC1_RX,
.addr = 0x1e510013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2a,
},
{
.slave_id = SHDMA_SLAVE_RIIC2_TX,
.addr = 0x1e520012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0xa1,
},
{
.slave_id = SHDMA_SLAVE_RIIC2_RX,
.addr = 0x1e520013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0xa2,
},
{
.slave_id = SHDMA_SLAVE_RIIC3_TX,
.addr = 0x1e530012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0xa9,
},
{
.slave_id = SHDMA_SLAVE_RIIC3_RX,
.addr = 0x1e530013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0xaf,
},
{
.slave_id = SHDMA_SLAVE_RIIC4_TX,
.addr = 0x1e540012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0xc5,
},
{
.slave_id = SHDMA_SLAVE_RIIC4_RX,
.addr = 0x1e540013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0xc6,
},
@ -286,70 +286,70 @@ static const struct sh_dmae_slave_config sh7757_dmae3_slaves[] = {
{
.slave_id = SHDMA_SLAVE_RIIC5_TX,
.addr = 0x1e550012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x21,
},
{
.slave_id = SHDMA_SLAVE_RIIC5_RX,
.addr = 0x1e550013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x22,
},
{
.slave_id = SHDMA_SLAVE_RIIC6_TX,
.addr = 0x1e560012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x29,
},
{
.slave_id = SHDMA_SLAVE_RIIC6_RX,
.addr = 0x1e560013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x2a,
},
{
.slave_id = SHDMA_SLAVE_RIIC7_TX,
.addr = 0x1e570012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x41,
},
{
.slave_id = SHDMA_SLAVE_RIIC7_RX,
.addr = 0x1e570013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x42,
},
{
.slave_id = SHDMA_SLAVE_RIIC8_TX,
.addr = 0x1e580012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x45,
},
{
.slave_id = SHDMA_SLAVE_RIIC8_RX,
.addr = 0x1e580013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x46,
},
{
.slave_id = SHDMA_SLAVE_RIIC9_TX,
.addr = 0x1e590012,
.chcr = SM_INC | 0x800 | 0x40000000 |
.chcr = SM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x51,
},
{
.slave_id = SHDMA_SLAVE_RIIC9_RX,
.addr = 0x1e590013,
.chcr = DM_INC | 0x800 | 0x40000000 |
.chcr = DM_INC | RS_ERS | 0x40000000 |
TS_INDEX2VAL(XMIT_SZ_8BIT),
.mid_rid = 0x52,
},


@ -393,6 +393,22 @@ config XILINX_VDMA
channels, Memory Mapped to Stream (MM2S) and Stream to
Memory Mapped (S2MM) for the data transfers.
config DMA_SUN6I
tristate "Allwinner A31 SoCs DMA support"
depends on MACH_SUN6I || COMPILE_TEST
depends on RESET_CONTROLLER
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Support for the DMA engine for Allwinner A31 SoCs.
config NBPFAXI_DMA
tristate "Renesas Type-AXI NBPF DMA support"
select DMA_ENGINE
depends on ARM || COMPILE_TEST
help
Support for "Type-AXI" NBPF DMA IPs from Renesas
config DMA_ENGINE
bool
@ -406,6 +422,7 @@ config DMA_ACPI
config DMA_OF
def_bool y
depends on OF
select DMA_ENGINE
comment "DMA Clients"
depends on DMA_ENGINE


@ -1,5 +1,5 @@
ccflags-$(CONFIG_DMADEVICES_DEBUG) := -DDEBUG
ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG
subdir-ccflags-$(CONFIG_DMADEVICES_DEBUG) := -DDEBUG
subdir-ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG
obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o
@ -48,3 +48,5 @@ obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-y += xilinx/
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o


@ -7,7 +7,6 @@ TODO for slave dma
- imx-dma
- imx-sdma
- mxs-dma.c
- dw_dmac
- intel_mid_dma
4. Check other subsystems for dma drivers and merge/move to dmaengine
5. Remove dma_slave_config's dma direction.


@ -1040,7 +1040,7 @@ static int pl08x_fill_llis_for_desc(struct pl08x_driver_data *pl08x,
if (early_bytes) {
dev_vdbg(&pl08x->adev->dev,
"%s byte width LLIs (remain 0x%08x)\n",
"%s byte width LLIs (remain 0x%08zx)\n",
__func__, bd.remainder);
prep_byte_width_lli(pl08x, &bd, &cctl, early_bytes,
num_llis++, &total_bytes);
@ -1653,7 +1653,7 @@ static struct dma_async_tx_descriptor *pl08x_prep_slave_sg(
static struct dma_async_tx_descriptor *pl08x_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
@ -1662,7 +1662,7 @@ static struct dma_async_tx_descriptor *pl08x_prep_dma_cyclic(
dma_addr_t slave_addr;
dev_dbg(&pl08x->adev->dev,
"%s prepare cyclic transaction of %d/%d bytes %s %s\n",
"%s prepare cyclic transaction of %zd/%zd bytes %s %s\n",
__func__, period_len, buf_len,
direction == DMA_MEM_TO_DEV ? "to" : "from",
plchan->name);
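
(For reference, size_t and ssize_t arguments take the 'z' length modifier in
kernel format strings, e.g.:)

	size_t len = 4096;

	pr_debug("buffer length %zu bytes (0x%08zx)\n", len, len);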


@ -294,14 +294,16 @@ static int atc_get_bytes_left(struct dma_chan *chan)
ret = -EINVAL;
goto out;
}
atchan->remain_desc -= (desc_cur->lli.ctrla & ATC_BTSIZE_MAX)
<< (desc_first->tx_width);
if (atchan->remain_desc < 0) {
count = (desc_cur->lli.ctrla & ATC_BTSIZE_MAX)
<< desc_first->tx_width;
if (atchan->remain_desc < count) {
ret = -EINVAL;
goto out;
} else {
ret = atchan->remain_desc;
}
atchan->remain_desc -= count;
ret = atchan->remain_desc;
} else {
/*
* Get residual bytes when current
@ -893,12 +895,11 @@ atc_dma_cyclic_fill_desc(struct dma_chan *chan, struct at_desc *desc,
* @period_len: number of bytes for each period
* @direction: transfer direction, to or from device
* @flags: tx descriptor status flags
* @context: transfer context (ignored)
*/
static struct dma_async_tx_descriptor *
atc_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma_slave *atslave = chan->private;


@ -335,7 +335,7 @@ static void bcm2835_dma_issue_pending(struct dma_chan *chan)
static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
enum dma_slave_buswidth dev_width;


@ -433,7 +433,7 @@ static struct dma_async_tx_descriptor *jz4740_dma_prep_slave_sg(
static struct dma_async_tx_descriptor *jz4740_dma_prep_dma_cyclic(
struct dma_chan *c, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c);
struct jz4740_dma_desc *desc;
@ -614,4 +614,4 @@ module_platform_driver(jz4740_dma_driver);
MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
MODULE_DESCRIPTION("JZ4740 DMA driver");
MODULE_LICENSE("GPLv2");
MODULE_LICENSE("GPL v2");


@ -279,6 +279,19 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
channel_set_bit(dw, CH_EN, dwc->mask);
}
static void dwc_dostart_first_queued(struct dw_dma_chan *dwc)
{
struct dw_desc *desc;
if (list_empty(&dwc->queue))
return;
list_move(dwc->queue.next, &dwc->active_list);
desc = dwc_first_active(dwc);
dev_vdbg(chan2dev(&dwc->chan), "%s: started %u\n", __func__, desc->txd.cookie);
dwc_dostart(dwc, desc);
}
/*----------------------------------------------------------------------*/
static void
@ -335,10 +348,7 @@ static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc)
* the completed ones.
*/
list_splice_init(&dwc->active_list, &list);
if (!list_empty(&dwc->queue)) {
list_move(dwc->queue.next, &dwc->active_list);
dwc_dostart(dwc, dwc_first_active(dwc));
}
dwc_dostart_first_queued(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
@ -467,10 +477,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
/* Try to continue after resetting the channel... */
dwc_chan_disable(dw, dwc);
if (!list_empty(&dwc->queue)) {
list_move(dwc->queue.next, &dwc->active_list);
dwc_dostart(dwc, dwc_first_active(dwc));
}
dwc_dostart_first_queued(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
}
@ -677,17 +684,9 @@ static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx)
* possible, perhaps even appending to those already submitted
* for DMA. But this is hard to do in a race-free manner.
*/
if (list_empty(&dwc->active_list)) {
dev_vdbg(chan2dev(tx->chan), "%s: started %u\n", __func__,
desc->txd.cookie);
list_add_tail(&desc->desc_node, &dwc->active_list);
dwc_dostart(dwc, dwc_first_active(dwc));
} else {
dev_vdbg(chan2dev(tx->chan), "%s: queued %u\n", __func__,
desc->txd.cookie);
list_add_tail(&desc->desc_node, &dwc->queue);
}
dev_vdbg(chan2dev(tx->chan), "%s: queued %u\n", __func__, desc->txd.cookie);
list_add_tail(&desc->desc_node, &dwc->queue);
spin_unlock_irqrestore(&dwc->lock, flags);
@ -1092,9 +1091,12 @@ dwc_tx_status(struct dma_chan *chan,
static void dwc_issue_pending(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
unsigned long flags;
if (!list_empty(&dwc->queue))
dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
spin_lock_irqsave(&dwc->lock, flags);
if (list_empty(&dwc->active_list))
dwc_dostart_first_queued(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
}
static int dwc_alloc_chan_resources(struct dma_chan *chan)


@ -23,6 +23,7 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/of.h>
#include <linux/platform_data/edma.h>
@ -256,8 +257,13 @@ static int edma_terminate_all(struct edma_chan *echan)
* echan->edesc is NULL and exit.)
*/
if (echan->edesc) {
int cyclic = echan->edesc->cyclic;
echan->edesc = NULL;
edma_stop(echan->ch_num);
/* Move the cyclic channel back to default queue */
if (cyclic)
edma_assign_channel_eventq(echan->ch_num,
EVENTQ_DEFAULT);
}
vchan_get_all_descriptors(&echan->vchan, &head);
@ -592,7 +598,7 @@ struct dma_async_tx_descriptor *edma_prep_dma_memcpy(
static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long tx_flags, void *context)
unsigned long tx_flags)
{
struct edma_chan *echan = to_edma_chan(chan);
struct device *dev = chan->device->dev;
@ -718,12 +724,15 @@ static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
edesc->absync = ret;
/*
* Enable interrupts for every period because callback
* has to be called for every period.
* Enable period interrupt only if it is requested
*/
edesc->pset[i].param.opt |= TCINTEN;
if (tx_flags & DMA_PREP_INTERRUPT)
edesc->pset[i].param.opt |= TCINTEN;
}
/* Place the cyclic channel to highest priority queue */
edma_assign_channel_eventq(echan->ch_num, EVENTQ_0);
return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
}
@ -993,7 +1002,7 @@ static int edma_dma_device_slave_caps(struct dma_chan *dchan,
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
caps->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
return 0;
}
@ -1040,7 +1049,7 @@ static int edma_probe(struct platform_device *pdev)
ecc->dummy_slot = edma_alloc_slot(ecc->ctlr, EDMA_SLOT_ANY);
if (ecc->dummy_slot < 0) {
dev_err(&pdev->dev, "Can't allocate PaRAM dummy slot\n");
return -EIO;
return ecc->dummy_slot;
}
dma_cap_zero(ecc->dma_slave.cap_mask);
@ -1125,7 +1134,7 @@ static int edma_init(void)
}
}
if (EDMA_CTLRS == 2) {
if (!of_have_populated_dt() && EDMA_CTLRS == 2) {
pdev1 = platform_device_register_full(&edma_dev_info1);
if (IS_ERR(pdev1)) {
platform_driver_unregister(&edma_driver);


@ -1092,7 +1092,6 @@ fail:
* @period_len: length of a single period
* @dir: direction of the operation
* @flags: tx descriptor status flags
* @context: operation context (ignored)
*
* Prepares a descriptor for cyclic DMA operation. This means that once the
* descriptor is submitted, we will be submitting in a @period_len sized
@ -1105,8 +1104,7 @@ fail:
static struct dma_async_tx_descriptor *
ep93xx_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
size_t buf_len, size_t period_len,
enum dma_transfer_direction dir, unsigned long flags,
void *context)
enum dma_transfer_direction dir, unsigned long flags)
{
struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
struct ep93xx_dma_desc *desc, *first;


@ -248,11 +248,12 @@ static void fsl_edma_chan_mux(struct fsl_edma_chan *fsl_chan,
unsigned int slot, bool enable)
{
u32 ch = fsl_chan->vchan.chan.chan_id;
void __iomem *muxaddr = fsl_chan->edma->muxbase[ch / DMAMUX_NR];
void __iomem *muxaddr;
unsigned chans_per_mux, ch_off;
chans_per_mux = fsl_chan->edma->n_chans / DMAMUX_NR;
ch_off = fsl_chan->vchan.chan.chan_id % chans_per_mux;
muxaddr = fsl_chan->edma->muxbase[ch / chans_per_mux];
if (enable)
edma_writeb(fsl_chan->edma,
@ -516,7 +517,7 @@ err:
static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
struct fsl_edma_desc *fsl_desc;
@ -724,6 +725,7 @@ static struct dma_chan *fsl_edma_xlate(struct of_phandle_args *dma_spec,
{
struct fsl_edma_engine *fsl_edma = ofdma->of_dma_data;
struct dma_chan *chan, *_chan;
unsigned long chans_per_mux = fsl_edma->n_chans / DMAMUX_NR;
if (dma_spec->args_count != 2)
return NULL;
@ -732,7 +734,7 @@ static struct dma_chan *fsl_edma_xlate(struct of_phandle_args *dma_spec,
list_for_each_entry_safe(chan, _chan, &fsl_edma->dma_dev.channels, device_node) {
if (chan->client_count)
continue;
if ((chan->chan_id / DMAMUX_NR) == dma_spec->args[0]) {
if ((chan->chan_id / chans_per_mux) == dma_spec->args[0]) {
chan = dma_get_slave_channel(chan);
if (chan) {
chan->device->privatecnt++;


@ -396,10 +396,17 @@ static dma_cookie_t fsl_dma_tx_submit(struct dma_async_tx_descriptor *tx)
struct fsldma_chan *chan = to_fsl_chan(tx->chan);
struct fsl_desc_sw *desc = tx_to_fsl_desc(tx);
struct fsl_desc_sw *child;
unsigned long flags;
dma_cookie_t cookie = -EINVAL;
spin_lock_irqsave(&chan->desc_lock, flags);
spin_lock_bh(&chan->desc_lock);
#ifdef CONFIG_PM
if (unlikely(chan->pm_state != RUNNING)) {
chan_dbg(chan, "cannot submit due to suspend\n");
spin_unlock_bh(&chan->desc_lock);
return -1;
}
#endif
/*
* assign cookies to all of the software descriptors
@ -412,7 +419,7 @@ static dma_cookie_t fsl_dma_tx_submit(struct dma_async_tx_descriptor *tx)
/* put this transaction onto the tail of the pending queue */
append_ld_queue(chan, desc);
spin_unlock_irqrestore(&chan->desc_lock, flags);
spin_unlock_bh(&chan->desc_lock);
return cookie;
}
@ -458,6 +465,88 @@ static struct fsl_desc_sw *fsl_dma_alloc_descriptor(struct fsldma_chan *chan)
return desc;
}
/**
* fsldma_clean_completed_descriptor - free all descriptors which
* have been completed and acked
* @chan: Freescale DMA channel
*
* This function is used on all completed and acked descriptors.
* All descriptors should only be freed in this function.
*/
static void fsldma_clean_completed_descriptor(struct fsldma_chan *chan)
{
struct fsl_desc_sw *desc, *_desc;
/* Run the callback for each descriptor, in order */
list_for_each_entry_safe(desc, _desc, &chan->ld_completed, node)
if (async_tx_test_ack(&desc->async_tx))
fsl_dma_free_descriptor(chan, desc);
}
/**
* fsldma_run_tx_complete_actions - cleanup a single link descriptor
* @chan: Freescale DMA channel
* @desc: descriptor to cleanup and free
* @cookie: Freescale DMA transaction identifier
*
* This function is used on a descriptor which has been executed by the DMA
* controller. It will run any callbacks, submit any dependencies.
*/
static dma_cookie_t fsldma_run_tx_complete_actions(struct fsldma_chan *chan,
struct fsl_desc_sw *desc, dma_cookie_t cookie)
{
struct dma_async_tx_descriptor *txd = &desc->async_tx;
dma_cookie_t ret = cookie;
BUG_ON(txd->cookie < 0);
if (txd->cookie > 0) {
ret = txd->cookie;
/* Run the link descriptor callback function */
if (txd->callback) {
chan_dbg(chan, "LD %p callback\n", desc);
txd->callback(txd->callback_param);
}
}
/* Run any dependencies */
dma_run_dependencies(txd);
return ret;
}
/**
* fsldma_clean_running_descriptor - move the completed descriptor from
* ld_running to ld_completed
* @chan: Freescale DMA channel
* @desc: the descriptor which is completed
*
* Free the descriptor directly if acked by async_tx api, or move it to
* queue ld_completed.
*/
static void fsldma_clean_running_descriptor(struct fsldma_chan *chan,
struct fsl_desc_sw *desc)
{
/* Remove from the list of transactions */
list_del(&desc->node);
/*
* the client is allowed to attach dependent operations
* until 'ack' is set
*/
if (!async_tx_test_ack(&desc->async_tx)) {
/*
* Move this descriptor to the list of descriptors which is
* completed, but still awaiting the 'ack' bit to be set.
*/
list_add_tail(&desc->node, &chan->ld_completed);
return;
}
dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
}
/**
* fsl_chan_xfer_ld_queue - transfer any pending transactions
* @chan : Freescale DMA channel
@ -526,31 +615,58 @@ static void fsl_chan_xfer_ld_queue(struct fsldma_chan *chan)
}
/**
* fsldma_cleanup_descriptor - cleanup and free a single link descriptor
* fsldma_cleanup_descriptors - cleanup link descriptors which are completed
* and move them to ld_completed to free until flag 'ack' is set
* @chan: Freescale DMA channel
* @desc: descriptor to cleanup and free
*
* This function is used on a descriptor which has been executed by the DMA
* controller. It will run any callbacks, submit any dependencies, and then
* free the descriptor.
* This function is used on descriptors which have been executed by the DMA
* controller. It will run any callbacks, submit any dependencies, then
* free these descriptors if flag 'ack' is set.
*/
static void fsldma_cleanup_descriptor(struct fsldma_chan *chan,
struct fsl_desc_sw *desc)
static void fsldma_cleanup_descriptors(struct fsldma_chan *chan)
{
struct dma_async_tx_descriptor *txd = &desc->async_tx;
struct fsl_desc_sw *desc, *_desc;
dma_cookie_t cookie = 0;
dma_addr_t curr_phys = get_cdar(chan);
int seen_current = 0;
/* Run the link descriptor callback function */
if (txd->callback) {
chan_dbg(chan, "LD %p callback\n", desc);
txd->callback(txd->callback_param);
fsldma_clean_completed_descriptor(chan);
/* Run the callback for each descriptor, in order */
list_for_each_entry_safe(desc, _desc, &chan->ld_running, node) {
/*
* do not advance past the current descriptor loaded into the
* hardware channel, subsequent descriptors are either in
* process or have not been submitted
*/
if (seen_current)
break;
/*
* stop the search if we reach the current descriptor and the
* channel is busy
*/
if (desc->async_tx.phys == curr_phys) {
seen_current = 1;
if (!dma_is_idle(chan))
break;
}
cookie = fsldma_run_tx_complete_actions(chan, desc, cookie);
fsldma_clean_running_descriptor(chan, desc);
}
/* Run any dependencies */
dma_run_dependencies(txd);
/*
* Start any pending transactions automatically
*
* In the ideal case, we keep the DMA controller busy while we go
* ahead and free the descriptors below.
*/
fsl_chan_xfer_ld_queue(chan);
dma_descriptor_unmap(txd);
chan_dbg(chan, "LD %p free\n", desc);
dma_pool_free(chan->desc_pool, desc, txd->phys);
if (cookie > 0)
chan->common.completed_cookie = cookie;
}
/**
@ -617,13 +733,14 @@ static void fsldma_free_desc_list_reverse(struct fsldma_chan *chan,
static void fsl_dma_free_chan_resources(struct dma_chan *dchan)
{
struct fsldma_chan *chan = to_fsl_chan(dchan);
unsigned long flags;
chan_dbg(chan, "free all channel resources\n");
spin_lock_irqsave(&chan->desc_lock, flags);
spin_lock_bh(&chan->desc_lock);
fsldma_cleanup_descriptors(chan);
fsldma_free_desc_list(chan, &chan->ld_pending);
fsldma_free_desc_list(chan, &chan->ld_running);
spin_unlock_irqrestore(&chan->desc_lock, flags);
fsldma_free_desc_list(chan, &chan->ld_completed);
spin_unlock_bh(&chan->desc_lock);
dma_pool_destroy(chan->desc_pool);
chan->desc_pool = NULL;
@ -842,7 +959,6 @@ static int fsl_dma_device_control(struct dma_chan *dchan,
{
struct dma_slave_config *config;
struct fsldma_chan *chan;
unsigned long flags;
int size;
if (!dchan)
@ -852,7 +968,7 @@ static int fsl_dma_device_control(struct dma_chan *dchan,
switch (cmd) {
case DMA_TERMINATE_ALL:
spin_lock_irqsave(&chan->desc_lock, flags);
spin_lock_bh(&chan->desc_lock);
/* Halt the DMA engine */
dma_halt(chan);
@ -860,9 +976,10 @@ static int fsl_dma_device_control(struct dma_chan *dchan,
/* Remove and free all of the descriptors in the LD queue */
fsldma_free_desc_list(chan, &chan->ld_pending);
fsldma_free_desc_list(chan, &chan->ld_running);
fsldma_free_desc_list(chan, &chan->ld_completed);
chan->idle = true;
spin_unlock_irqrestore(&chan->desc_lock, flags);
spin_unlock_bh(&chan->desc_lock);
return 0;
case DMA_SLAVE_CONFIG:
@ -904,11 +1021,10 @@ static int fsl_dma_device_control(struct dma_chan *dchan,
static void fsl_dma_memcpy_issue_pending(struct dma_chan *dchan)
{
struct fsldma_chan *chan = to_fsl_chan(dchan);
unsigned long flags;
spin_lock_irqsave(&chan->desc_lock, flags);
spin_lock_bh(&chan->desc_lock);
fsl_chan_xfer_ld_queue(chan);
spin_unlock_irqrestore(&chan->desc_lock, flags);
spin_unlock_bh(&chan->desc_lock);
}
/**
@ -919,6 +1035,17 @@ static enum dma_status fsl_tx_status(struct dma_chan *dchan,
dma_cookie_t cookie,
struct dma_tx_state *txstate)
{
struct fsldma_chan *chan = to_fsl_chan(dchan);
enum dma_status ret;
ret = dma_cookie_status(dchan, cookie, txstate);
if (ret == DMA_COMPLETE)
return ret;
spin_lock_bh(&chan->desc_lock);
fsldma_cleanup_descriptors(chan);
spin_unlock_bh(&chan->desc_lock);
return dma_cookie_status(dchan, cookie, txstate);
}
@ -996,52 +1123,18 @@ static irqreturn_t fsldma_chan_irq(int irq, void *data)
static void dma_do_tasklet(unsigned long data)
{
struct fsldma_chan *chan = (struct fsldma_chan *)data;
struct fsl_desc_sw *desc, *_desc;
LIST_HEAD(ld_cleanup);
unsigned long flags;
chan_dbg(chan, "tasklet entry\n");
spin_lock_irqsave(&chan->desc_lock, flags);
/* update the cookie if we have some descriptors to cleanup */
if (!list_empty(&chan->ld_running)) {
dma_cookie_t cookie;
desc = to_fsl_desc(chan->ld_running.prev);
cookie = desc->async_tx.cookie;
dma_cookie_complete(&desc->async_tx);
chan_dbg(chan, "completed_cookie=%d\n", cookie);
}
/*
* move the descriptors to a temporary list so we can drop the lock
* during the entire cleanup operation
*/
list_splice_tail_init(&chan->ld_running, &ld_cleanup);
spin_lock_bh(&chan->desc_lock);
/* the hardware is now idle and ready for more */
chan->idle = true;
/*
* Start any pending transactions automatically
*
* In the ideal case, we keep the DMA controller busy while we go
* ahead and free the descriptors below.
*/
fsl_chan_xfer_ld_queue(chan);
spin_unlock_irqrestore(&chan->desc_lock, flags);
/* Run all cleanup for descriptors which have been completed */
fsldma_cleanup_descriptors(chan);
/* Run the callback for each descriptor, in order */
list_for_each_entry_safe(desc, _desc, &ld_cleanup, node) {
/* Remove from the list of transactions */
list_del(&desc->node);
/* Run all cleanup for this descriptor */
fsldma_cleanup_descriptor(chan, desc);
}
spin_unlock_bh(&chan->desc_lock);
chan_dbg(chan, "tasklet exit\n");
}
@ -1225,7 +1318,11 @@ static int fsl_dma_chan_probe(struct fsldma_device *fdev,
spin_lock_init(&chan->desc_lock);
INIT_LIST_HEAD(&chan->ld_pending);
INIT_LIST_HEAD(&chan->ld_running);
INIT_LIST_HEAD(&chan->ld_completed);
chan->idle = true;
#ifdef CONFIG_PM
chan->pm_state = RUNNING;
#endif
chan->common.device = &fdev->common;
dma_cookie_init(&chan->common);
@ -1365,6 +1462,69 @@ static int fsldma_of_remove(struct platform_device *op)
return 0;
}
#ifdef CONFIG_PM
static int fsldma_suspend_late(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct fsldma_device *fdev = platform_get_drvdata(pdev);
struct fsldma_chan *chan;
int i;
for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) {
chan = fdev->chan[i];
if (!chan)
continue;
spin_lock_bh(&chan->desc_lock);
if (unlikely(!chan->idle))
goto out;
chan->regs_save.mr = get_mr(chan);
chan->pm_state = SUSPENDED;
spin_unlock_bh(&chan->desc_lock);
}
return 0;
out:
for (; i >= 0; i--) {
chan = fdev->chan[i];
if (!chan)
continue;
chan->pm_state = RUNNING;
spin_unlock_bh(&chan->desc_lock);
}
return -EBUSY;
}
static int fsldma_resume_early(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct fsldma_device *fdev = platform_get_drvdata(pdev);
struct fsldma_chan *chan;
u32 mode;
int i;
for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) {
chan = fdev->chan[i];
if (!chan)
continue;
spin_lock_bh(&chan->desc_lock);
mode = chan->regs_save.mr
& ~FSL_DMA_MR_CS & ~FSL_DMA_MR_CC & ~FSL_DMA_MR_CA;
set_mr(chan, mode);
chan->pm_state = RUNNING;
spin_unlock_bh(&chan->desc_lock);
}
return 0;
}
static const struct dev_pm_ops fsldma_pm_ops = {
.suspend_late = fsldma_suspend_late,
.resume_early = fsldma_resume_early,
};
#endif
static const struct of_device_id fsldma_of_ids[] = {
{ .compatible = "fsl,elo3-dma", },
{ .compatible = "fsl,eloplus-dma", },
@ -1377,6 +1537,9 @@ static struct platform_driver fsldma_of_driver = {
.name = "fsl-elo-dma",
.owner = THIS_MODULE,
.of_match_table = fsldma_of_ids,
#ifdef CONFIG_PM
.pm = &fsldma_pm_ops,
#endif
},
.probe = fsldma_of_probe,
.remove = fsldma_of_remove,


@ -134,12 +134,36 @@ struct fsldma_device {
#define FSL_DMA_CHAN_PAUSE_EXT 0x00001000
#define FSL_DMA_CHAN_START_EXT 0x00002000
#ifdef CONFIG_PM
struct fsldma_chan_regs_save {
u32 mr;
};
enum fsldma_pm_state {
RUNNING = 0,
SUSPENDED,
};
#endif
struct fsldma_chan {
char name[8]; /* Channel name */
struct fsldma_chan_regs __iomem *regs;
spinlock_t desc_lock; /* Descriptor operation lock */
struct list_head ld_pending; /* Link descriptors queue */
struct list_head ld_running; /* Link descriptors queue */
/*
* Descriptors which are queued to run, but have not yet been
* submitted to the hardware for execution
*/
struct list_head ld_pending;
/*
* Descriptors which are currently being executed by the hardware
*/
struct list_head ld_running;
/*
* Descriptors which have finished execution by the hardware. These
* descriptors have already had their cleanup actions run. They are
* waiting for the ACK bit to be set by the async_tx API.
*/
struct list_head ld_completed; /* Link descriptors queue */
struct dma_chan common; /* DMA common channel */
struct dma_pool *desc_pool; /* Descriptors pool */
struct device *dev; /* Channel device */
@ -148,6 +172,10 @@ struct fsldma_chan {
struct tasklet_struct tasklet;
u32 feature;
bool idle; /* DMA controller is idle */
#ifdef CONFIG_PM
struct fsldma_chan_regs_save regs_save;
enum fsldma_pm_state pm_state;
#endif
void (*toggle_ext_pause)(struct fsldma_chan *fsl_chan, int enable);
void (*toggle_ext_start)(struct fsldma_chan *fsl_chan, int enable);


@ -866,7 +866,7 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg(
static struct dma_async_tx_descriptor *imxdma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
struct imxdma_engine *imxdma = imxdmac->imxdma;

View File

@ -271,6 +271,7 @@ struct sdma_channel {
unsigned int chn_count;
unsigned int chn_real_count;
struct tasklet_struct tasklet;
struct imx_dma_data data;
};
#define IMX_DMA_SG_LOOP BIT(0)
@ -749,6 +750,11 @@ static void sdma_get_pc(struct sdma_channel *sdmac,
emi_2_per = sdma->script_addrs->asrc_2_mcu_addr;
per_2_per = sdma->script_addrs->per_2_per_addr;
break;
case IMX_DMATYPE_ASRC_SP:
per_2_emi = sdma->script_addrs->shp_2_mcu_addr;
emi_2_per = sdma->script_addrs->mcu_2_shp_addr;
per_2_per = sdma->script_addrs->per_2_per_addr;
break;
case IMX_DMATYPE_MSHC:
per_2_emi = sdma->script_addrs->mshc_2_mcu_addr;
emi_2_per = sdma->script_addrs->mcu_2_mshc_addr;
@ -911,14 +917,13 @@ static int sdma_request_channel(struct sdma_channel *sdmac)
int channel = sdmac->channel;
int ret = -EBUSY;
sdmac->bd = dma_alloc_coherent(NULL, PAGE_SIZE, &sdmac->bd_phys, GFP_KERNEL);
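/* dma_zalloc_coherent() hands back zeroed memory, so no separate memset pass is needed. */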
sdmac->bd = dma_zalloc_coherent(NULL, PAGE_SIZE, &sdmac->bd_phys,
GFP_KERNEL);
if (!sdmac->bd) {
ret = -ENOMEM;
goto out;
}
memset(sdmac->bd, 0, PAGE_SIZE);
sdma->channel_control[channel].base_bd_ptr = sdmac->bd_phys;
sdma->channel_control[channel].current_bd_ptr = sdmac->bd_phys;
@ -1120,7 +1125,7 @@ err_out:
static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct sdma_channel *sdmac = to_sdma_chan(chan);
struct sdma_engine *sdma = sdmac->sdma;
@ -1414,12 +1419,14 @@ err_dma_alloc:
static bool sdma_filter_fn(struct dma_chan *chan, void *fn_param)
{
struct sdma_channel *sdmac = to_sdma_chan(chan);
struct imx_dma_data *data = fn_param;
if (!imx_dma_is_general_purpose(chan))
return false;
chan->private = data;
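/* Copy the filter data into the channel so chan->private no longer points at the caller's (possibly stack-allocated) parameter. */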
sdmac->data = *data;
chan->private = &sdmac->data;
return true;
}

View File

@ -1532,11 +1532,17 @@ static int idmac_alloc_chan_resources(struct dma_chan *chan)
#ifdef DEBUG
if (chan->chan_id == IDMAC_IC_7) {
ic_sof = ipu_irq_map(69);
if (ic_sof > 0)
request_irq(ic_sof, ic_sof_irq, 0, "IC SOF", ichan);
if (ic_sof > 0) {
ret = request_irq(ic_sof, ic_sof_irq, 0, "IC SOF", ichan);
if (ret)
dev_err(&chan->dev->device, "request irq failed for IC SOF");
}
ic_eof = ipu_irq_map(70);
if (ic_eof > 0)
request_irq(ic_eof, ic_eof_irq, 0, "IC EOF", ichan);
if (ic_eof > 0) {
ret = request_irq(ic_eof, ic_eof_irq, 0, "IC EOF", ichan);
if (ret)
dev_err(&chan->dev->device, "request irq failed for IC EOF");
}
}
#endif

View File

@ -601,7 +601,7 @@ static struct dma_async_tx_descriptor *
mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
dma_addr_t buf_addr, size_t len, size_t period_len,
enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct mmp_pdma_chan *chan;
struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;

View File

@ -389,7 +389,7 @@ struct mmp_tdma_desc *mmp_tdma_alloc_descriptor(struct mmp_tdma_chan *tdmac)
static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
struct mmp_tdma_desc *desc;

View File

@ -53,6 +53,7 @@
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/of_dma.h>
#include <linux/of_platform.h>
#include <linux/random.h>
@ -1036,7 +1037,15 @@ static int mpc_dma_probe(struct platform_device *op)
if (retval)
goto err_free2;
return retval;
/* Register with OF helpers for DMA lookups (nonfatal) */
if (dev->of_node) {
retval = of_dma_controller_register(dev->of_node,
of_dma_xlate_by_chan_id, mdma);
if (retval)
dev_warn(dev, "Could not register for OF lookup\n");
}
return 0;
err_free2:
if (mdma->is_mpc8308)
@ -1057,6 +1066,8 @@ static int mpc_dma_remove(struct platform_device *op)
struct device *dev = &op->dev;
struct mpc_dma *mdma = dev_get_drvdata(dev);
if (dev->of_node)
of_dma_controller_free(dev->of_node);
dma_async_device_unregister(&mdma->dma);
if (mdma->is_mpc8308) {
free_irq(mdma->irq2, mdma);

View File

@ -413,16 +413,14 @@ static int mxs_dma_alloc_chan_resources(struct dma_chan *chan)
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int ret;
mxs_chan->ccw = dma_alloc_coherent(mxs_dma->dma_device.dev,
CCW_BLOCK_SIZE, &mxs_chan->ccw_phys,
GFP_KERNEL);
mxs_chan->ccw = dma_zalloc_coherent(mxs_dma->dma_device.dev,
CCW_BLOCK_SIZE,
&mxs_chan->ccw_phys, GFP_KERNEL);
if (!mxs_chan->ccw) {
ret = -ENOMEM;
goto err_alloc;
}
memset(mxs_chan->ccw, 0, CCW_BLOCK_SIZE);
if (mxs_chan->chan_irq != NO_IRQ) {
ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler,
0, "mxs-dma", mxs_dma);
@ -591,7 +589,7 @@ err_out:
static struct dma_async_tx_descriptor *mxs_dma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;

drivers/dma/nbpfaxi.c (new file, 1517 lines)

File diff suppressed because it is too large

View File

@ -218,3 +218,38 @@ struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec,
&dma_spec->args[0]);
}
EXPORT_SYMBOL_GPL(of_dma_simple_xlate);
/**
* of_dma_xlate_by_chan_id - Translate dt property to DMA channel by channel id
* @dma_spec: pointer to DMA specifier as found in the device tree
* @of_dma: pointer to DMA controller data
*
* This function can be used as the of xlate callback for a DMA driver which
* wants to match the channel based on the channel id. When using this xlate
* function the #dma-cells property of the DMA controller dt node needs to be
* set to 1.
* The data parameter of of_dma_controller_register must be a pointer to the
* dma_device struct the function should match upon.
*
* Returns pointer to appropriate dma channel on success or NULL on error.
*/
struct dma_chan *of_dma_xlate_by_chan_id(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct dma_device *dev = ofdma->of_dma_data;
struct dma_chan *chan, *candidate = NULL;
if (!dev || dma_spec->args_count != 1)
return NULL;
list_for_each_entry(chan, &dev->channels, device_node)
if (chan->chan_id == dma_spec->args[0]) {
candidate = chan;
break;
}
if (!candidate)
return NULL;
return dma_get_slave_channel(candidate);
}
EXPORT_SYMBOL_GPL(of_dma_xlate_by_chan_id);
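The kernel-doc above implies a probe-time pairing: register the helper with a pointer to the driver's struct dma_device, as the mpc512x hunk earlier in this merge does. A minimal sketch, with hypothetical foo_* names standing in for a real controller driver (not part of this diff):
/* Sketch only: foo_dma and foo_probe are illustrative names. */
struct foo_dma {
	struct dma_device dma;	/* of_dma_data must point at this */
};

static int foo_probe(struct platform_device *op)
{
	struct foo_dma *fdma;
	int ret;

	fdma = devm_kzalloc(&op->dev, sizeof(*fdma), GFP_KERNEL);
	if (!fdma)
		return -ENOMEM;

	/* ... channel setup and dma_async_device_register(&fdma->dma) ... */

	/* Non-fatal: enables "dmas = <&ctrl CHAN_ID>" lookups; requires
	 * #dma-cells = <1> in the controller's device tree node. */
	if (op->dev.of_node) {
		ret = of_dma_controller_register(op->dev.of_node,
						 of_dma_xlate_by_chan_id,
						 &fdma->dma);
		if (ret)
			dev_warn(&op->dev, "Could not register for OF lookup\n");
	}
	return 0;
}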

View File

@ -853,8 +853,7 @@ static struct dma_async_tx_descriptor *omap_dma_prep_slave_sg(
static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction dir, unsigned long flags,
void *context)
size_t period_len, enum dma_transfer_direction dir, unsigned long flags)
{
struct omap_dmadev *od = to_omap_dma_dev(chan->device);
struct omap_chan *c = to_omap_dma_chan(chan);

File diff suppressed because it is too large

View File

@ -61,12 +61,17 @@ struct bam_desc_hw {
#define DESC_FLAG_INT BIT(15)
#define DESC_FLAG_EOT BIT(14)
#define DESC_FLAG_EOB BIT(13)
#define DESC_FLAG_NWD BIT(12)
struct bam_async_desc {
struct virt_dma_desc vd;
u32 num_desc;
u32 xfer_len;
/* transaction flags, EOT|EOB|NWD */
u16 flags;
struct bam_desc_hw *curr_desc;
enum dma_transfer_direction dir;
@ -490,6 +495,14 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
if (!async_desc)
goto err_out;
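/* Map dmaengine prep flags onto BAM descriptor flags: FENCE -> NWD, INTERRUPT -> EOT, otherwise a plain INT. */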
if (flags & DMA_PREP_FENCE)
async_desc->flags |= DESC_FLAG_NWD;
if (flags & DMA_PREP_INTERRUPT)
async_desc->flags |= DESC_FLAG_EOT;
else
async_desc->flags |= DESC_FLAG_INT;
async_desc->num_desc = num_alloc;
async_desc->curr_desc = async_desc->desc;
async_desc->dir = direction;
@ -793,8 +806,11 @@ static void bam_start_dma(struct bam_chan *bchan)
else
async_desc->xfer_len = async_desc->num_desc;
/* set INT on last descriptor */
desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
/* set any special flags on the last descriptor */
if (async_desc->num_desc == async_desc->xfer_len)
desc[async_desc->xfer_len - 1].flags = async_desc->flags;
else
desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
if (bchan->tail + async_desc->xfer_len > MAX_DESCRIPTORS) {
u32 partial = MAX_DESCRIPTORS - bchan->tail;

View File

@ -889,8 +889,7 @@ static struct dma_async_tx_descriptor *s3c24xx_dma_prep_memcpy(
static struct dma_async_tx_descriptor *s3c24xx_dma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t addr, size_t size, size_t period,
enum dma_transfer_direction direction, unsigned long flags,
void *context)
enum dma_transfer_direction direction, unsigned long flags)
{
struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
struct s3c24xx_dma_engine *s3cdma = s3cchan->host;

View File

@ -612,7 +612,7 @@ static struct dma_async_tx_descriptor *sa11x0_dma_prep_slave_sg(
static struct dma_async_tx_descriptor *sa11x0_dma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t addr, size_t size, size_t period,
enum dma_transfer_direction dir, unsigned long flags, void *context)
enum dma_transfer_direction dir, unsigned long flags)
{
struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
struct sa11x0_dma_desc *txd;

View File

@ -2,21 +2,39 @@
# DMA engine configuration for sh
#
#
# DMA Engine Helpers
#
config SH_DMAE_BASE
bool "Renesas SuperH DMA Engine support"
depends on (SUPERH && SH_DMA) || ARCH_SHMOBILE || COMPILE_TEST
depends on SUPERH || ARCH_SHMOBILE || COMPILE_TEST
depends on !SUPERH || SH_DMA
depends on !SH_DMA_API
default y
select DMA_ENGINE
help
Enable support for the Renesas SuperH DMA controllers.
#
# DMA Controllers
#
config SH_DMAE
tristate "Renesas SuperH DMAC support"
depends on SH_DMAE_BASE
help
Enable support for the Renesas SuperH DMA controllers.
if SH_DMAE
config SH_DMAE_R8A73A4
def_bool y
depends on ARCH_R8A73A4
depends on OF
endif
config SUDMAC
tristate "Renesas SUDMAC support"
depends on SH_DMAE_BASE
@ -34,7 +52,3 @@ config RCAR_AUDMAC_PP
depends on SH_DMAE_BASE
help
Enable support for the Renesas R-Car Audio DMAC Peripheral Peripheral controllers.
config SHDMA_R8A73A4
def_bool y
depends on ARCH_R8A73A4 && SH_DMAE != n

View File

@ -1,10 +1,18 @@
#
# DMA Engine Helpers
#
obj-$(CONFIG_SH_DMAE_BASE) += shdma-base.o shdma-of.o
obj-$(CONFIG_SH_DMAE) += shdma.o
#
# DMA Controllers
#
shdma-y := shdmac.o
ifeq ($(CONFIG_OF),y)
shdma-$(CONFIG_SHDMA_R8A73A4) += shdma-r8a73a4.o
endif
shdma-$(CONFIG_SH_DMAE_R8A73A4) += shdma-r8a73a4.o
shdma-objs := $(shdma-y)
obj-$(CONFIG_SH_DMAE) += shdma.o
obj-$(CONFIG_SUDMAC) += sudmac.o
obj-$(CONFIG_RCAR_HPB_DMAE) += rcar-hpbdma.o
obj-$(CONFIG_RCAR_AUDMAC_PP) += rcar-audmapp.o

View File

@ -22,6 +22,7 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/dmaengine.h>
#include <linux/of_dma.h>
#include <linux/platform_data/dma-rcar-audmapp.h>
#include <linux/platform_device.h>
#include <linux/shdma-base.h>
@ -45,8 +46,9 @@
struct audmapp_chan {
struct shdma_chan shdma_chan;
struct audmapp_slave_config *config;
void __iomem *base;
dma_addr_t slave_addr;
u32 chcr;
};
struct audmapp_device {
@ -56,7 +58,16 @@ struct audmapp_device {
void __iomem *chan_reg;
};
struct audmapp_desc {
struct shdma_desc shdma_desc;
dma_addr_t src;
dma_addr_t dst;
};
#define to_shdma_chan(c) container_of(c, struct shdma_chan, dma_chan)
#define to_chan(chan) container_of(chan, struct audmapp_chan, shdma_chan)
#define to_desc(sdesc) container_of(sdesc, struct audmapp_desc, shdma_desc)
#define to_dev(chan) container_of(chan->shdma_chan.dma_chan.device, \
struct audmapp_device, shdma_dev.dma_dev)
@ -90,70 +101,82 @@ static void audmapp_halt(struct shdma_chan *schan)
}
static void audmapp_start_xfer(struct shdma_chan *schan,
struct shdma_desc *sdecs)
struct shdma_desc *sdesc)
{
struct audmapp_chan *auchan = to_chan(schan);
struct audmapp_device *audev = to_dev(auchan);
struct audmapp_slave_config *cfg = auchan->config;
struct audmapp_desc *desc = to_desc(sdesc);
struct device *dev = audev->dev;
u32 chcr = cfg->chcr | PDMACHCR_DE;
u32 chcr = auchan->chcr | PDMACHCR_DE;
dev_dbg(dev, "src/dst/chcr = %pad/%pad/%x\n",
&cfg->src, &cfg->dst, cfg->chcr);
dev_dbg(dev, "src/dst/chcr = %pad/%pad/%08x\n",
&desc->src, &desc->dst, chcr);
audmapp_write(auchan, cfg->src, PDMASAR);
audmapp_write(auchan, cfg->dst, PDMADAR);
audmapp_write(auchan, desc->src, PDMASAR);
audmapp_write(auchan, desc->dst, PDMADAR);
audmapp_write(auchan, chcr, PDMACHCR);
}
static struct audmapp_slave_config *
audmapp_find_slave(struct audmapp_chan *auchan, int slave_id)
static void audmapp_get_config(struct audmapp_chan *auchan, int slave_id,
u32 *chcr, dma_addr_t *dst)
{
struct audmapp_device *audev = to_dev(auchan);
struct audmapp_pdata *pdata = audev->pdata;
struct audmapp_slave_config *cfg;
int i;
*chcr = 0;
*dst = 0;
if (!pdata) { /* DT */
*chcr = ((u32)slave_id) << 16;
auchan->shdma_chan.slave_id = (slave_id) >> 8;
return;
}
/* non-DT */
if (slave_id >= AUDMAPP_SLAVE_NUMBER)
return NULL;
return;
for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++)
if (cfg->slave_id == slave_id)
return cfg;
return NULL;
if (cfg->slave_id == slave_id) {
*chcr = cfg->chcr;
*dst = cfg->dst;
break;
}
}
static int audmapp_set_slave(struct shdma_chan *schan, int slave_id,
dma_addr_t slave_addr, bool try)
{
struct audmapp_chan *auchan = to_chan(schan);
struct audmapp_slave_config *cfg =
audmapp_find_slave(auchan, slave_id);
u32 chcr;
dma_addr_t dst;
audmapp_get_config(auchan, slave_id, &chcr, &dst);
if (!cfg)
return -ENODEV;
if (try)
return 0;
auchan->config = cfg;
auchan->chcr = chcr;
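/* Fall back to the pdata destination when no explicit slave address was given. */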
auchan->slave_addr = slave_addr ? : dst;
return 0;
}
static int audmapp_desc_setup(struct shdma_chan *schan,
struct shdma_desc *sdecs,
struct shdma_desc *sdesc,
dma_addr_t src, dma_addr_t dst, size_t *len)
{
struct audmapp_chan *auchan = to_chan(schan);
struct audmapp_slave_config *cfg = auchan->config;
if (!cfg)
return -ENODEV;
struct audmapp_desc *desc = to_desc(sdesc);
if (*len > (size_t)AUDMAPP_LEN_MAX)
*len = (size_t)AUDMAPP_LEN_MAX;
desc->src = src;
desc->dst = dst;
return 0;
}
@ -164,7 +187,9 @@ static void audmapp_setup_xfer(struct shdma_chan *schan,
static dma_addr_t audmapp_slave_addr(struct shdma_chan *schan)
{
return 0; /* always fixed address */
struct audmapp_chan *auchan = to_chan(schan);
return auchan->slave_addr;
}
static bool audmapp_channel_busy(struct shdma_chan *schan)
@ -183,7 +208,7 @@ static bool audmapp_desc_completed(struct shdma_chan *schan,
static struct shdma_desc *audmapp_embedded_desc(void *buf, int i)
{
return &((struct shdma_desc *)buf)[i];
return &((struct audmapp_desc *)buf)[i].shdma_desc;
}
static const struct shdma_ops audmapp_shdma_ops = {
@ -234,16 +259,39 @@ static void audmapp_chan_remove(struct audmapp_device *audev)
dma_dev->chancnt = 0;
}
static struct dma_chan *audmapp_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
dma_cap_mask_t mask;
struct dma_chan *chan;
u32 chcr = dma_spec->args[0];
if (dma_spec->args_count != 1)
return NULL;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
chan = dma_request_channel(mask, shdma_chan_filter, NULL);
if (chan)
to_shdma_chan(chan)->hw_req = chcr;
return chan;
}
static int audmapp_probe(struct platform_device *pdev)
{
struct audmapp_pdata *pdata = pdev->dev.platform_data;
struct device_node *np = pdev->dev.of_node;
struct audmapp_device *audev;
struct shdma_dev *sdev;
struct dma_device *dma_dev;
struct resource *res;
int err, i;
if (!pdata)
if (np)
of_dma_controller_register(np, audmapp_of_xlate, pdev);
else if (!pdata)
return -ENODEV;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@ -260,7 +308,7 @@ static int audmapp_probe(struct platform_device *pdev)
sdev = &audev->shdma_dev;
sdev->ops = &audmapp_shdma_ops;
sdev->desc_size = sizeof(struct shdma_desc);
sdev->desc_size = sizeof(struct audmapp_desc);
dma_dev = &sdev->dma_dev;
dma_dev->copy_align = LOG2_DEFAULT_XFER_SIZE;
@ -305,12 +353,18 @@ static int audmapp_remove(struct platform_device *pdev)
return 0;
}
static const struct of_device_id audmapp_of_match[] = {
{ .compatible = "renesas,rcar-audmapp", },
{},
};
static struct platform_driver audmapp_driver = {
.probe = audmapp_probe,
.remove = audmapp_remove,
.driver = {
.owner = THIS_MODULE,
.name = "rcar-audmapp-engine",
.of_match_table = audmapp_of_match,
},
};
module_platform_driver(audmapp_driver);

View File

@ -45,7 +45,7 @@ enum {
((((i) & TS_LOW_BIT) << TS_LOW_SHIFT) |\
(((i) & TS_HI_BIT) << TS_HI_SHIFT))
#define CHCR_TX(xmit_sz) (DM_FIX | SM_INC | 0x800 | TS_INDEX2VAL((xmit_sz)))
#define CHCR_RX(xmit_sz) (DM_INC | SM_FIX | 0x800 | TS_INDEX2VAL((xmit_sz)))
#define CHCR_TX(xmit_sz) (DM_FIX | SM_INC | RS_ERS | TS_INDEX2VAL((xmit_sz)))
#define CHCR_RX(xmit_sz) (DM_INC | SM_FIX | RS_ERS | TS_INDEX2VAL((xmit_sz)))
#endif

View File

@ -206,45 +206,6 @@ static int shdma_setup_slave(struct shdma_chan *schan, int slave_id,
return 0;
}
/*
* This is the standard shdma filter function to be used as a replacement for the
* "old" method, using the .private pointer. If for some reason you allocate a
* channel without slave data, use something like ERR_PTR(-EINVAL) as a filter
* parameter. If this filter is used, the slave driver, after calling
* dma_request_channel(), will also have to call dmaengine_slave_config() with
* .slave_id, .direction, and either .src_addr or .dst_addr set.
* NOTE: this filter doesn't support multiple DMAC drivers with the DMA_SLAVE
* capability! If this becomes a requirement, hardware glue drivers using
* these services would have to provide their own filters, which would first check
* the device driver, similar to how other DMAC drivers, e.g., sa11x0-dma.c, do
* this, and only then, in case of a match, call this common filter.
* NOTE 2: This filter function is also used in the DT case by shdma_of_xlate().
* In that case the MID-RID value is used for slave channel filtering and is
* passed to this function in the "arg" parameter.
*/
bool shdma_chan_filter(struct dma_chan *chan, void *arg)
{
struct shdma_chan *schan = to_shdma_chan(chan);
struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device);
const struct shdma_ops *ops = sdev->ops;
int match = (long)arg;
int ret;
if (match < 0)
/* No slave requested - arbitrary channel */
return true;
if (!schan->dev->of_node && match >= slave_num)
return false;
ret = ops->set_slave(schan, match, 0, true);
if (ret < 0)
return false;
return true;
}
EXPORT_SYMBOL(shdma_chan_filter);
static int shdma_alloc_chan_resources(struct dma_chan *chan)
{
struct shdma_chan *schan = to_shdma_chan(chan);
@ -295,6 +256,51 @@ esetslave:
return ret;
}
/*
* This is the standard shdma filter function to be used as a replacement for the
* "old" method, using the .private pointer. If for some reason you allocate a
* channel without slave data, use something like ERR_PTR(-EINVAL) as a filter
* parameter. If this filter is used, the slave driver, after calling
* dma_request_channel(), will also have to call dmaengine_slave_config() with
* .slave_id, .direction, and either .src_addr or .dst_addr set.
* NOTE: this filter doesn't support multiple DMAC drivers with the DMA_SLAVE
* capability! If this becomes a requirement, hardware glue drivers using
* these services would have to provide their own filters, which would first check
* the device driver, similar to how other DMAC drivers, e.g., sa11x0-dma.c, do
* this, and only then, in case of a match, call this common filter.
* NOTE 2: This filter function is also used in the DT case by shdma_of_xlate().
* In that case the MID-RID value is used for slave channel filtering and is
* passed to this function in the "arg" parameter.
*/
bool shdma_chan_filter(struct dma_chan *chan, void *arg)
{
struct shdma_chan *schan;
struct shdma_dev *sdev;
int match = (long)arg;
int ret;
/* Only support channels handled by this driver. */
if (chan->device->device_alloc_chan_resources !=
shdma_alloc_chan_resources)
return false;
if (match < 0)
/* No slave requested - arbitrary channel */
return true;
schan = to_shdma_chan(chan);
if (!schan->dev->of_node && match >= slave_num)
return false;
sdev = to_shdma_dev(schan->dma_chan.device);
ret = sdev->ops->set_slave(schan, match, 0, true);
if (ret < 0)
return false;
return true;
}
EXPORT_SYMBOL(shdma_chan_filter);
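Taken together with the relocated comment, the slave driver's side of the contract looks roughly like this. A hedged sketch: the MID-RID value and FIFO address are placeholders, not taken from this merge:
/* Sketch: request a channel through the filter, then configure it. */
dma_cap_mask_t mask;
struct dma_chan *chan;
struct dma_slave_config cfg = {
	.direction = DMA_DEV_TO_MEM,
	.src_addr = 0xfe001014,			/* placeholder FIFO address */
	.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
	.slave_id = 42,				/* placeholder MID-RID */
};

dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
chan = dma_request_channel(mask, shdma_chan_filter, (void *)42L);
if (chan && dmaengine_slave_config(chan, &cfg) < 0) {
	dma_release_channel(chan);		/* configuration failed */
	chan = NULL;
}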
static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
{
struct shdma_desc *desc, *_desc;
@ -662,15 +668,16 @@ static struct dma_async_tx_descriptor *shdma_prep_slave_sg(
static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct shdma_chan *schan = to_shdma_chan(chan);
struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device);
struct dma_async_tx_descriptor *desc;
const struct shdma_ops *ops = sdev->ops;
unsigned int sg_len = buf_len / period_len;
int slave_id = schan->slave_id;
dma_addr_t slave_addr;
struct scatterlist sgl[SHDMA_MAX_SG_LEN];
struct scatterlist *sgl;
int i;
if (!chan)
@ -694,7 +701,16 @@ static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
slave_addr = ops->slave_addr(schan);
/*
* Allocate the sg list dynamically as it would consume too much stack
* space.
*/
sgl = kcalloc(sg_len, sizeof(*sgl), GFP_KERNEL);
if (!sgl)
return NULL;
sg_init_table(sgl, sg_len);
for (i = 0; i < sg_len; i++) {
dma_addr_t src = buf_addr + (period_len * i);
@ -704,8 +720,11 @@ static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
sg_dma_len(&sgl[i]) = period_len;
}
return shdma_prep_sg(schan, sgl, sg_len, &slave_addr,
desc = shdma_prep_sg(schan, sgl, sg_len, &slave_addr,
direction, flags, true);
kfree(sgl);
return desc;
}
static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,

View File

@ -62,7 +62,7 @@ struct sh_dmae_desc {
#define to_sh_dev(chan) container_of(chan->shdma_chan.dma_chan.device,\
struct sh_dmae_device, shdma_dev.dma_dev)
#ifdef CONFIG_SHDMA_R8A73A4
#ifdef CONFIG_SH_DMAE_R8A73A4
extern const struct sh_dmae_pdata r8a73a4_dma_pdata;
#define r8a73a4_shdma_devid (&r8a73a4_dma_pdata)
#else

View File

@ -38,12 +38,12 @@
#include "../dmaengine.h"
#include "shdma.h"
/* DMA register */
#define SAR 0x00
#define DAR 0x04
#define TCR 0x08
#define CHCR 0x0C
#define DMAOR 0x40
/* DMA registers */
#define SAR 0x00 /* Source Address Register */
#define DAR 0x04 /* Destination Address Register */
#define TCR 0x08 /* Transfer Count Register */
#define CHCR 0x0C /* Channel Control Register */
#define DMAOR 0x40 /* DMA Operation Register */
#define TEND 0x18 /* USB-DMAC */
@ -239,9 +239,8 @@ static void dmae_init(struct sh_dmae_chan *sh_chan)
{
/*
* Default configuration for dual address memory-memory transfer.
* 0x400 represents auto-request.
*/
u32 chcr = DM_INC | SM_INC | 0x400 | log2size_to_chcr(sh_chan,
u32 chcr = DM_INC | SM_INC | RS_AUTO | log2size_to_chcr(sh_chan,
LOG2_DEFAULT_XFER_SIZE);
sh_chan->xmit_shift = calc_xmit_shift(sh_chan, chcr);
chcr_write(sh_chan, chcr);

View File

@ -580,7 +580,7 @@ err_dir:
static struct dma_async_tx_descriptor *
sirfsoc_dma_prep_cyclic(struct dma_chan *chan, dma_addr_t addr,
size_t buf_len, size_t period_len,
enum dma_transfer_direction direction, unsigned long flags, void *context)
enum dma_transfer_direction direction, unsigned long flags)
{
struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
struct sirfsoc_dma_desc *sdesc = NULL;

View File

@ -2531,8 +2531,7 @@ d40_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
static struct dma_async_tx_descriptor *
dma40_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
size_t buf_len, size_t period_len,
enum dma_transfer_direction direction, unsigned long flags,
void *context)
enum dma_transfer_direction direction, unsigned long flags)
{
unsigned int periods = buf_len / period_len;
struct dma_async_tx_descriptor *txd;

File diff suppressed because it is too large

View File

@ -1055,7 +1055,7 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg(
static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
struct dma_chan *dc, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
unsigned long flags)
{
struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
struct tegra_dma_desc *dma_desc = NULL;

View File

@ -0,0 +1,20 @@
/*
* Copyright (C) 2013-2014 Renesas Electronics Europe Ltd.
* Author: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*/
#ifndef DT_BINDINGS_NBPFAXI_H
#define DT_BINDINGS_NBPFAXI_H
/**
* Use "#dma-cells = <2>;" with the second integer defining slave DMA flags:
*/
#define NBPF_SLAVE_RQ_HIGH 1
#define NBPF_SLAVE_RQ_LOW 2
#define NBPF_SLAVE_RQ_LEVEL 4
#endif

View File

@ -37,7 +37,6 @@
*/
typedef s32 dma_cookie_t;
#define DMA_MIN_COOKIE 1
#define DMA_MAX_COOKIE INT_MAX
static inline int dma_submit_error(dma_cookie_t cookie)
{
@ -671,7 +670,7 @@ struct dma_device {
struct dma_async_tx_descriptor *(*device_prep_dma_cyclic)(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context);
unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
struct dma_chan *chan, struct dma_interleaved_template *xt,
unsigned long flags);
@ -746,7 +745,7 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
unsigned long flags)
{
return chan->device->device_prep_dma_cyclic(chan, buf_addr, buf_len,
period_len, dir, flags, NULL);
period_len, dir, flags);
}
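Callers are insulated from the signature change by this inline wrapper. For illustration, a typical cyclic setup on the client side; a sketch with placeholder names, assuming chan, buf_phys and period_len were obtained elsewhere:
/* Sketch: a four-period device-to-memory ring with a per-period interrupt. */
struct dma_async_tx_descriptor *desc;

desc = dmaengine_prep_dma_cyclic(chan, buf_phys, 4 * period_len, period_len,
				 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
if (!desc)
	return -EBUSY;			/* placeholder error handling */

desc->callback = period_done;		/* hypothetical completion callback */
desc->callback_param = drv;		/* hypothetical driver context */
dmaengine_submit(desc);
dma_async_issue_pending(chan);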
static inline struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(

View File

@ -41,6 +41,8 @@ extern struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
const char *name);
extern struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma);
extern struct dma_chan *of_dma_xlate_by_chan_id(struct of_phandle_args *dma_spec,
struct of_dma *ofdma);
#else
static inline int of_dma_controller_register(struct device_node *np,
struct dma_chan *(*of_dma_xlate)
@ -66,6 +68,8 @@ static inline struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_s
return NULL;
}
#define of_dma_xlate_by_chan_id NULL
#endif
#endif /* __LINUX_OF_DMA_H */

View File

@ -40,6 +40,7 @@ enum sdma_peripheral_type {
IMX_DMATYPE_ASRC, /* ASRC */
IMX_DMATYPE_ESAI, /* ESAI */
IMX_DMATYPE_SSI_DUAL, /* SSI Dual FIFO */
IMX_DMATYPE_ASRC_SP, /* Shared ASRC */
};
enum imx_dma_prio {

View File

@ -150,6 +150,8 @@ void edma_clear_event(unsigned channel);
void edma_pause(unsigned channel);
void edma_resume(unsigned channel);
void edma_assign_channel_eventq(unsigned channel, enum dma_event_q eventq_no);
struct edma_rsv_info {
const s16 (*rsv_chans)[2];

View File

@ -95,19 +95,21 @@ struct sh_dmae_pdata {
};
/* DMAOR definitions */
#define DMAOR_AE 0x00000004
#define DMAOR_AE 0x00000004 /* Address Error Flag */
#define DMAOR_NMIF 0x00000002
#define DMAOR_DME 0x00000001
#define DMAOR_DME 0x00000001 /* DMA Master Enable */
/* Definitions for the SuperH DMAC */
#define DM_INC 0x00004000
#define DM_DEC 0x00008000
#define DM_FIX 0x0000c000
#define SM_INC 0x00001000
#define SM_DEC 0x00002000
#define SM_FIX 0x00003000
#define CHCR_DE 0x00000001
#define CHCR_TE 0x00000002
#define CHCR_IE 0x00000004
#define DM_INC 0x00004000 /* Destination addresses are incremented */
#define DM_DEC 0x00008000 /* Destination addresses are decremented */
#define DM_FIX 0x0000c000 /* Destination address is fixed */
#define SM_INC 0x00001000 /* Source addresses are incremented */
#define SM_DEC 0x00002000 /* Source addresses are decremented */
#define SM_FIX 0x00003000 /* Source address is fixed */
#define RS_AUTO 0x00000400 /* Auto Request */
#define RS_ERS 0x00000800 /* DMA extended resource selector */
#define CHCR_DE 0x00000001 /* DMA Enable */
#define CHCR_TE 0x00000002 /* Transfer End Flag */
#define CHCR_IE 0x00000004 /* Interrupt Enable */
#endif