
Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (37 commits)
  Improve slave/cyclic DMA engine documentation
  dmaengine: pl08x: handle the rest of enums in pl08x_width
  DMA: PL08x: cleanup selection of burst size
  DMA: PL08x: avoid recalculating cctl at each prepare
  DMA: PL08x: cleanup selection of buswidth
  DMA: PL08x: constify plchan->cd and plat->slave_channels
  DMA: PL08x: separately store source/destination cctl
  DMA: PL08x: separately store source/destination slave address
  DMA: PL08x: clean up LLI debugging
  DMA: PL08x: select LLI bus only once per LLI setup
  DMA: PL08x: remove unused constants
  ARM: mxs-dma: reset after disable channel
  dma: intel_mid_dma: remove redundant pci_set_drvdata calls
  dma: mxs-dma: fix unterminated platform_device_id table
  dmaengine: pl330: make platform data optional
  dmaengine: imx-sdma: return proper error if kzalloc fails
  pch_dma: Fix CTL register access issue
  dmaengine: mxs-dma: skip request_irq for NO_IRQ
  dmaengine/coh901318: fix slave submission semantics
  dmaengine/ste_dma40: allow memory buswidth/burst to be configured
  ...

Fix trivial whitespace conflict in drivers/dma/mv_xor.c
Linus Torvalds 2011-08-01 13:46:37 -10:00
commit 12ff47e7f5
17 changed files with 607 additions and 394 deletions


@ -10,87 +10,181 @@ NOTE: For DMA Engine usage in async_tx please see:
Below is a guide to device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable only for slave DMA usage.
The slave DMA usage consists of following steps
The slave DMA usage consists of the following steps:
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction and wait for callback notification
4. Submit the transaction
5. Issue pending requests and wait for callback notification
1. Allocate a DMA slave channel
Channel allocation is slightly different in the slave DMA context, client
drivers typically need a channel from a particular DMA controller only and even
in some cases a specific channel is desired. To request a channel
dma_request_channel() API is used.
Interface:
struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
dma_filter_fn filter_fn,
void *filter_param);
where dma_filter_fn is defined as:
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
Channel allocation is slightly different in the slave DMA context:
client drivers typically need a channel from one particular DMA
controller, and in some cases even a specific channel is desired.
To request a channel, the dma_request_channel() API is used.
When the optional 'filter_fn' parameter is set to NULL dma_request_channel
simply returns the first channel that satisfies the capability mask. Otherwise,
when the mask parameter is insufficient for specifying the necessary channel,
the filter_fn routine can be used to disposition the available channels in the
system. The filter_fn routine is called once for each free channel in the
system. Upon seeing a suitable channel filter_fn returns DMA_ACK which flags
that channel to be the return value from dma_request_channel. A channel
allocated via this interface is exclusive to the caller, until
dma_release_channel() is called.
Interface:
struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
dma_filter_fn filter_fn,
void *filter_param);
where dma_filter_fn is defined as:
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
The 'filter_fn' parameter is optional, but highly recommended for
slave and cyclic channels as they typically need to obtain a specific
DMA channel.
When the optional 'filter_fn' parameter is NULL, dma_request_channel()
simply returns the first channel that satisfies the capability mask.
Otherwise, the 'filter_fn' routine will be called once for each free
channel which has a capability in 'mask'. 'filter_fn' is expected to
return 'true' when the desired DMA channel is found.
A channel allocated via this interface is exclusive to the caller,
until dma_release_channel() is called.
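As a rough sketch of this step (the filter callback, the mydev_* names
and the way the channel is matched are invented here for illustration,
not taken from any real driver), a client might do something like:

	/*
	 * Sketch only: assumes #include <linux/dmaengine.h> and a
	 * driver-private "struct mydev" holding the channel pointer
	 * and whatever data the filter needs.
	 */
	static bool mydev_dma_filter(struct dma_chan *chan, void *filter_param)
	{
		/* the matching rule is entirely controller specific;
		 * comparing against chan->private is just one option */
		return chan->private == filter_param;
	}

	static int mydev_request_dma(struct mydev *mydev)
	{
		dma_cap_mask_t mask;

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);

		mydev->chan = dma_request_channel(mask, mydev_dma_filter,
						  mydev->filter_data);
		if (!mydev->chan)
			return -ENODEV;	/* no suitable channel available */
		return 0;
	}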
2. Set slave and controller specific parameters
Next step is always to pass some specific information to the DMA driver. Most of
the generic information which a slave DMA can use is in struct dma_slave_config.
It allows the clients to specify DMA direction, DMA addresses, bus widths, DMA
burst lengths etc. If some DMA controllers have more parameters to be sent then
they should try to embed struct dma_slave_config in their controller specific
structure. That gives flexibility to client to pass more parameters, if
required.
Interface:
int dmaengine_slave_config(struct dma_chan *chan,
struct dma_slave_config *config)
Next step is always to pass some specific information to the DMA
driver. Most of the generic information which a slave DMA can use
is in struct dma_slave_config. This allows the clients to specify
DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
for the peripheral.
If a DMA controller has additional parameters, it should embed
struct dma_slave_config in its controller-specific structure. That
gives the client the flexibility to pass more parameters, if
required.
Interface:
int dmaengine_slave_config(struct dma_chan *chan,
struct dma_slave_config *config)
Please see the dma_slave_config structure definition in dmaengine.h
for a detailed explanation of the struct members. Please note
that the 'direction' member will be going away as it duplicates the
direction given in the prepare call.
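A minimal sketch of this step for a memory-to-peripheral channel; the
FIFO address, the bus width and the burst size used below are invented
values, not recommendations:

	struct dma_slave_config cfg = {
		.direction	= DMA_TO_DEVICE,
		.dst_addr	= mydev->fifo_phys,	/* hypothetical FIFO bus address */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst	= 8,
	};
	int ret;

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		return ret;	/* the controller rejected this configuration */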
3. Get a descriptor for transaction
For slave usage the various modes of slave transfers supported by the
DMA-engine are:
slave_sg - DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic - Perform a cyclic DMA operation from/to a peripheral till the
For slave usage the various modes of slave transfers supported by the
DMA-engine are:
slave_sg - DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic - Perform a cyclic DMA operation from/to a peripheral till the
operation is explicitly stopped.
The non NULL return of this transfer API represents a "descriptor" for the given
transaction.
Interface:
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_sg)(
struct dma_chan *chan,
struct scatterlist *dst_sg, unsigned int dst_nents,
struct scatterlist *src_sg, unsigned int src_nents,
A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.
Interface:
struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_data_direction direction,
unsigned long flags);
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_data_direction direction);
4. Submit the transaction and wait for callback notification
To schedule the transaction to be executed by the DMA device, the "descriptor"
returned in (3) above needs to be submitted.
To tell the dma driver that a transaction is ready to be serviced, the
descriptor->submit() callback needs to be invoked. This chains the descriptor to
the pending queue.
The transactions in the pending queue can be activated by calling the
issue_pending API. If channel is idle then the first transaction in queue is
started and subsequent ones queued up.
On completion of the DMA operation the next in queue is submitted and a tasklet
triggered. The tasklet would then call the client driver completion callback
routine for notification, if set.
Interface:
void dma_async_issue_pending(struct dma_chan *chan);
The peripheral driver is expected to have mapped the scatterlist for
the DMA operation prior to calling device_prep_slave_sg, and must
keep the scatterlist mapped until the DMA operation has completed.
The scatterlist must be mapped using the DMA struct device. So,
normal setup should look like this:
==============================================================================
nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
if (nr_sg == 0)
/* error */
Additional usage notes for dma driver writers
1/ Although the DMA engine API specifies that completion callback routines
cannot submit any new operations, typically for slave DMA a subsequent
transaction may not be available for submission before the callback routine
is called, so this restriction does not apply to DMA-slave devices. They
should, however, take care to drop any spin-lock they might be holding before
calling the callback routine.
desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
direction, flags);
Once a descriptor has been obtained, the callback information can be
added and the descriptor must then be submitted. Some DMA engine
drivers may hold a spinlock between a successful preparation and
submission so it is important that these two operations are closely
paired.
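Continuing the sketch, callback information might be attached as below
before submission; mydev_dma_complete() and the completion it signals
are invented for illustration:

	static void mydev_dma_complete(void *param)
	{
		struct mydev *mydev = param;

		/* runs from the DMA engine's tasklet once the transfer ends */
		complete(&mydev->dma_done);
	}

	/* ... after the device_prep_slave_sg() call shown above ... */
	desc->callback = mydev_dma_complete;
	desc->callback_param = mydev;
	/* the descriptor can now be submitted, see 4. below */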
Note:
Although the async_tx API specifies that completion callback
routines cannot submit any new operations, this is not the
case for slave/cyclic DMA.
For slave DMA, the subsequent transaction may not be available
for submission prior to the callback function being invoked, so
slave DMA callbacks are permitted to prepare and submit a new
transaction.
For cyclic DMA, a callback function may wish to terminate the
DMA via dmaengine_terminate_all().
Therefore, it is important that DMA engine drivers drop any
locks before calling the callback function, as holding a lock
at that point may cause a deadlock.
Note that callbacks will always be invoked from the DMA
engine's tasklet, never from interrupt context.
4. Submit the transaction
Once the descriptor has been prepared and the callback information
added, it must be placed on the DMA engine driver's pending queue.
Interface:
dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)
This returns a cookie that can be used to check the progress of DMA
engine activity via other DMA engine calls not covered in this document.
dmaengine_submit() will not start the DMA operation; it merely adds
it to the pending queue. For this, see step 5, dma_async_issue_pending.
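For example (error handling is illustrative only):

	dma_cookie_t cookie;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return -EIO;	/* the driver refused to queue the descriptor */

	/* nothing is running yet: the descriptor sits on the pending
	 * queue until dma_async_issue_pending() is called (step 5) */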
5. Issue pending DMA requests and wait for callback notification
The transactions in the pending queue can be activated by calling the
issue_pending API. If the channel is idle then the first transaction in
the queue is started and subsequent ones are queued up.
On completion of each DMA operation, the next in queue is started and
a tasklet triggered. The tasklet will then call the client driver
completion callback routine for notification, if set.
Interface:
void dma_async_issue_pending(struct dma_chan *chan);
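A sketch of this final step, reusing the hypothetical completion set up
in the callback of step 3:

	dma_async_issue_pending(chan);

	/* wait for mydev_dma_complete() to signal the end of the transfer */
	wait_for_completion(&mydev->dma_done);

	/* the scatterlist may only be unmapped once the DMA has completed */
	dma_unmap_sg(chan->device->dev, sgl, sg_len, direction);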
Further APIs:
1. int dmaengine_terminate_all(struct dma_chan *chan)
This causes all activity for the DMA channel to be stopped, and may
discard data in the DMA FIFO which hasn't been fully transferred.
No callback functions will be called for any incomplete transfers.
2. int dmaengine_pause(struct dma_chan *chan)
This pauses activity on the DMA channel without data loss.
3. int dmaengine_resume(struct dma_chan *chan)
Resume a previously paused DMA channel. It is invalid to resume a
channel which is not currently paused.
4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)
This can be used to check the status of the channel. Please see
the documentation in include/linux/dmaengine.h for a more complete
description of this API.
This can be used in conjunction with dma_async_is_complete() and
the cookie returned from 'descriptor->submit()' to check for
completion of a specific DMA transaction.
Note:
Not all DMA engine drivers can return reliable information for
a running DMA channel. It is recommended that DMA engine users
pause or stop (via dmaengine_terminate_all) the channel before
using this API.
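An illustrative sketch of checking one specific transaction; DMA_SUCCESS
is the status value in use at the time of this document:

	enum dma_status status;
	dma_cookie_t last, used;
	bool done;

	status = dma_async_is_tx_complete(chan, cookie, &last, &used);

	/* dma_async_is_complete() derives the same answer for this
	 * cookie from the counters returned above */
	done = (dma_async_is_complete(cookie, last, used) == DMA_SUCCESS);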


@ -9,6 +9,5 @@ TODO for slave dma
- mxs-dma.c
- dw_dmac
- intel_mid_dma
- ste_dma40
4. Check other subsystems for dma drivers and merge/move to dmaengine
5. Remove dma_slave_config's dma direction.


@ -156,14 +156,10 @@ struct pl08x_driver_data {
#define PL08X_BOUNDARY_SHIFT (10) /* 1KB 0x400 */
#define PL08X_BOUNDARY_SIZE (1 << PL08X_BOUNDARY_SHIFT)
/* Minimum period between work queue runs */
#define PL08X_WQ_PERIODMIN 20
/* Size (bytes) of each LLI buffer allocated for one transfer */
# define PL08X_LLI_TSFR_SIZE 0x2000
/* Maximum times we call dma_pool_alloc on this pool without freeing */
#define PL08X_MAX_ALLOCS 0x40
#define MAX_NUM_TSFR_LLIS (PL08X_LLI_TSFR_SIZE/sizeof(struct pl08x_lli))
#define PL08X_ALIGN 8
@ -495,10 +491,10 @@ static inline u32 pl08x_cctl_bits(u32 cctl, u8 srcwidth, u8 dstwidth,
struct pl08x_lli_build_data {
struct pl08x_txd *txd;
struct pl08x_driver_data *pl08x;
struct pl08x_bus_data srcbus;
struct pl08x_bus_data dstbus;
size_t remainder;
u32 lli_bus;
};
/*
@ -551,8 +547,7 @@ static void pl08x_fill_lli_for_desc(struct pl08x_lli_build_data *bd,
llis_va[num_llis].src = bd->srcbus.addr;
llis_va[num_llis].dst = bd->dstbus.addr;
llis_va[num_llis].lli = llis_bus + (num_llis + 1) * sizeof(struct pl08x_lli);
if (bd->pl08x->lli_buses & PL08X_AHB2)
llis_va[num_llis].lli |= PL080_LLI_LM_AHB2;
llis_va[num_llis].lli |= bd->lli_bus;
if (cctl & PL080_CONTROL_SRC_INCR)
bd->srcbus.addr += len;
@ -605,9 +600,9 @@ static int pl08x_fill_llis_for_desc(struct pl08x_driver_data *pl08x,
cctl = txd->cctl;
bd.txd = txd;
bd.pl08x = pl08x;
bd.srcbus.addr = txd->src_addr;
bd.dstbus.addr = txd->dst_addr;
bd.lli_bus = (pl08x->lli_buses & PL08X_AHB2) ? PL080_LLI_LM_AHB2 : 0;
/* Find maximum width of the source bus */
bd.srcbus.maxwidth =
@ -622,25 +617,15 @@ static int pl08x_fill_llis_for_desc(struct pl08x_driver_data *pl08x,
/* Set up the bus widths to the maximum */
bd.srcbus.buswidth = bd.srcbus.maxwidth;
bd.dstbus.buswidth = bd.dstbus.maxwidth;
dev_vdbg(&pl08x->adev->dev,
"%s source bus is %d bytes wide, dest bus is %d bytes wide\n",
__func__, bd.srcbus.buswidth, bd.dstbus.buswidth);
/*
* Bytes transferred == tsize * MIN(buswidths), not max(buswidths)
*/
max_bytes_per_lli = min(bd.srcbus.buswidth, bd.dstbus.buswidth) *
PL080_CONTROL_TRANSFER_SIZE_MASK;
dev_vdbg(&pl08x->adev->dev,
"%s max bytes per lli = %zu\n",
__func__, max_bytes_per_lli);
/* We need to count this down to zero */
bd.remainder = txd->len;
dev_vdbg(&pl08x->adev->dev,
"%s remainder = %zu\n",
__func__, bd.remainder);
/*
* Choose bus to align to
@ -649,6 +634,16 @@ static int pl08x_fill_llis_for_desc(struct pl08x_driver_data *pl08x,
*/
pl08x_choose_master_bus(&bd, &mbus, &sbus, cctl);
dev_vdbg(&pl08x->adev->dev, "src=0x%08x%s/%u dst=0x%08x%s/%u len=%zu llimax=%zu\n",
bd.srcbus.addr, cctl & PL080_CONTROL_SRC_INCR ? "+" : "",
bd.srcbus.buswidth,
bd.dstbus.addr, cctl & PL080_CONTROL_DST_INCR ? "+" : "",
bd.dstbus.buswidth,
bd.remainder, max_bytes_per_lli);
dev_vdbg(&pl08x->adev->dev, "mbus=%s sbus=%s\n",
mbus == &bd.srcbus ? "src" : "dst",
sbus == &bd.srcbus ? "src" : "dst");
if (txd->len < mbus->buswidth) {
/* Less than a bus width available - send as single bytes */
while (bd.remainder) {
@ -840,15 +835,14 @@ static int pl08x_fill_llis_for_desc(struct pl08x_driver_data *pl08x,
{
int i;
dev_vdbg(&pl08x->adev->dev,
"%-3s %-9s %-10s %-10s %-10s %s\n",
"lli", "", "csrc", "cdst", "clli", "cctl");
for (i = 0; i < num_llis; i++) {
dev_vdbg(&pl08x->adev->dev,
"lli %d @%p: csrc=0x%08x, cdst=0x%08x, cctl=0x%08x, clli=0x%08x\n",
i,
&llis_va[i],
llis_va[i].src,
llis_va[i].dst,
llis_va[i].cctl,
llis_va[i].lli
"%3d @%p: 0x%08x 0x%08x 0x%08x 0x%08x\n",
i, &llis_va[i], llis_va[i].src,
llis_va[i].dst, llis_va[i].lli, llis_va[i].cctl
);
}
}
@ -1054,64 +1048,105 @@ pl08x_dma_tx_status(struct dma_chan *chan,
/* PrimeCell DMA extension */
struct burst_table {
int burstwords;
u32 burstwords;
u32 reg;
};
static const struct burst_table burst_sizes[] = {
{
.burstwords = 256,
.reg = (PL080_BSIZE_256 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_256 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_256,
},
{
.burstwords = 128,
.reg = (PL080_BSIZE_128 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_128 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_128,
},
{
.burstwords = 64,
.reg = (PL080_BSIZE_64 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_64 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_64,
},
{
.burstwords = 32,
.reg = (PL080_BSIZE_32 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_32 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_32,
},
{
.burstwords = 16,
.reg = (PL080_BSIZE_16 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_16 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_16,
},
{
.burstwords = 8,
.reg = (PL080_BSIZE_8 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_8 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_8,
},
{
.burstwords = 4,
.reg = (PL080_BSIZE_4 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_4 << PL080_CONTROL_DB_SIZE_SHIFT),
.reg = PL080_BSIZE_4,
},
{
.burstwords = 1,
.reg = (PL080_BSIZE_1 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_1 << PL080_CONTROL_DB_SIZE_SHIFT),
.burstwords = 0,
.reg = PL080_BSIZE_1,
},
};
/*
* Given the source and destination available bus masks, select which
* will be routed to each port. We try to have source and destination
* on separate ports, but always respect the allowable settings.
*/
static u32 pl08x_select_bus(u8 src, u8 dst)
{
u32 cctl = 0;
if (!(dst & PL08X_AHB1) || ((dst & PL08X_AHB2) && (src & PL08X_AHB1)))
cctl |= PL080_CONTROL_DST_AHB2;
if (!(src & PL08X_AHB1) || ((src & PL08X_AHB2) && !(dst & PL08X_AHB2)))
cctl |= PL080_CONTROL_SRC_AHB2;
return cctl;
}
static u32 pl08x_cctl(u32 cctl)
{
cctl &= ~(PL080_CONTROL_SRC_AHB2 | PL080_CONTROL_DST_AHB2 |
PL080_CONTROL_SRC_INCR | PL080_CONTROL_DST_INCR |
PL080_CONTROL_PROT_MASK);
/* Access the cell in privileged mode, non-bufferable, non-cacheable */
return cctl | PL080_CONTROL_PROT_SYS;
}
static u32 pl08x_width(enum dma_slave_buswidth width)
{
switch (width) {
case DMA_SLAVE_BUSWIDTH_1_BYTE:
return PL080_WIDTH_8BIT;
case DMA_SLAVE_BUSWIDTH_2_BYTES:
return PL080_WIDTH_16BIT;
case DMA_SLAVE_BUSWIDTH_4_BYTES:
return PL080_WIDTH_32BIT;
default:
return ~0;
}
}
static u32 pl08x_burst(u32 maxburst)
{
int i;
for (i = 0; i < ARRAY_SIZE(burst_sizes); i++)
if (burst_sizes[i].burstwords <= maxburst)
break;
return burst_sizes[i].reg;
}
static int dma_set_runtime_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
struct pl08x_channel_data *cd = plchan->cd;
enum dma_slave_buswidth addr_width;
dma_addr_t addr;
u32 maxburst;
u32 width, burst, maxburst;
u32 cctl = 0;
int i;
if (!plchan->slave)
return -EINVAL;
@ -1119,11 +1154,9 @@ static int dma_set_runtime_config(struct dma_chan *chan,
/* Transfer direction */
plchan->runtime_direction = config->direction;
if (config->direction == DMA_TO_DEVICE) {
addr = config->dst_addr;
addr_width = config->dst_addr_width;
maxburst = config->dst_maxburst;
} else if (config->direction == DMA_FROM_DEVICE) {
addr = config->src_addr;
addr_width = config->src_addr_width;
maxburst = config->src_maxburst;
} else {
@ -1132,46 +1165,40 @@ static int dma_set_runtime_config(struct dma_chan *chan,
return -EINVAL;
}
switch (addr_width) {
case DMA_SLAVE_BUSWIDTH_1_BYTE:
cctl |= (PL080_WIDTH_8BIT << PL080_CONTROL_SWIDTH_SHIFT) |
(PL080_WIDTH_8BIT << PL080_CONTROL_DWIDTH_SHIFT);
break;
case DMA_SLAVE_BUSWIDTH_2_BYTES:
cctl |= (PL080_WIDTH_16BIT << PL080_CONTROL_SWIDTH_SHIFT) |
(PL080_WIDTH_16BIT << PL080_CONTROL_DWIDTH_SHIFT);
break;
case DMA_SLAVE_BUSWIDTH_4_BYTES:
cctl |= (PL080_WIDTH_32BIT << PL080_CONTROL_SWIDTH_SHIFT) |
(PL080_WIDTH_32BIT << PL080_CONTROL_DWIDTH_SHIFT);
break;
default:
width = pl08x_width(addr_width);
if (width == ~0) {
dev_err(&pl08x->adev->dev,
"bad runtime_config: alien address width\n");
return -EINVAL;
}
cctl |= width << PL080_CONTROL_SWIDTH_SHIFT;
cctl |= width << PL080_CONTROL_DWIDTH_SHIFT;
/*
* Now decide on a maxburst:
* If this channel will only request single transfers, set this
* down to ONE element. Also select one element if no maxburst
* is specified.
*/
if (plchan->cd->single || maxburst == 0) {
cctl |= (PL080_BSIZE_1 << PL080_CONTROL_SB_SIZE_SHIFT) |
(PL080_BSIZE_1 << PL080_CONTROL_DB_SIZE_SHIFT);
if (plchan->cd->single)
maxburst = 1;
burst = pl08x_burst(maxburst);
cctl |= burst << PL080_CONTROL_SB_SIZE_SHIFT;
cctl |= burst << PL080_CONTROL_DB_SIZE_SHIFT;
if (plchan->runtime_direction == DMA_FROM_DEVICE) {
plchan->src_addr = config->src_addr;
plchan->src_cctl = pl08x_cctl(cctl) | PL080_CONTROL_DST_INCR |
pl08x_select_bus(plchan->cd->periph_buses,
pl08x->mem_buses);
} else {
for (i = 0; i < ARRAY_SIZE(burst_sizes); i++)
if (burst_sizes[i].burstwords <= maxburst)
break;
cctl |= burst_sizes[i].reg;
plchan->dst_addr = config->dst_addr;
plchan->dst_cctl = pl08x_cctl(cctl) | PL080_CONTROL_SRC_INCR |
pl08x_select_bus(pl08x->mem_buses,
plchan->cd->periph_buses);
}
plchan->runtime_addr = addr;
/* Modify the default channel data to fit PrimeCell request */
cd->cctl = cctl;
dev_dbg(&pl08x->adev->dev,
"configured channel %s (%s) for %s, data width %d, "
"maxburst %d words, LE, CCTL=0x%08x\n",
@ -1270,23 +1297,6 @@ static int pl08x_prep_channel_resources(struct pl08x_dma_chan *plchan,
return 0;
}
/*
* Given the source and destination available bus masks, select which
* will be routed to each port. We try to have source and destination
* on separate ports, but always respect the allowable settings.
*/
static u32 pl08x_select_bus(struct pl08x_driver_data *pl08x, u8 src, u8 dst)
{
u32 cctl = 0;
if (!(dst & PL08X_AHB1) || ((dst & PL08X_AHB2) && (src & PL08X_AHB1)))
cctl |= PL080_CONTROL_DST_AHB2;
if (!(src & PL08X_AHB1) || ((src & PL08X_AHB2) && !(dst & PL08X_AHB2)))
cctl |= PL080_CONTROL_SRC_AHB2;
return cctl;
}
static struct pl08x_txd *pl08x_get_txd(struct pl08x_dma_chan *plchan,
unsigned long flags)
{
@ -1338,8 +1348,8 @@ static struct dma_async_tx_descriptor *pl08x_prep_dma_memcpy(
txd->cctl |= PL080_CONTROL_SRC_INCR | PL080_CONTROL_DST_INCR;
if (pl08x->vd->dualmaster)
txd->cctl |= pl08x_select_bus(pl08x,
pl08x->mem_buses, pl08x->mem_buses);
txd->cctl |= pl08x_select_bus(pl08x->mem_buses,
pl08x->mem_buses);
ret = pl08x_prep_channel_resources(plchan, txd);
if (ret)
@ -1356,7 +1366,6 @@ static struct dma_async_tx_descriptor *pl08x_prep_slave_sg(
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
struct pl08x_txd *txd;
u8 src_buses, dst_buses;
int ret;
/*
@ -1390,42 +1399,22 @@ static struct dma_async_tx_descriptor *pl08x_prep_slave_sg(
txd->direction = direction;
txd->len = sgl->length;
txd->cctl = plchan->cd->cctl &
~(PL080_CONTROL_SRC_AHB2 | PL080_CONTROL_DST_AHB2 |
PL080_CONTROL_SRC_INCR | PL080_CONTROL_DST_INCR |
PL080_CONTROL_PROT_MASK);
/* Access the cell in privileged mode, non-bufferable, non-cacheable */
txd->cctl |= PL080_CONTROL_PROT_SYS;
if (direction == DMA_TO_DEVICE) {
txd->ccfg |= PL080_FLOW_MEM2PER << PL080_CONFIG_FLOW_CONTROL_SHIFT;
txd->cctl |= PL080_CONTROL_SRC_INCR;
txd->cctl = plchan->dst_cctl;
txd->src_addr = sgl->dma_address;
if (plchan->runtime_addr)
txd->dst_addr = plchan->runtime_addr;
else
txd->dst_addr = plchan->cd->addr;
src_buses = pl08x->mem_buses;
dst_buses = plchan->cd->periph_buses;
txd->dst_addr = plchan->dst_addr;
} else if (direction == DMA_FROM_DEVICE) {
txd->ccfg |= PL080_FLOW_PER2MEM << PL080_CONFIG_FLOW_CONTROL_SHIFT;
txd->cctl |= PL080_CONTROL_DST_INCR;
if (plchan->runtime_addr)
txd->src_addr = plchan->runtime_addr;
else
txd->src_addr = plchan->cd->addr;
txd->cctl = plchan->src_cctl;
txd->src_addr = plchan->src_addr;
txd->dst_addr = sgl->dma_address;
src_buses = plchan->cd->periph_buses;
dst_buses = pl08x->mem_buses;
} else {
dev_err(&pl08x->adev->dev,
"%s direction unsupported\n", __func__);
return NULL;
}
txd->cctl |= pl08x_select_bus(pl08x, src_buses, dst_buses);
ret = pl08x_prep_channel_resources(plchan, txd);
if (ret)
return NULL;
@ -1676,6 +1665,20 @@ static irqreturn_t pl08x_irq(int irq, void *dev)
return mask ? IRQ_HANDLED : IRQ_NONE;
}
static void pl08x_dma_slave_init(struct pl08x_dma_chan *chan)
{
u32 cctl = pl08x_cctl(chan->cd->cctl);
chan->slave = true;
chan->name = chan->cd->bus_id;
chan->src_addr = chan->cd->addr;
chan->dst_addr = chan->cd->addr;
chan->src_cctl = cctl | PL080_CONTROL_DST_INCR |
pl08x_select_bus(chan->cd->periph_buses, chan->host->mem_buses);
chan->dst_cctl = cctl | PL080_CONTROL_SRC_INCR |
pl08x_select_bus(chan->host->mem_buses, chan->cd->periph_buses);
}
/*
* Initialise the DMAC memcpy/slave channels.
* Make a local wrapper to hold required data
@ -1707,9 +1710,8 @@ static int pl08x_dma_init_virtual_channels(struct pl08x_driver_data *pl08x,
chan->state = PL08X_CHAN_IDLE;
if (slave) {
chan->slave = true;
chan->name = pl08x->pd->slave_channels[i].bus_id;
chan->cd = &pl08x->pd->slave_channels[i];
pl08x_dma_slave_init(chan);
} else {
chan->cd = &pl08x->pd->memcpy_channel;
chan->name = kasprintf(GFP_KERNEL, "memcpy%d", i);


@ -1216,7 +1216,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
atdma->dma_common.cap_mask = pdata->cap_mask;
atdma->all_chan_mask = (1 << pdata->nr_channels) - 1;
size = io->end - io->start + 1;
size = resource_size(io);
if (!request_mem_region(io->start, size, pdev->dev.driver->name)) {
err = -EBUSY;
goto err_kfree;
@ -1362,7 +1362,7 @@ static int __exit at_dma_remove(struct platform_device *pdev)
atdma->regs = NULL;
io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
release_mem_region(io->start, io->end - io->start + 1);
release_mem_region(io->start, resource_size(io));
kfree(atdma);


@ -41,6 +41,8 @@ struct coh901318_desc {
struct coh901318_lli *lli;
enum dma_data_direction dir;
unsigned long flags;
u32 head_config;
u32 head_ctrl;
};
struct coh901318_base {
@ -661,6 +663,9 @@ static struct coh901318_desc *coh901318_queue_start(struct coh901318_chan *cohc)
coh901318_desc_submit(cohc, cohd);
/* Program the transaction head */
coh901318_set_conf(cohc, cohd->head_config);
coh901318_set_ctrl(cohc, cohd->head_ctrl);
coh901318_prep_linked_list(cohc, cohd->lli);
/* start dma job on this channel */
@ -1091,8 +1096,6 @@ coh901318_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
} else
goto err_direction;
coh901318_set_conf(cohc, config);
/* The dma only supports transmitting packages up to
* MAX_DMA_PACKET_SIZE. Calculate to total number of
* dma elemts required to send the entire sg list
@ -1129,16 +1132,18 @@ coh901318_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
if (ret)
goto err_lli_fill;
/*
* Set the default ctrl for the channel to the one from the lli,
* things may have changed due to odd buffer alignment etc.
*/
coh901318_set_ctrl(cohc, lli->control);
COH_DBG(coh901318_list_print(cohc, lli));
/* Pick a descriptor to handle this transfer */
cohd = coh901318_desc_get(cohc);
cohd->head_config = config;
/*
* Set the default head ctrl for the channel to the one from the
* lli, things may have changed due to odd buffer alignment
* etc.
*/
cohd->head_ctrl = lli->control;
cohd->dir = direction;
cohd->flags = flags;
cohd->desc.tx_submit = coh901318_tx_submit;


@ -510,8 +510,8 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
dma_chan_name(chan));
list_del_rcu(&device->global_node);
} else if (err)
pr_err("dmaengine: failed to get %s: (%d)\n",
dma_chan_name(chan), err);
pr_debug("dmaengine: failed to get %s: (%d)\n",
dma_chan_name(chan), err);
else
break;
if (--device->privatecnt == 0)


@ -902,7 +902,7 @@ static void ep93xx_dma_free_chan_resources(struct dma_chan *chan)
*
* Returns a valid DMA descriptor or %NULL in case of failure.
*/
struct dma_async_tx_descriptor *
static struct dma_async_tx_descriptor *
ep93xx_dma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest,
dma_addr_t src, size_t len, unsigned long flags)
{


@ -1305,8 +1305,10 @@ static int __init sdma_probe(struct platform_device *pdev)
goto err_request_irq;
sdma->script_addrs = kzalloc(sizeof(*sdma->script_addrs), GFP_KERNEL);
if (!sdma->script_addrs)
if (!sdma->script_addrs) {
ret = -ENOMEM;
goto err_alloc;
}
if (of_id)
pdev->id_entry = of_id->data;


@ -1351,7 +1351,6 @@ int dma_suspend(struct pci_dev *pci, pm_message_t state)
return -EAGAIN;
}
device->state = SUSPENDED;
pci_set_drvdata(pci, device);
pci_save_state(pci);
pci_disable_device(pci);
pci_set_power_state(pci, PCI_D3hot);
@ -1380,7 +1379,6 @@ int dma_resume(struct pci_dev *pci)
}
device->state = RUNNING;
iowrite32(REG_BIT0, device->dma_base + DMA_CFG);
pci_set_drvdata(pci, device);
return 0;
}


@ -1706,16 +1706,14 @@ static int __init ipu_probe(struct platform_device *pdev)
ipu_data.irq_fn, ipu_data.irq_err, ipu_data.irq_base);
/* Remap IPU common registers */
ipu_data.reg_ipu = ioremap(mem_ipu->start,
mem_ipu->end - mem_ipu->start + 1);
ipu_data.reg_ipu = ioremap(mem_ipu->start, resource_size(mem_ipu));
if (!ipu_data.reg_ipu) {
ret = -ENOMEM;
goto err_ioremap_ipu;
}
/* Remap Image Converter and Image DMA Controller registers */
ipu_data.reg_ic = ioremap(mem_ic->start,
mem_ic->end - mem_ic->start + 1);
ipu_data.reg_ic = ioremap(mem_ic->start, resource_size(mem_ic));
if (!ipu_data.reg_ic) {
ret = -ENOMEM;
goto err_ioremap_ic;


@ -1304,7 +1304,8 @@ static int mv_xor_shared_probe(struct platform_device *pdev)
if (!res)
return -ENODEV;
msp->xor_base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
msp->xor_base = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
if (!msp->xor_base)
return -EBUSY;


@ -327,10 +327,12 @@ static int mxs_dma_alloc_chan_resources(struct dma_chan *chan)
memset(mxs_chan->ccw, 0, PAGE_SIZE);
ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler,
0, "mxs-dma", mxs_dma);
if (ret)
goto err_irq;
if (mxs_chan->chan_irq != NO_IRQ) {
ret = request_irq(mxs_chan->chan_irq, mxs_dma_int_handler,
0, "mxs-dma", mxs_dma);
if (ret)
goto err_irq;
}
ret = clk_enable(mxs_dma->clk);
if (ret)
@ -535,6 +537,7 @@ static int mxs_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
switch (cmd) {
case DMA_TERMINATE_ALL:
mxs_dma_disable_chan(mxs_chan);
mxs_dma_reset_chan(mxs_chan);
break;
case DMA_PAUSE:
mxs_dma_pause_chan(mxs_chan);
@ -707,6 +710,8 @@ static struct platform_device_id mxs_dma_type[] = {
}, {
.name = "mxs-dma-apbx",
.driver_data = MXS_DMA_APBX,
}, {
/* end of list */
}
};


@ -45,7 +45,8 @@
#define DMA_STATUS_MASK_BITS 0x3
#define DMA_STATUS_SHIFT_BITS 16
#define DMA_STATUS_IRQ(x) (0x1 << (x))
#define DMA_STATUS_ERR(x) (0x1 << ((x) + 8))
#define DMA_STATUS0_ERR(x) (0x1 << ((x) + 8))
#define DMA_STATUS2_ERR(x) (0x1 << (x))
#define DMA_DESC_WIDTH_SHIFT_BITS 12
#define DMA_DESC_WIDTH_1_BYTE (0x3 << DMA_DESC_WIDTH_SHIFT_BITS)
@ -61,6 +62,9 @@
#define MAX_CHAN_NR 8
#define DMA_MASK_CTL0_MODE 0x33333333
#define DMA_MASK_CTL2_MODE 0x00003333
static unsigned int init_nr_desc_per_channel = 64;
module_param(init_nr_desc_per_channel, uint, 0644);
MODULE_PARM_DESC(init_nr_desc_per_channel,
@ -133,6 +137,7 @@ struct pch_dma {
#define PCH_DMA_CTL3 0x0C
#define PCH_DMA_STS0 0x10
#define PCH_DMA_STS1 0x14
#define PCH_DMA_STS2 0x18
#define dma_readl(pd, name) \
readl((pd)->membase + PCH_DMA_##name)
@ -183,13 +188,19 @@ static void pdc_enable_irq(struct dma_chan *chan, int enable)
{
struct pch_dma *pd = to_pd(chan->device);
u32 val;
int pos;
if (chan->chan_id < 8)
pos = chan->chan_id;
else
pos = chan->chan_id + 8;
val = dma_readl(pd, CTL2);
if (enable)
val |= 0x1 << chan->chan_id;
val |= 0x1 << pos;
else
val &= ~(0x1 << chan->chan_id);
val &= ~(0x1 << pos);
dma_writel(pd, CTL2, val);
@ -202,10 +213,17 @@ static void pdc_set_dir(struct dma_chan *chan)
struct pch_dma_chan *pd_chan = to_pd_chan(chan);
struct pch_dma *pd = to_pd(chan->device);
u32 val;
u32 mask_mode;
u32 mask_ctl;
if (chan->chan_id < 8) {
val = dma_readl(pd, CTL0);
mask_mode = DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * chan->chan_id);
mask_ctl = DMA_MASK_CTL0_MODE & ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * chan->chan_id));
val &= mask_mode;
if (pd_chan->dir == DMA_TO_DEVICE)
val |= 0x1 << (DMA_CTL0_BITS_PER_CH * chan->chan_id +
DMA_CTL0_DIR_SHIFT_BITS);
@ -213,18 +231,24 @@ static void pdc_set_dir(struct dma_chan *chan)
val &= ~(0x1 << (DMA_CTL0_BITS_PER_CH * chan->chan_id +
DMA_CTL0_DIR_SHIFT_BITS));
val |= mask_ctl;
dma_writel(pd, CTL0, val);
} else {
int ch = chan->chan_id - 8; /* ch8-->0 ch9-->1 ... ch11->3 */
val = dma_readl(pd, CTL3);
mask_mode = DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * ch);
mask_ctl = DMA_MASK_CTL2_MODE & ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * ch));
val &= mask_mode;
if (pd_chan->dir == DMA_TO_DEVICE)
val |= 0x1 << (DMA_CTL0_BITS_PER_CH * ch +
DMA_CTL0_DIR_SHIFT_BITS);
else
val &= ~(0x1 << (DMA_CTL0_BITS_PER_CH * ch +
DMA_CTL0_DIR_SHIFT_BITS));
val |= mask_ctl;
dma_writel(pd, CTL3, val);
}
@ -236,33 +260,37 @@ static void pdc_set_mode(struct dma_chan *chan, u32 mode)
{
struct pch_dma *pd = to_pd(chan->device);
u32 val;
u32 mask_ctl;
u32 mask_dir;
if (chan->chan_id < 8) {
mask_ctl = DMA_MASK_CTL0_MODE & ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * chan->chan_id));
mask_dir = 1 << (DMA_CTL0_BITS_PER_CH * chan->chan_id +\
DMA_CTL0_DIR_SHIFT_BITS);
val = dma_readl(pd, CTL0);
val &= ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * chan->chan_id));
val &= mask_dir;
val |= mode << (DMA_CTL0_BITS_PER_CH * chan->chan_id);
val |= mask_ctl;
dma_writel(pd, CTL0, val);
} else {
int ch = chan->chan_id - 8; /* ch8-->0 ch9-->1 ... ch11->3 */
mask_ctl = DMA_MASK_CTL2_MODE & ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * ch));
mask_dir = 1 << (DMA_CTL0_BITS_PER_CH * ch +\
DMA_CTL0_DIR_SHIFT_BITS);
val = dma_readl(pd, CTL3);
val &= ~(DMA_CTL0_MODE_MASK_BITS <<
(DMA_CTL0_BITS_PER_CH * ch));
val &= mask_dir;
val |= mode << (DMA_CTL0_BITS_PER_CH * ch);
val |= mask_ctl;
dma_writel(pd, CTL3, val);
}
dev_dbg(chan2dev(chan), "pdc_set_mode: chan %d -> %x\n",
chan->chan_id, val);
}
static u32 pdc_get_status(struct pch_dma_chan *pd_chan)
static u32 pdc_get_status0(struct pch_dma_chan *pd_chan)
{
struct pch_dma *pd = to_pd(pd_chan->chan.device);
u32 val;
@ -272,9 +300,27 @@ static u32 pdc_get_status(struct pch_dma_chan *pd_chan)
DMA_STATUS_BITS_PER_CH * pd_chan->chan.chan_id));
}
static u32 pdc_get_status2(struct pch_dma_chan *pd_chan)
{
struct pch_dma *pd = to_pd(pd_chan->chan.device);
u32 val;
val = dma_readl(pd, STS2);
return DMA_STATUS_MASK_BITS & (val >> (DMA_STATUS_SHIFT_BITS +
DMA_STATUS_BITS_PER_CH * (pd_chan->chan.chan_id - 8)));
}
static bool pdc_is_idle(struct pch_dma_chan *pd_chan)
{
if (pdc_get_status(pd_chan) == DMA_STATUS_IDLE)
u32 sts;
if (pd_chan->chan.chan_id < 8)
sts = pdc_get_status0(pd_chan);
else
sts = pdc_get_status2(pd_chan);
if (sts == DMA_STATUS_IDLE)
return true;
else
return false;
@ -495,11 +541,11 @@ static int pd_alloc_chan_resources(struct dma_chan *chan)
list_add_tail(&desc->desc_node, &tmp_list);
}
spin_lock_bh(&pd_chan->lock);
spin_lock_irq(&pd_chan->lock);
list_splice(&tmp_list, &pd_chan->free_list);
pd_chan->descs_allocated = i;
pd_chan->completed_cookie = chan->cookie = 1;
spin_unlock_bh(&pd_chan->lock);
spin_unlock_irq(&pd_chan->lock);
pdc_enable_irq(chan, 1);
@ -517,10 +563,10 @@ static void pd_free_chan_resources(struct dma_chan *chan)
BUG_ON(!list_empty(&pd_chan->active_list));
BUG_ON(!list_empty(&pd_chan->queue));
spin_lock_bh(&pd_chan->lock);
spin_lock_irq(&pd_chan->lock);
list_splice_init(&pd_chan->free_list, &tmp_list);
pd_chan->descs_allocated = 0;
spin_unlock_bh(&pd_chan->lock);
spin_unlock_irq(&pd_chan->lock);
list_for_each_entry_safe(desc, _d, &tmp_list, desc_node)
pci_pool_free(pd->pool, desc, desc->txd.phys);
@ -536,10 +582,10 @@ static enum dma_status pd_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
dma_cookie_t last_completed;
int ret;
spin_lock_bh(&pd_chan->lock);
spin_lock_irq(&pd_chan->lock);
last_completed = pd_chan->completed_cookie;
last_used = chan->cookie;
spin_unlock_bh(&pd_chan->lock);
spin_unlock_irq(&pd_chan->lock);
ret = dma_async_is_complete(cookie, last_completed, last_used);
@ -654,7 +700,7 @@ static int pd_device_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
if (cmd != DMA_TERMINATE_ALL)
return -ENXIO;
spin_lock_bh(&pd_chan->lock);
spin_lock_irq(&pd_chan->lock);
pdc_set_mode(&pd_chan->chan, DMA_CTL0_DISABLE);
@ -664,7 +710,7 @@ static int pd_device_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
list_for_each_entry_safe(desc, _d, &list, desc_node)
pdc_chain_complete(pd_chan, desc);
spin_unlock_bh(&pd_chan->lock);
spin_unlock_irq(&pd_chan->lock);
return 0;
}
@ -693,30 +739,45 @@ static irqreturn_t pd_irq(int irq, void *devid)
struct pch_dma *pd = (struct pch_dma *)devid;
struct pch_dma_chan *pd_chan;
u32 sts0;
u32 sts2;
int i;
int ret = IRQ_NONE;
int ret0 = IRQ_NONE;
int ret2 = IRQ_NONE;
sts0 = dma_readl(pd, STS0);
sts2 = dma_readl(pd, STS2);
dev_dbg(pd->dma.dev, "pd_irq sts0: %x\n", sts0);
for (i = 0; i < pd->dma.chancnt; i++) {
pd_chan = &pd->channels[i];
if (sts0 & DMA_STATUS_IRQ(i)) {
if (sts0 & DMA_STATUS_ERR(i))
set_bit(0, &pd_chan->err_status);
if (i < 8) {
if (sts0 & DMA_STATUS_IRQ(i)) {
if (sts0 & DMA_STATUS0_ERR(i))
set_bit(0, &pd_chan->err_status);
tasklet_schedule(&pd_chan->tasklet);
ret = IRQ_HANDLED;
tasklet_schedule(&pd_chan->tasklet);
ret0 = IRQ_HANDLED;
}
} else {
if (sts2 & DMA_STATUS_IRQ(i - 8)) {
if (sts2 & DMA_STATUS2_ERR(i))
set_bit(0, &pd_chan->err_status);
tasklet_schedule(&pd_chan->tasklet);
ret2 = IRQ_HANDLED;
}
}
}
/* clear interrupt bits in status register */
dma_writel(pd, STS0, sts0);
if (ret0)
dma_writel(pd, STS0, sts0);
if (ret2)
dma_writel(pd, STS2, sts2);
return ret;
return ret0 | ret2;
}
#ifdef CONFIG_PM


@ -82,7 +82,7 @@ struct dma_pl330_dmac {
spinlock_t pool_lock;
/* Peripheral channels connected to this DMAC */
struct dma_pl330_chan peripherals[0]; /* keep at end */
struct dma_pl330_chan *peripherals; /* keep at end */
};
struct dma_pl330_desc {
@ -451,8 +451,13 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
desc->txd.cookie = 0;
async_tx_ack(&desc->txd);
desc->req.rqtype = peri->rqtype;
desc->req.peri = peri->peri_id;
if (peri) {
desc->req.rqtype = peri->rqtype;
desc->req.peri = peri->peri_id;
} else {
desc->req.rqtype = MEMTOMEM;
desc->req.peri = 0;
}
dma_async_tx_descriptor_init(&desc->txd, &pch->chan);
@ -529,10 +534,10 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
struct pl330_info *pi;
int burst;
if (unlikely(!pch || !len || !peri))
if (unlikely(!pch || !len))
return NULL;
if (peri->rqtype != MEMTOMEM)
if (peri && peri->rqtype != MEMTOMEM)
return NULL;
pi = &pch->dmac->pif;
@ -577,7 +582,7 @@ pl330_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
int i, burst_size;
dma_addr_t addr;
if (unlikely(!pch || !sgl || !sg_len))
if (unlikely(!pch || !sgl || !sg_len || !peri))
return NULL;
/* Make sure the direction is consistent */
@ -666,17 +671,12 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
struct dma_device *pd;
struct resource *res;
int i, ret, irq;
int num_chan;
pdat = adev->dev.platform_data;
if (!pdat || !pdat->nr_valid_peri) {
dev_err(&adev->dev, "platform data missing\n");
return -ENODEV;
}
/* Allocate a new DMAC and its Channels */
pdmac = kzalloc(pdat->nr_valid_peri * sizeof(*pch)
+ sizeof(*pdmac), GFP_KERNEL);
pdmac = kzalloc(sizeof(*pdmac), GFP_KERNEL);
if (!pdmac) {
dev_err(&adev->dev, "unable to allocate mem\n");
return -ENOMEM;
@ -685,7 +685,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
pi = &pdmac->pif;
pi->dev = &adev->dev;
pi->pl330_data = NULL;
pi->mcbufsz = pdat->mcbuf_sz;
pi->mcbufsz = pdat ? pdat->mcbuf_sz : 0;
res = &adev->res;
request_mem_region(res->start, resource_size(res), "dma-pl330");
@ -717,27 +717,35 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
INIT_LIST_HEAD(&pd->channels);
/* Initialize channel parameters */
for (i = 0; i < pdat->nr_valid_peri; i++) {
struct dma_pl330_peri *peri = &pdat->peri[i];
pch = &pdmac->peripherals[i];
num_chan = max(pdat ? pdat->nr_valid_peri : 0, (u8)pi->pcfg.num_chan);
pdmac->peripherals = kzalloc(num_chan * sizeof(*pch), GFP_KERNEL);
switch (peri->rqtype) {
case MEMTOMEM:
for (i = 0; i < num_chan; i++) {
pch = &pdmac->peripherals[i];
if (pdat) {
struct dma_pl330_peri *peri = &pdat->peri[i];
switch (peri->rqtype) {
case MEMTOMEM:
dma_cap_set(DMA_MEMCPY, pd->cap_mask);
break;
case MEMTODEV:
case DEVTOMEM:
dma_cap_set(DMA_SLAVE, pd->cap_mask);
break;
default:
dev_err(&adev->dev, "DEVTODEV Not Supported\n");
continue;
}
pch->chan.private = peri;
} else {
dma_cap_set(DMA_MEMCPY, pd->cap_mask);
break;
case MEMTODEV:
case DEVTOMEM:
dma_cap_set(DMA_SLAVE, pd->cap_mask);
break;
default:
dev_err(&adev->dev, "DEVTODEV Not Supported\n");
continue;
pch->chan.private = NULL;
}
INIT_LIST_HEAD(&pch->work_list);
spin_lock_init(&pch->lock);
pch->pl330_chid = NULL;
pch->chan.private = peri;
pch->chan.device = pd;
pch->chan.chan_id = i;
pch->dmac = pdmac;


@ -14,6 +14,7 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/amba/bus.h>
#include <plat/ste_dma40.h>
@ -45,9 +46,6 @@
#define D40_ALLOC_PHY (1 << 30)
#define D40_ALLOC_LOG_FREE 0
/* Hardware designer of the block */
#define D40_HW_DESIGNER 0x8
/**
* enum 40_command - The different commands and/or statuses.
*
@ -186,6 +184,8 @@ struct d40_base;
* @log_def: Default logical channel settings.
* @lcla: Space for one dst src pair for logical channel transfers.
* @lcpa: Pointer to dst and src lcpa settings.
* @runtime_addr: runtime configured address.
* @runtime_direction: runtime configured direction.
*
* This struct can either "be" a logical or a physical channel.
*/
@ -200,6 +200,7 @@ struct d40_chan {
struct dma_chan chan;
struct tasklet_struct tasklet;
struct list_head client;
struct list_head pending_queue;
struct list_head active;
struct list_head queue;
struct stedma40_chan_cfg dma_cfg;
@ -645,7 +646,20 @@ static struct d40_desc *d40_first_active_get(struct d40_chan *d40c)
static void d40_desc_queue(struct d40_chan *d40c, struct d40_desc *desc)
{
list_add_tail(&desc->node, &d40c->queue);
list_add_tail(&desc->node, &d40c->pending_queue);
}
static struct d40_desc *d40_first_pending(struct d40_chan *d40c)
{
struct d40_desc *d;
if (list_empty(&d40c->pending_queue))
return NULL;
d = list_first_entry(&d40c->pending_queue,
struct d40_desc,
node);
return d;
}
static struct d40_desc *d40_first_queued(struct d40_chan *d40c)
@ -802,6 +816,11 @@ static void d40_term_all(struct d40_chan *d40c)
d40_desc_free(d40c, d40d);
}
/* Release pending descriptors */
while ((d40d = d40_first_pending(d40c))) {
d40_desc_remove(d40d);
d40_desc_free(d40c, d40d);
}
d40c->pending_tx = 0;
d40c->busy = false;
@ -2092,7 +2111,7 @@ dma40_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
struct scatterlist *sg;
int i;
sg = kcalloc(periods + 1, sizeof(struct scatterlist), GFP_KERNEL);
sg = kcalloc(periods + 1, sizeof(struct scatterlist), GFP_NOWAIT);
for (i = 0; i < periods; i++) {
sg_dma_address(&sg[i]) = dma_addr;
sg_dma_len(&sg[i]) = period_len;
@ -2152,24 +2171,87 @@ static void d40_issue_pending(struct dma_chan *chan)
spin_lock_irqsave(&d40c->lock, flags);
/* Busy means that pending jobs are already being processed */
list_splice_tail_init(&d40c->pending_queue, &d40c->queue);
/* Busy means that queued jobs are already being processed */
if (!d40c->busy)
(void) d40_queue_start(d40c);
spin_unlock_irqrestore(&d40c->lock, flags);
}
static int
dma40_config_to_halfchannel(struct d40_chan *d40c,
struct stedma40_half_channel_info *info,
enum dma_slave_buswidth width,
u32 maxburst)
{
enum stedma40_periph_data_width addr_width;
int psize;
switch (width) {
case DMA_SLAVE_BUSWIDTH_1_BYTE:
addr_width = STEDMA40_BYTE_WIDTH;
break;
case DMA_SLAVE_BUSWIDTH_2_BYTES:
addr_width = STEDMA40_HALFWORD_WIDTH;
break;
case DMA_SLAVE_BUSWIDTH_4_BYTES:
addr_width = STEDMA40_WORD_WIDTH;
break;
case DMA_SLAVE_BUSWIDTH_8_BYTES:
addr_width = STEDMA40_DOUBLEWORD_WIDTH;
break;
default:
dev_err(d40c->base->dev,
"illegal peripheral address width "
"requested (%d)\n",
width);
return -EINVAL;
}
if (chan_is_logical(d40c)) {
if (maxburst >= 16)
psize = STEDMA40_PSIZE_LOG_16;
else if (maxburst >= 8)
psize = STEDMA40_PSIZE_LOG_8;
else if (maxburst >= 4)
psize = STEDMA40_PSIZE_LOG_4;
else
psize = STEDMA40_PSIZE_LOG_1;
} else {
if (maxburst >= 16)
psize = STEDMA40_PSIZE_PHY_16;
else if (maxburst >= 8)
psize = STEDMA40_PSIZE_PHY_8;
else if (maxburst >= 4)
psize = STEDMA40_PSIZE_PHY_4;
else
psize = STEDMA40_PSIZE_PHY_1;
}
info->data_width = addr_width;
info->psize = psize;
info->flow_ctrl = STEDMA40_NO_FLOW_CTRL;
return 0;
}
/* Runtime reconfiguration extension */
static void d40_set_runtime_config(struct dma_chan *chan,
struct dma_slave_config *config)
static int d40_set_runtime_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
struct stedma40_chan_cfg *cfg = &d40c->dma_cfg;
enum dma_slave_buswidth config_addr_width;
enum dma_slave_buswidth src_addr_width, dst_addr_width;
dma_addr_t config_addr;
u32 config_maxburst;
enum stedma40_periph_data_width addr_width;
int psize;
u32 src_maxburst, dst_maxburst;
int ret;
src_addr_width = config->src_addr_width;
src_maxburst = config->src_maxburst;
dst_addr_width = config->dst_addr_width;
dst_maxburst = config->dst_maxburst;
if (config->direction == DMA_FROM_DEVICE) {
dma_addr_t dev_addr_rx =
@ -2188,8 +2270,11 @@ static void d40_set_runtime_config(struct dma_chan *chan,
cfg->dir);
cfg->dir = STEDMA40_PERIPH_TO_MEM;
config_addr_width = config->src_addr_width;
config_maxburst = config->src_maxburst;
/* Configure the memory side */
if (dst_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
dst_addr_width = src_addr_width;
if (dst_maxburst == 0)
dst_maxburst = src_maxburst;
} else if (config->direction == DMA_TO_DEVICE) {
dma_addr_t dev_addr_tx =
@ -2208,68 +2293,39 @@ static void d40_set_runtime_config(struct dma_chan *chan,
cfg->dir);
cfg->dir = STEDMA40_MEM_TO_PERIPH;
config_addr_width = config->dst_addr_width;
config_maxburst = config->dst_maxburst;
/* Configure the memory side */
if (src_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
src_addr_width = dst_addr_width;
if (src_maxburst == 0)
src_maxburst = dst_maxburst;
} else {
dev_err(d40c->base->dev,
"unrecognized channel direction %d\n",
config->direction);
return;
return -EINVAL;
}
switch (config_addr_width) {
case DMA_SLAVE_BUSWIDTH_1_BYTE:
addr_width = STEDMA40_BYTE_WIDTH;
break;
case DMA_SLAVE_BUSWIDTH_2_BYTES:
addr_width = STEDMA40_HALFWORD_WIDTH;
break;
case DMA_SLAVE_BUSWIDTH_4_BYTES:
addr_width = STEDMA40_WORD_WIDTH;
break;
case DMA_SLAVE_BUSWIDTH_8_BYTES:
addr_width = STEDMA40_DOUBLEWORD_WIDTH;
break;
default:
if (src_maxburst * src_addr_width != dst_maxburst * dst_addr_width) {
dev_err(d40c->base->dev,
"illegal peripheral address width "
"requested (%d)\n",
config->src_addr_width);
return;
"src/dst width/maxburst mismatch: %d*%d != %d*%d\n",
src_maxburst,
src_addr_width,
dst_maxburst,
dst_addr_width);
return -EINVAL;
}
if (chan_is_logical(d40c)) {
if (config_maxburst >= 16)
psize = STEDMA40_PSIZE_LOG_16;
else if (config_maxburst >= 8)
psize = STEDMA40_PSIZE_LOG_8;
else if (config_maxburst >= 4)
psize = STEDMA40_PSIZE_LOG_4;
else
psize = STEDMA40_PSIZE_LOG_1;
} else {
if (config_maxburst >= 16)
psize = STEDMA40_PSIZE_PHY_16;
else if (config_maxburst >= 8)
psize = STEDMA40_PSIZE_PHY_8;
else if (config_maxburst >= 4)
psize = STEDMA40_PSIZE_PHY_4;
else if (config_maxburst >= 2)
psize = STEDMA40_PSIZE_PHY_2;
else
psize = STEDMA40_PSIZE_PHY_1;
}
ret = dma40_config_to_halfchannel(d40c, &cfg->src_info,
src_addr_width,
src_maxburst);
if (ret)
return ret;
/* Set up all the endpoint configs */
cfg->src_info.data_width = addr_width;
cfg->src_info.psize = psize;
cfg->src_info.big_endian = false;
cfg->src_info.flow_ctrl = STEDMA40_NO_FLOW_CTRL;
cfg->dst_info.data_width = addr_width;
cfg->dst_info.psize = psize;
cfg->dst_info.big_endian = false;
cfg->dst_info.flow_ctrl = STEDMA40_NO_FLOW_CTRL;
ret = dma40_config_to_halfchannel(d40c, &cfg->dst_info,
dst_addr_width,
dst_maxburst);
if (ret)
return ret;
/* Fill in register values */
if (chan_is_logical(d40c))
@ -2282,12 +2338,14 @@ static void d40_set_runtime_config(struct dma_chan *chan,
d40c->runtime_addr = config_addr;
d40c->runtime_direction = config->direction;
dev_dbg(d40c->base->dev,
"configured channel %s for %s, data width %d, "
"maxburst %d bytes, LE, no flow control\n",
"configured channel %s for %s, data width %d/%d, "
"maxburst %d/%d elements, LE, no flow control\n",
dma_chan_name(chan),
(config->direction == DMA_FROM_DEVICE) ? "RX" : "TX",
config_addr_width,
config_maxburst);
src_addr_width, dst_addr_width,
src_maxburst, dst_maxburst);
return 0;
}
static int d40_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
@ -2308,9 +2366,8 @@ static int d40_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
case DMA_RESUME:
return d40_resume(d40c);
case DMA_SLAVE_CONFIG:
d40_set_runtime_config(chan,
return d40_set_runtime_config(chan,
(struct dma_slave_config *) arg);
return 0;
default:
break;
}
@ -2341,6 +2398,7 @@ static void __init d40_chan_init(struct d40_base *base, struct dma_device *dma,
INIT_LIST_HEAD(&d40c->active);
INIT_LIST_HEAD(&d40c->queue);
INIT_LIST_HEAD(&d40c->pending_queue);
INIT_LIST_HEAD(&d40c->client);
tasklet_init(&d40c->tasklet, dma_tasklet,
@ -2502,25 +2560,6 @@ static int __init d40_phy_res_init(struct d40_base *base)
static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
{
static const struct d40_reg_val dma_id_regs[] = {
/* Peripheral Id */
{ .reg = D40_DREG_PERIPHID0, .val = 0x0040},
{ .reg = D40_DREG_PERIPHID1, .val = 0x0000},
/*
* D40_DREG_PERIPHID2 Depends on HW revision:
* DB8500ed has 0x0008,
* ? has 0x0018,
* DB8500v1 has 0x0028
* DB8500v2 has 0x0038
*/
{ .reg = D40_DREG_PERIPHID3, .val = 0x0000},
/* PCell Id */
{ .reg = D40_DREG_CELLID0, .val = 0x000d},
{ .reg = D40_DREG_CELLID1, .val = 0x00f0},
{ .reg = D40_DREG_CELLID2, .val = 0x0005},
{ .reg = D40_DREG_CELLID3, .val = 0x00b1}
};
struct stedma40_platform_data *plat_data;
struct clk *clk = NULL;
void __iomem *virtbase = NULL;
@ -2529,8 +2568,9 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
int num_log_chans = 0;
int num_phy_chans;
int i;
u32 val;
u32 rev;
u32 pid;
u32 cid;
u8 rev;
clk = clk_get(&pdev->dev, NULL);
@ -2554,32 +2594,32 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
if (!virtbase)
goto failure;
/* HW version check */
for (i = 0; i < ARRAY_SIZE(dma_id_regs); i++) {
if (dma_id_regs[i].val !=
readl(virtbase + dma_id_regs[i].reg)) {
d40_err(&pdev->dev,
"Unknown hardware! Expected 0x%x at 0x%x but got 0x%x\n",
dma_id_regs[i].val,
dma_id_regs[i].reg,
readl(virtbase + dma_id_regs[i].reg));
goto failure;
}
}
/* This is just a regular AMBA PrimeCell ID actually */
for (pid = 0, i = 0; i < 4; i++)
pid |= (readl(virtbase + resource_size(res) - 0x20 + 4 * i)
& 255) << (i * 8);
for (cid = 0, i = 0; i < 4; i++)
cid |= (readl(virtbase + resource_size(res) - 0x10 + 4 * i)
& 255) << (i * 8);
/* Get silicon revision and designer */
val = readl(virtbase + D40_DREG_PERIPHID2);
if ((val & D40_DREG_PERIPHID2_DESIGNER_MASK) !=
D40_HW_DESIGNER) {
d40_err(&pdev->dev, "Unknown designer! Got %x wanted %x\n",
val & D40_DREG_PERIPHID2_DESIGNER_MASK,
D40_HW_DESIGNER);
if (cid != AMBA_CID) {
d40_err(&pdev->dev, "Unknown hardware! No PrimeCell ID\n");
goto failure;
}
rev = (val & D40_DREG_PERIPHID2_REV_MASK) >>
D40_DREG_PERIPHID2_REV_POS;
if (AMBA_MANF_BITS(pid) != AMBA_VENDOR_ST) {
d40_err(&pdev->dev, "Unknown designer! Got %x wanted %x\n",
AMBA_MANF_BITS(pid),
AMBA_VENDOR_ST);
goto failure;
}
/*
* HW revision:
* DB8500ed has revision 0
* ? has revision 1
* DB8500v1 has revision 2
* DB8500v2 has revision 3
*/
rev = AMBA_REV_BITS(pid);
/* The number of physical channels on this HW */
num_phy_chans = 4 * (readl(virtbase + D40_DREG_ICFG) & 0x7) + 4;


@ -184,9 +184,6 @@
#define D40_DREG_PERIPHID0 0xFE0
#define D40_DREG_PERIPHID1 0xFE4
#define D40_DREG_PERIPHID2 0xFE8
#define D40_DREG_PERIPHID2_REV_POS 4
#define D40_DREG_PERIPHID2_REV_MASK (0xf << D40_DREG_PERIPHID2_REV_POS)
#define D40_DREG_PERIPHID2_DESIGNER_MASK 0xf
#define D40_DREG_PERIPHID3 0xFEC
#define D40_DREG_CELLID0 0xFF0
#define D40_DREG_CELLID1 0xFF4


@ -172,8 +172,11 @@ struct pl08x_dma_chan {
int phychan_hold;
struct tasklet_struct tasklet;
char *name;
struct pl08x_channel_data *cd;
dma_addr_t runtime_addr;
const struct pl08x_channel_data *cd;
dma_addr_t src_addr;
dma_addr_t dst_addr;
u32 src_cctl;
u32 dst_cctl;
enum dma_data_direction runtime_direction;
dma_cookie_t lc;
struct list_head pend_list;
@ -202,7 +205,7 @@ struct pl08x_dma_chan {
* @mem_buses: buses which memory can be accessed from: PL08X_AHB1 | PL08X_AHB2
*/
struct pl08x_platform_data {
struct pl08x_channel_data *slave_channels;
const struct pl08x_channel_data *slave_channels;
unsigned int num_slave_channels;
struct pl08x_channel_data memcpy_channel;
int (*get_signal)(struct pl08x_dma_chan *);