
spi: imx: reorder HW operations enable order to avoid possible RX data loss

An RX FIFO overflow may happen if the SPI HW is enabled before RX DMA is
started, because the CPU can be rescheduled to another task and/or service
an interrupt before RX DMA begins draining the FIFO. So enable RX DMA
first, to make sure data is read out of the FIFO ASAP; enable TX DMA next,
to start filling the TX FIFO with new data; and finally enable the SPI HW
to start the actual data transfer.

The risk rises under heavy system load and at high SPI clock rates.

Signed-off-by: Anton Bondarenko <anton.bondarenko.sama@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
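
To make the ordering concrete outside the diff, here is a minimal C sketch
of the sequence the commit message describes. spi_imx_dma_start() is a
hypothetical helper invented for illustration; dma_async_issue_pending(),
the spi_imx_data/spi_master types, and the trigger() callback are the names
used in the hunk below.

#include <linux/dmaengine.h>
#include <linux/spi/spi.h>

/*
 * Hypothetical helper (illustration only): the enable order this commit
 * establishes inside spi_imx_dma_transfer().  spi_imx_data is the
 * driver's private state; trigger() enables the SPI hardware.
 */
static void spi_imx_dma_start(struct spi_imx_data *spi_imx,
			      struct spi_master *master)
{
	/*
	 * 1. RX DMA first: drain the RX FIFO as soon as data arrives,
	 *    even if this task is rescheduled right after this call.
	 */
	dma_async_issue_pending(master->dma_rx);

	/* 2. TX DMA next: keep the TX FIFO filled with new data. */
	dma_async_issue_pending(master->dma_tx);

	/* 3. Only now enable the SPI hardware and start shifting bits. */
	spi_imx->devtype_data->trigger(spi_imx);
}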
Anton Bondarenko 2015-12-05 17:57:00 +01:00 committed by Mark Brown
parent e47b33c076
commit fab44ef1ad
1 changed file with 10 additions and 2 deletions


@@ -956,10 +956,18 @@ static int spi_imx_dma_transfer(struct spi_imx_data *spi_imx,
 	if (left)
 		writel(dma | (left << MX51_ECSPI_DMA_RXT_WML_OFFSET),
 				spi_imx->base + MX51_ECSPI_DMA);
+	/*
+	 * Set this order to avoid potential RX overflow. The overflow may
+	 * happen if we enable the SPI HW before starting RX DMA, due to
+	 * rescheduling for another task and/or an interrupt.
+	 * RX DMA is enabled first so data is read out of the FIFO ASAP.
+	 * TX DMA is enabled next to start filling the TX FIFO with new data.
+	 * And finally the SPI HW is enabled to start the actual transfer.
+	 */
+	dma_async_issue_pending(master->dma_rx);
+	dma_async_issue_pending(master->dma_tx);
 	spi_imx->devtype_data->trigger(spi_imx);
 
-	dma_async_issue_pending(master->dma_tx);
-	dma_async_issue_pending(master->dma_rx);
 	/* Wait SDMA to finish the data transfer.*/
 	timeout = wait_for_completion_timeout(&spi_imx->dma_tx_completion,
 						IMX_DMA_TIMEOUT);
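
The hunk ends at the completion wait; the error handling that follows is
not shown. As context, a sketch of how such a wait is commonly checked in
kernel DMA code — the message text and cleanup calls here are assumptions,
not code from this commit:

	/*
	 * Sketch only: typical handling of a DMA completion timeout.
	 * wait_for_completion_timeout() returns 0 on timeout, otherwise
	 * the remaining jiffies.  The exact handling in spi-imx.c may
	 * differ.
	 */
	timeout = wait_for_completion_timeout(&spi_imx->dma_tx_completion,
					      IMX_DMA_TIMEOUT);
	if (!timeout) {
		dev_err(&master->dev, "DMA TX timed out\n");
		dmaengine_terminate_all(master->dma_tx);
		dmaengine_terminate_all(master->dma_rx);
		return -ETIMEDOUT;
	}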